streamsx.kafka

Latest version: v1.10.2

1.5.1

This toolkit release is a bugfix release that resolves the following issues:

- #124 - KafkaConsumer does not assign to all partitions after restart when in a consistent region
- #123 - KafkaConsumer should keep the generated `group.id` across PE restarts
- #122 - resolve security vulnerabilities in third-party libraries
- #121 - KafkaConsumer throws NullPointerException when topic is missing

1.5.0

This release of the Kafka toolkit contains the following new features and enhancements:
- Kafka consumer groups can now be used when the consumers are part of a consistent region (#72). Please have a look at the sample [KafkaConsumerGroupWithConsistentRegion](https://github.com/IBMStreams/streamsx.kafka/tree/v1.5.0/samples/KafkaConsumerGroupWithConsistentRegion) and the sketch after this list.
- Compatibility with Kafka brokers at version 0.10.2, 0.11, 1.0, 1.1, and 2.0
- New custom metrics: `nAssignedPartitions`, `isGroupManagementActive`, `nPartitionRebalances`, `drainTimeMillis`, and `drainTimeMillisMax`
- When no client ID is specified, the operators generate a client ID that identifies the job and the Streams operator (#109). The pattern for the client ID is `{C|P}-J<job-ID>-<operator name>`, where *C* denotes a consumer operator and *P* a producer operator; a consumer named `MessageConsumer` in job 17, for instance, would get the client ID `C-J17-MessageConsumer` (job ID and operator name are illustrative).
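
As an illustration, a minimal SPL sketch of a consumer group inside a consistent region could look like the following. The composite name, topic, group ID, parallel width, and trigger period are placeholders; the linked sample remains the authoritative reference.

```
use com.ibm.streamsx.kafka::KafkaConsumer;

composite ConsumerGroupInCR {
    graph
        // a consistent region requires a JobControlPlane operator in the graph
        () as JCP = spl.control::JobControlPlane() {}

        // setting a group ID enables Kafka group management; with @parallel,
        // all channels join the same consumer group and share the topic's partitions
        @parallel (width = 3)
        @consistent (trigger = periodic, period = 60.0)
        stream<rstring message, rstring key> Messages = KafkaConsumer() {
            param
                propertiesFile: "etc/consumer.properties"; // must provide bootstrap.servers
                topic: "myTopic";                          // placeholder
                groupId: "myConsumerGroup";                // placeholder
        }
}
```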


Solved issues in this release:
- #102 - KafkaConsumer crashes when fused with other Kafka consumers
- #104 - toolkit build fails with IBM Java from QSE
- #105 - Message resources for non-en_US locales should fall back to the en_US message when a message is not available in the specific language
- #115 - KafkaProducer: adapt transaction timeout to the consistent region period
- #116 - KafkaProducer: exactly-once delivery semantics perform worse than at-least-once

The online version of the **SPL documentation** for this toolkit is available [here](https://ibmstreams.github.io/streamsx.kafka).

1.4.2

This bugfix release fixes the following issue:

- #100 - KafkaProducer registers for governance as input/source instead of output/sink

1.4.1

This bugfix release fixes the following issue:
- #98 - KafkaConsumer can silently stop consuming messages

1.4.0

* Monitoring of memory consumption in addition to monitoring the internal queue fill (#91)
* New custom metrics:
    * `nLowMemoryPause` - Number of times message polling was paused due to low memory.
    * `nQueueFullPause` - Number of times message polling was paused due to a full queue.
    * `nPendingMessages` - Number of pending messages to be submitted as tuples.
* Offsets are now committed asynchronously instead of synchronously, which can help improve throughput.
* Offsets are now committed *after* tuple submission. In previous versions, offsets were committed immediately after the messages had been buffered internally, before tuple submission. (#76)
* New operator parameter **commitCount** to specify the commit period as a number of submitted tuples when the operator does *not* participate in a consistent region; see the sketch after this list.
* Offsets are committed on drain when the operator is part of a consistent region. (#95)
* Fixed #96 - KafkaConsumer: final de-assignment via control port does not work
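
For illustration, a minimal SPL sketch using the new **commitCount** parameter is shown below. The composite name, topic, properties file, and the commit period of 2000 tuples are placeholders.

```
use com.ibm.streamsx.kafka::KafkaConsumer;

composite CommitCountExample {
    graph
        // the operator is not in a consistent region, so commitCount applies:
        // offsets are committed after every 2000 submitted tuples
        stream<rstring message, rstring key> Messages = KafkaConsumer() {
            param
                propertiesFile: "etc/consumer.properties"; // placeholder
                topic: "myTopic";                          // placeholder
                commitCount: 2000;
        }
}
```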

1.3.3

**This release contains the following fixes and enhancements:**

- internationalization (translated messages) for de_DE, es_ES, fr_FR, it_IT, ja_JP, ko_KR, pt_BR, ru_RU, zh_CN, and zh_TW
- fixed #88 - Consumer op is tracing per tuple at debug level

Known issues
- When Kafka's group management is enabled (KafkaConsumer not in a consistent region and the **startPosition** parameter unset or `Default`), the KafkaConsumer can silently stop consuming messages when committing Kafka offsets fails (#98). As a **workaround**, the consumer property `enable.auto.commit=true` can be set, for example in a properties file or an application configuration; see the sketch below.
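
A sketch of this workaround, assuming the property is supplied through a properties file (composite, file, and topic names are placeholders):

```
use com.ibm.streamsx.kafka::KafkaConsumer;

composite AutoCommitWorkaround {
    graph
        // etc/consumer.properties (placeholder) would contain the line:
        //     enable.auto.commit=true
        stream<rstring message, rstring key> Messages = KafkaConsumer() {
            param
                propertiesFile: "etc/consumer.properties";
                topic: "myTopic"; // placeholder
        }
}
```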
