RedisTimeSeries

Latest version: v1.4.5


1.2.2

This is the General Availability Release of RedisTimeSeries 1.2 (v1.2.2)!

Headlines:
* Compression added, which can reduce memory by up to 98% and improve read performance by up to 50%.
* Stable ingestion time, independent of the number of data points in a time series.
* Revised API with performance improvements and removed ambiguity.
* Extended [client support](https://oss.redislabs.com/redistimeseries/#client-libraries)

(we will blog about this release soon, including performance-improvement results, and will add the link here)

Full details:

* Added functionality
  * #261 Samples are compressed using `Double Delta compression`, which results in cost savings and faster query times.
    * Based on the [Gorilla paper](https://www.vldb.org/pvldb/vol8/p1816-teller.pdf).
    * In theory, this can save up to 98% of space (2 bits per sample instead of 128).
    * In practice, a memory reduction of 90% is common, but this depends on the use case.
    * Initial benchmarks show read performance improvements of up to 50%.
  * `UNCOMPRESSED` [option](https://oss.redislabs.com/redistimeseries/commands/#tscreate) in `TS.CREATE`.

* API changes / Enhancements
  * #241 Overwriting the last sample with the same timestamp is not allowed.
  * #242 [Revised](https://oss.redislabs.com/redistimeseries/commands/#tsincrbytsdecrby) `TS.INCRBY/DECRBY`:
    * Returns a timestamp. The behaviour is now aligned with `TS.ADD`.
    * The `RESET` functionality was removed, since `RESET` contradicted the rewriting of the last sample (#241). Alternatively, you can reconstruct similar behaviour with:
      * `TS.ADD ts * 1` + `sum` aggregation
      * `TS.INCRBY ts 1` + `range` aggregation
  * #317 Aligned the response of `TS.GET` on an empty series with that of `TS.RANGE`.
  * #285, #318 Changed the default behaviour of [`TS.MRANGE`](https://oss.redislabs.com/redistimeseries/commands/#tsmrange) and [`TS.MGET`](https://oss.redislabs.com/redistimeseries/commands/#tsmget) to no longer return the labels of each time series, in order to reduce network traffic. An optional `WITHLABELS` argument was added.
  * #319 `TS.RANGE` and `TS.MRANGE` aggregation now starts from the requested timestamp.
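The two suggested reconstructions of the removed `RESET` behaviour can be simulated in plain Python. The bucket size below is illustrative, and the `range` approach is an approximation (it misses increments that precede a bucket's first recorded counter value):

```python
# Approach 1: each event is a sample of value 1 (TS.ADD ts * 1), so a
# per-bucket sum yields the event count per bucket.
# Approach 2: a monotonically increasing counter (TS.INCRBY ts 1), so a
# per-bucket range (max - min) approximates the increments per bucket.

BUCKET = 5000  # bucket size in ms, illustrative

def bucket_of(ts):
    return ts - ts % BUCKET

def sum_per_bucket(samples):
    """samples: (timestamp, value) pairs, each value being 1."""
    out = {}
    for ts, val in samples:
        out[bucket_of(ts)] = out.get(bucket_of(ts), 0) + val
    return out

def range_per_bucket(samples):
    """samples: (timestamp, counter_value) pairs, counter increasing."""
    buckets = {}
    for ts, val in samples:
        b = buckets.setdefault(bucket_of(ts), [val, val])
        b[0] = min(b[0], val)
        b[1] = max(b[1], val)
    return {k: hi - lo for k, (lo, hi) in buckets.items()}
```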

* Performance improvements
  * #237 Downsampling now happens after a time window is closed, rather than with each sample.
  * #285, #318 Optional `WITHLABELS` argument added. This feature drastically improves read performance.
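A rough sketch of why downsampling on window close is cheaper than downsampling with each sample: the compaction state is updated in O(1) per sample, and the destination series is written once per window instead of being rewritten on every ingest. This is an illustration, not the module's code, and it assumes samples arrive in timestamp order:

```python
# Streaming average downsampler: keeps running (sum, count) for the
# currently open window and emits one aggregated sample when a newer
# window opens, i.e. when the previous window closes.

class AvgDownsampler:
    def __init__(self, window_ms):
        self.window_ms = window_ms
        self.bucket = None   # start of the currently open window
        self.sum = 0.0
        self.count = 0
        self.closed = []     # (window_start, avg), emitted on close

    def add(self, ts, value):
        bucket = ts - ts % self.window_ms
        if self.bucket is not None and bucket != self.bucket:
            # window closed: emit exactly one downsampled value
            self.closed.append((self.bucket, self.sum / self.count))
            self.sum, self.count = 0.0, 0
        self.bucket = bucket
        self.sum += value
        self.count += 1
```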

* Minor Enhancements
  * #230 `TS.INFO` now [includes](https://oss.redislabs.com/redistimeseries/commands/#tsinfo) `total samples`, `memory usage`, `first time stamp`, ...
  * #230 `MEMORY` calculates the memory footprint of a series.

* Bugfixes since v1.0.3
  * #204 Module initialization params changed to 64 bits.
  * #266 Memory leak in the aggregator context.
  * #260 Better error messages.
  * #259, #257, #219 Miscellaneous fixes.
  * #320 Delete the existing key prior to restoring it.
  * #323 Empty first sample on aggregation.

* Known issues:
  * #358 Data corruption issue; please skip this version and upgrade to 1.2.5, where this is fixed.

Note: the version inside Redis will be 10202, or 1.2.2 in semantic versioning.

1.2.0

This is the first Release Candidate for RedisTimeSeries 1.2.

* Added functionality
  * #261 Samples are compressed using `Double Delta compression`.
    * Based on the [Gorilla paper](https://www.vldb.org/pvldb/vol8/p1816-teller.pdf).
    * In theory, this can save up to 98% of space (2 bits per sample instead of 128).
    * In practice, a memory reduction of 5-8x is common, but this depends on the use case.
    * Initial benchmarks show performance improvements in both reads and writes.
  * `UNCOMPRESSED` [option](https://oss.redislabs.com/redistimeseries/commands/#tscreate) in `TS.CREATE`.

* Major Enhancements
  * #241 Overwriting the last sample with the same timestamp is not allowed.
  * #237 Downsampling now happens after a time window is closed, rather than with each sample.
  * #242 [Revised](https://oss.redislabs.com/redistimeseries/commands/#tsincrbytsdecrby) `TS.INCRBY/DECRBY`:
    * Returns a timestamp. The behaviour is now aligned with `TS.ADD`.
    * The `RESET` functionality was removed, since `RESET` contradicted the rewriting of the last sample (#241). Alternatively, you can reconstruct similar behaviour with:
      * `TS.ADD ts * 1` + `sum` aggregation
      * `TS.INCRBY ts 1` + `range` aggregation

* Minor Enhancements
  * #230 `TS.INFO` now [includes](https://oss.redislabs.com/redistimeseries/commands/#tsinfo) `total samples`, `memory usage`, `first time stamp`, ...
  * #230 `MEMORY` calculates the memory footprint of a series.
  * #285 Changed the default behaviour of [`TS.MRANGE`](https://oss.redislabs.com/redistimeseries/commands/#tsmrange) to no longer return the labels of each time series, in order to reduce network traffic. An optional `WITHLABELS` argument was added.

* Bugfixes
  * #204 Module initialization params changed to 64 bits.
  * #266 Memory leak in the aggregator context.
  * #260 Better error messages.
  * #259, #257, #219 Miscellaneous fixes.

Note: the version inside Redis will be 10200, or 1.2.0 in semantic versioning.

1.0.3

Update urgency: Medium
This is a maintenance release for version 1.0.

This release improves overall stability and provides fixes for issues found since the previous release.

Main Features:
* #143 Standard deviation for aggregations
* #163 `TS.RANGE` and `TS.MRANGE` can limit results via the optional `COUNT` flag
* #161 Support for ARM architectures
* #160 Optional `TIMESTAMP` in `TS.INCRBY` and `TS.DECRBY`

Main Fixes:
* #199 `RETENTION` is now 64-bit
* #211 Write commands now return an OOM error when Redis reaches its memory limit

Main Performance improvements:
* [3651ef8](https://github.com/RedisTimeSeries/RedisTimeSeries/commit/3651ef8eb65b390e333053b91a64617fc2382f6e) Do not use `_union` if there is only one leaf in the index
* [0a68d4e](https://github.com/RedisTimeSeries/RedisTimeSeries/commit/0a68d4eca95108595ac7dfbae68d3f0371e41470) Make `_difference` faster by iterating over the left dict (which is always smaller)

1.0.1

Update urgency: Minor
This is a maintenance release for version 1.0.

The secondary index should now work faster when a filter consists of a list of `k=v` predicates.

1.0.0

This is the General Availability release of RedisTimeSeries! Please read the [full story here](https://redislabs.com/blog/redistimeseries-ga-making-4th-dimension-truly-immersive).

Features
In RedisTimeSeries, we are introducing a new data type that uses fixed-size chunks of memory for time series samples, indexed by the same [Radix Tree implementation](https://github.com/antirez/rax) as Redis Streams. With Streams, you can create a [capped stream](https://redis.io/commands/xadd), effectively limiting the number of messages by count. In RedisTimeSeries, you can apply a retention policy in milliseconds. This is better suited to time series use cases, which are typically interested in the data within a given time window rather than a fixed number of samples.
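A minimal sketch of what a millisecond retention policy does, assuming in-order inserts. This is illustrative only, not the module's chunked implementation:

```python
from collections import deque

# On each append, drop samples older than (newest_timestamp - retention),
# so the series is bounded by a time window rather than a sample count.

class RetentionSeries:
    def __init__(self, retention_ms):
        self.retention_ms = retention_ms
        self.samples = deque()  # (timestamp_ms, value), ascending timestamps

    def add(self, ts, value):
        self.samples.append((ts, value))
        cutoff = ts - self.retention_ms
        while self.samples and self.samples[0][0] < cutoff:
            self.samples.popleft()
```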

Downsampling / compaction
If you want to keep all of your raw data points indefinitely, your data set will grow linearly over time. However, if your use case allows you to have less fine-grained data further back in time, downsampling can be applied. This allows you to keep fewer historical data points by aggregating raw data for a given time window using a given aggregation function. [RedisTimeSeries supports downsampling](https://oss.redislabs.com/redistimeseries/commands/#tscreaterule) with the [following aggregations](https://oss.redislabs.com/redistimeseries/commands/#tsrange): avg, sum, min, max, range, count, first and last.
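The compaction idea above can be sketched in a few lines of Python; the function names are illustrative, not RedisTimeSeries internals:

```python
# Downsample a raw series into fixed time windows using one of the
# supported aggregation functions (avg, sum, min, max, range, count,
# first, last).

AGGREGATIONS = {
    "avg":   lambda vals: sum(vals) / len(vals),
    "sum":   sum,
    "min":   min,
    "max":   max,
    "range": lambda vals: max(vals) - min(vals),
    "count": len,
    "first": lambda vals: vals[0],
    "last":  lambda vals: vals[-1],
}

def downsample(samples, window_ms, agg):
    """samples: (timestamp_ms, value) pairs sorted by timestamp."""
    buckets = {}
    for ts, val in samples:
        buckets.setdefault(ts - ts % window_ms, []).append(val)
    fn = AGGREGATIONS[agg]
    return sorted((start, fn(vals)) for start, vals in buckets.items())
```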

Secondary indexing
When using Redis’ core data structures, you can only retrieve a time series by knowing the exact key holding the time series. Unfortunately, for many time series use cases (such as root cause analysis or monitoring), your application won’t know the exact key it’s looking for. These use cases typically want to query a set of time series that relate to each other in a couple of dimensions to extract the insight you need. You could create your own secondary index with core Redis data structures to help with this, but it would come with a high development cost and require you to manage edge cases to make sure the index is correct.

RedisTimeSeries does this indexing for you based on `field value` pairs (a.k.a. labels) that you can add to each time series and use to filter at query time (a full list of these filters is available in our documentation). Here's an example of creating a time series with two labels (`sensor_id` and `area_id` are the fields, with values 2 and 32 respectively) and a retention window of 60,000 milliseconds:

`TS.CREATE temperature RETENTION 60000 LABELS sensor_id 2 area_id 32`
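A label-based secondary index of this kind can be sketched in plain Python: map each label=value pair to the set of series keys carrying it, and answer equality filters by set intersection. The structure is illustrative, not the module's internals:

```python
# (label, value) -> set of series keys carrying that label
index = {}

def add_series(key, labels):
    for label, value in labels.items():
        index.setdefault((label, value), set()).add(key)

def query(**filters):
    """Return the keys matching all label=value predicates."""
    sets = [index.get(item, set()) for item in filters.items()]
    return set.intersection(*sets) if sets else set()
```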

Aggregation at read time
When you need to query a time series, it's cumbersome to stream all raw data points if you're only interested in, say, an average over a given time interval. RedisTimeSeries follows the Redis philosophy of transferring only the minimum required data, ensuring the lowest latency. Below is an example of an aggregation query over time buckets of 5,000 milliseconds with an [aggregation function](https://oss.redislabs.com/redistimeseries/commands/#tsrange):
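For instance, averaging a series such as `temperature` over 5,000-millisecond buckets takes this form (the key name and timestamps are illustrative; see the linked command reference for the full syntax):

`TS.RANGE temperature 1548149180000 1548149210000 AGGREGATION avg 5000`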

0.99.1

This is the second release candidate for RedisTimeSeries 1.0.

A full feature list will be available by the time RedisTimeSeries goes GA. For now, our documentation is the best place to find the feature list and to get started: redistimeseries.io.

Note: the version inside Redis will be 9901, or 0.99.1 in semantic versioning. Since the version of a module in Redis is numeric, we use 0.99 to signify that it is almost 1.0.


© 2024 Safety CLI Cybersecurity Inc. All Rights Reserved.