Apread

Latest version: v1.1.3


1.1.2alpha5

1.1.2alpha4

1.1.2alpha3

1.1.2alpha2

- Add collect function to reader

1.1.2alpha1

- Updated plot mechanism

1.1.1

New features

Parallel reading of data

See `test/testing.py` for a full example. Modify the following around your `APReader` call:

```python
import multiprocessing as mp
...

if __name__ == '__main__':  # this line has to be included!
    # create the pool without 'processes=...'!
    pool = mp.Pool()

    # pass the pool to the reader
    reader = APReader(file, parallelPool=pool)

    # make sure to close the pool after you are done with it
    pool.close()
    pool.join()
```

For the parallel loading to work, you have to define a parallel pool of processes in your top-level script; these processes are then used from within the `APReader` functions. Calling `mp.Pool()` without arguments creates as many processes as your CPU has threads (cores plus virtual cores). Passing in more does not help: `APReader` spawns only as many parallel tasks as there are CPU threads, so increasing the number of processes in your pool does not increase the amount of parallelism.
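
As a minimal, standalone sketch of that default sizing (independent of `APReader` itself; the printed value is only an example), `mp.cpu_count()` reports the thread count that an argument-free `mp.Pool()` will use:

```python
import multiprocessing as mp

if __name__ == '__main__':
    # mp.Pool() with no arguments creates one worker per CPU thread,
    # i.e. the value reported by mp.cpu_count().
    print(mp.cpu_count())   # e.g. 8 on a 4-core CPU with hyper-threading
    pool = mp.Pool()        # same as mp.Pool(processes=mp.cpu_count())

    # asking for more workers than this does not speed up APReader
    pool.close()
    pool.join()
```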

> Keep in mind that parallelisation is not always faster: spawning processes is expensive and can be wasteful for small files.

The results from `APReader` stay the same, so you can continue your analysis as before.

Improvements

- Typo in `Group.intervalstr` fixed (micro- and nanoseconds were swapped)
- Unit of `Group.interval` is now seconds (see the sketch below)
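
As a rough usage sketch of these corrected attributes (assuming the usual `from apread import APReader` import, that the reader exposes its groups as `reader.Groups`, and that `measurement.bin` is a placeholder for your own file):

```python
from apread import APReader

reader = APReader("measurement.bin")  # placeholder file name

for group in reader.Groups:           # assumed attribute holding the loaded groups
    # interval is now given in seconds; intervalstr is the readable form
    # with the corrected micro-/nanosecond labels
    print(group.interval, group.intervalstr)
```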

What's Changed
* Development on Version 1.1.1 by leonbohmann in https://github.com/leonbohmann/APReader/pull/18


**Full Changelog**: https://github.com/leonbohmann/APReader/compare/v1.1.0...v1.1.1
