pypet

Latest version: v0.6.1


0.2b.0

* Erroneous Release due to PyPI fault :/

0.1b.12

* Renaming of the MultiprocContext's `start` function to `f_start`

* BUG FIX: Correct lock acquiring for multiprocessing with StorageContextManager

* BUG FIX: `v_full_copy` is now `False` by default

* BUG FIX: `v_full_copy` is no longer automatically set to `True` when using
`freeze_pool_input`.

* New `consecutive_merge` parameter for `f_merge` to allow faster merging of several
trajectories.

* New `f_merge_many` function to merge many trajectories at once.

* New experimental ``'PIPE'`` wrapping which is even faster (but more unreliable)
than ``'QUEUE'``.
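
As a rough illustration of the mechanism behind pipe-based versus queue-based wrapping (this is a standard-library sketch, not pypet's actual wrapper code): a `multiprocessing.Pipe` is a raw two-endpoint connection without the `Queue`'s internal feeder thread and locking, which is why it can be faster but also less forgiving.

```python
import multiprocessing as mp

def demo():
    # Pipe(duplex=False) returns a receive-only and a send-only endpoint.
    # Unlike a Queue, there is no feeder thread or lock in between,
    # so transfers are cheaper but concurrent writers are unsafe.
    recv_end, send_end = mp.Pipe(duplex=False)
    send_end.send({"param": 42})
    send_end.close()
    msg = recv_end.recv()
    recv_end.close()
    return msg
```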

0.1b.11

* If one wants the old logging method, `log_config` should not be specified, i.e.
setting it to `None` is no longer sufficient.

* BUG FIX: Connection loss between the queue manager and the pool/processes has been resolved.
The fix causes a minor slowdown of multiprocessing when using a queue.

* New ``freeze_pool_input`` functionality for faster starting of single runs if using
a pool.

0.1b.10

* New `v_crun_` property simply returning ``'run_ALL'`` if ``v_crun`` is ``None``.

* BUG FIX: Removed recursive evaluation due to usage of `itertools.chain` during
recursive node traversal

* max_depth is now also supported by `f_store`, `f_store_child`, `f_load`, `f_load_child`

* Loading and storing are internally no longer truly recursive but are handled iteratively.
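
The switch from recursion to iteration can be sketched with an explicit stack (the dict-based node structure here is hypothetical, chosen only to keep the example self-contained):

```python
def iter_nodes(root, max_depth=float("inf")):
    """Depth-first traversal with an explicit stack instead of recursion,
    so deep trees cannot hit Python's recursion limit."""
    stack = [(root, 0)]
    while stack:
        node, depth = stack.pop()
        yield node
        if depth < max_depth:
            # Children are assumed to live in a `children` dict;
            # reversing preserves left-to-right visiting order.
            for child in reversed(list(node.get("children", {}).values())):
                stack.append((child, depth + 1))
```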

* New `v_auto_run_prepend` property of the trajectory to switch off auto run name prepending if
desired.

* The trajectory no longer relies on evil `eval` to construct a class. Instead
it relies on the global scope.
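
The difference can be sketched as follows: instead of `eval`-ing a class-name string (which executes arbitrary code), the name is looked up in a known scope. This is an illustrative reimplementation, not pypet's actual code.

```python
class ArrayParameter:
    """Stand-in class so the lookup below has something to find."""
    pass

def find_class(name, scope=None):
    # Look the class up by name in the given scope (default: module
    # globals) rather than eval-ing the string -- no code execution.
    scope = scope if scope is not None else globals()
    cls = scope.get(name)
    if not isinstance(cls, type):
        raise TypeError(f"{name!r} does not name a class in the given scope")
    return cls
```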

* Better counting of loading and storing nodes to display rate in nodes/s

* BUG FIX: Minor bug in the progressbar has been fixed to detect automatic resets.

* Now support for non-nested empty containers: Empty dictionary, empty list, empty tuple and
empty numpy array. All of them supported by the ArrayParameter and normal Results.

* Support for Sparse matrices containing *NO* data (i.e. only zeros).

* Performance optimization for storage and loading

* Improved test handling and parsing in `pypet.tests`

* Environment now supports a `git_fail` option to fail if there are uncommitted changes
instead of triggering a new commit.

* Users can now define their own functions to produce run-names

* Likewise, users can define their own wildcards
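
A user-supplied run-name function might look like the following sketch, which simply mirrors pypet's default zero-padded `run_XXXXXXXX` scheme (the function name and signature here are hypothetical):

```python
def make_run_name(idx):
    """Hypothetical run-name function: encodes the run index with
    zero-padding; a custom variant could encode a timestamp or
    parameter values instead."""
    return f"run_{idx:08d}"
```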

* The lazy version of adding elements (`traj.par.x = 42, 'A comment'`) now needs to
be turned on by the user via `traj.v_lazy_adding = True` before it can be used.

* HDF5_STRCOL_MAX_ARRAY_LENGTH has been renamed to HDF5_STRCOL_MAX_RANGE_LENGTH

* The summary tables have been shortened. Now there's no distinction anymore between
the actual runs and everything else.

* Moreover, data added to summary tables is no longer deleted. There also exists a
maximum length for these tables (1000).

* The overview about the explored parameters in each run has been removed (due to size)

* Summary tables are now based only on the comments, not the names!

* One can pass an estimate for memory that each run needs to better protect the memory
cap.

* All tree nodes except the trajectory now use __slots__ for faster and more compact
creation.
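
The effect of `__slots__` on a tree node can be sketched like this (the class is illustrative, not pypet's actual node implementation):

```python
class SlottedNode:
    # __slots__ suppresses the per-instance __dict__, which saves memory
    # and speeds up attribute access when creating many nodes; the
    # trade-off is that arbitrary new attributes can no longer be set.
    __slots__ = ("name", "parent")

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
```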

* You can now request to load a trajectory without `run_information` to save time for huge
trajectories

* Trajectories use ordered dictionaries to remember what was added during a single run.
Accordingly, all data added during a single run is now stored, regardless of whether it
was added below a `run_XXXXXXXX` group.

* BUG FIX: The `'QUEUE'` wrapping no longer waits for chunks of data, but starts
storing immediately. Thus, if you have fast simulations, the storage service no longer
waits until the end of your simulation before it starts storing data.
In order to avoid overhead, the HDF5 file is simply kept open until
the queue is closed.

* BUG FIX: If `log_stdout=True`, the original stream is restored instead of
`sys.__stdout__`. Thus, using another Python interpreter with a redirection of
`stdout` and calling `f_disable_logging` no longer disables `print` completely.

* Refactored `'QUEUE'` wrapping. The user can now set the maximum size of the storage queue.

* CAP values are now in `%`, so choose values between 0.0 and 100.0

* BUG FIX: Links removed during single runs are now no longer stored

* BUG FIX: `pypet.` is no longer prepended to unpickled logger names. Accordingly,
pypet logger names are now fully qualified names like `pypet.trajectory.Trajectory`.

0.1b.9

* BUG FIX: Fixed backwards compatibility

* BUG FIX: Metadata is loaded only once

* Results no longer support the ``v_no_data_string`` property

* Data of Results is no longer sorted in case of calling `f_val_to_string`

* In accordance with the python default to call `__repr__` for displaying contained
objects, `f_val_to_str` calls `repr` on the contained data in parameters and results.
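
The practical difference is that `repr` quotes strings the way Python displays container items, whereas `str` does not. A minimal sketch of such a rendering helper (the truncation length and function name are assumptions, not pypet's actual values):

```python
def val_to_str(value, max_len=30):
    """Render a contained value via repr(), as Python does for items
    inside containers, truncated for use in overview tables."""
    s = repr(value)
    return s if len(s) <= max_len else s[:max_len - 3] + "..."
```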

* Added informative __repr__ for the most commonly used classes

* The (annoyingly long) keyword `dynamically_imported_classes` is changed to
`dynamic_imports`. For backwards compatibility, the old keyword can still
be used.

* New `f_get_default` method, now one can specify a default value that should be
returned if the requested data is not found in the trajectory
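
The described semantics resemble `dict.get` applied to a nested tree; the following sketch mimics them on a plain nested dict (this is an analogy, not pypet's implementation):

```python
def get_default(tree, path, default=None):
    """Return the value at dot-separated `path` if present,
    otherwise the given default -- never raising on a missing node."""
    node = tree
    for part in path.split("."):
        if not isinstance(node, dict) or part not in node:
            return default
        node = node[part]
    return node
```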

* `progressbar` displays the run and remaining time.

* New LINK features allowing group nodes to refer to other parts of the tree

* The SingleRun has been removed and all interactions are with real Trajectory objects,
but the API remained the same.

* All *pypet* relevant imported packages will be stored by the trajectory

* Internally the queue no longer needs to be re-passed to the QueueSender, allowing for
easier custom multiprocessing

* New MultiprocessWrapper (aka a light-weight environment for multiprocessing) for
custom multiprocessing

* StorageServices provide a ``multiproc_safe`` attribute to enable the user to check
if they work in a multiprocess-safe environment

* Environments can be used as context managers to disable the logging to files after the
experiment.
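
The context-manager behavior can be sketched with the standard `logging` module; on exit, the file handler is removed and closed, so no log files stay open after the experiment (the class here is illustrative, not pypet's `Environment`):

```python
import logging

class LoggingEnvironment:
    """Sketch of an environment that detaches and closes its file
    handler on exit, as the changelog describes."""

    def __init__(self, logger_name, log_file):
        self.logger = logging.getLogger(logger_name)
        self.handler = logging.FileHandler(log_file)
        self.logger.addHandler(self.handler)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Disable file logging once the experiment is done.
        self.logger.removeHandler(self.handler)
        self.handler.close()
        return False
```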

* Environments provide the ``v_log_path`` property to get the current log path

* BUG FIX: Trajectories with only a single explored parameter can now be merged
several times

* Backwards search no longer supported!

* `f_get_all` now supports shortcuts and abbreviations like `crun` or `par`

* `$` always translates to the run the trajectory is set to, also for
adding new items to the tree

* If the current run is not set, ``traj.v_crun`` is set to ``None``

* Moreover, ``f_iter_nodes`` and ``f_iter_leaves`` are no longer affected by the setting
of a current run and always return all nodes and leaves

* The iteration functions from above now allow for a predicate function to filter potential nodes
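
One plausible reading of predicate filtering, sketched on a hypothetical dict-based tree: a node that fails the predicate is skipped along with its subtree.

```python
def iter_nodes(node, predicate=lambda n: True):
    """Yield nodes accepted by the predicate; rejecting a node also
    prunes its subtree (an assumption of this sketch, not necessarily
    pypet's exact semantics)."""
    if predicate(node):
        yield node
        for child in node.get("children", ()):
            yield from iter_nodes(child, predicate)
```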

* Storing a leaf or a group via ``traj.f_store_item(item, overwrite=True)`` now also
replaces all annotations and comments

* Passing `overwrite_file=True` to an environment will overwrite the hdf5 file.

* `remove_empty_groups` is no longer supported

* All messages logged by *pypet* are now no longer using the root logger but one called 'pypet'.

* Better customization of logging. The user can now pass a list of ``logger_names`` and
corresponding ``log_levels`` which are logged to files in the ``log_path``.

* The environment no longer adds config information about hdf5 to the trajectory directly.
This is now done by the service itself.

* The keyword arguments passed to the environment regarding the storage service are
no longer handled by the environment but are directly passed to the service.

* BUG FIX: Fixed merging of result summaries that are only found in one trajectory

* BUG FIX: Log files are now closed when the handlers are removed

* BUG FIX: ``max_depth`` is now really always in relation to the start node and not
in relation to intermediate results

* API change for `f_migrate` to match new concept of storage service

* Short function names for item additions like `f_apar` besides `f_add_parameter`.

* Abbreviations like `par` and `dpar` can now also be used for item creation and
are always translated

* To streamline the API you can now no longer specify the name of backup files for merging

* Locked parameters can no longer be loaded and must be unlocked beforehand.

* Parameters are no longer required to implement ``__len__`` because it can be
ambiguous; instead they must implement the ``f_get_range_length`` function.

* BUG FIX: ``crun`` is now also accepted for adding of data and not only for requests

* Setting `ncores=0` lets *pypet* determine the number of CPUs automatically (requires psutil).
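
The auto-detection can be sketched as follows; pypet reportedly uses `psutil` for this, while the example below falls back on the stdlib's `os.cpu_count()` to stay self-contained:

```python
import os

def resolve_ncores(ncores):
    """If ncores == 0, detect the machine's CPU count
    (stdlib analogue of the psutil-based detection)."""
    if ncores == 0:
        detected = os.cpu_count()
        return detected if detected else 1
    return ncores
```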

0.1b.8

* Support for python 3.3 and 3.4!

* Proper handling of unicode strings (well, see above^^)

* Checking if names of leaf and group nodes only contain alphanumeric characters

* PickleParameter and PickleResult now explicitly store the pickle protocol
because retrieval from the pickle dump is not always possible in python 3.

* Children of groups are no longer listed via __dir__ in case of debugging to
prevent unwanted locking.

* Better support for PyTables 2 and 3 with same code base.

* pypet and pypet.brian now provide the __all__ list.
