Cluster-tools

Latest version: v1.61


1.61

Honor the slurm limit for the maximum array job size (`MaxArraySize`) by splitting larger job batches into multiple smaller array jobs.
Honor the slurm limit for the maximum number of jobs a user is allowed to have submitted at the same time (`MaxSubmitJobsPerUser`) by looking up the number of currently submitted jobs and only submitting new batches once they fit within the limit.

The slurm command used to look up `MaxArraySize` is `scontrol show config | sed -n '/^MaxArraySize/s/.*= *//p'`.
The slurm command used to look up `MaxSubmitJobsPerUser` is `sacctmgr list -n user $USER withassoc format=maxsubmitjobsperuser`; if that limit is not defined, `sacctmgr list -n qos normal format=maxsubmitjobsperuser` is used as a fallback.
The slurm command used to look up the number of currently submitted jobs is `squeue --array -u $USER -h | wc -l`.
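
For illustration, here is a minimal sketch of how such lookups could be performed from Python. The helper names are hypothetical and not part of the cluster_tools API; the sketch simply wraps the shell commands listed above:

```python
import getpass
import subprocess


def _run(cmd: str) -> str:
    """Run a shell command and return its stripped stdout."""
    return subprocess.run(
        cmd, shell=True, check=True, capture_output=True, text=True
    ).stdout.strip()


def get_max_array_size() -> int:
    # Mirrors: scontrol show config | sed -n '/^MaxArraySize/s/.*= *//p'
    return int(_run("scontrol show config | sed -n '/^MaxArraySize/s/.*= *//p'"))


def get_max_submit_jobs_per_user():
    user = getpass.getuser()
    out = _run(f"sacctmgr list -n user {user} withassoc format=maxsubmitjobsperuser")
    if not out:
        # Fall back to the qos-level limit if no per-user limit is defined.
        out = _run("sacctmgr list -n qos normal format=maxsubmitjobsperuser")
    return int(out) if out else None


def get_number_of_submitted_jobs() -> int:
    user = getpass.getuser()
    # --array expands array jobs so that each task counts individually.
    return int(_run(f"squeue --array -u {user} -h | wc -l"))
```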

1.60

Fix logging when using multiprocessing with a `start_method` other than `fork`. Previously, logging output disappeared in that situation because the loggers were not set up again in the newly spawned worker processes.
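
The underlying pitfall can be reproduced with plain `multiprocessing`: with `spawn` (or `forkserver`), child processes do not inherit the parent's logging handlers, so logging must be configured in each worker. A minimal sketch of the general pattern, not of cluster_tools internals:

```python
import logging
import multiprocessing


def setup_logging() -> None:
    # With start_method="spawn", handlers configured in the parent are not
    # inherited, so each worker must configure logging itself.
    logging.basicConfig(level=logging.INFO, format="%(processName)s %(message)s")


def work(x: int) -> int:
    logging.info("processing %s", x)
    return x * x


if __name__ == "__main__":
    ctx = multiprocessing.get_context("spawn")
    # The initializer runs once per worker process, re-setting up logging there.
    with ctx.Pool(2, initializer=setup_logging) as pool:
        print(pool.map(work, range(4)))
```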

1.59

Add `DebugSequentialExecutor`, which can be used for debugging purposes. This executor does not spawn new processes for its jobs, so `breakpoint()` calls work without context-related problems. Use `get_executor("debug_sequential")` to get an instance.
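
A minimal usage sketch, assuming the executor follows the usual `concurrent.futures`-style interface of the other cluster_tools executors:

```python
import cluster_tools


def square(n: int) -> int:
    # A breakpoint() set here is hit in the calling process, since the
    # debug_sequential executor runs jobs without spawning new processes.
    return n * n


with cluster_tools.get_executor("debug_sequential") as executor:
    print(list(executor.map(square, range(5))))  # [0, 1, 4, 9, 16]
```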

1.58

1.57

Allow callers to set up logging using a callback. To do so, provide `logging_setup_fn` when calling `cluster_tools.get_executor`.
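
A minimal sketch of what this might look like; the callback body and the assumption that it takes no arguments are illustrative, so consult the package for the exact signature:

```python
import logging

import cluster_tools


def setup_worker_logging() -> None:
    # Hypothetical callback body: configure logging as needed before jobs run.
    logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")


executor = cluster_tools.get_executor(
    "multiprocessing", logging_setup_fn=setup_worker_logging
)
```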

1.56

On slurm/pbs: Correctly overwrite stale output from previously failing jobs (if a dedicated target path was provided for the output pickle).
