Ray

1.11.1

Patch release including fixes for the following issues:

- Ray Job Submission not working with remote `working_dir` URLs in their runtime environment (https://github.com/ray-project/ray/pull/22018)
- Ray Tune + MLflow integration failing to set MLflow experiment ID (https://github.com/ray-project/ray/pull/23662)
- Dependencies for `gym` not pinned, leading to version incompatibility issues (https://github.com/ray-project/ray/pull/23705)

1.11.0

Highlights

🎉 Ray no longer starts Redis by default. Cluster metadata previously stored in Redis is stored in the GCS now.

Ray Autoscaler

🎉 New Features
- AWS CloudWatch dashboard support 20266

💫 Enhancements
- KubeRay autoscaler prototype 21086

🔨 Fixes
- `ray.autoscaler.sdk` import issue 21795

Ray Core

🎉 New Features
- Set actor died error message in ActorDiedError 20903
- Event stats is enabled by default 21515

🔨 Fixes
- Better support for nested tasks
- Fixed 16GB Mac perf issue by limiting the plasma store size to 2GB 21224
- Fix `SchedulingClassInfo.running_tasks` memory leak 21535
- Round robin during spread scheduling 19968

🏗 Architecture refactoring
- Refactor scheduler resource reporting public APIs 21732
- Refactor ObjectManager wait logic to WaitManager 21369

Ray Data Processing
🎉 New Features
- More powerful to_torch() API, providing more control over the GPU batch format. (21117)
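
A minimal sketch of the richer `to_torch()` call, assuming a small in-memory dataset; the column names and batch size below are illustrative rather than prescribed by the API:

```python
import ray

# Illustrative tabular dataset with a feature column "x" and a label column "label".
ds = ray.data.from_items([{"x": float(i), "label": i % 2} for i in range(100)])

# to_torch() returns a Torch IterableDataset; label/feature selection and
# batching are controlled here instead of in a separate wrapper.
torch_ds = ds.to_torch(
    label_column="label",
    feature_columns=["x"],
    batch_size=16,
)

for features, labels in torch_ds:
    pass  # feed each batch to a training loop
```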

🔨 Fixes
- Fix simple Dataset sort generating only 1 non-empty block. (21588)
- Improve error handling across sorting, groupbys, and aggregations. (21610, 21627)
- Fix boolean tensor column representation and slicing. (22358)

RLlib

🎉 New Features
- Better utils for flattening complex inputs and enabling prev-actions for LSTM/attention with complex action spaces. (21330)
- `MultiAgentEnv` pre-checker (21476)
- Base env pre-checker. (21569)

🔨 Fixes
- Better defaults for QMix (21332)
- Fix contrib/MADDPG + pettingzoo coop-pong-v4. (21452)
- Fix action unsquashing causing inf/NaN actions for unbounded action spaces. (21110)
- Ignore PPO KL-loss term completely if kl-coeff == 0.0 to avoid NaN values (21456)
- Fix `unsquash_action` and `clip_action` (when None) causing wrong actions to be computed by `Trainer.compute_single_action`. (21553)
- Add Conv2d default filter tests and a default setting for 96x96 image obs spaces. (21560)
- Bring back and fix offline RL (BC & MARWIL) learning tests. (21574, 21643)
- SimpleQ should not use a prioritized replay buffer. (21665)
- Fix video recorder env wrapper. Added test case. (21670)

🏗 Architecture refactoring
- Decentralized multi-agent learning (21421)
- Preparatory PR for multi-agent multi-GPU learner (alpha-star style) (21652)

Ray Workflow
🔨 Fixes
- Fixed workflow recovery issue due to a bug in dynamic output 21571

Tune
🎉 New Features
- It is now possible to load all evaluated points from an experiment into a Searcher (21506)
- Add CometLoggerCallback (20766)

💫 Enhancements
- Only sync the checkpoint folder instead of the entire trial folder for cloud checkpoint. (21658)
- Add test for heterogeneous resource request deadlocks (21397)
- Remove unused `return_or_clean_cached_pg` (21403)
- Remove `TrialExecutor.resume_trial` (21225)
- Leave only one canonical way of stopping a trial (21021)

🔨 Fixes
- Replace deprecated `running_sanity_check` with `sanity_checking` in PTL integration (21831)
- Fix loading an `ExperimentAnalysis` object without a registered `Trainable` (21475)
- Fix stale node detection bug (21516)
- Fixes to allow `tune/tests/test_commands.py` to run on Windows (21342)
- Deflake PBT tests (21366)
- Fix dtype coercion in `tune.choice` (21270)

📖 Documentation
- Fix typo in `schedulers.rst` (21777)

Train
🎉 New Features
- Add PrintCallback (21261)
- Add MLflowLoggerCallback (20802)

💫 Enhancements
- Refactor Callback implementation (21468, 21357, 21262)

🔨 Fixes
- Fix Dataloader (21467)

📖 Documentation
- Documentation and example fixes (21761, 21689, 21464)

Serve
🎉 New Features
- Check out our revamped end-to-end [tutorial](https://docs.ray.io/en/master/serve/end_to_end_tutorial.html) that walks through the deployment journey! (#20765)

🔨 Fixes
- Warn when `serve.start()` is called with different options (21562)
- Detect http.disconnect and cancel requests properly (21438)

Thanks
Many thanks to all those who contributed to this release!
isaac-vidas, wuisawesome, stephanie-wang, jon-chuang, xwjiang2010, jjyao, MissiontoMars, qbphilip, yaoyuan97, gjoliver, Yard1, rkooo567, talesa, czgdp1807, DN6, sven1977, kfstorm, krfricke, simon-mo, hauntsaninja, pcmoritz, JamieSlome, chaokunyang, jovany-wang, sidward14, DmitriGekhtman, ericl, mwtian, jwyyy, clarkzinzow, hckuo, vakker, HuangLED, iycheng, edoakes, shrekris-anyscale, robertnishihara, avnishn, mickelliu, ndrwnaguib, ijrsvt, Zyiqin-Miranda, bveeramani, SongGuyang, n30111, WangTaoTheTonic, suquark, richardliaw, qicosmos, scv119, architkulkarni, lixin-wei, Catch-Bull, acxz, benblack769, clay4444, amogkam, marin-ma, maxpumperla, jiaodong, mattip, isra17, raulchen, wilsonwang371, carlogrisetti, ashione, matthewdeng

1.10.0

Highlights

- 🎉 Ray Windows support is now in beta – a significant fraction of the Ray test suite is now passing on Windows. We are eager to learn about your experience with Ray 1.10 on Windows, please file issues you encounter at https://github.com/ray-project/ray/issues. In the upcoming releases we will spend more time on making Ray Serve and Runtime Environment tests pass on Windows and on polishing things.

Ray Autoscaler
💫Enhancements:
- Add autoscaler update time to prometheus metrics (20831)
- Fewer `non_terminated_nodes` calls in autoscaler update (20359, 20623)

🔨 Fixes:
- GCP TPU autoscaling fix (20311)
- Scale-down stability fix (21204)
- Report node launch failure in driver logs (20814)


Ray Client
💫Enhancements
- Client task options are encoded with pickle instead of json (20930)


Ray Core
🎉 New Features:
- `runtime_env`’s `pip` field now installs pip packages in your existing environment instead of installing them in a new isolated environment. (20341)
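
For instance, a `pip` runtime environment can be passed to `ray.init()`; with this change the listed packages are layered onto the current environment rather than a fresh isolated one. The package pin below is illustrative:

```python
import ray

# The listed packages are installed into the existing Python environment
# before any tasks or actors using this runtime_env run.
ray.init(runtime_env={"pip": ["requests==2.26.0"]})

@ray.remote
def fetch_status(url: str) -> int:
    import requests
    return requests.get(url).status_code

print(ray.get(fetch_status.remote("https://example.org")))
```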

🔨 Fixes:
- Fix bug where specifying a per-job runtime_env conda/pip local requirements file with Ray Client on a remote cluster didn’t work (20855)
- Security fixes for `log4j2` – the `log4j2` version has been bumped to 2.17.1 (21373)

💫Enhancements:
- Allow runtime_env `working_dir` and `py_modules` to be of `pathlib.Path` type (20853, 20810); see the sketch after this list
- Add environment variable to skip local runtime_env garbage collection (21163)
- Change runtime_env error log to debug log (20875)
- Improved reference counting for runtime_env resources (20789)
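
A short sketch of the `pathlib.Path` enhancement mentioned above; the directory name is illustrative:

```python
from pathlib import Path

import ray

# working_dir (and py_modules) can now be passed as pathlib.Path objects
# instead of plain strings; "./my_project" is just an example local directory.
ray.init(runtime_env={"working_dir": Path("./my_project")})
```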

🏗 Architecture refactoring:
- Refactor runtime_env to use protobuf for multi-language support (19511)

📖Documentation:
- Add more comprehensive runtime_env documentation (20222, 21131, 20352)


Ray Data Processing
🎉 New Features:
- Added stats framework for debugging Datasets performance (20867, 21070)
- [Dask-on-Ray] New config helper for enabling the Dask-on-Ray scheduler (21114)
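
A minimal sketch of the Dask-on-Ray config helper, assuming it is exposed as `ray.util.dask.enable_dask_on_ray` (see the linked PR and docs for the exact location and semantics):

```python
import ray
import dask.array as da
from ray.util.dask import enable_dask_on_ray  # assumed import path for the new helper

ray.init()
enable_dask_on_ray()  # route subsequent Dask computations through the Ray scheduler

arr = da.ones((1000, 1000), chunks=(250, 250))
print(arr.sum().compute())  # executed via Dask-on-Ray
```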

💫Enhancements:
- Reduce memory usage when converting to a Pandas DataFrame (20921)

🔨 Fixes:
- Fix slow block evaluation when splitting (20693)
- Fix boundary sampling concatenation on non-uniform blocks (20784)
- Fix boolean tensor column slicing (20905)

🏗 Architecture refactoring:
- Refactor table block structure to support more tabular block formats (20721)


RLlib

🎉 New Features:
- Support for RE3 exploration algorithm (for tf only). (19551)
- Environment pre-checks, better failure behavior and enhanced environment API. (20481, 20832, 20868, 20785, 21027, 20811)

🏗 Architecture refactoring:
- Evaluation: Support evaluation setting that makes sure `train` doesn’t ever have to wait for `eval` to finish (because of long episodes). (20757); Always attach latest eval metrics. (21011)
- Soft-deprecate `build_trainer()` utility function in favor of sub-classing `Trainer` directly (and overriding some of its methods). (20635, 20636, 20633, 20424, 20570, 20571, 20639, 20725)
- Experimental no-flatten option for actions/prev-actions. (20918)
- Use `SampleBatch` instead of an input dict whenever possible. (20746)
- Switch off `Preprocessors` by default for `PGTrainer` (experimental). (21008)
- Toward a Replay Buffer API (cleanups; docstrings; renames; move into `rllib/execution/buffers` dir) (20552)

📖Documentation:
- Overhaul of auto-API reference pages. (19786, 20537, 20538, 20486, 20250)
- README and RLlib landing page overhaul (20249).
- Added example containing code to compute an adapted (time-dependent) GAE used by the PPO algorithm (20850).

🔨 Fixes:
- Smaller fixes and enhancements: 20704, 20541, 20793, 20743.


Tune
🎉 New Features:
- Introduce TrialCheckpoint class, making checkpoint download/upload easier (20585)
- Add random state to `BasicVariantGenerator` (20926)
- Multi-objective support for Optuna (20489)

💫Enhancements:
- Add `set_max_concurrency` to Searcher API (20576)
- Allow for tuples in _split_resolved_unresolved_values. (20794)
- Show the name of training func, instead of just ImplicitFunction. (21029)
- Enforce one future at a time for any given trial. (20783)
- Move `on_no_available_trials` to a subclass under `runner` (20809)
- Clean up code (20555, 20464, 20403, 20653, 20796, 20916, 21067)
- Start restricting TrialRunner/Executor interface exposures. (20656)
- TrialExecutor should not take in Runner interface. (20655)


🔨Fixes:
- Deflake test_tune_restore.py (20776)
- Fix best_trial_str for nested custom parameter columns (21078)
- Fix checkpointing error message on K8s (20559)
- Fix testResourceScheduler and testMultiStepRun. (20872)
- Fix tune cloud tests for function and rllib trainables (20536)
- Move _head_bundle_is_empty after conversion (21039)
- Elongate test_trial_scheduler_pbt timeout. (21120)


Train
🔨Fixes:
- Ray Train environment variables are automatically propagated and do not need to be manually set on every node (20523)
- Various minor fixes and improvements (20952, 20893, 20603, 20487)
📖Documentation:
- Update saving/loading checkpoint docs (20973). Thanks jwyyy!
- Various minor doc updates (20877, 20683)


Serve
💫Enhancements:
- Add validation to Serve AutoscalingConfig class (20779)
- Add Serve metric for HTTP error codes (21009)

🔨Fixes:
- No longer create placement group for deployment with no resources (20471)
- Log errors in deployment initialization/configuration user code (20620)


Jobs
🎉 New Features:
- Logs can be streamed from job submission server with `ray job logs` command (20976)
- Add documentation for ray job submission (20530)
- Propagate custom headers field to JobSubmissionClient and apply to all requests (20663)

🔨Fixes:
- Fix job submission accidentally creating local Ray processes instead of connecting (20705)

💫Enhancements:
- [Jobs] Update CLI examples to use the same setup (20844)

Thanks
Many thanks to all those who contributed to this release!

dmatrix, suquark, tekumara, jiaodong, jovany-wang, avnishn, simon-mo, iycheng, SongGuyang, ArturNiederfahrenhorst, wuisawesome, kfstorm, matthewdeng, jjyao, chenk008, Sertingolix, larrylian, czgdp1807, scv119, duburcqa, runedog48, Yard1, robertnishihara, geraint0923, amogkam, DmitriGekhtman, ijrsvt, kk-55, lixin-wei, mvindiola1, hauntsaninja, sven1977, Hankpipi, qbphilip, hckuo, newmanwang, clay4444, edoakes, liuyang-my, iasoon, WangTaoTheTonic, fgogolli, dproctor, gramhagen, krfricke, richardliaw, bveeramani, pcmoritz, ericl, simonsays1980, carlogrisetti, stephanie-wang, AmeerHajAli, mwtian, xwjiang2010, shrekris-anyscale, n30111, lchu-ibm, Scalsol, seonggwonyoon, gjoliver, qicosmos, xychu, iamhatesz, architkulkarni, jwyyy, rkooo567, mattip, ckw017, MissiontoMars, clarkzinzow

1.9.2

Patch release to bump the `log4j` version from `2.16.0` to `2.17.0`. This resolves the security issue [CVE-2021-45105](https://github.com/advisories/GHSA-p6xc-xr62-6r2g).

1.9.1

Patch release to bump the `log4j2` version from `2.14` to `2.16`. This resolves the security vulnerabilities https://nvd.nist.gov/vuln/detail/CVE-2021-44228 and https://nvd.nist.gov/vuln/detail/CVE-2021-45046.

No library or core changes included.

Thanks seonggwonyoon and ijrsvt for contributing the fixes!

1.9.0

Highlights

- Ray Train is now in beta! If you are using Ray Train, we’d love to hear your feedback [here](https://docs.google.com/forms/d/e/1FAIpQLSfI3asn-m1cQSIbdrk_cd6qYenZvt-eNTVfTwba3SVhmHcHIg/viewform)!
- Ray Docker images for multiple CUDA versions are now provided (19505)! You can specify a `-cuXXX` suffix to pick a specific version.
- `ray-ml:cpu` images are now deprecated. The `ray-ml` images are only built for GPU.
- Ray Datasets now supports groupby and aggregations! See the [groupby API](https://docs.ray.io/en/master/data/package-ref.html#ray.data.Dataset.groupby) and [GroupedDataset](https://docs.ray.io/en/master/data/package-ref.html#groupeddataset-api) docs for usage, and the short sketch after this list.
- We are making continuing progress in improving Ray stability and usability on Windows. We encourage you to try it out and report feedback or issues at https://github.com/ray-project/ray/issues.
- We are launching a Ray Job Submission server + CLI & SDK clients to make it easier to submit and monitor Ray applications when you don’t want an active connection using Ray Client. This is currently in alpha, so the APIs are subject to change, but please test it out and file issues / leave feedback on GitHub & discuss.ray.io!
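
As a taste of the new Datasets groupby/aggregation support, here is a minimal sketch using the callable-key form on a simple dataset; the grouping function and aggregations are illustrative:

```python
import ray

# Group the integers 0..99 by value modulo 3, then aggregate per group.
ds = ray.data.range(100)
grouped = ds.groupby(lambda x: x % 3)

print(grouped.count().take())  # number of records per group
print(grouped.sum().take())    # sum of the records in each group
```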


Ray Autoscaler
💫Enhancements:
- Graceful termination of Ray nodes prior to autoscaler scale down (20013)
- Ray Clusters on AWS are colocated in one Availability Zone to reduce costs & latency (19051)

Ray Client
🔨 Fixes:
- `ray.put` on a list of objects now returns a single object ref (19737)

Ray Core
🎉 New Features:
- Support remote file storage for runtime_env (20280, 19315)
- Added Ray job submission client, CLI and REST API (19567, 19657, 19765, 19845, 19851, 19843, 19860, 19995, 20094, 20164, 20170, 20192, 20204)
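
A minimal sketch of the alpha job submission SDK; the import path and method names below are assumptions based on later Ray releases and may differ in this alpha, so check the job submission docs for the exact API:

```python
# Assumed import path; in the alpha release the client may live under a different module.
from ray.job_submission import JobSubmissionClient

# Address of the job submission server on the head node (illustrative).
client = JobSubmissionClient("http://127.0.0.1:8265")

job_id = client.submit_job(
    entrypoint="python my_script.py",             # illustrative entrypoint
    runtime_env={"working_dir": "./my_project"},  # ship local code alongside the job
)
print(client.get_job_status(job_id))
```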

💫Enhancements:
- Garbage collection for runtime_env (20009, 20072)
- Improved logging and error messages for runtime_env (19897, 19888, 18893)

🔨 Fixes:
- Fix runtime_env hanging issues (19823)
- Fix specifying runtime env in ray.remote decorator with Ray Client (19626)
- Threaded actor / core worker / named actor race condition fixes (19751, 19598, 20178, 20126)

📖Documentation:
- New page “Handling Dependencies”
- New page “Ray Job Submission: Going from your laptop to production”

Ray Java
API Changes:
- Fully supported namespace APIs. ([Check out the namespaces docs for more information.](https://docs.ray.io/en/latest/namespaces.html)) #19468, 19986, 20057
- Removed global named actor APIs and global placement group APIs. 20219 20135
- Added timeout parameter for `Ray.Get()` API. 20282

Note:
- Use `Ray.getActor(name, namespace)` API to get a named actor between jobs instead of `Ray.getGlobalActor(name)`.
- Use `PlacementGroup.getPlacementGroup(name, namespace)` API to get a placement group between jobs instead of `PlacementGroup.getGlobalPlacementGroup(name)`.

Ray Datasets
🎉 New Features:
- Added groupby and aggregations (19435, 19673, 20010, 20035, 20044, 20074)
- Support custom write paths (19347)

🔨 Fixes:
- Support custom CSV write options (19378)

🏗 Architecture refactoring:
- Optimized block compaction (19681)

Ray Workflow
🎉 New Features:
- Workflows now support events (19239)
- Allow users to specify metadata for workflows and steps (19372)
- Allow running a step in-place if the resources match (19928)

🔨 Fixes:
- Fix the S3 path issue (20115)

RLlib
🏗 Architecture refactoring:
- “framework=tf2” + “eager_tracing=True” is now (almost) as fast as “framework=tf”. A check for tf2.x eager re-traces has been added making sure re-tracing does not happen outside the initial function calls. All CI learning tests (CartPole, Pendulum, FrozenLake) are now also run as framework=tf2. (19273, 19981, 20109)
- Prepare deprecation of `build_trainer`/`build_(tf_)?policy` utility functions. Instead, use sub-classing of `Trainer` or `Torch|TFPolicy`. POCs done for `PGTrainer`, `PPO[TF|Torch]Policy`. (20055, 20061)
- V-trace (APPO & IMPALA): Keeping the last timestep (rather than dropping it) can now optionally be switched on. The default is still to drop it, but this may be changed in a future release. (19601)
- Upgrade to gym 0.21. (19535)

🔨 Fixes:
- Minor bugs/issues fixes and enhancements: 19069, 19276, 19306, 19408, 19544, 19623, 19627, 19652, 19693, 19805, 19807, 19809, 19881, 19934, 19945, 20095, 20128, 20134, 20144, 20217, 20283, 20366, 20387

📖Documentation:
- RLlib main page (“RLlib in 60sec”) overhaul. (20215, 20248, 20225, 19932, 19982)
- Major docstring cleanups in preparation for complete overhaul of API reference pages. (19784, 19783, 19808, 19759, 19829, 19758, 19830)
- Other documentation enhancements. (19908, 19672, 20390)


Tune

💫Enhancements:
- Refactored and improved experiment analysis (20197, 20181)
- Refactored cloud checkpointing API/SyncConfig (20155, 20418, 19632, 19641, 19638, 19880, 19589, 19553, 20045, 20283)
- Remove magic results (e.g. config) before calculating trial result metrics (19583)
- Removal of tech debt (19773, 19960, 19472, 17654)
- Improve testing (20016, 20031, 20263, 20210, 19730)
- Various enhancements (19496, 20211)

🔨Fixes:
- Documentation fixes (20130, 19791)
- Tutorial fixes (20065, 19999)
- Drop 0 value keys from PGF (20279)
- Fix shim error message for scheduler (19642)
- Avoid looping through _live_trials twice in _get_next_trial. (19596)
- Clean up legacy branch in `update_avail_resources`. (20071)
- Fix Train/Tune integration on Client (20351)

Train

Ray Train is now in Beta! The beta version includes various usability improvements for distributed PyTorch training and checkpoint management, support for [Ray Client](https://docs.ray.io/en/master/cluster/ray-client.html), and an [integration with Ray Datasets](https://docs.ray.io/en/master/train/user_guide.html#distributed-data-ingest-ray-datasets) for distributed data ingest.

Check out the docs [here](https://docs.ray.io/en/latest/train/train.html), and the migration guide from Ray SGD to Ray Train [here](https://docs.ray.io/en/latest/train/migration-guide.html). If you are using Ray Train, we’d love to hear your feedback [here](https://docs.google.com/forms/d/e/1FAIpQLSfI3asn-m1cQSIbdrk_cd6qYenZvt-eNTVfTwba3SVhmHcHIg/viewform)!

🎉 New Features:
- New `train.torch.prepare_model(...)` and `train.torch.prepare_data_loader(...)` [API](https://docs.ray.io/en/master/train/user_guide.html#update-training-function) to automatically handle preparing your PyTorch model and DataLoader for distributed training (20254); see the sketch after this list.
- Checkpoint management and support for custom checkpoint strategies (19111).
- Easily [configure](https://docs.ray.io/en/master/train/user_guide.html#configuring-checkpoints) what and how many checkpoints to save to disk.
- Support for [Ray Client](https://docs.ray.io/en/master/cluster/ray-client.html) (#20123, 20351).
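
A condensed sketch of the new preparation helpers inside a training function; the model, data, and worker count are illustrative:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

import ray.train.torch  # noqa: F401 (makes train.torch.* available)
from ray import train
from ray.train import Trainer


def train_func():
    # Wraps the model in DistributedDataParallel and moves it to the right device.
    model = train.torch.prepare_model(torch.nn.Linear(4, 1))

    dataset = TensorDataset(torch.randn(64, 4), torch.randn(64, 1))
    # Adds a DistributedSampler and handles device placement for the DataLoader.
    loader = train.torch.prepare_data_loader(DataLoader(dataset, batch_size=8))

    loss_fn = torch.nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    for features, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(features), labels)
        loss.backward()
        optimizer.step()


trainer = Trainer(backend="torch", num_workers=2)
trainer.start()
trainer.run(train_func)
trainer.shutdown()
```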

💫Enhancements:
- Simplify workflow for training with a single worker (19814).
- [Ray Placement Groups](https://docs.ray.io/en/master/placement-group.html) are used for scheduling the training workers (#20091).
- `PACK` strategy is used by default but can be changed by setting the `TRAIN_ENABLE_WORKER_SPREAD` environment variable.
- Automatically unwrap Torch DDP model and convert to CPU when saving a model as checkpoint (20333).

🔨Fixes:
- Fix `HorovodBackend` to automatically detect NICs; thanks tgaddair! (19533).

📖Documentation:
- Denote public facing APIs with beta stability (20378)
- Doc updates (20271)

Serve
We would love to hear from you! Fill out the [Ray Serve survey here](https://forms.gle/zg4gDS84z8wTpKBLA).

🎉 New Features:
- New `checkpoint_path` configuration allows Serve to save its internal state to external storage (disk, S3, and GCS) and [recover upon failure](https://docs.ray.io/en/master/serve/deployment.html#failure-recovery). (19166, 19998, 20104)
- [Replica autoscaling](https://docs.ray.io/en/master/serve/core-apis.html#autoscaling) is ready for testing out! (19559, 19520)
- Native [Pipeline API for model composition](https://docs.ray.io/en/master/serve/pipeline.html) is ready for testing as well!

🔨Fixes:
- Serve deployment functions or classes can take no parameters (19708)
- Replica slow start message is improved. You can now see whether it is slow to allocate resources or slow to run the constructor. (19431)
- `pip install ray[serve]` will now install `ray[default]` as well. (19570)

🏗 Architecture refactoring:
- The terms “backend” and “endpoint” are officially deprecated in favor of “deployment”. (20229, 20085, 20040, 20020, 19997, 19947, 19923, 19798).
- Progress towards Java API compatibility (19463).

Dashboard
- Ray Dashboard is now enabled on Windows! (19575)

Thanks
Many thanks to all those who contributed to this release!
krfricke, stefanbschneider, ericl, nikitavemuri, qicosmos, worldveil, triciasfu, AmeerHajAli, javi-redondo, architkulkarni, pdames, clay4444, mGalarnyk, liuyang-my, matthewdeng, suquark, rkooo567, mwtian, chenk008, dependabot[bot], iycheng, jiaodong, scv119, oscarknagg, Rohan138, stephanie-wang, Zyiqin-Miranda, ijrsvt, roireshef, tkaymak, simon-mo, ashione, jovany-wang, zenoengine, tgaddair, 11rohans, amogkam, zhisbug, lchu-ibm, shrekris-anyscale, pcmoritz, yiranwang52, mattip, sven1977, Yard1, DmitriGekhtman, ckw017, WangTaoTheTonic, wuisawesome, kcpevey, kfstorm, rhamnett, renos, TeoZosa, SongGuyang, clarkzinzow, avnishn, iasoon, gjoliver, jjyao, xwjiang2010, dmatrix, edoakes, czgdp1807, heng2j, sungho-joo, lixin-wei
