Ray

Latest version: v2.22.0



1.8.0

Highlights
- Ray SGD has been rebranded to Ray Train! The new documentation landing page can be found [here](https://docs.ray.io/en/master/train/train.html).
- Ray Datasets is now in beta! The beta release includes a new integration with Ray Train yielding scalable ML ingest for distributed training. Check out the docs [here](https://docs.ray.io/en/master/data/dataset.html), try it out for your ML ingest and batch inference workloads, and let us know how it goes!
- This Ray release supports Apple Silicon (M1 Macs). [Check out the installation instructions for more information!](https://docs.ray.io/en/master/installation.html#apple-silicon-support)
Ray Autoscaler
🎉 New Features:
- Fake multi-node mode for autoscaler testing (18987)

💫Enhancements:
- Improve unschedulable task warning messages by integrating with the autoscaler (18724)

Ray Client
💫Enhancements:
- Use async rpc for remote call and actor creation (18298)

Ray Core
💫Enhancements:
- Eagerly install job-level runtime_env (19449, 17949)
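For illustration, a minimal sketch of what a job-level runtime_env looks like; the keys follow the documented runtime_env schema, and the `ray.init` call is left commented out so the snippet runs without a cluster:

```python
# Minimal sketch of a job-level runtime_env. With eager installation, Ray
# sets this up at job submission time rather than lazily on the first task.
job_env = {
    "working_dir": ".",                  # local files uploaded to every node
    "pip": ["requests"],                 # installed before tasks start
    "env_vars": {"LOG_LEVEL": "debug"},  # set for every worker process
}

# In a real program (requires a Ray installation and cluster):
# import ray
# ray.init(runtime_env=job_env)

print(sorted(job_env))  # ['env_vars', 'pip', 'working_dir']
```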

🔨 Fixes:
- Fixed resource demand reporting for infeasible 1-CPU tasks (19000)
- Fixed printing Python stack trace in Python worker (19423)
- Fixed macOS security popups (18904)
- Fixed thread safety issues for the core worker (18902, 18910, 18913, 19343)
- Fixed placement group performance and resource leaking issues (19277, 19141, 19138, 19129, 18842, 18652)
- Improve unschedulable task warning messages by integrating with the autoscaler (18724)
- Improved Windows support (19014, 19062, 19171, 19362)
- Fixed runtime_env issues (19491, 19377, 18988)

Ray Data
Ray Datasets is now in beta! The beta release includes a new integration with Ray Train yielding scalable ML ingest for distributed training. It supports repeating and rewindowing pipelines, zipping two pipelines together, better cancellation of Datasets workloads, and many performance improvements. Check out the docs [here](https://docs.ray.io/en/master/data/dataset.html), try it out for your ML ingest and batch inference workloads, and let us know how it goes!

🎉 New Features:
- Ray Train integration (17626)
- Add support for repeating and rewindowing a DatasetPipeline (19091)
- .iter_epochs() API for iterating over epochs in a DatasetPipeline (19217)
- Add support for zipping two datasets together (18833)
- Transformation operations are now cancelled when one fails or the entire workload is killed (18991)
- Expose from_pandas()/to_pandas() APIs that accept/return plain Pandas DataFrames (18992)
- Customize compression, read/write buffer size, metadata, etc. in the IO layer (19197)
- Add spread resource prefix for manual round-robin resource-based task load balancing
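The new zip feature pairs rows from two datasets positionally and merges their columns; the idea can be sketched in plain Python (row layout illustrative, not the actual block format):

```python
# Plain-Python sketch of Dataset.zip semantics: rows are paired by position
# and their columns are merged into a single combined row.
left = [{"id": i} for i in range(3)]
right = [{"value": i * 10} for i in range(3)]

zipped = [{**a, **b} for a, b in zip(left, right)]
print(zipped[1])  # {'id': 1, 'value': 10}
```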

💫Enhancements:
- Only a minimal number of rows is now dropped when performing an equalized split (18953)
- Parallelized metadata fetches when reading Parquet datasets (19211)
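The equalized-split behavior above can be sketched in plain Python (helper name illustrative): every shard receives the same number of rows, so very few rows are dropped.

```python
# Sketch of an equalized split: each shard gets len(rows) // n rows, so at
# most n - 1 rows are dropped in total.
def equal_split(rows, n):
    per_shard = len(rows) // n
    return [rows[i * per_shard:(i + 1) * per_shard] for i in range(n)]

shards = equal_split(list(range(10)), 3)
print([len(s) for s in shards])  # [3, 3, 3] -- only 10 % 3 == 1 row dropped
```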

🔨 Fixes:
- Tensor columns now properly support table slicing (19534)
- Prevent Datasets tasks from being captured by Ray Tune placement groups (19208)
- Empty datasets are properly handled in most transformations (18983)

🏗 Architecture refactoring:
- Tensor dataset representation changed to a table with a single tensor column (18867)

RLlib

🎉 New Features:
- Allow n-step > 1 and prioritized replay for R2D2 and RNNSAC agents. (18939)
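An n-step target sums the next n discounted rewards before bootstrapping; a bare sketch of the quantity an n-step > 1 replay buffer stores (function name illustrative, not RLlib's API):

```python
# n-step return over the next n rewards:
# R = r_t + g * r_{t+1} + ... + g^(n-1) * r_{t+n-1}
def n_step_return(rewards, gamma, n):
    return sum(gamma ** k * r for k, r in enumerate(rewards[:n]))

print(round(n_step_return([1.0, 1.0, 1.0], gamma=0.9, n=3), 4))  # 2.71
```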

🔨 Fixes:
- Fix memory leaks in TF2 eager mode. (19198)
- Faster worker space inference when spaces are specified through the configuration. (18805)
- Fix bug for complex obs spaces containing Box([2D shape]) and discrete components. (18917)
- Fixed race conditions in Torch multi-GPU stats. (18937)
- Fix SAC agent with dict space. (19101)
- Fix A3C/IMPALA in multi-agent setting. (19100)

🏗 Architecture refactoring:
- Unify the results dictionary returned from Trainer.train() across agents, regardless of framework (TF or PyTorch), multi-agent setup, multi-GPU usage, or algorithms that run more than one SGD iteration (e.g., PPO) (18879)

Ray Workflow

🎉 New Features:
- Introduce workflow.delete (19178)

🔨Fixes:
- Fix a bug that allowed a workflow step to be executed multiple times (19090)

🏗 Architecture refactoring:
- Object reference serialization is decoupled from workflow storage (18328)


Tune

🎉 New Features:
- PBT: Add burn-in period (19321)

💫Enhancements:
- Optional forcible trial cleanup; return default autofilled metrics even if the Trainable doesn't report at least once (19144)
- Use queue to display JupyterNotebookReporter updates in Ray client (19137)
- Add resume="AUTO" and enhance resume error messages (19181)
- Provide information about resource deadlocks, early stopping in Tune docs (18947)
- Fix HEBOSearch installation docs (18861)
- OptunaSearch: check compatibility of search space with evaluated_rewards (18625)
- Add `save` and `restore` methods for searchers that were missing it & test (18760)
- Add documentation for reproducible runs (setting seeds) (18849)
- Deprecate `max_concurrent` in `TuneBOHB` (18770)
- Add `on_trial_result` to ConcurrencyLimiter (18766)
- Ensure arguments passed to tune `remote_run` match (18733)
- Only disable ipython in remote actors (18789)

🔨Fixes:
- Only try to sync driver if sync_to_driver is actually enabled (19589)
- sync_client: Fix delete template formatting (19553)
- Force no result buffering for hyperband schedulers (19140)
- Exclude trial checkpoints in experiment sync (19185)
- Fix how durable trainable is retained in global registry (19223, 19184)
- Ensure `loc` column in progress reporter is filled (19182)
- Deflake PBT Async test (19135)
- Fix `Analysis.dataframe()` documentation and enable passing of `mode=None` (18850)

Ray Train (SGD)

Ray SGD has been rebranded to Ray Train! The new documentation landing page can be found [here](https://docs.ray.io/en/master/train/train.html). Ray Train is integrated with Ray Datasets for distributed data loading while training, documentation available [here](https://docs.ray.io/en/master/train/user_guide.html#distributed-data-ingest-ray-datasets).

🎉 New Features:
- Ray Datasets Integration (17626)

🔨Fixes:
- Improved support for multi-GPU training (18824, 18958)
- Make actor creation async (19325)

📖Documentation:
- Rename Ray SGD v2 to Ray Train (19436)
- Added migration guide from Ray SGD v1 (18887)

Serve
🎉 New Features:
- Add ability to recover from a checkpoint on cluster failure (19125)
- Support kwargs to deployment constructors (19023)

🔨Fixes:
- Fix asyncio compatibility issue (19298)
- Catch spurious ConnectionErrors during shutdown (19224)
- Fix error with uris=None in runtime_env (18874)
- Fix shutdown logic with exit_forever (18820)

🏗 Architecture refactoring:
- Progress towards Serve autoscaling (18793, 19038, 19145)
- Progress towards Java support (18630)
- Simplifications for long polling (19154, 19205)

Dashboard
🎉 New Features:
- Basic support for the dashboard on Windows (19319)

🔨Fixes:
- Fix healthcheck issue causing the dashboard to crash under load (19360)
- Work around aiohttp 4.0.0+ issues (19120)

🏗 Architecture refactoring:
- Improve dashboard agent retry logic (18973)

Thanks
Many thanks to all those who contributed to this release!
rkooo567, lchu-ibm, scv119, pdames, suquark, antoine-galataud, sven1977, mvindiola1, krfricke, ijrsvt, sighingnow, marload, jmakov, clay4444, mwtian, pcmoritz, iycheng, ckw017, chenk008, jovany-wang, jjyao, hauntsaninja, franklsf95, jiaodong, wuisawesome, odp, matthewdeng, duarteocarmo, czgdp1807, gjoliver, mattip, richardliaw, max0x7ba, Jasha10, acxz, xwjiang2010, SongGuyang, simon-mo, zhisbug, ccssmnn, Yard1, hazeone, o0olele, froody, robertnishihara, amogkam, sasha-s, xychu, lixin-wei, architkulkarni, edoakes, clarkzinzow, DmitriGekhtman, avnishn, liuyang-my, stephanie-wang, Chong-Li, ericl, juliusfrost, carlogrisetti

1.6.0

Highlights

* [Runtime Environments](https://docs.ray.io/en/releases-1.6.0/advanced.html#runtime-environments) are ready for general use! This feature enables you to dynamically specify per-task, per-actor and per-job dependencies, including a working directory, environment variables, pip packages and conda environments. Install it with `pip install -U 'ray[default]'`.
* Ray Dataset is now in alpha! Dataset is an interchange format for distributed datasets, powered by Arrow. You can also use it for a basic Ray native data processing experience. [Check it out here](https://docs.ray.io/en/releases-1.6.0/data/dataset.html).
* [Ray Lightning](https://github.com/ray-project/ray_lightning) v0.1 has been released! You can install it via `pip install ray-lightning`. Ray Lightning is a library of PyTorch Lightning plugins for distributed training using Ray. Features:
  * Enables quick and easy parallel training
  * Supports [PyTorch DDP](https://github.com/ray-project/ray_lightning#pytorch-distributed-data-parallel-plugin-on-ray), [Horovod](https://github.com/ray-project/ray_lightning#horovod-plugin-on-ray), and [Sharded DDP with Fairscale](https://github.com/ray-project/ray_lightning#model-parallel-sharded-training-on-ray)
  * Integrates with [Ray Tune for hyperparameter optimization](https://github.com/ray-project/ray_lightning#hyperparameter-tuning-with-ray-tune) and is compatible with [Ray Client](https://github.com/ray-project/ray_lightning#multi-node-training-from-your-laptop)
* `pip install ray` now has a significantly reduced set of dependencies. Features such as the dashboard, the cluster launcher, runtime environments, and observability metrics may require `pip install -U 'ray[default]'` to be enabled. Please report any problems on GitHub!

Ray Autoscaler

🎉 New Features:

* The Ray autoscaler now supports TPUs on GCP. Please refer to this example for spinning up a [simple TPU cluster](https://github.com/ray-project/ray/blob/releases/1.6.0/python/ray/autoscaler/gcp/tpu.yaml). (#17278)

💫Enhancements:

* Better AWS networking configurability (17236, 17207, 14080)
* Support for running autoscaler without NodeUpdaters (17194, 17328)

🔨 Fixes:

* Code clean up and corrections to downscaling policy (17352)
* Docker file sync fix (17361)

Ray Client

💫Enhancements:

* Updated docs for client server ports and ray.init(ray://) (17003, 17333)
* Better error handling for deserialization failures (17035)

🔨 Fixes:

* Fix for server proxy not working with non-default redis passwords (16885)

Ray Core

🎉 New Features:

* [Runtime Environments](https://docs.ray.io/en/releases-1.6.0/advanced.html#runtime-environments) are ready for general use!
  * Specify a working directory to upload your local files to all nodes in your cluster.
  * Specify different conda and pip dependencies for your tasks and actors and have them installed on the fly.
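For illustration, the shape of a per-task runtime_env; the keys follow the documented schema, the decorated function is hypothetical, and the Ray calls are commented out so the snippet runs standalone:

```python
# Illustrative per-task/per-actor runtime_env dict.
task_env = {
    "working_dir": "./src",          # local files shipped to all nodes
    "pip": ["pandas"],               # installed on the fly for this task
    "env_vars": {"STAGE": "test"},
}

# In a real program (hypothetical task, requires a Ray cluster):
# import ray
# @ray.remote(runtime_env=task_env)
# def preprocess(path):
#     ...

print(sorted(task_env))  # ['env_vars', 'pip', 'working_dir']
```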

🔨 Fixes:

* Fix plasma store bugs for better data processing stability (16976, 17135, 17140, 17187, 17204, 17234, 17396, 17550)
* Fix a placement group bug where CUDA_VISIBLE_DEVICES were not properly detected (17318)
* Improved Ray stacktrace messages. (17389)
* Improved GCS stability and scalability (17456, 17373, 17334, 17238, 17072)

🏗 Architecture refactoring:

* Plasma store refactor for better testability and extensibility. (17332, 17313, 17307)

Ray Data Processing

Ray Dataset is now in alpha! Dataset is an interchange format for distributed datasets, powered by Arrow. You can also use it for a basic Ray native data processing experience. [Check it out here](https://docs.ray.io/en/releases-1.6.0/data/dataset.html).

RLlib

🎉 New Features:

* Support for RNN/LSTM models with SAC (new agent: "RNNSAC"). Shoutout to ddworak94! (16577)
* Support for ONNX model export (tf and torch). (16805)
* Allow Policies to be added to/removed from a Trainer on-the-fly. (17566)

🔨 Fixes:

* Fix for view requirements captured during compute actions test pass. Shoutout to Chris Bamford (15856)
* Issues: 17397, 17425, 16715, 17174. When on the driver, TorchPolicy/TFPolicy should not use `ray.get_gpu_ids()` (because no GPUs are assigned by Ray). (17444)

* Other bug fixes: 15709, 15911, 16083, 16716, 16744, 16896, 16999, 17010, 17014, 17118, 17160, 17315, 17321, 17335, 17341, 17356, 17460, 17543, 17567, 17587

🏗 Architecture refactoring:

* CV2 to Skimage dependency change (CV2 still supported). Shoutout to Vince Jankovics. (16841)
* Unify tf and torch policies wrt. multi-GPU handling: PPO-torch is now 33% faster on Atari and 1 GPU. (17371)
* Implement all policy maps inside RolloutWorkers to be LRU-caches so that a large number of policies can be added on-the-fly w/o running out of memory. (17031)
* Move all tf static-graph code into DynamicTFPolicy, such that policies can be deleted and their tf-graph is GC'd. (17169)
* Simplify multi-agent configs: In most cases, creating dummy envs (only to retrieve spaces) are no longer necessary. (16565, 17046)

📖Documentation:

* Example scripts do-over (shoutout to Stefan Schneider for this initiative).
* Example script: League-based self-play with "open spiel" env. (17077)
* Other doc improvements: 15664 (shoutout to kk-55), 17030, 17530

Tune

🎉 New Features:

* Dynamic trial resource allocation with ResourceChangingScheduler (16787)
* It is now possible to use a define-by-run function to generate a search space with OptunaSearcher (17464)
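A define-by-run space is a function that asks a trial object for each parameter imperatively, in the Optuna style; a sketch with a stub trial so it runs without Optuna installed (the stub and function names are illustrative, and wiring into Tune is omitted):

```python
class StubTrial:
    """Minimal stand-in for an Optuna trial so the sketch runs standalone."""
    def suggest_float(self, name, low, high, log=False):
        return low
    def suggest_int(self, name, low, high):
        return low

def space(trial):
    # Parameters are sampled imperatively, one suggest_* call at a time,
    # so the space can branch on earlier values.
    trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    trial.suggest_int("layers", 1, 4)
    # Constants can be returned alongside the sampled values.
    return {"batch_size": 128}

print(space(StubTrial()))  # {'batch_size': 128}
```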

💫Enhancements:

* String names of searchers/schedulers can now be used directly in tune.run (17517)
* Filter placement group resources if not in use (progress reporting) (16996)
* Add unit tests for flatten_dict (17241)

🔨Fixes:

* Fix HDFS sync down template (17291)
* Re-enable TensorboardX without Torch installed (17403)

📖Documentation:

* LightGBM integration (17304)
* Other documentation improvements: 17407 (shoutout to amavilla), 17441, 17539, 17503

SGD

🎉 New Features:

* We have started initial development on a new RaySGD v2! We will be rolling it out in a future version of Ray. See the documentation [here](https://docs.ray.io/en/master/raysgd/v2/raysgd.html#sgd-v2-docs). (17536, 17623, 17357, 17330, 17532, 17440, 17447, 17300, 17253)

💫Enhancements:

* Placement Group support for TorchTrainer (17037)

Serve

🎉 New Features:

* Add Ray API stability annotations to Serve, marking many `serve.*` APIs as `Stable` (17295)
* Support `runtime_env`'s `working_dir` for Ray Serve (16480)

🔨Fixes:

* Fix FastAPI's response_model not added to class based view routes (17376)
* Replace `backend` with `deployment` in metrics & logging (17434)

🏗Stability Enhancements:

* Large-scale (1K+ cores) single- and multi-deployment Ray Serve tests now run nightly (17310, 17411, 17368, 17026, 17277)

Thanks
Many thanks to all who contributed to this release:

suquark, xwjiang2010, clarkzinzow, kk-55, mGalarnyk, pdames, Souphis, edoakes, sasha-s, iycheng, stephanie-wang, antoine-galataud, scv119, ericl, amogkam, ckw017, wuisawesome, krfricke, vakker, qingyun-wu, Yard1, juliusfrost, DmitriGekhtman, clay4444, mwtian, corentinmarek, matthewdeng, simon-mo, pcmoritz, qicosmos, architkulkarni, rkooo567, navneet066, dependabot[bot], jovany-wang, kombuchafox, thomasjpfan, kimikuri, Ivorforce, franklsf95, MissiontoMars, lantian-xu, duburcqa, ddworak94, ijrsvt, sven1977, kira-lin, SongGuyang, kfstorm, Rohan138, jamesmishra, amavilla, fyrestone, lixin-wei, stefanbschneider, jiaodong, richardliaw, WangTaoTheTonic, chenk008, Catch-Bull, Bam4d

1.5.2

Cherrypick release to address RLlib issue, no library or core changes included.

1.5.1

Cherrypick release to address a few external integration and documentation issues, no library or core changes included.

1.5.0

Highlights
- Ray Datasets is now in alpha (https://docs.ray.io/en/master/data/dataset.html)
- LightGBM on Ray is now in beta (https://github.com/ray-project/lightgbm_ray). It:
  - enables multi-node and multi-GPU training
  - integrates seamlessly with the distributed hyperparameter optimization library Ray Tune
  - comes with fault tolerance handling mechanisms, and
  - supports distributed dataframes and distributed data loading

Ray Autoscaler
🎉 New Features:
- Aliyun support (15712)

💫 Enhancements:
- [Kubernetes] Operator refactored to use Kopf package (15787)
- Flag to control config bootstrap for rsync (16667)
- Prometheus metrics for Autoscaler (16066, 16198)
- Allows launching in subnets where public IP assignment is off by default (16816)

🔨 Fixes:
- [Kubernetes] Fix GPU=0 resource handling (16887)
- [Kubernetes] Release docs updated with K8s test instructions (16662)
- [Kubernetes] Documentation update (16570)
- [Kubernetes] All official images set to rayproject/ray:latest (15988, 16205)
- [Local] Fix bootstrapping ray at a given static set of ips (16202, 16281)
- [Azure] Fix Azure Autoscaling Failures (16640)
- Handle node type key change / deletion (16691)
- [GCP] Retry GCP BrokenPipeError (16952)

Ray Client
🎉 New Features:
- Client integrations with major Ray Libraries (15932, 15996, 16103, 16034, 16029, 16111, 16301)
- Client Connect now returns a context that has `disconnect` and can be used as a context manager (16021)
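The pattern the connect context follows can be sketched in plain Python (class name illustrative, not the real client API): the context exposes `disconnect` and calls it automatically on exit.

```python
class ClientContext:
    """Toy connect context: usable directly or as a context manager."""
    def __init__(self, address):
        self.address = address
        self.connected = True

    def disconnect(self):
        self.connected = False

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.disconnect()   # leaving the with-block disconnects automatically

with ClientContext("ray://head:10001") as ctx:
    assert ctx.connected    # inside the block the connection is live
print(ctx.connected)        # False -- disconnected on exit
```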

💫 Enhancements:
- Better support for multi-threaded client-side applications (16731, 16732)
- Improved error messages and warnings when misusing Ray Client (16454, 16508, 16588, 16163)
- Made Client Object & Actor refs a subclass of their non-client counterparts (16110)

🔨 Fixes:
- `dir()` Works for client-side Actor Handles (16157)
- Avoid server-side time-outs (16554)
- Various fixes to the client-server proxy (16040, 16038, 16057, 16180)

Ray Core
🎉 New Features:
- Ray dataset alpha is available!

🔨 Fixes:
- Fix various Ray IO layer issues that cause hanging & high memory usage (16408, 16422, 16620, 16824, 16791, 16487, 16407, 16334, 16167, 16153, 16314, 15955, 15775)
- Namespace now properly isolates placement groups (16000)
- More efficient object transfer for spilled objects (16364, 16352)

🏗 Architecture refactoring:
- Starting with Ray 1.5.0, the liveness of Ray jobs is guaranteed as long as machines have enough disk space, thanks to the “fallback allocator” mechanism, which allocates plasma objects directly on disk when they cannot be created in memory or spilled to disk.

RLlib
🎉 New Features:
- Support for adding/deleting Policies to a Trainer on-the-fly (16359, 16569, 16927).
- Added new “input API” for customizing offline datasets (shoutout to Julius F.). (16957)
- Allow for external env PolicyServer to listen on n different ports (given n rollout workers); No longer require creating an env on the server side to get env’s spaces. (16583).

🔨 Fixes:
- CQL: Bug fixes and clean-ups (fixed iteration count). (16531, 16332)
- D4RL: 16721
- Ensure curiosity exploration actions are passed in as tf tensors (shoutout to Manny V.). (15704)
- Other bug fixes and cleanups: 16162 and 16309 (shoutout to Chris B.), 15634, 16133, 16860, 16813, 16428, 16867, 16354, 16218, 16118, 16429, 16427, 16774, 16734, 16019, 16171, 16830, 16722

📖 Documentation and testing:
- 16311, 15908, 16271, 16080, 16740, 16843

🏗 Architecture refactoring:
- All RLlib algos operating on Box action spaces now operate on normalized actions by default (ranging from -1.0 to 1.0). This enables PG-style algos to learn in skewed action spaces. (16531)
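Concretely, a normalized action in [-1.0, 1.0] is mapped back into the Box bounds before being sent to the environment; a sketch of that mapping (function name illustrative):

```python
# Map a normalized action a in [-1, 1] back into a Box space [low, high].
def unsquash(a, low, high):
    return low + (a + 1.0) * (high - low) / 2.0

print(unsquash(0.0, low=2.0, high=6.0))   # 4.0 -- the midpoint of the space
print(unsquash(-1.0, low=2.0, high=6.0))  # 2.0 -- the lower bound
```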

Tune
🎉 New Features:
- New integration with LightGBM via Tune callbacks (16713).
- New cost-efficient HPO searchers (BlendSearch and CFO) available from the FLAML library (https://github.com/microsoft/FLAML). (#16329)

💫 Enhancements:
- Pass in configurations that have already been evaluated separately to Searchers. This is useful for warm-starting or for meta-searchers, for example (16485)
- Sort trials in reporter table by metric (16576)
- Add option to keep random values constant over grid search (16501)
- Read trial results from json file (15915)

🔨 Fixes:
- Fix infinite loop when using ``Searcher`` that limits concurrency internally in conjunction with a ``ConcurrencyLimiter`` (16416)
- Allow custom sync configuration with ``DurableTrainable`` (16739)
- Logger fixes. W&B: 16806, 16674, 16839. MLflow: 16840
- Various bug fixes: 16844, 16017, 16575, 16675, 16504, 15811, 15899, 16128, 16396, 16695, 16611

📖 Documentation and testing:
- Use BayesOpt for quick start example (16997)
- 16793, 16029, 15932, 16980, 16450, 16709, 15913, 16754, 16619

SGD
🎉 New Features:
- Torch native mixed precision is now supported! (16382)

🔨 Fixes:
- Use target label count for training batch size (16400)

📖 Documentation and testing:
- 15999, 16111, 16301, 16046

Serve
💫 Enhancements:
- UX improvements (16227, 15909)
- Improved logging (16468)

🔨 Fixes:
- Fix shutdown logic (16524)
- Assorted bug fixes (16647, 16760, 16783)

📖 Documentation and testing:
- 16042, 16631, 16759, 16786

Thanks
Many thanks to all who contributed to this release:

Tonyhao96, simon-mo, scv119, Yard1, llan-ml, xcharleslin, jovany-wang, ijrsvt, max0x7ba, annaluo676, rajagurunath, zuston, amogkam, yorickvanzweeden, mxz96102, chenk008, Bam4d, mGalarnyk, kfstorm, crdnb, suquark, ericl, marload, jiaodong, thexiang, ellimac54, qicosmos, mwtian, jkterry1, sven1977, howardlau1999, mvindiola1, stefanbschneider, juliusfrost, krfricke, matthewdeng, zhuangzhuang131419, brandonJY, Eleven1Liu, nikitavemuri, richardliaw, iycheng, stephanie-wang, HuangLED, clarkzinzow, fyrestone, asm582, qingyun-wu, ckw017, yncxcw, DmitriGekhtman, benjamindkilleen, Chong-Li, kathryn-zhou, pcmoritz, rodrigodelazcano, edoakes, dependabot[bot], pdames, frenkowski, loicsacre, gabrieleoliaro, achals, thomasjpfan, rkooo567, dibgerge, clay4444, architkulkarni, lixin-wei, ConeyLiu, WangTaoTheTonic, AnnaKosiorek, wuisawesome, gramhagen, zhisbug, franklsf95, vakker, jenhaoyang, liuyang-my, chaokunyang, SongGuyang, tgaddair

1.4.1

Ray Python Wheels

Python 3.9 wheels (Linux / macOS / Windows) are available ([16347](https://github.com/ray-project/ray/pull/16347), [#16586](https://github.com/ray-project/ray/pull/16586))


Ray Autoscaler

🔨 Fixes: On-prem bug resolved ([16281](https://github.com/ray-project/ray/pull/16281))


Ray Client

💫Enhancements:

* Add warnings when many tasks scheduled ([16454](https://github.com/ray-project/ray/pull/16454))
* Better error messages ([16163](https://github.com/ray-project/ray/pull/16163))

🔨 Fixes:

* Fix gRPC Timeout Options ([16554](https://github.com/ray-project/ray/pull/16554))
* Disconnect on dataclient error ([16588](https://github.com/ray-project/ray/pull/16588))


Ray Core

🔨 Fixes:

* Runtime Environments
  * Docs ([16290](https://github.com/ray-project/ray/pull/16290))
  * Bug fixes ([16475](https://github.com/ray-project/ray/pull/16475), [#16535](https://github.com/ray-project/ray/pull/16535), [#16378](https://github.com/ray-project/ray/pull/16378))
  * Logging improvement ([16516](https://github.com/ray-project/ray/pull/16516))
* Fix race condition leading to failed imports ([16278](https://github.com/ray-project/ray/pull/16278))
* Don't broadcast empty resources data ([16104](https://github.com/ray-project/ray/pull/16104))
* Fix async actor lost object bug ([16414](https://github.com/ray-project/ray/pull/16414))
* Always report job timestamps in milliseconds ([16455](https://github.com/ray-project/ray/pull/16455), [#16545](https://github.com/ray-project/ray/pull/16545), [#16548](https://github.com/ray-project/ray/pull/16548))
* Multi-node placement group and job config bug fixes ([16345](https://github.com/ray-project/ray/pull/16345))
* Fix bug in task dependency management for duplicate args ([16365](https://github.com/ray-project/ray/pull/16365))
* Unify Python and core worker ids ([16712](https://github.com/ray-project/ray/pull/16712))


Dask

💫Enhancements: Dask 2021.06.1 support ([16547](https://github.com/ray-project/ray/pull/16547))


Tune

💫Enhancements: Support object refs in with_params ([16753](https://github.com/ray-project/ray/pull/16753))

Serve

🔨Fixes: Ray Serve shutdown goes through the Serve controller ([16524](https://github.com/ray-project/ray/pull/16524))

Java

🔨Fixes: Upgrade dependencies to fix CVEs ([16650](https://github.com/ray-project/ray/pull/16650), [#16657](https://github.com/ray-project/ray/pull/16657))

Documentation

* Runtime Environments ([16290](https://github.com/ray-project/ray/pull/16290))
* Feature contribution [Tune] ([16477](https://github.com/ray-project/ray/pull/16477))
* Ray design patterns and anti-patterns ([16478](https://github.com/ray-project/ray/pull/16478))
* PyTorch Lightning ([16484](https://github.com/ray-project/ray/pull/16484))
* Ray Client ([16497](https://github.com/ray-project/ray/pull/16497))
* Ray Deployment ([16538](https://github.com/ray-project/ray/pull/16538))
* Dask version compatibility ([16595](https://github.com/ray-project/ray/pull/16595))

CI

Move wheel and Docker image upload from Travis to Buildkite ([16138](https://github.com/ray-project/ray/pull/16138), [#16241](https://github.com/ray-project/ray/pull/16241))

Thanks
Many thanks to all those who contributed to this release!


rkooo567, clarkzinzow, WangTaoTheTonic, ckw017, stephanie-wang, Yard1, mwtian, jovany-wang, jiaodong, wuisawesome, krfricke, architkulkarni, ijrsvt, simon-mo, DmitriGekhtman, amogkam, richardliaw

