Ray

1.0

We're happy to announce the release of Ray 1.0, an important step towards the goal of providing a universal API for distributed computing.

To learn more about Ray 1.0, check out our [blog post](https://www.anyscale.com/blog/announcing-ray-1-0) and [whitepaper](https://docs.ray.io/en/master/whitepaper.html).

Ray Core
- The `ray.init()` and `ray start` commands have been cleaned up to remove deprecated arguments
- The Ray Java API is now stable
- Improved detection of Docker CPU limits
- Add support and documentation for Dask-on-Ray and MARS-on-Ray: https://docs.ray.io/en/master/ray-libraries.html
- Placement groups for fine-grained control over scheduling decisions (see the sketch after this list): https://docs.ray.io/en/latest/placement-group.html
- New architecture whitepaper: https://docs.ray.io/en/master/whitepaper.html
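
To make the placement-group bullet above concrete, here is a minimal sketch of reserving resource bundles and scheduling a task into them; it reflects the 1.0-era `ray.util.placement_group` API and the `placement_group=` option, so check the linked docs for the authoritative details.

```python
import ray
from ray.util.placement_group import placement_group

ray.init()

# Reserve two bundles of one CPU each, packed onto as few nodes as possible.
pg = placement_group([{"CPU": 1}, {"CPU": 1}], strategy="PACK")
ray.get(pg.ready())  # block until the bundles have been reserved

@ray.remote(num_cpus=1)
def task():
    return "scheduled inside the placement group"

# Schedule the task into the reserved bundles.
print(ray.get(task.options(placement_group=pg).remote()))
```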

Autoscaler
- Support for multiple instance types in the same cluster: https://docs.ray.io/en/master/cluster/autoscaling.html
- Support for specifying GPU/accelerator type in `ray.remote`
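
A small sketch of requesting a specific accelerator type from `ray.remote`; the `accelerator_type` option name and the `"V100"` label used here are assumptions based on the 1.0-era API, so treat this as illustrative only.

```python
import ray

ray.init()

# Ask the scheduler (and autoscaler) for a node with one GPU of a specific type.
# "V100" is an assumed accelerator-type string; see ray.util.accelerators for constants.
@ray.remote(num_gpus=1, accelerator_type="V100")
def train_step():
    return "running on a V100 node"

# Calling train_step.remote() will wait until a matching node is available.
```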

Dashboard & Metrics
- Improvements to the memory usage tab and machine view
- The dashboard now supports visualization of actor states
- Support for Prometheus metrics reporting: https://docs.ray.io/en/latest/ray-metrics.html

RLlib
- Two Model-based RL algorithms were added: MB-MPO (“Model-based meta-policy optimization”) and “Dreamer”. Both algorithms were benchmarked and perform comparably to the results reported in their respective papers.
- A “Curiosity” (intrinsic motivation) module was added via RLlib’s Exploration API and benchmarked on a sparse-reward Unity3D environment (Pyramids).
- Added documentation for the Distributed Execution API.
- Removed (already soft-deprecated) APIs: the Model(V1) class, some Trainer config keys, and some methods/functions. Using these now raises an error instead of the previous deprecation warning.
- Added DeepMind Control Suite examples.

Tune

**Breaking changes:**
- Multiple `tune.run` parameters have been deprecated: `ray_auto_init`, `run_errored_only`, `global_checkpoint_period`, `with_server` (10518)
- The `tune.run` parameters `upload_dir`, `sync_to_cloud`, `sync_to_driver`, and `sync_on_checkpoint` have been moved to `tune.SyncConfig` [[docs](https://docs.ray.io/en/releases-1.0.0/tune/tutorials/tune-distributed.html#syncing)] (10518); see the sketch below.
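
As noted above, the syncing-related arguments now live on `tune.SyncConfig`. A hedged sketch of the replacement, with an obviously hypothetical S3 bucket; check the linked syncing docs for the authoritative field names.

```python
from ray import tune

def my_trainable(config):
    tune.report(score=config["x"] ** 2)

# Replaces the deprecated tune.run(upload_dir=..., sync_on_checkpoint=...) arguments.
sync_config = tune.SyncConfig(
    upload_dir="s3://my-bucket/tune-results",  # hypothetical bucket
    sync_on_checkpoint=True,
)

tune.run(my_trainable, config={"x": tune.uniform(0.0, 1.0)}, sync_config=sync_config)
```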

**New APIs:**
- New `mode`, `metric`, and `time_budget` parameters for `tune.run` (10627, 10642); see the sketch after this list
- Search Algorithms now share a uniform API: (10621, 10444). You can also use the new `create_scheduler/create_searcher` shim layer to create search algorithms/schedulers via string, reducing boilerplate code (10456).
- Native callbacks for: [MXNet, Horovod, Keras, XGBoost, PytorchLightning](https://docs.ray.io/en/releases-1.0.0/tune/api_docs/integration.html) (#10533, 10304, 10509, 10502, 10220)
- PBT runs can be replayed with PopulationBasedTrainingReplay scheduler (9953)
- Search Algorithms are saved/resumed automatically (9972)
- New Optuna Search Algorithm [docs](https://docs.ray.io/en/releases-1.0.0/tune/api_docs/suggestion.html#optuna-tune-suggest-optuna-optunasearch) (10044)
- Tune now can sync checkpoints across Kubernetes pods (10097)
- Failed trials can be rerun with `tune.run(resume="run_errored_only")` (10060)
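
A short sketch of the new `metric`/`mode` arguments and of rerunning failed trials; parameter names follow the bullets above, and the toy trainable is only for illustration.

```python
from ray import tune

def trainable(config):
    tune.report(loss=(config["lr"] - 0.1) ** 2)

analysis = tune.run(
    trainable,
    config={"lr": tune.loguniform(1e-4, 1e-1)},
    num_samples=10,
    metric="loss",  # new in 1.0
    mode="min",     # new in 1.0
)
print(analysis.best_config)

# Later, rerun only the trials that errored:
# tune.run(trainable, resume="run_errored_only")
```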

**Other Changes:**
- Trial outputs can be saved to file via `tune.run(log_to_file=...)` (9817)
- Trial directories can be customized, and default trial directory now includes trial name (10608, 10214)
- Improved Experiment Analysis API (10645)
- Support for Multi-objective search via SigOpt Wrapper (10457, 10446)
- BOHB Fixes (10531, 10320)
- Wandb improvements + RLlib compatibility (10950, 10799, 10680, 10654, 10614, 10441, 10252, 8521)
- Updated documentation for FAQ, Tune+serve, search space API, lifecycle (10813, 10925, 10662, 10576, 9713, 10222, 10126, 9908)


RaySGD
- Creator functions are subsumed by the TrainingOperator API (10321)
- Training happens on actors by default (10539)

Serve

- The [`serve.client` API](https://docs.ray.io/en/master/serve/deployment.html#lifetime-of-a-ray-serve-instance) makes it easy to manage the lifetime of multiple Serve instances (see the sketch after this list). (10460)
- Serve APIs are fully typed. (10205, 10288)
- Backend configs are now typed and validated via Pydantic. (10559, 10389)
- Progress towards an application-level backend autoscaler. (9955, 9845, 9828)
- New [architecture page](https://docs.ray.io/en/master/serve/architecture.html) in documentation. (#10204)
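
A minimal sketch of the 1.0 Serve client API referenced above, assuming `serve.start()` returns a client object exposing `create_backend`/`create_endpoint`; see the linked lifetime docs for the authoritative surface.

```python
import ray
from ray import serve

ray.init()
client = serve.start()  # the client manages the lifetime of this Serve instance

def hello(request):
    return "hello"

client.create_backend("hello_backend", hello)
client.create_endpoint("hello_endpoint", backend="hello_backend", route="/hello")

# Tear the instance down when done (assumed client method):
# client.shutdown()
```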

Thanks
We thank all the contributors for their contribution to this release!

MissiontoMars, ijrsvt, desktable, kfstorm, lixin-wei, Yard1, chaokunyang, justinkterry, pxc, ericl, WangTaoTheTonic, carlos-aguayo, sven1977, gabrieleoliaro, alanwguo, aryairani, kishansagathiya, barakmich, rkube, SongGuyang, qicosmos, ffbin, PidgeyBE, sumanthratna, yushan111, juliusfrost, edoakes, mehrdadn, Basasuya, icaropires, michaelzhiluo, fyrestone, robertnishihara, yncxcw, oliverhu, yiranwang52, ChuaCheowHuan, raphaelavalos, suquark, krfricke, pcmoritz, stephanie-wang, hekaisheng, zhijunfu, Vysybyl, wuisawesome, sanderland, richardliaw, simon-mo, janblumenkamp, zhuohan123, AmeerHajAli, iamhatesz, mfitton, noahshpak, maximsmol, weepingwillowben, raulchen, 09wakharet, ashione, henktillman, architkulkarni, rkooo567, zhe-thoughts, amogkam, kisuke95, clarkzinzow, holli, raoul-khour-ts

0.8.7

Highlight
---------
- Ray is moving towards 1.0! It has had several important naming changes.
- `ObjectID`s are now called `ObjectRef`s because they are not just IDs.
- The Ray Autoscaler is now called the Ray Cluster Launcher. The autoscaler will be a module of the Ray Cluster Launcher.
- The Ray Cluster Launcher now has a much cleaner and more concise output style. Try it out with `ray up --log-new-style`. The new output style will be enabled by default (with opt-out) in a later release.
- Windows is now officially supported by RLlib. Multi node support for Windows is still in progress.

Cluster Launcher/CLI (formerly autoscaler)
--------------------------------------------
- **Highlight:** This release contains a new colorful, concise output style for `ray up` and `ray down`, available with the `--log-new-style` flag. It will be enabled by default (with opt-out) in a later release. Full output style coverage for Cluster Launcher commands will also be available in a later release. (9322, 9943, 9960, 9690)
- Documentation improvements (with guides and new sections) (9687)
- Improved Cluster Launcher Docker support (9001, 9105, 8840)
- Ray now has Docker images available on Docker Hub. Please check out the [ray image](https://hub.docker.com/u/rayproject/ray) (9732, 9556, 9458, 9281)
- Azure improvements (8938)
- Improved on-prem cluster autoscaler (9663)
- Add option for continuous sync of file mounts (9544)
- Add `ray status` debug tool and `ray --version` (9091, 8886).
- `ray memory` now also supports redis_password (9492)
- Bug fixes for the Kubernetes cluster launcher mode (9968)
- __Various improvements:__ disabling the cluster config cache (8117), Python API requires keyword arguments (9256), removed fingerprint checking for SSH (9133), Initial support for multiple worker types (9096), various changes to the internal node provider interface (9340, 9443)

Core
-----
- Support Python type checking for Ray tasks (9574); see the sketch after this list
- Rename ObjectID => ObjectRef (9353)
- New GCS Actor manager on by default (8845, 9883, 9715, 9473, 9275)
- Work towards placement groups (9039)
- Plasma store process is merged with raylet (8939, 8897)
- Option to automatically reconstruct objects stored in plasma after a failure. See the [documentation](https://docs.ray.io/en/master/fault-tolerance.html#objects) for more information. (9394, 9557, 9488)
- Many bug fixes.
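
A tiny sketch of the type-checking and renaming items above: Ray tasks can carry ordinary Python type hints, and handles returned by `.remote()` are now `ObjectRef`s.

```python
import ray

ray.init()

@ray.remote
def add(x: int, y: int) -> int:
    # Type hints on Ray tasks are now understood by static type checkers.
    return x + y

ref = add.remote(1, 2)  # an ObjectRef (formerly ObjectID)
assert ray.get(ref) == 3
```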

RLlib
-----
- New algorithm: __“Model-Agnostic Meta-Learning” (MAML)__. An algorithm that learns and generalizes well across a __distribution__ of environments.
- New algorithm: __“Model-Based Meta-Policy-Optimization” (MB-MPO)__. Our first __model-based RL algorithm__.
- __Windows__ is now __officially supported__ by RLlib.
- __Native TensorFlow 2.x support__. Use `framework="tf2"` in your config to tap into TF2’s full potential (see the sketch after this list). Also: SAC, DDPG, DQN Rainbow, ES, and ARS now run in TF1.x eager mode.
- __DQN PyTorch__ support for full Rainbow setup (including distributional DQN).
- __Python type hints__ for Policy, Model, Offline, Evaluation, and Env classes.
- __Deprecated “Policy Optimizer”__ package (in favor of new distributed execution API).
- Enhanced __test coverage__ and __stability__.
- __Flexible multi-agent replay modes__ and `replay_sequence_length`. Replay buffers can now store sequences (over time) and retrieve “lock-stepped” multi-agent samples.
- Environments: __Unity3D soccer game__ (tuned example/benchmark) and __DM Control__ Suite wrapper and examples.
- Various __Bug fixes__: QMIX not learning, DDPG torch bugs, IMPALA learning rate updates, PyTorch custom loss, PPO not learning MuJoCo due to action clipping bug, DQN w/o dueling layer error.
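
A hedged sketch of selecting the TF2 backend through the `framework` config key, using PPO on CartPole as an assumed example setup.

```python
import ray
from ray import tune

ray.init()

# "tf2" enables native TensorFlow 2.x; "tfe" runs TF1.x in eager mode, "torch" uses PyTorch.
tune.run(
    "PPO",
    config={
        "env": "CartPole-v0",
        "framework": "tf2",
    },
    stop={"training_iteration": 2},
)
```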

Tune
-----

- **API Changes**:
- The Tune Function API now supports checkpointing and is usable with all search and scheduling algorithms! (8471, 9853, 9517)
- Many methods of the Trainable class API have been renamed to make them public (9184)
- You can now stop experiments upon convergence with Bayesian Optimization (8808)
- `DistributedTrainableCreator`, a simple wrapper for distributed parameter tuning with multi-node DistributedDataParallel models (9550, 9739)
- New integration and tutorial for using Ray Tune with __Weights and Biases__ (Logger and native API) (9725)
- Tune now provides a Scikit-learn compatible wrapper for hyperparameter tuning (9129)
- __New tutorials__ for integrations like __XGBoost__ (9060), __multi GPU PyTorch__ (9338), __PyTorch Lightning__ (9151, 9451), and __Huggingface-Transformers__ (9789)
- CLI Progress reporting improvements (8802, 9537, 9525)
- Various __bug fixes__: handling of NaN values (9381), Tensorboard logging improvements (9297, 9691, 8918), enhanced cross-platform compatibility (9141), re-structured testing (9609), documentation reorganization and versioning (9600, 9427, 9448)

RaySGD
--------
- Variable worker CPU requirements (8963)
- Simplified cuda visible device setting (8775)

Serve
------
- Horizontal scalability: Serve will now start one HTTP server per Ray node. (9523)
- Various performance improvements bringing Serve's performance in line with FastAPI (9490, 8709, 9531, 9479, 9225, 9216, 9485)
- API changes
- `serve.shadow_traffic(endpoint, backend, fraction)` duplicates and sends a fraction of the incoming traffic to a specific backend. (9106)
- `serve.shutdown()` cleans up the current Serve instance in the Ray cluster (see the sketch after this list). (8766)
- An exception is now raised if `num_replicas` exceeds the maximum resources available in the cluster (9005)
- Added doc examples for how to perform metric [monitoring](https://docs.ray.io/en/master/serve/advanced.html#monitoring) and [model composition](https://docs.ray.io/en/master/serve/advanced.html#composing-multiple-models).
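
A rough sketch of shadowing traffic to a new backend and shutting Serve down; `shadow_traffic` and `shutdown` follow the bullets above, while the `create_backend`/`create_endpoint` signatures are assumptions about the 0.8-era API.

```python
from ray import serve

serve.init()

def v1(flask_request):
    return "v1"

def v2(flask_request):
    return "v2"

serve.create_backend("backend_v1", v1)
serve.create_backend("backend_v2", v2)
serve.create_endpoint("my_endpoint", backend="backend_v1", route="/predict")

# Mirror 10% of incoming traffic to the new backend without serving its responses.
serve.shadow_traffic("my_endpoint", "backend_v2", 0.1)

# Clean up the current Serve instance in the Ray cluster when finished.
serve.shutdown()
```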

Dashboard
-----------
- __Configurable Dashboard Port__: The port on which the dashboard runs is now configurable via the `--dashboard-port` CLI argument and the `dashboard_port` argument to `ray.init` (see the sketch after this list).
- __GPU monitoring improvements__
- For machines with more than one GPU, the GPU and GRAM utilization is now broken out on a per-GPU basis.
- Assignments to physical GPUs are now shown at the worker level.
- __Sortable Machine View__: It is now possible to sort the machine view by almost any of its columns by clicking next to the title. In addition, whereas the workers are normally grouped by node, you can now ungroup them if you only want to see details about workers.
- __Actor Search Bar__: Actors can now be searched by their title (the actor's Python class name together with the arguments it received).
- __Logical View UI Updates__: This includes things like color-coded names for each of the actor states, a more grid-like layout, and tooltips for the various data.
- __Sortable Memory View__: Like the machine view, the memory view now has sortable columns and can be grouped / ungrouped by node.
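
Both ways of setting the dashboard port named above, shown side by side.

```python
import ray

# From Python: choose the dashboard port explicitly when starting Ray.
ray.init(dashboard_port=8266)

# Or from the CLI:
#   ray start --head --dashboard-port 8266
```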

Windows Support
------------------
- Improve GPU detection (9300)
- Work around msgpack issue on PowerPC64LE (9140)

Others
-------
- Ray Streaming Library Improvements (9240, 8910, 8780)
- Java Support Improvements (9371, 9033, 9037, 9032, 8858, 9777, 9836, 9377)
- Parallel Iterator Improvements (8964, 8978)


Thanks
-------
We thank the following contributors for their work on this release:
jsuarez5341, amitsadaphule, krfricke, williamFalcon, richardliaw, heyitsmui, mehrdadn, robertnishihara, gabrieleoliaro, amogkam, fyrestone, mimoralea, edoakes, andrijazz, ElektroChan89, kisuke95, justinkterry, SongGuyang, barakmich, bloodymeli, simon-mo, TomVeniat, lixin-wei, alanwguo, zhuohan123, michaelzhiluo, ijrsvt, pcmoritz, LecJackS, sven1977, ashione, JerryLeeCS, raphaelavalos, stephanie-wang, ruifangChen, vnlitvinov, yncxcw, weepingwillowben, goulou, acmore, wuisawesome, gramhagen, anabranch, internetcoffeephone, Alisahhh, henktillman, deanwampler, p-christ, Nicolaus93, WangTaoTheTonic, allenyin55, kfstorm, rkooo567, ConeyLiu, 09wakharet, piojanu, mfitton, KristianHolsheimer, AmeerHajAli, pdames, ericl, VishDev12, suquark, stefanbschneider, raulchen, dcfidalgo, chappers, aaarne, chaokunyang, sumanthratna, clarkzinzow, BalaBalaYi, maximsmol, zhongchun, wumuzi520, ffbin

0.8.6

Highlight
---------
- Experimental support for Windows is now available for single node Ray usage. Check out the Windows section below for known issues and other details.
- Have you had trouble monitoring GPU or memory usage while using Ray? The Ray dashboard now supports GPU monitoring and a memory view.
- Want to use RLlib with Unity? RLlib officially supports the Unity3D adapter! Please check out the [documentation](https://docs.ray.io/en/master/rllib-env.html?highlight=unity#external-agents-and-applications).
- Ray Serve is ready for feedback! We've gotten feedback from many users, and Ray Serve is already being used in production. Please reach out to us on the Ray Slack with your use cases, ideas, documentation improvements, and feedback; we'd love to hear from you. Please see the Serve section below for more details.

Core
-----
- We’ve introduced a new feature to automatically retry failed actor tasks after an actor has been restarted by Ray (by specifying `max_restarts` in `ray.remote`). Try it out with `max_task_retries=-1`, where -1 indicates that the system can retry the task until it succeeds (see the sketch after the API changes below).

API Change
- To enable automatic restarts of a failed actor, you must now use `max_restarts` in the `ray.remote` decorator instead of `max_reconstructions`. You can use -1 to indicate infinity, i.e., the system should always restart the actor if it fails unexpectedly.
- We’ve merged the named and detached actor APIs. To create an actor that will survive past the duration of its job (a “detached” actor), specify `name=<str>` in its remote constructor (`Actor.options(name='<str>').remote()`). To delete the actor, you can use `ray.kill`.
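
A small sketch combining the automatic restart/retry options with the merged named-actor API described above.

```python
import ray

ray.init()

# Restart the actor indefinitely on failure and retry its failed tasks until they succeed.
@ray.remote(max_restarts=-1, max_task_retries=-1)
class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value

# A named ("detached") actor that survives past the job; delete it explicitly with ray.kill.
counter = Counter.options(name="global_counter").remote()
assert ray.get(counter.increment.remote()) == 1
ray.kill(counter)
```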

RLlib
-----
- PyTorch: IMPALA PyTorch version and all `rllib/examples` scripts now work for either TensorFlow or PyTorch (`--torch` command line option).
- Switched to using distributed execution API by default (replaces Policy Optimizers) for all algorithms.
- Unity3D adapter (supports all Env types: multi-agent, external env, vectorized) with example scripts for running locally or in the cloud.
- Added support for variable length observation Spaces ("Repeated").
- Added support for arbitrarily nested action spaces.
- Added experimental GTrXL (Transformer/Attention net) support to RLlib + learning tests for PPO and IMPALA.
- QMIX now supports complex observation spaces.

API Change
- Retired the `use_pytorch` and `eager` config flags, replacing them with `framework=[tf|tfe|torch]`.
- Deprecated PolicyOptimizers in favor of the new distributed execution API.
- Retired support for Model(V1) class. Custom Models should now only use the ModelV2 API. There is still a warning when using ModelV1, which will be changed into an error message in the next release.
- Retired TupleActions (in favor of arbitrarily nested action Spaces).

Ray Tune / RaySGD
-------------------
- There is now a Dataset API for handling large datasets with RaySGD. (7839)
- You can now filter by an average of the last results using the `ExperimentAnalysis` tool (8445).
- BayesOptSearch received numerous contributions, enabling preliminary random search and warm starting. (8541, 8486, 8488)

API Changes
- `tune.report` is now the way to report results from the Tune function API; `tune.track` is deprecated (8388). See the sketch below.
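
A minimal sketch of the function API using `tune.report` in place of the deprecated `tune.track`.

```python
from ray import tune

def trainable(config):
    for step in range(5):
        # tune.report replaces the deprecated tune.track.log(...)
        tune.report(mean_score=config["lr"] * step)

tune.run(trainable, config={"lr": tune.grid_search([0.01, 0.1])})
```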

Serve
------
- New APIs to inspect and manage Serve objects:
- `serve.list_backends` and `serve.list_endpoints` (8737)
- `serve.delete_backend` and `serve.delete_endpoint` (8252, 8256)
- `serve.create_endpoint` now requires specifying the backend directly. You can remove `serve.set_traffic` if there's only one backend per endpoint. (8764)
- `serve.init` API cleanup, the following options were removed:
- `blocking`, `ray_init_kwargs`, `start_server` (8747, 8447, 8620)
- `serve.init` now supports namespacing with `name`. You can run multiple serve clusters with different names on the same ray cluster. (8449)
- You can specify session affinity when splitting traffic with backends using `X-SERVE-SHARD-KEY` HTTP header. (8449)
- Various documentation improvements. Highlights:
- A new section on how to perform A/B testing and incremental rollout (8741)
- Tutorial for batch inference (8490)
- Instructions for specifying GPUs and resources (8495)

Dashboard / Metrics
---------------------
- The Machine View of the dashboard now shows information about GPU utilization such as:
- Average GPU/GRAM utilization at a node and cluster level
- Worker-level information about how many GPUs each worker is assigned as well as its GRAM use.
- The dashboard has a new Memory View tab that should be very useful for debugging memory issues. It has:
- Information about objects in the Ray object store, including size and call-site
- Information about reference counts and what is keeping an object pinned in the Ray object store.

Small changes
- IDLE workers get automatically sorted to the end of the worker list in the Machine View

Autoscaler
-----------
- Improved logging output. Errors are more clearly propagated and excess output has been reduced. (7198, 8751, 8753)
- Added support for k8s services.

API Changes
- `ray up` accepts remote URLs that point to the desired cluster YAML. (8279)

Windows support
------------------
- Windows wheels are now available for basic experimental usage (via `ray.init()`).
- Windows support is currently unstable. Unusual, unattended, or production usage is *not* recommended.
- Various functionality may still lack support, including Ray Serve, Ray SGD, the autoscaler, the dashboard, non-ASCII file paths, etc.
- Please check the latest nightly wheels & known issues (9114), and let us know if any issue you encounter has not yet been addressed.
- Wheels are available for Python 3.6, 3.7, and 3.8. (8369)
- redis-py has been patched for Windows sockets. (8386)

Others
-------
- Moving towards highly available Ray (8650, 8639, 8606, 8601, 8591, 8442)
- Java Support (8730, 8640, 8637)
- Ray streaming improvements (8612, 8594, 7464)
- Parallel iterator improvements (8140, 7931, 8712)

Thanks
------
We thank the following contributors for their work on this release:
pcmoritz, akharitonov, devanderhoff, ffbin, anabranch, jasonjmcghee, kfstorm, mfitton, alecbrick, simon-mo, konichuvak, aniryou, wuisawesome, robertnishihara, ramanNarasimhan77, 09wakharet, richardliaw, istoica, ThomasLecat, sven1977, ceteri, acxz, iamhatesz, JarnoRFB, rkooo567, mehrdadn, thomasdesr, janblumenkamp, ujvl, edoakes, maximsmol, krfricke, amogkam, gehring, ijrsvt, internetcoffeephone, LucaCappelletti94, chaokunyang, WangTaoTheTonic, fyrestone, raulchen, ConeyLiu, stephanie-wang, suquark, ashione, Coac, JosephTLucas, ericl, AmeerHajAli, pdames

0.8.5

Highlight
---------
- You can now cancel remote tasks using the `ray.cancel` API.
- PyTorch is now a first-class citizen in RLlib! We've achieved parity between TensorFlow and PyTorch.
- Did you struggle to find good example code for Ray ML libraries? We wrote more examples for Ray SGD and Ray Serve.
- Ray serve: [Keras/Tensorflow](https://docs.ray.io/en/master/rayserve/tutorials/tensorflow-tutorial.html), [PyTorch](https://docs.ray.io/en/master/rayserve/tutorials/pytorch-tutorial.html), [Scikit-Learn](https://docs.ray.io/en/master/rayserve/tutorials/sklearn-tutorial.html).
- Ray SGD: New [Semantic Segmentation](https://github.com/ray-project/ray/tree/master/python/ray/util/sgd/torch/examples/segmentation) and [HuggingFace GLUE Fine-tuning](https://github.com/ray-project/ray/tree/master/python/ray/util/sgd/torch/examples/transformers) Examples.

Core
-----
- Task cancellation is now available for locally submitted tasks (see the sketch after this list). (7699)
- Experimental support for recovering objects that were lost from the Ray distributed memory store. You can try this out by setting `lineage_pinning_enabled: 1` in the internal config. (7733)
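
A small sketch of the task-cancellation item above using the new `ray.cancel` API; the exact exception raised for a cancelled task is left unspecified here since it varies across versions.

```python
import time
import ray

ray.init()

@ray.remote
def long_running():
    time.sleep(3600)

obj_id = long_running.remote()
ray.cancel(obj_id)  # cancel the locally submitted task

try:
    ray.get(obj_id)
except Exception as exc:  # the cancellation exception type differs between Ray versions
    print(f"task was cancelled: {type(exc).__name__}")
```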

RLlib
-----
- PyTorch support has now reached parity with TensorFlow. (7926, 8188, 8120, 8101, 8106, 8104, 8082, 7953, 7984, 7836, 7597, 7797)
- Improved callbacks API. (6972)
- Enable Ray distributed reference counting. (8037)
- Work towards customizable distributed training workflows. (7958, 8077)

Tune
-----
- Documentation has improved with a new format. (8083, 8201, 7716)
- Search algorithms are refactored to make them easier to extend, deprecating `max_concurrent` argument. (7037, 8258, 8285)
- TensorboardX errors are now handled safely. (8174)
- Bug fix in PBT checkpointing. (7794)
- New ZOOpt search algorithm added. (7960)

Serve
------
- Improved APIs.
- Add delete_endpoint and delete_backend. (8252, 8256)
- Use dictionary to update backend config. (8202)
- Added overview section to the documentation.
- Added tutorials for serving models in Tensorflow/Keras, PyTorch, and Scikit-Learn.
- Made Serve clusters tolerant to process failures. (8116, 8008, 7970, 7936)

SGD
-----
- New Semantic Segmentation and HuggingFace GLUE Fine-tuning Examples. (7792, 7825)
- Fix GPU Reservations in SLURM usage. (8157)
- Update learning rate scheduler stepping parameter. (8107)
- Make serialization of data creation optional. (8027)
- Automatic DDP wrapping is now optional. (7875)

Other Projects
----------------
- Progress towards the highly available and fault tolerant control plane. (8144, 8119, 8145, 7909, 7949, 7771, 7557, 7675)
- Progress towards the Ray streaming library. (8044, 7827, 7955, 7961, 7348)
- Autoscaler improvement. (8178, 8168, 7986, 7844, 7717)
- Progress towards Java support. (8014)
- Progress towards Windows compatibility. (8237, 8186)
- Progress towards cross language support. (7711)


Thanks
------
We thank the following contributors for their work on this release:

simon-mo, robertnishihara, BalaBalaYi, ericl, kfstorm, tirkarthi, nflu, ffbin, chaokunyang, ijrsvt, pcmoritz, mehrdadn, sven1977, iamhatesz, nmatthews-asapp, mitchellstern, edoakes, anabranch, billowkiller, eisber, ujvl, allenyin55, yncxcw, deanwampler, DavidMChan, ConeyLiu, micafan, rkooo567, datayjz, wizardfishball, sumanthratna, ashione, marload, stephanie-wang, richardliaw, jovany-wang, MissiontoMars, aannadi, fyrestone, JarnoRFB, wumuzi520, roireshef, acxz, gramhagen, Servon-Lee, ClarkZinzow, mfitton, maximsmol, janblumenkamp, istoica

0.8.4

Highlight
----------
- Add Python 3.8 support. (7754)

Core
----
- Fix asyncio actor deserialization. (7806)
- Fix a segfault caused by a symbol collision when importing PyArrow. (7568)
- `ray memory` will collect statistics from all nodes. (7721)
- Pin lineage of plasma objects that are still in scope. (7690)

RLlib
-----
- Add contextual bandit algorithms. (7642)
- Add parameter noise exploration API. (7772)
- Add [scaling guide](https://ray.readthedocs.io/en/latest/rllib-training.html#scaling-guide). (7780)
- Enable restoring a Keras model from an h5 file. (7482)
- Store tf-graph by default when doing `Policy.export_model()`. (7759)
- Fix default policy overrides torch policy. (7756, 7769)

RaySGD
----
- BREAKING: Add new API for tuning TorchTrainer using Tune. (7547)
- BREAKING: Convert the head worker to a local model. (7746)
- Added a new API for save/restore. (7547)
- Add tqdm support to TorchTrainer. (7588)

Tune
------
- Add sorted columns and TensorBoard to Tune tab. (7140)
- Tune experiments can now be cancelled via the REST client. (7719)
- `fail_fast` enables experiments to fail quickly. (7528)
- Override the IP retrieval process if needed. (7705)
- TensorBoardX nested dictionary support. (7705)

Serve
-----
- Performance improvements:
- Push route table updates to HTTP proxy. (7774)
- Improve serialization. (7688)
- Add async methods support for serve actors. (7682)
- Add multiple method support for serve actors. (7709)
- You can specify HTTP methods in `serve.create_backend(..., methods=["GET", "POST"])`.
- The ability to specify which actor method to execute in HTTP through `X-SERVE-CALL-METHOD` header or in `RayServeHandle` through `handle.options("method").remote(...)`.
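
A hedged sketch of picking which backend method to invoke, via the header named above or a handle; `serve.get_handle` and the default HTTP address shown are assumptions about the 0.8.4-era setup.

```python
import requests
from ray import serve

# Over HTTP: select the backend method with the header from the notes above.
resp = requests.get(
    "http://127.0.0.1:8000/my_endpoint",  # assumed default Serve address and endpoint
    headers={"X-SERVE-CALL-METHOD": "other_method"},
)

# Through a handle: select the method, then call it remotely.
handle = serve.get_handle("my_endpoint")  # assumed handle accessor
result_id = handle.options("other_method").remote()  # pass any arguments your backend expects
```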

Others
------
- Progress towards highly available control plane. (7822, 7742)
- Progress towards Windows compatibility. (7740, 7739, 7657)
- Progress towards Ray Streaming library. (7813)
- Progress towards metrics export service. (7809)
- Basic C++ worker implementation. (6125)


Thanks
------
We thank the following contributors for their work on this release:

carlbalmer, BalaBalaYi, saurabh3949, maximsmol, SongGuyang, istoica, pcmoritz, aannadi, kfstorm, ijrsvt, richardliaw, mehrdadn, wumuzi520, cloudhan, edoakes, mitchellstern, robertnishihara, hhoke, simon-mo, ConeyLiu, stephanie-wang, rkooo567, ffbin, ericl, hubcity, sven1977
