Mlagents

Latest version: v1.0.0


0.6.1preview

0.6.0

Important

Brains have been changed to ScriptableObjects instead of MonoBehaviours. This allows you to set Brains in prefabs and use the same Brains across multiple scenes. Please see the [Migrating from v0.5 to v0.6 documentation](../0.6.0/docs/Migrating.md#migrating-from-ml-agents-toolkit-v05-to-v06) for more information.
* `Internal` and `External` Brain types have been replaced by a `LearningBrain` asset.
* `Heuristic` Brain type has been replaced by a `HeuristicBrain` asset.
* `Player` Brain type has been replaced by a `PlayerBrain` asset.
* Brains are now exposed to the Python training process through the "Broadcast Hub" within the Academy component.

New Features

* **[Unity]** [Demonstration Recorder](../0.6.0/docs/Training-Imitation-Learning.md#recording-demonstrations). It is now possible to record the actions and observations of an Agent from the Editor and use them to train Agents at a later time. This allows you to reuse training data for multiple training sessions.
* **[Communication]** Added a `make_for_win.bat` file to generate the protobuf objects in `protobuf-definitions` on Windows machines.
* Added debug warnings to the `LearningBrain` when models are not compatible with the `Brain Parameters`.

Changes

* Removed the graph scope from trained models. When training multiple Brains during the same session, one graph per Brain will be created instead of one single graph with multiple graph scopes.

Fixes & Performance Improvements

* Various improvements to documentation.

Known Issues

* Ending training early using `CTRL+C` does not save the model on Windows.

Acknowledgements

Thanks to everyone at Unity who contributed to v0.6.0, as well as: eltronix, bjmolitor, luhairong, YuMurata.

0.6.0a

Fixes and Improvements
* Fixes typos in documentation.
* Fixes division-by-zero error when using recurrent and discrete control.
* Fixes UI bug on Learning Brain warnings with visual observations.
* Fixes Curriculum Learning Brain names.
* Fixes Ctrl-C bug on Windows in which the model would not be saved when training was interrupted.
* Fixes in-editor training bug with Docker.
* Fixes Docker training bug in which models would not be saved after training was interrupted.

0.5.0

Important

We have reorganized the project repository. Please see the [Migrating from v0.4 to v0.5 documentation](../master/docs/Migrating.md#migrating-from-ml-agents-toolkit-v04-to-v05) for more information. Highlighted changes to the repository structure include:

* The `python` folder has been renamed `ml-agents`. It now contains a Python package called `mlagents`.
* The `unity-environment` folder, containing the Unity project, has been renamed `UnitySDK`.
* The protobuf definitions used for communication have been added to a new `protobuf-definitions` folder.
* Example curricula and the trainer configuration file have been moved to a new `config` sub-directory.

Environments

To learn more about new and improved environments, see our [Example Environments page](../master/docs/Learning-Environment-Examples.md).

Improved

The following environments have been changed to use Multi Discrete Actions:
* WallJump
* BananaCollector

The following environment has been modified to use Action Masking:
* GridWorld

New Features

* **[Gym]** New package `gym-unity`, which provides a gym interface to wrap `UnityEnvironment`. More information [here](../master/gym-unity/Readme.md).
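
The wrapper's value is that an agent can interact with a Unity environment through the familiar gym `reset`/`step` loop. A minimal sketch follows; the real `UnityEnv` in `gym_unity.envs` wraps a built Unity binary, so a hypothetical stub with made-up observation shape and episode logic stands in for it here:

```python
import random

class StubUnityEnv:
    """Stand-in for gym_unity's UnityEnv (which wraps a built Unity binary).

    Exposes the gym-style reset/step interface; the observation shape and
    termination logic below are illustrative assumptions, not the real API.
    """

    def reset(self):
        return [0.0, 0.0]  # initial observation

    def step(self, action):
        obs = [random.random(), random.random()]
        reward = 1.0
        done = random.random() < 0.1  # ~10% chance the episode ends each step
        return obs, reward, done, {}

# The standard gym interaction loop, capped for the sketch.
env = StubUnityEnv()
obs = env.reset()
total_reward, done = 0.0, False
for _ in range(100):
    action = random.randrange(2)  # sample from a 2-action discrete space
    obs, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        break
```

With the real package, the stub would be replaced by something like `UnityEnv("<path-to-built-env>")` and the rest of the loop stays unchanged.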

* **[Training]** Can now run multiple concurrent training sessions with the `--num-runs=<n>` [command line option](../master/docs/Training-ML-Agents.md#command-line-training-options). (Training sessions are independent and do not improve learning performance.)

* **[Unity]** [Meta-Curriculum](../master/docs/Training-Curriculum-Learning.md). Supports curriculum learning in multi-brain environments.

* **[Unity]** [Action Masking for Discrete Control](../master/docs/Learning-Environment-Design-Agents.md#masking-discrete-actions) - It is now possible to mask invalid actions each step to limit the actions an agent can take.
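
To illustrate the idea behind masking (a sketch of the concept, not the toolkit's internal implementation), invalid actions can be given a logit of negative infinity so they receive zero probability before sampling; the logits and mask values below are made up:

```python
import math
import random

def sample_masked_action(logits, mask):
    """Sample a discrete action index, giving zero probability to masked-out actions."""
    # Invalid actions get -inf logits, so exp(...) drives their probability to 0.
    masked = [l if ok else float("-inf") for l, ok in zip(logits, mask)]
    peak = max(masked)
    exps = [math.exp(l - peak) for l in masked]
    total = sum(exps)
    r, cum = random.random(), 0.0
    for i, e in enumerate(exps):
        cum += e / total
        if r < cum:
            return i
    return len(exps) - 1  # float-rounding fallback

logits = [1.0, 2.0, 0.5, 1.5]
mask = [True, False, True, True]  # action 1 is invalid this step
action = sample_masked_action(logits, mask)
assert mask[action]  # a masked action is never sampled
```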

* **[Unity]** [Action Branches for Discrete Control](../master/docs/Learning-Environment-Design-Agents.md#discrete-action-space) - It is now possible to define discrete action spaces which contain multiple branches, each with its own space size.
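
For example, a branched space might pair a three-way movement choice with a two-way jump choice, so the agent emits one action per branch each step. The branch names and sizes below are hypothetical:

```python
import random

# Each branch has its own size; the agent picks one action per branch per step.
branch_sizes = [3, 2]  # hypothetical: movement (3 options) and jump (2 options)

def sample_branched_action(sizes):
    """Return one action index per branch (uniform random, for illustration)."""
    return [random.randrange(size) for size in sizes]

action = sample_branched_action(branch_sizes)  # e.g. [2, 0]
```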

Changes

* Value estimates from models trained with PPO can now be visualized in Unity using `GetValueEstimate()`.
* It is now possible to specify which camera the `Monitor` displays to.
* Console summaries will now be displayed even when running in inference mode from Python.
* Minimum supported Unity version is now 2017.4.

Fixes & Performance Improvements

* Replaced some activation functions with `swish`.
* Visual Observations use PNG instead of JPEG to avoid compression losses.
* Improved Python unit tests.
* Fix to enable multiple training sessions on a single GPU.
* Curriculum lessons are now tracked correctly.
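
The `swish` activation mentioned above is simply x · sigmoid(x); a one-line reference implementation:

```python
import math

def swish(x):
    """Swish activation: x * sigmoid(x)."""
    return x * (1.0 / (1.0 + math.exp(-x)))

swish(0.0)  # 0.0, since sigmoid(0) = 0.5 and 0 * 0.5 = 0
```

Unlike ReLU, swish is smooth and non-monotonic near zero, which is the usual motivation for swapping it in.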

Known Issues

* Ending training early using `CTRL+C` does not save the model on Windows.
* Sequentially opening and closing multiple instances of `UnityEnvironment` within a single process is not possible.

Acknowledgements

Thanks to everyone at Unity who contributed to v0.5.0, as well as: sterling000, bartlomiejwolk, Sohojoe, Phantomb.

0.5.0preview

0.5.0a

Fixes and Improvements
* Fixes typos in documentation.
* Removes unnecessary `gitignore` line.
* Fixes imitation learning scenes.
* Fixes `BananaCollector` environment.
* Enables `gym_unity` with multiple visual observations.

Acknowledgements
Thanks to everyone at Unity who contributed to v0.5.0a, as well as: Sohojoe, fengredrum, and xiaodi-faith.
