AllenAct

0.2.2

This minor release brings:

* Minor bug fixes, including a fix for a crash that occurred when using very large models.
* Usability improvements when running ObjectNav and PointNav tasks in AI2-THOR: X-displays are now detected automatically when possible rather than having to be specified manually.

0.2.1

This minor release brings:

* Updates to the RoboTHOR ObjectNav baselines in preparation for the [2021 RoboTHOR ObjectNav Challenge](https://ai2thor.allenai.org/robothor/cvpr-2021-challenge).
* Minor usability improvements (e.g. changing the default experiment path to be the directory from which training is run).

0.2.0

In this release we add:

Speed improvements 🚀

Faster training in iTHOR/RoboTHOR: we can now hit more than 1200 FPS on a server with 8 GPUs. This is thanks to the new FifoServer interface to AI2-THOR and improved caching in AllenAct.
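
For reference, opening a controller over the FIFO interface looks roughly like the sketch below. The `server_class` keyword is an assumption based on the AI2-THOR API around the time of this release and may differ in other AI2-THOR versions.

```python
# Rough sketch: launching AI2-THOR over its FIFO server interface rather than
# the default HTTP-based server. The `server_class` keyword is an assumption
# based on the AI2-THOR API circa this release.
from ai2thor.controller import Controller
from ai2thor.fifo_server import FifoServer

controller = Controller(server_class=FifoServer)
event = controller.step(action="RotateRight")
print(event.metadata["lastActionSuccess"])
controller.stop()
```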

Pip installability 📥

AllenAct can now be installed via `pip`, and our various environment plugins can be installed separately from the underlying framework. Check the new [installation instructions](https://allenact.org/installation/installation-allenact/). To make this possible, we have renamed some modules:

* Old `core` -> `allenact`.
* Old `plugins` -> `allenact_plugins`.
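
For example, installation and the corresponding import changes might look as follows; the specific submodule paths are illustrative and based on AllenAct's documented layout.

```python
# After `pip install allenact allenact_plugins`, imports move from the old
# top-level packages to the new ones. The submodule paths below are
# illustrative; check the installation instructions for your version.

# Old (pre-0.2.0):
#   from core.base_abstractions.task import Task
#   from plugins.robothor_plugin.robothor_tasks import ObjectNavTask
# New (0.2.0+):
from allenact.base_abstractions.task import Task
from allenact_plugins.robothor_plugin.robothor_tasks import ObjectNavTask
```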

Continuous and multi-dimensional action spaces 🤖

Does your agent need to take actions that are more complex than choosing from a discrete list? We now support continuous (and multi-dimensional) actions for which you can associate arbitrary probability distributions. No longer do you need to artificially discretize actions: for example, you could now allow your agent to specify 7 different continuous torques to be applied to its robotic arm at once. These features are exemplified in a [new tutorial](https://allenact.org/tutorials/gym-tutorial/).
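
To make the idea concrete, here is a minimal sketch in plain PyTorch (not AllenAct's exact distribution interface): a policy head that parameterizes a Gaussian over a 7-dimensional torque vector and samples all torques jointly.

```python
# Minimal sketch of a continuous, multi-dimensional action head in plain
# PyTorch; it illustrates the idea rather than AllenAct's exact interface.
import torch
import torch.nn as nn


class TorqueHead(nn.Module):
    def __init__(self, hidden_size: int = 128, action_dim: int = 7):
        super().__init__()
        self.mean = nn.Linear(hidden_size, action_dim)
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, features: torch.Tensor) -> torch.distributions.Normal:
        # One independent Gaussian per torque dimension.
        return torch.distributions.Normal(self.mean(features), self.log_std.exp())


head = TorqueHead()
dist = head(torch.randn(1, 128))
action = dist.sample()                     # shape (1, 7): seven torques at once
log_prob = dist.log_prob(action).sum(-1)   # joint log-probability, e.g. for PPO
```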

A new OpenAI Gym plugin 📦

We now support all [Box2D tasks](https://gym.openai.com/envs/#box2d) with continuous actions. See our [new tutorial](https://allenact.org/tutorials/gym-tutorial/).
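
For instance, the continuous-control Box2D environments targeted by the plugin can be created directly through Gym (this requires `gym[box2d]`; the environment ID below is from the Gym versions current at the time of this release):

```python
# A continuous-action Box2D task from OpenAI Gym; install with
# `pip install "gym[box2d]"`. Uses the classic 4-tuple step API of the
# Gym versions current at the time of this release.
import gym

env = gym.make("LunarLanderContinuous-v2")
obs = env.reset()
for _ in range(100):
    action = env.action_space.sample()  # 2-D continuous: main and side engines
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```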

Stability and logging improvements ⚖️

We revamped the logging system, improving reliability and consistency. We've also made numerous small improvements so that AllenAct generates better error messages and fails more gracefully.

Cumulative support 📈

|Environments|Tasks|Algorithms|
|------------|-----|----------|
|[iTHOR](https://ai2thor.allenai.org/ithor/), [RoboTHOR](https://ai2thor.allenai.org/robothor/), [Habitat](https://aihabitat.org/), [MiniGrid](https://github.com/maximecb/gym-minigrid), [Gym](https://gym.openai.com/)|[PointNav](https://arxiv.org/pdf/1807.06757.pdf), [ObjectNav](https://arxiv.org/pdf/2006.13171.pdf), [MiniGrid tasks](https://github.com/maximecb/gym-minigrid), [Gym Box2D tasks](https://gym.openai.com/envs/#box2d)|[A2C](https://arxiv.org/pdf/1611.05763.pdf), [PPO](https://arxiv.org/pdf/1707.06347.pdf), [DD-PPO](https://arxiv.org/pdf/1911.00357.pdf), [DAgger](https://www.ri.cmu.edu/pub_files/2011/4/Ross-AISTATS11-NoRegret.pdf), Off-policy Imitation|

0.1.0

AllenAct is a modular and flexible learning framework designed with a focus on the unique requirements of Embodied-AI research. It provides first-class support for a growing collection of embodied environments, tasks, and algorithms, offers reproductions of state-of-the-art models, and includes extensive documentation, tutorials, start-up code, and pre-trained models.

In this first release we provide:

* _Support for several environments_: We support several environments used for Embodied-AI research, such as [AI2-THOR](https://ai2thor.allenai.org/), [Habitat](https://aihabitat.org/), and [MiniGrid](https://github.com/maximecb/gym-minigrid), and we have made it easy to incorporate new ones.
* _Different input modalities_: The framework supports a variety of input modalities such as RGB images, depth, language, and GPS readings.
* _Customizable training pipelines_: The framework includes not only various training algorithms (A2C, PPO, DAgger, etc.) but also allows one to easily combine these algorithms in pipelines (e.g., imitation learning followed by reinforcement learning); see the sketch after this list.
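
The sketch below shows what such a two-stage pipeline (imitation learning, then PPO) might look like, following the pattern in AllenAct's tutorials and using the post-0.2.0 module names. The import paths and constructor arguments are assumptions based on the documented 0.2.x API, and the hyperparameters and step counts are placeholders.

```python
# Schematic two-stage training pipeline: behavior cloning, then PPO. Import
# paths and arguments follow AllenAct's tutorials for the 0.2.x API and may
# differ across versions; the hyperparameters here are placeholders.
import torch.optim as optim

from allenact.algorithms.onpolicy_sync.losses.imitation import Imitation
from allenact.algorithms.onpolicy_sync.losses.ppo import PPO, PPOConfig
from allenact.utils.experiment_utils import Builder, PipelineStage, TrainingPipeline


def training_pipeline(lr: float = 3e-4) -> TrainingPipeline:
    return TrainingPipeline(
        named_losses={"imitation_loss": Imitation(), "ppo_loss": PPO(**PPOConfig)},
        optimizer_builder=Builder(optim.Adam, dict(lr=lr)),
        num_mini_batch=1,
        update_repeats=4,
        max_grad_norm=0.5,
        num_steps=128,
        gamma=0.99,
        use_gae=True,
        gae_lambda=0.95,
        advance_scene_rollout_period=None,
        save_interval=1_000_000,
        metric_accumulate_interval=10_000,
        pipeline_stages=[
            # Stage 1: imitate expert actions.
            PipelineStage(loss_names=["imitation_loss"], max_stage_steps=int(1e6)),
            # Stage 2: switch to reinforcement learning with PPO.
            PipelineStage(loss_names=["ppo_loss"], max_stage_steps=int(1e7)),
        ],
    )
```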

AllenAct currently supports the following environments, tasks, and algorithms. We are actively working on integrating recently developed models and frameworks. Moreover, in [our documentation](https://allenact.org), we provide tutorials demonstrating how to integrate the algorithms, tasks, and environments of your choice.

| Environments | Tasks | Algorithms |
| -------------------------- | --------------- | --------------- |
| [iTHOR](https://ai2thor.allenai.org/ithor/), [RoboTHOR](https://ai2thor.allenai.org/robothor/), [Habitat](https://aihabitat.org/), [MiniGrid](https://github.com/maximecb/gym-minigrid) | [PointNav](https://arxiv.org/pdf/1807.06757.pdf), [ObjectNav](https://arxiv.org/pdf/2006.13171.pdf), [MiniGrid tasks](https://github.com/maximecb/gym-minigrid) | [A2C](https://arxiv.org/pdf/1611.05763.pdf), [PPO](https://arxiv.org/pdf/1707.06347.pdf), [DD-PPO](https://arxiv.org/pdf/1911.00357.pdf), [DAgger](https://www.ri.cmu.edu/pub_files/2011/4/Ross-AISTATS11-NoRegret.pdf), Off-policy Imitation |

Note that distributed training is supported for all of the above algorithms.
