AllenAct

Latest version: v0.5.4


0.5.2

What's Changed
* Fixing clip requirement. by Lucaweihs in https://github.com/allenai/allenact/pull/340
* Fix for memoryless agents by jordis-ai2 in https://github.com/allenai/allenact/pull/342
* Valid on initial model weights by jordis-ai2 in https://github.com/allenai/allenact/pull/344
* Inference agent fix, improvements when using NCCL backend, and other minor improvements. by Lucaweihs in https://github.com/allenai/allenact/pull/345
* Merging main into callbacks and fixing merge conflict. by Lucaweihs in https://github.com/allenai/allenact/pull/347
* Fixing callbacks PR comments and other misc improvements. by Lucaweihs in https://github.com/allenai/allenact/pull/350
* Add Callback Support by mattdeitke in https://github.com/allenai/allenact/pull/339


**Full Changelog**: https://github.com/allenai/allenact/compare/v0.5.0...v0.5.2

0.5.0

In this release we add several substantial new features.

Multi-machine distributed training support

We've added a new tutorial (see [here](https://allenact.org/tutorials/distributed-objectnav-tutorial/)) and the scripts necessary to run AllenAct across multiple machines.

Improved Navigation Models with Auxiliary Tasks

[Recent work](https://arxiv.org/abs/2007.04561) has shown that certain auxiliary tasks (e.g. inverse/forward dynamics) can be used to speed up training and improve the performance of navigation agents. We have implemented a large number of these auxiliary tasks (see, for instance, the `InverseDynamicsLoss`, `TemporalDistanceLoss`, `CPCA16Loss`, and `MultiAuxTaskNegEntropyLoss` classes in the `allenact.embodiedai.aux_losses.losses` module) as well as a new base architecture for visual navigation (`allenact.embodiedai.models.VisualNavActorCritic`) that makes it easy to use these auxiliary losses during training.
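
As a rough sketch (assuming `allenact` >= 0.5 is installed, and with illustrative constructor arguments since the exact defaults may differ), these losses can be registered next to PPO among a training pipeline's named losses and referenced from a `PipelineStage`:

```python
# Illustrative sketch only: exact loss constructor arguments/defaults may differ,
# so consult allenact.embodiedai.aux_losses.losses before copying this verbatim.
from allenact.algorithms.onpolicy_sync.losses.ppo import PPO, PPOConfig
from allenact.embodiedai.aux_losses.losses import CPCA16Loss, InverseDynamicsLoss
from allenact.utils.experiment_utils import PipelineStage

# Auxiliary losses are optimized jointly with the main RL loss.
named_losses = {
    "ppo_loss": PPO(**PPOConfig),
    "inv_dyn_loss": InverseDynamicsLoss(),  # predict the action taken between consecutive observations
    "cpca_16_loss": CPCA16Loss(),           # contrastive predictive coding over 16 future steps
}

# This stage would typically be returned as part of ExperimentConfig.training_pipeline().
stage = PipelineStage(
    loss_names=list(named_losses.keys()),
    max_stage_steps=100_000_000,
)
```

Note that these losses generally assume a model exposing the corresponding auxiliary outputs, which is what the new `VisualNavActorCritic` base class is designed to provide.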

CLIP Preprocessors and Embodied CLIP experiments

We've added a new `clip_plugin` that provides preprocessors built on CLIP-pretrained visual encoders. See the `projects/objectnav_baselines/experiments/robothor/clip/objectnav_robothor_rgb_clipresnet50gru_ddppo.py` experiment configuration file, which uses these new preprocessors to obtain SOTA results on the [RoboTHOR ObjectNav leaderboard](https://leaderboard.allenai.org/robothor_objectnav/). These results correspond to [our new paper](https://arxiv.org/abs/2111.09888) on using CLIP visual encoders for embodied tasks.
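
For orientation, here is a minimal sketch of instantiating one of these CLIP-based preprocessors. The argument names below are our recollection of the plugin's interface and should be treated as assumptions; the referenced experiment config and the `clip_plugin` source are authoritative.

```python
# Hedged sketch: argument names are assumptions; see allenact_plugins.clip_plugin
# and the objectnav_robothor_rgb_clipresnet50gru_ddppo.py config for actual usage.
from allenact_plugins.clip_plugin.clip_preprocessors import ClipResNetPreprocessor

rgb_clip_preprocessor = ClipResNetPreprocessor(
    rgb_input_uuid="rgb_lowres",    # uuid of the RGB sensor feeding this preprocessor
    clip_model_type="RN50",         # which CLIP visual backbone to load
    pool=False,                     # keep the spatial feature map rather than pooling it
    output_uuid="rgb_clip_resnet",  # uuid under which the features appear in observations
)
```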

New storage flexibility

We've substantially generalized the rollout storage class. This is an "advanced" option, but it is now possible to implement custom storage classes that enable new types of training (e.g. Q-learning) and even mix training paradigms (e.g. training with Q-learning, PPO, and offline imitation learning simultaneously).
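
We won't reproduce the storage interface here (it lives alongside the original rollout storage in the AllenAct source); instead, the framework-agnostic sketch below, plain Python/PyTorch rather than the AllenAct base class, illustrates the kind of replay-style storage that this flexibility makes possible, e.g. for Q-learning-style off-policy updates.

```python
# Framework-agnostic illustration (NOT the AllenAct storage interface): a minimal
# replay buffer of the sort a custom storage class could wrap to support
# Q-learning-style off-policy updates alongside on-policy rollouts.
import random
from collections import deque
from typing import Deque, Dict, List

import torch


class SimpleReplayBuffer:
    def __init__(self, capacity: int = 10_000):
        self._buffer: Deque[Dict[str, torch.Tensor]] = deque(maxlen=capacity)

    def add(self, obs, action, reward, next_obs, done) -> None:
        # Store a single transition; a real storage class would store batched rollouts.
        self._buffer.append(
            dict(obs=obs, action=action, reward=reward, next_obs=next_obs, done=done)
        )

    def sample(self, batch_size: int) -> List[Dict[str, torch.Tensor]]:
        # Uniform sampling; prioritized or stratified schemes would go here.
        return random.sample(self._buffer, min(batch_size, len(self._buffer)))
```

A real AllenAct storage class would additionally handle device placement and batched rollout bookkeeping; see the storage module in the AllenAct source for the actual interface.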

Better Habitat Support and Experiments

We've added support for training ObjectNav models in Habitat and include an experiment config that trains such a model with a CLIP visual encoder backbone (see `projects/objectnav_baselines/experiments/habitat/clip/objectnav_habitat_rgb_clipresnet50gru_ddppo.py`).

0.4.0

In this release we add:

Hierarchical policy support 📶

We have improved our support for hierarchical agent policies via the addition of a `SequentialDistr` class. This class allows for multi-stage hierarchical policy distributions. There are two common settings where this can be very useful. (1) Your agent needs to choose a high-level objective before choosing a low-level action (e.g. it may want to `"MoveAhead"` if its high-level goal is to explore but `"Crouch"` if its goal is to hide). This is naturally modeled as a hierarchical policy in which the agent first samples its objective and then samples a low-level action conditioned on that objective. (2) Your agent has a conditional low-level action space, e.g. it needs to specify `x,y` coordinates when taking a `"PickupObject"` action but does not need such coordinates when taking a `"MoveAhead"` action.

We also include native support for hierarchical policies with Categorical sub-policies.
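
To make setting (1) concrete, here is a framework-agnostic sketch of the two-stage sampling pattern that `SequentialDistr` is designed to express. It uses plain `torch.distributions` with Categorical sub-policies, not the `SequentialDistr` API itself; consult the AllenAct distributions module for the actual class.

```python
# Conceptual sketch (torch.distributions, not SequentialDistr): first sample a
# high-level objective, then sample a low-level action whose distribution is
# conditioned on that objective.
import torch
from torch.distributions import Categorical

objective_logits = torch.tensor([0.5, 1.5])  # e.g. 0 = "explore", 1 = "hide"
# One row of low-level action logits per objective, e.g. over ["MoveAhead", "Crouch"].
action_logits_per_objective = torch.tensor([[2.0, 0.1], [0.1, 2.0]])

objective_distr = Categorical(logits=objective_logits)
objective = objective_distr.sample()

action_distr = Categorical(logits=action_logits_per_objective[objective])
action = action_distr.sample()

# The joint log-probability factorizes across the two stages.
log_prob = objective_distr.log_prob(objective) + action_distr.log_prob(action)
```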

Misc improvements 📚

We also include several smaller enhancements:
* Early stopping criteria for non-distributed training - It can be useful to automatically stop training when some success criterion is met (e.g. the training reward has saturated). This is now possible (for non-distributed training) via the `early_stopping_criteria` parameter of the `PipelineStage` class; a sketch follows this list.
* Better error (and keyboard interrupt) handling - Depending on the complexity of a training task, AllenAct may start a large number of parallel processes. We are now more consistent about how we handle exit signals, so processes are less likely to remain alive after a kill signal is sent.
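
As a sketch of the early-stopping hook, the criterion below is hypothetical and the callable interface it assumes (stage steps, total steps, and a tracker of training metrics) should be checked against `allenact.utils.experiment_utils` before use.

```python
# Hedged sketch: the EarlyStoppingCriterion interface assumed below is an
# assumption; verify against allenact.utils.experiment_utils.
from allenact.utils.experiment_utils import EarlyStoppingCriterion, PipelineStage


class RewardSaturatedCriterion(EarlyStoppingCriterion):
    """Hypothetical criterion: stop the stage once mean training reward exceeds a threshold."""

    def __init__(self, reward_threshold: float = 0.95):
        self.reward_threshold = reward_threshold

    def __call__(self, stage_steps: int, total_steps: int, training_metrics) -> bool:
        means = training_metrics.means() if training_metrics is not None else {}
        return means.get("reward", float("-inf")) > self.reward_threshold


stage = PipelineStage(
    loss_names=["ppo_loss"],
    max_stage_steps=50_000_000,
    early_stopping_criteria=RewardSaturatedCriterion(),  # only applied in non-distributed training
)
```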

Backwards incompatible changes 💔

* Several vision sensors have moved from `allenact.base_abstractions.sensor` to `allenact.embodiedai.sensors.vision_sensors`. This change makes class locations more consistent; an example import follows below.
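
For example (assuming `RGBSensor` is among the moved classes), imports should now look like:

```python
# New location (the specific class name here is an assumption; adjust to the sensor you use).
from allenact.embodiedai.sensors.vision_sensors import RGBSensor
# Previously: from allenact.base_abstractions.sensor import RGBSensor
```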

0.3.1

This patch release introduces several small bug fixes:

* In refactoring the `compute_cnn_output` function, a prior commit failed to remove all instances of a `use_agents` variable; this has been fixed.
* Visualization code has been improved to work more seamlessly with new versions of AI2-THOR.

0.3.0

This minor release brings:

* Command line updates to experiment runs - you often want to specify certain experiment parameters from the command line (e.g. which GPUs to use or the number of training processes). This is now possible with the `--config_kwargs` flag, which passes parameters directly to the initializer of your `ExperimentConfig` class before training/evaluation, giving you enormous flexibility in how your experiment runs (a sketch follows this list).
* Improved logging - logs are now semantically colored to highlight "info", "warning", and "error" messages. This also includes a bug fix where failing to call `init_logging` before `get_logger` would result in `print` statements being hidden.
* Less mysterious testing - using AllenAct for evaluation was previously somewhat mysterious, as we made several assumptions about how directories (and checkpoint file names) were named. This has now been simplified. The simplification **does introduce a minor backwards incompatible change**, so please see our documentation for how evaluation should now be run.
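
Here is a minimal sketch of an `ExperimentConfig` written to take advantage of `--config_kwargs`. The class name, its parameters, and the command-line value format shown in the comment (a JSON-style string) are our assumptions; see the AllenAct documentation for the exact accepted formats.

```python
# Sketch: an ExperimentConfig whose constructor parameters can be overridden from
# the command line, e.g. (value format is an assumption, check the docs):
#   python main.py my_experiment --config_kwargs '{"num_train_processes": 4, "gpu_ids": [0]}'
from allenact.base_abstractions.experiment_config import ExperimentConfig


class MyObjectNavConfig(ExperimentConfig):
    def __init__(self, num_train_processes: int = 16, gpu_ids=(0,)):
        super().__init__()
        self.num_train_processes = num_train_processes
        self.gpu_ids = list(gpu_ids)

    # tag(), create_model(), training_pipeline(), machine_params(), etc. omitted for brevity.
```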

0.2.3

This minor release brings:

* **Semantic / free-space mapping support** - this includes new sensors (for building maps as agents traverse their environment) and an implementation of the Active Neural SLAM mapping module from [Chaplot et al. (2020)](https://www.cs.cmu.edu/~dchaplot/projects/neural-slam.html).
* Bug fix in the RoboTHOR Object Navigation task where distances between points were sometimes reported incorrectly (this bug did not impact the Success or SPL metrics).
