- `dockerDataRoot` parameter fix
- NVMe EBS volume mount fix
- Massive code refactoring.
- AWS and GCP instances are on-demand by default. To run a spot instance, use the `spotInstance: true` parameter for AWS and `preemptibleInstance: true` for GCP.
- AWS EC2 volumes and GCP disks are retained by default.
- Changes for the `spotty run` command:
    - It syncs the project with the instance before running a script (the `-S, --sync` flag was replaced with the `--no-sync` flag).
    - Added support for custom arguments. They can be provided to the script after the double-dash (`--`) argument.
    - Scripts now support shebangs (`#!`) to use custom interpreters.
    - The "restart" flag was removed, as it doesn't work well with the `docker exec` process.
- Added the `spotty exec` command to execute custom commands in the container (for example, to run Python scripts in the container with PyCharm).
- Added the `-C` flag to the `spotty start` command to start or restart a container without restarting the instance itself.
- Added the `instanceProfileArn` parameter to specify custom instance profiles for AWS instances (tsdalton, 42).
- Nitro-based instances support (64, 66).
- Added support for multiple container configurations in the `spotty.yaml` file (44).
- Container configurations support custom environment variables.
- `cfn-init` logs are automatically downloaded to the local machine if the instance fails to start (52, 44, 48).
- Added support for the `spotty.override.yaml` file. It overrides the values of the main `spotty.yaml` file and is supposed to be added to the `.gitignore` file.
- Added the "local" provider to build and run Docker containers locally.
- Added the "remote" provider to run containers on any machine accessible via SSH with Docker installed.
- The `spotty ssh` command was renamed to `spotty sh`, as "ssh" didn't make sense for the "local" provider and the new name is shorter.
- The `ports` parameter was moved from the container config to the instance config.
- Disabled the host network mode by default, as it doesn't work on macOS (added the `ports` parameter to the container configuration to publish specific ports to the host OS).
- The GCP provider uses the "common-gce-gpu-image" image as a base image by default.
- Dropped support for custom AWS AMIs and custom GCP images.
- Added the `runAsHostUser` parameter to run containers as the host user.
- Added the `-u` flag to the `spotty run` and `spotty sh` commands to connect to the container as the root user.
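Put together, a multi-container `spotty.yaml` using several of the parameters above might look roughly like this. This is a sketch only: the `project`, `containers`, and `instances` section names, the nesting, and all values are assumptions for illustration, not taken from the release notes.

```yaml
# Illustrative spotty.yaml sketch; section names and nesting are assumptions.
project:
  name: my-model

containers:
  - name: default
    file: docker/Dockerfile        # hypothetical Dockerfile path
    env:                           # custom environment variables (key name assumed)
      PYTHONPATH: /workspace/project
    ports: [6006, 8888]            # publish specific ports to the host OS
    runAsHostUser: true            # run the container as the host user

instances:
  - name: aws-gpu
    provider: aws
    parameters:
      instanceType: p2.xlarge      # hypothetical instance type
      spotInstance: true           # on-demand is the default
```

A `spotty.override.yaml` with the same structure could then override individual values (for example, turning `spotInstance` off on one machine) while staying out of version control.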
- GCP:
    - fixed `setup.py`
    - added the `bootDiskSize` parameter
    - added custom startup commands for the instance and the container
    - updated the Docker version for a custom image
    - updated the shared GCP image
- AWS:
    - fixed the Node.js runtime for the AMI stack
    - updated the Docker version for a custom AMI
    - using the latest Deep Learning AMI (AWS stopped maintaining the "Deep Learning Base AMI")
- GCP provider (beta):
    - deletion policies for disks are not implemented yet; disks are always retained once an instance is stopped
    - only one list of "exclude" filters for synchronization is supported at the moment
    - the `spotty download` command is not implemented
- stopped changing the ownership of files when mounting volumes
- fixed S3 bucket creation in the "us-east-1" region
- the "parameters" argument for the `spotty run` command was renamed to "parameter" and should now be used multiple times to specify several script parameters
- the "filters" argument for the `spotty download` command was renamed to "include" and should now be used multiple times to specify several patterns
- using the "gp2" type for EBS volumes by default
- added the "type" parameter to the EBS volume configuration
- using the AWS Deep Learning Base AMI by default instead of creating a Spotty AMI
- added the `managedPolicyArns` parameter to attach managed policies to the instance role
- creating an Instance Profile per instance
- added the `commands` parameter to the instance config to run custom commands on the host OS before the container is started
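The two new instance-level parameters might be used together like this. This is a sketch under assumptions: the surrounding structure and all values are illustrative, only the `managedPolicyArns` and `commands` parameter names come from the entry above.

```yaml
# Illustrative fragment; surrounding structure and values are assumptions.
instances:
  - name: i1
    provider: aws
    parameters:
      instanceType: p2.xlarge
      # managed policies to attach to the automatically created instance role
      managedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
      # custom commands to run on the host OS before the container is started
      commands: |
        echo "host-level setup" > /tmp/setup.log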
Bug fixes:
- with the "create_snapshot" and "update_snapshot" deletion policies, an EBS volume was deleted once the instance was stopped, even if there was an error or a timeout (10 minutes) during snapshot creation
- the `onDemandInstance` parameter wasn't working if the old configuration format was used
- the `spotty start` and `spotty status` commands were failing if an on-demand instance was used (although the instance itself was working)
- AMI creation didn't work if the `subnetId` parameter was specified
- a proper error message is now displayed if the user tries to connect to an instance without a public IP address
- the format of the configuration file was changed:
    - container parameters were separated from instance parameters
    - a configuration file describes a list of instances, not just one
- added an abstraction over cloud providers. Spotty still supports only AWS, but now it's relatively easy to add other cloud providers. As a result, all AWS-specific commands were moved under the `spotty aws` command.
- added an abstraction over volumes. Spotty still supports only EBS volumes, but now it's possible to add support for EFS or S3 volumes.
- old configuration files are still supported, but a warning message is displayed
- deletion policies are now applied through the AWS API (the corresponding Lambda functions were removed from the CloudFormation template), so deletion policies can be changed in a configuration file after an instance is started
- changes for the `spotty run` command:
    - a tmux window is no longer killed once the process exits, so the user can see the output of the exited process; the "Ctrl+b, then x" key combination closes a tmux window
    - by default, the `spotty run` command doesn't log the output of a script, but the user can use the `-l` flag to enable logging
    - added the `-r` flag to restart a tmux session without closing it. Before using it, the running process should be stopped, because a killed `docker exec` command won't kill the spawned process automatically (see the issue [here](https://github.com/moby/moby/issues/9098))
    - scripts can be parametrized using [Mustache tags](https://mustache.github.io/); the user can specify parameters using the `-p` flag
- custom commands can be integrated with Spotty using Python entry points
- instances no longer write logs to CloudWatch
- volumes can be restored from a custom snapshot using its name
- added the "dry-run" flag for the `spotty start` command
- added the "debug-mode" flag for the `create-ami` command: in this case, an AMI will not be created and the user will be able to connect to the running instance
- added the `spotty status` command that shows the current state of the instance
- added the `spotty download` command to download project files from the instance
- added the `-l` flag for the `spotty ssh` command to list opened tmux sessions
- added the `amiId` parameter to the configuration. It can be useful for sharing a Spotty AMI with other users, so they won't need to create it themselves.
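A Mustache-parametrized script in the configuration might look like this. It is a sketch: the `scripts` section name, the script name, and the `-p` value syntax are assumptions; only the Mustache tag syntax and the `-p` flag are taken from the entry above.

```yaml
# Illustrative fragment; the "scripts" section name is an assumption.
scripts:
  train: |
    python train.py --epochs {{epochs}} --learning-rate {{lr}}
```

It could then presumably be run with something like `spotty run train -p epochs=100 -p lr=0.001`, with each `-p` flag filling in one Mustache tag.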
- fixed NVIDIA Docker installation
- updated Docker CE to version 18.09.3
Fixed NVIDIA Docker installation.
- updated NVIDIA driver to version 410 for CUDA 10 support
- increased timeout for building a Docker image
- additional runtime parameters for the container can be added using the `runtimeParameters` parameter in the configuration file
- version flag for the `spotty` command
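The `runtimeParameters` entry might look roughly like this. This is a sketch: the placement inside a `container` section and the example values are assumptions; only the parameter name comes from the entry above.

```yaml
# Illustrative fragment; placement and values are assumptions.
container:
  # extra runtime parameters for the container,
  # presumably passed through to the "docker run" command
  runtimeParameters: ['--privileged', '--shm-size=2g']
```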
- fixed the Lambda functions runtime
- added the `onDemandInstance` parameter to run on-demand instances
- "subnetId" parameter (for the case when default subnets don't exist) - "localSshPort" parameter (for the case when the instance doesn't have public IP address and SSH access is provided through a tunnel to a local port) - "aws sync" fix for files of the same size - fixed the docker "image" parameter
- fixed permissions for key files
- fixed availability zone selection
- added a flag for the `run` command to sync the project before running a script
- added the "g3s.xlarge" instance type to the list of valid GPU instances
Fix: made the "bin/spotty" file executable.
Fix requirements in the `setup.py` file.
- changed the default working directory of a container to the project directory
- changed the Docker build context path to the Dockerfile directory
- added the "container" alias to connect to the Docker container from the host OS
- added the "availabilityZone" parameter - added the "retain" deletion policy - capability to attach existing volumes to the instance - "snapshotName" parameter renamed to "name" - set the "create_snapshot" deletion policy as a default one - auto-resize volumes restored from snapshots - display current spot price once an instance is started - checking the maximum price before starting an instance
CF template fix
Disabled daily updates for apt: fixes the issue where cloud-init fails to install packages because the `/var/lib/dpkg/lock` file is locked.
- updated the CloudFormation template for AMI creation
- fixes
- attaching multiple volumes to the instance
- two snapshot modes for volumes: "update_snapshot" and "create_snapshot"
- the Instance Profile resource was moved to a separate stack to launch instances faster
- the `volumes` parameter is optional
- other fixes
"amiName" parameter became optional
- "clean-logs" command - display project synchronization process - fixes
First Spotty release
First CloudTraining release