isitfit

Latest version: v0.20.11

0.19.13 / 0.19.14

- enh: figured out that issue 10 is about the Datadog hostname differing from the AWS instance ID
- added some more output to the tests in case my bugfix doesn't work
- useful references:
  - https://docs.datadoghq.com/agent/faq/how-datadog-agent-determines-the-hostname/?tab=agentv6v7#potential-host-names
  - https://docs.datadoghq.com/api/?lang=python#search-hosts
- bugfix: issue 10 uncovered that the AWS instance ID needs to be mapped to the Datadog hostname before filtering in Datadog; fixed (see the sketch after this list)
- enh: add some timer code to gather (in Matomo) the time-to-run of calculations
- enh: add an interim timer call with the number of EC2 or RDS entries, to measure code performance in seconds per resource (also sketched below)
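The fix hinges on resolving an EC2 instance ID to whatever hostname the Datadog agent reports. A minimal sketch, assuming the `datadog` Python client and that hosts from the AWS integration carry their instance ID among their aliases; the helper name `map_aws_id_to_dd_hostname` is hypothetical, not isitfit's actual code:

```python
from datadog import initialize, api

initialize(api_key="DD_API_KEY", app_key="DD_APP_KEY")

def map_aws_id_to_dd_hostname(aws_id):
    """Resolve an EC2 instance ID (e.g. i-0123abcd) to the hostname
    that the Datadog agent reports for that machine."""
    # query the hosts endpoint (the "search hosts" reference above),
    # filtering on the instance ID
    result = api.Hosts.search(filter=aws_id)
    for host in result.get("host_list", []):
        # hosts coming from the AWS integration list the instance ID
        # among their aliases
        if aws_id in host.get("aliases", []):
            return host["name"]
    return None  # no Datadog host matched this instance ID
```

For the timing enhancement, a sketch of the interim timer idea; `ping_matomo` stands in for a hypothetical helper that reports an event to Matomo:

```python
import time

def run_timed(entries, calculate, ping_matomo):
    """Time a calculation over EC2/RDS entries and report
    seconds-per-resource to Matomo."""
    t0 = time.time()
    for entry in entries:
        calculate(entry)
    elapsed = time.time() - t0
    # the interim timer call: elapsed seconds plus the entry count
    # yield a seconds-per-resource performance figure
    ping_matomo("timer?seconds_per_resource=%.3f" % (elapsed / max(len(entries), 1)))
```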

0.19.9

- bugfix: run the `aws iam get-user` check only after another error has already been hit; also reword the message from "error" to "hint" (see the sketch below)
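A minimal sketch of that flow, assuming `boto3` and `click`; the wrapper `run_with_credentials_hint` is hypothetical, not isitfit's actual function:

```python
import click
import boto3
from botocore.exceptions import ClientError

def run_with_credentials_hint(fn):
    """Run `fn`; only if it fails, try `aws iam get-user` to decide whether
    the real problem is the AWS credentials, and word it as a hint."""
    try:
        return fn()
    except ClientError:
        try:
            boto3.client("iam").get_user()
        except ClientError as e:
            # get-user also failed, so the credentials themselves are suspect
            click.echo("Hint: `aws iam get-user` failed (%s); "
                       "check your AWS credentials" % e)
        raise
```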

0.19.8

- enh: docker: dropped redis installation from docker image
- enh: skip the AWS credentials test at CLI launch when `isitfit version` is invoked (see the sketch after this list)
- enh: readme: add quickstart with pip and docker
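A sketch of skipping the launch-time credentials test for `isitfit version`, assuming a `click` group, which exposes the invoked subcommand via `ctx.invoked_subcommand`; `check_aws_credentials` is a hypothetical stand-in for the check described in the 0.19.7 notes below:

```python
import click
import boto3
from botocore.exceptions import ClientError

def check_aws_credentials():
    """Hypothetical launch-time check; see the 0.19.7 notes below."""
    try:
        boto3.client("iam").get_user()
    except ClientError as e:
        raise click.ClickException("Hint: AWS credentials problem? %s" % e)

@click.group()
@click.pass_context
def cli(ctx):
    # `isitfit version` needs no AWS access, so skip the credentials
    # test when that is the invoked subcommand
    if ctx.invoked_subcommand != "version":
        check_aws_credentials()

@cli.command()
def version():
    click.echo("isitfit 0.19.8")
```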

0.19.7

- bugfix: add more exception handling when counting resources in regions, to account for roles with no access to some regions (sketched below)
- bugfix: added a simple `aws iam get-user` test run at the launch of isitfit, just to make sure the user has proper credentials
  - if that fails, there is probably a problem with the credentials in the first place
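A sketch of the per-region counting with the added exception handling, assuming `boto3`; catching `ClientError` per region keeps a role with no access to one region from aborting the whole count:

```python
import boto3
from botocore.exceptions import ClientError

def count_ec2_per_region():
    """Count EC2 instances per region, tolerating regions the
    current role cannot access."""
    counts = {}
    regions = [r["RegionName"]
               for r in boto3.client("ec2").describe_regions()["Regions"]]
    for region in regions:
        try:
            ec2 = boto3.client("ec2", region_name=region)
            counts[region] = sum(
                len(res["Instances"])
                for page in ec2.get_paginator("describe_instances").paginate()
                for res in page["Reservations"])
        except ClientError:
            # e.g. UnauthorizedOperation for a role without access here
            counts[region] = None
    return counts
```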

0.19.6

- feat: cost optimize: save account.cost.optimize recommendations to a sqlite database, with a `dt_created` field that gets preserved between re-runs (see the sketch after this list)
  - this helps identify the date on which a recommendation was first created
- feat: cost optimize: do not load recommendations from sqlite; re-calculate instead
  - Update 2019-12-27
    - initially, this feature was "load recommendations from sqlite instead of re-calculating"
    - but I decided to just re-calculate at each request, and keep the sqlite usage for the interactive implementation
    - also, cleaned up the implementation by using the `pre` listener for checking existing sqlite data (which I don't use anyway now)
  - Earlier notes 2019-12-26
    - also made some code changes for separation of concerns
    - the implementation is horrible at the moment, with a major open requirement: how to return a different result per `ndays` request
    - the `pipeline_factory` function has also become very messy
    - and there is still no way to pass a `--refresh` option to recalculate instead of loading from sqlite
- bugfix: cost optimize: filter `ec2_df` for only the latest size; this fixes the issue of cpu.max.max holding a value from size s1 when the current size is s2 (sketched below)
- bugfix: cost optimize: when `ec2_df` has fewer than 7 days of daily data, return "Not enough data" in the classification
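A minimal sketch of preserving `dt_created` between re-runs, assuming the standard-library `sqlite3` and a hypothetical table layout; the insert only fires for recommendations that do not already have a row:

```python
import sqlite3
import datetime as dt

def save_recommendations(db_path, recommendations):
    """Upsert recommendations, keeping the original dt_created so the
    date a recommendation first appeared survives re-runs."""
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS recommendations
                    (instance_id TEXT PRIMARY KEY,
                     classification TEXT,
                     dt_created TEXT)""")
    now = dt.datetime.utcnow().isoformat()
    for rec in recommendations:
        # INSERT OR IGNORE leaves an existing row (and its dt_created)
        # untouched; the UPDATE then refreshes only the classification
        conn.execute("INSERT OR IGNORE INTO recommendations VALUES (?, ?, ?)",
                     (rec["instance_id"], rec["classification"], now))
        conn.execute("UPDATE recommendations SET classification=? WHERE instance_id=?",
                     (rec["classification"], rec["instance_id"]))
    conn.commit()
    conn.close()
```

And a sketch of the two `ec2_df` bugfixes, assuming a time-sorted pandas dataframe with `size` and `cpu_max` columns; the column names, labels, and threshold are hypothetical:

```python
def classify(ec2_df):
    # keep only rows for the instance's latest size, so cpu.max.max
    # is not inherited from a previous size (s1 vs s2 above)
    latest_size = ec2_df["size"].iloc[-1]
    ec2_df = ec2_df[ec2_df["size"] == latest_size]
    # fewer than 7 days of daily data is too little to classify
    if len(ec2_df) < 7:
        return "Not enough data"
    # hypothetical threshold, for illustration only
    return "Underused" if ec2_df["cpu_max"].max() < 30 else "Normal"
```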

0.19.5

- enh: minor wording in "will skip" message to be more explicit
