Datacube

1.1.15

- Fixed a data loading issue when reading HDF4_EOS datasets.

1.1.14

- Added support for buffering/padding of GridWorkflow tile searches
- Improved the `Query` class to make filtering by a source or parent dataset easier; for example, datasets can be filtered by Geometric Quality Assessment (GQA). Use `source_filter` when requesting data (see the sketch after this list).
- Additional data preparation and configuration scripts
- Various fixes for single point values for lat, lon & time searches
- Grouping by solar day now overlays scenes in a consistent manner, with the more northern scene taking precedence. Previously, which scene/tile ended up on top was non-deterministic.
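
A minimal sketch of filtering by a source dataset's GQA via `source_filter`; the product names, spatial/temporal bounds and the `gqa_mean_xy` field below are assumptions for illustration and depend on how your index is configured.

```python
import datacube

dc = datacube.Datacube()

# Load surface reflectance, keeping only observations whose source (parent)
# level-1 dataset satisfies a GQA threshold. Product names, the bounds and
# the 'gqa_mean_xy' field are hypothetical.
data = dc.load(
    product='ls8_nbar_albers',
    x=(149.0, 149.2),
    y=(-35.4, -35.2),
    time=('2016-01-01', '2016-06-30'),
    source_filter={'product': 'ls8_level1_scene', 'gqa_mean_xy': (-1, 1)},
)
```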

1.1.13

- Added support for accessing data through `http` and `s3` protocols
- Added `dataset search` command for filtering datasets (lists `id`, `product`, `location`)
- `ingestion_bounds` can again be specified in the ingester config
- Can now do range searches on non-range fields (e.g. `dc.load(orbit=(20, 30))`)
- Merged several bug-fixes from CEOS-SEO branch
- Added Polygon Drill recipe

1.1.12

- Fixed the affine deprecation warning
- Added `datacube metadata_type` cli tool which supports `add` and `update`
- Improved `datacube product` cli tool logging

1.1.11

- Improved ingester task throughput when using distributed executor
- Fixed an issue where loading tasks from disk would use too much memory
- `GeoPolygon.to_crs` now adds additional points (roughly every 100 km) to improve reprojection accuracy (see the sketch below)
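
A minimal sketch of the reprojection this improves, assuming `GeoPolygon` and `CRS` are importable from `datacube.model` as they were around this release; the coordinates are placeholders.

```python
# Reproject a large WGS84 rectangle to Australian Albers (EPSG:3577).
# to_crs() densifies the boundary (roughly every 100 km) before reprojecting,
# so the output outline follows the curvature of the original edges.
from datacube.model import CRS, GeoPolygon

poly = GeoPolygon([(140.0, -40.0), (140.0, -30.0),
                   (150.0, -30.0), (150.0, -40.0)],
                  CRS('EPSG:4326'))
albers_poly = poly.to_crs(CRS('EPSG:3577'))
```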

1.1.10

- Ingester can now be configured to have WELD/MODIS style tile indexes (thanks Chris Holden)
- Added `--queue-size` option to `datacube ingest` to control number of tasks queued up for execution
- Product name is now used as the primary key when adding datasets, allowing easy migration of datasets from one database to another
- Metadata type name is now used as the primary key when adding products, allowing easy migration of products from one database to another
- `DatasetResource.has` now takes a dataset id instead of a `model.Dataset` (see the sketch after this list)
- Fixed an issue where database connections weren't recycled fast enough in some cases
- Fixed an issue where `DatasetTypeResource.get` and `DatasetTypeResource.get_by_name` would cache `None` if the product didn't exist
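
A minimal sketch of the changed `has` call, using the index attached to a `Datacube` instance; the UUID is a placeholder.

```python
import datacube

dc = datacube.Datacube()

# DatasetResource.has previously expected a model.Dataset object;
# it now accepts the dataset id directly.
dataset_id = '11111111-2222-3333-4444-555555555555'  # placeholder UUID
if dc.index.datasets.has(dataset_id):
    print('dataset is already indexed')
```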
