Blobfile

Latest version: v2.1.1

1.0.1

* Better error message for a bad refresh token, thanks to hauntsaninja for reporting this
* Include more error information when a request fails
* Fix `bf.copy(..., parallel=True)` logic; versions `1.0.0` and `0.17.3` could upload the wrong data when requests are retried internally by `bf.copy`. Also, Azure paths were not properly escaped.
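
One way to sanity-check a parallel copy is to compare the `md5` field that `0.17.0` added to the stat object. A minimal sketch, assuming the hypothetical bucket and container paths below:

```python
import blobfile as bf

src = "gs://my-bucket/data.bin"            # hypothetical source path
dst = "az://myaccount/container/data.bin"  # hypothetical destination path

bf.copy(src, dst, parallel=True, overwrite=True)

# stat().md5 is the MD5 hexdigest when the service records one; it may
# be None, so only compare when both sides report a digest.
src_md5 = bf.stat(src).md5
dst_md5 = bf.stat(dst).md5
if src_md5 is not None and dst_md5 is not None:
    assert src_md5 == dst_md5, "copy produced mismatched content"
```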

1.0.0

* Remove deprecated functions `LocalBlobFile` (use `BlobFile` with `streaming=False`) and `set_log_callback` (use `configure` with `log_callback=<fn>`)
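
A migration sketch for both removals; the path and callback are hypothetical, and the callback is assumed to receive the log message as a single string:

```python
import blobfile as bf

# Before 1.0.0:
#   f = bf.LocalBlobFile("gs://my-bucket/data.txt")
#   bf.set_log_callback(my_log_fn)

# From 1.0.0 on, pass streaming=False to BlobFile ...
with bf.BlobFile("gs://my-bucket/data.txt", "r", streaming=False) as f:
    text = f.read()

# ... and register the callback through configure().
def my_log_fn(msg: str) -> None:
    print(f"[blobfile] {msg}")

bf.configure(log_callback=my_log_fn)
```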

0.17.3

* Change default write block size to 8 MB
* Add a `parallel` option to `bf.copy` to perform some operations in parallel, as well as a `parallel_executor` argument to set the executor to use (see the sketch after this list)
* Fix `bf.copy` between multiple Azure storage accounts, thanks to hauntsaninja for reporting this
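
A minimal sketch of the parallel options, assuming `parallel_executor` accepts a standard `concurrent.futures` executor; the paths are hypothetical:

```python
import concurrent.futures

import blobfile as bf

# Sharing one executor across calls reuses a single worker pool
# instead of each copy creating its own.
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as executor:
    bf.copy(
        "gs://my-bucket/large-file.bin",       # hypothetical source
        "az://myaccount/container/large.bin",  # hypothetical destination
        parallel=True,
        parallel_executor=executor,
    )
```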

0.17.2

* Allow seeking past the end of a file (see the sketch after this list)
* Allow anonymous access for Azure containers: try anonymous access if other methods fail, so blobfile can still work when the user has no valid Azure credentials.
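
A short sketch of the relaxed seek behavior, with a hypothetical path; a read after seeking past the end is assumed to return empty bytes, as with ordinary files:

```python
import blobfile as bf

path = "gs://my-bucket/example.bin"  # hypothetical path

with bf.BlobFile(path, "rb") as f:
    size = bf.stat(path).size
    f.seek(size + 1024)     # seeking past the end no longer raises
    assert f.read() == b""  # reads past EOF yield empty bytes
```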

0.17.1

* Fixed GCS cloud copy for large files, from hauntsaninja
* Added a workaround so `TextIOWrapper` buffers the same way when reading in text or binary mode (see the sketch after this list)
* Don't clear block blobs when starting to write to them; instead, clear only the uncommitted blocks.
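
To illustrate the buffering workaround, a sketch that reads the same hypothetical UTF-8 file in both modes:

```python
import blobfile as bf

path = "gs://my-bucket/notes.txt"  # hypothetical UTF-8 text file

# Text mode wraps the binary stream in a TextIOWrapper; the workaround
# makes reads buffer the same way in both modes.
with bf.BlobFile(path, "r") as f:
    text = f.read()

with bf.BlobFile(path, "rb") as f:
    raw = f.read()

# Assumes the file contains "\n" newlines, so no translation occurs.
assert text == raw.decode("utf-8")
```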

0.17.0

* Log all request failures by default rather than just errors after the first one; the threshold can now be set with the `retry_log_threshold` argument to `configure()`. To get the previous behavior, use `bf.configure(retry_log_threshold=1)`
* Use block blobs instead of append blobs in Azure Storage; the block size can be set via the `azure_write_chunk_size` option to `configure()`. Writing a block blob will delete any existing file before starting the write, and writing may raise a `ConcurrentWriteFailure` if multiple processes write to the same file at the same time. If this happens, either avoid writing concurrently to the same file or retry after some period (see the sketch after this list).
* Make service principals fall back to storage account keys and improve detection of when to fall back
* Added a `set_mtime` function to set the modified time for an object
* Added `md5` to the stat object, which will be the MD5 hexdigest if present on a remote file. Also added `version` which, for remote objects, is a unique id that changes when the file is changed.
* Improved error descriptions
* Require keyword arguments to `configure()`
* Add `scanglob`, which is `glob` but returns `DirEntry` objects instead of strings
* Add `scandir`, which is `listdir` but returns `DirEntry` objects instead of strings
* `listdir` entries for local paths are no longer returned in sorted order
* Add the ability to set the max count of connection pools; this may be useful for Azure, where each storage account has its own connection pool.
* Handle `:` with `join`
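
Putting a few of these together, a sketch under the assumptions that `ConcurrentWriteFailure` is exported at the top level and that `DirEntry` carries an optional `stat` field; the container paths are hypothetical:

```python
import time

import blobfile as bf

# configure() now requires keyword arguments; restore the old retry
# logging behavior and set a 4 MB Azure write block size.
bf.configure(retry_log_threshold=1, azure_write_chunk_size=4 * 2**20)

path = "az://myaccount/container/shared.txt"  # hypothetical path

# Block-blob writes may raise ConcurrentWriteFailure when several
# processes write the same file at once; retry after a backoff.
for attempt in range(5):
    try:
        with bf.BlobFile(path, "w") as f:
            f.write("hello\n")
        break
    except bf.ConcurrentWriteFailure:
        time.sleep(2**attempt)

# scandir yields DirEntry objects whose stat (when populated) exposes
# the new md5 and version fields for remote files.
for entry in bf.scandir("az://myaccount/container"):
    if entry.is_file and entry.stat is not None:
        print(entry.name, entry.stat.md5, entry.stat.version)
```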
