EQcorrscan

Latest version: v0.5.0


0.3.3

* Make test-script more stable.
* Fix bug where `set_xcorr` used as a context manager did not correctly reset
stream_xcorr methods (see the sketch after this list).
* Correct test-script (`test_eqcorrscan.py`) to find paths properly.
* BUG-FIX in `Party.decluster`: when detections were made at exactly the same
time, the first, rather than the highest, of these was taken.
* Properly catch a one-sample difference in day length in `pre_processing.dayproc`.
* Shortproc now clips and pads to the correct length asserted by starttime and
endtime.
* Bug-fix: Match-filter collection objects (Tribe, Party, Family) previously
implemented addition (`__add__`) such that it altered the main object. The
main object is now left unchanged.
* `Family.catalog` is now an immutable property.
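
The following is a minimal sketch of the context-manager behaviour referred to
above: switching the correlation backend for a single detection run. The
`tribe` and `st` objects and the detection parameter values are placeholders,
not taken from this changelog.

```python
from eqcorrscan.utils.correlate import set_xcorr

# `tribe` is an existing Tribe and `st` a pre-loaded obspy Stream
# (placeholders for this sketch).
with set_xcorr("numpy"):
    # Correlations inside this block use the numpy backend.
    party = tribe.detect(
        stream=st, threshold=8.0, threshold_type="MAD", trig_int=6.0)

# On leaving the block the previous stream_xcorr method should be restored -
# the behaviour this release fixes.
party_default = tribe.detect(
    stream=st, threshold=8.0, threshold_type="MAD", trig_int=6.0)
```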

0.3.2

* Implement reading Party objects from multiple files, including wildcard
expansion. Template information will only be read if it was not previously
read in (which is a little more efficient). See the sketches after this list.
* Allow reading of Party objects without reading the catalog files.
* Check the quality of downloaded data in `Tribe.client_detect()` and remove
data that would otherwise result in errors.
* Add `process_cores` argument to `Tribe.client_detect()` and `Tribe.detect()`
to provide a separate number of cores for processing and peak-finding - both
functions are less memory efficient than fftw correlation and can result in
memory errors if using lots of cores (see the sketches after this list).
* Allow passing of `cores_outer` kwarg through to fftw correlate functions to
control inner/outer thread numbers. If given, `cores` will define the number
of inner-cores (used for parallel fft calculation) and `cores_outer` sets
the number of channels to process in parallel (which results in increased
memory usage).
* Allow Tribe and Party IO to use QUAKEML or SC3ML format for catalogs (NORDIC
to come once obspy updates).
* Allow Party IO to not write detection catalogs if so desired, because
writing and reading large catalogs can be slow.
* If detection-catalogs are not read in, then the detection events will be
generated on the fly using `Detection._calculate_event`.
* BUG-FIX: When one template in a set of templates had a channel repeated,
all detections had an extra, spurious pick in their event object. This
should no longer happen.
* Add `select` method to `Party` and `Tribe` to allow selection of a
specific family/template (illustrated in the sketches after this list).
* Use a compiled C peak-finding function instead of scipy ndimage - speed-up
of about 2x in testing.
* BUG-FIX: When `full_peaks=True` for `find_peaks2_short`, values that were not
above their neighbours were returned. Now only values greater than both of
their neighbours are returned.
* Add ability to "retry" downloading in `Tribe.client_detect`.
* Change behaviour of template_gen for data that are daylong, but do not start
within 1 minute of a day-break - previous versions enforced padding to
start and end at day-breaks, which led to zeros in the data and undesirable
behaviour.
* BUG-FIX: Normalisation errors were not properly passed back from the internal
fftw correlation functions, and gaps were not always properly handled during
long-period trends. The variance threshold is now raised, and Python checks for
low variance and applies a gain to stabilise correlations if needed.
* Plotting functions are now tested and have a more consistent interface:
    * All plotting functions accept the keyword arguments `save`, `savefile`,
      `show`, `return_figure` and `title`.
    * All plotting functions return a figure.
    * `SVD_plot` renamed to `svd_plot`.
* Enforce pre-processing even when no filters or resampling is to be done
to ensure gaps are properly processed (when called from `Tribe.detect`,
`Template.detect` or `Tribe.client_detect`)
* BUG-FIX in `Tribe.client_detect` where data were processed from data
one sample too long resulting in minor differences in data processing
(due to difference in FFT length) and therefore minor differences
in resulting correlations (~0.07 per channel).
* Includes extra stability check in fftw_normxcorr which affects the
last sample before a gap when that sample is near-zero.
* BUG-FIX: fftw correlation dot product was not thread-safe on some systems.
The dot-product did not have the inner index protected as a private variable.
This did not appear to cause issues on Linux with Python 3.x or on Windows, but
did cause issues on Linux with Python 2.7 and on Mac OS builds.
* KeyboardInterrupt (e.g. ctrl-c) should now be caught during Python parallel
processes.
* Stopped allowing outer-threading on OSX, clang openMP is not thread-safe
for how we have this set-up. Inner threading is faster and more memory
efficient anyway.
* Added testing script (`test_eqcorrscan.py`, which will be installed to your
path on installation of EQcorrscan) that will download all the relevant
data and run the tests on the installed package - no need to clone
EQcorrscan to run tests!
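
To make the threading options above concrete, below is a sketch of a
`Tribe.client_detect` call using the new arguments. The client, time window,
thresholds and the `retries` keyword name are illustrative assumptions rather
than values prescribed by this changelog.

```python
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

# `tribe` is an existing, pre-built Tribe (placeholder for this sketch).
client = Client("GEONET")  # any obspy FDSN client

party = tribe.client_detect(
    client=client,
    starttime=UTCDateTime(2018, 1, 1),
    endtime=UTCDateTime(2018, 1, 2),
    threshold=8.0, threshold_type="MAD", trig_int=6.0,
    cores=4,           # inner threads used for parallel fft calculation
    cores_outer=1,     # channels correlated in parallel (>1 uses more memory)
    process_cores=2,   # smaller pool for processing and peak-finding
    retries=3,         # re-try failed downloads (keyword name assumed)
)
```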
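
A second sketch shows the new Party IO and selection behaviour. The file
pattern, the `read_detection_catalog` keyword name and the template name are
assumptions for illustration only.

```python
from eqcorrscan.core.match_filter import Party

# Read several Party files at once via wildcard expansion, skipping the
# (potentially slow) detection catalogs; detection events are then built on
# the fly by `Detection._calculate_event` when needed.
party = Party().read(
    "detections_2018_??.tgz",      # hypothetical file pattern
    read_detection_catalog=False,  # keyword name assumed
)

# Pull out the Family built from a single template by its name.
family = party.select("my_template")  # template name is a placeholder
```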

0.3.1

* Cleaned imports in utils modules
* Removed parallel checking loop in archive_read.
* Add better checks for timing in lag-calc functions (207)
* Removed gap-threshold of twice the template length in `Tribe.client_detect`, see
issue 224.
* Bug-fix: give multi_find_peaks a cores kwarg to limit thread
usage.
* Check for the same value in a row in continuous data when computing
correlations and zero resulting correlations where the whole window
is the same value repeated (224, 230).
* BUG-FIX: template generation `from_client` methods for swin=P_all or S_all
now download all channels and return them (as they should). See 235 and 206
* Change from raising an error if data from a station are not long enough, to
logging a critical warning and not using the station.
* Add ability to give multiple `swin` options as a list (see the sketch after
this list). Remains backwards compatible with single `swin` arguments.
* Add option to `save_progress` for long running `Tribe` methods. Files
are written to temporary files local to the caller.
* Fix bug where an error was raised if gaps overlapped the endtime set in
pre_processing - this happened when downloading data with a deliberate pad at
either end.
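
A sketch of passing multiple `swin` options as a list to template generation
(using the `template_gen(method=...)` interface from 0.3.0). The filter
settings, the `client_id` parameter name and the pre-existing picked `catalog`
are assumptions for illustration.

```python
from eqcorrscan.core.template_gen import template_gen

# `catalog` is an existing obspy Catalog with picks (placeholder here).
templates = template_gen(
    method="from_client",
    catalog=catalog,
    client_id="GEONET",   # FDSN client identifier (parameter name assumed)
    lowcut=2.0, highcut=9.0, samp_rate=20.0, filt_order=4,
    length=3.0, prepick=0.15,
    swin=["P", "S"],      # list of windows; a single string still works
)
```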

0.3.0

* Compiled peak-finding routine written to speed-up peak-finding.
* Change default match-filter plotting to not decimate unless it has to.
* BUG-FIX: changed minimum variance for fftw correlation backend.
* Do not try to process when no processing needs to be done in
core.match_filter._group_process.
* Length checking in core.match_filter._group_process done in samples rather
than time.
* BUG-FIX: Fix bug where data lengths were not correct in
match_filter.Tribe.detect when sampling time-stamps were inconsistent between
channels, which previously resulted in an error.
* BUG-FIX: Fix memory-leak in tribe.construct
* Add plotting options for plotting rate to Party.plot
* Add filtering detections by date as Party.filter
* BUG-FIX: Change method for Party.rethreshold: list.remove was not reliable.
* Add option `full_peaks` to detect methods to map to find_peaks.
* Pre-processing (and match-filter object methods) are now gap-aware and will
accept gappy traces and can return gappy traces. By default gaps are filled to
maintain backwards compatibility. Note that the fftw correlation backend
requires gaps to be padded with zeros (see the sketch after this list).
* **Removed sfile_utils** Support for Nordic IO has been upgraded and moved to
obspy as of obspy version 1.1.0. All functions are there and many bugs have
been fixed. This also means the removal of Nordic-specific functions in
EQcorrscan - the following functions have been removed:
    * template_gen.from_sfile
    * template_gen.from_contbase
    * mag_calc.amp_pick_sfile
    * mag_calc.pick_db
All removed functions will error and tell you to use obspy.io.nordic.core.
This now means that you can use obspy's `read_events` to read in sfiles.
* Added `P_all` and `S_all` options to template generation functions
to allow creation of multi-channel templates starting at the P and S
times respectively.
* Refactored `template_gen`: all options are available via
`template_gen(method=...)`, and deprecation warnings are in place.
* Added some docs for converting older templates and detections into Template
and Party objects.
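
To illustrate the gap handling described above, here is a minimal sketch of
running `shortproc` on a gappy stream. The input file, filter values and the
`fill_gaps` keyword name are assumptions made for illustration.

```python
from obspy import read
from eqcorrscan.utils.pre_processing import shortproc

st = read("gappy_waveforms.mseed")  # hypothetical file containing gaps

# Default behaviour: gaps are filled (zero-padded) for backwards
# compatibility, which is also what the fftw correlation backend requires.
st_filled = shortproc(
    st.copy(), lowcut=2.0, highcut=9.0, filt_order=4, samp_rate=20.0)

# With gap-filling disabled (keyword name assumed) the processed stream is
# returned with its gaps intact.
st_gappy = shortproc(
    st.copy(), lowcut=2.0, highcut=9.0, filt_order=4, samp_rate=20.0,
    fill_gaps=False)
```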

0.2.7

* Patch multi_corr.c to work with more versions of MSVC;
* Revert to using single-precision floats for correlations (as in previous,
< 0.2.x versions) for memory efficiency.

0.2.6

* Added the ability to change the correlation functions used in detection
methods through the parameter xcorr_func of match_filter, Template.detect
and Tribe.detect, or using the set_xcorr context manager in
the utils.correlate module (see the sketch after this list). Supported options
are:
    * numpy
    * fftw
    * time-domain
    * or passing a function that implements the xcorr interface.
* Added the ability to change the concurrency strategy of xcorr functions
using the parameter concurrency of match_filter, Template.detect
and Tribe.detect. Supported options are:
    * None - for single-threaded execution in a single process
    * multithread - for multi-threaded execution
    * multiprocess - for multiprocess execution
    * concurrent - allows functions to describe their own preferred concurrency
      methods, defaults to multithread
* Change debug printing output, it should be a little quieter;
* Speed-up time-domain using a threaded C-routine - separate from frequency
domain C-routines;
* Expose useful parallel options for all correlation routines;
* Expose cores argument for match-filter objects to allow limits to be placed
on how much of your machine is used;
* Limit number of workers created during pre-processing to never be more than
the number of traces in the stream being processed;
* Implement openMP parallelisation of cross-correlation sum routines - memory
consumption reduced by using shared memory, and by computing the
cross-correlation sums rather than individual channel cross-correlations.
This also leads to a speed-up. This routine is the default concurrent
correlation routine;
* Test examples in rst doc files to ensure they are up-to-date;
* Tests that were prone to timeout issues have been migrated to run on CircleCI
to allow quick re-starting of failures not due to code errors.
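
A minimal sketch of selecting a correlation backend and concurrency strategy
per call, as described in the first two items of this list; the `tribe` and
`st` objects, detection parameters and the backend/concurrency combination are
placeholders.

```python
# `tribe` is an existing Tribe and `st` a pre-loaded obspy Stream
# (placeholders for this sketch).
party = tribe.detect(
    stream=st,
    threshold=8.0, threshold_type="MAD", trig_int=6.0,
    xcorr_func="numpy",          # or "fftw", time-domain, or a custom callable
    concurrency="multiprocess",  # None, multithread, multiprocess or concurrent
)
```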
