Vegas

Latest version: v6.1.1


2.1.1
==========================
This is a very minor upgrade. A couple of variable declarations were missing
in one of the Cython routines, which slowed that routine (and vegas) down
significantly (e.g., by 20%) in some situations. These are now fixed. Nothing
changes other than the run time.

2.1
=========================
vegas normally uses weighted averages to combine results from different
iterations. This is important because earlier iterations may have much larger
errors and so should carry less weight in the average. The weighted averages
mean, however, that the integral estimates are biased (see the discussion of
systematic error in the Tutorial). The bias is completely negligible compared
to the statistical errors, and so unproblematic, unless the number of
iterations (nitn) is made very large (e.g., thousands). vegas does not need,
and no longer uses, weighted averages when parameter adapt=False, because
then all iterations are statistically identical (there is no adaptation
going on). Consequently the estimates for the mean and standard deviation
are unbiased when adapt=False. This is likely a non-issue for most,
and possibly all, applications (since the bias vanishes so quickly with
increasing neval --- like 1/neval), but taking unweighted averages is
more correct when adapt=False, so that is what vegas does now.
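The two ways of combining iteration results can be sketched in plain Python
(a simplified illustration of the statistics involved, not vegas's internal
code; the sample numbers are made up):

```python
import math

def weighted_avg(means, sdevs):
    # Inverse-variance weighting, as used when adapt=True: noisy early
    # iterations get small weights, but weights built from measured
    # sdevs introduce a small bias (vanishing like 1/neval).
    weights = [1.0 / s ** 2 for s in sdevs]
    norm = sum(weights)
    mean = sum(w * m for w, m in zip(weights, means)) / norm
    return mean, math.sqrt(1.0 / norm)

def unweighted_avg(means, sdevs):
    # Plain average, as used when adapt=False: every iteration is
    # statistically equivalent, so the estimate is unbiased.
    n = len(means)
    mean = sum(means) / n
    sdev = math.sqrt(sum(s ** 2 for s in sdevs)) / n
    return mean, sdev

# One noisy early iteration plus two precise later ones:
means = [1.10, 1.02, 0.99]
sdevs = [0.50, 0.05, 0.05]
print(weighted_avg(means, sdevs))    # noisy iteration barely matters
print(unweighted_avg(means, sdevs))  # all iterations count equally
```

With adaptation on, the weighted average is essential (the first iteration
above would otherwise dominate the error); with adaptation off, the plain
average is both adequate and unbiased.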

Other changes:

- Added parameter adapt to Integrator, as mentioned above. Setting adapt=False
prevents vegas from adapting any further. See the discussion in the Tutorial.

- RWAvg and RWAvgArray have changed names to RAvg and RAvgArray. The R
stands for "running", since these objects keep a running total. The "W"
used to stand for "weighted" but is inappropriate now since averages
may be weighted or unweighted (depending upon parameter Integrator.adapt).

- Changed the way vegas handles situations where variances
(or diagonal elements of covariance matrices) are negative or otherwise
afflicted by roundoff error. A small positive number, scaled by
mean**2 (roughly 1e-15 * mean**2), is added. This helps vegas survive unusual
situations, like a constant integrand (independent of x), without generating
NaNs or divide checks.
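The idea can be sketched as follows (a simplified stand-in implied by the
description above, not the actual vegas code; the function name is made up):

```python
def regularize_var(var, mean, eps=1e-15):
    # Roundoff error can drive a computed variance slightly negative,
    # e.g. for a constant integrand, whose true variance is exactly 0.
    # Clip at zero and add a tiny floor scaled by mean**2 so later
    # sqrt() and division operations never produce NaNs.
    return max(var, 0.0) + eps * mean * mean

# A constant integrand f(x) = 2 might yield var ~ -1e-17 from roundoff:
safe = regularize_var(-1e-17, mean=2.0)
print(safe)  # small but strictly positive
```

Scaling the floor by mean**2 keeps the correction proportionate: it is
negligible relative to any genuine variance but still large enough to keep
divisions well-defined.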

- More robust mechanisms for defining integrands for vegas's vector
mode. Deriving from vegas.VecIntegrand but failing to define a __call__
method results in an error message (saying there is no __call__). There
is also a new function decorator, vegas.vecintegrand, that can be applied
to an ordinary function to make it suitable as an integrand.
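A decorator of this kind can be sketched in a few lines (a toy stand-in
illustrating the mechanism, not vegas's actual implementation; the batch
format here is a plain list of points rather than a numpy array):

```python
class VecIntegrand:
    # Stand-in for vegas.VecIntegrand: base class marking an object
    # as a vector-mode (batch) integrand.
    pass

def vecintegrand(f):
    # Wrap an ordinary function in a VecIntegrand subclass with a
    # __call__ method, so it presents the interface vector mode expects.
    class Wrapped(VecIntegrand):
        def __call__(self, x):
            return f(x)
    return Wrapped()

@vecintegrand
def f(x):
    # x is a batch of points; each point is a pair [x0, x1] here.
    return [x0 ** 2 + x1 ** 4 for x0, x1 in x]

print(isinstance(f, VecIntegrand))   # True: usable as a batch integrand
print(f([[2.0, 1.0], [0.0, 3.0]]))
```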

2.0.1
===========================
Tiny improvement in how vegas interacts with the gvar module. The gvar
module can now be installed by itself, without the rest of the lsqfit
distribution: pip install gvar. Array-valued integrands work much better
with gvar installed.

2.0
==========================
This is a significant upgrade and cleanup of the code. As a result
it is not entirely backwards compatible with earlier versions (see below).

- Integrands are allowed to be array-valued now, with different elements
of the array representing different integrands. vegas always tunes on
the first function in the array. vegas determines whether the
integrand is scalar- or array-valued automatically, and returns
results that are either scalar or array-valued, as appropriate.
This functionality replaces method Integrator.multi, and is
implemented quite a bit differently (and better);
Integrator.multi has now disappeared. There is no longer a need for a
separate method for array-valued integrands.

- The calling conventions for integrands in vector mode have been changed
(simplified): eg,

    class fv(vegas.VecIntegrand):
        def __call__(self, x):
            return x[:, 0] ** 2 + x[:, 1] ** 4

See discussion in the tutorial. This is not compatible with the old
convention. The fcntype argument to Integrator is no longer needed.

- Renamed RunningWAvg to RWAvg -- shorter name. Also introduced RWAvgArray
for arrays of same.

- Major reorganization of the internal code to simplify the developer's
life. The code appears to be somewhat faster, though probably not
enough to be noticed by anyone other than the developer.

1.3
========================

- Introduced new method Integrator.multi for doing multiple integrals
simultaneously, using the same integration points for all of the
integrals. Integrating simultaneously can lead to very large reductions
in the uncertainties for ratios or differences of integrals whose
integrands are very similar. See discussion in the documentation under
"Multiple Integrands Simultaneously."
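The error reduction for a ratio can be demonstrated with a toy Monte Carlo
in plain Python (not vegas itself): two similar integrands estimated on the
same points give strongly correlated results, so their ratio fluctuates far
less than when the points are independent.

```python
import random
import statistics

def mc_estimate(f, pts):
    # Crude Monte Carlo estimate of the integral of f over [0, 1].
    return sum(f(x) for x in pts) / len(pts)

f = lambda x: x ** 2            # two nearly identical integrands
g = lambda x: x ** 2 + 0.01 * x

random.seed(1)
shared, independent = [], []
for _ in range(200):
    pts_a = [random.random() for _ in range(100)]
    pts_b = [random.random() for _ in range(100)]
    shared.append(mc_estimate(f, pts_a) / mc_estimate(g, pts_a))
    independent.append(mc_estimate(f, pts_a) / mc_estimate(g, pts_b))

# The spread of the shared-points ratio is dramatically smaller:
print(statistics.stdev(shared), statistics.stdev(independent))
```

The fluctuations in numerator and denominator largely cancel when the same
points are used, which is exactly what Integrator.multi exploits.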

- Introduced iterators (Integrator.random and Integrator.random_vec)
that return vegas integration points and weights
for applications that use vegas as a random number generator.

- Changed the semantics concerning the memory optimization introduced in
v1.2. To run with minimum memory set parameter minimize_mem = True. This
will cause vegas to use extra integrand evaluations, which can slow it by
50-100%, but also decouples the internal memory used from neval. The
default value, False, is the better choice unless vegas is running out
of RAM. Parameter max_nhcube limits the number of h-cubes used in the
stratification, unless beta=0 or minimize_mem=True in which case it is
ignored.

1.2
========================

- Memory optimization: The (new) adaptive stratified sampling algorithm
can use a lot of memory since it must store a float (sigf = the std dev of
the integrand) for each h-cube. When neval gets to be 1e8 or larger,
the memory needs start to approach typical RAM limits (in laptops,
anyway). To avoid exceeding these limits, which would greatly slow
progress, vegas now switches to a different mode of operation when
the number of h-cubes exceeds parameter max_nhcube (set by default
to 5e8). Rather than store values of sigf for every h-cube for use
in the next iteration, it recomputes sigf just before using it
to move integrand evaluations around (and then throws the sigf value away).
This requires extra integrand evaluations, beyond those used to estimate
the integral. The number of extra evaluations is between 50% and 100% of
the number used to estimate the integral, typically increasing
execution time by the same fractions. This is worthwhile provided the
adaptive stratified sampling decreases errors by at least 30%
(since omitting it would allow up to 2x as many integration points
for the same cost, decreasing errors by a factor of 1/sqrt(2)). The
adaptive stratified sampling usually decreases errors by this amount,
and frequently by much more. The new mode is in operation if (internal)
attribute minimize_sigf_mem is True. Again, the threshold for this
new behavior is set by max_nhcube, whose default of 5e8 is
sufficiently large that the new mode will be used quite infrequently.
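The break-even arithmetic behind the 30% figure, and the memory pressure
behind max_nhcube, are easy to check (Monte Carlo errors fall like
1/sqrt(neval); 8 bytes per stored float is assumed):

```python
import math

# Recomputing sigf costs 50-100% extra integrand evaluations.  Spending
# that budget on up to 2x more integration points instead would shrink
# the error by at most a factor of 1/sqrt(2):
best_alternative = 1 / math.sqrt(2)
print(best_alternative)  # ~0.71, i.e. at most a ~29% error reduction

# So adaptive stratified sampling pays off whenever it cuts errors by
# 30% or more, which it usually does.

# Storing one sigf float (8 bytes) per h-cube at the max_nhcube
# threshold of 5e8 h-cubes:
print(5e8 * 8 / 1e9, "GB")  # 4.0 GB, near typical laptop RAM limits
```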

- Refactored Integrator._integrate to prepare for a future project.

- Added tests for the beta=0.0 mode and for the propagation of Python
exceptions from the integrand.

- More polished documentation - still a work in progress.

- Fixed bug in pickling of Integrator. Added testing for pickling.
