GPBoost

Latest version: v1.4.0


0.6.7

- add Grabit model / Tobit objective function
- support calculation of approximate standard deviations of fixed effects coefficients in GLMMs
- [R package] added function for creating partial dependence plots (gpb.plot.partial.dependence)
- [R package] internal clean-up: use R's internal .Call function, correct the function registration, use R's internal error function, use R's standard routines to access data in C++, move more finalizer logic to the C++ side, fix PROTECT/UNPROTECT issues, and limit the symbols exported from the DLL
- [Python package] Fix bug in scikit-learn wrapper for classification
- change the initialization and the convergence-criterion check of the mode-finding algorithm used in the Laplace approximation for non-Gaussian data
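
The Grabit model combines gradient boosting with a Tobit likelihood for censored data. As a rough illustration of what a Tobit objective computes (a standalone sketch, not GPBoost's implementation; the function name and censoring bounds are made up for this example):

```python
import math

def norm_logpdf(z):
    # log density of the standard normal
    return -0.5 * z * z - 0.5 * math.log(2.0 * math.pi)

def norm_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tobit_nll(y, mu, sigma=1.0, lower=0.0, upper=float("inf")):
    """Negative log-likelihood of one observation under a Tobit model
    with censoring bounds `lower`/`upper` (illustrative sketch only)."""
    if y <= lower:          # left-censored: we only know y <= lower
        return -math.log(norm_cdf((lower - mu) / sigma))
    if y >= upper:          # right-censored: we only know y >= upper
        return -math.log(1.0 - norm_cdf((upper - mu) / sigma))
    z = (y - mu) / sigma    # uncensored: usual Gaussian log-density
    return -(norm_logpdf(z) - math.log(sigma))
```

In the Grabit setting, gradient boosting minimizes the sum of such terms with `mu` given by the tree ensemble.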

0.6.0

- add support for Wendland covariance function and covariance tapering
- add Nelder-Mead as covariance parameter optimizer option
- change calculation of gradient for GPBoost algorithm and use permutations for Cholesky factors for non-Gaussian data
- use permutations for Cholesky factors for Gaussian data when having sparse matrices
- make 'gradient_descent' the default optimizer option also for Gaussian data
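
Covariance tapering multiplies a covariance function by a compactly supported taper so that entries beyond a cut-off distance are exactly zero and large covariance matrices become sparse. A sketch using one common Wendland function (GPBoost's exact parameterization may differ):

```python
import math

def wendland(d, theta):
    """Compactly supported Wendland function psi_{3,1} (one common choice,
    not necessarily GPBoost's parameterization): zero for d >= theta."""
    t = d / theta
    if t >= 1.0:
        return 0.0
    return (1.0 - t) ** 4 * (1.0 + 4.0 * t)

def tapered_exponential(d, sigma2, rho, theta):
    """Exponential covariance multiplied by a Wendland taper: keeps the
    short-range behaviour but is exactly zero beyond distance theta."""
    return sigma2 * math.exp(-d / rho) * wendland(d, theta)
```

Because the tapered covariance vanishes beyond `theta`, the resulting covariance matrix can be stored and factorized as a sparse matrix.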

0.5.0

- add a function to the R and Python packages for choosing tuning parameters via deterministic or random grid search
- faster training and prediction for grouped random effects models for non-Gaussian data when there is only one grouping variable
- faster training and prediction for Gaussian process models for non-Gaussian data when there are duplicate locations
- faster prediction for grouped random effects models for Gaussian data when there is only one grouping variable
- support pandas DataFrame and Series in Python package
- fix bug in initialization of score for the GPBoost algorithm for non-Gaussian data
- add lightweight option for saving booster models with gp_models by not saving the raw data (this is the new default)
- update Eigen to the newest version (commit b271110788827f77192d38acac536eb6fb617a0d)
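
The idea behind the tuning-parameter helper can be sketched generically: enumerate all parameter combinations (deterministic grid search) or a random subset of them (random grid search) and keep the combination with the best validation score. This is a plain-Python illustration of the technique, not the signature of the actual gpboost function:

```python
import itertools
import random

def grid_search(param_grid, score_fn, num_random=None, seed=0):
    """Deterministic or random grid search (illustrative sketch).
    param_grid: dict mapping parameter name -> list of candidate values.
    score_fn:   callable(params_dict) -> validation score (lower is better).
    num_random: if given, evaluate only that many randomly drawn combinations."""
    keys = sorted(param_grid)
    combos = [dict(zip(keys, vals))
              for vals in itertools.product(*(param_grid[k] for k in keys))]
    if num_random is not None:   # random grid search: sample a subset
        combos = random.Random(seed).sample(combos, min(num_random, len(combos)))
    return min(combos, key=score_fn)
```

In practice `score_fn` would run cross-validation or evaluate on a validation set.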

0.4.0

- update LightGBM part to version 3.1.1.99 (git commit 42d1633aebe124821cff42c728a42551db715168)
- add support for scikit-learn wrapper interface for GPBoost
- change the initialization of the score (= tree ensemble) for non-Gaussian data in the GPBoost algorithm
- add support for saving and loading models from file in R and Python packages
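
A scikit-learn wrapper exposes the estimator contract (fit/predict/get_params/set_params) so a model plugs into scikit-learn tooling such as pipelines and model selection. A toy object following that contract (purely illustrative; gpboost's actual wrapper classes have a much richer interface):

```python
class MeanRegressor:
    """Minimal object following the scikit-learn estimator contract.
    This toy model just predicts the training mean plus an offset."""

    def __init__(self, offset=0.0):
        self.offset = offset

    def get_params(self, deep=True):
        # expose constructor parameters, as scikit-learn expects
        return {"offset": self.offset}

    def set_params(self, **params):
        for k, v in params.items():
            setattr(self, k, v)
        return self

    def fit(self, X, y):
        # learned attributes end in "_" by scikit-learn convention
        self.mean_ = sum(y) / len(y) + self.offset
        return self

    def predict(self, X):
        return [self.mean_ for _ in X]
```

Following this contract is what allows an estimator to be used inside scikit-learn's `GridSearchCV`, pipelines, and similar utilities.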

0.3.0

- Add support for non-Gaussian data (other loss functions than L2 loss). Currently supported: binary data, Poisson data, gamma distributed data
- Changed the default value for 'use_gp_model_for_validation' from False to True
- Covariance parameter estimation: add safeguard against too large steps also when using Nesterov acceleration
- Changed the default value for 'use_nesterov_acc' from False to True. This only affects gradient descent based covariance parameter estimation; for Gaussian data (= everything the library could handle before version 0.3.0), Fisher scoring (aka natural gradient descent) is used by default, and this setting is irrelevant there
- Changed the default values for gradient descent based covariance parameter estimation: 'lr_cov=0.1' (before 0.01), 'lr_coef=0.1' (before 0.01), 'acc_rate_coef=0.5' (before 0.1). As above, this does not affect Fisher scoring, the default for Gaussian data
- Moved the 'std_dev' option from a standalone argument of a GPModel's 'fit' function into the 'params' argument of 'fit'
- Removed the boosting parameter 'has_gp_model' (not visible to most users)
- Moved storage of the optimizer parameters 'optimizer_cov' and 'init_cov_pars' from R/Python to C++ only (not visible to the user)
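
The gradient descent items above (Nesterov acceleration, the step-size safeguard, the lr/acc_rate defaults) can be sketched on a one-dimensional problem. This is an illustrative re-implementation of the general technique with the defaults noted above, not GPBoost's internal optimizer:

```python
def minimize(loss, grad, x0, lr=0.1, acc_rate=0.5, max_iter=500, tol=1e-10):
    """Gradient descent with Nesterov acceleration and a step-halving
    safeguard against too large steps (1-D sketch)."""
    x, x_prev = x0, x0
    for _ in range(max_iter):
        m = x + acc_rate * (x - x_prev)        # Nesterov momentum point
        f_old = loss(x)
        step = lr
        x_new = m - step * grad(m)
        # safeguard: halve the step while the loss would increase
        while loss(x_new) > f_old and step > 1e-12:
            step *= 0.5
            x_new = m - step * grad(m)
        # stop on small relative change of the loss
        if abs(loss(x_new) - f_old) < tol * (abs(f_old) + 1.0):
            return x_new
        x_prev, x = x, x_new
    return x
```

For covariance parameter estimation, `loss` would be the negative log-likelihood and `x` the (transformed) covariance parameters.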

0.2.0

- GPModel: change the default convergence criterion for model fitting to the relative change in the negative log-likelihood
- GPModel: add safeguard against too large steps (step halving) for gradient descent and Fisher scoring (without Nesterov acceleration) when doing model fitting
- Add support for R version 4.0
- GPModel: faster initialization for grouped data when the grouping data is not ordered
- GPModel: faster model fitting for grouped data due to changes in the use of the Woodbury identity
