Changes:
* Changed the default border count for float feature binarization to 254 on CPU to achieve better quality
* Fixed the default random seed to `0`
* Support models with more than 254 feature borders or one-hot values when doing predictions
* Added model summation support in python: use `catboost.sum_models()` to sum models with provided weights
* Added a JSON model tutorial: [json_model_tutorial.ipynb](https://github.com/catboost/catboost/blob/master/catboost/tutorials/apply_model/json_model_tutorial.ipynb)
0.10.4.1
Changes:
- Bugfix for #518
0.10.4
Breaking changes:
- In Python 3 some functions returned dictionaries with keys of type `bytes`, in particular `eval_metrics` and `get_best_score`. These are fixed to use keys of type `str`.

Changes:
- New metric `NumErrors:greater_than=value`
- New metric and objective `L_q:q=value`
- `model.score(X, y)` can now work with a `Pool` and take the labels from the `Pool`
0.10.3
Changes:
* Added EvalResult output after GPU CatBoost training
* Supported the prediction type option on GPU
* Added the `get_evals_result()` method and the `evals_result_` property to the model in the python wrapper, giving users access to metric values
* Supported string labels for GPU training in cmdline mode
* Many improvements in the JNI wrapper
* Updated the NDCG metric: sped it up and added NDCG with exponentiation in the numerator as a new NDCG mode
* CatBoost no longer drops unused features from the model after training
* Write training finish time and CatBoost build info to model metadata
* Fixed automatic pairs generation for the GPU PairLogitPairwise target
0.10.2
Main changes:
* Fixed Python 3 support in `catboost.FeaturesData`
* 40% speedup of QuerySoftMax CPU training
0.10.1
Improvements:
* 2x speedup of pairwise loss functions
* For all the people struggling with occasional NaNs in test datasets: now we only write warnings about them

Bugfixes:
* Set a default `loss_function` in `CatBoostClassifier` and `CatBoostRegressor`
* CatBoost now writes `Warning` and `Error` logs to stderr