Citrine

Latest version: v3.2.4


1.53.1

In this release of Citrine Python, we've made some changes to improve your quality of life. We've improved the ability to connect candidates returned in Python to those you see in your browser, and made our GEMD documentation more useful. We're also preparing for some future deprecations on our backend to improve the traceability of our candidates.

What's New
* We now include a `candidate_uid` with each candidate that can be cross-referenced with the Citrine Platform web app URL. 803
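As an illustration, a `candidate_uid` can be used to build a browser link back to the web app. The URL pattern below is purely hypothetical (the actual Citrine Platform URL scheme may differ); the sketch just shows the cross-referencing idea:

```python
# Hypothetical sketch: cross-referencing a design candidate with the web app.
# The base URL and path segments below are illustrative placeholders, not the
# actual Citrine Platform URL scheme.
def candidate_web_url(base_url: str, project_id: str, candidate_uid: str) -> str:
    """Build a browser URL for a candidate from its `candidate_uid`."""
    return f"{base_url}/projects/{project_id}/candidates/{candidate_uid}"

url = candidate_web_url("https://example.citrine-platform.com", "proj-123", "cand-abc")
```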

Improvements
* We've updated our Data Model documentation to be cleaner, more readable, and clearer about how to update your GEMD Templates 802

Deprecated
* We've added a deprecation warning that will trigger when one attempts to switch the assets in a Design Workflow that has generated candidates. This will be disallowed in the future to maintain traceability of candidates to predictors and training data. We recommend creating a new Design Workflow instead. 801

**Full Changelog**: https://github.com/CitrineInformatics/citrine-python/compare/v1.51.1...v1.53.1

1.51.1

In this release of Citrine Python, we are happy to introduce new methods to support exciting features coming in the web application of the Citrine Platform. With both predictor versioning and the ability to update data on an entire branch exposed in this release, you will be able to automate updating branch data and operations on specific versions of your predictors. In addition, we've added a few more improvements to make Citrine Python better documented, easier to debug, and more consistent across classes. As always, we are continuously working on fixes and improvements to keep all our users running smoothly.

What's New
* You can now see the version of a Predictor via the `version` attribute and also interact with specific versions of your Predictors by using the `version` argument in `PredictorCollection` methods (e.g. `project.predictors.get()`). Note that updating a Predictor on the platform will always overwrite any existing Draft. If validation is successful, the Predictor's version will be incremented. 785, 796
* In line with upcoming UI releases, you now have the capability to update the data on your Branch to automate the process of pointing all predictors on your branch to the latest version(s) of your data source(s) with one call: `project.branches.update_data(branch=my_branch)`. 793, 797
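The versioning semantics described above can be sketched with a small local model. This is a stand-in for illustration only, not the SDK's implementation: updating always overwrites any existing draft, and successful validation advances the version.

```python
# Local model of the described Predictor versioning semantics (not the SDK):
# an update always overwrites the current draft; a successful validation
# promotes the draft and increments the version number.
class PredictorRecord:
    def __init__(self):
        self.version = 1
        self.draft = None

    def update(self, config):
        self.draft = config  # always overwrites any existing draft

    def validate(self, ok=True):
        if ok and self.draft is not None:
            self.version += 1  # successful validation increments the version
            self.draft = None
        return self.version
```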

Improvements
* We've enhanced our documentation to describe interactions with Experiment Data Sources and updating Branches to the latest data sources. 792, 798, 799
* The `status_detail` field for Predictors and Design Spaces now has a more detailed structure, with individual message strings separated into list elements. 791
* The method for creating a quick default predictor is now `create_default`, replacing the `auto_configure` method. The behavior is the same, but the name is now much more consistent with other areas of our platform (e.g. Design Spaces). 794
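A common way to carry out such a rename is a deprecated alias that delegates to the new name. The sketch below illustrates the general pattern with hypothetical stand-in functions; it is not the SDK's actual source:

```python
import warnings

# Illustrative deprecation-alias pattern (hypothetical stand-ins, not the SDK):
# the old name delegates to the new one and emits a DeprecationWarning.
def create_default(predictor_name):
    """Canonical method for building a quick default predictor (sketch)."""
    return f"default predictor for {predictor_name}"

def auto_configure(predictor_name):
    """Deprecated alias kept for backward compatibility."""
    warnings.warn("auto_configure is deprecated; use create_default instead",
                  DeprecationWarning, stacklevel=2)
    return create_default(predictor_name)
```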

Fixes
* Minor internal fixes. 795

**Full Changelog**: https://github.com/CitrineInformatics/citrine-python/compare/v1.44.1...v1.51.1

1.44.1

This release of Citrine Python introduces a few new capabilities while paving the way for new features coming in the Citrine Platform. Users can now set random seeds for Predictor Evaluation Workflows to ensure repeatability, create Design Spaces that are constrained to the bounds of their training data, and stream the contents of files on our platform directly. Additionally, we've introduced the first methods around the Experiment Data Source, which will be introduced in more detail in upcoming releases.

What's New
* We now allow Predictor Evaluation Workflows to be triggered with an optional `random_state` argument to pass a random seed to the evaluation method. This will allow users to set the random seed and ensure evaluation results are deterministic and reproducible. 788
* We have added the capability in the `design_spaces.create_default()` method to constrain parameters based on the predictor's training data. By passing `include_parameter_constraints=True` to the `create_default` method, process parameters will be constrained to the range of the training data in the resulting design space. 789
* We now have the ability to directly access the byte stream of a `file_link` via the `read` method. 790
* In preparation for upcoming platform features, we have added the ability to `read` Experiment Data Sources to a CSV format. An Experiment Data Source can be identified as an attribute of a specific Branch, or as part of the `training_data` of a Predictor. You can read one by calling the `.read()` method on the `ExperimentDataSourceCollection` resource and passing a UID or `ExperimentDatasource` object. The resulting information allows you to inspect what is in the Data Source so you can verify, ingest, or perform additional analysis on your training data. We will include more documentation on how to interact with Experiment Data Sources in future releases. 787
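The value of a `random_state` seed is that the same seed reproduces the same evaluation splits. The snippet below is a local sketch of that principle using Python's standard library; it models the behavior and is not the SDK's evaluation code:

```python
import random

# Local sketch of why a fixed `random_state` makes evaluation reproducible
# (a model of the behavior, not the SDK's evaluation internals).
def split_folds(items, n_folds, random_state=None):
    """Shuffle items with a seeded RNG and deal them into n_folds folds."""
    rng = random.Random(random_state)  # the seed fully determines the shuffle
    shuffled = items[:]
    rng.shuffle(shuffled)
    return [shuffled[i::n_folds] for i in range(n_folds)]

first = split_folds(list(range(10)), 3, random_state=42)
second = split_folds(list(range(10)), 3, random_state=42)
# With the same seed, the folds come out identical.
```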

**Full Changelog**: https://github.com/CitrineInformatics/citrine-python/compare/v1.41.1...v1.44.1

1.41.1

In this release of Citrine Python, we've made a minor, but critical update to our dependencies to keep you all running smoothly as the Python ecosystem continues to develop.

Fixes
* Updated default install of pint for gemd-python to account for an upstream interface change https://github.com/CitrineInformatics/citrine-python/pull/786

**Full Changelog**: https://github.com/CitrineInformatics/citrine-python/compare/v1.41.0...v1.41.1

1.41.0

In this release of Citrine Python, we've added some additional functionality that will enable users to share Predictor Evaluation Workflows (and their results) within a Team. We've also removed official support for Python 3.6, which reached end of life in late 2021, to keep up with the steady drumbeat of software development.

What's New
* Users can now publish Predictor Evaluation Workflows (PEWs) from a project to a team and pull PEWs into other projects within that team. Doing so allows users to view PEW results from any project with access to it and the predictor that was evaluated. https://github.com/CitrineInformatics/citrine-python/pull/784

Deprecation
* Removed official support for Python 3.6. https://github.com/CitrineInformatics/citrine-python/pull/783

**Full Changelog**: https://github.com/CitrineInformatics/citrine-python/compare/v1.40.0...v1.41.0

1.40.0

In this release of Citrine Python, we are shipping a new tool while paving the way for future functionality. We are proud to introduce the Holdout Set Evaluator, which allows users to evaluate model performance on a user-defined holdout set in lieu of our typical cross validation strategy. We've also included updates to our python SDK that will eventually allow users to select different algorithms in their AutoML Predictors and handle archival of predictors once they have been properly versioned. Both features are still in development for most production deployments, but these changes allow us to test and iterate before full deployment.

What's New
* Preparation for algorithmic selection in AutoML Predictors. The addition of the `estimators` field will eventually allow users to select additional algorithms to be considered during training. Full backend functionality still to come. 780
* Introduction of the Holdout Set Evaluator for generating model error metrics on a customizable holdout set in lieu of typical cross validation. The [`HoldoutSetEvaluator`](https://github.com/CitrineInformatics/citrine-python/blob/984df3d6ff399fe22dd001086b3989e2711f5f17/src/citrine/informatics/predictor_evaluator.py#L138), which can be used in a Predictor Evaluation Workflow alongside or instead of a `CrossValidationEvaluator`, takes a Data Source as an argument that is then predicted with the model during workflow execution. The same set of model performance metrics, such as RMSE and PvA results, can then be calculated and returned in the execution results. 768
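The idea behind holdout evaluation can be shown with a small local illustration: score a trained model on a user-supplied holdout set with the same kind of metric (RMSE) rather than cross-validation folds. The model and data below are invented stand-ins, not the SDK's workflow code:

```python
import math

# Local illustration of holdout-set evaluation (stand-ins, not the SDK):
# evaluate a fixed model on a user-defined holdout set and report RMSE.
def rmse(y_true, y_pred):
    """Root-mean-square error over paired observations."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def model(x):
    return 2 * x  # stand-in for a trained predictor

holdout = [(1.0, 2.0), (2.0, 4.5), (3.0, 5.5)]  # (input, measured output) pairs
predictions = [model(x) for x, _ in holdout]
error = rmse([y for _, y in holdout], predictions)
```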

Improvements
* Updated reported predictor archival status to prepare for a change with predictor versions. 782

**Full Changelog**: https://github.com/CitrineInformatics/citrine-python/compare/v1.37.1...v1.40.0
