AtomAI

Latest version: v0.7.8


0.7.0

New functionalities
1) Deep kernel learning (DKL)-based Gaussian process (GP) regression.
The DKL-GP is based on [this paper](https://arxiv.org/abs/1511.02222) and can be used for predicting a functional property (or properties) from structural data such as images. Example of usage:
```python
import atomai as aoi

# Structural image data
n, d1, d2 = imgstack.shape
x_train = imgstack.reshape(n, d1*d2)
# Property
y_train = P[:, 0]  # can be a scalar or vector variable

# Input data dimensions
data_dim = x_train.shape[-1]
# Initialize model
dklgp = aoi.models.dklGPR(data_dim)
# Train
dklgp.fit(
    x_train, y_train,  # inputs and outputs
    training_cycles=100, precision="single", lr=1e-2  # training parameters
)

# Make a prediction (with quantified uncertainty) with the trained model
mean, var = dklgp.predict(x_new)
```


For more details, see the example [notebook](https://colab.research.google.com/github/pycroscopy/atomai/blob/master/examples/notebooks/atomai_dkl_ferroic.ipynb)

2) Pre-trained models
One can now load pre-trained models for atomic feature finding in graphene and BFO-like systems. Currently limited to STEM data. Example of usage:
```python
# Load a model for atom finding in graphene, which was trained on simulated data
model = aoi.models.load_pretrained_model("G_MD")
# Apply it to your data
nn_out, coords = model.predict(new_data)
```


As with any machine learning model, there is a caveat that the performance of pre-trained models will likely degrade significantly on [out-of-distribution](https://arxiv.org/pdf/2007.01434.pdf) data (different feature sizes, objects on the surface not accounted for in the simulations, etc.).

Bug fixes
- The extractor of image patches now checks for NaNs in the cropped data.

0.6.8

- Add atomai.utils.dataset module with experimental datasets for scientific machine learning (a hypothetical usage sketch follows this list)
- Minor bug fixes and documentation improvements
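
The loader names are not shown in these notes, so the following is only a hypothetical sketch of how such a module might be used; `load_experimental_dataset` and its return format are made up for illustration and should be replaced with the actual functions in atomai.utils.dataset.

```python
import atomai as aoi

# Hypothetical call for illustration only -- see atomai.utils.dataset
# for the actual loader names and return formats
imgdata, metadata = aoi.utils.dataset.load_experimental_dataset("stem_graphene")
```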

0.6.7

New functionalities
- Utility functions for converting Segmentor output (coordinates and classes) to files readable by packages such as the Atomic Simulation Environment (ASE), VESTA, etc. (a hand-rolled equivalent is sketched after the learning-rate example below)
- Optional time-dependent learning rate. For example,
```python
import numpy as np

# We are going to start with a constant learning rate, then after 600 iterations
# we begin linearly decreasing it over the next 200 iterations, and keep it
# constant afterwards
lr_t = np.ones(800) * 1e-3
lr_t[600:800] = np.linspace(1e-3, 1e-4, 200)

model.fit(
    images, labels, images_test, labels_test,  # training data
    training_cycles=1000, compute_accuracy=True,  # basic training parameters
    swa=True, lr_scheduler=lr_t  # advanced training parameters
)
```
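
For the coordinate/class conversion mentioned in the first item above, the exact names of the new AtomAI utilities are not shown in these notes, so here is a hand-rolled equivalent using ASE directly. It assumes the Segmentor coordinate output for one image is an (N, 3) array with (y, x, class) columns, and the class-to-element mapping and pixel size are user-supplied placeholders.

```python
import numpy as np
from ase import Atoms
from ase.io import write

# Placeholder assumptions: two atom classes and a 0.1 A/pixel calibration
class_map = {0: "C", 1: "Si"}
pixel_size = 0.1  # Angstrom per pixel

# coords_frame: (N, 3) array of (y, x, class) for a single image
symbols = [class_map[int(c)] for c in coords_frame[:, -1]]
positions = np.zeros((len(coords_frame), 3))
positions[:, :2] = coords_frame[:, :2] * pixel_size  # pixels -> Angstroms

atoms = Atoms(symbols=symbols, positions=positions)
write("segmentor_output.xyz", atoms)  # xyz files are readable by VESTA and ASE
```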


Other changes
- Added new examples (Graph analysis and Im2Spec) and expanded the markdown explanations in the older ones
- Slightly improved documentation

0.6.6

New functionality
- Added option for controlled information capacity increase to VAE and rVAE (jVAE and jrVAE have it by default), based on Eq. (8) in https://arxiv.org/pdf/1804.03599.pdf
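
For reference, the objective in Eq. (8) of that paper replaces the usual KL term with a penalty on its distance from a target capacity $C$ that is gradually increased during training:

$$
\mathcal{L} = \mathbb{E}_{q_\phi(z|x)}\bigl[\log p_\theta(x|z)\bigr] \;-\; \gamma\,\bigl|\, D_{\mathrm{KL}}\bigl(q_\phi(z|x)\,\|\,p(z)\bigr) - C \,\bigr|
$$

where $\gamma$ is a penalty weight and $C$ is annealed (typically linearly) from 0 to a maximum value over a set number of training iterations.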

Bug fixes
- Fixed a bug that prevented loading (older) AtomAI models without a saved optimizer in their meta-dictionary

Other
- Fixed some inconsistencies in the class/method documentation

0.6.5

New functionalities:
- Add a VAE that can learn (simultaneously) both discrete and continuous latent representations (see the sketch after this list)
![image](https://user-images.githubusercontent.com/34245227/109348476-905b2880-7842-11eb-8f50-b9cca334e942.png)
- Add an option to rVAE for annealing the KL terms associated with rotation and image content
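
A minimal sketch of setting up such a joint discrete/continuous model is shown below. The `jVAE` class name follows the naming used in the 0.6.6 notes above, but the constructor arguments (input image dimensions, number of continuous latents, discrete-latent categories) and the `fit` call are assumptions about the interface rather than confirmed signatures.

```python
import atomai as aoi

# Assumed interface: 28x28 images, 2 continuous latents,
# and a single 10-category discrete latent variable
jvae = aoi.models.jVAE(in_dim=(28, 28), latent_dim=2, discrete_dim=[10])

# X_train: stack of images with shape (n, 28, 28)
jvae.fit(X_train, training_cycles=100)
```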

Bug fixes
- Fix a bug that prevented rVAE from working with non-square images
- Fix a bug that caused VAE decoders to "forget" to apply a sigmoid in evaluation mode after training with BCE-with-logits loss

Other improvements
- Add option to set custom encoder and decoder modules in all VAEs
- Add a substantial amount of tests for VI trainer and VAE modules
- Update docs

0.6.2

New functionalities:
- ResHedNet model for advanced edge detection. This model is based on the holistically-nested edge detection [paper](https://ieeexplore.ieee.org/document/7410521). We improved the original model by replacing vanilla convolutional layers with ResNet-like blocks in each segment and by reducing the number of max-pooling operations to 2 (we found that 3 different scales are enough for learning the relevant features in typical microscopy images).
- SegResNet model for general semantic segmentation as an alternative to the default UNet model. It has ResNet-like connections in each segment in addition to UNet-like skip connections between the encoding and decoding paths. See the sketch below for both models.
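
Assuming the new architectures are selectable by name through the existing Segmentor wrapper (an interface assumption, since these notes don't show the call), usage might look like:

```python
import atomai as aoi

# Semantic segmentation with the ResNet-augmented UNet-like model
model = aoi.models.Segmentor("SegResNet", nb_classes=3)
model.fit(images, labels, images_test, labels_test, training_cycles=300)

# Advanced edge detection with ResHedNet (single "edge" class)
edge_model = aoi.models.Segmentor("ResHedNet", nb_classes=1)
```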
Bug fixes:
- Fix a bug that prevented saving/loading custom models
- Fix a bug where a zoom-in operation was performed during data augmentation even when it was set to False
- Fix a bug in output_shape in BasePredictor, which required the output shape to be identical to the input shape
Improvements:
- Add option to pass a custom loss function to trainers for semantic segmentation and im2spec/spec2im (see the sketch after this list)
- Add option to store all training data on CPU when the size of the training data exceeds a certain limit (default limit is 4GB). In this case, only the individual batches are moved to a GPU device at training/test steps.
- Make computation of coordinates optional for SegPredictor
- Automatically save VAE models after each training cycle ("epoch") and not just at the end of training
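
The notes do not show how the custom loss is supplied; the sketch below assumes it can be passed as a callable via the trainer's `loss` keyword, which is an assumption about the interface rather than documented behavior.

```python
import torch
import atomai as aoi

def weighted_bce(prediction, target):
    # Hypothetical custom loss: BCE-with-logits with a fixed positive-class weight
    return torch.nn.functional.binary_cross_entropy_with_logits(
        prediction, target.float(), pos_weight=torch.tensor(5.0))

model = aoi.models.Segmentor(nb_classes=1)
model.fit(images, labels, images_test, labels_test,
          loss=weighted_bce, training_cycles=300)  # `loss=` usage is assumed
```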
New examples:
- New [notebook](https://colab.research.google.com/github/pycroscopy/atomai/blob/master/examples/notebooks/atomai_custom_model.ipynb) on constructing and using (training+predicting) a custom image denoiser with AtomAI
- New [notebook](https://colab.research.google.com/github/pycroscopy/atomai/blob/master/examples/notebooks/atomai_rVAE_digits.ipynb) on applications of rotationally invariant VAE (rVAE) and class-conditioned rVAE to arbitrary rotated handwritten digits
