pytorch-pretrained-bert

Latest version: v0.6.2

0.3.0

This release comprises the following improvements and updates:
- added two new pre-trained models from Google: `bert-large-cased` and `bert-base-multilingual-cased`,
- added a model that can be fine-tuned for token-level classification, `BertForTokenClassification` (see the sketch after this list),
- added tests for every model class, with and without labels,
- fixed the tokenizer loading function `BertTokenizer.from_pretrained()` when loading from a directory containing a pretrained model,
- fixed typos in the model docstrings and completed the docstrings,
- improved the examples (added a `do_lower_case` argument).
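To show how these pieces fit together, here is a minimal sketch assuming the 0.3.x-era API; the `num_labels` value, the example sentence, the directory path in the comment, and the assumption that `from_pretrained()` forwards `do_lower_case` are all illustrative, not part of the release itself:

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForTokenClassification

# One of the checkpoints added in this release; cased models should be
# paired with do_lower_case=False (the argument the updated examples expose).
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased',
                                          do_lower_case=False)
# After the loading fix, a local directory holding a pretrained model
# works too, e.g. BertTokenizer.from_pretrained('/path/to/model_dir').

tokens = ['[CLS]'] + tokenizer.tokenize('Hello world') + ['[SEP]']
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

# num_labels (the size of the tag set) is an illustrative placeholder.
model = BertForTokenClassification.from_pretrained('bert-base-multilingual-cased',
                                                   num_labels=5)
model.eval()

with torch.no_grad():
    # Without labels the model returns per-token logits of shape
    # (batch_size, sequence_length, num_labels); passing labels returns
    # the cross-entropy loss instead.
    logits = model(input_ids)
```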

0.2.0

Improvement:
- Added a `cache_dir` option to the `from_pretrained()` function to select a specific path for downloading and caching the pre-trained model weights. This is useful for distributed training (see the readme and the sketch below; fixes issue 44).
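A minimal sketch of the option, assuming the 0.2.x-era API; the cache path and the per-rank directory naming are illustrative placeholders, not prescribed by the library:

```python
from pytorch_pretrained_bert import BertModel

# cache_dir may be any writable path. In distributed training, giving each
# rank its own directory (the '0' suffix here stands in for the local rank)
# avoids workers racing to download the same archive.
model = BertModel.from_pretrained('bert-base-uncased',
                                  cache_dir='/tmp/bert_cache/rank_0')
```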

Bug fixes in model training and tokenizer loading:
- Fixed an error in `CrossEntropyLoss` reshaping (issue 55).
- Fixed a unicode error in vocabulary loading (issue 52).

Bug fixes in examples:
- Fixed weight decay in the examples (bias and layer-norm weights were previously also decayed, due to an erroneous check in the training loop); a sketch of the corrected grouping follows this list.
- Fixed the fp16 "grad norm is None" error in the examples (issue 43).
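For reference, a sketch of the corrected parameter grouping in the style of the repository's example scripts; the checkpoint name, learning-rate schedule values, and step count are placeholders:

```python
from pytorch_pretrained_bert import BertModel
from pytorch_pretrained_bert.optimization import BertAdam

model = BertModel.from_pretrained('bert-base-uncased')  # placeholder checkpoint
num_train_steps = 1000  # placeholder schedule length

param_optimizer = list(model.named_parameters())
# In this version, layer-norm weights are named 'gamma' and 'beta'.
no_decay = ['bias', 'gamma', 'beta']
optimizer_grouped_parameters = [
    # Apply weight decay to everything except bias and layer-norm weights...
    {'params': [p for n, p in param_optimizer
                if not any(nd in n for nd in no_decay)],
     'weight_decay_rate': 0.01},
    # ...and leave those parameters undecayed (the substring test on the
    # parameter name is what separates the two groups correctly).
    {'params': [p for n, p in param_optimizer
                if any(nd in n for nd in no_decay)],
     'weight_decay_rate': 0.0},
]
optimizer = BertAdam(optimizer_grouped_parameters,
                     lr=5e-5, warmup=0.1, t_total=num_train_steps)
```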

Updated the readme and docstrings.

0.1.2

This is the first release of `pytorch_pretrained_bert`.
