Flair


0.11

New Features

Regular Expression Tagger (2533)

You can now do sequence labeling in Flair with regular expressions! Simply define a `RegexpTagger` and add some regular expressions, like in the example below:

```python
from flair.data import Sentence
from flair.models import RegexpTagger

# sentence with a number and two quotes
sentence = Sentence('Figure 11 is both "too colorful" and "not informative enough".')

# instantiate regex tagger with a quote matching pattern
tagger = RegexpTagger(mapping=(r'(["\'])(?:(?=(\\?))\2.)*?\1', 'QUOTE'))

# also add a number mapping
tagger.register_labels(mapping=(r'\b\d+\b', 'NUMBER'))

# tag sentence
tagger.predict(sentence)

# check out matches
for entity in sentence.get_labels():
    print(entity)
```


Clustering with Flair (2573 2619)

Flair now supports clustering by way of scikit-learn. Embed your sentences with a pre-trained embedding as shown below, then cluster them with any algorithm. Check the example below, where we use sentence transformers and k-means clustering. A 'trained' clustering model can be saved and loaded for prediction, just like any other Flair classifier:

```python
from sklearn.cluster import KMeans

from flair.data import Sentence
from flair.datasets import TREC_6
from flair.embeddings import SentenceTransformerDocumentEmbeddings
from flair.models import ClusteringModel

embeddings = SentenceTransformerDocumentEmbeddings()
# store all embeddings in memory, which is required to perform clustering
corpus = TREC_6(memory_mode='full').downsample(0.05)

clustering_model = ClusteringModel(model=KMeans(n_clusters=6), embeddings=embeddings)

# fit the model on a corpus
clustering_model.fit(corpus)

# save the model
clustering_model.save(model_file="clustering_model.pt")

# load saved clustering model
model = ClusteringModel.load(model_file="clustering_model.pt")

# make example sentence
sentence = Sentence('Getting error in manage categories - not found for attribute "navigation _ column"')

# predict for sentence
model.predict(sentence)

# print sentence with prediction
print(sentence)
```


Dataset Manipulations

You can now change label names, ignore labels and add custom preprocessing when loading a dataset.

For instance, the standard WNUT_17 dataset comes with 7 NER labels:

```python
from flair.datasets import WNUT_17

corpus = WNUT_17(in_memory=False)
print(corpus.make_label_dictionary('ner'))
```


which prints:

```console
Dictionary with 7 tags: <unk>, person, location, group, corporation, product, creative-work
```


With the following code, you can rename some labels ('person' is renamed to 'PER'), merge two labels into one ('group' and 'corporation' are merged into 'ORG'), and ignore two other labels ('creative-work' and 'product' are ignored):

```python
corpus = WNUT_17(in_memory=False, label_name_map={
    'person': 'PER',
    'location': 'LOC',
    'group': 'ORG',
    'corporation': 'ORG',
    'product': 'O',
    'creative-work': 'O',  # by renaming to 'O' this tag gets ignored
})
```


which prints:

```console
Dictionary with 4 tags: <unk>, PER, LOC, ORG
```


You can manipulate the data even more with custom preprocessing functions. See the example in 2708.


Other New Features and Data Sets

- A new `WordTagger` class for simple word-level predictions (2607)
- Classic `WordEmbeddings` can now be fine-tuned in Flair (2491) by setting `fine_tune=True` (see the sketch after this list). This also adds the fine-tuning mode of https://arxiv.org/abs/2110.02861, which seems to "reduce gradient variance that comes from the highly non-uniform distribution of input tokens"
- Add `NER_MULTI_CONER` Dataset (2507)
- Add support for HIPE 2022 (2675)
- Allow trainer to work with multiple learning rates (2641)
- Update hyperparameter tuning (2633)
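
As referenced in the `WordEmbeddings` item above, here is a minimal sketch of the new fine-tuning option; the `fine_tune` flag is the new addition, everything else is standard Flair usage:

```python
from flair.data import Sentence
from flair.embeddings import WordEmbeddings

# load classic GloVe embeddings, but keep them trainable instead of frozen
embeddings = WordEmbeddings('glove', fine_tune=True)

# embedding a sentence now produces vectors that receive gradients during training
sentence = Sentence('I love Berlin')
embeddings.embed(sentence)
print(sentence[0].embedding.shape)
```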


Preview Features

Some preview features are in beta stage; use at your own risk.

Prototypical networks in Flair (2627)

Prototypical networks learn a prototype for each target class. For each data point to be classified, the network predicts a vector in class-prototype space, which is then compared to all class prototypes. The prediction is the closest class prototype. See the paper [Prototypical Networks for Few-shot Learning](https://arxiv.org/abs/1703.05175) for more info.

plonerma implemented a custom decoder that can be added to any Flair model that inherits from `DefaultClassifier` (i.e. nearly all Flair models). For instance, use this script:

```python
from flair.data import Corpus
from flair.datasets import UP_ENGLISH
from flair.embeddings import TransformerWordEmbeddings
from flair.models import WordTagger
from flair.nn import PrototypicalDecoder
from flair.trainers import ModelTrainer

# what tag do we want to predict?
tag_type = 'frame'

# get a corpus
corpus: Corpus = UP_ENGLISH().downsample(0.1)

# make the tag dictionary from the corpus
tag_dictionary = corpus.make_label_dictionary(label_type=tag_type)

# initialize simple embeddings
embeddings = TransformerWordEmbeddings(model="distilbert-base-uncased",
                                       fine_tune=True,
                                       layers='-1')

# initialize prototype decoder
decoder = PrototypicalDecoder(num_prototypes=len(tag_dictionary),
                              embeddings_size=embeddings.embedding_length,
                              distance_function='euclidean',
                              normal_distributed_initial_prototypes=True,
                              )

# initialize the WordTagger, but pass the prototype decoder
tagger = WordTagger(embeddings,
                    tag_dictionary,
                    tag_type,
                    decoder=decoder)

# initialize trainer
trainer = ModelTrainer(tagger, corpus)

# run training
trainer.fine_tune('resources/taggers/prototypical_decoder')
```



Other Beta features

- Dependency Parsing in Flair (2486 2579)
- Lemmatization in Flair (2531)
- Initial implementation of JsonCorpora and Datasets (2653)


Major Refactorings

With Flair expanding to many new NLP tasks (relation extraction, entity linking, etc.) and model types, we made a number of refactorings to reduce redundancy and make it easier to extend Flair.

Major refactoring of Label Logic in Flair (2607 2609 2645)

The labeling logic was growing too complex to accommodate new tasks. With this release, we refactored it such that complex label classes like `SpanLabel`, `RelationLabel` etc. are removed in favor of a single `Label` class for all label types. The `Sentence` object is now automatically aware of all labels added to it.

To illustrate the difference, consider a before-and-after of how to add an entity label to a sentence.

Before:

```python
# example sentence
sentence = Sentence("Humboldt Universität zu Berlin is located in Berlin .")

# create span for "Humboldt Universität zu Berlin"
span = Span(sentence[0:4])

# make a Span-label
span_label = SpanLabel(span=span, value='University')

# add Span-label to sentence
sentence.add_complex_label(typename='ner', label=span_label)
```


Now:

```python
# example sentence
sentence = Sentence("Humboldt Universität zu Berlin is located in Berlin .")

# directly add a label to the span "Humboldt Universität zu Berlin"
sentence[0:4].add_label("ner", "Organization")
```


So you can now just get a span from the sentence and add a label to it directly. It will get registered on the sentence as well.
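Since labels now register on the sentence automatically, reading them back is uniform across label types. A small sketch, assuming the 0.11 `Label` API with its `data_point` property and the `sentence` from the example above:

```python
# the span label is registered on the sentence, so it shows up here
for label in sentence.get_labels('ner'):
    print(label.data_point.text, '->', label.value)
```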

Refactoring of printouts (2704)

We changed and unified printouts across all Flair data points and labels, and updated the documentation to reflect this. Printouts should hopefully now be more concise. Let us know what you think.

Unified classes to reduce redundancy

Besides having too many Label classes (see above), we also had too many corpora that essentially did the same thing, two partially overlapping transformer embedding classes, and too much redundancy in our tokenization classes. This release makes many refactorings to make the code more maintainable:

- *Unify Corpora* (2607): Unifies several corpora into a single object. Before, we had `ColumnCorpus`, `UniversalDependenciesCorpus`, `CoNNLuCorpus`, and `EntityLinkingCorpus`, which resulted in too much redundancy. Now, there is only the `ColumnCorpus` for all such datasets.
- *Unify Transformer Embeddings* (2558, 2584, 2586): There was too much redundancy and inconsistency between the two Transformer-based embeddings classes `TransformerWordEmbedding` and `TransformerDocumentEmbedding`. Thanks to helpmefindaname, they now both inherit from the same base object and now share all features.
- *Unify Tokenizers* (2607): The `Tokenizer` classes no longer return lists of `Token`, but rather lists of strings that the `Sentence` object converts to tokens, centralizing offset and whitespace_after detection in one place (see the sketch below).
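
To illustrate the new `Tokenizer` contract, here is a hedged sketch of a custom whitespace tokenizer; the exact base-class location may vary by version, but the key point is that `tokenize()` returns plain strings:

```python
from typing import List

from flair.data import Sentence, Tokenizer


class WhitespaceTokenizer(Tokenizer):
    """Toy tokenizer illustrating the new contract: return strings, not Token objects."""

    def tokenize(self, text: str) -> List[str]:
        return text.split()


# the Sentence object converts the strings to tokens and computes offsets itself
sentence = Sentence("Humboldt Universität zu Berlin", use_tokenizer=WhitespaceTokenizer())
print([token.text for token in sentence])
```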


Simplifications to DefaultClassifier

The `DefaultClassifier` is the base class for nearly all models in Flair. With this release, we make a number of simplifications to reduce redundancy across classes and make it more modular.
- `forward_pass` simplified to return 3 instead of 4 arguments
- `forward_pass` returns embeddings instead of logits, allowing us to easily switch out the decoder (see the beta feature on Prototypical Networks above)
- removed the unintuitive `spawn` logic we no longer need due to Label refactoring
- unify dropouts across all classes (2669)


Sequence tagger refactoring (2361, 2550, 2561, 2564, 2585, 2565)

Major refactoring of `SequenceTagger` for better modularity and code readability.

Refactoring of Span Logic (2607 2609 2645)

Spans are no longer stored as word-level 'bioes' tags, but rather directly stored as span-level annotations. The `SequenceTagger` will still internally use BIO/BIOES tags, but the corpora and sentences no longer explicitly store this information.

So you now choose the labeling format when instantiating the `SequenceTagger`, i.e.:
```python
tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=tag_dictionary,
    tag_type="ner",
    tag_format="BIOES",  # choose if you want to use BIOES or BIO internally
)
```


Internally, this refactoring makes a number of changes and simplifications:
- a number of fields have been added or moved up to the `DataPoint` class for convenience, including properties to get the `start_position` and `end_position` of data points, their `text`, their `tag` and `score` (if they have only one tag) and an `unlabeled_identifier` (see the sketch after this list)
- moves up `set_embedding()` and `to()` from the data point classes (`Sentence`, `Token`, etc.) to their parent `DataPoint`
- a number of methods like `get_tag` and `add_tag` have been removed from Token in favor of the `get_label` and `add_label` method of the parent DataPoint class
- The `ColumnCorpus` will automatically identify which columns are span labels and treat them accordingly
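
The convenience properties mentioned in the first item can be seen on any predicted span; a brief sketch, assuming a downloaded `'ner'` model:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load('ner')
sentence = Sentence('George Washington went to Washington.')
tagger.predict(sentence)

# each span is a DataPoint, so the new convenience properties are available
for span in sentence.get_spans('ner'):
    print(span.text, span.start_position, span.end_position, span.tag, span.score)
```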


Code Quality Checks (2611)

They are back and stricter than ever! Thanks to helpmefindaname, we now include mypy and formatting tests as part of our build process, which led to many changes in the code and a much greater chance of catching errors early.


Speed and Memory Improvements:
- `EntityLinker` class refactored for speed (2607)
- Performance improvements in standard `evaluate()` method, especially for large datasets (2607)
- `ColumnCorpus` no longer does disk reads when `in_memory=False`, it simply stores the raw data in memory leading to significant speed-ups on large datasets (2607)
- Memory management improvements for embeddings (2645)
- Efficiency improvements for WordEmbeddings (2491) and OneHotEmbeddings (2490)


Bug Fixes and Improvements
- Add equality method to `Dictionary` (2532)
- Fix encoding error in lemmatizer (2539)
- Fixed printing and logging inconsistencies (2665)
- Readme (2525 2618 2617 2662)
- Fix bug in `WSD_UFSAC` corpus (2521)
- Change position of model saving in between epochs (2548)
- Fix loss weights in TextPairClassifier and RelationExtractor models (2576)
- Fix token positions on column corpus (2440)
- Support long-sequence transformers of any kind (2599)
- The deprecated data_fetcher is finally removed (2607)
- Small lm training improvements (2590)
- Remove minor bug in NEL_ENGLISH_AIDA corpus (2615)
- Fix module import bug (2616)
- Fix reloading fast tokenizers (2622)
- Fix two small bugs (2634)
- Fix .pre-commit-config.yaml (2651)
- Patch the missing document_delimiter for lm.__get_state__() (2658)
- `DocumentPoolEmbeddings` class can now be instantiated only with a single embedding (2645)
- You can now specify a `min_count` when computing the label dictionary. Labels below that count will be UNK'ed. (e.g. `tag_dictionary = corpus.make_label_dictionary("ner", min_count=10)`) (2607)
- The `Dictionary` will now compute count statistics for labels in a corpus (2607)
- The `ColumnCorpus` can now handle relation annotation, dependency tree information and UD feats and misc (2607)
- Embeddings are stored as a torch `Embedding` instead of a gensim keyed vector, avoiding version issues in cases where gensim does not ensure backwards compatibility
- Make transformer offset calculation more robust (2714)

0.10

This release adds several new features such as in-built "model cards" for all Flair models, the first pre-trained models for Relation Extraction, better support for fine-tuning and a refactoring of the model training methods for more flexibility. It also fixes a number of critical bugs that were introduced by the refactorings in Flair 0.9.

Model Trainer Enhancements

_Breaking change_: We changed the `ModelTrainer` such that you now no longer pass the optimizer during initialization. Rather, it is now passed as a parameter of the `train` or `fine_tune` method.

**Old syntax**:

```python
# 1. initialize trainer with AdamW optimizer
trainer = ModelTrainer(classifier, corpus, optimizer=torch.optim.AdamW)

# 2. run training with small learning rate and mini-batch size
trainer.train('resources/taggers/question-classification-with-transformer',
              learning_rate=5.0e-5,
              mini_batch_size=4,
              )
```


**New syntax** (optimizer is parameter of train method):

```python
# 1. initialize trainer
trainer = ModelTrainer(classifier, corpus)

# 2. run training with AdamW, small learning rate and mini-batch size
trainer.train('resources/taggers/question-classification-with-transformer',
              learning_rate=5.0e-5,
              mini_batch_size=4,
              optimizer=torch.optim.AdamW,
              )
```


Convenience function for fine-tuning (2439)

Adds a `fine_tune` routine that sets default parameters used for fine-tuning (AdamW optimizer, small learning rate, few epochs, cyclic learning rate scheduling, etc.). Uses the new linear scheduler with warmup (2415).

**New syntax** with `fine_tune` method:

```python
from flair.data import Corpus
from flair.datasets import TREC_6
from flair.embeddings import TransformerDocumentEmbeddings
from flair.models import TextClassifier
from flair.trainers import ModelTrainer

# 1. get the corpus
corpus: Corpus = TREC_6()

# 2. what label do we want to predict?
label_type = 'question_class'

# 3. create the label dictionary
label_dict = corpus.make_label_dictionary(label_type=label_type)

# 4. initialize transformer document embeddings (many models are available)
document_embeddings = TransformerDocumentEmbeddings('distilbert-base-uncased', fine_tune=True)

# 5. create the text classifier
classifier = TextClassifier(document_embeddings, label_dictionary=label_dict, label_type=label_type)

# 6. initialize trainer
trainer = ModelTrainer(classifier, corpus)

# 7. run training with fine-tuning
trainer.fine_tune('resources/taggers/question-classification-with-transformer',
                  learning_rate=5.0e-5,
                  mini_batch_size=4,
                  )
```


Model Cards (2457)

When you train any Flair model, a "model card" will now automatically be saved that stores all training parameters and versions used to train this model. Later when you load a Flair model, you can print the model card and understand how the model was trained.

The following example trains a small POS-tagger and prints the model card in the end:

```python
from flair.datasets import UD_ENGLISH
from flair.embeddings import WordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# initialize corpus and make label dictionary for POS tags
corpus = UD_ENGLISH().downsample(0.01)
tag_type = "pos"
tag_dictionary = corpus.make_label_dictionary(tag_type)

# simple sequence tagger
tagger = SequenceTagger(hidden_size=256,
                        embeddings=WordEmbeddings("glove"),
                        tag_dictionary=tag_dictionary,
                        tag_type=tag_type)

# initialize model trainer and experiment path
trainer = ModelTrainer(tagger, corpus)
path = 'resources/taggers/model-card'

# train for a few epochs
trainer.train(path,
              max_epochs=20,
              )

# load best model and print "model card"
trained_model = SequenceTagger.load(path + '/best-model.pt')
trained_model.print_model_card()
```


This should print a model card like:
~~~
------------------------------------
--------- Flair Model Card ---------
------------------------------------
- this Flair model was trained with:
-- Flair version 0.9
-- PyTorch version 1.7.1
-- Transformers version 4.8.1
------------------------------------
------- Training Parameters: -------
------------------------------------
-- base_path = resources/taggers/model-card
-- learning_rate = 0.1
-- mini_batch_size = 32
-- mini_batch_chunk_size = None
-- max_epochs = 20
-- train_with_dev = False
-- train_with_test = False
[... shortened ...]
------------------------------------
~~~

Resume training any model (2457)

Previously, we distinguished between checkpoints and model files. Now all models can function as checkpoints, meaning you can load them and continue training them. Say you want to load the model above (trained to epoch 20) and continue training it to epoch 25. Do it like this:

```python
# resume training best model, but this time until epoch 25
trainer.resume(trained_model,
               base_path=path + '-resume',
               max_epochs=25,
               )
```


Pass optimizer and scheduler instance

You can also now pass an initialized optimizer and scheduler to the train and fine_tune methods.
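
For instance, a minimal sketch, assuming the `classifier` and `trainer` from the fine-tuning example above; the only change is that `optimizer` receives an instance instead of a class:

```python
import torch

# build the optimizer instance yourself, e.g. with custom betas
optimizer = torch.optim.AdamW(classifier.parameters(), lr=5.0e-5, betas=(0.9, 0.98))

# pass the initialized instance instead of the optimizer class
trainer.train('resources/taggers/question-classification-with-transformer',
              learning_rate=5.0e-5,
              mini_batch_size=4,
              optimizer=optimizer,
              )
```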

Multi-Label Predictions and Confidence Threshold in TARS models (2430)

Adding the possibility to set confidence thresholds on multi-label prediction in TARS, and setting whether a problem is single-label or multi-label:

```python
from flair.models import TARSClassifier
from flair.data import Sentence

# 1. Load our pre-trained TARS model for English
tars: TARSClassifier = TARSClassifier.load('tars-base')

# switch to a multi-label task (emotion detection)
tars.switch_to_task('GO_EMOTIONS')

# sentence with two emotions
sentence = Sentence("I am happy and sad")

# predict normally
tars.predict(sentence)
print(sentence)

# predict with lower label threshold (you can set this to 0. to get all labels)
tars.predict(sentence, label_threshold=0.01)
print(sentence)

# predict and enforce a single-label prediction
tars.predict(sentence, label_threshold=0.01, multi_label=False)
print(sentence)
```


Relation Extraction (2471 2492)

We refactored the RelationExtractor for more options, hopefully better code clarity and small speed improvements.

We also added two new relation extraction models, trained over a modified version of TACRED: `relations` and `relations-fast`. To use these models, you also need an entity tagger: the tagger first identifies entities, then the relation extractor predicts relations between them.

For instance use this code:

```python
from flair.data import Sentence
from flair.models import RelationExtractor, SequenceTagger

# 1. make example sentence
sentence = Sentence("George was born in Washington")

# 2. load entity tagger and predict entities
tagger = SequenceTagger.load('ner-fast')
tagger.predict(sentence)

# check which entities have been found in the sentence
entities = sentence.get_labels('ner')
for entity in entities:
    print(entity)

# 3. load relation extractor
extractor: RelationExtractor = RelationExtractor.load('relations-fast')

# predict relations
extractor.predict(sentence)

# check which relations have been found
relations = sentence.get_labels('relation')
for relation in relations:
    print(relation)
```



Embeddings

- Refactoring of WordEmbeddings to avoid gensim version issues and enable further fine-tuning of pre-trained embeddings (2491)
- Refactoring of OneHotEmbeddings to fix errors caused by some corpora and enable "stable embeddings" (2490)


Other Enhancements and Bug Fixes

- Compatibility with gensim 4 and Python 3.9 (2496)
- Fix TransformerWordEmbeddings if model_max_length not set in Tokenizer (2502)
- Fix TransformerWordEmbeddings handling of lang ids (2417)
- Fix attention mask for special Transformer architectures (2485)
- Fix regression model (2424)
- Fix problems caused by refactoring of Dictionary (2429 2435 2453)
- Fix infinite loop in Span::to_original_text (2462)
- Fix result object in ModelTrainer (2519)
- Fix bug in wsd_ufsac corpus (2521)
- Fix bugs in TARS and simple sequence tagger (2468)
- Add Amharic FLAIR EMBEDDING model (2494)
- Add MultiCoNer Dataset (2507)
- Add Korean Flair Tutorials (2516 2517)
- Remove hyperparameter features (2518)
- Make it optional to create logfiles and loss files (2421)
- Small simplification of TransformerWordEmbeddings (2425)

0.9

With release 0.9 we are refactoring Flair for simplicity and speed, to make Flair faster and more easily scale to new NLP tasks. The first new tasks included in this release are **Relation Extraction** (RE), support for **GLUE benchmark** tasks and **Entity Linking** - all in *beta for early adopters*! We're working towards a Flair 1.0 release that will span the whole suite of standard NLP tasks. Also included is a new approach for **Zero-Shot Sequence Labeling** based on TARS! This release also includes a wealth of new datasets for all these tasks and tons of other new features and bug fixes.

Zero-Shot Sequence Labeling with TARS (2260)

We extend the TARS zero-shot learning approach to sequence labeling and ship a pre-trained model for English NER. Try defining some classes and see if the model can find them:

```python
from flair.data import Sentence
from flair.models import TARSTagger

# 1. Load zero-shot NER tagger
tars = TARSTagger.load('tars-ner')

# 2. Prepare some test sentences
sentences = [
    Sentence("The Humboldt University of Berlin is situated near the Spree in Berlin, Germany"),
    Sentence("Bayern Munich played against Real Madrid"),
    Sentence("I flew with an Airbus A380 to Peru to pick up my Porsche Cayenne"),
    Sentence("Game of Thrones is my favorite series"),
]

# 3. Define some classes of named entities such as "soccer teams", "TV shows" and "rivers"
labels = ["Soccer Team", "University", "Vehicle", "River", "City", "Country", "Person", "Movie", "TV Show"]
tars.add_and_switch_to_new_task('task 1', labels, label_type='ner')

# 4. Predict for these classes and print results
for sentence in sentences:
    tars.predict(sentence)
    print(sentence.to_tagged_string("ner"))
```


This should print:

```console
The Humboldt <B-University> University <I-University> of <I-University> Berlin <E-University> is situated near the Spree <S-River> in Berlin <S-City> , Germany <S-Country>

Bayern <B-Soccer Team> Munich <E-Soccer Team> played against Real <B-Soccer Team> Madrid <E-Soccer Team>

I flew with an Airbus <B-Vehicle> A380 <E-Vehicle> to Peru <S-City> to pick up my Porsche <B-Vehicle> Cayenne <E-Vehicle>

Game <B-TV Show> of <I-TV Show> Thrones <E-TV Show> is my favorite series
```


So in these examples, we are finding entity classes such as "TV show" (_Game of Thrones_), "vehicle" (_Airbus A380_ and _Porsche Cayenne_), "soccer team" (_Bayern Munich_ and _Real Madrid_) and "river" (_Spree_), even though the model was never explicitly trained for this. Note that this is ongoing research and the examples are a bit cherry-picked. We expect the zero-shot model to improve quite a bit until the next release.

New NLP Tasks and Datasets

We now prototypically support new tasks such as the GLUE benchmark, Relation Extraction and Entity Linking. With this, we ship the datasets and model classes you need to train your own models. But we are still tweaking both methods, meaning that we don't ship any pre-trained models as of yet.

GLUE Benchmark (2149 2363)

A standard benchmark to evaluate progress in language understanding, mostly consisting of single and pairwise sentence classification tasks.

New datasets in Flair:

- 'GLUE_COLA' - The Corpus of Linguistic Acceptability from GLUE benchmark
- 'GLUE_MNLI' - The Multi-Genre Natural Language Inference Corpus from the GLUE benchmark
- 'GLUE_RTE' - The RTE task from the GLUE benchmark
- 'GLUE_QNLI' - The Stanford Question Answering Dataset formatted as NLI task from the GLUE benchmark
- 'GLUE_WNLI' - The Winograd Schema Challenge formatted as NLI task from the GLUE benchmark
- 'GLUE_MRPC' - The MRPC task from GLUE benchmark
- 'GLUE_QQP' - The Quora Question Pairs dataset where the task is to determine whether a pair of questions are semantically equivalent

Initialize datasets like so:

```python
from flair.datasets import GLUE_QNLI

# load corpus
corpus = GLUE_QNLI()

# print corpus
print(corpus)

# print first sentence-pair of training data split
print(corpus.train[0])

# print all labels in corpus
print(corpus.make_label_dictionary("entailment"))
```


Relation Extraction (2333 2352)

Relation extraction classifies if and which relationship holds between two entities in a text.

Model class: `RelationExtractor`

Datasets in Flair:
- 'RE_ENGLISH_CONLL04' - the [CoNLL-04](https://github.com/bekou/multihead_joint_entity_relation_extraction/tree/master/data/CoNLL04) Relation Extraction dataset (#2333)
- 'RE_ENGLISH_SEMEVAL2010' - the [SemEval-2010 Task 8](https://aclanthology.org/S10-1006.pdf) dataset on Multi-Way Classification of Semantic Relations Between Pairs of Nominals (#2333)
- 'RE_ENGLISH_TACRED' - the [TAC Relation Extraction Dataset](https://nlp.stanford.edu/projects/tacred/) with 41 relations (download required) (#2333)
- 'RE_ENGLISH_DRUGPROT' - the [DrugProt corpus from Biocreative VII Track 1](https://zenodo.org/record/5119892#.YSdSaVuxU5k/) on drug and chemical-protein interactions (2340 2352)

Initialize datasets like so:

```python
from flair.datasets import RE_ENGLISH_CONLL04

# initialize CoNLL-04 corpus for relation extraction
corpus = RE_ENGLISH_CONLL04()
print(corpus)

# print first sentence of training split with annotations
sentence = corpus.train[0]
print(sentence)

# print label dictionary
label_dict = corpus.make_label_dictionary("relation")
print(label_dict)
```


Entity Linking (2375)

Entity Linking goes one step further than NER and uniquely links entities to knowledge bases such as Wikipedia.

Model class: `EntityLinker`

Datasets in Flair:
- 'NEL_ENGLISH_AIDA' - the [AIDA CoNLL-YAGO Entity Linking corpus](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/ambiverse-nlu/aida/downloads) on the CoNLL-03 dataset for English
- 'NEL_ENGLISH_AQUAINT' - the Aquaint Entity Linking corpus introduced in [Milne and Witten (2008)](https://www.cms.waikato.ac.nz/~ihw/papers/08-DNM-IHW-LearningToLinkWithWikipedia.pdf)
- 'NEL_ENGLISH_IITB' - the IITB Entity Linking corpus introduced in [Kulkarni et al. (2009)](https://dl.acm.org/doi/10.1145/1557019.1557073)
- 'NEL_ENGLISH_REDDIT' - the Reddit Entity Linking corpus introduced in [Botzer et al. (2021)](https://arxiv.org/abs/2101.01228v2) (only gold annotations)
- 'NEL_ENGLISH_TWEEKI' - the Tweeki Entity Linking corpus introduced in [Harandizadeh and Singh (2020)](https://aclanthology.org/2020.wnut-1.29.pdf)
- 'NEL_GERMAN_HIPE' - the [HIPE](https://impresso.github.io/CLEF-HIPE-2020/) Entity Linking corpus for historical German as a [sentence-segmented version](https://github.com/stefan-it/clef-hipe)

```python
from flair.datasets import NEL_ENGLISH_REDDIT

# load corpus
corpus = NEL_ENGLISH_REDDIT()

# print corpus
print(corpus)

# print a sentence of training data split
print(corpus.train[3])
```


New NER Datasets
- 'NER_ARABIC_ANER' - [Arabic Named Entity Recognition Corpus](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp) 4-class NER (#2188)
- 'NER_ARABIC_AQMAR' - [American and Qatari Modeling of Arabic](http://www.cs.cmu.edu/~ark/AQMAR/) 4-class NER (modified) (#2188)
- 'NER_ENGLISH_PERSON' - NER for [person names](https://github.com/das-sudeshna/genid) (#2271)
- 'NER_ENGLISH_WEBPAGES' - 4-class NER on web pages from [Ratinov and Roth (2009)](https://aclanthology.org/W09-1119/) (#2232 )
- 'NER_GERMAN_POLITICS' - [NEMGP](https://www.thomas-zastrow.de/nlp/) corpus for German politics (#2341)
- 'NER_JAPANESE' - [Japanese NER](https://github.com/Hironsan/IOB2Corpus) dataset automatically generated from Wikipedia (#2154)
- 'NER_MASAKHANE' - [MasakhaNER: Named Entity Recognition for African Languages](https://github.com/masakhane-io/masakhane-ner) corpora (#2212, 2214, 2227, 2229, 2230, 2231, 2222, 2234, 2242, 2243)

Other datasets

- 'YAHOO_ANSWERS' - The [10 largest main categories](https://course.fast.ai/datasets#nlp) from Yahoo! Answers (2198)
- Various Universal Dependencies datasets (2211, 2216, 2219, 2221, 2244, 2245, 2246, 2247, 2223, 2248, 2235, 2236, 2239, 2226)

New Functionality

Support for Arabic NER (2188)

Flair now supports NER and POS tagging for Arabic. To tag an Arabic sentence, just load the appropriate model:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load model
tagger = SequenceTagger.load('ar-ner')

# make Arabic sentence
sentence = Sentence("احب برلين")

# predict NER tags
tagger.predict(sentence)

# print sentence with predicted tags
for entity in sentence.get_labels('ner'):
    print(entity)
```


This should print the found entity, with "برلين" (Berlin) tagged as a location.

0.8

FLERT (2031 2032 2104)

This release adds the "FLERT" approach to train sequence tagging models using cross-sentence features as presented in [our recent paper](https://arxiv.org/abs/2011.06993). This yields new state-of-the-art models which we include in Flair, as well as the features to easily train your own "FLERT" models.

Pre-trained FLERT models (2130)

We add 5 new NER models for English (4-class and 18-class), German, Dutch and Spanish (4-class each). Load for instance with:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load tagger
tagger = SequenceTagger.load("ner-large")

# make example sentence
sentence = Sentence("George Washington went to Washington")

# predict NER tags
tagger.predict(sentence)

# print sentence
print(sentence)

# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
    print(entity)
```


If you want to test these models in action, for instance the new large English Ontonotes model with 18 classes, you can now use the hosted inference API on the HF model hub, like [here](https://huggingface.co/flair/ner-english-ontonotes-large).


Contextualized Sentences

In order to enable cross-sentence context, we made some changes to the Sentence object and data readers:

1. `Sentence` objects now have `next_sentence()` and `previous_sentence()` methods that are set automatically if loaded through `ColumnCorpus`. This is a pointer system to navigate through sentences in a corpus:
```python
# load corpus
corpus = MIT_MOVIE_NER_SIMPLE(in_memory=False)

# get a sentence
sentence = corpus.test[123]
print(sentence)
# get the previous sentence
print(sentence.previous_sentence())
# get the sentence after that
print(sentence.next_sentence())
# get the sentence after the next sentence
print(sentence.next_sentence().next_sentence())
```

This allows dynamic computation of contexts in the embedding classes.

2. `Sentence` objects now have the `is_document_boundary` field, which is set through the `ColumnCorpus`. In some datasets, there are sentences like "-DOCSTART-" that just indicate document boundaries. This is now recorded as a boolean in the object, as sketched below.
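
A quick sketch of how the new field can be inspected, assuming a `ColumnCorpus`-loaded corpus (such as the one above) with "-DOCSTART-" markers:

```python
# count how many sentences in the training split mark a document boundary
boundary_count = sum(1 for sentence in corpus.train if sentence.is_document_boundary)
print(f'{boundary_count} sentences mark document boundaries')
```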


Refactored TransformerWordEmbeddings (breaking)

`TransformerWordEmbeddings` refactored for dynamic context, robustness to long sentences and readability. The names of some constructor arguments have changed for clarity: `pooling_operation` is now `subtoken_pooling` (to make clear that we pool subtokens), `use_scalar_mean` is now `layer_mean` (we only do a simple layer mean) and `use_context` can now optionally take an integer to indicate the length of the context. Default arguments are also changed.

For instance, to create embeddings with a document-level context of 64 subtokens, init like this:
```python
embeddings = TransformerWordEmbeddings(
    model='bert-base-uncased',
    layers="-1",
    subtoken_pooling="first",
    fine_tune=True,
    use_context=64,
)
```


Train your Own FLERT Models

You can train a FLERT-model like this:

```python
import torch
from torch.optim.lr_scheduler import OneCycleLR

from flair.datasets import CONLL_03
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

corpus = CONLL_03()

use_context = 64
hf_model = 'xlm-roberta-large'

embeddings = TransformerWordEmbeddings(
    model=hf_model,
    layers="-1",
    subtoken_pooling="first",
    fine_tune=True,
    use_context=use_context,
)

tag_dictionary = corpus.make_tag_dictionary('ner')

# init bare-bones tagger (no reprojection, LSTM or CRF)
tagger: SequenceTagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=tag_dictionary,
    tag_type='ner',
    use_crf=False,
    use_rnn=False,
    reproject_embeddings=False,
)

# train with XLM parameters (AdamW, 20 epochs, small LR)
trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW)

trainer.train("resources/flert",
              learning_rate=5.0e-6,
              mini_batch_size=4,
              mini_batch_chunk_size=1,
              max_epochs=20,
              scheduler=OneCycleLR,
              embeddings_storage_mode='none',
              weight_decay=0.,
              )
```


We recommend training FLERT this way if accuracy is by far your most important requirement. FLERT is quite slow since it works at the document level.


HuggingFace model hub integration (2040 2108 2115)

We now host Flair sequence tagging models on the HF model hub (thanks for all the support HuggingFace!).

**Overview of all models.** There is a dedicated 'Flair' tag on the hub, so to get a list of all Flair models, check [here](https://huggingface.co/models?filter=flair).

The hub allows all users to upload and share their own models. Even better, you can enable the **Inference API** and so test all models online without downloading and running them. For instance, you can test our new very powerful English 18-class NER model [here](https://huggingface.co/flair/ner-english-ontonotes-large).

To load any sequence tagger on the model hub, use the string identifier when instantiating a model. For instance, to load our English ontonotes model with the id "flair/ner-english-ontonotes-large", do

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load tagger
tagger = SequenceTagger.load("flair/ner-english-ontonotes-large")

# make example sentence
sentence = Sentence("On September 1st George won 1 dollar while watching Game of Thrones.")

# predict NER tags
tagger.predict(sentence)

# print sentence
print(sentence)

# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
    print(entity)
```



Other New Features

New Task: Recognizing Textual Entailment (2123)

Thanks to marcelmmm we now support training textual entailment tasks (in fact, all pairwise sentence classification tasks) in Flair.

For instance, if you want to train an RTE task of the GLUE benchmark use this script:

```python
import torch

from flair.data import Corpus
from flair.datasets import GLUE_RTE
from flair.embeddings import TransformerDocumentEmbeddings

# 1. get the entailment corpus
corpus: Corpus = GLUE_RTE()

# 2. make the tag dictionary from the corpus
label_dictionary = corpus.make_label_dictionary()

# 3. initialize text pair tagger
from flair.models import TextPairClassifier

tagger = TextPairClassifier(
    document_embeddings=TransformerDocumentEmbeddings(),
    label_dictionary=label_dictionary,
)

# 4. train trainer with AdamW
from flair.trainers import ModelTrainer

trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW)

# 5. run training
trainer.train('resources/taggers/glue-rte-english',
              learning_rate=2e-5,
              mini_batch_chunk_size=2,  # this can be removed if you have a big GPU
              train_with_dev=True,
              max_epochs=3)
```


Add possibility to specify empty label name to CSV corpora (2068)

Some CSV classification datasets contain a value that means "no class". We now extend the `CSVClassificationDataset` so that it is possible to specify which value should be skipped using the `no_class_label` argument.

For instance:

```python
# load corpus
corpus = CSVClassificationCorpus(
    data_folder='resources/tasks/code/',
    train_file='java_io.csv',
    skip_header=True,
    column_name_map={3: 'text', 4: 'label', 5: 'label', 6: 'label', 7: 'label', 8: 'label', 9: 'label'},
    no_class_label='NONE',
)
```


This causes all entries of NONE in one of the label columns to be skipped.

More options for splits in corpora and training (2034)

For various reasons, we might want to have a `Corpus` that does not define all three splits (train/dev/test). For instance, we might want to train a model over the entire dataset and not hold out any data for validation/evaluation.

We add several ways of doing so.

1. If a dataset has predefined splits, like most NLP datasets, you can pass the arguments `train_with_test` and `train_with_dev` to the `ModelTrainer`. This causes the trainer to train over all three splits (and do no evaluation):

```python
trainer.train("path/to/your/folder",
              learning_rate=0.1,
              mini_batch_size=16,
              train_with_dev=True,
              train_with_test=True,
              )
```


2. You can also now create a Corpus with fewer splits without having all three splits automatically sampled. Pass `sample_missing_splits=False` as argument to do this. For instance, to load SemCor WSD corpus only as training data, do:

```python
semcor = WSD_UFSAC(train_file='semcor.xml', sample_missing_splits=False, autofind_splits=False)
```


Add TFIDF Embeddings (2086)

We added some old-school embeddings (thanks yosipk), namely the legendary TF-IDF document embeddings. These are often good baselines, and additionally they keep NLP veterans nostalgic, if not happy.

To initialize these embeddings, you must pass the train split of your training corpus, i.e.

```python
embeddings = DocumentTFIDFEmbeddings(corpus.train, max_features=10000)
```


This triggers the process where the most common words are used to featurize documents.
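
Once initialized, the embeddings behave like any other Flair document embedding; a short usage sketch, assuming the `embeddings` from the snippet above:

```python
from flair.data import Sentence

# featurize an example document with the fitted TF-IDF vocabulary
sentence = Sentence('NLP veterans remember TF-IDF fondly')
embeddings.embed(sentence)
print(sentence.embedding.shape)
```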

New Datasets

Hungarian NER Corpus (2045)

Added the Hungarian business news corpus annotated with NER information (thanks to alibektas).

```python
# load Hungarian business NER corpus
corpus = BUSINESS_HUN()
print(corpus)
print(corpus.make_tag_dictionary('ner'))
```


StackOverflow NER Corpus (2052)

```python
# load StackOverflow NER corpus
corpus = STACKOVERFLOW_NER()
print(corpus)
print(corpus.make_tag_dictionary('ner'))
```


Added GermEval 18 Offensive Language dataset (2102)

```python
# load GermEval 2018 Offensive Language corpus
corpus = GERMEVAL_2018_OFFENSIVE_LANGUAGE()
print(corpus)
print(corpus.make_label_dictionary())
```


Added RTE corpora of GLUE and SuperGLUE

```python
# load the recognizing textual entailment corpus of the GLUE benchmark
corpus = GLUE_RTE()
print(corpus)
print(corpus.make_label_dictionary())
```


Improvements

Allow newlines as Tokens in a Sentence (2070)

Newlines and tabs can now become Tokens in a Sentence:

```python
# make sentence with newlines and tabs
sentence: Sentence = Sentence(["I", "\t", "ich", "\n", "you", "\t", "du", "\n"], use_tokenizer=True)

# Alternatively: sentence: Sentence = Sentence("I \t ich \n you \t du \n", use_tokenizer=False)

# print sentence and each token
print(sentence)
for token in sentence:
    print(token)
```


Improve transformer serialization (2046)

We improved the serialization of the `TransformerWordEmbeddings` class such that you can now train a model with one version of the transformers library and load it with another version. Previously, if you trained a model with transformers 3.5.1 and loaded it with 3.1.0, or trained with 3.5.1 and loaded with 4.1.1, or other version mismatches, there would either be errors or bad predictions.

**Migration guide:** If you have a model trained with an older version of Flair that uses `TransformerWordEmbeddings` you can save it in the new version-independent format by loading the model with the same transformers version you used to train it, and then saving it again. The newly saved model is then version-independent:

```python
# load old model, but use the *same transformer version you used when training this model*
tagger = SequenceTagger.load('path/to/old-model.pt')

# save the model. It is now version-independent and can for instance be loaded with transformers 4.
tagger.save('path/to/new-model.pt')
```


Fix regression prediction errors (2067)

Fix of two problems in the regression model:
- the predict() method was unable to set labels and threw errors (see 2056)
- predicted labels had no label name

Now, you can set a label name either in the predict method or during instantiation of the regression model you want to train. So the full code for training a regression model and using it to predict is:

```python
# load regression dataset
corpus = WASSA_JOY()

# make simple document embeddings
embeddings = DocumentPoolEmbeddings([WordEmbeddings('glove')], fine_tune_mode='linear')

# init model and give name to label
model = TextRegressor(embeddings, label_name='happiness')

# target folder
output_folder = 'resources/taggers/regression_test/'

# run training
trainer = ModelTrainer(model, corpus)
trainer.train(
    output_folder,
    mini_batch_size=16,
    max_epochs=10,
)

# load model
model = TextRegressor.load(output_folder + 'best-model.pt')

# predict for sentence
sentence = Sentence('I am so happy')
model.predict(sentence)

# print sentence and prediction
print(sentence)
```


In my example run, this prints the following sentence + predicted value:
~~~
Sentence: "I am so happy" [− Tokens: 4 − Sentence-Labels: {'happiness': [0.9239126443862915 (1.0)]}]
~~~

Do not shuffle first epoch during training (2058)

Normally, we shuffle sentences at each epoch during training in the ModelTrainer class. However, in some cases it makes sense to see sentences in their natural order during the first epoch, and shuffle only from the second epoch onward.
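
A sketch of what this could look like in a training call; the flag name `shuffle_first_epoch` is an assumption here, so check the `train()` signature of your Flair version:

```python
trainer.train('resources/taggers/example',
              learning_rate=0.1,
              mini_batch_size=32,
              shuffle=True,
              shuffle_first_epoch=False,  # assumed flag name: keep natural order in epoch 1
              )
```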


Bug Fixes and Improvements

- Update to transformers 4 (2057)
- Fix the evaluate() method in the SimilarityLearner class (2113)
- Fix memory leak in WordEmbeddings (2018)
- Add support for Transformer-XL Embeddings (2009)
- Restrict numpy version to <1.20 for Python 3.6 (2014)
- Small formatting and variable declaration changes (2022)
- Fix document boundary offsets for Dutch CoNLL-03 (2061)
- Changed the torch version in requirements.txt: Torch>=1.5.0 (2063)
- Fix linear input dimension when using reprojection (2073)
- Various improvements for TARS (2090 2128)
- Added a link to the interpret-flair repo (2096)
- Improve documentation (2110)
- Update sentencepiece and gdown version (2131)
- Add to_plain_string method to Span class (2091)

0.7

Few-Shot and Zero-Shot Classification with TARS (1917 1926)

With TARS we add a major new feature to Flair for zero-shot and few-shot classification. Details on the approach can be found in our paper [Halder et al. (2020)](https://kishaloyhalder.github.io/pdfs/tars_coling2020.pdf). Our approach allows you to classify text in cases in which you have little or even no training data at all.

This example illustrates how you predict new classes without training data:

```python
import flair
from flair.models import TARSClassifier

# 1. Load our pre-trained TARS model for English
tars = TARSClassifier.load('tars-base')

# 2. Prepare a test sentence
sentence = flair.data.Sentence("I am so glad you liked it!")

# 3. Define some classes that you want to predict using descriptive names
classes = ["happy", "sad"]

# 4. Predict for these classes
tars.predict_zero_shot(sentence, classes)

# Print sentence with predicted labels
print(sentence)
```


For a full overview of TARS features, please refer to our new [TARS tutorial](/resources/docs/TUTORIAL_10_TRAINING_ZERO_SHOT_MODEL.md).


Other New Features

Option to set Flair seed (1979)

Adds the possibility to set a seed via wrapping the Hugging Face Transformers library helper method (thanks stefan-it).

By specifying a seed with:

```python
import flair

flair.set_seed(42)
```


you can make experimental runs reproducible. The wrapped `set_seed` method sets seeds for `random`, `numpy` and `torch`. More details [here](https://github.com/huggingface/transformers/blob/08f534d2da47875a4b7eb1c125cfa7f0f3b79642/src/transformers/trainer_utils.py#L29-L48).

Control multi-word behavior in UD datasets (1981)

To better handle multi-words in UD corpora, we introduce the `split_multiwords` constructor argument to all UD corpora, which by default is set to `True`. It controls the handling of multiwords that are split into different tokens. For instance, the German "am" is split into two different tokens: "am" -> "an" + "dem". Or the French "aux" -> "à" + "les".

If `split_multiwords` is set to `True`, they are split as in UD. If set to `False`, we keep the original multiword as a single token. Example:

```python
# default mode: multiwords are split
corpus = UD_GERMAN(split_multiwords=True)
# print sentence 179
print(corpus.dev[179].to_plain_string())

# alternative mode: multiwords are kept as original
corpus = UD_GERMAN(split_multiwords=False)
# print sentence 179
print(corpus.dev[179].to_plain_string())
```


This prints

~~~
Ein Hotel zu dem Wohlfühlen.

Ein Hotel zum Wohlfühlen.
~~~

The latter is how it appears in text, the former is after splitting of multiwords.

Pass pretokenized sentence to Sentence object (1965)

You can now pass a pretokenized sequence as a list of words (thanks ulf1):

```python
from flair.data import Sentence
sentence = Sentence(['The', 'grass', 'is', 'green', '.'])
print(sentence)
```


This should print:

```console
Sentence: "The grass is green ." [− Tokens: 5]
```


Map label names in sequence labeling datasets (1988)

You can now pass a label map to sequence labeling datasets to change label names (thanks pharnisch).

```python
# print tag dictionary with mapped names
corpus = CONLL_03_DUTCH(label_name_map={'PER': 'person', 'ORG': 'organization', 'LOC': 'location', 'MISC': 'other'})
print(corpus.make_tag_dictionary('ner'))

# print tag dictionary with original names
corpus = CONLL_03_DUTCH()
print(corpus.make_tag_dictionary('ner'))
```


Data Sets

Universal Proposition Banks (1870 1866 1888)

Flair 0.7 adds support for 7 Universal Proposition Banks to train your own multilingual semantic role labelers (thanks to Dabendorf).

Load for instance with:

```python
# load English Universal Proposition Bank
corpus = UP_ENGLISH()
print(corpus)

# make dictionary of frames
frame_dictionary = corpus.make_tag_dictionary('frame')
print(frame_dictionary)
```


Now available for Finnish, Chinese, Italian, French, German, Spanish and English

NER Corpora

We add support for 6 new NER corpora:

Arabic NER Corpus (1901)

Added the ANER corpus for Arabic NER (thanks to megantosh).

```python
# load Arabic NER corpus
corpus = ANER_CORP()
print(corpus)
```


Movie NER Corpora (1912)

Added the MIT movie reviews corpora annotated with NER information, in the simple and complex variant (thanks to pharnisch).

```python
# load simple movie NER corpus
corpus = MITMovieNERSimple()
print(corpus)
print(corpus.make_tag_dictionary('ner'))

# load complex movie NER corpus
corpus = MITMovieNERComplex()
print(corpus)
print(corpus.make_tag_dictionary('ner'))
```


Added SEC Fillings NER corpus (1922)

Added a corpus of SEC filings annotated with 4-class NER tags (thanks to samahakk).

```python
# load SEC filings corpus
corpus = SEC_FILLINGS()
print(corpus)
print(corpus.make_tag_dictionary('ner'))
```


WNUT 2020 NER dataset support (1942)

Added a corpus of wet lab protocols annotated with NER information, used for the WNUT 2020 challenge (thanks to aynetdia).

```python
# load wet lab protocol data
corpus = WNUT_2020_NER()
print(corpus)
print(corpus.make_tag_dictionary('ner'))
```


Weibo NER dataset support (1944)

Added a dataset for NER on Chinese social media (thanks to 87302380).

```python
# load Weibo NER data
corpus = WEIBO_NER()
print(corpus)
print(corpus.make_tag_dictionary('ner'))
```


Added Finnish NER corpus (1946)

Added the TURKU corpus for Finnish NER (thanks to melvelet).

```python
# load Finnish NER data
corpus = TURKU_NER()
print(corpus)
print(corpus.make_tag_dictionary('ner'))
```


Universal Dependency Treebanks

We add support for 11 new UD treebanks:

- Greek UD Treebank (1933, thanks malamasn)
- Livvi UD Treebank (1953, thanks hebecked)
- Naija UD Treebank (1952, thanks teddim420)
- Buryat UD Treebank (1954, thanks MaxDall)
- North Sami UD Treebank (1955, thanks dobbersc)
- Maltese UD Treebank (1957, thanks phkuep)
- Marathi UD Treebank (1958, thanks polarlyset)
- Afrikaans UD Treebank (1959, thanks QueStat)
- Gothic UD Treebank (1961, thanks wjSimon)
- Old French UD Treebank (1964, thanks Weyaaron)
- Wolof UD Treebank (1967, thanks LukasOpp)

Load each with language name, for instance:

```python
# load Gothic UD treebank data
corpus = UD_GOTHIC()
print(corpus)
print(corpus.test[0])
```


Added GoEmotions text classification corpus (1914)

Added the [GoEmotions dataset](https://github.com/google-research/google-research/tree/master/goemotions) containing 58k Reddit comments labeled with 27 emotion categories. Load with:

```python
# load GoEmotions corpus
corpus = GO_EMOTIONS()
print(corpus)
print(corpus.make_label_dictionary())
```


Enhancements and bug fixes
- Add handling for micro-average precision and recall (1935)
- Make dev and test splits in treebanks optional (1951)
- Updated communicative functions model (1857)
- Biomedical Data: Explicit encodings for Windows Support (1893)
- Fix wrong abstract method (1923 1940)
- Improve tutorial (1939)
- Fix requirements (1971)

0.6.1

Release 0.6.1 is a bugfix release that fixes issues caused by moving the server that originally hosted the Flair models. Additionally, this release adds a ton of new NER datasets, including the XTREME corpus for 40 languages, and a new model for NER on German-language legal text.

New Model: Legal NER (1872)

Add legal NER model for German. Trained using the German legal NER dataset available [here](https://github.com/elenanereiss/Legal-Entity-Recognition) that can be loaded in Flair with the `LER_GERMAN` corpus object.

Uses German Flair and FastText embeddings and achieves an F1 score of **96.35**.

Use like this:

```python
# load German LER tagger
tagger = SequenceTagger.load('de-ler')

# example text
text = "vom 6. August 2020. Alle Beschwerdeführer befinden sich derzeit gemeinsam im Urlaub auf der Insel Mallorca , die vom Robert-Koch-Institut als Risikogebiet eingestuft wird. Sie wollen am 29. August 2020 wieder nach Deutschland einreisen, ohne sich gemäß § 1 Abs. 1 bis Abs. 3 der Verordnung zur Testpflicht von Einreisenden aus Risikogebieten auf das SARS-CoV-2-Virus testen zu lassen. Die Verordnung sei wegen eines Verstoßes der ihr zugrunde liegenden gesetzlichen Ermächtigungsgrundlage, des § 36 Abs. 7 IfSG , gegen Art. 80 Abs. 1 Satz 1 GG verfassungswidrig."

sentence = Sentence(text)

# predict and print entities
tagger.predict(sentence)

for entity in sentence.get_spans('ner'):
    print(entity)
```


New Datasets

Add XTREME and WikiANN corpora for multilingual NER (1862)

These huge corpora provide training data for NER in 176 languages. You can either load the language-specific parts of it by supplying a language code:

```python
# load German Xtreme
german_corpus = XTREME('de')
print(german_corpus)

# load French Xtreme
french_corpus = XTREME('fr')
print(french_corpus)
```


Or you can load the default 40 languages at once into one huge MultiCorpus by not providing a language ID:

```python
# load Xtreme MultiCorpus for all
multi_corpus = XTREME()
print(multi_corpus)
```


Add Twitter NER Dataset (1850)

Dataset of [tweets](https://raw.githubusercontent.com/aritter/twitter_nlp/master/data/annotated/ner.txt) annotated with NER tags. Load with:

```python
# load twitter dataset
corpus = TWITTER_NER()

# print example tweet
print(corpus.test[0])
```


Add German Europarl NER Dataset (1849)

Dataset of German-language speeches in the European parliament annotated with standard NER tags like person and location. Load with:

```python
# load corpus
corpus = EUROPARL_NER_GERMAN()
print(corpus)

# print a test sentence
print(corpus.test[1])
```


Add MIT Restaurant NER Dataset (1177)

Dataset of English restaurant reviews annotated with entities like "dish", "location" and "rating". Load with:

```python
# load restaurant dataset
corpus = MIT_RESTAURANTS()

# print example sentence
print(corpus.test[0])
```


Add Universal Propositions Banks for French and German (1866)

Our kickoff into supporting the [Universal Proposition Banks](https://github.com/System-T/UniversalPropositions) adds the first two UP datasets to Flair. Load with:

```python
# load German UP
corpus = UP_GERMAN()
print(corpus)

# print example sentence
print(corpus.dev[1])
```


Add Universal Dependencies Dataset for Chinese (1880)

Adds the Kyoto dataset for Chinese. Load with:

```python
# load Chinese UD dataset
corpus = UD_CHINESE_KYOTO()

# print example sentence
print(corpus.test[0])
```


Bug fixes

- Move models to HU server (1834 1839 1842)
- Fix deserialization issues in transformer tokenizers (1865)
- Documentation fixes (1819 1821 1836 1852)
- Add link to a repo with examples of Flair on GCP (1825)
- Correct variable names (1875)
- Fix problem with custom delimiters in ColumnDataset (1876)
- Fix offensive language detection model (1877)
- Correct Dutch NER model (1881)
