Train a text labeler

This article is part of a tutorial series on txtai, an AI-powered search engine.

The Hugging Face Model Hub has a wide range of models that can handle many tasks. While these models perform well, the best performance often is found when fine-tuning a model with task-specific data.

Hugging Face provides a number of full-featured example scripts to assist with training task-specific models. When building models from the command line, these scripts are a great way to get started.

txtai provides a training pipeline that can be used to train new models programmatically using the Transformers Trainer framework. The training pipeline supports the following:

  • Building transient models without requiring an output directory
  • Loading training data from Hugging Face datasets, pandas DataFrames and lists of dicts
  • Text sequence classification tasks (single/multi label classification and regression) including all GLUE tasks
  • All training arguments

This article shows examples of how to use txtai to train/fine-tune new models.

Install dependencies

Install txtai and all dependencies.

pip install txtai

Train a model

Let's get right to it! The following example fine-tunes a tiny BERT model with the sst2 dataset.

The trainer pipeline is basically a one-liner that fine-tunes any text classification/regression model available locally or on the Hugging Face Hub.

from datasets import load_dataset

from txtai.pipeline import HFTrainer

trainer = HFTrainer()

# Hugging Face dataset
ds = load_dataset("glue", "sst2")
model, tokenizer = trainer("google/bert_uncased_L-2_H-128_A-2", ds["train"], columns=("sentence", "label"))

The default trainer pipeline functionality will not store any logs, checkpoints or models to disk. The trainer can take any of the standard TrainingArguments to enable persistent models.
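For example, here is a minimal sketch that persists the model to disk, assuming keyword arguments are passed through as TrainingArguments (the output directory and hyperparameter values below are illustrative):

# Store logs, checkpoints and the trained model under "labeler"
model, tokenizer = trainer(
    "google/bert_uncased_L-2_H-128_A-2",
    ds["train"],
    columns=("sentence", "label"),
    output_dir="labeler",
    num_train_epochs=3,
    per_device_train_batch_size=16,
)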

The next section creates a Labels pipeline using the newly built model and runs the model against the sst2 validation set.

from txtai.pipeline import Labels

labels = Labels((model, tokenizer), dynamic=False)

# Determine accuracy on validation set
results = [row["label"] == labels(row["sentence"])[0][0] for row in ds["validation"]]
sum(results) / len(ds["validation"])
0.8188073394495413

81.88% accuracy - not bad for a tiny BERT model.
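The trained labeler can also classify new text directly. A quick sketch (the input sentence is illustrative):

# Returns a list of (label id, score) tuples sorted by score
# For sst2, label 1 is positive and label 0 is negative
labels("I am very happy with this purchase")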

Train a model with Lists

As mentioned earlier, the trainer pipeline supports Hugging Face datasets, pandas DataFrames and lists of dicts. The example below trains a model using lists.

data = [{"text": "This is a test sentence", "label": 0}, {"text": "This is not a test", "label": 1}]

model, tokenizer = trainer("google/bert_uncased_L-2_H-128_A-2", data)
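A validation split can also be held out from the training data. A sketch, assuming the trainer accepts a validation argument (the data below is illustrative):

# Hold out labeled examples for evaluation
validation = [{"text": "Another test sentence", "label": 0}]

model, tokenizer = trainer("google/bert_uncased_L-2_H-128_A-2", data, validation=validation)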

Train a model with DataFrames

The next section builds a new model using data stored in a pandas DataFrame.

import pandas as pd

df = pd.DataFrame(data)

model, tokenizer = trainer("google/bert_uncased_L-2_H-128_A-2", df)
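The same call also works for data loaded from a file. A sketch, assuming a CSV with text and label columns (the file name is illustrative):

# Load training data from a CSV file with "text" and "label" columns
df = pd.read_csv("training-data.csv")

model, tokenizer = trainer("google/bert_uncased_L-2_H-128_A-2", df)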

Train a regression model

The previous examples were classification tasks. The following example trains a sentence similarity model with a regression output per sentence pair, between 0 (dissimilar) and 1 (similar).

ds = load_dataset("glue", "stsb")
model, tokenizer = trainer("google/bert_uncased_L-2_H-128_A-2", ds["train"], columns=("sentence1", "sentence2", "label"))
labels = Labels((model, tokenizer), dynamic=False)
labels([("Sailing to the arctic", "Dogs and cats don't get along"), 
        ("Walking down the road", "Walking down the street")])
[[(0, 0.551963746547699)], [(0, 0.9760823845863342)]]
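Each result is a single (index, score) tuple, where the score is the predicted similarity for the pair. As expected, the second pair of sentences scores as much more similar than the first.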

