Bootstrapping a Sentence-Level Corpus Quality Classifier for Web Text using Active Learning (RANLP25)

A multi-label sentence classifier trained with active learning that predicts high- or low-quality labels for German web text.

Training and evaluation code: https://github.com/maximilian-bley/german-webtext-quality-classification

Model Details

Labels

  • 0=Sentence Boundary: Sentence boundary errors occur if the start or ending of a sentence is malformed. This is the case if it begins with a lower case letter or an atypical character, or lacks a proper terminal punctuation mark (e.g., period, exclamation mark, or question mark).

  • 1=Grammar Mistake: Grammar mistakes are grammatical errors such as incorrect articles, cases, or word order, and the incorrect use or absence of words. Random-looking sequences of words, usually series of nouns, should also be tagged. In most cases where this label applies, the sentence's comprehensibility or message is impaired.

  • 2=Spelling Anomaly: A spelling anomaly is tagged when a word does not correspond to German spelling. This includes typos and incorrect capitalization (e.g., "all caps" or lower-case nouns). Spelling anomalies are irregularities that occur within the word boundary, here meaning the text between two whitespaces. In particular, individual letters or nonsensical word fragments are also tagged.

  • 3=Punctuation Error: Punctuation errors are tagged if a punctuation symbol has been placed incorrectly or is missing in the intended place. This includes comma errors, missing quotation marks or parentheses, periods instead of question marks or incorrect or missing dashes or hyphens.

  • 4=Non-linguistic Content: Non-linguistic content includes all types of encoding errors, language-atypical occurrences of numbers and characters (e.g. random sequences of characters or letters), code (remnants), URLs, hashtags and emoticons.

  • 5=Letter Spacing: Letter spacings are deliberately inserted spaces between the characters of a word.

  • 6=Clean: Assigned if none of the other labels apply.
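Since the classifier is multi-label, a sentence can carry several of these labels at once (e.g., a spelling anomaly and a punctuation error in the same sentence). A minimal sketch of decoding a 7-dimensional multi-hot prediction into label names; the label names come from the list above, while the example vector is a hypothetical model output:

```python
# Label indices as defined in the list above
LABELS = [
    "Sentence Boundary",       # 0
    "Grammar Mistake",         # 1
    "Spelling Anomaly",        # 2
    "Punctuation Error",       # 3
    "Non-linguistic Content",  # 4
    "Letter Spacing",          # 5
    "Clean",                   # 6
]

def decode(multi_hot):
    """Map a 7-dimensional multi-hot prediction to label names."""
    return [LABELS[i] for i, flag in enumerate(multi_hot) if flag]

# Hypothetical prediction: spelling anomaly and punctuation error co-occur
print(decode([0, 0, 1, 1, 0, 0, 0]))  # ['Spelling Anomaly', 'Punctuation Error']
```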

Results

Per-label scores (labels 0–6 as defined above) and macro/micro/sample averages:

Label                       F1     Precision   Recall
0 Sentence Boundary         0.96   0.95        0.98
1 Grammar Mistake           0.86   0.85        0.87
2 Spelling Anomaly          0.57   0.74        0.47
3 Punctuation Error         0.62   0.68        0.56
4 Non-linguistic Content    0.77   0.80        0.75
5 Letter Spacing            0.94   1.00        0.88
6 Clean                     0.86   0.83        0.89
Macro average               0.80   0.83        0.77
Micro average               0.84   0.85        0.82
Sample average              0.81   0.82        0.82

Subset accuracy: 0.67
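The macro scores are unweighted means of the per-label values (micro and sample averages additionally need per-label supports and per-sentence predictions, so they cannot be recomputed from the summary alone, and the recomputed macro precision differs in the last digit because the per-label values are themselves rounded). A quick check of the macro averaging using the per-label numbers above:

```python
# Per-label scores from the results above (labels 0-6)
f1 = [0.96, 0.86, 0.57, 0.62, 0.77, 0.94, 0.86]
recall = [0.98, 0.87, 0.47, 0.56, 0.75, 0.88, 0.89]

# Macro average = unweighted mean over the 7 labels
macro_f1 = sum(f1) / len(f1)
macro_recall = sum(recall) / len(recall)

print(round(macro_f1, 2))      # 0.8  (matches the reported macro F1)
print(round(macro_recall, 2))  # 0.77 (matches the reported macro recall)
```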

Model Description

  • Model Type: SetFit
  • Classification head: a SetFitHead instance
  • Maximum Sequence Length: 512 tokens
  • Number of Classes: 7
  • Language: German
  • Model Size: 135M parameters (F32)

Model Sources

  • Repository: https://github.com/maximilian-bley/german-webtext-quality-classification
  • Paper: Bootstrapping a Sentence-Level Corpus Quality Classifier for Web Text using Active Learning (RANLP 2025)

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mbley/german-webtext-quality-classifier-base")
# Run inference on a low-quality example
# (character-level spacing and non-German script)
preds = model("在 Greding 出 口 离 开 A9 高 速 公 路 。")

Training Details

Training Hyperparameters

  • batch_size: (8, 8)
  • num_epochs: (1, 16)
  • max_steps: -1
  • sampling_strategy: oversampling
  • body_learning_rate: (2e-05, 1e-05)
  • head_learning_rate: 0.01
  • loss: CoSENTLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: True
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • max_length: 512
  • seed: 13579
  • eval_max_steps: -1
  • load_best_model_at_end: False
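The hyperparameters listed above correspond to the fields of SetFit's TrainingArguments. A sketch of how they could be wired into a training run, assuming SetFit 1.x; this is an untested configuration fragment, with tuple values covering the (embedding, classifier) phases and the loss class taken from sentence-transformers:

```python
from setfit import TrainingArguments
from sentence_transformers.losses import CoSENTLoss

args = TrainingArguments(
    batch_size=(8, 8),                  # (embedding phase, classifier phase)
    num_epochs=(1, 16),
    max_steps=-1,                       # no step cap; epochs govern training
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    loss=CoSENTLoss,
    # distance_metric left at its default (cosine_distance)
    margin=0.25,
    end_to_end=True,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    max_length=512,
    seed=13579,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
```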

Framework Versions

  • Python: 3.10.4
  • SetFit: 1.1.2
  • Sentence Transformers: 4.1.0
  • Transformers: 4.52.3
  • PyTorch: 2.7.0+cu126
  • Datasets: 3.6.0
  • Tokenizers: 0.21.1

Citation

BibTeX

@article{tunstall2022efficient,
    doi = {10.48550/arXiv.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}