GIST-all-MiniLM-L6-v2 Text Quality Model

This 22.7M-parameter model rates the quality of English text for AI training purposes: given an input text string, it outputs a single numeric quality score reflecting the text's overall informativeness and usefulness.

Performance

On the evaluation set, it achieved:

  • Loss: 0.1572
  • MSE: 0.1572
  • Combined Score: 0.1572
  • Tokens processed during training: 102,398,720

Loss, MSE, and combined score coincide because the model is trained as a single-output regressor with a mean-squared-error objective; the reported values match the epoch 2 checkpoint, which has the lowest validation loss in the training results below.

Usage Example

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "agentlans/GIST-all-MiniLM-L6-v2-quality-v3"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
model.eval()  # inference mode

# Higher scores indicate higher text quality. The sign of the score has no
# particular meaning: a negative score doesn't necessarily mean the text is
# low quality, since scores are only meaningful relative to one another.
def quality(text):
    """Return a scalar quality score for one text string."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True).to(model.device)
    with torch.no_grad():
        score = model(**inputs).logits.squeeze().cpu().item()
    return score

print(quality("Your text here."))
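
For scoring many texts, batching is much faster than calling quality() one string at a time. Below is a minimal batched variant, reusing the tokenizer and model loaded above; the batch size of 32 is an arbitrary choice, not a recommendation from the model card:

def quality_batch(texts, batch_size=32):
    """Return a list of quality scores, one per input text."""
    scores = []
    for i in range(0, len(texts), batch_size):
        batch = texts[i:i + batch_size]
        inputs = tokenizer(batch, return_tensors="pt",
                           truncation=True, padding=True).to(model.device)
        with torch.no_grad():
            logits = model(**inputs).logits.squeeze(-1)  # shape: (len(batch),)
        scores.extend(logits.cpu().tolist())
    return scores

print(quality_batch(["First text.", "Second text."]))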

Limitations

  • Works best on non-fiction and general-purpose English text.
  • Scores give an overall quality estimate but don't explain why a text scores the way it does; in practice they are best used for ranking and filtering (see the sketch below).
  • For higher accuracy, try the larger agentlans/deberta-v3-base-quality-v3.
  • Check for biases and suitability for your use case before deploying.
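
Because scores are relative rather than absolute (see the sign caveat in the usage example), a practical way to apply them for data curation is to rank a corpus and keep the top fraction instead of choosing a fixed cutoff. A minimal sketch using the quality() function defined above; the corpus and the 50% keep fraction are hypothetical:

corpus = ["First document...", "Second document...", "Third document..."]  # hypothetical corpus
ranked = sorted(corpus, key=quality, reverse=True)  # highest-scoring first
kept = ranked[: max(1, len(ranked) // 2)]           # keep the top half (example threshold)
print(kept)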

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: linear
  • num_epochs: 10.0
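
These settings map directly onto Hugging Face TrainingArguments. The following is a minimal sketch of an equivalent setup, not the author's actual training script: the base checkpoint, output directory, and dataset variables are assumptions, and num_labels=1 makes the Trainer apply an MSE regression loss.

from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

# Assumed base checkpoint, inferred from the model name.
model = AutoModelForSequenceClassification.from_pretrained(
    "avsolatorio/GIST-all-MiniLM-L6-v2", num_labels=1  # single-logit regression head
)

args = TrainingArguments(
    output_dir="GIST-all-MiniLM-L6-v2-quality-v3",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",         # AdamW with default betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    num_train_epochs=10.0,
)

# train_dataset / eval_dataset are placeholders for the (unspecified) quality dataset.
trainer = Trainer(model=model, args=args,
                  train_dataset=train_dataset, eval_dataset=eval_dataset)
trainer.train()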

Training results

Training Loss   Epoch   Step      Validation Loss   MSE      Combined Score   Input Tokens Seen
0.1777          1.0     10000     0.2354            0.2354   0.2354           10,239,872
0.1389          2.0     20000     0.1572            0.1572   0.1572           20,479,744
0.1000          3.0     30000     0.1961            0.1961   0.1961           30,719,616
0.0687          4.0     40000     0.1596            0.1596   0.1596           40,959,488
0.0559          5.0     50000     0.1757            0.1757   0.1757           51,199,360
0.0409          6.0     60000     0.1677            0.1677   0.1677           61,439,232
0.0319          7.0     70000     0.1852            0.1852   0.1852           71,679,104
0.0266          8.0     80000     0.1840            0.1840   0.1840           81,918,976
0.0202          9.0     90000     0.1724            0.1724   0.1724           92,158,848
0.0172          10.0    100000    0.1731            0.1731   0.1731           102,398,720

Framework versions

  • Transformers 4.51.3
  • Pytorch 2.6.0+cu124
  • Datasets 3.2.0
  • Tokenizers 0.21.0