---
library_name: transformers
license: mit
base_model: agentlans/deberta-v3-base-zyda-2-v2
tags:
  - generated_from_trainer
model-index:
  - name: deberta-v3-base-zyda-2-v2-text-quality-v3
    results: []
---

# deberta-v3-base-zyda-2-v2-text-quality-v3

This model is a fine-tuned version of agentlans/deberta-v3-base-zyda-2-v2 on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.1408
- MSE: 0.1408
- Combined score: 0.1408
- Input tokens seen: 102398720

## Model description

More information needed

## Intended uses & limitations

More information needed
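The evaluation metrics above report a single MSE-style loss, which suggests the checkpoint exposes a single-output regression head for scoring text quality. A minimal scoring sketch under that assumption (the example text and function names are illustrative, not from this card):

```python
import torch


def logits_to_score(logits: torch.Tensor) -> float:
    """Collapse a (1, 1) regression-head output to a plain float."""
    return logits.squeeze().item()


def score_text(text: str, model, tokenizer) -> float:
    """Return the model's scalar quality score for a piece of text."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits  # expected shape: (1, 1)
    return logits_to_score(logits)


if __name__ == "__main__":
    # Loading the checkpoint requires network access to the Hub.
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    name = "agentlans/deberta-v3-base-zyda-2-v2-text-quality-v3"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name)
    print(score_text("A clear, well-written paragraph.", model, tokenizer))
```

If the head is instead a classifier, replace `logits_to_score` with an argmax over the label dimension.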

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10.0
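The hyperparameters above map onto a `transformers.TrainingArguments` configuration roughly as follows. This is a sketch: `output_dir` is a placeholder, and the AdamW betas/epsilon are passed explicitly even though they match the defaults.

```python
from transformers import TrainingArguments

# Mirrors the reported hyperparameters; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="deberta-v3-base-zyda-2-v2-text-quality-v3",
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=10.0,
)
```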

### Training results

| Training Loss | Epoch | Step   | Validation Loss | MSE    | Combined Score | Input Tokens Seen |
|:-------------:|:-----:|:------:|:---------------:|:------:|:--------------:|:-----------------:|
| 0.1635        | 1.0   | 10000  | 0.1854          | 0.1854 | 0.1854         | 10239872          |
| 0.1241        | 2.0   | 20000  | 0.1408          | 0.1408 | 0.1408         | 20479744          |
| 0.0882        | 3.0   | 30000  | 0.1747          | 0.1747 | 0.1747         | 30719616          |
| 0.054         | 4.0   | 40000  | 0.1528          | 0.1528 | 0.1528         | 40959488          |
| 0.0372        | 5.0   | 50000  | 0.1480          | 0.1480 | 0.1480         | 51199360          |
| 0.0263        | 6.0   | 60000  | 0.1524          | 0.1524 | 0.1524         | 61439232          |
| 0.0203        | 7.0   | 70000  | 0.1495          | 0.1495 | 0.1495         | 71679104          |
| 0.0135        | 8.0   | 80000  | 0.1482          | 0.1482 | 0.1482         | 81918976          |
| 0.0098        | 9.0   | 90000  | 0.1450          | 0.1450 | 0.1450         | 92158848          |
| 0.0073        | 10.0  | 100000 | 0.1453          | 0.1453 | 0.1453         | 102398720         |

### Framework versions

- Transformers 4.51.3
- PyTorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0