opus-mt-ar-en-finetuned_augmented_MTback-ar-to-en

This model is a fine-tuned version of Helsinki-NLP/opus-mt-ar-en on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.8227
  • Bleu: 66.3415
  • Gen Len: 59.569
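
Since the base model is MarianMT (Helsinki-NLP/opus-mt-ar-en), the fine-tuned checkpoint should load with the generic `Auto*` classes. The sketch below assumes the model is published on the Hub as `asas-ai/opus-mt-ar-en-finetuned_augmented_MT-ar-to-en` (the repo id shown on this page); treat the repo id and the snippet as assumptions, not a verified usage example.

```python
# Minimal inference sketch for this Arabic-to-English model (assumed repo id).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "asas-ai/opus-mt-ar-en-finetuned_augmented_MT-ar-to-en"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Translate one Arabic sentence ("Hello, world") to English.
batch = tokenizer(["مرحبا يا عالم"], return_tensors="pt", padding=True)
outputs = model.generate(**batch, max_length=128)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```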

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
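
The hyperparameters above map directly onto `Seq2SeqTrainingArguments` from transformers. A hedged sketch, assuming the standard `Seq2SeqTrainer` setup (the `output_dir` is a placeholder, and the Adam betas/epsilon listed above are the Trainer's defaults, so they need no explicit arguments):

```python
# Configuration fragment only: reconstructs the listed hyperparameters.
# evaluation_strategy / predict_with_generate are assumptions inferred from
# the per-epoch BLEU and Gen Len columns in the results table.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-ar-en-finetuned",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",
    predict_with_generate=True,
)
```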

Training results

Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len
:------------:|:-----:|:----:|:---------------:|:-------:|:------:
2.3965 | 1.0 | 1098 | 1.1276 | 52.6355 | 62.191
2.0086 | 2.0 | 2196 | 0.9785 | 58.7217 | 61.399
1.7825 | 3.0 | 3294 | 0.9172 | 61.1046 | 61.316
1.6434 | 4.0 | 4392 | 0.8788 | 63.501 | 60.232
1.5295 | 5.0 | 5490 | 0.8571 | 64.7425 | 59.709
1.4316 | 6.0 | 6588 | 0.8419 | 65.7013 | 59.381
1.3766 | 7.0 | 7686 | 0.8315 | 65.9805 | 59.585
1.3241 | 8.0 | 8784 | 0.8254 | 66.2432 | 59.516
1.2965 | 9.0 | 9882 | 0.8238 | 66.2241 | 59.604
1.2877 | 10.0 | 10980 | 0.8227 | 66.3415 | 59.569

Framework versions

  • Transformers 4.31.0
  • PyTorch 2.0.1+cu118
  • Datasets 2.14.4
  • Tokenizers 0.13.3