qa-persian

This model is a fine-tuned version of HooshvareLab/bert-fa-base-uncased on the persian_qa and pquad datasets for extractive question answering in Persian.
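
A minimal usage sketch with the standard transformers question-answering pipeline; the example question and context below are illustrative only and are not taken from the training data.

```python
from transformers import pipeline

# Load the fine-tuned model into a question-answering pipeline
qa = pipeline("question-answering", model="AliBagherz/qa-persian")

result = qa(
    question="پایتخت ایران کجاست؟",      # "What is the capital of Iran?"
    context="تهران پایتخت ایران است.",   # "Tehran is the capital of Iran."
)

# The pipeline returns the extracted answer span and its confidence score
print(result["answer"], result["score"])
```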

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 3e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1
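
A hedged sketch of a Trainer configuration matching the hyperparameters above; the dataset loading, QA preprocessing, and the commented-out dataset arguments are assumptions, not taken from the original training script.

```python
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    TrainingArguments,
    Trainer,
)

# Base checkpoint named in the model card
model_name = "HooshvareLab/bert-fa-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

# Mirrors the listed hyperparameters
args = TrainingArguments(
    output_dir="qa-persian",
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=1,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

# train_dataset / eval_dataset would be the persian_qa and pquad examples
# tokenized into QA features (question + context with start/end positions);
# that preprocessing is omitted here.
trainer = Trainer(
    model=model,
    args=args,
    # train_dataset=train_features,
    # eval_dataset=eval_features,
    tokenizer=tokenizer,
)
# trainer.train()
```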

Framework versions

  • Transformers 4.33.2
  • PyTorch 2.0.1+cu118
  • Datasets 2.14.5
  • Tokenizers 0.13.3

Code

The training code is available in the Persian QA GitHub repository.
