---
datasets:
- squad
library_name: transformers
pipeline_tag: question-answering
---

This model is a fine-tuned version of huawei-noah/TinyBERT_General_4L_312D on the SQuAD dataset.

### Training parameters

- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 32
- optimizer: paged_adamw_32bit
- num_epochs: 1

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
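
### How to use

A minimal inference sketch using the `transformers` pipeline API. The model ID below is a placeholder; replace it with this repository's ID on the Hugging Face Hub.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for extractive question answering.
# "<your-username>/<this-model>" is a placeholder for this repository's Hub ID.
qa = pipeline("question-answering", model="<your-username>/<this-model>")

result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], result["score"])
```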
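
### Example training configuration

The hyperparameters listed under Training parameters above roughly correspond to the following `TrainingArguments` sketch. This is not the exact training script: `output_dir` is a hypothetical name, dataset preprocessing is assumed to follow the standard SQuAD question-answering recipe, and the `paged_adamw_32bit` optimizer requires the `bitsandbytes` package and a CUDA device.

```python
from transformers import TrainingArguments

# Sketch of the training configuration reported in this card.
training_args = TrainingArguments(
    output_dir="tinybert-squad",        # hypothetical output directory
    learning_rate=1e-5,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=32,
    num_train_epochs=1,
    optim="paged_adamw_32bit",          # paged AdamW via bitsandbytes
)
```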