roberta-base (Spanish) for QA

This model is a fine-tuned version of PlanTL-GOB-ES/roberta-base-bne on the squad_es (v2) training dataset.
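A minimal usage sketch with the transformers question-answering pipeline (the model id matches the Hub repo name; the example question and context are purely illustrative):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
qa = pipeline(
    "question-answering",
    model="somosnlp-hackathon-2022/roberta-base-bne-squad2-es",
)

# Ask a question against a short Spanish context
result = qa(
    question="¿Dónde vivo?",
    context="Me llamo Sara y vivo en Madrid.",
)
print(result["answer"], result["score"])
```

The pipeline returns a dict with the extracted answer span, its character offsets, and a confidence score; for SQuAD-v2-style models a low score can indicate the question is unanswerable from the context.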

Hyperparameters

The hyperparameters were chosen based on those used in deepset/roberta-base-squad2, an English-language model trained for a similar purpose.

 --num_train_epochs 2
 --learning_rate  3e-5
 --max_seq_length 386
 --doc_stride 128 
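The flags above can be assembled into a full fine-tuning command. This is a hedged sketch assuming the standard Hugging Face transformers question-answering example script (run_qa.py); the dataset name, config name (v2.0.0), and the --version_2_with_negative flag are assumptions not stated in this card:

```shell
# Sketch of a reproduction command using the transformers run_qa.py
# example script; dataset/config names are assumptions.
python run_qa.py \
  --model_name_or_path PlanTL-GOB-ES/roberta-base-bne \
  --dataset_name squad_es \
  --dataset_config_name v2.0.0 \
  --version_2_with_negative \
  --do_train \
  --do_eval \
  --num_train_epochs 2 \
  --learning_rate 3e-5 \
  --max_seq_length 386 \
  --doc_stride 128 \
  --output_dir ./roberta-base-bne-squad2-es
```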

Performance

Evaluated on the squad_es (v2) dev set.

  "eval_exact": 62.13526733007252,
  "eval_f1": 69.38515019522332,

  "eval_HasAns_exact": 53.07017543859649,
  "eval_HasAns_f1": 67.57238714827123,
  "eval_HasAns_total": 5928,
  "eval_NoAns_exact": 71.19730185497471,
  "eval_NoAns_f1": 71.19730185497471,
  "eval_NoAns_total": 5930

Team

Santiago Maximo: smaximo

Model size: 124M parameters (Safetensors; tensor types I64, F32)
