# SpanBERT (spanbert-base-cased) fine-tuned on SQuAD v1.1
[SpanBERT](https://github.com/facebookresearch/SpanBERT), created by Facebook Research and fine-tuned on SQuAD 1.1 for the Q&A downstream task.
## Details of SpanBERT
A pre-training method that is designed to better represent and predict spans of text.
[SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/abs/1907.10529)
## Details of the downstream task (Q&A) - Dataset
SQuAD 1.1 contains 100,000+ question-answer pairs on 500+ articles.
| Dataset  | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 87.7k     |
| SQuAD1.1 | eval  | 10.6k     |
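These splits match what ships with the 🤗 `datasets` library; as a quick sanity check (a minimal sketch, assuming the dataset id `squad`):

```python
from datasets import load_dataset

# SQuAD v1.1 as published on the Hugging Face Hub
squad = load_dataset("squad")

print(squad["train"].num_rows)       # 87599 (~87.7k)
print(squad["validation"].num_rows)  # 10570 (~10.6k)
```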
## Model training
The model was trained on a Tesla P100 GPU with 25 GB of RAM. The fine-tuning script can be found here.
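As a rough sketch of the starting point for that fine-tuning (assuming the upstream checkpoint is published on the Hub as `SpanBERT/spanbert-base-cased`), the base encoder can be loaded with a fresh question-answering head:

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

ckpt = "SpanBERT/spanbert-base-cased"  # assumed Hub id of the upstream checkpoint
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForQuestionAnswering.from_pretrained(ckpt)
# The QA head (span start/end classifiers) is randomly initialized here;
# fine-tuning on SQuAD 1.1 trains it jointly with the pre-trained encoder.
```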
Results:
| Metric | Value |
| ------ | ----- |
| EM     | 85.49 |
| F1     | 91.98 |
Raw metrics:
```json
{
  "exact": 85.49668874172185,
  "f1": 91.9845699540379,
  "total": 10570,
  "HasAns_exact": 85.49668874172185,
  "HasAns_f1": 91.9845699540379,
  "HasAns_total": 10570,
  "best_exact": 85.49668874172185,
  "best_exact_thresh": 0.0,
  "best_f1": 91.9845699540379,
  "best_f1_thresh": 0.0
}
```
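The numbers above follow the standard SQuAD v1.1 evaluation. An equivalent check can be run with the 🤗 `evaluate` library (a minimal sketch on toy data; the id string is arbitrary):

```python
import evaluate

squad_metric = evaluate.load("squad")

# Toy prediction/reference pair in the SQuAD format
predictions = [{"id": "ex-1", "prediction_text": "Manuel Romero"}]
references = [{"id": "ex-1", "answers": {"text": ["Manuel Romero"], "answer_start": [0]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```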
Comparison:
| Model                      | EM    | F1 score |
| -------------------------- | ----- | -------- |
| SpanBERT official repo     | -     | 92.4*    |
| spanbert-finetuned-squadv1 | 85.49 | 91.98    |
## Model in action
Fast usage with pipelines:
```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="mrm8488/spanbert-finetuned-squadv1",
    tokenizer="mrm8488/spanbert-finetuned-squadv1"
)

qa_pipeline({
    'context': "Manuel Romero has been working hard on the repository huggingface/transformers lately",
    'question': "Who has been working hard for huggingface/transformers lately?"
})
```
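The pipeline returns a dict with the predicted answer span (here, `Manuel Romero`), a confidence `score`, and the `start`/`end` character offsets of the answer within the context.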
Created by Manuel Romero/@mrm8488 | LinkedIn
Made with ♥ in Spain