DistilBERT-base-uncased-english-finetuned-squad

This model was finetuned on the SQuAD dataset. Load it with TFDistilBertForQuestionAnswering; inputs must be tokenized with DistilBertTokenizerFast to be accepted by this model.

Model description

A base DistilBERT model finetuned on the SQuAD dataset for extractive, context-based question answering.

Training procedure

Trained for 3 epochs.

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: Adam with learning_rate=5e-5
  • training_precision: float32
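The hyperparameters above can be sketched in TensorFlow as follows. Only the optimizer, precision policy, and epoch count come from this card; the commented-out compile/fit calls and any model or dataset objects are illustrative assumptions, since the card does not include the training script:

```python
import tensorflow as tf

# training_precision: float32 (the Keras default, set explicitly here)
tf.keras.mixed_precision.set_global_policy("float32")

# optimizer: Adam with learning_rate=5e-5, as stated above
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)

# Illustrative only (not from this card): a TF QA model would then be
# compiled and trained for the stated 3 epochs roughly as:
#   model.compile(optimizer=optimizer)
#   model.fit(train_dataset, validation_data=val_dataset, epochs=3)
print(float(optimizer.learning_rate.numpy()))
```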

Training results

Training loss on the final epoch: 0.6417; validation loss: 1.2772. Full evaluation (e.g. SQuAD exact-match/F1) has yet to be done.

Framework versions

  • Transformers 4.44.2
  • TensorFlow 2.17.0
  • Datasets 3.0.0
  • Tokenizers 0.19.1