# distilbert_zeroshot_economics
This model (AyoubChLin/DistilBERT_eco_ZeroShot) is a fine-tuned version of distilbert-base-uncased on the teknium/dataforge-economics dataset.
## Training and evaluation data

The model was fine-tuned on the teknium/dataforge-economics dataset for the zero-shot classification task.
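Since the model was trained for zero-shot classification, it can be loaded with the Transformers zero-shot pipeline, assuming the checkpoint exposes an NLI-style sequence-classification head as that task implies. The input sentence and candidate labels below are purely illustrative:

```python
from transformers import pipeline

# Load this card's checkpoint into the zero-shot classification pipeline
classifier = pipeline(
    "zero-shot-classification",
    model="AyoubChLin/DistilBERT_eco_ZeroShot",
)

# Illustrative economics text; candidate labels are arbitrary examples
result = classifier(
    "The central bank raised interest rates to curb inflation.",
    candidate_labels=["monetary policy", "international trade", "labor markets"],
)
print(result["labels"][0])  # highest-scoring candidate label
```

The pipeline returns the candidate labels sorted by score, so `result["labels"][0]` is the model's top prediction.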
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
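For readers reproducing the run with the Transformers `Trainer`, the hyperparameters above map onto `TrainingArguments` roughly as sketched below. The `output_dir` is an assumption; the Adam betas and epsilon listed above are the library defaults, so they need no explicit arguments:

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed in this card; output_dir is hypothetical
training_args = TrainingArguments(
    output_dir="distilbert_zeroshot_economics",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```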
## Training results

- eval_loss: 0.0503
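In the common NLI-based zero-shot setup, each candidate label is turned into a hypothesis, the model produces an entailment logit per label, and the final scores are a softmax over those logits. A minimal sketch of that normalization step (the logits below are made up for illustration, not real model output):

```python
import math

def normalize_entailment_logits(logits):
    """Softmax over per-label entailment logits -> label probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical entailment logits for three candidate labels
scores = normalize_entailment_logits([2.1, 0.3, -1.0])
print([round(s, 3) for s in scores])
```

The label with the largest logit always receives the highest probability, and the scores sum to 1.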
## Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.16.1
- Tokenizers 0.15.0