---
datasets:
  - seamew/ChnSentiCorp
metrics:
  - accuracy
  - precision
  - f1
  - recall
model-index:
  - name: gpt2-imdb-sentiment-classifier
    results:
      - task:
          name: Text Classification
          type: text-classification
        dataset:
          name: imdb
          type: imdb
          args: plain_text
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.9394
language:
  - zh
pipeline_tag: text-classification
---

# gpt2-imdb-sentiment-classifier

This model is a fine-tuned version of [hfl/rbt6](https://huggingface.co/hfl/rbt6) on the ChnSentiCorp dataset. It achieves the following results on the evaluation set:

- Loss: 0.294600
- Accuracy: 0.933884

## Intended uses & limitations

This model is comparable to distilbert-imdb and was trained with exactly the same script.

It achieves slightly lower loss (0.1703 vs. 0.1903) and slightly higher accuracy (0.9394 vs. 0.928).
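A minimal usage sketch with the `transformers` text-classification pipeline. The default hub id below is an assumption (this repository's name), not a confirmed path; substitute the model's actual location:

```python
def classify(texts, model_id="ShelterW/gpt2-imdb-sentiment-classifier"):
    """Classify the sentiment of Chinese sentences with a hub model.

    The default model_id is a hypothetical hub path; point it at the
    actual repository before use.
    """
    # Deferred import: requires `transformers` and a backend such as torch.
    from transformers import pipeline

    clf = pipeline("text-classification", model=model_id)
    return clf(texts)


# Example call (downloads the model on first run):
#   classify(["酒店环境很好，服务周到。"])
```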

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- weight_decay: 1e-2
- num_train_epochs: 3
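The hyperparameters above can be collected into a plain config dict; the key names follow `transformers.TrainingArguments` conventions (a sketch under that assumption, not the exact training script):

```python
# Listed training hyperparameters, keyed with TrainingArguments-style names.
HPARAMS = {
    "learning_rate": 2e-05,
    "per_device_train_batch_size": 4,
    "per_device_eval_batch_size": 4,
    "weight_decay": 1e-2,
    "num_train_epochs": 3,
}
```

With `transformers` installed, these could be splatted into the trainer config, e.g. `TrainingArguments(output_dir="out", **HPARAMS)`.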

### Training results

| Epoch | Training Loss | Validation Loss | Accuracy | F1       | Precision | Recall   |
|------:|--------------:|----------------:|---------:|---------:|----------:|---------:|
| 1     | 0.359700      | 0.306089        | 0.924242 | 0.926230 | 0.918699  | 0.933884 |
| 2     | 0.200600      | 0.295512        | 0.942761 | 0.943615 | 0.946755  | 0.940496 |
| 3     | 0.105600      | 0.294600        | 0.941919 | 0.942452 | 0.951178  | 0.933884 |
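For reference, the four reported metrics can be recomputed from raw predictions; a minimal pure-Python sketch for binary labels (1 = positive), independent of any evaluation library:

```python
def binary_metrics(y_true, y_pred):
    """Return (accuracy, precision, recall, F1) for binary label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)

    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1
```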

### Framework versions

- PyTorch 2.0.0
- Python 3.9.12