asahi417 committed
Commit bb3f093 · 1 Parent(s): 4f76b13

model update

Files changed (2):
  1. README.md +6 -6
  2. metric_summary.json +1 -1
README.md CHANGED
@@ -18,13 +18,13 @@ model-index:
   metrics:
   - name: F1
     type: f1
-    value: 0.05197873597164796
+    value: 0.8824571766095688
   - name: F1 (macro)
     type: f1_macro
-    value: 0.016470147857009173
+    value: 0.7401873227149222
   - name: Accuracy
     type: accuracy
-    value: 0.05197873597164796
+    value: 0.8824571766095688
 pipeline_tag: text-classification
 widget:
 - text: "I'm sure the {@Tampa Bay Lightning@} would’ve rather faced the Flyers but man does their experience versus the Blue Jackets this year and last help them a lot versus this Islanders team. Another meat grinder upcoming for the good guys"
@@ -37,9 +37,9 @@ widget:
 This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2019-90m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) on the [tweet_topic_single](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single) dataset. It is fine-tuned on the `train_2020` split and validated on the `test_2021` split of tweet_topic.
 The fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single/blob/main/lm_finetuning.py). The model achieves the following results on the test_2021 set:
 
-- F1 (micro): 0.05197873597164796
-- F1 (macro): 0.016470147857009173
-- Accuracy: 0.05197873597164796
+- F1 (micro): 0.8824571766095688
+- F1 (macro): 0.7401873227149222
+- Accuracy: 0.8824571766095688
 
 
 ### Usage
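The `### Usage` section itself is not shown in this diff. As a minimal sketch, a checkpoint like this one can be loaded through the `transformers` text-classification pipeline. Note that `<this-model-id>` below is a placeholder: the diff names only the base model (cardiffnlp/twitter-roberta-base-2019-90m), not the Hub id of the fine-tuned checkpoint, so substitute the real repo id before running.

```python
def classify(texts, model_id="<this-model-id>"):
    """Run single-label topic classification with a fine-tuned checkpoint.

    `<this-model-id>` is a placeholder for this model's Hub repo id,
    which is not stated in the diff; its base model is
    cardiffnlp/twitter-roberta-base-2019-90m.
    """
    # Imported lazily so the sketch can be inspected without
    # transformers installed; pipeline() downloads the model on first use.
    from transformers import pipeline

    clf = pipeline("text-classification", model=model_id)
    return clf(texts)
```

Calling `classify(["Another meat grinder upcoming for the good guys"])` would return a list of `{"label": ..., "score": ...}` dicts, one per input text.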
metric_summary.json CHANGED
@@ -1 +1 @@
- {"test/eval_loss": 1.917781949043274, "test/eval_f1": 0.05197873597164796, "test/eval_f1_macro": 0.016470147857009173, "test/eval_accuracy": 0.05197873597164796, "test/eval_runtime": 55.9561, "test/eval_samples_per_second": 30.256, "test/eval_steps_per_second": 1.894}
 
+ {"test/eval_loss": 0.6086769104003906, "test/eval_f1": 0.8824571766095688, "test/eval_f1_macro": 0.7401873227149222, "test/eval_accuracy": 0.8824571766095688, "test/eval_runtime": 53.4778, "test/eval_samples_per_second": 31.658, "test/eval_steps_per_second": 1.982}
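In the updated summary, `test/eval_f1` and `test/eval_accuracy` are identical (0.8824571766095688). That is expected rather than a copy-paste error: in single-label classification, every prediction is either one true positive or one false positive paired with one false negative, so micro-averaged precision, recall, and F1 all collapse to plain accuracy. A small self-contained sketch with toy labels (not the tweet_topic data) illustrates this:

```python
def micro_f1(y_true, y_pred):
    """Micro-averaged F1 for single-label predictions."""
    tp = sum(t == p for t, p in zip(y_true, y_pred))
    fp = len(y_pred) - tp  # every wrong prediction is a false positive...
    fn = len(y_true) - tp  # ...and simultaneously a false negative
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = ["sports", "news", "sports", "daily_life"]
y_pred = ["sports", "news", "daily_life", "daily_life"]

acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(micro_f1(y_true, y_pred), acc)  # both 0.75
```

The macro-F1 (0.7401873227149222) differs because it averages per-class F1 scores with equal weight, which penalises poor performance on rare classes.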