model update
README.md (CHANGED)
@@ -102,7 +102,7 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.
+      value: 0.5683060109289617
   - task:
       name: Analogy Questions (NELL-ONE Analogy)
       type: multiple-choice-qa
@@ -198,7 +198,7 @@ This model achieves the following results on the relation understanding tasks:
     - Accuracy on U4: 0.5879629629629629
     - Accuracy on Google: 0.938
     - Accuracy on ConceptNet Analogy: 0.43456375838926176
-    - Accuracy on T-Rex Analogy: 0.
+    - Accuracy on T-Rex Analogy: 0.5683060109289617
     - Accuracy on NELL-ONE Analogy: 0.6833333333333333
 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-large-iloob-c-semeval2012/raw/main/classification.json)):
     - Micro F1 score on BLESS: 0.9184872683441314
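The "full result" link in the README points at classification.json inside the same repository. A minimal sketch for pulling that file and inspecting it, assuming only the standard huggingface_hub download API (no RelBERT-specific tooling); the JSON layout is not assumed, it is just printed:

```python
import json

from huggingface_hub import hf_hub_download  # standard HF Hub client

# Repo id and filename are taken from the URL in the model card above.
path = hf_hub_download(
    repo_id="relbert/relbert-roberta-large-iloob-c-semeval2012",
    filename="classification.json",
)

with open(path) as f:
    results = json.load(f)

# Print a preview of whatever structure the file actually has.
print(json.dumps(results, indent=2)[:1000])
```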