Update README.md
README.md
---
library_name: transformers
datasets:
- facebook/xnli
metrics:
- accuracy
base_model:
- FacebookAI/xlm-roberta-large
license: mit
tags:
- xlm-roberta
- finetuning
- xnli
- mnli
---

# XLM-RoBERTa Large finetuned on XNLI dataset

<!-- Provide a quick summary of what the model is/does. -->

- **Developed by:** Adrien J.
- **Model type:** XLM-RoBERTa
- **Languages (NLP):** Multilingual
- **Finetuned from model:** [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large)

## How to Get Started

This model is ready-to-use for text classification.

```py
import pandas as pd
from transformers import pipeline

# Load the classification pipeline
classifier = pipeline("text-classification", "ajayat/xlm-roberta-large-xnli")
classifier.model.config.id2label = {
    0: "entailment",
    1: "neutral",
    2: "contradiction"
}

# Example premise and hypothesis
premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Provide input as a dictionary with text and text_pair keys
result = classifier({'text': premise, 'text_pair': hypothesis}, top_k=None)
pd.DataFrame(result)
```

|   |     label     |  score   |
|:-:|:-------------:|:--------:|
| 0 | entailment    | 0.996513 |
| 1 | neutral       | 0.003228 |
| 2 | contradiction | 0.000260 |

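With `top_k=None`, the pipeline returns one `{label, score}` dict per class, which is what the table above renders. A minimal sketch of picking the top prediction without pandas, using the scores shown above as hardcoded values (copied, not recomputed here):

```py
# Scores copied from the example output above (hardcoded for illustration)
result = [
    {"label": "entailment", "score": 0.996513},
    {"label": "neutral", "score": 0.003228},
    {"label": "contradiction", "score": 0.000260},
]

# The top prediction is the argmax over the score field
best = max(result, key=lambda r: r["score"])
print(best["label"])  # entailment
```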
## Dataset

The [XNLI dataset](https://huggingface.co/datasets/facebook/xnli) (Cross-lingual Natural Language Inference) is a benchmark dataset created by Facebook AI for evaluating cross-lingual understanding. It extends the MultiNLI corpus by translating 7,500 human-annotated English sentence pairs (premise and hypothesis) into 14 languages.

Each pair is labeled as `entailment`, `contradiction`, or `neutral`.
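In the raw dataset the three classes are stored as integers, in the same 0/1/2 order used by the `id2label` override in the snippet above. A small sketch of that mapping (the record values are illustrative, not a real dataset row):

```py
# Conventional XNLI label order: 0=entailment, 1=neutral, 2=contradiction
XNLI_LABELS = ("entailment", "neutral", "contradiction")

# Shape of one XNLI record (illustrative values, not an actual dataset row)
example = {
    "premise": "A soccer game with multiple males playing.",
    "hypothesis": "Some men are playing a sport.",
    "label": 0,
}

print(XNLI_LABELS[example["label"]])  # entailment
```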

## Training Hyperparameters

…

Carbon emissions can be estimated using the [Machine Learning Impact calculator].

- **Hardware Type:** 4x NVIDIA A100 SXM4 80GB GPUs
- **Hours used:** 7
- **Compute Region:** France
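The figures above are enough for a back-of-the-envelope estimate in the spirit of the ML Impact calculator. The per-GPU power draw and the grid carbon intensity below are assumptions, not measured values:

```py
# Back-of-the-envelope CO2 estimate from the compute figures above.
# gpu_power_kw and carbon_intensity are ASSUMED values, not measurements.
n_gpus = 4               # from the model card
hours = 7                # from the model card
gpu_power_kw = 0.4       # assumed ~400 W draw per A100 SXM4
carbon_intensity = 0.06  # assumed kg CO2eq per kWh for the French grid

energy_kwh = n_gpus * hours * gpu_power_kw
emissions_kg = energy_kwh * carbon_intensity
print(f"{energy_kwh:.1f} kWh, ~{emissions_kg:.2f} kg CO2eq")
```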