hassan4830 committed
Commit da37e80 · 1 Parent(s): 292d863

Update README.md
Files changed (1):
  1. README.md +0 -10
README.md CHANGED
@@ -20,16 +20,6 @@ The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation
 
 It is based on Facebook’s RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data.
 
-## Intended uses & limitations
-
-You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
-be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=distilbert) to look for
-fine-tuned versions on a task that interests you.
-
-Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
-to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
-generation you should look at model like GPT2.
-
 ### How to use
 
 You can use this model directly with a pipeline for masked language modeling:
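The code snippet that follows that sentence in the README falls outside this hunk, so it is not shown. For reference, a minimal sketch of such a fill-mask pipeline with the transformers library; `xlm-roberta-base` stands in for this repository's checkpoint, whose identifier is not visible in the diff:

```python
# Minimal fill-mask pipeline sketch; "xlm-roberta-base" is an assumed
# stand-in for this repo's checkpoint, which the diff does not name.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="xlm-roberta-base")

# XLM-RoBERTa uses "<mask>" as its mask token.
for pred in unmasker("Hello, I'm a <mask> model."):
    # Each prediction carries the filled token and its probability.
    print(f"{pred['token_str']!r}  score={pred['score']:.4f}")
```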
 
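The removed "Intended uses & limitations" text steered users toward fine-tuning on whole-sentence tasks such as sequence classification. A minimal sketch of that setup with the transformers Auto classes; the `xlm-roberta-base` checkpoint and the two-label head are illustrative assumptions, not this repository's actual configuration:

```python
# Sketch of preparing XLM-RoBERTa for sequence-classification fine-tuning,
# as the removed README section recommended. The checkpoint name and
# num_labels=2 are assumptions for illustration only.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2
)

inputs = tokenizer("This film was great.", return_tensors="pt")
outputs = model(**inputs)   # logits over the two labels, ready for a Trainer loop
print(outputs.logits.shape)  # torch.Size([1, 2])
```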