Datasets:
Tasks: Question Answering
Modalities: Text
Formats: json
Languages: English
Size: 10K - 100K
License:
Update README.md
README.md CHANGED
@@ -16,11 +16,9 @@ The dataset aims to describe the entities that can be found in Wikidata.
 
 <!-- Provide a longer summary of what this dataset is. -->
 
-
-
-- **Curated by:** [Jean petit]
+- **Curated by:** Jean petit
 - **Language(s) (NLP):** This dataset can be used for NLP tasks such as question answering, semantic annotation, and entity generation
-- **License:**
+- **License:** MIT
 
 
 ## Uses
@@ -36,8 +34,7 @@ This dataset has been used to fine-tune LLMs for semantic annotation tasks
 The train dataset contains 3 columns:
 * Label: the name of the Wikidata entity
 * entity: the Wikidata URI of the given entity
-* description: Description
+* description: Description of the entity; it can be empty
 
-[More Information Needed]
 
 ## Dataset Creation