Commit d09744f (verified) · nielsr (HF Staff) committed · 1 parent: 279b2f6

Improve dataset card: Add task category, library name, and update paper link


This PR improves the dataset card for `UroLlmEvalSet` by:
- Adding `question-answering` to the `task_categories` metadata, which better reflects the dataset's use in identifying and extracting specific information from text.
- Specifying `library_name: datasets` in the metadata, as the dataset is designed to be loaded with the `datasets` library (a loading sketch follows this list).
- Updating the paper link in the "Citation and further information" section to point to the Hugging Face Papers page: https://huggingface.co/papers/2501.12106. The `url` field in the BibTeX entry is also updated for consistency.
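
For illustration only, here is a minimal sketch of loading the evaluation data with the `datasets` library that the new `library_name` field advertises. The repository id placeholder `<org>/UroLlmEvalSet` and the `eval` split name are assumptions inferred from the `data/eval-*` path in the card metadata, not something this PR specifies.

```python
from datasets import load_dataset

# "<org>/UroLlmEvalSet" is a hypothetical repository id; substitute the real owner/name.
# The "eval" split name is an assumption based on the data/eval-* path in the card metadata.
ds = load_dataset("<org>/UroLlmEvalSet", split="eval")

print(ds.features)  # column names/types declared under dataset_info
print(ds[0])        # first evaluation record
```

If the split name differs, calling `load_dataset("<org>/UroLlmEvalSet")` without the `split` argument returns a `DatasetDict` whose keys list the available splits.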

Files changed (1)
  1. README.md +4 -2
--- a/README.md
+++ b/README.md
@@ -7,6 +7,7 @@ size_categories:
 task_categories:
 - text-classification
 - feature-extraction
+- question-answering
 pretty_name: UroLlmEvalSet
 dataset_info:
   features:
@@ -31,6 +32,7 @@ configs:
   path: data/eval-*
 tags:
 - medical
+library_name: datasets
 ---
 
 # UroLlmEvalSet
@@ -80,7 +82,7 @@ As this dataset is primarily intended for evaluation purposes, the license restr
 
 ## Citation and further information
 
-Further information about the dataset and about the benchmark can be found in the following [article](https://doi.org/10.1186/s13040-025-00463-8):
+Further information about the dataset and about the benchmark can be found in the following [article](https://huggingface.co/papers/2501.12106):
 
 ```bibtex
 @article{UroLlmEval_2025,
@@ -92,7 +94,7 @@ Further information about the dataset and about the benchmark can be found in th
   volume = {18},
   number = {1},
   doi = {10.1186/s13040-025-00463-8},
-  url = {https://doi.org/10.1186/s13040-025-00463-8}
+  url = {https://huggingface.co/papers/2501.12106}
 }
 ```