Add task category, license, code link

#2
by nielsr HF staff - opened
Files changed (1)
  1. README.md +5 -2
README.md CHANGED
```diff
@@ -1,11 +1,16 @@
 ---
 viewer: true
+license: mit
+task_categories:
+- automatic-speech-recognition
 ---
+
 # Medical Spoken Named Entity Recognition
 
 ## Description:
 Spoken Named Entity Recognition (NER) aims to extract named entities from speech and categorize them into types such as person, location, and organization. In this work, we present VietMed-NER - the first spoken NER dataset in the medical domain. To the best of our knowledge, our real-world dataset is the largest spoken NER dataset in the world in terms of the number of entity types, featuring 18 distinct types. Secondly, we present baseline results using various state-of-the-art pre-trained models: encoder-only and sequence-to-sequence. We found that the pre-trained multilingual model XLM-R outperformed all monolingual models on both reference text and ASR output. In general, encoders also perform better than sequence-to-sequence models for the NER task. By simply translating, the transcript is applicable not just to Vietnamese but to other languages as well.
 
+Code: https://github.com/leduckhai/MultiMed
 
 Please cite this paper: https://arxiv.org/abs/2406.13337
 
@@ -17,8 +22,6 @@ Please cite this paper: https://arxiv.org/abs/2406.13337
 archivePrefix={arXiv},
 }
 
-
-
 ## Contact:
 
 Core developers:
```
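As context for what the dataset annotates: spoken NER systems typically emit BIO tags over the ASR transcript, from which entity spans are recovered. The helper below is an illustrative sketch, not part of VietMed-NER's tooling — the `ORGANIZATION` tag and the sample tokens are hypothetical examples, not drawn from the dataset:

```python
def bio_to_spans(tokens, tags):
    """Recover (entity_type, entity_text) spans from BIO-tagged tokens."""
    spans, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):  # a new entity starts here
            if current:
                spans.append(current)
            current = (tag[2:], [tok])
        elif tag.startswith("I-") and current and tag[2:] == current[0]:
            current[1].append(tok)  # continue the currently open entity
        else:  # "O" tag or an inconsistent "I-": close any open entity
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(etype, " ".join(words)) for etype, words in spans]

# Hypothetical ASR tokens containing one ORGANIZATION entity
tokens = ["benh", "nhan", "den", "benh", "vien", "Bach", "Mai"]
tags = ["O", "O", "O", "B-ORGANIZATION", "I-ORGANIZATION",
        "I-ORGANIZATION", "I-ORGANIZATION"]
print(bio_to_spans(tokens, tags))  # [('ORGANIZATION', 'benh vien Bach Mai')]
```

The same span-recovery step applies whether the tags come from reference transcripts or from ASR output, which is why the paper reports baselines on both.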