# fiNERweb

## Dataset Description
fiNERweb is a multilingual named entity recognition dataset containing annotated text in multiple languages. Each example contains:
- Original text
- Tokenized text
- BIO tags
- Character spans for entities
- Token spans for entities
## Languages

Currently supported languages:
- vi: Vietnamese
- ta: Tamil
- or: Odia (Oriya)
- sk: Slovak
- af: Afrikaans
- cs: Czech
- ga: Irish
- pt: Portuguese
- so: Somali
- sl: Slovenian
- cy: Welsh
- fy: Western Frisian
- uk: Ukrainian
- is: Icelandic
- la: Latin
- hy: Armenian
- bg: Bulgarian
- tr: Turkish
- uz: Uzbek
- nl: Dutch
- ps: Pashto
- be: Belarusian
- en: English
- xh: Xhosa
- jv: Javanese
- hi: Hindi
- my: Burmese
- br: Breton
- ur: Urdu
- sr: Serbian
- zh: Chinese (Mandarin)
- ka: Georgian
- hr: Croatian
- ml: Malayalam
- km: Khmer
- te: Telugu
- ru: Russian
- ar: Arabic
- de: German
- fr: French
- om: Oromo
- sw: Swahili
- az: Azerbaijani
- gl: Galician
- ko: Korean
- sd: Sindhi
- fi: Finnish
- lv: Latvian
- eo: Esperanto
- kk: Kazakh
- lt: Lithuanian
- mk: Macedonian
- eu: Basque
- am: Amharic
- he: Hebrew
- si: Sinhala
- ne: Nepali
- yi: Yiddish
- sq: Albanian
- it: Italian
- kn: Kannada
- mn: Mongolian
- ja: Japanese
- gu: Gujarati
- su: Sundanese
- ro: Romanian
- sa: Sanskrit
- ku: Kurdish
- ky: Kyrgyz
- ug: Uyghur
- gd: Scottish Gaelic
- es: Spanish
- et: Estonian
- th: Thai
- sv: Swedish
- hu: Hungarian
- bs: Bosnian
- bn: Bengali
- ca: Catalan
- mr: Marathi
- da: Danish
- pl: Polish
- el: Greek
- ms: Malay
- mg: Malagasy
- pa: Punjabi
- lo: Lao
- fa: Persian
- tl: Tagalog
- as: Assamese
- id: Indonesian
## Dataset Structure

Each example contains:
```python
{
    "text": str,                # Original text
    "tokens": List[str],        # Tokenized text
    "bio_tags": List[str],      # BIO tags for NER
    "char_spans": List[Dict],   # Character-level entity spans
    "token_spans": List[Dict],  # Token-level entity spans
}
```
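The relationship between `bio_tags` and `token_spans` can be illustrated with a minimal sketch. Note that the span dictionary keys `start`, `end`, and `label` are assumptions for illustration; inspect an actual example to confirm the exact field names the dataset uses.

```python
def bio_to_token_spans(bio_tags):
    """Convert a BIO tag sequence into token-level entity spans.

    Returned dicts use `start`/`end`/`label` keys; the actual
    fiNERweb field names may differ (illustrative assumption).
    """
    spans = []
    current = None
    for i, tag in enumerate(bio_tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = {"start": i, "end": i + 1, "label": tag[2:]}
        elif tag.startswith("I-") and current and tag[2:] == current["label"]:
            current["end"] = i + 1  # extend the running entity
        else:  # "O", or an I- tag that does not continue the entity
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return spans

tags = ["B-PER", "I-PER", "O", "B-LOC", "O"]
print(bio_to_token_spans(tags))
# [{'start': 0, 'end': 2, 'label': 'PER'}, {'start': 3, 'end': 4, 'label': 'LOC'}]
```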
## Usage

```python
from datasets import load_dataset

# Load a specific language
dataset = load_dataset("whoisjones/fiNERweb", "am")  # Amharic
# or
dataset = load_dataset("whoisjones/fiNERweb", "en")  # English

# Access the data
print(dataset["train"][0])
```
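Character-level spans let you recover entity surface forms directly from the raw text without re-tokenizing. A minimal sketch on a fabricated example record, again assuming each span dict carries `start`, `end`, and `label` keys (check a real example for the actual field names):

```python
def entity_surface_forms(text, char_spans):
    """Slice entity mentions out of the raw text using character offsets."""
    return [(span["label"], text[span["start"]:span["end"]]) for span in char_spans]

# Fabricated record for illustration only; not taken from the dataset.
example = {
    "text": "Angela Merkel visited Paris.",
    "char_spans": [
        {"start": 0, "end": 13, "label": "PER"},
        {"start": 22, "end": 27, "label": "LOC"},
    ],
}
print(entity_surface_forms(example["text"], example["char_spans"]))
# [('PER', 'Angela Merkel'), ('LOC', 'Paris')]
```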
## Citation

If you use this dataset, please cite:
```bibtex
@misc{fiNERweb,
  author = {Jonas Golde},
  title = {fiNERweb: Multilingual Named Entity Recognition Dataset},
  year = {2024},
  publisher = {HuggingFace},
  journal = {HuggingFace Datasets},
  howpublished = {\url{https://huggingface.co/datasets/whoisjones/fiNERweb}}
}
```