---
language:
- it
license: apache-2.0
size_categories:
- n<1K
task_categories:
- token-classification
pretty_name: PharmaER.IT
tags:
- medical
dataset_info:
  features:
  - name: document_id
    dtype: string
  - name: text
    dtype: string
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence: string
  splits:
  - name: train
    num_bytes: 3895029
    num_examples: 37
  - name: validation
    num_bytes: 572348
    num_examples: 10
  - name: test
    num_bytes: 817616
    num_examples: 10
  - name: silver
    num_bytes: 266987758
    num_examples: 2138
  download_size: 67108580
  dataset_size: 272272751
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
  - split: silver
    path: data/silver-*
---

# PharmaER.IT

PharmaER.IT is a dataset for entity recognition in the pharmaceutical domain in Italian. It was developed within the MAESTRO and ReSpiRA projects with the aim of providing high-quality annotations to support the development of specialized NLP models for the biomedical sector. The dataset was built through a semi-automatic procedure that combines automatic pre-annotation with validation by human experts (Human-in-the-Loop), which guarantees accuracy while optimizing time and resources.

## Dataset Details

### Dataset Description

The dataset was created from the leaflets of drugs authorized by the Italian Medicines Agency (AIFA). The annotated entity types are:

- **DRUG**: names of pharmaceutical products;
- **DISEASE**: terms referring to diseases;
- **SYMPTOM**: words describing symptoms;
- **ANATOMICAL_PART**: references to anatomical parts of the human body.

The dataset follows the BIO (Beginning, Inside, Outside) tagging format, which is commonly used for sequence labeling tasks in Named Entity Recognition (NER).

PharmaER.IT is composed of two corpora:

- **Gold**: consists of 57 documents divided into training (37), validation (10), and test (10) sets. Annotations were produced through a semi-automatic procedure and finalized with expert human validation.
- **Silver**: includes a larger set of 2138 documents automatically annotated with the same procedure applied to the Gold set, but without final manual validation.

The current version is 0.3.

### Dataset Info

- **Curated by:** Leonardo Rigutini, Andrea Zugarini, Stefano Ligabue, Simone Martin Marotta, Marta Spagnolli, Vincenzo Masucci - expert.ai
- **Shared by:** Leonardo Rigutini, Andrea Zugarini
- **Funded by:**
  - MAESTRO - "Mitigare le Allucinazioni dei Large Language Models: ESTRazione di informazioni Ottimizzate", a project funded by Provincia Autonoma di Trento under Lp 6/99 Art. 5 (ricerca e sviluppo), PAT/RFS067-05/06/2024-0428372, CUP C79J23001170001;
  - ReSpiRA - "REplicabilità, SPIegabilità e Ragionamento", a project financed by FAIR, affiliated to spoke no. 2, falling within the PNRR MUR programme, Mission 4, Component 2, Investment 1.3, D.D. No. 341 of 03/15/2022, Project PE0000013, CUP B43D22000900004.
- **Language(s) (NLP):** Italian
- **License:** Apache 2.0

### Dataset Structure

Each split consists of a JSON array of objects with the following fields:

- `document_id`: the original document id;
- `text`: the raw text content extracted from the original PDF;
- `tokens`: an array of strings representing the tokenized text;
- `ner_tags`: an array of strings representing the annotation assigned to each token.

The entity labels are:

- **DRUG**: words representing drugs;
- **DISEASE**: words indicating diseases;
- **SYMPTOM**: words indicating symptoms;
- **ANATOMICAL_PART**: words representing anatomical parts of the human body;
- **O**: used for all remaining words that do not correspond to any of the previous entities.

The dataset consists of two sets: the **GOLD** and the **SILVER** corpus.
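A minimal loading sketch with the Hugging Face `datasets` library is shown below. The repository id is a placeholder, not the actual Hub path of this dataset, and should be replaced accordingly.

```python
from datasets import load_dataset

# Placeholder repository id: replace with the actual Hub path of PharmaER.IT.
dataset = load_dataset("ORG_NAME/PharmaER.IT")

# The default config exposes the GOLD splits (train/validation/test) and the silver split.
print(dataset)

# Inspect the first GOLD training document: tokens and ner_tags are aligned lists.
example = dataset["train"][0]
for token, tag in list(zip(example["tokens"], example["ner_tags"]))[:20]:
    print(f"{token}\t{tag}")
```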
### The GOLD Corpus

The GOLD corpus was created following a semi-automatic procedure. After downloading about 8000 leaflets from the AIFA website, a subset of 67 documents was labeled by a committee made up of expert systems and LLMs. The generated annotations were reported on the original documents, highlighting the cases of agreement and disagreement among the committee's models.

The set was divided into 5 batches and assigned to a team of 5 experts, who validated the annotations by accepting or modifying the proposals produced by the automatic procedure. Finally, the resulting dataset was split into train (37), validation (10), and test (10) sets. The following table reports the distribution of the entities in the GOLD corpus:

| Data Point       | Train | Validation | Test | **Total** |
|:-----------------|:-----:|:----------:|:----:|:---------:|
| DRUG             | 5911  | 716        | 1222 | 7849      |
| DISEASE          | 3344  | 477        | 614  | 4435      |
| SYMPTOM          | 2582  | 363        | 480  | 3425      |
| ANATOMICAL_PART  | 817   | 121        | 186  | 1124      |
|                  |       |            |      |           |
| **Total**        | 12654 | 1677       | 2502 | 16833     |

#### Quality assessment of GOLD corpus supervisions

Each batch included documents that were shared, in pairs, with other annotators. These documents were used to compute agreement indices that provide a measure of the consistency of the annotations in the GOLD corpus. The results are reported in the following table:

| Data Point       | JPA   | CPA   | Coverage | k-Cohen |
|:-----------------|:-----:|:-----:|:--------:|:-------:|
| DRUG             | 0.85  | 0.91  | 0.91     | 0.90    |
| DISEASE          | 0.98  | 0.84  | 0.86     | 0.83    |
| SYMPTOM          | 0.74  | 0.86  | 0.87     | 0.84    |
| ANATOMICAL_PART  | 0.68  | 0.84  | 0.84     | 0.76    |
|                  |       |       |          |         |
| **Average**      | 0.81  | 0.86  | 0.87     | 0.83    |

### The SILVER Corpus

The SILVER corpus consists of 2138 leaflets sampled from the remaining 8567 documents. These were pre-annotated using the same algorithm adopted for the GOLD corpus, but without any subsequent human revision. The following table reports the distribution of the entities in the SILVER corpus:

| Data Point       | **Total** |
|:-----------------|:---------:|
| DRUG             | 385210    |
| DISEASE          | 245240    |
| SYMPTOM          | 80763     |
| ANATOMICAL_PART  | 70587     |
|                  |           |
| **Total**        | 781800    |

## Leaderboard

Results of state-of-the-art encoders fine-tuned for token classification (a minimal fine-tuning sketch is provided at the end of this card):

| #pos | Models                   | Precision | Recall | **F1** |
|:----:|--------------------------|:---------:|:------:|:------:|
| 1°   | xlm-roberta-large        | 0.7025    | 0.7428 | 0.7221 |
| 2°   | roberta-large            | 0.7142    | 0.7191 | 0.7166 |
| 3°   | roberta                  | 0.6686    | 0.7171 | 0.6920 |
| 4°   | bert-italian-cased       | 0.6537    | 0.7257 | 0.6879 |
| 5°   | xlm-roberta              | 0.6616    | 0.7149 | 0.6872 |
| 6°   | bert-multilingual-cased  | 0.6460    | 0.6810 | 0.6630 |

Zero-shot extraction with a simple prompt:

| #pos | Models                  | Precision | Recall | **F1** |
|:----:|-------------------------|:---------:|:------:|:------:|
| 1°   | Mistral-Small-3.1-24B   | 0.4361    | 0.6190 | 0.5117 |
| 2°   | LLaMAntino-3-8B         | 0.4020    | 0.5536 | 0.4658 |
| 3°   | Llama-3.1-8B            | 0.3890    | 0.3847 | 0.3869 |
| 4°   | EuroLLM-9B              | 0.4313    | 0.1665 | 0.2402 |

## Dataset Card Contacts

Leonardo Rigutini (lrigutini@expert.ai), Andrea Zugarini (azugarini@expert.ai)
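## Baseline Fine-tuning Sketch

For reference, below is a minimal sketch of how an encoder such as `xlm-roberta-large` can be fine-tuned for token classification on this dataset with the Hugging Face `transformers` Trainer. The repository id, label handling, and hyperparameters are illustrative assumptions, not the exact configuration used to produce the leaderboard results above.

```python
from datasets import load_dataset
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          DataCollatorForTokenClassification, Trainer,
                          TrainingArguments)

# Placeholder repository id: replace with the actual Hub path of PharmaER.IT.
dataset = load_dataset("ORG_NAME/PharmaER.IT")

# ner_tags are stored as BIO strings, so build the label vocabulary from the train split.
labels = sorted({tag for example in dataset["train"] for tag in example["ner_tags"]})
label2id = {label: i for i, label in enumerate(labels)}
id2label = {i: label for label, i in label2id.items()}

model_name = "xlm-roberta-large"  # any encoder from the leaderboard can be swapped in
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name, num_labels=len(labels), id2label=id2label, label2id=label2id
)

def tokenize_and_align(example):
    # Re-tokenize the pre-split words and assign each word's label to its first sub-token.
    encoding = tokenizer(example["tokens"], is_split_into_words=True,
                         truncation=True, max_length=512)
    label_ids, previous = [], None
    for word_id in encoding.word_ids():
        if word_id is None or word_id == previous:
            label_ids.append(-100)  # special tokens and sub-token continuations are ignored
        else:
            label_ids.append(label2id[example["ner_tags"][word_id]])
        previous = word_id
    encoding["labels"] = label_ids
    return encoding

tokenized = dataset.map(tokenize_and_align,
                        remove_columns=dataset["train"].column_names)

# Illustrative hyperparameters, not the leaderboard configuration.
args = TrainingArguments(output_dir="pharmaer-ner", learning_rate=2e-5,
                         num_train_epochs=5, per_device_train_batch_size=4)
trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"],
                  eval_dataset=tokenized["validation"],
                  data_collator=DataCollatorForTokenClassification(tokenizer))
trainer.train()
```

Note that leaflets are long documents, so truncating to 512 sub-tokens discards most of each text; a chunking strategy would be needed to train on full documents.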