Overview
This data was used to train the model https://huggingface.co/mevol/BiomedNLP-PubMedBERT-ProteinStructure-NER-v1.2
There are 19 different entity types in this dataset: "chemical", "complex_assembly", "evidence", "experimental_method", "gene", "mutant", "oligomeric_state", "protein", "protein_state", "protein_type", "ptm", "residue_name", "residue_name_number", "residue_number", "residue_range", "site", "species", "structure_element", "taxonomy_domain"
The data prepared as IOB-formatted input was used during training, development and testing. Additional data formats such as JSON and XML as well as CSV files are also available and are described below.
Annotation was carried out with the free annotation tool TeamTat (https://www.teamtat.org/), and the documents were downloaded as BioC XML before being converted to IOB, annotation-only JSON and CSV formats.
The number of annotations and sentences in each file is given below:
document ID | number of annotations in BioC XML | number of annotations in IOB/JSON/CSV | number of sentences |
---|---|---|---|
PMC4850273 | 1121 | 1121 | 204 |
PMC4784909 | 865 | 865 | 204 |
PMC4850288 | 716 | 708 | 146 |
PMC4887326 | 933 | 933 | 152 |
PMC4833862 | 1044 | 1044 | 192 |
PMC4832331 | 739 | 718 | 134 |
PMC4852598 | 1229 | 1218 | 250 |
PMC4786784 | 1549 | 1549 | 232 |
PMC4848090 | 987 | 985 | 191 |
PMC4792962 | 1268 | 1268 | 256 |
total | 10451 | 10409 | 1961 |
Documents and annotations are most easily viewed by opening the BioC XML files in the free annotation tool TeamTat (https://www.teamtat.org/). More about the BioC format can be found here: https://bioc.sourceforge.net/
Raw BioC XML files
These are the raw, unannotated XML files for the publications in the dataset, in BioC format. The files are found in the directory "raw_BioC_XML". There is one file for each document, following the standard naming "unique PubMedCentral ID"_raw.xml.
Annotations in IOB format
The IOB-formatted files can be found in the directory "annotation_IOB". The four files are as follows:
- all.tsv --> all sentences and annotations used to create model "mevol/BiomedNLP-PubMedBERT-ProteinStructure-NER-v1.2"; 1961 sentences
- train.tsv --> training subset of the data; 1372 sentences
- dev.tsv --> development subset of the data; 294 sentences
- test.tsv --> testing subset of the data; 295 sentences
The total number of annotations is: 10409
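Below is a minimal sketch of how one of these files could be read into sentences of (token, tag) pairs. It assumes the common CoNLL-style layout of one tab-separated token/tag pair per line with a blank line between sentences; please check this against the actual files.

```python
# Sketch only: read an IOB .tsv file into sentences of (token, tag) pairs.
# Assumes one "token<TAB>tag" pair per line and a blank line between
# sentences (CoNLL style); verify this against the actual files.
def read_iob(path):
    sentences, current = [], []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.rstrip("\n")
            if not line.strip():          # a blank line ends the sentence
                if current:
                    sentences.append(current)
                    current = []
                continue
            token, tag = line.split("\t")[:2]
            current.append((token, tag))
    if current:                           # flush the final sentence
        sentences.append(current)
    return sentences

# e.g. train = read_iob("annotation_IOB/train.tsv")
```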
Annotations in BioC JSON
The BioC-formatted JSON files of the publications were downloaded from the annotation tool TeamTat. The files are found in the directory "annotated_BioC_JSON". There is one file for each document, following the standard naming "unique PubMedCentral ID"_ann.json.
Each document JSON contains the following relevant keys:
- "sourceid" --> giving the numerical part of the unique PubMedCentral ID
- "text" --> containing the complete raw text of the publication as a string
- "denotations" --> containing a list of all the annotations for the text
Each annotation is a dictionary with the following keys:
- "span" --> gives the start and end of the annotatiom span defined by sub keys:
- "begin" --> character start position of annotation
- "end" --> character end position of annotation
- "obj" --> a string containing a number of terms that can be separated by ","; the order of the terms gives the following: entity type, reference to ontology, annotator, time stamp
- "id" --> unique annotation ID
Here is an example:
[{"sourceid":"4784909",
"sourcedb":"",
"project":"",
"target":"",
"text":"",
"denotations":[{"span":{"begin":24,
"end":34},
"obj":"chemical,CHEBI:,[email protected],2023-03-21T15:19:42Z",
"id":"4500"},
{"span":{"begin":50,
"end":59},
"obj":"taxonomy_domain,DUMMY:,[email protected],2023-03-21T15:15:03Z",
"id":"1281"}]
}
]
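A short sketch of how the annotations could be pulled out of one of these files, assuming the structure shown above (the file name is illustrative):

```python
# Sketch only: print (start, end, entity type, covered text) for every
# annotation in a BioC JSON file with the structure shown above.
import json

with open("annotated_BioC_JSON/PMC4784909_ann.json", encoding="utf-8") as fh:
    documents = json.load(fh)            # the top level is a list of documents

for doc in documents:
    text = doc["text"]
    for ann in doc["denotations"]:
        begin, end = ann["span"]["begin"], ann["span"]["end"]
        # "obj" packs entity type, ontology reference, annotator and time stamp
        entity_type = ann["obj"].split(",")[0]
        print(begin, end, entity_type, text[begin:end])
```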
Annotations in BioC XML
The BioC-formatted XML files of the publications were downloaded from the annotation tool TeamTat. The files are found in the directory "annotated_BioC_XML". There is one file for each document, following the standard naming "unique PubMedCentral ID"_ann.xml.
The key XML tags for visualising the annotations in TeamTat, as well as for extracting them to create the training data, are "passage" and "offset". The "passage" tag encloses a text passage or paragraph to which the annotations are linked. "offset" gives the passage/paragraph offset and allows the character start and end positions of the annotations to be determined. The tag "text" encloses the raw text of the passage.
Each annotation in the XML file is tagged as below:
- "annotation id=" --> giving the unique ID of the annotation
- "infon key="type"" --> giving the entity type of the annotation
- "infon key="identifier"" --> giving a reference to an ontology for the annotation
- "infon key="annotator"" --> giving the annotator
- "infon key="updated_at"" --> providing a time stamp for annotation creation/update
- "location" --> start and end character positions for the annotated text span
- "offset" --> start character position as defined by offset value
- "length" --> length of the annotation span; sum of "offset" and "length" creates the end character position
Here is a basic example of what the BioC XML looks like. Additional tags for document management are not shown. Please refer to the BioC documentation to find out more.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE collection SYSTEM "BioC.dtd">
<collection>
<source>PMC</source>
<date>20140719</date>
<key>pmc.key</key>
<document>
<id>4784909</id>
<passage>
<offset>0</offset>
<text>The Structural Basis of Coenzyme A Recycling in a Bacterial Organelle</text>
<annotation id="4500">
<infon key="type">chemical</infon>
<infon key="identifier">CHEBI:</infon>
<infon key="annotator">[email protected]</infon>
<infon key="updated_at">2023-03-21T15:19:42Z</infon>
<location offset="24" length="10"/>
<text>Coenzyme A</text>
</annotation>
</passage>
</document>
</collection>
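As a minimal parsing sketch using only the standard library (the file name is illustrative, and whether the location offset already includes the passage offset should be checked against the files):

```python
# Sketch only: walk a BioC XML file and print each annotation with its
# entity type and character span.
import xml.etree.ElementTree as ET

root = ET.parse("annotated_BioC_XML/PMC4784909_ann.xml").getroot()

for passage in root.iter("passage"):
    for ann in passage.iter("annotation"):
        infons = {i.get("key"): i.text for i in ann.findall("infon")}
        loc = ann.find("location")
        start = int(loc.get("offset"))        # see the note on offsets above
        end = start + int(loc.get("length"))  # offset + length = end position
        print(ann.get("id"), infons.get("type"), start, end, ann.findtext("text"))
```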
Annotations in CSV
The annotations, together with the sentences in which they were found, are also available as tab-separated CSV files, one for each publication in the dataset. The files can be found in the directory "annotation_CSV". Each file is named "unique PubMedCentral ID".csv.
The column labels in the CSV files are as follows:
- "anno_start" --> character start position of the annotation
- "anno_end" --> character end position of the annotation
- "anno_text" --> text covered by the annotation
- "entity_type" --> entity type of the annotation
- "sentence" --> sentence text in which the annotation was found
- "section" --> publication section in which the annotation was found
Annotations in JSON
A combined JSON file was created containing only the relevant sentences and the associated annotations for each publication in the dataset. The file can be found in the directory "annotation_JSON" under the name "annotations.json".
The following keys are used:
- "PMC4850273" --> unique PubMedCentral of the publication
- "annotations" --> list of dictionaries for the relevant, annotated sentences of the
document; each dictionary has the following sub keys
- "sid" --> unique sentence ID
- "sent" --> sentence text as string
- "section" --> publication section the sentence is in
- "ner" --> nested list of annotations; each sublist contains the following items: start character position, end character position, annotation text, entity type
Here is an example of a sentence and its annotations:
{"PMC4850273": {"annotations":
[{"sid": 0,
"sent": "Molecular Dissection of Xyloglucan Recognition in a Prominent Human Gut Symbiont",
"section": "TITLE",
"ner": [
[24,34,"Xyloglucan","chemical"],
[62,67,"Human","species"],]
},]
}}
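A small sketch of how the combined file could be iterated, assuming the structure shown above:

```python
# Sketch only: iterate over annotations.json and print every annotated span.
import json

with open("annotation_JSON/annotations.json", encoding="utf-8") as fh:
    data = json.load(fh)

for pmc_id, doc in data.items():
    for sentence in doc["annotations"]:
        for start, end, span_text, entity_type in sentence["ner"]:
            print(pmc_id, sentence["sid"], start, end, entity_type, span_text)
```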