shlokn committed
Commit ce5a12e · unverified · 1 Parent(s): 37d1825

feat: new dataset structure

This view is limited to 50 files because the commit contains too many changes. See the raw diff for the full change set.
Files changed (50)
  1. LOADING_TROUBLESHOOTING.md +0 -49
  2. README.md +56 -39
  3. TESTING.md +0 -173
  4. {train/texts → articles}/PMC10038974.md +0 -0
  5. {train/texts → articles}/PMC10085626.md +0 -0
  6. {train/texts → articles}/PMC10091789.md +0 -0
  7. {train/texts → articles}/PMC10099095.md +0 -0
  8. {train/texts → articles}/PMC10139129.md +0 -0
  9. {train/texts → articles}/PMC10145266.md +0 -0
  10. {train/texts → articles}/PMC10152845.md +0 -0
  11. {train/texts → articles}/PMC10154044.md +0 -0
  12. {train/texts → articles}/PMC10159199.md +0 -0
  13. {train/texts → articles}/PMC10163902.md +0 -0
  14. {test/texts → articles}/PMC10179231.md +0 -0
  15. {train/texts → articles}/PMC10196221.md +0 -0
  16. {train/texts → articles}/PMC10214567.md +0 -0
  17. {train/texts → articles}/PMC10230242.md +0 -0
  18. {train/texts → articles}/PMC10244018.md +0 -0
  19. {train/texts → articles}/PMC10275785.md +0 -0
  20. {test/texts → articles}/PMC10278212.md +0 -0
  21. {train/texts → articles}/PMC1029622.md +0 -0
  22. {train/texts → articles}/PMC10298263.md +0 -0
  23. {train/texts → articles}/PMC10309098.md +0 -0
  24. {train/texts → articles}/PMC10327396.md +0 -0
  25. {train/texts → articles}/PMC10337687.md +0 -0
  26. {train/texts → articles}/PMC10349379.md +0 -0
  27. {train/texts → articles}/PMC10349800.md +0 -0
  28. {train/texts → articles}/PMC10377184.md +0 -0
  29. {train/texts → articles}/PMC10381361.md +0 -0
  30. {train/texts → articles}/PMC10409991.md +0 -0
  31. {test/texts → articles}/PMC10418744.md +0 -0
  32. {train/texts → articles}/PMC10452379.md +0 -0
  33. {train/texts → articles}/PMC10463210.md +0 -0
  34. {train/texts → articles}/PMC10478012.md +0 -0
  35. {train/texts → articles}/PMC10483403.md +0 -0
  36. {val/texts → articles}/PMC10495004.md +0 -0
  37. {test/texts → articles}/PMC10499425.md +0 -0
  38. {train/texts → articles}/PMC10501538.md +0 -0
  39. {test/texts → articles}/PMC10502099.md +0 -0
  40. {train/texts → articles}/PMC10526247.md +0 -0
  41. {train/texts → articles}/PMC10527451.md +0 -0
  42. {train/texts → articles}/PMC10529681.md +0 -0
  43. {train/texts → articles}/PMC10532840.md +0 -0
  44. {train/texts → articles}/PMC10532907.md +0 -0
  45. {train/texts → articles}/PMC10537526.md +0 -0
  46. {train/texts → articles}/PMC10557961.md +0 -0
  47. {train/texts → articles}/PMC10565537.md +0 -0
  48. {test/texts → articles}/PMC10566653.md +0 -0
  49. {train/texts → articles}/PMC10582663.md +0 -0
  50. {train/texts → articles}/PMC10583240.md +0 -0
LOADING_TROUBLESHOOTING.md DELETED
@@ -1,49 +0,0 @@
-# Loading Troubleshooting
-
-## CastError: Column Names Don't Match
-
-If you encounter a `CastError` about column names not matching when loading the dataset from the Hugging Face Hub, this is caused by schema conflicts between different file formats in the repository.
-
-### Quick Fix
-
-Use explicit JSONL data files:
-
-```python
-from datasets import load_dataset
-
-dataset = load_dataset('json', data_files={
-    'train': 'hf://datasets/shlokn/autogkb/train.jsonl',
-    'validation': 'hf://datasets/shlokn/autogkb/val.jsonl',
-    'test': 'hf://datasets/shlokn/autogkb/test.jsonl'
-})
-```
-
-### Why This Happens
-
-The dataset repository contains both:
-- `.jsonl` files with the processed dataset (recommended for most use cases)
-- `.tsv` files with raw annotations (for manual inspection)
-
-The Hugging Face datasets library auto-detects both formats and tries to merge them, but they have slightly different column names:
-- JSONL files: `is_plural` (processed field names)
-- TSV files: `isPlural` (original column name)
-
-### The Fix
-
-We've added a `.huggingfaceignore` file to prevent auto-detection of the TSV files. However, it may take some time for this change to propagate on the Hub.
-
-### Alternative Loading Methods
-
-See `load_dataset_examples.py` for multiple ways to load the dataset:
-1. **Explicit JSONL files** (recommended, most reliable)
-2. **Standard HF loading** (works after .huggingfaceignore is processed)
-3. **Local loading** (if you have the repo cloned)
-4. **Manual loading** (using pandas directly)
-
-### Verification
-
-Run the test script to verify loading works:
-
-```bash
-python test_hf_loading.py
-```
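
The column mismatch described in the deleted troubleshooting note can be confirmed with a quick header comparison. This is a minimal sketch, assuming a local clone that still contains the old `train/annotations.tsv` alongside the processed `train.jsonl`; the mismatched column names shown in the comments are the ones quoted above.

```python
import pandas as pd

# Compare the raw TSV header with the processed JSONL header to see the
# schema conflict that triggers the CastError (e.g. isPlural vs is_plural).
tsv_cols = set(pd.read_csv("train/annotations.tsv", sep="\t", nrows=0).columns)
jsonl_cols = set(pd.read_json("train.jsonl", lines=True, nrows=1).columns)

print("Only in TSV:  ", sorted(tsv_cols - jsonl_cols))
print("Only in JSONL:", sorted(jsonl_cols - tsv_cols))
```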
README.md CHANGED
@@ -20,8 +20,16 @@ multilinguality:
 pretty_name: AutoGKB Annotation Benchmark
 dataset_info:
   features:
-  - name: variant_annotation_id
+  - name: pmcid
+    dtype: string
+  - name: article_title
+    dtype: string
+  - name: article_path
+    dtype: string
+  - name: article_text
     dtype: string
+  - name: variant_annotation_id
+    dtype: int64
   - name: variant_haplotypes
     dtype: string
   - name: gene
@@ -29,7 +37,7 @@ dataset_info:
   - name: drugs
     dtype: string
   - name: pmid
-    dtype: string
+    dtype: int64
   - name: phenotype_category
     dtype: string
   - name: significance
@@ -64,8 +72,6 @@ dataset_info:
     dtype: string
   - name: comparison_metabolizer_types
     dtype: string
-  - name: text
-    dtype: string
   splits:
   - name: train
     num_examples: 3124
@@ -102,25 +108,33 @@ Each example contains:
 Example:
 ```json
 {
-  "variant_annotation_id": "1450936460",
-  "variant_haplotypes": "rs28362731",
-  "gene": "AQP1",
-  "drugs": "cisplatin",
-  "pmid": "30840592",
-  "phenotype_category": "Efficacy",
-  "significance": "no",
-  "sentence": "Genotype AG is not associated with response to cisplatin in people with Mesothelioma as compared to genotype GG.",
-  "is_is_not_associated": "Not associated with",
-  "direction_of_effect": "",
-  "population_phenotypes_or_diseases": "Other:Mesothelioma",
-  "comparison_alleles_or_genotypes": "GG",
-  "text": "# Prediction of CYP2D6 poor metabolizers..."
+  "pmcid": "PMC6714673",
+  "article_title": "Warfarin Dose Model for the Prediction of Stable Maintenance Dose in Indian Patients",
+  "article_path": "articles/PMC6714673.md",
+  "article_text": "# Warfarin Dose Model for the Prediction of Stable Maintenance Dose in Indian Patients\n\n## Abstract\n\nWarfarin is a commonly used anticoagulant...",
+  "variant_annotation_id": 1449192282,
+  "variant_haplotypes": "rs1799853",
+  "gene": "CYP2C9",
+  "drugs": "warfarin",
+  "pmid": 28049362,
+  "phenotype_category": "Dosage",
+  "significance": "yes",
+  "sentence": "Genotype CT is associated with decreased dose of warfarin as compared to genotype CC.",
+  "alleles": "CT",
+  "is_is_not_associated": "Associated with",
+  "direction_of_effect": "decreased",
+  "pd_pk_terms": "dose of",
+  "comparison_alleles_or_genotypes": "CC"
 }
 ```
 
 ### Data Fields
 
 #### Core Fields
+- `pmcid`: PubMed Central identifier of the source article
+- `article_title`: Title of the source scientific article
+- `article_path`: Relative path to the article file (markdown format)
+- `article_text`: Full text of the scientific article in markdown format
 - `variant_annotation_id`: Unique identifier for each annotation
 - `variant_haplotypes`: Genetic variant identifier (e.g., rs numbers, haplotypes)
 - `gene`: Gene symbol (e.g., CYP2D6, ABCB1)
@@ -149,8 +163,6 @@ Example:
 - `comparison_alleles_or_genotypes`: Reference genotypes for comparison
 - `comparison_metabolizer_types`: Reference metabolizer types
 
-#### Text Data
-- `text`: Full text of the source scientific article in markdown format
 
 ### Data Splits
 
@@ -234,10 +246,10 @@ This dataset is released under the Apache License 2.0.
 ### Citation Information
 
 ```bibtex
-@misc{variant_drug_benchmark_2024,
-  title={Variant-Drug Annotation Benchmark},
-  author={AutoGKB Team},
-  year={2024},
+@misc{autogkb_annotation_benchmark_2025,
+  title={AutoGKB Annotation Benchmark},
+  author={Shlok Natarajan},
+  year={2025},
   note={A benchmark for pharmacogenomic variant-drug annotation extraction from scientific literature}
 }
 ```
@@ -257,7 +269,8 @@ This dataset contributes to the biomedical NLP community by providing:
 ```python
 from datasets import load_dataset
 
-dataset = load_dataset("variant_drug_benchmark")
+# Load the dataset from Hugging Face Hub
+dataset = load_dataset("autogkb/autogkb-annotation-benchmark")
 
 # Access different splits
 train_data = dataset["train"]
@@ -268,6 +281,14 @@ test_data = dataset["test"]
 efficacy_examples = train_data.filter(
     lambda x: "Efficacy" in x["phenotype_category"]
 )
+
+# Example: Access article text for a specific annotation
+first_example = train_data[0]
+print(f"PMC ID: {first_example['pmcid']}")
+print(f"Article Title: {first_example['article_title']}")
+print(f"Gene: {first_example['gene']}")
+print(f"Drug: {first_example['drugs']}")
+print(f"Full Article Text: {first_example['article_text'][:500]}...")
 ```
 
 ### Evaluation
@@ -288,21 +309,17 @@ python evaluate.py baseline_predictions.tsv val/annotations.tsv --output results
 ## File Structure
 
 ```
-benchmark/
-├── train/
-│   ├── annotations.tsv              # Training annotations
-│   └── texts/                       # Training text files (PMC*.md)
-├── val/
-│   ├── annotations.tsv              # Validation annotations
-│   └── texts/                       # Validation text files
-├── test/
-│   ├── annotations.tsv              # Test annotations
-│   └── texts/                       # Test text files
-├── dataset_statistics.json
-├── evaluate.py                      # Evaluation script
-├── baseline_model.py                # Rule-based baseline
-├── variant_drug_benchmark.py        # HuggingFace dataset script
-├── dataset_infos.json               # Dataset metadata
+autogkb/
+├── articles/                        # Full article texts in markdown format
+│   ├── PMC10038974.md
+│   ├── PMC10085626.md
+│   └── ...                          # 1,431 articles total
+├── train.jsonl                      # Training annotations (3,124 examples)
+├── val.jsonl                        # Validation annotations (796 examples)
+├── test.jsonl                       # Test annotations (596 examples)
+├── autogkb_annotation_benchmark.py  # HuggingFace dataset script
+├── dataset_infos.json               # Dataset metadata
+├── dataset_statistics.json          # Dataset statistics
 ├── LICENSE                          # Apache 2.0 license
 └── README.md                        # This file
 ```
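
With the new flat layout, the splits can also be read without the `datasets` library. The following is a minimal sketch, assuming a local clone of the repository with `train.jsonl`, `val.jsonl`, `test.jsonl`, and the `articles/` directory laid out as in the File Structure above; the `read_split` helper is illustrative and not part of the dataset scripts.

```python
import json
from pathlib import Path

root = Path(".")  # path to a local clone of the dataset repository


def read_split(name: str):
    """Yield one annotation dict per line of a JSONL split file."""
    with open(root / f"{name}.jsonl", encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)


train = list(read_split("train"))
example = train[0]

# Each record embeds its article (article_text) and also points to the
# markdown file on disk via article_path (e.g. "articles/PMC6714673.md").
article_md = (root / example["article_path"]).read_text(encoding="utf-8")
print(example["pmcid"], example["gene"], example["drugs"])
print(article_md[:300])
```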
TESTING.md DELETED
@@ -1,173 +0,0 @@
-# AutoGKB Dataset Testing Guide
-
-This directory contains comprehensive testing utilities for the AutoGKB Annotation Benchmark dataset to ensure reliable loading and data integrity.
-
-## Quick Start
-
-```bash
-# Run quick test to verify everything works
-python load_dataset_safe.py test
-
-# Run comprehensive simple tests
-python test_dataset_simple.py
-
-# Run full test suite (may take longer)
-python test_dataset_loading.py
-```
-
-## Test Files
-
-### `test_dataset_simple.py`
-**Recommended for routine testing**
-- Fast execution (< 30 seconds)
-- Tests TSV structure and headers
-- Tests dataset loading with small samples
-- Tests data content quality
-- Good for CI/CD pipelines
-
-### `test_dataset_loading.py`
-**Comprehensive test suite**
-- Full unittest framework
-- Tests cache handling
-- Tests multiple loading methods
-- Tests data integrity across all splits
-- More thorough but slower
-
-### `load_dataset_safe.py`
-**Production-ready loading utility**
-- Handles cache issues automatically
-- Multiple fallback loading methods
-- Safe for use in development and production
-- Includes quick testing function
-
-## Common Issues and Solutions
-
-### Cache Issues
-The HuggingFace datasets library can have caching conflicts. Our utilities automatically:
-- Disable caching when needed
-- Clear problematic cache directories
-- Provide fallback loading methods
-
-### Loading Methods
-The safe loader tries multiple approaches:
-1. **Manual generation**: Direct data loading without caching
-2. **Temporary disk**: Save to temp location and reload
-3. **CSV fallback**: Direct pandas loading as last resort
-
-## Usage Examples
-
-### Basic Loading
-```python
-from load_dataset_safe import load_autogkb_safe
-
-# Load small sample for testing
-dataset = load_autogkb_safe(split="train", max_examples=10)
-
-# Load specific split
-train_data = load_autogkb_safe(split="train")
-
-# Load all splits
-dataset_dict = load_autogkb_safe()
-```
-
-### Advanced Options
-```python
-# Disable caching (recommended for development)
-dataset = load_autogkb_safe(use_cache=False)
-
-# Load from specific directory
-dataset = load_autogkb_safe(data_dir="/path/to/dataset")
-
-# Quick test function
-from load_dataset_safe import quick_test
-success = quick_test(max_examples=5)
-```
-
-## Dataset Validation
-
-The tests verify:
-
-### ✅ File Structure
-- All required TSV files exist (`train/`, `val/`, `test/`)
-- Correct column headers (22 columns matching schema)
-- Non-empty files
-
-### ✅ Data Loading
-- Dataset builder initialization
-- Manual data generation
-- Cache-aware loading
-- Fallback methods
-
-### ✅ Data Integrity
-- No duplicate IDs within splits
-- Required fields are non-empty
-- Correct data types (all strings)
-- PMID format validation
-
-### ✅ Text Matching
-- PMID to text file matching
-- Text content availability
-- Cross-reference validation
-
-## Expected Test Output
-
-```
-🧪 AutoGKB Dataset Simple Test Suite
-==================================================
-🔍 Testing TSV file structure...
-✅ train/annotations.tsv: 22 columns
-✅ val/annotations.tsv: 22 columns
-✅ test/annotations.tsv: 22 columns
-✅ TSV structure test passed!
-
-📦 Testing dataset loading...
-✅ train: 3 examples loaded
-✅ validation: 3 examples loaded
-✅ test: 3 examples loaded
-✅ Dataset loading test passed!
-
-📚 Testing full dataset loading...
-✅ Full dataset test passed!
-   Splits: ['train', 'validation', 'test']
-   train: 5 examples
-   validation: 5 examples
-   test: 5 examples
-
-🔍 Testing data content quality...
-✅ Data content test passed! Checked 10 examples
-
-🎉 All tests passed!
-```
-
-## Troubleshooting
-
-### If tests fail:
-
-1. **Check file structure**: Ensure `train/`, `val/`, `test/` directories exist with `annotations.tsv` files
-2. **Clear cache**: Run `python -c "from load_dataset_safe import clear_datasets_cache; clear_datasets_cache()"`
-3. **Check dependencies**: Ensure `datasets`, `pandas` are installed
-4. **Run with verbose output**: Add print statements or use the comprehensive test suite
-
-### Common error messages:
-
-- **"Required file missing"**: Check that TSV files exist in correct locations
-- **"Column mismatch"**: Verify TSV headers match schema (run column header test)
-- **"Cache issues"**: Use `use_cache=False` or clear cache directory
-- **"Loading failed"**: Try fallback methods or check file permissions
-
-## Integration with CI/CD
-
-For automated testing:
-
-```bash
-# Add to your CI pipeline
-python test_dataset_simple.py
-if [ $? -eq 0 ]; then
-    echo "Dataset tests passed"
-else
-    echo "Dataset tests failed"
-    exit 1
-fi
-```
-
-The simple test suite is designed to be fast and reliable for continuous integration environments.
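
The fallback behaviour that the deleted guide attributes to `load_dataset_safe.py` (try the standard loader first, then fall back to a direct read) can be approximated in a few lines. This is an illustrative sketch, not the repository's actual implementation: the function name is made up, the repo id is taken from the `hf://` URLs in the troubleshooting note above, and the `val` split is mapped to the `val.jsonl` file name.

```python
import pandas as pd
from datasets import load_dataset


def load_autogkb_with_fallback(split: str = "train"):
    """Try the Hub loader first; on failure, read the split's JSONL directly."""
    hub_split = "validation" if split == "val" else split
    try:
        return load_dataset("shlokn/autogkb", split=hub_split)
    except Exception:
        # Direct JSONL read bypasses the datasets cache entirely
        # (requires huggingface_hub for the hf:// filesystem).
        return pd.read_json(
            f"hf://datasets/shlokn/autogkb/{split}.jsonl", lines=True
        )
```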
{train/texts → articles}/PMC10038974.md RENAMED
File without changes
{train/texts → articles}/PMC10085626.md RENAMED
File without changes
{train/texts → articles}/PMC10091789.md RENAMED
File without changes
{train/texts → articles}/PMC10099095.md RENAMED
File without changes
{train/texts → articles}/PMC10139129.md RENAMED
File without changes
{train/texts → articles}/PMC10145266.md RENAMED
File without changes
{train/texts → articles}/PMC10152845.md RENAMED
File without changes
{train/texts → articles}/PMC10154044.md RENAMED
File without changes
{train/texts → articles}/PMC10159199.md RENAMED
File without changes
{train/texts → articles}/PMC10163902.md RENAMED
File without changes
{test/texts → articles}/PMC10179231.md RENAMED
File without changes
{train/texts → articles}/PMC10196221.md RENAMED
File without changes
{train/texts → articles}/PMC10214567.md RENAMED
File without changes
{train/texts → articles}/PMC10230242.md RENAMED
File without changes
{train/texts → articles}/PMC10244018.md RENAMED
File without changes
{train/texts → articles}/PMC10275785.md RENAMED
File without changes
{test/texts → articles}/PMC10278212.md RENAMED
File without changes
{train/texts → articles}/PMC1029622.md RENAMED
File without changes
{train/texts → articles}/PMC10298263.md RENAMED
File without changes
{train/texts → articles}/PMC10309098.md RENAMED
File without changes
{train/texts → articles}/PMC10327396.md RENAMED
File without changes
{train/texts → articles}/PMC10337687.md RENAMED
File without changes
{train/texts → articles}/PMC10349379.md RENAMED
File without changes
{train/texts → articles}/PMC10349800.md RENAMED
File without changes
{train/texts → articles}/PMC10377184.md RENAMED
File without changes
{train/texts → articles}/PMC10381361.md RENAMED
File without changes
{train/texts → articles}/PMC10409991.md RENAMED
File without changes
{test/texts → articles}/PMC10418744.md RENAMED
File without changes
{train/texts → articles}/PMC10452379.md RENAMED
File without changes
{train/texts → articles}/PMC10463210.md RENAMED
File without changes
{train/texts → articles}/PMC10478012.md RENAMED
File without changes
{train/texts → articles}/PMC10483403.md RENAMED
File without changes
{val/texts → articles}/PMC10495004.md RENAMED
File without changes
{test/texts → articles}/PMC10499425.md RENAMED
File without changes
{train/texts → articles}/PMC10501538.md RENAMED
File without changes
{test/texts → articles}/PMC10502099.md RENAMED
File without changes
{train/texts → articles}/PMC10526247.md RENAMED
File without changes
{train/texts → articles}/PMC10527451.md RENAMED
File without changes
{train/texts → articles}/PMC10529681.md RENAMED
File without changes
{train/texts → articles}/PMC10532840.md RENAMED
File without changes
{train/texts → articles}/PMC10532907.md RENAMED
File without changes
{train/texts → articles}/PMC10537526.md RENAMED
File without changes
{train/texts → articles}/PMC10557961.md RENAMED
File without changes
{train/texts → articles}/PMC10565537.md RENAMED
File without changes
{test/texts → articles}/PMC10566653.md RENAMED
File without changes
{train/texts → articles}/PMC10582663.md RENAMED
File without changes
{train/texts → articles}/PMC10583240.md RENAMED
File without changes