Update README.md
README.md CHANGED
```diff
@@ -23,7 +23,7 @@ pipeline_tag: text-generation
 
 ## Model Details
 
-This model represent a fine-tuned version of `Qwen/Qwen2.5-0.5B-Instruct` on MultiClinSum training data
+This model represents a fine-tuned version of `Qwen/Qwen2.5-0.5B-Instruct` on [MultiClinSum](https://zenodo.org/records/15463353) training data.
 
 ### Model Description
 
@@ -93,7 +93,7 @@ The training procedure involves:
 1. Preparation of the `rationale` for summaries distillation.
 2. Launch of the **fine-tuning** process.
 
-**Fine-tuning:** Please follow this script for using `MultiClinSum` dataset for fine-tuning at GoogleColab A100 (40GB VRAM) + 80GB RAM:
+**Fine-tuning:** Please follow this script to fine-tune on the [`MultiClinSum` dataset](https://zenodo.org/records/15463353) using a Google Colab A100 (40 GB VRAM) + 80 GB of RAM:
 * https://github.com/nicolay-r/distil-tuning-llm/blob/master/distil_ft_qwen25_05b_A100-40GB_80GB_std.sh
 
 #### Preprocessing [optional]
```
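
For reference, a minimal inference sketch for the resulting model, assuming the standard `transformers` chat workflow; the repo id below is the base `Qwen/Qwen2.5-0.5B-Instruct` as a placeholder and should be swapped for this fine-tuned checkpoint's id:

```python
# Minimal inference sketch (assumptions: standard transformers chat API;
# the base model id stands in for this fine-tuned checkpoint's repo id).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # swap in the fine-tuned checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Ask the model to summarize a clinical case report (MultiClinSum-style input).
case_report_text = "<clinical case report text>"
messages = [
    {"role": "system", "content": "You are an assistant that summarizes clinical case reports."},
    {"role": "user", "content": "Summarize the following case report:\n" + case_report_text},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate the summary and decode only the newly produced tokens.
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```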