🩺 For those interested in summarization of long textual reports in the medical domain,
@Xiaolihai
and I are delighted to share that we experimented with distillation-tuning adaptation of Qwen-2.5 0.5B. We take reports from the MultiClinSum dataset and pass them through the 72B version to retrieve report explanations, which we then use to drive distillation tuning of the 0.5B model. We experiment with passages written in English, French, Portuguese, and Spanish.
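To make the teacher step concrete, here is a minimal sketch assuming the Hugging Face transformers library and the public Qwen/Qwen2.5-72B-Instruct checkpoint (the post only names the model sizes; the prompt wording and generation settings are illustrative, not our exact setup):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

TEACHER = "Qwen/Qwen2.5-72B-Instruct"  # assumed checkpoint; the post says "72B version"

tok = AutoTokenizer.from_pretrained(TEACHER)
teacher = AutoModelForCausalLM.from_pretrained(TEACHER, device_map="auto")

def explain(report: str) -> str:
    # Ask the teacher to explain the report before summarizing it;
    # the prompt text is hypothetical, not the exact template used.
    messages = [{
        "role": "user",
        "content": ("Explain the key clinical findings of the following "
                    "report, then summarize it:\n\n" + report),
    }]
    inputs = tok.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(teacher.device)
    out = teacher.generate(inputs, max_new_tokens=512, do_sample=False)
    return tok.decode(out[0, inputs.shape[1]:], skip_special_tokens=True)

# The resulting (report, explanation) pairs then serve as distillation
# targets when fine-tuning the 0.5B student.
```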
📊 We find that the distillation technique yields a 2-4% performance increase over standard fine-tuning for reports in English (in both official and non-official evaluations). For the other languages, it produces systems that perform on par with conventional (standard) tuning (see results below).
Dataset: https://zenodo.org/records/15459174
Competition: https://participants-area.bioasq.org/general_information/MultiClinSum/
GitHub: https://github.com/nicolay-r/distil-tuning-llm
Model: nicolay-r/qwen25-05b-multiclinsum-distil
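A minimal usage sketch for the released checkpoint (the prompt format and generation settings here are assumptions; see the GitHub repo for the exact setup):

```python
from transformers import pipeline

# Checkpoint name comes from the post; prompt and settings are assumptions.
summarizer = pipeline(
    "text-generation",
    model="nicolay-r/qwen25-05b-multiclinsum-distil",
)

report = "Patient admitted with chest pain ..."  # a clinical report in EN/FR/PT/ES
out = summarizer(
    "Summarize the following clinical report:\n\n" + report,
    max_new_tokens=256,
    return_full_text=False,
)
print(out[0]["generated_text"])
```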