# qwen2.5-md-finetuned
## Model Overview

`qwen2.5-md-finetuned` is a fine-tuned version of the Qwen2.5-Medium model, optimized for improved performance on domain-specific or task-specific data. It builds on the multilingual and multitask capabilities of the base Qwen2.5 architecture and is adapted with Low-Rank Adaptation (LoRA) for parameter-efficient fine-tuning.
- **Base Model:** Qwen2.5-Medium
- **Fine-Tuned By:** adi2606
- **License:** MIT
- **Adapter Format:** `adapter_model.safetensors` (LoRA)
## Use Cases
This model is best suited for:
- Custom conversational agents
- Code or documentation assistants
- Knowledge-based QA systems
- Any application benefiting from Qwen2.5's capabilities but requiring domain-specific fine-tuning
## Fine-Tuning Details

- **Technique:** Parameter-efficient fine-tuning using LoRA
- **Adapter Config:** See `adapter_config.json`
- **Tokenizer:** Includes the full tokenizer configuration (`tokenizer_config.json`, `vocab.json`, `merges.txt`)
- **Additional Tokens:** `added_tokens.json` and `special_tokens_map.json` for enhanced compatibility with downstream applications
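To illustrate why LoRA is parameter-efficient: the adapter learns a low-rank update `B @ A` that is added to a frozen base weight matrix, so only the two small factors are trained. A minimal NumPy sketch (the hidden size, rank, and scaling factor below are hypothetical, not the values used for this model):

```python
import numpy as np

d, r = 1024, 8          # hypothetical hidden size and LoRA rank
alpha = 16              # hypothetical LoRA scaling factor
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))   # frozen base weight (never updated)
A = rng.standard_normal((r, d))   # trained low-rank factor
B = np.zeros((d, r))              # B starts at zero, so the update begins as a no-op

# Effective weight seen at inference time
W_adapted = W + (alpha / r) * (B @ A)

full_params = d * d               # parameters in the full matrix
lora_params = d * r + r * d       # trainable parameters in the adapter
print(lora_params / full_params)  # small fraction of the full matrix
```

With these toy numbers the adapter trains 16,384 parameters against 1,048,576 in the full matrix, about 1.6%, which is why only the small `adapter_model.safetensors` needs to be shipped alongside the base model.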
## Files

| Filename | Description |
|---|---|
| `adapter_model.safetensors` | LoRA adapter weights |
| `adapter_config.json` | Adapter configuration for inference |
| `tokenizer_config.json` | Tokenizer configuration |
| `tokenizer.json` | Serialized fast-tokenizer vocabulary |
| `vocab.json` | Vocabulary JSON |
| `merges.txt` | Merge rules for the BPE tokenizer |
| `special_tokens_map.json` | Special tokens mapping |
| `added_tokens.json` | Custom added tokens |
| `chat_template.jinja` | Custom chat template (if applicable) |
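The `chat_template.jinja` file tells `tokenizer.apply_chat_template` how to serialize a list of chat messages into a single prompt string. As a plain-Python sketch of what a ChatML-style template produces (the actual template shipped with this repository may differ):

```python
def apply_chat_template(messages, add_generation_prompt=False):
    """Render messages the way a ChatML-style chat template would.

    Mimics tokenizer.apply_chat_template for illustration only; the real
    chat_template.jinja in the repository may use different markers.
    """
    out = "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )
    if add_generation_prompt:
        out += "<|im_start|>assistant\n"  # cue the model to start its reply
    return out

prompt = apply_chat_template(
    [{"role": "user", "content": "Hello"}], add_generation_prompt=True
)
print(prompt)
```

In practice you would call `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` and let the bundled template handle this formatting for you.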
## How to Use

You can load this adapter on top of the base Qwen2.5-Medium model using `peft`:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Medium", device_map="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("adi2606/qwen2.5-md-finetuned", trust_remote_code=True)
model = PeftModel.from_pretrained(base_model, "adi2606/qwen2.5-md-finetuned")

# Generate with the adapted model as usual:
inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```
## Performance

(Optional section) If you have evaluation metrics or benchmark results, add them here. For example:

- Domain accuracy: 89.3%
- BLEU/ROUGE/F1 scores, if applicable
## Citation

If you use this model in your work, please consider citing it:

```bibtex
@misc{adi2606qwen25md,
  author       = {adi2606},
  title        = {qwen2.5-md-finetuned},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/adi2606/qwen2.5-md-finetuned}},
}
```
## Contributions
If you find issues or would like to contribute improvements to the model or tokenizer, feel free to open a pull request or discussion on the model repository.