🧠 qwen2.5-md-finetuned

Model Overview

qwen2.5-md-finetuned is a fine-tuned version of the Qwen2.5-Medium model, optimized for improved performance on domain-specific or task-specific data. This model leverages the powerful multilingual and multitask capabilities of the base Qwen2.5 architecture and is adapted further using Low-Rank Adaptation (LoRA) techniques for efficient fine-tuning.

  • ✅ Base Model: Qwen2.5-Medium
  • 🛠️ Fine-Tuned By: adi2606
  • 📜 License: MIT
  • 🧱 Adapter Format: adapter_model.safetensors (LoRA)


📌 Use Cases

This model is best suited for:

  • Custom conversational agents
  • Code or documentation assistants
  • Knowledge-based QA systems
  • Any application benefiting from Qwen2.5’s capabilities but requiring domain-specific fine-tuning

🔧 Fine-Tuning Details

  • Technique: Parameter-efficient fine-tuning using LoRA (see the training sketch after this list)
  • Adapter Config: See adapter_config.json
  • Tokenizer: Includes full tokenizer configuration (tokenizer_config.json, vocab.json, merges.txt)
  • Additional Tokens: added_tokens.json and special_tokens_map.json for enhanced compatibility with downstream applications
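
The training script for this adapter is not published, so the snippet below is only a minimal sketch of how such a LoRA adapter is typically created with peft. The rank, alpha, and target modules shown are illustrative assumptions; the actual values are recorded in adapter_config.json.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Load the base model named by this card (Qwen/Qwen2.5-Medium)
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Medium", trust_remote_code=True)

# Illustrative LoRA settings -- the real values live in adapter_config.json
lora_cfg = LoraConfig(
    r=16,                                 # low-rank dimension (assumption)
    lora_alpha=32,                        # scaling factor (assumption)
    target_modules=["q_proj", "v_proj"],  # typical attention projections (assumption)
    task_type="CAUSAL_LM",
)

# Wrap the base model so only the small adapter matrices are trainable
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()

# After training, saving the model writes adapter_model.safetensors and adapter_config.json
model.save_pretrained("qwen2.5-md-finetuned")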

💾 Files

Filename                    Description
adapter_model.safetensors   LoRA adapter weights
adapter_config.json         LoRA adapter configuration for inference
tokenizer_config.json       Tokenizer configuration
tokenizer.json              Serialized fast tokenizer (full vocabulary, merges, and settings)
vocab.json                  Vocabulary JSON
merges.txt                  Merge rules for the BPE tokenizer
special_tokens_map.json     Special tokens mapping
added_tokens.json           Custom added tokens
chat_template.jinja         Custom chat template (if applicable)

✅ How to Use

You can load this adapter with the base Qwen2.5-Medium model using peft:

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and the tokenizer shipped with this repository
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Medium", device_map="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("adi2606/qwen2.5-md-finetuned", trust_remote_code=True)

# Attach the LoRA adapter weights on top of the base model
model = PeftModel.from_pretrained(base_model, "adi2606/qwen2.5-md-finetuned")
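
Once loaded, the model can be used like any other causal language model. The following is a minimal generation sketch, assuming the repository's chat_template.jinja is registered with the tokenizer; the prompt and sampling settings are purely illustrative.

# Build a chat-formatted prompt with the tokenizer's chat template
messages = [{"role": "user", "content": "Explain what LoRA fine-tuning does in one paragraph."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Generate a response; these sampling parameters are illustrative defaults
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))

If you prefer to deploy without peft at inference time, the LoRA weights can also be folded into the base model with model.merge_and_unload(), which returns a plain transformers model.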

📈 Performance

No evaluation metrics or benchmark results have been published for this fine-tune yet. When they are available, results such as the following can be added here (a minimal scoring sketch follows this list):

  • Accuracy on a held-out domain-specific test set
  • BLEU/ROUGE/F1 scores, if applicable
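
As one way to populate this section, the snippet below sketches how a ROUGE score could be computed with the Hugging Face evaluate library. The predictions and references are hypothetical placeholders, not outputs of this model.

import evaluate

# Hypothetical model outputs and gold references from a held-out evaluation set
predictions = ["LoRA adds small trainable low-rank matrices to a frozen base model."]
references = ["LoRA fine-tunes a frozen model by training small low-rank matrices."]

# Compute ROUGE with the evaluate library
rouge = evaluate.load("rouge")
print(rouge.compute(predictions=predictions, references=references))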

📚 Citation

If you use this model in your work, please consider citing it:

@misc{adi2606qwen25md,
  author = {adi2606},
  title = {qwen2.5-md-finetuned},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/adi2606/qwen2.5-md-finetuned}},
}

🀝 Contributions

If you find issues or would like to contribute improvements to the model or tokenizer, feel free to open a pull request or discussion on the model repository.
