# Model Card for Qwen3-1.7B-Alpaca-Azerbaijani

## Model Details
This model is a fine-tuned version of unsloth/Qwen3-1.7B, trained on an Azerbaijani translation of the Stanford Alpaca dataset. It is instruction-tuned to better follow prompts and generate relevant responses in Azerbaijani.
## Model Description
- Language(s) (NLP): Azerbaijani
- License: MIT
- Finetuned from model: unsloth/Qwen3-1.7B
## Uses

### Direct Use
- Instruction following in Azerbaijani
- Education, research, and experimentation with low-resource language LLMs
- Chatbots, task-oriented systems, language agents
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from huggingface_hub import login
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
from peft import PeftModel

login(token="")  # your Hugging Face access token

tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-1.7B")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen3-1.7B",
    device_map={"": 0},
    token="",
)
# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "Rustamshry/Qwen3-1.7B-Alpaca-Azerbaijani")

# "Rewrite the following sentence by adding an adjective.
#  The student was able to grasp complex concepts."
question = "Bir sifət əlavə edərək aşağıdakı cümləni yenidən yazın. Tələbə mürəkkəb anlayışları anlaya bildi."

messages = [
    {"role": "user", "content": question},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,
)

_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=2048,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```
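The `temperature`, `top_p`, and `top_k` arguments above control how the next token is sampled from the model's output distribution. As a rough illustration, here is a minimal pure-Python sketch of the filtering these parameters imply (this is not the Transformers implementation, which operates on batched logit tensors):

```python
import math
import random

def sample_next_token(logits, temperature=0.7, top_k=20, top_p=0.8):
    """Sample a token id from raw logits using temperature, top-k, and top-p."""
    # Temperature scaling: values below 1.0 sharpen the distribution.
    scaled = [l / temperature for l in logits]
    # Softmax over the scaled logits (shifted by the max for stability).
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # top-k: keep only the k most probable token ids.
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]
    # top-p (nucleus): trim to the smallest prefix whose mass reaches top_p.
    kept, mass = [], 0.0
    for i in ranked:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    # Renormalise over the survivors and sample one of them.
    mass = sum(probs[i] for i in kept)
    r = random.random() * mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

With a strongly peaked distribution the nucleus collapses to a single token, so sampling becomes effectively greedy; flatter distributions leave more candidates in play.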
## Training Data
- Dataset: omar07ibrahim/Alpaca_Stanford_Azerbaijan
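Alpaca-style datasets typically carry `instruction`, `input`, and `output` fields. A hypothetical helper sketching how one such record can be flattened into the chat-message format used in the example above (the field names assume the standard Alpaca schema; the actual dataset columns may differ):

```python
def alpaca_to_messages(record: dict) -> list[dict]:
    """Convert one Alpaca-style record into a chat-message list.

    The user turn combines the instruction with the optional input
    context; the assistant turn is the reference output.
    """
    user_content = record["instruction"]
    if record.get("input"):
        user_content += "\n\n" + record["input"]
    return [
        {"role": "user", "content": user_content},
        {"role": "assistant", "content": record["output"]},
    ]
```

A record like `{"instruction": "Bir sifət əlavə edərək aşağıdakı cümləni yenidən yazın.", "input": "Tələbə mürəkkəb anlayışları anlaya bildi.", "output": "..."}` then yields one user turn and one assistant turn, ready for `tokenizer.apply_chat_template`.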
## Framework versions
- PEFT 0.14.0