Model Card for Qwen3-1.7B-azerbaijani-math

Model Details

This model is a fine-tuned version of Qwen3-1.7B, adapted for instruction following in Azerbaijani with a focus on mathematical problem solving. The fine-tuning improves the model’s ability to:

  • Understand and solve math problems written in Azerbaijani
  • Respond in a fluent, natural Azerbaijani style
  • Follow task-specific instructions with improved alignment and chat capability

Model Description

  • Developed by: Rustam Shiriyev
  • Language(s) (NLP): Azerbaijani
  • License: MIT
  • Finetuned from model: unsloth/Qwen3-1.7B

Uses

Direct Use

This model is best suited for:

  • Solving and explaining math problems in Azerbaijani
  • Educational assistants and tutoring bots for Azerbaijani students

Out-of-Scope Use

  • The model is not fine-tuned for factual correctness or safety filtering; do not rely on it for factual question answering or in settings that require content moderation.

How to Get Started with the Model

Use the code below to get started with the model.

from huggingface_hub import login
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
from peft import PeftModel

login(token="")  # paste your Hugging Face access token here

# Load the base model and tokenizer, then attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-1.7B")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen3-1.7B",
    device_map={"": 0},  # place the model on GPU 0
)

model = PeftModel.from_pretrained(base_model, "Rustamshry/Qwen3-1.7B-azerbaijani-math")
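
Optionally, the LoRA weights can be folded into the base model with PEFT's merge_and_unload, which removes the adapter indirection at inference time (an optional step, not required by the card):

# Optional: merge the adapter into the base weights for faster inference.
model = model.merge_and_unload()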


# The question asks: "Given the function f(x) = 2x^2 + 3x + 4, compute its
# maximum or minimum point and explain the result in detail."
question = "Bir f(x) funksiyası verilib: f(x) = 2x^2 + 3x + 4. Bu funksiyanın maksimum və ya minimum nöqtəsini hesablayın və nəticəni geniş izah edin."

messages = [
    {"role" : "user", "content" : question}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # disable Qwen3's "thinking" mode for direct answers
)

# Stream the model's answer token by token as it is generated.
_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=512,
    temperature=0.7, top_p=0.8, top_k=20,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
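
If you want the full answer as a string rather than a live stream, a minimal variant with the same sampling settings looks like this (the decoding step is a standard transformers pattern, not taken from the card):

inputs = tokenizer(text, return_tensors="pt").to("cuda")
output_ids = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.7, top_p=0.8, top_k=20,
)
# Decode only the newly generated tokens, skipping the prompt.
answer = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer)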

Training Details

Training Data

The model was fine-tuned on a curated combination of:

  • OnlyCheeini/azerbaijani-math-gpt4o — 100,000 examples of Azerbaijani math instructions generated via GPT-4o, focused on algebra, geometry, and applied math.

  • mlabonne/FineTome-100k — 35,000 chat-style instruction samples (35% of the full dataset) to improve general-purpose instruction following and conversational ability.
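
A minimal sketch of how this data mixture could be loaded with the datasets library; the shuffle seed is an illustrative assumption, and in practice both sets would be normalized to a shared chat schema before training:

from datasets import load_dataset

# Full Azerbaijani math instruction set (~100k examples).
math_ds = load_dataset("OnlyCheeini/azerbaijani-math-gpt4o", split="train")

# ~35% random sample of FineTome for general instruction following
# (the seed here is an assumption, not the recorded training configuration).
chat_ds = load_dataset("mlabonne/FineTome-100k", split="train")
chat_ds = chat_ds.shuffle(seed=42).select(range(35_000))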

Framework versions

  • PEFT 0.14.0
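
To match the recorded adapter environment, pin PEFT when installing; the other package versions are not recorded on this card (accelerate is needed for the device_map loading shown above):

pip install peft==0.14.0 transformers accelerate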