---
base_model: unsloth/Qwen3-1.7B
library_name: peft
license: mit
datasets:
- omar07ibrahim/Alpaca_Stanford_Azerbaijan
language:
- az
pipeline_tag: text-generation
tags:
- alpaca
---
# Model Card for Qwen3-1.7B-Alpaca-Azerbaijani
## Model Details
This model is a fine-tuned version of [`unsloth/Qwen3-1.7B`](https://huggingface.co/unsloth/Qwen3-1.7B), trained on a translated version of the **Stanford Alpaca dataset** in **Azerbaijani**.
The model is instruction-tuned to better follow prompts and generate relevant responses in Azerbaijani.
### Model Description
- **Language(s) (NLP):** Azerbaijani
- **License:** MIT
- **Finetuned from model:** unsloth/Qwen3-1.7B
## Uses
### Direct Use
- Instruction following in Azerbaijani
- Education, research, and experimentation with low-resource language LLMs
- Chatbots, task-oriented systems, language agents
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from huggingface_hub import login
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

login(token="")  # your Hugging Face access token

# Load the base model and attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-1.7B")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen3-1.7B",
    device_map={"": 0},
    token="",
)
model = PeftModel.from_pretrained(base_model, "Rustamshry/Qwen3-1.7B-Alpaca-Azerbaijani")

# "Rewrite the following sentence by adding an adjective.
#  The student was able to understand complex concepts."
question = "Bir sifət əlavə edərək aşağıdakı cümləni yenidən yazın. Tələbə mürəkkəb anlayışları anlaya bildi."

messages = [{"role": "user", "content": question}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # disable Qwen3's "thinking" mode
)

# Stream the generated answer token by token
_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=2048,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```
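The adapter was trained on Alpaca-style instruction data, which pairs each instruction (optionally with an input) with a response. If you want to experiment with the classic raw Alpaca prompt layout instead of the chat template, here is a minimal sketch. This is illustrative only: the source does not document the exact template used during fine-tuning, and the English header text follows the original Stanford Alpaca convention.

```python
def alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Format a request in the classic Stanford Alpaca prompt style (illustrative)."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

# Same example request as above, split into instruction and input
prompt = alpaca_prompt(
    "Bir sifət əlavə edərək aşağıdakı cümləni yenidən yazın.",
    "Tələbə mürəkkəb anlayışları anlaya bildi.",
)
```

The formatted string can then be passed to `tokenizer(...)` and `model.generate(...)` in place of the chat-template output.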
## Training Data
- **Dataset:** omar07ibrahim/Alpaca_Stanford_Azerbaijan
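To inspect the training data yourself, the dataset can be pulled from the Hub with the `datasets` library. A minimal sketch, assuming the default split and field names of the published dataset (not verified here):

```python
from datasets import load_dataset

# Download the translated Alpaca dataset from the Hugging Face Hub
dataset = load_dataset("omar07ibrahim/Alpaca_Stanford_Azerbaijan")

# Inspect the available splits, columns, and a sample row
print(dataset)
print(dataset["train"][0])
```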
### Framework versions
- PEFT 0.14.0