---
base_model: unsloth/Qwen3-1.7B
library_name: peft
license: mit
datasets:
- omar07ibrahim/Alpaca_Stanford_Azerbaijan
language:
- az
pipeline_tag: text-generation
tags:
- alpaca
---
# Model Card for Qwen3-1.7B-Alpaca-Azerbaijani
## Model Details
This model is a fine-tuned version of [`unsloth/Qwen3-1.7B`](https://huggingface.co/unsloth/Qwen3-1.7B), trained on a translated version of the **Stanford Alpaca dataset** in **Azerbaijani**.
The model is instruction-tuned to better follow prompts and generate relevant responses in Azerbaijani.
### Model Description
- **Language(s) (NLP):** Azerbaijani
- **License:** MIT
- **Finetuned from model:** unsloth/Qwen3-1.7B
## Uses
### Direct Use
- Instruction following in Azerbaijani
- Education, research, and experimentation with low-resource language LLMs
- Chatbots, task-oriented systems, language agents
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from huggingface_hub import login
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Authenticate if your environment requires it; insert your Hugging Face token.
login(token="")

tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-1.7B")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen3-1.7B",
    device_map={"": 0},
)

# Attach the LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(base_model, "Rustamshry/Qwen3-1.7B-Alpaca-Azerbaijani")

# "Rewrite the following sentence by adding an adjective.
#  The student was able to understand complex concepts."
question = "Bir sifət əlavə edərək aşağıdakı cümləni yenidən yazın. Tələbə mürəkkəb anlayışları anlaya bildi."

messages = [
    {"role": "user", "content": question},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # disable Qwen3 "thinking" mode
)

_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=2048,
    do_sample=True,  # required for temperature/top_p/top_k to take effect
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```
## Training Data
- **Dataset:** omar07ibrahim/Alpaca_Stanford_Azerbaijan
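The source dataset follows the Stanford Alpaca schema with `instruction`, `input`, and `output` fields (an assumption based on the original Alpaca format; check the dataset card to confirm). A minimal sketch of folding one record into the chat-message format used in the example above:

```python
def alpaca_record_to_messages(record: dict) -> list[dict]:
    """Fold an Alpaca-style record into a user turn plus the reference answer."""
    prompt = record["instruction"]
    if record.get("input"):  # optional context field, appended when present
        prompt = f"{prompt}\n\n{record['input']}"
    return [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": record["output"]},
    ]

# Toy record for illustration (not taken from the dataset):
example = {
    "instruction": "Cümləni tamamlayın.",  # "Complete the sentence."
    "input": "",
    "output": "Tamamlanmış cümlə.",  # "The completed sentence."
}
messages = alpaca_record_to_messages(example)
```

Records shaped this way can be passed through `tokenizer.apply_chat_template` to produce supervised training text in the model's chat format.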
### Framework versions
- PEFT 0.14.0