---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
language:
- bn
license: apache-2.0
tags:
- Bengali
- QA
- llama-3
- instruct
pipeline_tag: text-generation
---
# Bangla-Llama-3.2-3B-Instruct-QA-v2
**Bengali Question-Answering Model** | Fine-tuned on the Llama-3.2 architecture | Version 2
## Model Description
This model is optimized for question answering in Bengali. It was fine-tuned from the **Llama-3.2-3B-Instruct** base model using Unsloth, on a **context-aware instruct dataset**: the context passage is supplied as the system prompt, the question as the user turn, and the model generates an accurate, relevant answer grounded in that passage.
## How to Use
### Required Libraries
```bash
pip install transformers torch accelerate
```
### Code Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "Kowshik24/Bangla-llama-3.2-3B-Instruct-QA-v2"

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# The system prompt carries the context passage; the user turn asks the question.
# Context (English gloss): "On 21 February 1952, students of Dhaka University
# demonstrated demanding recognition of Bengali as a state language of Pakistan.
# Rafiq, Salam, Barkat, and many others were martyred by police gunfire. As a
# result of this movement, Bengali gained state-language status in 1956, and in
# 1999 UNESCO declared 21 February International Mother Language Day."
messages = [
    {
        "role": "system",
        "content": "১৯৫২ সালের ২১ ফেব্রুয়ারি বাংলা ভাষাকে পাকিস্তানের রাষ্ট্রভাষা হিসেবে স্বীকৃতি দেওয়ার দাবিতে ঢাকা বিশ্ববিদ্যালয়ের ছাত্ররা বিক্ষোভ করে। পুলিশের গুলিতে শহিদ হন রফিক, সালাম, বরকতসহ অনেকে। এই আন্দোলনের ফলস্বরূপ ১৯৫৬ সালে বাংলা রাষ্ট্রভাষার মর্যাদা পায় এবং পরবর্তীতে UNESCO ১৯৯৯ সালে ২১ ফেব্রুয়ারিকে আন্তর্জাতিক মাতৃভাষা দিবস ঘোষণা করে।"
    },
    {
        "role": "user",
        # Question (English gloss): "On what date is Language Movement Day observed?"
        "content": "ভাষা আন্দোলনের দিনটি কোন তারিখে পালিত হয়?"
    },
]

# Apply the chat template and move the input tensors to the model's device
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Generate the answer (near-deterministic sampling at temperature 0.01)
outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    temperature=0.01,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens so the prompt is not echoed back
answer = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True).strip()
print("Answer:", answer)
```
### Output
```
Answer: ২১ ফেব্রুয়ারি
```
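The answer "২১ ফেব্রুয়ারি" is Bengali for "21 February", matching the date given in the context passage.

Alternatively, the higher-level `pipeline` API handles chat templating and decoding for you. A minimal sketch, assuming a recent `transformers` release that accepts chat-style message lists directly (the message contents below are placeholders, not part of the original example):

```python
from transformers import pipeline
import torch

generator = pipeline(
    "text-generation",
    model="Kowshik24/Bangla-llama-3.2-3B-Instruct-QA-v2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "<Bengali context passage>"},
    {"role": "user", "content": "<Bengali question>"},
]

# The pipeline applies the chat template and appends the assistant's reply
result = generator(messages, max_new_tokens=256, temperature=0.01, do_sample=True)
print(result[0]["generated_text"][-1]["content"])
```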
## Hyperparameters
| Parameter        | Value    | Explanation                             |
|------------------|----------|-----------------------------------------|
| `temperature`    | 0.01     | Near-deterministic, factual answers     |
| `max_new_tokens` | 256      | Maximum length of the generated answer  |
| `torch_dtype`    | bfloat16 | Half-precision weights to reduce memory |
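These defaults favor repeatable, extractive answers. For more varied phrasing you can loosen the sampling settings. Continuing the usage example above (reusing `model`, `tokenizer`, and `input_ids`), with illustrative values rather than tuned recommendations:

```python
# Looser sampling for more varied wording; values are illustrative only
outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    temperature=0.7,  # higher temperature admits more diverse continuations
    top_p=0.9,        # nucleus sampling keeps only the top 90% probability mass
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)
```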
## Training Details
- **Architecture**: Llama-3.2-3B Instruct (base checkpoint: `unsloth/llama-3.2-3b-instruct-bnb-4bit`)
- **Fine-tuning**: Unsloth with 4-bit QLoRA (a matching 4-bit loading sketch follows)
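Because the base checkpoint is a bitsandbytes 4-bit variant, the fine-tuned model can also be loaded in 4-bit for low-VRAM inference. A sketch under that assumption, requiring `pip install bitsandbytes`:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

model_name = "Kowshik24/Bangla-llama-3.2-3B-Instruct-QA-v2"

# NF4 quantization with bfloat16 compute, the usual QLoRA-style inference setup
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
```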
## Use Cases
- Educational tools
- Bengali chatbots
- Documentation Q&A
- Journalism research
## Limitations
- Does not support contexts longer than roughly 4K tokens; longer passages should be truncated or split (see the sketch below)
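If a passage may exceed that budget, check its token count before generating. A minimal sketch reusing the `tokenizer` from the usage example, where `context` is a hypothetical variable holding your passage:

```python
MAX_CONTEXT_TOKENS = 4096 - 512  # reserve headroom for the question, template, and answer

# `context` is assumed to hold the Bengali passage destined for the system prompt
context_ids = tokenizer(context, add_special_tokens=False)["input_ids"]
if len(context_ids) > MAX_CONTEXT_TOKENS:
    # Naive strategy: keep only the first MAX_CONTEXT_TOKENS tokens; for better
    # recall, split the passage into chunks and ask the question against each
    context = tokenizer.decode(context_ids[:MAX_CONTEXT_TOKENS])
```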
## Ethical AI
This model is designed following ethical guidelines. It should not be used to generate harmful content.
## Citation
If this model helps you in your work, please cite it as follows:
```bibtex
@INPROCEEDINGS{11013841,
author={Debanath, Koshik and Aich, Sagor and Srizon, Azmain Yakin},
booktitle={2025 International Conference on Electrical, Computer and Communication Engineering (ECCE)},
title={Advancing Low-Resource NLP: Contextual Question Answering for Bengali Language Using Llama},
year={2025},
volume={},
number={},
pages={1-6},
keywords={Adaptation models;Large language models;Computational modeling;Transfer learning;LoRa;Reinforcement learning;Benchmark testing;Question answering (information retrieval);Multilingual;Synthetic data;Natural Language Processing;Question Answering;Large Language Models;Llama Model;Fine-Tuning;Bengali Dataset},
doi={10.1109/ECCE64574.2025.11013841}}
```
## Contact
For questions or suggestions, email: [[email protected]](mailto:[email protected]) |