# Qwen2.5-7B Financial Thai
This model is a fine-tuned version of Qwen2.5-7B-Instruct for Thai financial question answering.
## Model Details
- Base Model: Qwen2.5-7B-Instruct
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Training Data: Thai financial Q&A dataset
- Languages: Thai, English
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load model and tokenizer
model_name = "Surasan/qwen2.5-7b-financial-thai-4bit-full"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

def generate_answer(question):
    # ChatML-style prompt; the Thai instruction asks the model to analyse the
    # financial question and answer with brief reasoning ("การคิด:" = "Thinking:").
    prompt = f'''<|im_start|>user
วิเคราะห์คำถามทางการเงินแล้วตอบพร้อมเหตุผลสั้นๆ
{question}<|im_end|>
<|im_start|>assistant
การคิด:'''
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=256,
            do_sample=False,  # greedy decoding for deterministic answers
            pad_token_id=tokenizer.eos_token_id,
        )
    # Decode only the newly generated tokens, skipping the prompt
    response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    return response.strip()

# Example usage: "Company A's stock trades at 100 baht with a P/E ratio of 15;
# Company B trades at 80 baht with a P/E ratio of 20. Which should you invest in?"
question = "หุ้นของบริษัท A มีราคา 100 บาท PE ratio 15 เท่า บริษัท B มีราคา 80 บาท PE ratio 20 เท่า ควรเลือกลงทุนบริษัทใด?"
answer = generate_answer(question)
print(answer)
```
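
The repository name suggests a 4-bit checkpoint. If you prefer to load the weights with explicit 4-bit quantization, the sketch below shows one way to do so with `bitsandbytes` via `BitsAndBytesConfig`; this is an illustration only (it assumes `bitsandbytes` is installed and a CUDA GPU is available), not a configuration confirmed by this card.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

# Minimal sketch: NF4 4-bit loading via bitsandbytes (assumption, not the
# author's documented setup); skip this if you load the checkpoint as-is.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model_name = "Surasan/qwen2.5-7b-financial-thai-4bit-full"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
```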
## Training Details
- Training Steps: 300
- Learning Rate: 2e-4
- Batch Size: 8 (with gradient accumulation)
- LoRA Rank: 64
- LoRA Alpha: 128
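
The training script itself is not included in this card. The sketch below only illustrates how the hyperparameters listed above could map onto a `peft` LoRA configuration and `transformers` training arguments; the target modules, dropout, output path, and batch-size/accumulation split are assumptions, not the author's actual settings.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# Hypothetical reconstruction of the LoRA setup from the listed hyperparameters.
lora_config = LoraConfig(
    r=64,                 # LoRA Rank (from Training Details)
    lora_alpha=128,       # LoRA Alpha (from Training Details)
    lora_dropout=0.05,    # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="qwen2.5-7b-financial-thai-lora",  # hypothetical path
    max_steps=300,                                # Training Steps
    learning_rate=2e-4,                           # Learning Rate
    per_device_train_batch_size=2,                # assumption: 2 x 4 accumulation = 8
    gradient_accumulation_steps=4,
    fp16=True,
)
```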
## Performance
The model is tuned for Thai financial question answering and is prompted to produce brief reasoning before its final answer.