# Model Card for 522H0134-NguyenNhatHuy/Sailor-1.8B-Chat-SFT

This model is a supervised fine-tuned version of sail/Sailor-1.8B-Chat, trained with LoRA adapters via PEFT and tailored for Vietnamese open-domain conversation.
## Model Details

### Model Description

- Model type: Causal language model (chat-style)
- Language(s): Vietnamese
- License: Apache 2.0
- Fine-tuned from: sail/Sailor-1.8B-Chat
This model is designed for natural and fluent Vietnamese conversation, with improved instruction-following capabilities through supervised fine-tuning.
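
For context, the sketch below shows how a LoRA adapter is typically attached to a Qwen-family base model such as Sailor with PEFT. The rank, alpha, dropout, and target modules are illustrative assumptions; the card does not publish the actual training recipe for this checkpoint.

```python
# Illustrative LoRA setup only; the hyperparameters below are assumptions,
# not the recipe used to train this checkpoint.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("sail/Sailor-1.8B-Chat")

lora_cfg = LoraConfig(
    r=16,                # adapter rank (assumed)
    lora_alpha=32,       # scaling factor (assumed)
    lora_dropout=0.05,   # (assumed)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # Qwen-style attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the small LoRA matrices are updated during SFT
```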
## Uses

### Direct Use

- Open-domain chat in Vietnamese
- Educational and general-knowledge Q&A
- Vietnamese instruction-following tasks
### Out-of-Scope Use

- Use in high-risk applications (e.g., medical, legal, or financial advice)
- English-centric tasks (this model is tuned for Vietnamese)
## Bias, Risks, and Limitations

This model may reflect biases present in its training data and base model. Outputs may be inaccurate or inappropriate in some contexts.
### Recommendations

Always review model outputs before deployment in user-facing applications. Do not rely on the model for critical decisions.
## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("sail/Sailor-1.8B-Chat")
base_model = AutoModelForCausalLM.from_pretrained("sail/Sailor-1.8B-Chat")

# Attach the fine-tuned LoRA adapter
model = PeftModel.from_pretrained(base_model, "522H0134-NguyenNhatHuy/Sailor-1.8B-Chat-SFT")

# Generate a reply to a Vietnamese prompt ("Hello, how are you?")
inputs = tokenizer("Xin chào, bạn khỏe không?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
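
Because the base model is chat-tuned, prompts can also be built with the tokenizer's chat template. The snippet below continues from the code above and is a sketch that assumes the sail/Sailor-1.8B-Chat tokenizer ships a Qwen-style chat template; the messages are illustrative.

```python
# Sketch: chat-template prompting (assumes the base tokenizer defines a chat template).
messages = [
    {"role": "system", "content": "Bạn là một trợ lý hữu ích."},         # "You are a helpful assistant."
    {"role": "user", "content": "Hãy giới thiệu ngắn gọn về Việt Nam."},  # "Briefly introduce Vietnam."
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

For deployment, the LoRA weights can also be folded into the base model with `model.merge_and_unload()`, which returns a plain transformers model and avoids the adapter overhead at inference time.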