Introduction
We introduce luxia-21.4b-alignment-v1.0, an instruction-tuned and aligned model based on luxia-21.4b. Please refer to the evaluation results table below for details.
Instruction Fine-tuning Strategy
We use state-of-the-art instruction fine-tuning methods, including supervised fine-tuning (SFT) and direct preference optimization (DPO).
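For readers unfamiliar with DPO, the sketch below shows the per-pair DPO loss written in plain PyTorch. It is illustrative only and is not the training code used for this model; the function name and the beta value are assumptions.

import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Inputs are summed log-probabilities of the chosen / rejected responses
    # under the policy being trained and under the frozen reference model.
    policy_logratio = policy_chosen_logps - policy_rejected_logps
    ref_logratio = ref_chosen_logps - ref_rejected_logps
    # The policy is rewarded for preferring the chosen response more strongly
    # than the reference model does; beta scales the implicit reward.
    return -F.logsigmoid(beta * (policy_logratio - ref_logratio)).mean()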
Data Contamination Test Results
Results will be updated soon.
Evaluation Results
Results will be updated soon.
Usage Instructions
How to use
# pip install transformers==4.35.2
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model; device_map="auto" places the weights on the available GPU(s).
tokenizer = AutoTokenizer.from_pretrained("saltlux/luxia-21.4b-alignment-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "saltlux/luxia-21.4b-alignment-v1.0",
    device_map="auto",
    torch_dtype=torch.float16,
)
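Once the model is loaded, text can be generated with the standard transformers generate API. The prompt and sampling settings below are assumptions for illustration, not an official prompt format.

# Illustrative generation example; prompt and sampling settings are assumptions.
prompt = "Explain direct preference optimization in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))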
License
- saltlux/luxia-21.4b-alignment-v1.0: apache-2.0
Contact Us
Questions and suggestions are welcome on the Discussion tab.