# Model Card for LRC-1.7B-SFT
LRC-1.7B-SFT is a Small Language Model (SLM) with approximately 1.7 billion parameters. It is the Supervised Fine-Tuned (SFT) version of LRC-1.7B-Base, which was constructed from its teacher, Qwen2.5-3B-Instruct, using 20 billion tokens with the Low-Rank Clone (LRC) method, an efficient knowledge-distillation technique. The SFT version was then fine-tuned on the instruction-following dataset ultrachat_200k.
The LRC approach trains a set of low-rank projection matrices that enable soft pruning by compressing teacher weights, together with an "activation clone" mechanism that aligns student activations (including FFN signals) with those of the teacher.
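As a concrete illustration, here is a minimal PyTorch sketch of these two ingredients. This is not the authors' implementation; the dimensions, class names, and the use of plain MSE for the alignment loss are all assumptions made for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch only, not the authors' code. It shows (1) a student
# weight produced by trainable low-rank projections of a frozen teacher
# weight ("soft pruning"), and (2) an activation-clone loss that pulls
# student activations toward projected teacher activations.

d_t, d_s = 2048, 1200  # hypothetical teacher / student hidden sizes

class LowRankClonedLinear(nn.Module):
    """Student linear layer whose weight is a learned compression of the teacher's."""
    def __init__(self, teacher_weight: torch.Tensor, d_s: int):
        super().__init__()
        d_out, d_in = teacher_weight.shape
        self.register_buffer("W_t", teacher_weight)  # frozen teacher weight
        # Trainable low-rank projection matrices (the "soft pruning")
        self.P_out = nn.Parameter(torch.randn(d_s, d_out) * d_out**-0.5)
        self.P_in = nn.Parameter(torch.randn(d_in, d_s) * d_in**-0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        W_s = self.P_out @ self.W_t @ self.P_in  # (d_s, d_s) student weight
        return x @ W_s.T

def activation_clone_loss(student_act, teacher_act, P_align):
    """Align student activations (e.g. FFN outputs) with projected teacher ones."""
    return F.mse_loss(student_act, teacher_act @ P_align)

# Toy usage with random tensors
layer = LowRankClonedLinear(torch.randn(d_t, d_t), d_s)
y = layer(torch.randn(4, d_s))                           # (4, d_s)
loss = activation_clone_loss(y, torch.randn(4, d_t), torch.randn(d_t, d_s))
```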
## Uses

### Direct Use
LRC-1.7B-SFT is an instruction-tuned model and is intended for tasks requiring instruction following, question answering, and general chat capabilities.
## Biases, Risks, and Limitations
- SFT Dataset Limitations: Our SFT model (LRC-1.7B-SFT) was fine-tuned solely on the UltraChat dataset. While UltraChat enhances general instruction-following, it may not be sufficiently diverse or targeted to instill robust safety alignment or complex instruction adherence compared to models trained with more extensive or specialized alignment techniques (e.g., RLHF, or SFT on broader safety/instruction datasets). Consequently, the model might exhibit deficiencies in safety and its ability to follow highly complex or nuanced instructions.
- Inherited Biases: The model may reflect biases present in its pre-training data (Fineweb-Edu, OpenHermes 2.5) and the teacher model (Qwen2.5-3B-Instruct).
- Hallucination: Like all LLMs, LRC-1.7B-SFT can generate factually incorrect or nonsensical information (hallucinations).
- Limited Scope of Evaluation: The paper's primary evaluation focuses on pre-training efficiency and general downstream tasks. Extensive testing on safety benchmarks or complex reasoning tasks beyond the reported MMLU, ARC, etc., was not detailed.
## How to Get Started with the Model
❗ Critical: For vLLM serving, specify `--model-impl transformers` when using Qwen-series models. In the current implementation of vLLM, the Qwen model does not support setting a custom `head_dim` through the config; fortunately, vLLM allows using transformers as the backend. Tested versions that serve properly: `vllm==0.8.5.post1` and `transformers==4.51.3`.
Serve command:

```bash
vllm serve JitaiHao/LRC-1.7B-SFT --model-impl transformers
```
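Once the server is running, it can be queried through vLLM's OpenAI-compatible endpoint. A minimal sketch, assuming the default local address (`http://localhost:8000/v1`) and the `openai` Python client:

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server started above.
# "EMPTY" is a placeholder API key accepted by a default vLLM deployment.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="JitaiHao/LRC-1.7B-SFT",
    messages=[{"role": "user", "content": "Hello, who are you?"}],
)
print(response.choices[0].message.content)
```

Alternatively, load the model directly with transformers: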
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("JitaiHao/LRC-1.7B-SFT")
model = AutoModelForCausalLM.from_pretrained("JitaiHao/LRC-1.7B-SFT")

# Prepare a multi-turn chat history
messages = [
    {"role": "user", "content": "Hello, who are you?"},
    {"role": "assistant", "content": "Hello, I am an AI assistant."},
]

# Use apply_chat_template to create a prompt for the model
input_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,              # Only generate the string prompt, do not tokenize yet
    add_generation_prompt=True,  # Add a generation prompt for the assistant
)
print(input_text)  # View the generated prompt string

# Generate a response with the model
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Details

### Training Data

- Pre-training (for LRC-1.7B-Base): 20 billion tokens from the "Mixed-1.1" dataset (20B Fineweb-Edu, 450M OpenHermes 2.5; see Tables 8 and 10, "Mixed-1.1-Qwen" composition).
- Supervised Fine-Tuning (SFT): 0.2 billion tokens from the UltraChat dataset.
### Training Procedure

- Pre-training (LRC-1.7B-Base): Trained using the Low-Rank Clone (LRC) method.
  - Teacher Model: Qwen2.5-3B-Instruct
- Supervised Fine-Tuning (SFT):
  - Dataset: UltraChat (0.2B tokens)
  - Learning Rate (SFT): 1.0 × 10⁻⁵ (a minimal sketch follows below)
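For reference, here is a minimal sketch of a single SFT step under these settings. Only the learning rate and the choice of UltraChat come from this card; the batching, loss masking, and schedule below are simplifying assumptions, not the authors' pipeline.

```python
import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative single training step; assumes the base tokenizer ships a chat
# template (otherwise the turns must be formatted manually).
tokenizer = AutoTokenizer.from_pretrained("JitaiHao/LRC-1.7B-Base")
model = AutoModelForCausalLM.from_pretrained("JitaiHao/LRC-1.7B-Base")
optimizer = AdamW(model.parameters(), lr=1e-5)  # learning rate from this card

# One UltraChat-style example; a real run iterates over the full dataset.
messages = [
    {"role": "user", "content": "Explain overfitting in one sentence."},
    {"role": "assistant", "content": "Overfitting is when a model memorizes "
     "its training data instead of learning patterns that generalize."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False)
batch = tokenizer(text, return_tensors="pt")

# Plain causal-LM loss over the whole sequence; production SFT setups often
# mask prompt tokens so the loss covers only the assistant turns.
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```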
## Evaluation

Zero-shot comparison with other publicly available SFT models under 2B parameters (from Table 1 of the paper):
Model | # Tokens | ARC-E | ARC-C | LogiQA | CSQA | PIQA | WinoG | BoolQ | SciQ | MMLU | Avg. |
---|---|---|---|---|---|---|---|---|---|---|---|
InternLM2-1.8B | 2T | 71.04 | 42.06 | 28.42 | 70.11 | 74.27 | 63.77 | 75.50 | 94.50 | 43.75 | 62.60 |
LRC-1.7B-SFT | 20B | 74.62 | 44.20 | 30.88 | 70.19 | 73.07 | 63.30 | 79.82 | 93.80 | 54.93 | 64.98 |
Qwen3-1.7B | 36T | 72.47 | 43.00 | 28.42 | 64.78 | 72.20 | 61.48 | 77.65 | 93.10 | 55.44 | 63.17 |
SmolLM2-1.7B | 11T | 69.11 | 43.52 | 28.88 | 51.19 | 76.01 | 68.98 | 68.47 | 89.80 | 48.50 | 60.50 |
LRC-1.5B-SFT | 10B | 74.75 | 44.97 | 30.72 | 65.77 | 73.07 | 62.25 | 75.78 | 94.60 | 49.42 | 63.48 |
MiniCPM-1.2B | 1T | 70.16 | 39.68 | 30.88 | 64.29 | 74.65 | 60.77 | 67.58 | 91.50 | 44.23 | 60.42 |
Performance on safety and instruction-following tasks (from Table 14 of the paper):
Benchmark | Metric | Score (LRC-1.7B-SFT) | Score (LRC-1.7B-Base) |
---|---|---|---|
ToxiGen | Accuracy Norm | 43.30 | 43.30 |
IFEval | Instance-Level Loose Acc | 39.69 | 36.69 |
TruthfulQA | MC2 | 47.95 | 53.17 |
The gain on IFEval after SFT and the decrease on TruthfulQA relative to the base model may reflect the characteristics of the UltraChat SFT data.
## Technical Specifications

### Model Architecture and Objective

- Architecture: Transformer-based decoder-only model, adhering to a Llama-like architecture (as implied by the paper's general description of LRC models); a config sketch follows the list below.
- Number of Layers: 36
- Hidden Size: 1,200
- FFN Intermediate Size: 11,008
- Attention Q Heads: 16
- Attention KV Heads: 2
- Head Dimension: 128
- Vocabulary Size: 151,936
- Word Embeddings: Tied
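These hyperparameters can be expressed as a transformers config. A sketch, assuming the checkpoint uses the Qwen2 architecture class (consistent with the Qwen-series serving note above) and a transformers version that honors a custom `head_dim`; the released checkpoint's own `config.json` is authoritative.

```python
from transformers import Qwen2Config

# Hypothetical reconstruction of the architecture from the list above.
# Note head_dim (128) != hidden_size / num_attention_heads (1200 / 16 = 75),
# which is why vLLM's native Qwen implementation cannot serve this model
# and the transformers backend is required.
config = Qwen2Config(
    num_hidden_layers=36,
    hidden_size=1200,
    intermediate_size=11008,
    num_attention_heads=16,
    num_key_value_heads=2,
    head_dim=128,
    vocab_size=151936,
    tie_word_embeddings=True,
)
print(config)
```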