# Model Card for forerunner-qwen32b-simpo-awq

## Model Details

### Model Description
This model is a fine-tuned version of Qwen/Qwen2.5-32B-Instruct. Training was conducted in two stages:

- Supervised Fine-Tuning (SFT) using the Unsloth library on a custom dataset.
- Reinforcement Learning (RL) using the SimPO algorithm (via LLaMA-Factory) on a preference dataset.

The model focuses on philosophical and esoteric topics and is multilingual. The final version is quantized using AWQ.
- Developed by: radm
- Finetuned from model: Qwen/Qwen2.5-32B-Instruct
- Model type: Causal LM based on the Qwen2.5 architecture
- Language(s): Multilingual
- License: Apache 2.0 (inherited from the base model)
## Uses

Example output (with `temperature=0.6`, `repetition_penalty=1.15`):

User: As a result of what mental process does duality appear in a person?

Model: Duality, or the concept of two-polarity, is born from the mind's ability to categorize and contrast things. The mind exercises this by distinguishing between 'mine' and 'not mine,' which leads to hierarchical thinking and the creation of concepts like 'good and evil,' 'truth and falsehood.' This dualistic nature of thought becomes ingrained over time as we encounter different experiences that further reinforce these distinctions.
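
A minimal inference sketch with `transformers`, assuming the quantized weights are published under the repo id `radm/forerunner-qwen32b-simpo-awq` (an assumption; adjust to the actual path). Loading an AWQ checkpoint requires `autoawq` to be installed; the sampling parameters mirror the example above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; replace with the actual model path if it differs.
model_id = "radm/forerunner-qwen32b-simpo-awq"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "user", "content": "As a result of what mental process does duality appear in a person?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings from the example above.
outputs = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.6,
    repetition_penalty=1.15,
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```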
### Out-of-Scope Use
The model is not designed for generating harmful, unethical, biased, or factually incorrect content. Performance on tasks outside its training domain (philosophical/esoteric chat) may be suboptimal.
## Bias, Risks, and Limitations
The model inherits biases from its base model (Qwen/Qwen2.5-32B-Instruct) and the fine-tuning datasets. It may generate plausible-sounding but incorrect or nonsensical information, especially on complex topics. Its "understanding" is based on patterns in the data, not genuine comprehension or consciousness. Use the outputs with critical judgment.
## Training Details

### Training Data
The model was fine-tuned in two stages:
- SFT: a custom dataset.
- SimPO RL: a preference dataset containing pairs of preferred and rejected responses for given prompts, focused on philosophical and esoteric themes.
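
For illustration only, a preference record of this kind pairs a prompt with a chosen and a rejected completion. The field names below are hypothetical; the card does not publish the dataset schema.

```python
# Hypothetical schema for one preference pair (field names and contents
# are assumptions, not taken from the actual dataset).
preference_example = {
    "prompt": "As a result of what mental process does duality appear in a person?",
    "chosen": "Duality arises from the mind's habit of categorizing and contrasting experience ...",
    "rejected": "Duality is simply a physical property of the brain's two hemispheres ...",
}
```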
### Training Procedure

#### Stage 1: Supervised Fine-Tuning (SFT)
Training was performed using the Unsloth library integrated with `trl`'s `SFTTrainer`; a configuration sketch follows the hyperparameter list below.
- Framework: Unsloth + SFTTrainer
- Base Model: Qwen/Qwen2.5-32B-Instruct
- LoRA Configuration:
  - `r`: 512
  - `lora_alpha`: 512
  - `lora_dropout`: 0.0
  - `bias`: "none"
  - `target_modules`: ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
  - `use_rslora`: True
- Precision: Auto (bfloat16 / float16)
- Quantization (load): 4-bit
- Optimizer: Paged AdamW 8-bit
- Learning Rate: 8e-5
- LR Scheduler: Cosine
- Warmup Steps: 10
- Batch Size (per device): 1
- Gradient Accumulation Steps: 128 (Effective Batch Size: 128)
- Max Sequence Length: 8192
- Epochs: 1
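
A hedged sketch of this SFT setup with Unsloth and `trl`'s `SFTTrainer`, using the hyperparameters listed above. The dataset path and text field are placeholders, the exact training script is not published with this card, and argument names may vary slightly with your `trl` version.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model in 4-bit, as described above.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2.5-32B-Instruct",
    max_seq_length=8192,
    load_in_4bit=True,
    dtype=None,  # auto-select bfloat16 / float16
)

# LoRA configuration from the list above.
model = FastLanguageModel.get_peft_model(
    model,
    r=512,
    lora_alpha=512,
    lora_dropout=0.0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_rslora=True,
)

# Placeholder dataset path and text field.
dataset = load_dataset("json", data_files="sft_dataset.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",           # placeholder field name
    max_seq_length=8192,
    args=TrainingArguments(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=128,  # effective batch size 128
        learning_rate=8e-5,
        lr_scheduler_type="cosine",
        warmup_steps=10,
        num_train_epochs=1,
        optim="paged_adamw_8bit",
        bf16=True,                        # or fp16=True depending on hardware
        logging_steps=1,
        output_dir="outputs-sft",
    ),
)
trainer.train()
```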
#### Stage 2: Reinforcement Learning (SimPO)
RL fine-tuning was performed using LLaMA-Factory and the SimPO algorithm; a configuration sketch follows the hyperparameter list below.
- Framework: LLaMA-Factory + SimPO
- Base Model: Result of the SFT stage (Qwen/Qwen2.5-32B-Instruct-sft)
- LoRA Configuration:
  - `r`: 256
  - `lora_alpha`: 256
  - `lora_dropout`: 0.0
  - `lora_target`: all
  - `use_dora`: True
  - `use_rslora`: True
- Precision: bfloat16
- Quantization (load): 4-bit
- Optimizer: AdamW (with `weight_decay: 0.01`)
- Learning Rate: 7e-7
- LR Scheduler: Cosine
- Warmup Steps: 16
- Batch Size (per device): 1
- Gradient Accumulation Steps: 64 (Effective Batch Size: 64)
- Max Sequence Length: 6600
- Epochs: 1.0
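
A hedged sketch of an equivalent LLaMA-Factory configuration, built from Python and written to YAML for `llamafactory-cli train`. Dataset and output names are placeholders, and the keys marked as assumed reflect how LLaMA-Factory typically exposes SimPO (via its preference-optimization stage), not the exact file used for this model.

```python
import yaml

# Values mirror the hyperparameters listed above; dataset/output names are placeholders.
simpo_config = {
    # Model: the SFT checkpoint from stage 1, loaded in 4-bit.
    "model_name_or_path": "Qwen/Qwen2.5-32B-Instruct-sft",
    "quantization_bit": 4,

    # Method: LLaMA-Factory runs SimPO through its preference-optimization stage.
    "stage": "dpo",          # assumed key: preference training stage
    "pref_loss": "simpo",    # assumed key: selects the SimPO loss
    "do_train": True,
    "finetuning_type": "lora",
    "lora_rank": 256,
    "lora_alpha": 256,
    "lora_dropout": 0.0,
    "lora_target": "all",
    "use_dora": True,
    "use_rslora": True,

    # Data: placeholder dataset name registered in dataset_info.json.
    "dataset": "philosophy_preferences",
    "cutoff_len": 6600,

    # Optimization.
    "per_device_train_batch_size": 1,
    "gradient_accumulation_steps": 64,   # effective batch size 64
    "learning_rate": 7e-7,
    "lr_scheduler_type": "cosine",
    "warmup_steps": 16,
    "weight_decay": 0.01,
    "num_train_epochs": 1.0,
    "bf16": True,

    "output_dir": "outputs-simpo",       # placeholder
}

with open("simpo_qwen32b.yaml", "w") as f:
    yaml.safe_dump(simpo_config, f, sort_keys=False)

# Then train with: llamafactory-cli train simpo_qwen32b.yaml
```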
#### Stage 3: AWQ Quantization
After training, the model was quantized with AWQ to reduce its size and improve inference efficiency.
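
A minimal AutoAWQ sketch, assuming a standard 4-bit GEMM configuration; the exact quantization settings, merged-checkpoint path, and calibration data used for this model are not stated in the card.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

merged_model_path = "forerunner-qwen32b-simpo"   # placeholder: merged SFT+SimPO weights
quant_path = "forerunner-qwen32b-simpo-awq"

# Assumed settings: standard 4-bit AWQ with group size 128.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(merged_model_path)
tokenizer = AutoTokenizer.from_pretrained(merged_model_path)

# Quantize with AutoAWQ's default calibration set, then save the quantized weights.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```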