LIMO-Qwen3-8B-Math-Full-Precision
A full-precision (bfloat16) merged model trained with the LIMO methodology.
Model Details
- Size: ~16 GB (full precision)
- Base: Qwen/Qwen3-8B
- Training: LIMO dataset (817 samples)
- Method: LoRA → full-weight merge (see the sketch below)
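
The merge step can be reproduced with PEFT's `merge_and_unload`, which folds the LoRA deltas into the base weights and leaves a standalone bfloat16 checkpoint. This is a minimal sketch, not the card's published training code: the adapter path and output directory are hypothetical, since the adapter location is not stated here.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model in bfloat16.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-8B",
    torch_dtype=torch.bfloat16,
)

# Attach the trained LoRA adapter (path is hypothetical), then fold its
# low-rank deltas into the base weights so no adapter is needed at inference.
model = PeftModel.from_pretrained(base, "path/to/limo-lora-adapter")
merged = model.merge_and_unload()

# Save the result as a single full-precision checkpoint (~16 GB).
merged.save_pretrained("limo-qwen3-8b-math-full-precision")
```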
Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Cbgcbg/limo-qwen3-8b-math-full-precision",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Cbgcbg/limo-qwen3-8b-math-full-precision")
```
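
Once loaded, the model can be prompted through the tokenizer's chat template. A quick generation check follows; the prompt and decoding settings are illustrative rather than values from this card, and it assumes the repo ships Qwen3's standard chat template.

```python
# Illustrative math prompt; any question works.
messages = [{"role": "user", "content": "What is 17 * 23? Show your reasoning."}]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)

# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```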
Comparison
- Original Gasing: 15.26 GB
- Previous LIMO: 5.55 GB (quantized)
- This model: ~16 GB (full precision; see the size check below)
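
The size gap follows directly from storage width: roughly 8.2B parameters at 2 bytes each (bfloat16) gives about 16 GB, while ~4-bit storage lands near the 5.55 GB figure once quantization scales and other overhead are included. A back-of-the-envelope check; the parameter count is an approximation, not read from the checkpoint.

```python
# Rough size estimate from parameter count and bytes per weight.
params = 8.2e9  # approximate Qwen3-8B parameter count (assumption)

print(f"bfloat16 (2 bytes/param):   {params * 2 / 1e9:.1f} GB")    # ~16.4 GB
print(f"4-bit    (0.5 bytes/param): {params * 0.5 / 1e9:.1f} GB")  # ~4.1 GB + overhead
```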