LIMO-Qwen3-8B-Math-Full-Precision

A full-precision (bfloat16) merged model trained with the LIMO methodology.

Model Details

  • Size: ~16GB (full precision)
  • Base: Qwen/Qwen3-8B
  • Training: LIMO dataset (817 samples)
  • Method: LoRA → full merge (sketched below)
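
The merge step can be reproduced with the PEFT library. This is a minimal sketch, assuming the adapter was trained as a standard PEFT LoRA; the adapter path is a hypothetical placeholder, not a published repo:

import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the bfloat16 base model, then attach the LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-8B", torch_dtype=torch.bfloat16, device_map="auto"
)
# "path/to/limo-lora-adapter" is a placeholder for the trained adapter.
model = PeftModel.from_pretrained(base, "path/to/limo-lora-adapter")

# Fold the LoRA deltas into the base weights and drop the adapter wrappers,
# leaving a plain full-precision model that can be saved and shared.
merged = model.merge_and_unload()
merged.save_pretrained("limo-qwen3-8b-math-full-precision")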

Usage

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged weights in bfloat16; device_map="auto" places layers
# across available devices (requires the accelerate package).
model = AutoModelForCausalLM.from_pretrained(
    "Cbgcbg/limo-qwen3-8b-math-full-precision",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Cbgcbg/limo-qwen3-8b-math-full-precision")
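
Once loaded, the model can be queried like any Qwen3 checkpoint. A minimal generation example, assuming the merged model inherits the base model's chat template:

# Build a chat-formatted prompt and generate a reply.
messages = [{"role": "user", "content": "What is 17 * 24?"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))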

Comparison

  • Original Gasing: 15.26 GB ✅
  • Previous LIMO: 5.55 GB ❌ (quantized)
  • This model: ~16 GB ✅ (full precision)
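
The ~16 GB figure is consistent with the reported parameter count: 8.19B parameters × 2 bytes per bfloat16 weight ≈ 16.4 GB, or 15.26 GiB, matching the sizes above. A quick check:

params = 8.19e9        # reported parameter count for this model
bytes_per_param = 2    # bfloat16 stores each weight in 2 bytes
size_gib = params * bytes_per_param / 1024**3
print(f"{size_gib:.2f} GiB")  # -> 15.26 GiB (~16.4 GB decimal)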

Model Lineage

  • Qwen/Qwen3-8B-Base → Qwen/Qwen3-8B → this model