---
license: mit
library_name: transformers
datasets:
  - AI-MO/NuminaMath-CoT
  - KbsdJames/Omni-MATH
  - RUC-AIBOX/STILL-3-Preview-RL-Data
  - hendrycks/competition_math
language:
  - en
base_model: agentica-org/DeepScaleR-1.5B-Preview
tags:
  - mlx
---

# parole-study-viper/DeepScaleR-1.5B-Preview-Q8-mlx

The model [parole-study-viper/DeepScaleR-1.5B-Preview-Q8-mlx](https://huggingface.co/parole-study-viper/DeepScaleR-1.5B-Preview-Q8-mlx) was converted to MLX format from [agentica-org/DeepScaleR-1.5B-Preview](https://huggingface.co/agentica-org/DeepScaleR-1.5B-Preview) using mlx-lm version **0.20.5**.
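
For reference, a conversion of this kind can be reproduced with mlx-lm's conversion utility. The sketch below assumes the Python `convert` API with 8-bit quantization (as the "Q8" in the repo name suggests); the exact options used for this upload are not recorded, and the output path is only illustrative.

```python
# Minimal sketch: re-creating an 8-bit MLX conversion with mlx-lm.
# The exact options used for this repo are an assumption, not a record of the upload.
from mlx_lm import convert

convert(
    "agentica-org/DeepScaleR-1.5B-Preview",      # source model on the Hugging Face Hub
    mlx_path="DeepScaleR-1.5B-Preview-Q8-mlx",   # local output directory (illustrative)
    quantize=True,
    q_bits=8,                                    # 8-bit weights, matching the "Q8" suffix
)
```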

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download the model and tokenizer from the Hugging Face Hub
model, tokenizer = load("parole-study-viper/DeepScaleR-1.5B-Preview-Q8-mlx")

prompt = "hello"

# Wrap the prompt in the model's chat template, if one is defined
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
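
DeepScaleR produces long chain-of-thought answers, so the default generation length may cut responses short. `max_tokens` is the standard `generate` argument for raising that limit; the budget below is only an illustrative choice.

```python
# Allow a longer completion; 8192 is an illustrative token budget for long reasoning traces.
response = generate(model, tokenizer, prompt=prompt, max_tokens=8192, verbose=True)
```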

## Citation

```bibtex
@misc{deepscaler2025,
  title={DeepScaleR: Surpassing O1-Preview with a 1.5B Model by Scaling RL},
  author={Michael Luo and Sijun Tan and Justin Wong and Xiaoxiang Shi and William Tang and Manan Roongta and Colin Cai and Jeffrey Luo and Tianjun Zhang and Erran Li and Raluca Ada Popa and Ion Stoica},
  year={2025},
  howpublished={\url{https://pretty-radio-b75.notion.site/DeepScaleR-Surpassing-O1-Preview-with-a-1-5B-Model-by-Scaling-RL-19681902c1468005bed8ca303013a4e2}},
  note={Notion Blog}
}
```