Model: Fizik-0.6B-Pro

Note: In rare cases, the model may use a different <think> tag format. This does not affect performance or output quality. We're aware of the issue and are working on a fix.

Description

Fizik-0.6B-Pro is a refined reasoning model trained on the Fizik-SFT-Reasoning dataset: 11,000 examples of structured, step-by-step thinking. Every sample is tagged with <think>...</think>, and all non-reasoning content was removed.

This model was built to fix the core issue in the Fizik Preview version: inconsistent reasoning behavior. Now, reasoning is always active when prompted, with no ambiguity.
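Because completions always wrap their chain of thought in <think>...</think>, downstream code typically splits the reasoning from the final answer before showing it to users. A minimal sketch of that split (the `split_reasoning` helper and the sample completion are illustrative, not part of the model's tooling):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split a completion into (reasoning, answer).

    Assumes a single <think>...</think> block, as in the
    Fizik-SFT-Reasoning training data.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        # No reasoning block found (see the note above on rare tag variants).
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

completion = "<think>2 + 2 = 4</think>The answer is 4."
print(split_reasoning(completion))  # ('2 + 2 = 4', 'The answer is 4.')
```

If you surface only the answer in production, keep the reasoning text for logging; it is the part this model was trained to make reliable.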


Behavior

  • Always reasons when prompted
    The model consistently follows the <think> structure without skipping steps.

  • No fallback to non-reasoning answers
    Reasoning is treated as the default behavior.

  • Performs well on multi-step tasks
    Especially in areas like math, logic, and multi-hop QA.
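The "always reasons" behavior can be checked mechanically over a batch of completions by counting how many contain a tagged reasoning block. A small sketch, assuming completions are plain strings (the sample list below is illustrative):

```python
def reasoning_rate(completions: list[str]) -> float:
    """Fraction of completions containing a <think>...</think> block."""
    if not completions:
        return 0.0
    tagged = sum(1 for c in completions if "<think>" in c and "</think>" in c)
    return tagged / len(completions)

samples = [
    "<think>step 1; step 2</think>Answer: 42",
    "Answer without any reasoning.",
]
print(reasoning_rate(samples))  # 0.5
```

For this model the rate should be at or near 1.0 on reasoning prompts; a lower value suggests prompts outside its intended use (see Limitations below).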


Intended Use

  • Tasks that require explicit reasoning
  • Safe deployment where reliable logic is needed
  • Research on controlled thought generation

Limitations

  • Will not respond naturally to prompts that expect short or intuitive answers.
  • Use Fizik-0.6B-Full if you need toggleable reasoning behavior.


Model tree for NewstaR/Fizik-0.6B-Pro
