---
library_name: transformers
license: llama3.2
language:
- en
base_model:
- meta-llama/Llama-3.2-1B-Instruct
pipeline_tag: text-generation
tags:
- code
- math
- cot
- conversational
- moe
- Superthoughts
---
# ⚠️ THIS MODEL IS EXPERIMENTAL!
More than two months after the release of Superthoughts lite v1, we are finally releasing the new version: v2.
Unlike the first generation of Superthoughts lite, this model is a MoE (Mixture of Experts) built from four specially fine-tuned experts based on Llama-3.2-1B models.
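For intuition, here is a minimal sketch of top-2 routing over 4 experts, matching the "2 experts active per token" setup described under Information below. This is an illustrative toy layer, not the model's actual code, and the layer shapes are placeholder assumptions.

```python
# Illustrative top-2 mixture-of-experts layer (NOT the model's real implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoE(nn.Module):
    def __init__(self, dim=64, num_experts=4, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)   # one routing score per expert per token
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.top_k = top_k

    def forward(self, x):  # x: (num_tokens, dim)
        scores, idx = self.router(x).topk(self.top_k, dim=-1)  # choose 2 experts per token
        weights = F.softmax(scores, dim=-1)                    # normalize over the chosen 2
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                       # tokens whose slot went to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Quick check: route 8 random "tokens" through the layer.
moe = Top2MoE()
print(moe(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```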
## Information
- In GGUF Q8_0, the model runs at ~8 tokens per second on a Snapdragon 8 Gen 2 with 12 GB of RAM, which is faster than Pinkstack/PARM-V2-QwQ-Qwen-2.5-o1-3B-GGUF (which runs at ~5 tokens per second); see the llama-cpp-python sketch at the end of this card.
- The chat expert was fine-tuned on 23 different languages for 2 epochs, but the model should still only be used for English-to-English generation.
- This model has a total of 3.91B parameters, with 2 experts active for each token. There is an expert each for math, code, general conversation, and medical situations.
- To enable proper reasoning, set this as the system prompt:
You are Superthoughts lite v2 by Pinkstack, which thinks before answering user questions. always respond in the following format:\n<think>\n(Your thinking process)\n</think>\n(Your final output).
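As a usage illustration, here is a minimal transformers sketch that sets the system prompt above. The repo id in `model_id` is a placeholder assumption; substitute this model's actual Hugging Face id.

```python
# Minimal sketch of chatting with the model via transformers.
# NOTE: the repo id below is a placeholder assumption, not a confirmed model id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Pinkstack/Superthoughts-lite-v2"  # hypothetical id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

system_prompt = (
    "You are Superthoughts lite v2 by Pinkstack, which thinks before answering user "
    "questions. always respond in the following format:\n<think>\n(Your thinking process)"
    "\n</think>\n(Your final output)."
)
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "What is 17 * 24?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```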
⚠️ Because this is an experimental model, it may fall into reasoning loops; users are responsible for all outputs from this model.
This experimental model is more of a proof of concept. It is fully functional and delivers solid performance given that fewer than 2 billion parameters are active per token.
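Since the throughput numbers above were measured on a Q8_0 GGUF build, here is a hedged sketch of loading such a build with llama-cpp-python. The `.gguf` file name is hypothetical; point `model_path` at an actual downloaded file.

```python
# Hedged sketch: running a Q8_0 GGUF build with llama-cpp-python.
# The gguf file name below is hypothetical; use your actual download path.
from llama_cpp import Llama

llm = Llama(model_path="superthoughts-lite-v2.Q8_0.gguf", n_ctx=4096)
system_prompt = (
    "You are Superthoughts lite v2 by Pinkstack, which thinks before answering user "
    "questions. always respond in the following format:\n<think>\n(Your thinking process)"
    "\n</think>\n(Your final output)."
)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Explain why the sky is blue."},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```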