> A newer version of this model is available: [sthenno-com/miscii-14b-1225](https://huggingface.co/sthenno-com/miscii-14b-1225)
# miscii-14b-1028
## Role-based Instructions

Use the following as your system prompt. Note that there are **no** special tokens here: the `<|...|>` markers are plain text.
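You can confirm this yourself: since the markers are not registered as special tokens, the tokenizer splits them into ordinary subword ids. A minimal sketch, assuming the `transformers` library is installed:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("sthenno-com/miscii-14b-1028")

# Encode one of the markers without adding BOS/EOS tokens.
ids = tok.encode("<|persona_start|>", add_special_tokens=False)
print(ids)                             # several ordinary token ids, not one reserved id
print(tok.convert_ids_to_tokens(ids))  # the marker split into plain subword pieces
```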
An example system prompt:
```python
system_prompt: str = (
    """<|context_start|>personas<|context_sep|>
<|persona_start|>user<|persona_sep|>
{user_persona}<|persona_end|>
<|persona_start|>assistant<|persona_sep|>
{assistant_persona}<|persona_end|><|context_end|>""".format(
        user_persona="""I am Miscii.
I am the designer of Sthenno.
[Optional: Additional statements]""",
        assistant_persona="""I am Sthenno.
I speak in Chinese.
[Optional: Additional statements]""",
    )
)
```
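At inference time, the formatted string goes into the regular `system` role of a chat request. A minimal sketch against an OpenAI-compatible endpoint; the server URL and client setup here are illustrative assumptions, not part of this card:

```python
from openai import OpenAI

# Hypothetical local OpenAI-compatible server (e.g. vLLM); adjust to your setup.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="sthenno-com/miscii-14b-1028",
    messages=[
        {"role": "system", "content": system_prompt},  # built as shown above
        {"role": "user", "content": "你好，Sthenno。"},
    ],
)
print(response.choices[0].message.content)
```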
## Training

See the report for miscii-1020 for more details.
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Type | Value |
|---|---|---|
| Avg. | | 35.05 |
| IFEval (0-shot) | strict accuracy | 82.37 |
| BBH (3-shot) | normalized accuracy | 49.26 |
| MATH Lvl 5 (4-shot) | exact match | 6.34 |
| GPQA (0-shot) | acc_norm | 14.21 |
| MuSR (0-shot) | acc_norm | 12.00 |
| MMLU-PRO (5-shot) | accuracy | 46.14 |
## Model tree for sthenno-com/miscii-14b-1028

- Base model: Qwen/Qwen2.5-14B
- Finetuned: Qwen/Qwen2.5-14B-Instruct