---
license: apache-2.0
tags:
- unsloth
- trl
- grpo
- llama
language:
- en
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
library_name: transformers
datasets:
- microsoft/orca-math-word-problems-200k
---
## Model Details
A reasoning model from the Llama series, fine-tuned on [microsoft/orca-math-word-problems-200k](https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k) using GRPO (Group Relative Policy Optimization), a reinforcement learning technique.

**Base model:** [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)
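
A minimal sketch of loading the model with `transformers`. The repository ID below is a placeholder, not taken from this card; substitute this model's actual Hub ID:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with this repository's Hub ID.
model_id = "your-username/reasoning-llama-3.1-8b-grpo"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the checkpoint's native dtype
    device_map="auto",   # requires `accelerate` for automatic device placement
)
```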
## Training Hyperparameters
- learning_rate: 5e-6
- adam_beta1: 0.9
- adam_beta2: 0.99
- weight_decay: 0.1
- warmup_ratio: 0.1
- lr_scheduler_type: "cosine"
- optim: "paged_adamw_8bit"
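
A minimal sketch of how these hyperparameters map onto TRL's `GRPOConfig`. The `output_dir` value is an assumption for illustration, not a setting stated in this card:

```python
from trl import GRPOConfig

training_args = GRPOConfig(
    output_dir="outputs",  # assumption: not specified in this card
    learning_rate=5e-6,
    adam_beta1=0.9,
    adam_beta2=0.99,
    weight_decay=0.1,
    warmup_ratio=0.1,
    lr_scheduler_type="cosine",
    optim="paged_adamw_8bit",
)
```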
## Suggested system prompt for reasoning
```
Respond in the following format:
<reasoning>
...
</reasoning>
<answer>
...
</answer>
```
Do not omit the `<reasoning></reasoning>` and `<answer></answer>` tags when adapting the prompt.
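
A minimal sketch of running inference with this system prompt. The repository ID and the example word problem are placeholders, not values from this card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/reasoning-llama-3.1-8b-grpo"  # placeholder Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

SYSTEM_PROMPT = """Respond in the following format:
<reasoning>
...
</reasoning>
<answer>
...
</answer>"""

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    # Placeholder question in the style of the orca-math training data.
    {"role": "user", "content": "A bakery sold 24 muffins in the morning and 18 in the afternoon. How many muffins were sold in total?"},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```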