---
license: apache-2.0
tags:
  - unsloth
  - trl
  - grpo
  - llama
language:
  - en
base_model:
  - meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
library_name: transformers
datasets:
  - microsoft/orca-math-word-problems-200k
---

# ThinkerLlama-8B-v1

## Model Details

ThinkerLlama-8B-v1 is part of a series of reasoning Llama models, fine-tuned on the microsoft/orca-math-word-problems-200k dataset with GRPO (Group Relative Policy Optimization), a reinforcement learning technique.

**Base model:** meta-llama/Llama-3.1-8B-Instruct
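As a sketch of how the model might be called at inference time with the suggested reasoning format (the repo id `suayptalha/ThinkerLlama-8B-v1` and the prompt wiring are assumptions, not confirmed by this card):

```python
# Sketch: build a chat request that uses the suggested reasoning format.
# The repo id in the commented usage below is an assumption.

SYSTEM_PROMPT = """Respond in the following format:
<reasoning>
...
</reasoning>
<answer>
...
</answer>
Do not forget the <reasoning></reasoning> and <answer></answer> tags."""

def build_messages(question: str) -> list[dict]:
    """Wrap a user question in the chat format expected by Llama instruct models."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ]

# Typical usage with transformers (not executed here):
# from transformers import pipeline
# generator = pipeline("text-generation", model="suayptalha/ThinkerLlama-8B-v1")
# output = generator(build_messages("What is 12 * 7?"), max_new_tokens=512)
```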

## Parameters

- `learning_rate = 5e-6`
- `adam_beta1 = 0.9`
- `adam_beta2 = 0.99`
- `weight_decay = 0.1`
- `warmup_ratio = 0.1`
- `lr_scheduler_type = "cosine"`
- `optim = "paged_adamw_8bit"`
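GRPO scores a group of sampled completions per prompt and turns the rewards into group-relative advantages. This card does not state which reward functions were used, but a format reward matching the tag scheme below is a common pairing; the sketch here is hypothetical, stdlib-only, and simplified (full GRPO also divides by the group's reward standard deviation):

```python
import re

# Hypothetical format reward of the kind often paired with GRPO training:
# each sampled completion in a group gets a scalar reward.
TAG_PATTERN = re.compile(
    r"<reasoning>.*?</reasoning>\s*<answer>.*?</answer>", re.DOTALL
)

def format_reward(completions: list[str]) -> list[float]:
    """Return 1.0 for completions that follow the <reasoning>/<answer> format."""
    return [1.0 if TAG_PATTERN.search(c) else 0.0 for c in completions]

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Center each reward on the group mean - the 'relative' part of GRPO."""
    mean = sum(rewards) / len(rewards)
    return [r - mean for r in rewards]
```

Completions that respect the format are pushed up relative to their group, and the rest are pushed down, which is how the tag structure gets reinforced.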

## Suggested system prompt for reasoning

```
Respond in the following format:
<reasoning>
...
</reasoning>
<answer>
...
</answer>
Do not forget the <reasoning></reasoning> and <answer></answer> tags.
```

## Support

If you find this work useful, you can support me: Buy Me A Coffee