klei1 committed
Commit 02502b3 · verified · 1 Parent(s): 9336652

Update README.md

Files changed (1): README.md (+31 -53)
README.md CHANGED
@@ -1,15 +1,33 @@
+---
+base_model: gemma-3-27b-it
+tags:
+- text-generation-inference
+- transformers
+- gemma3
+- reasoning
+- mathematics
+- grpo
+license: apache-2.0
+language:
+- en
+inference:
+  parameters:
+    temperature: 0.7
+    top_p: 0.95
+    top_k: 64
+    max_new_tokens: 512
+---
+
 # Gemma 3 27B GRPO Reasoning Model
 
 ## Model Description
 - **Developed by:** klei1
 - **Model type:** Gemma 3 27B fine-tuned with GRPO for reasoning tasks
 - **License:** apache-2.0
-- **Finetuned from model:** [unsloth/gemma-3-27b-it-unsloth-bnb-4bit](https://huggingface.co/unsloth/gemma-3-27b-it-unsloth-bnb-4bit)
-- **Framework:** Hugging Face Transformers with Unsloth optimization
+- **Finetuned from model:** Google's Gemma 3 27B instruction-tuned model
+- **Framework:** Hugging Face Transformers
 
-This model is a fine-tuned version of Google's Gemma 3 27B instruction-tuned model, enhanced using Generative Rejection Policy Optimization (GRPO) to improve its reasoning capabilities. The training was completed 1.6x faster with [Unsloth](https://github.com/unslothai/unsloth) optimization, requiring 60% less VRAM and enabling 6x longer context than environments with Flash Attention 2.
+This model is a fine-tuned version of Google's Gemma 3 27B instruction-tuned model, enhanced using Group Relative Policy Optimization (GRPO) to improve its reasoning capabilities.
 
-[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
-
 ## Capabilities & Training
 
@@ -29,14 +47,17 @@ The model has been trained to follow a specific reasoning format:
 - Final solutions are provided between `<SOLUTION>` and `</SOLUTION>` tags
 
 ### Training Configuration
-- **Framework:** Unsloth with Hugging Face's TRL library
+- **Framework:** Hugging Face's TRL library
 - **Optimization:** LoRA fine-tuning (r=8, alpha=8)
-- **Precision:** 4-bit dynamic quantization for superior accuracy with minimal VRAM usage
 - **Reward Functions:** Format adherence, answer accuracy, and reasoning quality
-- **Context Length:** Up to 128K tokens supported by base model
 
 ## Technical Specifications
 
+### Available Formats
+This model is available in two formats:
+- Standard adapter format (adapter_model.safetensors)
+- GGUF 8-bit quantized format (bleta-meditor-27b-finetune.Q8_0.gguf) for use with llama.cpp
+
 ### Gemma 3 Architecture Benefits
 - 27B parameters, trained on 14 trillion tokens
 - 128K context window (extended from 32K)
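For readers reconstructing the setup, the training configuration above maps onto TRL's `GRPOTrainer` with a PEFT LoRA adapter. The sketch below is an illustration only: the dataset, base model id, and every hyperparameter other than r=8/alpha=8 are assumptions rather than values taken from this repo, and only the format-adherence reward is shown.

```python
# Illustrative sketch of the "Training Configuration" above.
# Only r=8 / alpha=8 and the tag format come from the card; the dataset,
# base model id, and remaining hyperparameters are assumptions.
import re

from datasets import load_dataset
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer

# Format-adherence reward: 1.0 when a completion uses the expected tags.
FORMAT_RE = re.compile(
    r"<start_working_out>.*?<end_working_out>.*?<SOLUTION>.*?</SOLUTION>",
    re.DOTALL,
)

def format_reward(completions, **kwargs):
    return [1.0 if FORMAT_RE.search(c) else 0.0 for c in completions]

# GRPOTrainer expects a "prompt" column; GSM8K here is an assumed stand-in.
dataset = load_dataset("openai/gsm8k", "main", split="train")
dataset = dataset.rename_column("question", "prompt")

trainer = GRPOTrainer(
    model="google/gemma-3-27b-it",
    reward_funcs=[format_reward],  # accuracy/quality rewards would be appended here
    args=GRPOConfig(output_dir="gemma-3-27b-grpo", num_generations=4),
    train_dataset=dataset,
    peft_config=LoraConfig(r=8, lora_alpha=8, task_type="CAUSAL_LM"),
)
trainer.train()
```

In TRL, each reward function receives a batch of completions and returns one score per completion; GRPO normalizes the scores within each group of `num_generations` samples to form advantages, which is why the accuracy and reasoning-quality rewards named in the card would simply be added to `reward_funcs`.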
@@ -44,15 +65,10 @@ The model has been trained to follow a specific reasoning format:
 - 5 sliding + 1 global attention pattern
 - 1024 sliding window attention
 
-### Unsloth Optimization
-- 1.6x faster training compared to standard implementations
-- >60% VRAM reduction, enabling training on consumer GPUs
-- Support for 6x longer context than environments with Flash Attention 2
-- Fixes for float16 mixed precision issues that cause infinity in activations and gradients
+## System Prompt
 
-## Usage
+To get the best results from this model, use this system prompt:
 
-### Example System Prompt
 ```
 You are given a problem.
 Think about the problem and provide your working out.
@@ -60,43 +76,6 @@ Place it between <start_working_out> and <end_working_out>.
 Then, provide your solution between <SOLUTION></SOLUTION>
 ```
 
-### Example Usage
-```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-import torch
-
-model_name = "klei1/gemma-3-27b-grpo" # Replace with your model path
-model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype=torch.float16)
-tokenizer = AutoTokenizer.from_pretrained(model_name)
-
-system_prompt = """You are given a problem.
-Think about the problem and provide your working out.
-Place it between <start_working_out> and <end_working_out>.
-Then, provide your solution between <SOLUTION></SOLUTION>"""
-
-messages = [
-    {"role": "system", "content": system_prompt},
-    {"role": "user", "content": "If a train travels at 60 miles per hour, how far will it travel in 2.5 hours?"}
-]
-
-text = tokenizer.apply_chat_template(
-    messages,
-    add_generation_prompt=True,
-    tokenize=False
-)
-
-inputs = tokenizer(text, return_tensors="pt").to(model.device)
-outputs = model.generate(
-    **inputs,
-    max_new_tokens=512,
-    temperature=0.7,
-    top_p=0.95,
-    top_k=64
-)
-
-print(tokenizer.decode(outputs[0], skip_special_tokens=False))
-```
-
 ## Limitations
 - While this model excels at reasoning tasks, particularly mathematical problems, it may still occasionally provide incorrect solutions for complex problems.
 - The model's performance might vary depending on problem complexity and wording.
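This commit removes the worked usage example, leaving only the system prompt. For reference, here is a minimal sketch consistent with the retained prompt and the sampling parameters declared in the new front matter (temperature 0.7, top_p 0.95, top_k 64, max_new_tokens 512); the model id is the placeholder used in the removed example.

```python
# Minimal usage sketch. Model id is the placeholder from the old example;
# sampling parameters mirror the `inference` block in the card front matter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "klei1/gemma-3-27b-grpo"  # replace with the actual model path
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

system_prompt = (
    "You are given a problem.\n"
    "Think about the problem and provide your working out.\n"
    "Place it between <start_working_out> and <end_working_out>.\n"
    "Then, provide your solution between <SOLUTION></SOLUTION>"
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "If a train travels at 60 miles per hour, how far will it travel in 2.5 hours?"},
]

# Tokenize the chat and generate with the card's sampling parameters.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=64,
)
# Print only the newly generated tokens, keeping the reasoning tags visible.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=False))
```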
@@ -104,7 +83,6 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=False))
 
 ## Acknowledgments
 - Google for developing the Gemma 3 model family
-- Unsloth team for providing optimization techniques that make fine-tuning large models more accessible
 - Hugging Face for their TRL library and GRPO implementation
 
 ## Citation
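The Q8_0 GGUF file listed under the new "Available Formats" section can also be run outside Transformers. Below is a sketch using the llama-cpp-python binding, which is one of several llama.cpp frontends; the local file path, context size, and prompt are assumptions.

```python
# Sketch: running the Q8_0 GGUF export with llama-cpp-python.
# The file name comes from the card; n_ctx and the prompt are assumptions.
from llama_cpp import Llama

llm = Llama(model_path="bleta-meditor-27b-finetune.Q8_0.gguf", n_ctx=8192)
result = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": "You are given a problem.\n"
            "Think about the problem and provide your working out.\n"
            "Place it between <start_working_out> and <end_working_out>.\n"
            "Then, provide your solution between <SOLUTION></SOLUTION>",
        },
        {"role": "user", "content": "What is 17 * 24?"},
    ],
    temperature=0.7,
    top_p=0.95,
    top_k=64,
    max_tokens=512,
)
print(result["choices"][0]["message"]["content"])
```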
 