Update README.md
README.md CHANGED
@@ -2,8 +2,9 @@
 base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
 tags:
 - text-generation-inference
+- reasoning
 - transformers
--
+- DeepSeek R1
 - llama
 - gguf
 license: apache-2.0
@@ -15,8 +16,10 @@ language:
 
 - **Developed by:** johnnietien
 - **License:** apache-2.0
-- **Finetuned from model :**
+- **Finetuned from model :** llama-3.2-1b-instruct-bnb-4bit
 
-
-
-
+The first reasoning model that can have an "aha moment", just like DeepSeek's R1.
+We've enhanced the entire GRPO process, making it use 80% less VRAM than Hugging Face + FA2.
+This allows you to reproduce R1-Zero's "aha moment" on just 7 GB of VRAM using llama-3.2-1b.
+Please note: this isn't fine-tuning DeepSeek's R1 distilled models or tuning on data distilled from R1.
+This is converting a standard model into a full-fledged reasoning model using GRPO.
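The new card text above describes GRPO (Group Relative Policy Optimization) training. As a point of reference, here is a minimal sketch of what that workflow might look like with Unsloth's `FastLanguageModel` and TRL's `GRPOTrainer`. The GSM8K dataset, the LoRA settings, and the toy `correctness_reward` function are illustrative assumptions, not the configuration actually used for this model.

```python
# Illustrative sketch only: GRPO fine-tuning of llama-3.2-1b with Unsloth + TRL.
# The dataset, reward function, and hyperparameters are assumptions, not the
# settings used to produce this model.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Load the 4-bit base model listed under `base_model` in the card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit",
    max_seq_length=1024,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained,
# which is part of how GRPO fits in a small VRAM budget.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Assumed dataset: GSM8K math problems. Keep the question as the prompt
# and the final number after "####" as the gold answer.
data = load_dataset("openai/gsm8k", "main", split="train")
data = data.map(lambda x: {
    "prompt": x["question"],
    "answer": x["answer"].split("####")[-1].strip(),
})

# Toy reward: 1.0 when the gold answer appears in the completion.
# GRPO compares groups of sampled completions per prompt using such rewards,
# with no distilled R1 data involved.
def correctness_reward(prompts, completions, answer, **kwargs):
    return [1.0 if a in c else 0.0 for c, a in zip(completions, answer)]

trainer = GRPOTrainer(
    model=model,
    processing_class=tokenizer,
    reward_funcs=[correctness_reward],
    args=GRPOConfig(
        output_dir="outputs",
        learning_rate=5e-6,
        per_device_train_batch_size=8,
        num_generations=8,        # completions sampled per prompt for the group
        max_prompt_length=256,
        max_completion_length=512,
        max_steps=250,
    ),
    train_dataset=data,
)
trainer.train()
```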