Add comprehensive model card with usage instructions and evaluation results
README.md CHANGED
@@ -40,8 +40,8 @@ This LoRA adapter enhances google/gemma-3-1b-it with structured reasoning capabilities
 - **LoRA Rank**: 64
 - **LoRA Alpha**: 128
 - **Training Samples**: 107
-- **Thinking Tag Usage**:
-- **Average Quality Score**:
+- **Thinking Tag Usage**: 60.0%
+- **Average Quality Score**: 5.60
 
 ## 🔧 Usage
 
@@ -131,9 +131,9 @@ The model was trained on self-generated reasoning problems across multiple domains
 ## 🔬 Evaluation
 
 The adapter was evaluated on diverse reasoning tasks:
-- Thinking tag usage rate:
-- Average reasoning quality score:
-- Response comprehensiveness:
+- Thinking tag usage rate: 60.0%
+- Average reasoning quality score: 5.60
+- Response comprehensiveness: 454 words average
 
 ## 🏷️ Related
 
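For context on the hyperparameters listed in the card (rank 64, alpha 128), a PEFT `LoraConfig` along these lines would reproduce them. This is a sketch only: the target modules and dropout are assumptions, since the commit does not include the training configuration.

```python
# Illustrative LoraConfig matching the rank/alpha reported in the model card.
# target_modules and lora_dropout are assumptions, not taken from the commit.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,              # LoRA Rank from the model card
    lora_alpha=128,    # LoRA Alpha from the model card
    lora_dropout=0.05, # assumed value
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
```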
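The diff shows only the heading of the Usage section, not its body. As a minimal sketch of how such an adapter is typically loaded on top of google/gemma-3-1b-it with transformers and PEFT (the adapter repo id below is a placeholder, not the actual path from the card):

```python
# Minimal sketch: attach the LoRA adapter to the base model and generate.
# "your-org/gemma-3-1b-it-reasoning-lora" is a placeholder repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-3-1b-it"
adapter_id = "your-org/gemma-3-1b-it-reasoning-lora"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

messages = [{"role": "user", "content": "Explain step by step why 17 is prime."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```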
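The evaluation bullets report aggregate statistics over generated responses. A rough sketch of how numbers of this kind could be computed is shown below; the thinking-tag format and the source of the per-response quality scores are assumptions, not taken from the commit.

```python
# Hedged sketch of the reported aggregate metrics; tag format and quality
# scores are assumptions.
def summarize(responses, quality_scores, tag="<thinking>"):
    n = len(responses)
    tag_rate = sum(tag in r for r in responses) / n               # e.g. 60.0%
    avg_quality = sum(quality_scores) / len(quality_scores)       # e.g. 5.60
    avg_words = sum(len(r.split()) for r in responses) / n        # e.g. 454 words
    return {"thinking_tag_rate": tag_rate,
            "avg_quality_score": avg_quality,
            "avg_words": avg_words}
```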