Update README.md
README.md CHANGED
@@ -21,7 +21,7 @@ model-index:
 results: []
 new_version: Daemontatox/Compumacy-Experimental_MF
 ---
-
+
 # Compumacy-Experimental_MF
 ## A Specialized Language Model for Clinical Psychology & Psychiatry
 
@@ -147,7 +147,7 @@ Frameworks and Libraries
 ### LoRA Configuration
 
 Parameter-Efficient Fine-Tuning was performed using LoRA with the following configuration, targeting a wide range of attention and feed-forward network layers to ensure comprehensive adaptation:
-
+```python
 Rank (r): 16
 
 LoRA Alpha (lora_alpha): 16 (scaled learning)
@@ -183,6 +183,7 @@ Warmup Ratio: 0.02
 Gradient Checkpointing: "unsloth"
 
 Random State: 42
+```
 
 ## Ethical Considerations and Limitations
 
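For readers who want to reproduce the adapter setup described in the LoRA Configuration section above, here is a minimal sketch of how those values (rank 16, LoRA alpha 16, "unsloth" gradient checkpointing, random state 42) might be expressed with Unsloth's PEFT helper. The base checkpoint name, sequence length, and target_modules list are illustrative assumptions and are not taken from this diff.

```python
# Minimal sketch of the LoRA settings listed in the model card, using Unsloth.
# Base model name, max_seq_length, and target_modules are illustrative assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="<base-model-checkpoint>",  # assumption: placeholder, not specified in the diff
    max_seq_length=2048,                   # assumption: not specified in the diff
    load_in_4bit=True,
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,                                  # Rank (r): 16
    lora_alpha=16,                         # LoRA Alpha (lora_alpha): 16
    target_modules=[                       # assumption: typical attention + feed-forward projections
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    lora_dropout=0.0,
    bias="none",
    use_gradient_checkpointing="unsloth",  # Gradient Checkpointing: "unsloth"
    random_state=42,                       # Random State: 42
)
```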