Daemontatox committed on
Commit 1d1cd90 · verified · 1 Parent(s): f2a7fe4

Update README.md

Files changed (1): README.md (+3 -2)
README.md CHANGED
@@ -21,7 +21,7 @@ model-index:
 results: []
 new_version: Daemontatox/Compumacy-Experimental_MF
 ---
-
+![image](./image.jpg)
 # Compumacy-Experimental_MF
 ## A Specialized Language Model for Clinical Psychology & Psychiatry
 
@@ -147,7 +147,7 @@ Frameworks and Libraries
 ### LoRA Configuration
 
 Parameter-Efficient Fine-Tuning was performed using LoRA with the following configuration, targeting a wide range of attention and feed-forward network layers to ensure comprehensive adaptation:
-
+```python
 Rank (r): 16
 
 LoRA Alpha (lora_alpha): 16 (scaled learning)
@@ -183,6 +183,7 @@ Warmup Ratio: 0.02
 Gradient Checkpointing: "unsloth"
 
 Random State: 42
+```
 
 ## Ethical Considerations and Limitations
 
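The last two hunks wrap the README's LoRA settings in a Python code fence. For context, here is a minimal sketch of how those settings might map onto Unsloth's PEFT helper. Only `r=16`, `lora_alpha=16`, `use_gradient_checkpointing="unsloth"`, and `random_state=42` come from the README itself; the base checkpoint name, sequence length, quantization flag, and exact `target_modules` list (the README only says "a wide range of attention and feed-forward network layers") are assumptions:

```python
# Hedged sketch: maps the LoRA settings quoted in the diff above onto
# Unsloth's API. Values marked "assumption" are NOT stated in this commit.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="BASE_MODEL",  # placeholder: base checkpoint not named in this excerpt
    max_seq_length=2048,      # assumption
    load_in_4bit=True,        # assumption
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,                                  # Rank (r): 16
    lora_alpha=16,                         # LoRA Alpha (lora_alpha): 16 (scaled learning)
    target_modules=[                       # assumed expansion of "attention and FFN layers"
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    use_gradient_checkpointing="unsloth",  # Gradient Checkpointing: "unsloth"
    random_state=42,                       # Random State: 42
)
```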