Pretergeek committed
Commit 4a53a82 • 1 Parent(s): 5a34885
Update README.md
README.md CHANGED

@@ -14,12 +14,16 @@ license: apache-2.0
 
 # OpenChat-3.5-0106_8.11B_36Layers-Interleaved
 
-This is
+This is NOT your usual frankenmerge created using [mergekit](https://github.com/cg123/mergekit).
 
 ## Merge Details
 ### Merge Method
 
-This model was merged using the passthrough merge method.
+This model was merged using the passthrough merge method, but employing the Block Expansion method described in the paper [LLaMA Pro: Progressive LLaMA with Block Expansion](https://arxiv.org/abs/2401.02415).
+
+The authors of the paper added new layers interleaved between the original layers of the model, setting the parameters of the o_proj and down_proj layers to zero. This effectively adds layers that simply output their input (as if they were "transparent"), allowing the model to remain functional even without further training. These new layers can then be targeted during training or fine-tuning without risking catastrophic forgetting, provided you follow the authors' training method of freezing the original layers and training only the new layers.
+
+This model has not yet received additional training, so it should perform close to the original model.
 
 ### Models Merged
 
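For readers who want to see how the interleaving works in practice, below is a minimal sketch of the Block Expansion step described in the README text above. It is not the script used to produce this model: the `expand_blocks` helper, the use of Hugging Face `transformers`, and the every-8-layers spacing are assumptions chosen for illustration. The idea is simply to insert copies of existing decoder layers with their `o_proj` and `down_proj` weights zeroed, so the residual connections turn each new block into an identity mapping.

```python
# Hypothetical sketch of LLaMA Pro-style Block Expansion on a Mistral/LLaMA-style
# model (OpenChat-3.5 is Mistral-based). Not the actual merge recipe for this repo.
import copy
import torch
from transformers import AutoModelForCausalLM

def expand_blocks(model_name: str, insert_every: int = 8):
    """Insert one zero-initialized copy of a decoder layer after every
    `insert_every` original layers."""
    model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
    expanded = torch.nn.ModuleList()

    for i, layer in enumerate(model.model.layers):
        expanded.append(layer)
        if (i + 1) % insert_every == 0:
            new_layer = copy.deepcopy(layer)
            # Zeroing o_proj and down_proj makes the attention and MLP branches
            # contribute nothing; with the residual connections the new block
            # just passes its input through, keeping the model functional.
            torch.nn.init.zeros_(new_layer.self_attn.o_proj.weight)
            torch.nn.init.zeros_(new_layer.mlp.down_proj.weight)
            expanded.append(new_layer)

    model.model.layers = expanded
    model.config.num_hidden_layers = len(expanded)
    return model

# Hypothetical usage: 32 original layers + 4 interleaved identity layers = 36.
# expanded = expand_blocks("openchat/openchat-3.5-0106", insert_every=8)
# expanded.save_pretrained("OpenChat-3.5-0106_8.11B_36Layers-Interleaved")
```

During continued training one would then freeze every original layer and optimize only the inserted copies, which is what protects the base model from catastrophic forgetting. Note that recent `transformers` releases also track a per-layer `layer_idx` for the KV cache, so a complete implementation would renumber that attribute after insertion.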