Update README.md
README.md CHANGED

@@ -9,7 +9,7 @@ tags:
 - merge
 
 ---
-#
+# Trifecta-L3-8b
 
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
@@ -23,6 +23,7 @@ This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) me
 The following models were included in the merge:
 * [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B)
 * [Sao10K/L3-8B-Lunaris-v1](https://huggingface.co/Sao10K/L3-8B-Lunaris-v1)
+* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
 
 ### Configuration
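The diff stops before the contents of the `### Configuration` section, so the actual merge recipe is not visible here. As a minimal sketch only, a mergekit DARE TIES config combining these three models could look roughly like the YAML below; the choice of base model, the densities, weights, and dtype are illustrative assumptions, not values taken from this repository.

```yaml
# Illustrative sketch only -- the real Trifecta-L3-8b configuration is not shown in this diff.
models:
  - model: NousResearch/Hermes-3-Llama-3.1-8B
    parameters:
      density: 0.5   # assumed DARE drop density
      weight: 0.4    # assumed merge weight
  - model: Sao10K/L3-8B-Lunaris-v1
    parameters:
      density: 0.5
      weight: 0.3
  - model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      density: 0.5
      weight: 0.3
merge_method: dare_ties
base_model: NousResearch/Hermes-3-Llama-3.1-8B  # hypothetical base; the actual base model is not visible in the hunk
dtype: bfloat16
```

A config of this shape is normally run with mergekit's `mergekit-yaml` command, e.g. `mergekit-yaml config.yml ./merged-model`.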