Doctor-Shotgun committed
Commit dd4d04b · 1 Parent(s): 288ca25

Update README.md

Files changed (1):
  1. README.md (+6, -1)
README.md CHANGED
@@ -9,7 +9,11 @@ tags:
 - llama-2
 license: agpl-3.0
 ---
-# Model Card: CalliopeDS-v2-L2-13B
+# CalliopeDS-v2-L2-13B
+
+[EXL2 Quants](https://huggingface.co/Doctor-Shotgun/CalliopeDS-v2-L2-13B-exl2)
+
+[GGUF Quants](https://huggingface.co/Doctor-Shotgun/Misc-Models)
 
 This is a Llama 2-based model consisting of a merge of several models using PEFT adapters and SLERP merging:
 - [PygmalionAI/pygmalion-2-13b](https://huggingface.co/PygmalionAI/pygmalion-2-13b)
@@ -21,6 +25,7 @@ This is a Llama 2-based model consisting of a merge of several models using PEFT
 Charles Goddard's [mergekit](https://github.com/cg123/mergekit) repo was used to perform these operations.
 
 The purpose of this merge was to create a model that excels at creative writing and roleplay while maintaining general intelligence and instruction-following capabilities. In testing, it has shown to be capable of producing descriptive and verbose responses while demonstrating a solid understanding of the context.
+
 ## Usage:
 Due to this being a merge of multiple models, different prompt formats may work, but you can try the Alpaca instruction format of LIMARP v3:
 ```
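For context on the SLERP step mentioned in the README: spherical linear interpolation blends two models' weights along the arc between the two weight vectors rather than along a straight line, which tends to preserve the scale and geometry of each parent better than plain averaging. Below is a minimal PyTorch sketch of the idea only, not the mergekit implementation used for this merge; the tensor shapes and the interpolation factor `t` are illustrative assumptions.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    Falls back to plain linear interpolation when the vectors are nearly
    parallel, where the SLERP formula becomes numerically unstable.
    """
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()

    # Angle between the two (normalized) weight vectors.
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0)
    omega = torch.acos(dot)

    if omega < 1e-4:
        # Nearly parallel: plain LERP is fine and avoids division by ~0.
        merged = (1.0 - t) * a_flat + t * b_flat
    else:
        sin_omega = torch.sin(omega)
        merged = (torch.sin((1.0 - t) * omega) / sin_omega) * a_flat + (
            torch.sin(t * omega) / sin_omega
        ) * b_flat

    return merged.reshape(a.shape).to(a.dtype)


# Illustrative usage: blend one layer's weights 40% toward the second model.
w_a = torch.randn(4096, 4096)
w_b = torch.randn(4096, 4096)
w_merged = slerp(0.4, w_a, w_b)
```

In an actual merge, a routine like this would be applied parameter by parameter across the two checkpoints, with `t` controlling how far each merged tensor leans toward the second model.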