Gryphe committed
Commit d89d925
1 Parent(s): 3264791

Update README.md

Files changed (1): README.md +4 -3
README.md CHANGED
@@ -5,10 +5,9 @@ language:
 ---
 An experiment with gradient merges using [the following script](https://github.com/TehVenomm/LM_Transformers_BlockMerge), with [Chronos](https://huggingface.co/elinas/chronos-13b) as its primary model, augmented by [Hermes](https://huggingface.co/NousResearch/Nous-Hermes-13b) and [Wizard-Vicuna Uncensored](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-HF).

- Quantizations are available from TheBloke (You're the best!):
+ Quantized models are available from TheBloke: [GGML](https://huggingface.co/TheBloke/MythoLogic-13B-GGML) - [GPTQ](https://huggingface.co/TheBloke/MythoLogic-13B-GPTQ) (You're the best!)

- [GGML](https://huggingface.co/TheBloke/MythoLogic-13B-GGML)
- [GPTQ](https://huggingface.co/TheBloke/MythoLogic-13B-GPTQ)
+ ## Model details

 Chronos is a wonderfully verbose model, though it definitely seems to lack in the logic department. Hermes and WizardLM have been merged gradually, primarily in the higher layers (10+) in an attempt to rectify some of this behaviour.

@@ -18,6 +17,8 @@ Below is an illustration to showcase a rough approximation of the gradients I us

 ![](approximation.png)

+ ## Prompt Format
+
 This model primarily uses Alpaca formatting, so for optimal model performance, use:
 ```
 ### Instruction:
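A note on the technique in the diff above: a gradient (block) merge linearly interpolates the weights of same-architecture checkpoints, with a mixing ratio that changes from layer to layer. The Python sketch below illustrates the idea only; the ratio curve, the 40-layer count (standard for LLaMA-13B), and the two-model simplification are assumptions for this example, not the actual recipe, which used the linked BlockMerge script with three parent models and the gradients approximated in approximation.png.

```python
# Illustrative per-layer "gradient" merge of two same-architecture checkpoints.
# The blend curve below is a made-up example, NOT the MythoLogic-13B recipe.
import torch
from transformers import AutoModelForCausalLM

NUM_LAYERS = 40  # LLaMA-13B has 40 transformer layers

def blend_ratio(layer_idx: int) -> float:
    """Hypothetical gradient: early layers stay pure Chronos; from layer 10
    upward, an increasing share of the secondary model is mixed in."""
    if layer_idx < 10:
        return 0.0
    return 0.5 * (layer_idx - 10) / (NUM_LAYERS - 10)

primary = AutoModelForCausalLM.from_pretrained(
    "elinas/chronos-13b", torch_dtype=torch.float16)
secondary = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Nous-Hermes-13b", torch_dtype=torch.float16)

merged = primary.state_dict()
for name, tensor in secondary.state_dict().items():
    # LLaMA parameter names look like "model.layers.<idx>.self_attn.q_proj.weight"
    if ".layers." in name:
        idx = int(name.split(".layers.")[1].split(".")[0])
        r = blend_ratio(idx)
        merged[name] = (1.0 - r) * merged[name] + r * tensor

primary.load_state_dict(merged)
primary.save_pretrained("mythologic-style-merge")  # hypothetical output dir
```

Weight interpolation like this only tends to produce a coherent model when all parents are fine-tunes of the same base, which is the case here: Chronos, Hermes, and Wizard-Vicuna Uncensored are all LLaMA-13B derivatives.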
 
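The diff view cuts off inside the prompt-format code block, so the template shown in the README is incomplete here. As a usage reference, below is a sketch that wraps a prompt in the common Alpaca instruction/response scaffold; the repository id and generation parameters are assumptions for the example.

```python
# Usage sketch: Alpaca-style prompting, as the README recommends.
# "Gryphe/MythoLogic-13b" is assumed as the repo id for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

ALPACA_TEMPLATE = (
    "### Instruction:\n"
    "{instruction}\n"
    "\n"
    "### Response:\n"
)

tokenizer = AutoTokenizer.from_pretrained("Gryphe/MythoLogic-13b")
model = AutoModelForCausalLM.from_pretrained("Gryphe/MythoLogic-13b")

prompt = ALPACA_TEMPLATE.format(instruction="Write a short tale about a gryphon.")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```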