Update README.md
README.md
````diff
@@ -21,6 +21,8 @@ It has also been quantized down to 4Bit using the GPTQ library available here: h
 python llama.py .\Metharme-7b-Merged-Safetensors c4 --wbits 4 --true-sequential --groupsize 32 --save_safetensors Metharme-7B-GPTQ-4bit-32g.no-act-order.safetensors
 ```
 
+---
+
 Metharme 7B is an instruct model based on Meta's LLaMA-7B.
 
 This is an experiment to try and get a model that is usable for conversation, roleplaying and storywriting, but which can be guided using natural language like other instruct models. See the [prompting](#prompting) section below for examples.
````
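For context, the command shown in the hunk above produces a 4-bit, groupsize-32 GPTQ checkpoint in safetensors format. Below is a minimal sketch of loading that file for inference. It assumes the AutoGPTQ Python package, which the README itself does not reference (the README quantizes with the GPTQ repo's `llama.py` directly), and it reuses the directory and output names from the command above.

```python
# Hedged sketch: load the 4-bit GPTQ output for inference.
# AutoGPTQ is an assumption; the README only shows quantization via llama.py.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_dir = "Metharme-7b-Merged-Safetensors"        # folder with config.json / tokenizer files
weights = "Metharme-7B-GPTQ-4bit-32g.no-act-order"  # quantized file name, without the .safetensors suffix

tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    model_basename=weights,
    use_safetensors=True,
    device="cuda:0",
)

prompt = "Hello!"  # use the template from the README's prompting section
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

This is only one way to consume the file; the GPTQ repository's own inference script (with matching `--wbits 4 --groupsize 32` flags) should work as well.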