Upload new GPTQs with varied parameters

README.md CHANGED

@@ -1,6 +1,7 @@
 ---
 inference: false
 license: other
+model_type: llama
 ---
 
 <!-- header start -->
@@ -39,7 +40,6 @@ Below is an instruction that describes a task. Write a response that appropriate
 ### Instruction: {prompt}
 
 ### Response:
-
 ```
 
 ## Provided files
@@ -53,7 +53,7 @@ Each separate quant is in a different branch. See below for instructions on fet
 | main | 4 | 128 | False | 7.90 GB | True | GPTQ-for-LLaMa | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
 | gptq-4bit-32g-actorder_True | 4 | 32 | True | 8.45 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
 | gptq-4bit-64g-actorder_True | 4 | 64 | True | 7.95 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
-| gptq-4bit-128g-actorder_True | 4 | 128 | True | 7.70 GB | True | AutoGPTQ | 4-bit, with Act Order
+| gptq-4bit-128g-actorder_True | 4 | 128 | True | 7.70 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
 | gptq-8bit--1g-actorder_True | 8 | None | True | 13.80 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
 | gptq-8bit-128g-actorder_False | 8 | 128 | False | 14.10 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
 
@@ -130,7 +130,6 @@ prompt_template=f'''Below is an instruction that describes a task. Write a respo
 ### Instruction: {prompt}
 
 ### Response:
-
 '''
 
 print("\n\n*** Generate:")
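For context, the template touched by the second and fourth hunks is the standard Alpaca prompt format; the change removes a stray blank line after `### Response:`. Filled in, the post-commit template looks like this minimal sketch (the instruction value is a hypothetical example, and the opening sentence is completed from the standard Alpaca wording that git truncates in the hunk headers):

```python
# Minimal sketch of the Alpaca-style template from this README.
# The instruction text is a hypothetical example; after this commit the
# template ends immediately after "### Response:" with no extra blank line.
prompt = "Tell me about AI"

prompt_template = f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction: {prompt}

### Response:
'''

print(prompt_template)
```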
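Since each quant in the table lives on its own branch, a specific variant can be fetched by passing the branch name as the Hub `revision`. A minimal sketch with `huggingface_hub` (the repo id is a placeholder, since the actual repo name does not appear in this diff):

```python
from huggingface_hub import snapshot_download

# Placeholder repo id; substitute the repo this README belongs to.
repo_id = "TheBloke/MODEL-GPTQ"

# Each row of the "Provided files" table is a branch; pass its name as
# `revision` to download that quant instead of the one on `main`.
local_dir = snapshot_download(repo_id=repo_id, revision="gptq-4bit-32g-actorder_True")
print(f"Downloaded 4-bit/32g/Act-Order quant to: {local_dir}")
```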
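A branch can also be loaded directly in AutoGPTQ, sketched here under the assumption that the installed `auto-gptq` forwards a `revision` kwarg to the Hub download (the repo id is again a placeholder):

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Placeholder repo id; the real one is not part of this diff.
model_name_or_path = "TheBloke/MODEL-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

# `revision` selects a branch from the table above; the quantization
# parameters are read from the quantize_config.json shipped in the repo.
model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    revision="gptq-4bit-128g-actorder_True",
    use_safetensors=True,
    device="cuda:0",
)

inputs = tokenizer("Tell me about AI", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```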