Upload README.md
README.md CHANGED
@@ -96,7 +96,7 @@ Models are released as sharded safetensors files.
 
 | Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
 | ------ | ---- | -- | ----------- | ------- | ---- |
-| [main](https://huggingface.co/TheBloke/Llama-2-13B-chat-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 |
+| [main](https://huggingface.co/TheBloke/Llama-2-13B-chat-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | Processing, coming soon |
 
 <!-- README_AWQ.md-provided-files end -->
 
@@ -108,7 +108,7 @@ Documentation on installing and using vLLM [can be found here](https://vllm.read
 - When using vLLM as a server, pass the `--quantization awq` parameter, for example:
 
 ```shell
-python3 python -m vllm.entrypoints.api_server --model TheBloke/Llama-2-13B-chat-
+python3 python -m vllm.entrypoints.api_server --model TheBloke/Llama-2-13B-chat-AWQ --quantization awq
 ```
 
 When using vLLM from Python code, pass the `quantization=awq` parameter, for example:
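For context on the first hunk: the table's Branch column names the repo branch that holds the sharded safetensors files, so a given quantization can be fetched by revision. A minimal sketch using `huggingface_hub` (that library and this snippet are assumptions; neither appears in the diff):

```python
from huggingface_hub import snapshot_download

# Fetch the 4-bit, group-size-128 AWQ weights listed in the table.
# `revision` selects the repo branch from the Branch column ("main" here).
local_dir = snapshot_download(
    repo_id="TheBloke/Llama-2-13B-chat-AWQ",
    revision="main",
)
print(f"Sharded safetensors downloaded to: {local_dir}")
```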
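Note that the second hunk completes the truncated model name and adds `--quantization awq`, but leaves `python3 python -m` at the start of the command on both sides; that reads like a template typo, since `python3` would treat `python` as a script path. The intended invocation is presumably `python3 -m vllm.entrypoints.api_server --model TheBloke/Llama-2-13B-chat-AWQ --quantization awq`.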
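The last context line introduces a Python example that falls outside this hunk. A minimal sketch of what such usage typically looks like with vLLM's Python API, assuming a recent `vllm` install (the prompt and sampling values are illustrative, not taken from the README):

```python
from vllm import LLM, SamplingParams

# Illustrative prompt and sampling settings (not from the diff above).
prompts = ["Tell me about AI"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

# quantization="awq" tells vLLM to load the AWQ-quantized checkpoint.
llm = LLM(model="TheBloke/Llama-2-13B-chat-AWQ", quantization="awq")

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```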