Update README.md
adding quantization info to the readme/model card.
README.md CHANGED
@@ -65,4 +65,39 @@ Please consider setting temperature = 0 to get consistent outputs.

## Recommended Hardware

This model is designed for high-performance environments with two or more 80GB GPUs (e.g., NVIDIA A100) and at least 150GB of free disk space.

### Quantization

If hardware constraints prevent loading the full model, quantization can reduce memory requirements. Note, however, that quantization inherently discards model information.

> **Warning**: We do not recommend using quantized versions of this model. Expert-curated domain knowledge embedded in the model may be lost, degrading performance on critical tasks.

If you proceed with quantization, we recommend using the GGUF format. This approach enables out-of-core quantization, which is essential when RAM is a limiting factor.

To convert the model to GGUF, use the `llama.cpp` tools (tested with release `b5233`). Due to the model's custom setup, use the legacy conversion script, which provides the required `--vocab-type` flag.

```
python ./llama.cpp/examples/convert_legacy_llama.py ./ncos_model_directory/ --outfile ncos.gguf --vocab-type bpe
```

Once converted, the model can be quantized without fully loading it into RAM.

> **Info**: To choose the right quantization scheme for your use case, read up on the different kinds of quantization and the parameters for each option. Some methods can use example data to guide the quantization of the model, which helps avoid losing information relevant to your intended application.
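
For illustration, llama.cpp provides an importance-matrix workflow for this kind of data-guided quantization. The sketch below is an illustrative assumption, not part of the official instructions: `calibration.txt` stands in for plain-text samples representative of your use case, and the exact flags should be verified against your llama.cpp release.

```
# Sketch: build an importance matrix from representative text, then use it to
# guide the quantization. calibration.txt is a placeholder; verify the flags
# against your llama.cpp release before use.
./llama.cpp/llama-imatrix -m ./ncos.gguf -f ./calibration.txt -o ./ncos-imatrix.dat
./llama.cpp/llama-quantize --imatrix ./ncos-imatrix.dat ./ncos.gguf ./ncos-q4_k_m.gguf q4_k_m
```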

For demonstration, the following command performs an uninformed 4-bit quantization using the `q4_0` method:

```
./llama.cpp/llama-quantize ./ncos.gguf ./ncos-q4_0.gguf q4_0
```

The resulting 4-bit version of the model is roughly 40GB in size and can run on hardware below the recommended configuration described above. When running CPU-only, i.e. without a GPU, it even fits consumer setups with around 50GB of RAM. Other quantization options may reduce the size further.
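
As a quick smoke test of such a CPU-only setup, the quantized file can be loaded with llama.cpp's command-line client. This is only a sketch under the same assumptions as above; the prompt is a placeholder and `-ngl 0` keeps all layers on the CPU:

```
# Sketch: run the quantized model on the CPU only and generate a short completion.
# The prompt is a placeholder; check the flags against your llama.cpp release.
./llama.cpp/llama-cli -m ./ncos-q4_0.gguf -ngl 0 -n 64 -p "Hello"
```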

In addition to running the model in Gradio, as sketched above, you can also deploy it on-premise using Ollama (tested with version v0.6.7). Set up an Ollama Modelfile according to your use case (the preferred system prompt and some additional settings can be found in the model's config files; a minimal sketch is shown after the command below), then add the model to Ollama like this:

```
ollama create ncos-q40 -f ./ncos-gguf/Modelfile
```
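
For orientation, a minimal Modelfile might look like the sketch below. The `FROM` path and the system prompt are placeholders; use the system prompt and parameters shipped in the model's config files instead.

```
# Illustrative Modelfile sketch; adjust the path and system prompt to your setup.
FROM ./ncos-q4_0.gguf
PARAMETER temperature 0
SYSTEM "Placeholder system prompt; replace with the one from the model's config files."
```

Once created, the model can be served locally with `ollama run ncos-q40`.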