usernameisokaynow committed
Commit dc22b35 · verified · 1 Parent(s): 4a48f35

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -97,7 +97,7 @@ The resulting 4bit version of the model is roughly 40GB in size. The model can r
 
 In addition to running the model in gradio, as sketched above, you can also deploy on-premise using the ollama-library (version: v0.6.7). After setting up a ollama-"modelfile" according to your use case (e.g. the preferred system prompt and some additional setups can be found in the config files of the model) you can add the model to ollama like this:
 ```
-ollama create ncos-q40 -f ./ncos-gguf/Modelfile
+ollama create ncos-q4_0 -f ./ncos-gguf/Modelfile
 ```
 
 
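For readers setting up the Modelfile referenced in the corrected command, a minimal sketch is shown below. The GGUF file name, system prompt, and parameter values are illustrative placeholders only; the actual values should come from the model's own config files, as the README notes.

```
# Minimal ollama Modelfile sketch (placeholder values, not the model's official config)
# FROM points at the local GGUF weights; this file name is hypothetical.
FROM ./ncos-q4_0.gguf

# Sampling/context parameters -- adjust to match the settings shipped with the model.
PARAMETER temperature 0.7
PARAMETER num_ctx 4096

# Preferred system prompt; replace with the one from the model's config files.
SYSTEM "You are a helpful assistant."
```

Once registered with `ollama create ncos-q4_0 -f ./ncos-gguf/Modelfile`, the model can then be started locally with `ollama run ncos-q4_0`.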