---
base_model:
- openai/gpt-oss-20b
---
# GPT OSS 20B GGUF

Recommended way to run this model:

```sh
llama-server -hf ggml-org/gpt-oss-20b-GGUF -c 0 -fa --jinja --reasoning-format none
# Then, access http://localhost:8080
```
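Once the server is up, it exposes llama.cpp's OpenAI-compatible HTTP API. The sketch below sends a chat completion request against the default port; it assumes the server is reachable at http://localhost:8080, and the prompt is purely illustrative:

```sh
# Query the OpenAI-compatible chat completions endpoint served by llama-server.
# Host, port, and the prompt are illustrative; adjust them to your setup.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Explain what a GGUF file is in one sentence."}
        ]
      }'
```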