---
base_model:
- google/gemma-3-270m-it
---
# gemma-3-270m-it GGUF

Recommended way to run this model:

```sh
llama-server -hf ggml-org/gemma-3-270m-it-GGUF -c 0 -fa
```

Then open http://localhost:8080 in your browser.
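Besides the web UI, `llama-server` exposes an OpenAI-compatible HTTP API, so the model can also be queried programmatically. A minimal sketch, assuming the server is running on the default port 8080 (the prompt text is just an illustrative placeholder):

```shell
# Send a chat completion request to the local llama-server
# (OpenAI-compatible /v1/chat/completions endpoint, default port 8080).
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Write a one-line greeting."}
    ],
    "max_tokens": 64
  }'
```

The response is a JSON object in the OpenAI chat-completion format; the generated text is under `choices[0].message.content`.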