---
base_model:
  - google/gemma-3-270m-it
---

gemma-3-270m-it GGUF

Recommended way to run this model:

llama-server -hf ggml-org/gemma-3-270m-it-GGUF -c 0 -fa --jinja

Then, open http://localhost:8080 in your browser to use the built-in web UI.
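
The server also exposes an OpenAI-compatible API on the same port. A minimal sketch of a chat completion request (the prompt is just an illustration; adjust the port if you changed the server defaults):

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Write a haiku about llamas."}
    ]
  }'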