---
base_model:
- google/gemma-3-270m
---
# gemma-3-270m GGUF
Recommended way to run this model:

```sh
llama-server -hf ggml-org/gemma-3-270m-GGUF -c 0 -fa
```

Then, access http://localhost:8080

For a one-off prompt in the terminal instead, use:

```sh
llama-cli -hf ggml-org/gemma-3-270m-GGUF -c 0 -fa -p "hello"
```
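Besides the web UI, `llama-server` also exposes an OpenAI-compatible HTTP API. A minimal request sketch, assuming a server instance is already listening on the default port 8080:

```sh
# Query the local llama-server via its OpenAI-compatible chat endpoint
# (assumes llama-server is already running on the default port 8080)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "hello"}
    ]
  }'
```

The response is a JSON object in the OpenAI chat-completions format, so existing OpenAI client libraries can be pointed at this endpoint as well.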