---
license: apache-2.0
language:
- en
library_name: llama.cpp
pipeline_tag: text-generation
---
# Gemma 2 - Inference Endpoint
## <span style="color: red">NOTICE:</span> This model does, in fact, run on Inference Endpoints. Just click Deploy, unlike with regular GGUF models. The model itself is no longer stored here, merely linked. Enjoy <span style="color: red"><3</span>
<label>Code Sample (One-Shot)</label>
```json
{
"inputs": "A plain old prompt with nothing else"
}
```
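As a quick sketch (not the only way to do it), you could send that payload to the deployed endpoint from Python like this. The endpoint URL is a placeholder for the one shown on your Inference Endpoints dashboard, and `HF_TOKEN` is assumed to hold your Hugging Face access token:

```python
import os
import requests

# Placeholder URL -- replace with the URL your Inference Endpoint shows after deploy.
ENDPOINT_URL = "https://your-endpoint.us-east-1.aws.endpoints.huggingface.cloud"

headers = {
    "Authorization": f"Bearer {os.environ['HF_TOKEN']}",  # your Hugging Face token
    "Content-Type": "application/json",
}

# Same one-shot payload as above: a plain prompt, nothing else.
payload = {"inputs": "A plain old prompt with nothing else"}

response = requests.post(ENDPOINT_URL, headers=headers, json=payload)
response.raise_for_status()
print(response.json())
```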
## Multi-turn coming soon...
Hello! I wrote a simple container that makes it easy to run llama-cpp-python with GGUF models. My goal here was a cheap way to play with Gemma, but then I thought I'd share it in case it's helpful. I'll probably make a bunch of these, so if you have any requests for GGUF or otherwise quantized Llama.cpp models to become inference endpoints, please feel free to reach out!
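For a rough idea of what the container does under the hood (this is a minimal sketch, not the container's actual code), llama-cpp-python loads a GGUF file and turns the `inputs` string into a completion. The model path and generation parameters below are placeholders:

```python
from llama_cpp import Llama

# Placeholder path -- point this at whichever GGUF file the container downloads.
llm = Llama(model_path="./gemma-2b-it-q4_k_m.gguf", n_ctx=2048)

# The "inputs" string from the request payload becomes the prompt.
output = llm("A plain old prompt with nothing else", max_tokens=128)
print(output["choices"][0]["text"])
```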
# Files
I used the excellent quant by [lmstudio-ai/gemma-2b-it-GGUF](https://huggingface.co/lmstudio-ai/gemma-2b-it-GGUF).
My email is [email protected]
Just kidding, it's sam att samuellmeyers DOT... com