Poro-34B-gguf
This is a GGUF quantization of the Poro-34B model.
Please refer to the original Poro-34B repository's model card for details.
The current revision is a quantization of the 1000B token checkpoint.
The conversion was done with llama.cpp version b2354 (commit e25fb4b18fcedb9bed6be4585cf842e9a669b28b) on a Google Compute Engine machine generously sponsored by Valohai.
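GGUF is a binary container format: every file begins with a fixed little-endian header holding the magic bytes `GGUF`, a format version, a tensor count, and a metadata key-value count. As a minimal sketch (using a synthetic in-memory header rather than an actual model file), you can verify that a file really is GGUF before loading it:

```python
import struct

def read_gguf_header(data: bytes):
    # GGUF header layout (little-endian):
    #   4-byte magic "GGUF", uint32 version,
    #   uint64 tensor count, uint64 metadata key-value count
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return version, n_tensors, n_kv

# Synthetic header for illustration: version 3, 2 tensors, 5 KV pairs
sample = struct.pack("<4sIQQ", b"GGUF", 3, 2, 5)
print(read_gguf_header(sample))  # (3, 2, 5)
```

The same check applied to the first 24 bytes of a downloaded `.gguf` file is a quick sanity test that the transfer completed intact.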
Available quantizations: 3-bit, 4-bit, and 5-bit.