This repository (ktoprakucar/Hermes-3-Llama-3.1-8B-Q8-GPTQ) contains an 8-bit GPTQ-quantized version of NousResearch/Hermes-3-Llama-3.1-8B, produced by following the quantization example from the AutoGPTQ repository.
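The quantization step can be sketched roughly as below, following the pattern of the AutoGPTQ README example. This is a minimal sketch, not the exact script used for this model: the `group_size` and `desc_act` values, the calibration text, and the output directory name are assumptions.

```python
# Sketch of 8-bit GPTQ quantization with the auto-gptq library.
# Requires a CUDA GPU and downloads the full-precision base model.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base_model = "NousResearch/Hermes-3-Llama-3.1-8B"
tokenizer = AutoTokenizer.from_pretrained(base_model)

# 8 bits matches this repo; group_size/desc_act are assumed typical values.
quantize_config = BaseQuantizeConfig(bits=8, group_size=128, desc_act=False)

model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)

# Placeholder calibration data; a real run would use a larger, representative set.
examples = [
    tokenizer(
        "auto-gptq is an easy-to-use model quantization library.",
        return_tensors="pt",
    )
]

model.quantize(examples)
model.save_quantized("Hermes-3-Llama-3.1-8B-Q8-GPTQ")
```

The saved directory can then be loaded back with `AutoGPTQForCausalLM.from_quantized(...)` for inference.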
