Custom GGUF quants of [NousResearch/DeepHermes-3-Llama-3-8B-Preview](https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview), where the Output Tensors are kept at F32 or quantized to Q8_0, while the Embeddings are kept at F32. Enjoy! 🧠πŸ”₯πŸš€
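
If you want to try one of these files locally, below is a minimal sketch using llama-cpp-python. The filename, context size, and GPU-offload setting are assumptions — substitute the .gguf variant you actually downloaded from this repo. (For reference, recipes like this — separate types for the output tensor and token embeddings — can typically be reproduced with llama.cpp's `llama-quantize` tool via its output-tensor-type and token-embedding-type options.)

```python
# Minimal sketch: run one of these quants with llama-cpp-python.
# The model_path below is hypothetical -- point it at whichever
# DeepHermes-3-Llama-3-8B-Preview .gguf file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepHermes-3-Llama-3-8B-Preview-Q8_0.gguf",  # assumed filename
    n_ctx=8192,        # context window; adjust to your memory budget
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In one sentence, what is a GGUF quant?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```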