Q4_K_M GGUF quant of Reflection-Llama-3.1-70B (fixed version).
Tested; runs well on 48 GB of VRAM.
An Ollama Modelfile is included. It carries the original system prompt, so output is split into "thinking" and "output" tags.
If you want the 'vanilla' Llama 3.1 experience, remove the SYSTEM instruction from the Modelfile before creating the Ollama model.
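The create step can be sketched roughly as below. This is a sketch, not the repo's actual Modelfile: the GGUF filename and the model name are assumptions, so substitute the file you downloaded, and keep or remove the SYSTEM line depending on which behaviour you want.

```shell
# Sketch only -- the repo ships a ready-made Modelfile; the filename below is an assumption.
cat > Modelfile <<'EOF'
FROM ./Reflection-Llama-3.1-70B.Q4_K_M.gguf
EOF
# The shipped Modelfile additionally contains a SYSTEM """...""" block with the
# original Reflection prompt (which produces the "thinking"/"output" tags);
# delete that block for vanilla Llama 3.1 behaviour.

# Register the quant with Ollama under a local name, then chat with it.
ollama create reflection-llama-70b -f Modelfile
ollama run reflection-llama-70b
```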

All comments are greatly appreciated. Download it, test it, and if you appreciate my work, consider buying me my fuel: Buy Me A Coffee

GGUF · 4-bit (Q4_K_M) · 70.6B params · llama architecture

