Llama-3-70B-Instruct-Storywriter-iMat-GGUF

Special request. Quantized from fp32 with love. If you can't fit IQ quants in your VRAM, try using the K quants in this repo instead.
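
If you only want a single quant rather than the whole repo, `huggingface_hub` can fetch one file. A minimal sketch; the filename below is hypothetical, so substitute an actual file from this repo's listing:

```python
from huggingface_hub import hf_hub_download

# Filename is illustrative -- pick the actual quant file from the repo listing.
path = hf_hub_download(
    repo_id="InferenceIllusionist/Llama-3-70B-Instruct-Storywriter-iMat-GGUF",
    filename="Llama-3-70B-Instruct-Storywriter-iMat-Q4_K_M.gguf",
)
print(path)  # local cache path to the downloaded GGUF
```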

  • The .imatrix file in this repo was created using the Q8_0 quantization of Llama-3-70B-Instruct-Storywriter.
  • Calculated over 88 chunks with n_ctx=512, using groups_merged.txt as the calibration dataset (a reproduction sketch follows this list).
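
For reference, importance matrices like this one are typically produced with llama.cpp's imatrix tool. A minimal sketch, assuming a local llama.cpp build; the paths are illustrative, and older builds name the binary `imatrix` rather than `llama-imatrix`:

```python
import subprocess

MODEL = "Llama-3-70B-Instruct-Storywriter-Q8_0.gguf"  # illustrative path
CALIB = "groups_merged.txt"                           # calibration text

# -c sets the per-chunk context size; the tool walks the calibration
# file in 512-token chunks and writes the importance matrix to -o.
subprocess.run(
    ["./llama-imatrix", "-m", MODEL, "-f", CALIB, "-o", "imatrix.dat", "-c", "512"],
    check=True,
)
```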

For a brief rundown of iMatrix quant performance, please see this PR.

All quants are verified working prior to upload, for your safety and convenience.

Tip: For best speed, pick a file size below your GPU's VRAM while still leaving some room for context. You may need to pad this further depending on whether you are also running image generation or TTS. A rough fit check is sketched below.
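
A minimal sketch of that fit check; the 2 GiB headroom default is an illustrative guess, since actual context memory depends on n_ctx and the model architecture:

```python
import os

def fits_in_vram(gguf_path: str, vram_gib: float, headroom_gib: float = 2.0) -> bool:
    """Rough check: model file size plus headroom for the KV cache and
    compute buffers must stay under available VRAM."""
    model_gib = os.path.getsize(gguf_path) / 2**30
    return model_gib + headroom_gib <= vram_gib

# Example: checking a quant against a 24 GiB card.
print(fits_in_vram("model-IQ3_XXS.gguf", vram_gib=24.0))
```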

The BFloat16 model card can be found here.

Model size: 70.6B params (llama architecture, GGUF format)
Quants available: 1-, 2-, 3-, 4-, 5-, 6-, and 8-bit
