L3-70B-Euryale-v2.1-iMat-GGUF

Quantized from fp16.

  • Weighted quantizations were created using the fp16 GGUF with groups_merged.txt as the calibration data (88 chunks, n_ctx=512); a sketch of this workflow follows.
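For readers who want to reproduce or adapt the quants, the sketch below shows a typical llama.cpp imatrix workflow under stated assumptions: the binary names (llama-imatrix, llama-quantize) vary across llama.cpp builds, the fp16 GGUF filename is a placeholder, and IQ4_XS is only an example target type.

```python
# Sketch of the llama.cpp imatrix workflow described above.
# Binary names and file paths are assumptions -- adjust to your build.
import subprocess

FP16_GGUF = "L3-70B-Euryale-v2.1-fp16.gguf"  # hypothetical local filename
CALIB = "groups_merged.txt"                  # calibration text named above
IMATRIX = "imatrix.dat"

# 1. Compute the importance matrix over the calibration data at n_ctx=512.
subprocess.run(
    ["llama-imatrix", "-m", FP16_GGUF, "-f", CALIB, "-o", IMATRIX, "-c", "512"],
    check=True,
)

# 2. Produce a weighted quant (IQ4_XS shown as an example) using the imatrix.
subprocess.run(
    ["llama-quantize", "--imatrix", IMATRIX, FP16_GGUF,
     "L3-70B-Euryale-v2.1-IQ4_XS.gguf", "IQ4_XS"],
    check=True,
)
```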

Recommended Sampler Settings (From Original Model Card)

Temperature - 1.17
min_p - 0.075
Repetition Penalty - 1.10
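As a concrete illustration, here is a minimal llama-cpp-python sketch applying those values; the model path is a placeholder for whichever quant file you downloaded, and n_ctx/n_gpu_layers are arbitrary example choices.

```python
# Minimal sketch: apply the recommended sampler settings with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="L3-70B-Euryale-v2.1-IQ4_XS.gguf",  # placeholder quant file
    n_ctx=8192,        # example context size
    n_gpu_layers=-1,   # offload all layers to GPU if possible
)

out = llm(
    "Write a short scene.",
    max_tokens=256,
    temperature=1.17,     # Temperature
    min_p=0.075,          # min_p
    repeat_penalty=1.10,  # Repetition Penalty
)
print(out["choices"][0]["text"])
```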

SillyTavern Instruct Settings:
Context Template: Llama-3-Instruct-Names
Instruct Presets: Euryale-v2.1-Llama-3-Instruct

For a brief rundown of iMatrix quant performance, please see this PR.

All quants are verified working prior to uploading to the repo for your safety and convenience.

Tip: For best speed, pick a file size under your GPU's VRAM while still leaving some room for context. You may need to pad this further if you are also running image generation or TTS; a rough sizing sketch follows.
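To make "room for context" concrete, the back-of-envelope sketch below estimates the fp16 KV-cache footprint from Llama-3-70B's published dimensions (80 layers, 8 KV heads, head dim 128); the VRAM and file sizes are hypothetical examples.

```python
# Rough VRAM budget check: the quant file must fit alongside the KV cache.
# Dimensions are Llama-3-70B's published config; fp16 KV cache assumed.

def kv_cache_gib(n_ctx: int, n_layers: int = 80, n_kv_heads: int = 8,
                 head_dim: int = 128, bytes_per_elem: int = 2) -> float:
    # K and V each store n_ctx * n_kv_heads * head_dim elements per layer.
    return 2 * n_layers * n_ctx * n_kv_heads * head_dim * bytes_per_elem / 1024**3

vram_gib = 24.0   # e.g. a single 24 GB card (hypothetical)
file_gib = 20.0   # hypothetical quant file size
ctx = 8192

need = file_gib + kv_cache_gib(ctx)
print(f"KV cache at {ctx} ctx: {kv_cache_gib(ctx):.2f} GiB; total {need:.2f} GiB")
print("fits" if need < vram_gib else "does not fit -- pick a smaller quant")
```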

The original model card can be found here.

Format: GGUF
Model size: 70.6B params
Architecture: llama
Available quantizations: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit
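To fetch a single quant without cloning the whole repo, here is a minimal huggingface_hub sketch; the filename is a guess, so check the repo's file list for the exact names.

```python
# Download one quant file from this repo via huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="InferenceIllusionist/L3-70B-Euryale-v2.1-iMat-GGUF",
    filename="L3-70B-Euryale-v2.1-IQ4_XS.gguf",  # hypothetical filename
)
print(path)
```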
