Outdated:
Outdated tokenizer configuration!
This model is only kept for historical purposes; use the newer models instead of this one.

This is Llama-3 land now, cowboys!

GGUF-IQ-Imatrix quants for ResplendentAI/Aura_L3_8B. Presets here.

Use the latest version of KoboldCpp together with the provided presets.
This is all still highly experimental, so let the authors know how it performs for you; feedback is more important than ever now.
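If you want to fetch a quant programmatically rather than through the browser, here is a minimal sketch using `huggingface_hub`. The repository ID comes from this card; the specific quant filename is a hypothetical placeholder, so check the repository's file list for the actual names.

```python
# A minimal sketch, assuming download via the Hugging Face Hub:
# grab one GGUF quant file, then load it in KoboldCpp.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="Lewdiculous/Aura_L3_8B-GGUF-IQ-Imatrix",
    filename="Aura_L3_8B-Q4_K_M-imat.gguf",  # hypothetical filename; pick one from the repo
)
print(gguf_path)  # point KoboldCpp at this file through its model selector
```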

Original model information:

Aura L3


The next evolution in Aura models, trained on 6 separate datasets and ready to bring you to your knees.

I am so happy to be one of the first with a finetune of this amazing model. I hope that you all enjoy the finetune as much as I know I will.
