This is my first quantization: q4_0 GGML (ggjtv3) and GGUFv2 quantizations of https://huggingface.co/acrastt/OmegLLaMA-3B. I hope it works fine. 🤗

Prompt format:

```
Interests: {interests}
Conversation:
You: {prompt}
Stranger: 
```
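As an illustration, the template above can be filled in code before being passed to a GGUF loader such as llama-cpp-python. This is a minimal sketch; the model file name and generation parameters below are assumptions, not part of this repo:

```python
# Build a prompt string in the OmegLLaMA format shown above.
def build_prompt(interests: str, prompt: str) -> str:
    return (
        f"Interests: {interests}\n"
        "Conversation:\n"
        f"You: {prompt}\n"
        "Stranger: "
    )

text = build_prompt("music, hiking", "Hi! What do you like to do?")
print(text)

# Hypothetical usage with llama-cpp-python (file name and parameters
# are assumptions):
# from llama_cpp import Llama
# llm = Llama(model_path="OmegLLaMA-3B.q4_0.gguf")
# out = llm(text, max_tokens=64, stop=["You:"])
# print(out["choices"][0]["text"])
```

Stopping generation on `You:` keeps the model from continuing the conversation past the Stranger's turn.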
Model details:

- Model size: 3.43B params
- Architecture: llama
- Quantization: 4-bit (q4_0)
- Formats: GGML (ggjtv3) and GGUF
