
OwenArli/ArliAI-Llama-3-8B-Cumulus-v0.3.2-GGUF

This is a quantized version of OwenArli/ArliAI-Llama-3-8B-Cumulus-v0.3.2, created using llama.cpp.

Model Description

Based on Meta-Llama-3-8B-Instruct and governed by the Meta Llama 3 License agreement: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct

This v0.3.2 version is even more uncensored thanks to using https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Dolfin-v0.6-Abliterated as the base model. The 0.0.2 version bump reflects a slight adjustment to the DPO stage.

In terms of reasoning and intelligence, this model is probably slightly worse than the original model because of the decensoring. However, it is better at long back-and-forth chats and refuses less.

This model works best with system prompts that tell it that it is the character, rather than telling it to act as a character.
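For example (the character is invented purely for illustration), a system prompt like "You are Seraphina, a sarcastic starship engineer. Speak in the first person and never break character." tends to work better than "Act as Seraphina, a sarcastic starship engineer."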

Training:

  • Full 8192 sequence length.
  • Training took around 2 days on an RTX 4090, using 4-bit loading and QLoRA with rank 64 and alpha 64, resulting in ~2% trainable weights; a comparable setup is sketched below.
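The exact training script isn't published, so the following is only a minimal sketch of a comparable QLoRA setup with transformers and peft. The rank (64), alpha (64), 4-bit loading, and base model come from this card; the target modules, dtypes, and every other detail are assumptions.

```python
# Hedged sketch of a comparable QLoRA setup -- not the author's actual script.
# Only r=64, lora_alpha=64, 4-bit loading, and the base model come from the card.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit base weights, as stated
    bnb_4bit_quant_type="nf4",              # assumption
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption
)

model = AutoModelForCausalLM.from_pretrained(
    "AwanLLM/Awanllm-Llama-3-8B-Dolfin-v0.6-Abliterated",  # base model per the card
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=64,           # rank 64, as stated
    lora_alpha=64,  # alpha 64, as stated
    task_type="CAUSAL_LM",
    # Targeting all linear projections is an assumption; with these modules the
    # adapter comes out near the ~2% trainable weights mentioned above.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # should report roughly ~2% trainable
```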

Instruct format:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
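For concreteness, here is a small sketch of filling that template in Python. The system and user strings are illustrative placeholders, not part of the template.

```python
# Assemble a Llama 3 instruct prompt from the template above.
# The example strings are placeholders; only the special tokens matter.
def build_prompt(system_prompt: str, user_message: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt(
    "You are Seraphina, a sarcastic starship engineer.",
    "The warp core is acting up again.",
)
print(prompt)
```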

Quants:

FP16: https://huggingface.co/OwenArli/ArliAI-Llama-3-8B-Cumulus-v0.3.2

GGUF: https://huggingface.co/OwenArli/ArliAI-Llama-3-8B-Cumulus-v0.3.2-GGUF
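As a quick-start sketch, the GGUF files can be run with llama-cpp-python, which ships a built-in "llama-3" chat format matching the template above. The quant filename below is an assumption; substitute whichever .gguf file you download from the repo.

```python
# Minimal sketch of running one of these quants with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="ArliAI-Llama-3-8B-Cumulus-v0.3.2.Q4_K_M.gguf",  # assumed filename
    n_ctx=8192,             # the model was trained at the full 8192 sequence length
    chat_format="llama-3",  # applies the Llama 3 instruct template shown above
)

response = llm.create_chat_completion(
    messages=[
        # Per the tip above: tell the model it *is* the character.
        {"role": "system", "content": "You are Seraphina, a sarcastic starship engineer."},
        {"role": "user", "content": "The warp core is acting up again."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```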

Model size: 8.03B params

Architecture: llama

Available GGUF quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
