πŸ¦™ Llama-3.1-70B-Instruct-lorablated

This is an uncensored version of Llama 3.1 70B Instruct created with abliteration (see this article to learn more about it) using @grimjim's recipe.
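
At its core, abliteration estimates a "refusal direction" in the residual stream (from activation differences between harmful and harmless prompts) and projects it out of the model's weights. A minimal sketch of that orthogonalization step, illustrative only (see the linked article for the full method):

import torch

def orthogonalize(weight: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    # weight: (d_model, d_in) matrix that writes into the residual stream
    # refusal_dir: vector of shape (d_model,) estimated from activations
    v = refusal_dir / refusal_dir.norm()
    # Remove the component of every output along the refusal direction:
    # W' = W - v v^T W
    return weight - torch.outer(v, v) @ weight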

More precisely, this is a LoRA-abliterated (lorablated) model:

  1. Extraction: We extract a LoRA adapter by comparing two models: a censored Llama 3 and an abliterated Llama 3.
  2. Merge: We merge this new LoRA adapter into a censored Llama 3.1 using task arithmetic to abliterate it (see the sketch after this list).
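
Conceptually, step 1 approximates the abliteration as a low-rank update (e.g., via truncated SVD of the weight difference), and step 2 adds that update to the Llama 3.1 weights. A hedged sketch per weight matrix (not mergekit's actual implementation):

import torch

def extract_lora(w_abliterated: torch.Tensor, w_censored: torch.Tensor, rank: int = 64):
    # Step 1: low-rank approximation of the abliteration delta via truncated SVD
    delta = w_abliterated - w_censored
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    lora_b = u[:, :rank] * s[:rank]  # (d_out, rank)
    lora_a = vh[:rank, :]            # (rank, d_in)
    return lora_a, lora_b

def task_arithmetic_merge(w_target: torch.Tensor, lora_a, lora_b, weight: float = 1.0):
    # Step 2: add the low-rank "abliteration vector" to the Llama 3.1 weight
    return w_target + weight * (lora_b @ lora_a)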

I adapted this recipe to Llama 3.1 70B using failspy/Meta-Llama-3-70B-Instruct-abliterated-v3.5 and optimized the LoRA rank.

The model is fully uncensored in my tests and maintains a high level of quality. A more rigorous evaluation is still needed to measure the impact of this process on benchmarks.

Special thanks to @grimjim for this technique (see his 8B model) and @FailSpy for his 70B abliterated model. Please follow them if you're interested in abliterated models.

In addition, thanks to brev.dev for providing me with compute!

πŸ” Applications

General-purpose use and role-play (see feedback from McUH). Use the Llama 3 chat template.
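
For example, with transformers (assuming you have access to the weights):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mlabonne/Llama-3.1-70B-Instruct-lorablated")
messages = [{"role": "user", "content": "Write a haiku about llamas."}]
# apply_chat_template formats the conversation with the Llama 3 special tokens
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)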

⚑️ Quantization

Quantized versions of this model are available from the community (see the Quantizations list on the model page).

🧩 Configuration

This model was merged using the task arithmetic merge method, with meta-llama/Meta-Llama-3.1-70B-Instruct + Llama-3-70B-Instruct-abliterated-LORA as the base.

The following YAML configuration was used to produce this model:

base_model: meta-llama/Meta-Llama-3.1-70B-Instruct+Llama-3-70B-Instruct-abliterated-LORA
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: false
slices:
- sources:
  - layer_range: [0, 80]
    model: meta-llama/Meta-Llama-3.1-70B-Instruct+Llama-3-70B-Instruct-abliterated-LORA
    parameters:
      weight: 1.0
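
The model+LoRA syntax tells mergekit to apply the LoRA adapter to the base weights before merging. Conceptually, this is equivalent to the following PEFT snippet (a sketch, assuming the adapter produced by the extraction step below is saved locally):

from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3.1-70B-Instruct")
model = PeftModel.from_pretrained(base, "Llama-3-70B-Instruct-abliterated-LORA")
merged = model.merge_and_unload()  # bakes the LoRA delta into the base weights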

You can reproduce this model using the following commands:

# Setup
git clone https://github.com/arcee-ai/mergekit.git
cd mergekit && pip install -e .
pip install bitsandbytes

# Extraction
mergekit-extract-lora failspy/Meta-Llama-3-70B-Instruct-abliterated-v3.5 meta-llama/Meta-Llama-3-70B-Instruct Llama-3-70B-Instruct-abliterated-LORA --rank=64

# Merge using previous config
mergekit-yaml config.yaml Llama-3.1-70B-Instruct-lorablated --allow-crimes --lora-merge-cache=./cache
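
Once the merge is done (or using the published weights directly), you can run a quick sanity check with transformers. A minimal inference sketch (assumes enough GPU memory or offloading via device_map="auto"):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/Llama-3.1-70B-Instruct-lorablated"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Hello! Who are you?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))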