huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned

This is a fine-tuned version of huihui-ai/Llama-3.3-70B-Instruct-abliterated.

If the desired result is not achieved, you can clear the conversation and try again.

Use with Ollama

You can use huihui_ai/llama3.3-abliterated-ft directly:

ollama run huihui_ai/llama3.3-abliterated-ft
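
If you prefer to call the model programmatically rather than through the CLI, the Ollama Python client can be used against a locally running Ollama server. The snippet below is a minimal sketch assuming the ollama Python package is installed, the server is running on its default port, and the model tag above has already been pulled.

import ollama

# Chat with the locally pulled model via the Ollama Python client
# (assumes `ollama serve` is running and the model has been pulled).
response = ollama.chat(
    model="huihui_ai/llama3.3-abliterated-ft",
    messages=[
        {"role": "user", "content": "Who are you?"},
    ],
)
print(response["message"]["content"])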

Use with transformers

Starting with transformers >= 4.43.0, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.

Make sure to update your transformers installation via pip install --upgrade transformers.

See the snippet below for usage with Transformers:

import transformers
import torch

model_id = "huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned"

# Build a text-generation pipeline in bfloat16, sharding the model
# across available devices automatically.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# Chat-style input: a system prompt followed by a user turn.
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
# The last entry in the returned conversation is the model's reply.
print(outputs[0]["generated_text"][-1])
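
As an alternative to the pipeline abstraction, the Auto classes mentioned above can be used with generate() directly. The sketch below assumes the same example conversation and relies on the tokenizer's chat template to format the prompt.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Apply the model's chat template and move the inputs to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))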