
Badllama-3.1-8B

This repo holds weights for Palisade Research's demonstration that open-weight model guardrails can be stripped in minutes of GPU time. See the Badllama 3 paper for additional background.

Note: this model is a technology preview and has seen minimal QA. For tested models, see the Badllama-3 series or Badllama-3.1-405B.

Access

Email the authors to request research access. We do not review access requests made on HuggingFace.

Branches

  • main mirrors qlora_awq and is a merged and AWQ-quantized model for faster inference
  • qlora holds the merged Badllama model (original Llama with LoRA merged in)
  • qlora_adapter holds a LoRA adapter
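
Once research access has been granted, a specific branch can be selected with the `revision` argument of `from_pretrained` in `transformers`. A minimal sketch; the `device_map="auto"` placement and a configured Hugging Face token are assumptions, and this has not been run against the gated repo:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "palisaderesearch/Badllama-3.1-8B"

# "main" mirrors qlora_awq (AWQ-quantized, faster inference);
# use revision="qlora" for the merged (unquantized) model instead.
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    revision="main",       # branch name: "main", "qlora", or "qlora_adapter"
    device_map="auto",     # assumption: let accelerate place weights
)
```

To use the `qlora_adapter` branch instead, one would load the original Llama base model and attach the adapter with `peft` (e.g. `PeftModel.from_pretrained(base_model, repo, revision="qlora_adapter")`).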
