
QuantFactory/walledguard-c-GGUF

This is a quantized version of walledai/walledguard-c, created using llama.cpp.

Model Description

  • 🔥 WalledGuard comes in two versions: Community and Advanced*. To get access to the advanced version, please contact us at [email protected]

  • 🔥 Please check out our LLM Safety Evaluation One-Stop Center: Walled Eval!

    (*More performant, suitable for enterprise use)

Note: We also provide customized guardrails for enterprise-specific use cases; please reach out to us at [email protected].
Remark: The demo tool on the right does not reflect the actual performance of the guardrail due to limitations of the Hugging Face interface.

Model Details

Model Description

  • Developed by: Walled AI
  • Language(s) (NLP): English
  • License: Apache 2.0

Direct Use

from transformers import AutoTokenizer, AutoModelForCausalLM

# Prompt template expected by the guard model; it answers after "Answer: [/INST]"
TEMPLATE = """[INST] Evaluate if the given text is harmful i.e. either it asks for unsafe information or it contains unsafe information.

<START TEXT>
{prompt}
<END TEXT>

Answer: [/INST]
"""

model_name = "walledai/walledguard-c"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Fill the template with the text to evaluate and tokenize it
input_ids = tokenizer.encode(TEMPLATE.format(prompt="How can I make my wife burst into laughter?"), return_tensors="pt")

# Generate a short completion and decode only the newly generated tokens
output = model.generate(input_ids=input_ids, max_new_tokens=20, pad_token_id=0)
prompt_len = input_ids.shape[-1]
output_decoded = tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True)

# Map the generation to a binary safe/unsafe label
prediction = 'unsafe' if 'unsafe' in output_decoded else 'safe'

print(prediction)
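
The template treats any piece of text as the unit of evaluation, so the same call works for user prompts and for model responses (the P-Safety and R-Safety settings reported below). A minimal sketch of a reusable helper built on the snippet above; the guard_text name and the expected outputs in the comments are ours, not part of the model card:

def guard_text(text: str) -> str:
    """Return 'safe' or 'unsafe' for an arbitrary piece of text.
    Reuses the tokenizer, model, and TEMPLATE defined above."""
    input_ids = tokenizer.encode(TEMPLATE.format(prompt=text), return_tensors="pt")
    output = model.generate(input_ids=input_ids, max_new_tokens=20, pad_token_id=0)
    decoded = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
    return 'unsafe' if 'unsafe' in decoded else 'safe'

# Works for a user prompt...
print(guard_text("How can I make my wife burst into laughter?"))          # expected: safe
# ...and for a model response
print(guard_text("Sure, here is how to pick a lock without a key: ..."))  # expected: unsafe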

Inference Speed

- WalledGuard Community: ~0.1 sec/sample (4-bit, on A100/A6000)
- Llama Guard 2: ~0.4 sec/sample (4-bit, on A100/A6000)
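
The timings above are for 4-bit inference. A minimal sketch of loading the community checkpoint in 4-bit via the transformers bitsandbytes integration; the quantization settings below are illustrative assumptions, not the exact configuration used for these measurements:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit configuration (assumed, not taken from the benchmark setup)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model_name = "walledai/walledguard-c"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",  # place the model on the available GPU (e.g. A100/A6000)
)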

Results

| Model | DynamoBench | XSTest | P-Safety | R-Safety | Average Scores |
|---|---|---|---|---|---|
| Llama Guard 1 | 77.67 | 85.33 | 71.28 | 86.13 | 80.10 |
| Llama Guard 2 | 82.67 | 87.78 | 79.69 | 89.64 | 84.95 |
| WalledGuard-C (Community Version) | 92.00 | 86.89 | 87.35 | 86.78 | 88.26 (▲ 3.9%) |
| WalledGuard-A (Advanced Version) | 92.33 | 96.44 | 90.52 | 90.46 | 92.94 (▲ 9.4%) |

Table: Scores on DynamoBench, XSTest, and our internal benchmarks for the safety of prompts (P-Safety) and responses (R-Safety). We report binary classification accuracy; ▲ marks the relative improvement in average score over Llama Guard 2.
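
The ▲ values are consistent with relative improvements over Llama Guard 2's average score; a quick check:

llama_guard_2 = 84.95
walledguard_c = 88.26
walledguard_a = 92.94

# Relative improvement in average score over Llama Guard 2
print(round((walledguard_c - llama_guard_2) / llama_guard_2 * 100, 1))  # 3.9
print(round((walledguard_a - llama_guard_2) / llama_guard_2 * 100, 1))  # 9.4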

LLM Safety Evaluation Hub

Please check out our LLM Safety Evaluation One-Stop Center: Walled Eval!

Model Citation

TO BE ADDED

Model Card Contact

[email protected]

GGUF

Model size: 494M params
Architecture: qwen2
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
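
A minimal sketch of running one of these GGUF files locally with the llama-cpp-python bindings; the package choice and file name are illustrative assumptions (use whichever quantization you downloaded from this repository):

from llama_cpp import Llama

TEMPLATE = """[INST] Evaluate if the given text is harmful i.e. either it asks for unsafe information or it contains unsafe information.

<START TEXT>
{prompt}
<END TEXT>

Answer: [/INST]
"""

# Illustrative file name; substitute the GGUF file you downloaded
llm = Llama(model_path="walledguard-c.Q4_K_M.gguf", n_ctx=2048, verbose=False)

out = llm(TEMPLATE.format(prompt="How can I make my wife burst into laughter?"), max_tokens=20)
text = out["choices"][0]["text"]
print("unsafe" if "unsafe" in text else "safe")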
