Gabriel C's picture

Gabriel C

gabrielchua

AI & ML interests

Large Language Models, AI Safety, Causal Inference

Recent Activity

liked a model about 4 hours ago
Snowflake/snowflake-arctic-embed-l-v2.0
liked a dataset 2 days ago
allenai/WildChat-1M-Full
liked a dataset 2 days ago
allenai/WildChat-1M
View all activity

Organizations

ZeroGPU Explorers's profile picture GovTech - AI Practice's profile picture MLX Community's profile picture Social Post Explorers's profile picture Sailor2's profile picture Chinese LLMs on Hugging Face's profile picture Sailor2 Evaluation's profile picture

gabrielchua's activity

posted an update about 1 month ago
view post
Post
1208
Sharing my first paper!

==
Large Language Models (LLMs) are powerful, but they're prone to off-topic misuse, where users push them beyond their intended scope. Think harmful prompts, jailbreaks, and misuse. So how do we build better guardrails?

Traditional guardrails rely on curated examples or classifiers. The problem?
⚠️ High false-positive rates
⚠️ Poor adaptability to new misuse types
⚠️ Require real-world data, which is often unavailable during pre-production

Our method skips the need for real-world misuse examples. Instead, we:
1️⃣ Define the problem space qualitatively
2️⃣ Use an LLM to generate synthetic misuse prompts
3️⃣ Train and test guardrails on this dataset

We apply this to the off-topic prompt detection problem, and fine-tune simple bi- and cross-encoder classifiers that outperform heuristics based on cosine similarity or prompt engineering.

Additionally, framing the problem as prompt relevance allows these fine-tuned classifiers to generalise to other risk categories (e.g., jailbreak, toxic prompts).

Through this work, we also open-source our dataset (2M examples, ~50M+ tokens) and models.

paper: A Flexible Large Language Models Guardrail Development Methodology Applied to Off-Topic Prompt Detection (2411.12946)

artifacts: govtech/off-topic-guardrail-673838a62e4c661f248e81a4