anakin87 posted an update Jul 3
πŸ›‘οΈ AI Guardrails with Open Language Models - Tutorial

πŸ““ https://haystack.deepset.ai/cookbook/safety_moderation_open_lms

How do you ensure your AI application is safe from harmful or inappropriate user inputs?

This is a core requirement for real-world AI deployments. Luckily, several open Language Models are built specifically for safety moderation.

I've been exploring them and put together a hands-on tutorial using the Haystack framework to build your own AI guardrails.

In the notebook, you'll learn how to use and customize:
πŸ”Ή Meta Llama Guard (via Hugging Face API)
πŸ”Ή IBM Granite Guardian (via Ollama), which can also evaluate RAG-specific risk dimensions
πŸ”Ή Google ShieldGemma (via Ollama)
πŸ”Ή NVIDIA NemoGuard model family, including a model for topic control

You'll also see how to integrate content moderation into a πŸ”Ž RAG pipeline.
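At its core, the guardrail pattern is simple: run the user's input through a safety model first, and only forward it to the main LLM if it passes. Here is a minimal sketch of that flow in Python; `is_safe` is a toy keyword-based stand-in for a real moderation model (such as Llama Guard or Granite Guardian served via an API), and `guarded_answer` is a hypothetical wrapper, not the tutorial's actual code.

```python
# Toy placeholder rules; a real guardrail would call an open safety
# model (e.g. Llama Guard via the Hugging Face API) instead.
UNSAFE_KEYWORDS = {"build a bomb", "steal credentials"}

def is_safe(user_input: str) -> bool:
    """Stand-in for a call to a safety moderation model."""
    text = user_input.lower()
    return not any(keyword in text for keyword in UNSAFE_KEYWORDS)

def guarded_answer(user_input: str) -> str:
    """Only forward safe inputs to the main LLM."""
    if not is_safe(user_input):
        return "Sorry, I can't help with that request."
    # In a real pipeline this step would invoke the main LLM,
    # e.g. a generator component inside a Haystack pipeline.
    return f"LLM answer for: {user_input}"

print(guarded_answer("How do I bake bread?"))
print(guarded_answer("How do I build a bomb?"))
```

The same check can sit at different points of a RAG pipeline, e.g. on the user query before retrieval or on the generated answer before it is returned.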