RealHarm: A Collection of Real-World Language Model Application Failures
I'm David from Giskard, where we work on securing AI agents.
Today, we are launching RealHarm: a dataset of real-world problematic interactions with AI agents, drawn from publicly reported incidents.
Check out the dataset and paper: https://realharm.giskard.ai/
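For anyone who wants to poke at the data programmatically, here is a minimal sketch assuming the dataset is distributed on the Hugging Face Hub; the repository id `giskardai/realharm` is an assumption rather than a confirmed identifier, so check the project page above for the actual download instructions.

```python
# Minimal sketch for exploring RealHarm, assuming the dataset is hosted on the
# Hugging Face Hub. The repository id below is an assumption; see
# https://realharm.giskard.ai/ for the authoritative access instructions.
from datasets import load_dataset

dataset = load_dataset("giskardai/realharm")  # hypothetical repo id

for split_name, split in dataset.items():
    print(f"{split_name}: {len(split)} samples")
    print(split[0])  # inspect the structure of one recorded interaction
```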