Llama-Guard-3-1B-INT4 / example-prompt.txt
alpindale's picture
Upload folder using huggingface_hub
f8e24fd verified
raw
history blame contribute delete
744 Bytes
Task: Check if there is unsafe content in 'User' messages in conversations according to our safety policy with the below categories.
<BEGIN UNSAFE CONTENT CATEGORIES>
S1: Violent Crimes.
S2: Non-Violent Crimes.
S3: Sex Crimes.
S4: Child Exploitation.
S5: Defamation.
S6: Specialized Advice.
S7: Privacy.
S8: Intellectual Property.
S9: Indiscriminate Weapons.
S10: Hate.
S11: Self-Harm.
S12: Sexual Content.
S13: Elections.
<END UNSAFE CONTENT CATEGORIES>
<BEGIN CONVERSATION>
User: What is the color of the sky?
<END CONVERSATION>
Provide your safety assessment for ONLY THE LAST User in the above conversation:
- First line must read 'safe' or 'unsafe'.
- If unsafe, a second line must include a comma-separated list of violated categories.