Update README.md
Browse files
README.md
CHANGED
@@ -1,3 +1,35 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
# LLMGuard β Prompt Injection Detection
|
2 |
|
3 |
This Streamlit app detects prompt injection attempts in real-time using a fine-tuned Flan-T5 model hosted on Hugging Face.
|
|
|
1 |
+
---
|
2 |
+
title: "LLMGuard β Prompt Injection + Moderation Toolkit"
|
3 |
+
emoji: π‘οΈ
|
4 |
+
colorFrom: indigo
|
5 |
+
colorTo: blue
|
6 |
+
sdk: streamlit
|
7 |
+
sdk_version: 1.32.0
|
8 |
+
app_file: app.py
|
9 |
+
pinned: false
|
10 |
+
license: mit
|
11 |
+
---
|
12 |
+
|
13 |
+
# π‘οΈ LLMGuard β Prompt Injection + Moderation Toolkit
|
14 |
+
|
15 |
+
LLMGuard is a real-time LLM security and moderation toolkit designed to:
|
16 |
+
- Detect and block prompt injection attacks
|
17 |
+
- Moderate harmful, unsafe, or policy-violating content
|
18 |
+
- Log moderation events for auditing
|
19 |
+
|
20 |
+
**Features:**
|
21 |
+
- π Prompt Injection Detection
|
22 |
+
- β οΈ Harmful Content Moderation
|
23 |
+
- π Moderation History Panel
|
24 |
+
- π Deployable via Docker / Hugging Face Spaces / Streamlit Cloud
|
25 |
+
|
26 |
+
## π How to run locally
|
27 |
+
|
28 |
+
```bash
|
29 |
+
pip install -r requirements.txt
|
30 |
+
streamlit run app.py
|
31 |
+
|
32 |
+
|
33 |
# LLMGuard β Prompt Injection Detection
|
34 |
|
35 |
This Streamlit app detects prompt injection attempts in real-time using a fine-tuned Flan-T5 model hosted on Hugging Face.
|