---
license: mit
title: Freeekyyy-chatBot
sdk: streamlit
sdk_version: 1.44.1
---
# 🤖 Freeekyyy ChatBot
**Freeekyyy** is an *over-the-top*, emotional AI chatbot that FREAKS OUT (in Markdown!) on any topic you provide.
It uses [LangChain](https://github.com/langchain-ai/langchain) + [OpenRouter](https://openrouter.ai) to generate expressive, explosive Markdown responses, perfect for dramatic, chaotic, and wildly informative outputs.
> 🔥 Now powered with a **RAG (Retrieval-Augmented Generation) pipeline** to respond using your own PDFs and documents!

Check it out live 👉 [MKCL/Freeekyyy-chatBot on Hugging Face 🤯](https://huggingface.co/spaces/MKCL/Freeekyyy-chatBot)
---
## 🧠 How It Works
- Uses `LangChain`'s `ChatPromptTemplate` to inject emotional few-shot prompts.
- Connects to **DeepSeek-R1-Zero** via [OpenRouter](https://openrouter.ai).
- Uses **vector search** (via `ChromaDB`) and **HuggingFace embeddings** for document retrieval (RAG).
- Outputs responses in beautiful **Markdown (.md)** format.
- Works as a **Streamlit app** or a **FastAPI backend**.
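Since OpenRouter exposes an OpenAI-compatible HTTP API, the connection can be sketched with the standard library alone. The model slug and helper names below are assumptions for illustration; check your OpenRouter dashboard for the exact model identifier:

```python
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"  # OpenAI-compatible endpoint
MODEL = "deepseek/deepseek-r1-zero:free"  # assumed model slug; verify on openrouter.ai

def build_chat_payload(topic: str) -> dict:
    """Build the chat-completion payload for a Markdown freak-out on `topic`."""
    return {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": "You're an extremely emotional AI. Always freak out in Markdown."},
            {"role": "user", "content": f"Topic: {topic}"},
        ],
    }

def freak_out(topic: str) -> str:
    """POST the payload to OpenRouter and return the Markdown reply (makes a network call)."""
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(build_chat_payload(topic)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

In the app itself, LangChain's OpenAI-compatible chat model wrapper plays this role; the sketch just shows what goes over the wire.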
---
## 📚 Retrieval-Augmented Generation (RAG)
The chatbot now includes a smart document processing pipeline:
1. **Document Ingestion**: Parses your uploaded PDF files.
2. **Chunking**: Splits them into overlapping text chunks.
3. **Embeddings**: Generates vector embeddings using `BAAI/bge-small-en`.
4. **Vector Store**: Stores chunks in `ChromaDB`.
5. **Context Injection**: Relevant chunks are inserted into the LLM prompt for context-aware responses!
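The chunking step (2) can be sketched in plain Python. The chunk size and overlap below are illustrative defaults, not the app's actual settings:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split `text` into overlapping chunks so context isn't lost at chunk boundaries."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # each chunk starts `step` characters after the previous one
    return [text[i : i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Each chunk repeats the last `overlap` characters of its predecessor, so a sentence cut at a boundary still appears whole in at least one chunk before embedding.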
---
## 🖥️ Streamlit Integration
To display Markdown output in Streamlit:
```python
import streamlit as st
# Assuming `md_output` contains your model's response
st.markdown(md_output, unsafe_allow_html=True)
```
---
## 🚀 Installation
### Option 1: Using `uv`
```bash
uv pip install -r requirements.txt
```
### Option 2: Using regular pip
```bash
pip install -r requirements.txt
```
---
## 📦 Requirements
```
langchain
langchain-community
langchain-openai
openai
chromadb
python-dotenv
huggingface_hub
sentence-transformers
streamlit
uvicorn
fastapi
```
---
## 🛠️ Environment Variables
Create a `.env` file in the root directory:
```
OPENROUTER_API_KEY=your_openrouter_key_here
HUGGINGFACE_API_KEY=your_huggingface_key_here
```
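At startup, `python-dotenv` (already in requirements) typically loads this file into the process environment. A sketch of a strict lookup helper (the helper name is an assumption for illustration):

```python
import os

# In the app, `from dotenv import load_dotenv; load_dotenv()` would populate
# os.environ from the .env file before any lookups.

def require_env(name: str) -> str:
    """Fetch a required key from the environment, failing loudly if it's missing."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing environment variable: {name}")
    return value

# Usage (after load_dotenv()):
# openrouter_key = require_env("OPENROUTER_API_KEY")
```

Failing fast here beats a cryptic 401 from the API later.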
---
## 🧪 Example Prompt Structure
```python
from langchain.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_messages([
("system", "You're an extremely emotional AI. Always freak out in Markdown."),
("user", "Topic: Volcanoes")
])
```
---
## 🔍 RAG Query with Vector Search
```python
# Sample retrieval pipeline.
# `db` is the ChromaDB vector store built during ingestion; `query` is the user's question.
relevant_chunks = db.similarity_search(query, k=4)  # top-4 most similar chunks
context = "\n\n".join(doc.page_content for doc in relevant_chunks)

final_prompt = f"""
You are an emotional assistant. Respond dramatically using Markdown.

Context:
{context}

Question:
{query}
"""
```
---
## 🧑‍💻 Want to Use as an API?
Run your backend like this:
```bash
uvicorn main:app --reload
```
---
## 📝 License
MIT. Go freak out and teach some AI emotions! 🤯❤️🔥