---
license: mit
title: Freeekyyy-chatBot
sdk: streamlit
sdk_version: 1.44.1
---

# 🤖 Freeekyyy ChatBot

**Freeekyyy** is an *over-the-top*, emotional AI chatbot that FREAKS OUT (in Markdown!) about any topic you give it.

It uses [LangChain](https://github.com/langchain-ai/langchain) + [OpenRouter](https://openrouter.ai) to generate expressive, explosive Markdown responses: perfect for dramatic, chaotic, and wildly informative outputs.

> 🔥 Now powered by a **RAG (Retrieval-Augmented Generation) pipeline** to respond using your own PDFs and documents!

Check it out live 👉 [MKCL/Freeekyyy-chatBot on Hugging Face 🤯](https://huggingface.co/spaces/MKCL/Freeekyyy-chatBot)

---

## 🧠 How It Works

- Uses `LangChain`'s `ChatPromptTemplate` to inject emotional few-shot prompts.
- Connects to **DeepSeek-R1-Zero** via [OpenRouter](https://openrouter.ai) (see the sketch after this list).
- Uses **vector search** (via `ChromaDB`) and **HuggingFace embeddings** for document retrieval (RAG).
- Outputs responses in beautiful **Markdown (.md)** format.
- Works as a **Streamlit app** or a **FastAPI backend**.
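
A minimal sketch of that connection, assuming `langchain-openai`'s `ChatOpenAI` pointed at OpenRouter's OpenAI-compatible endpoint; the model slug `deepseek/deepseek-r1-zero:free` is an assumption, so check https://openrouter.ai/models for the exact id:

```python
import os

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

load_dotenv()  # reads OPENROUTER_API_KEY from your .env (see below)

# OpenRouter exposes an OpenAI-compatible API, so ChatOpenAI works
# once base_url points at it. The model slug is an assumption.
llm = ChatOpenAI(
    model="deepseek/deepseek-r1-zero:free",
    base_url="https://openrouter.ai/api/v1",
    api_key=os.getenv("OPENROUTER_API_KEY"),
)

print(llm.invoke("Topic: Volcanoes").content)  # Markdown freak-out
```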

---

## 📚 Retrieval-Augmented Generation (RAG)

The chatbot now includes a smart document-processing pipeline (sketched after the steps):

1. **Document Ingestion**: Parses your uploaded PDF files.
2. **Chunking**: Splits them into overlapping text chunks.
3. **Embeddings**: Generates vector embeddings using `BAAI/bge-small-en`.
4. **Vector Store**: Stores chunks in `ChromaDB`.
5. **Context Injection**: Relevant chunks are inserted into the LLM prompt for context-aware responses!
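
A minimal sketch of that pipeline, assuming `langchain-community` components and `pypdf` for PDF parsing (not listed in the requirements); the file name and chunk sizes are illustrative:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Chroma

# 1. Ingestion: parse an uploaded PDF into Documents
docs = PyPDFLoader("my_document.pdf").load()

# 2. Chunking: overlapping windows preserve context across boundaries
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_documents(docs)

# 3 + 4. Embeddings with BAAI/bge-small-en, stored in ChromaDB
embeddings = HuggingFaceEmbeddings(model_name="BAAI/bge-small-en")
db = Chroma.from_documents(chunks, embeddings)  # `db` is reused below
```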

---

## 🖥️ Streamlit Integration

To display Markdown output in Streamlit:

```python
import streamlit as st

# Assuming `md_output` contains your model's response
st.markdown(md_output, unsafe_allow_html=True)
```

---

## 🚀 Installation

### Option 1: Using `uv`

```bash
uv pip install -r requirements.txt
```

### Option 2: Using regular `pip`

```bash
pip install -r requirements.txt
```

---

## 📦 Requirements

```
langchain
langchain-community
langchain-openai
openai
chromadb
python-dotenv
huggingface_hub
sentence-transformers
streamlit
uvicorn
fastapi
```

---

## 🛠️ Environment Variables

Create a `.env` file in the root directory:

```
OPENROUTER_API_KEY=your_openrouter_key_here
HUGGINGFACE_API_KEY=your_huggingface_key_here
```
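
Loading them at runtime is a one-liner with `python-dotenv` (already in the requirements):

```python
import os

from dotenv import load_dotenv

load_dotenv()  # pulls the .env values into the process environment

openrouter_key = os.getenv("OPENROUTER_API_KEY")
huggingface_key = os.getenv("HUGGINGFACE_API_KEY")
```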

---

## 🧪 Example Prompt Structure

```python
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You're an extremely emotional AI. Always freak out in Markdown."),
    ("user", "Topic: {topic}"),
])

# Fill the template with a concrete topic
messages = prompt.format_messages(topic="Volcanoes")
```
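
To actually run the prompt, it can be piped into the OpenRouter-backed `llm` from the earlier sketch using LangChain's LCEL `|` operator:

```python
# `prompt` and `llm` are defined in the snippets above
chain = prompt | llm

result = chain.invoke({"topic": "Volcanoes"})
print(result.content)
```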

---

## 🔍 RAG Query with Vector Search

```python
# Sample retrieval pipeline; `db` is the ChromaDB vector store
# built in the ingestion sketch above
query = "Why do volcanoes erupt?"  # example user question

# Pull the 4 most similar chunks to ground the answer
relevant_chunks = db.similarity_search(query, k=4)
context = "\n\n".join(doc.page_content for doc in relevant_chunks)

final_prompt = f"""
You are an emotional assistant. Respond dramatically using Markdown.

Context:
{context}

Question:
{query}
"""
```
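
A minimal follow-through, assuming the `llm` object from the earlier OpenRouter sketch:

```python
# Send the assembled, context-stuffed prompt to the model
response = llm.invoke(final_prompt)
print(response.content)  # dramatic, context-aware Markdown
```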

---

## 🧑‍💻 Want to Use as an API?

Run your backend like this:

```bash
uvicorn main:app --reload
```
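
The Space's `main.py` isn't shown here, so the following is a hypothetical minimal shape of what that command expects: a module named `main` exposing a FastAPI instance called `app`. The `/chat` route and `ChatRequest` model are illustrative, not the repo's actual API.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Freeekyyy ChatBot API")

class ChatRequest(BaseModel):
    topic: str

@app.post("/chat")
def chat(req: ChatRequest) -> dict:
    # In the real app this would call the LangChain/RAG pipeline;
    # here we return a placeholder Markdown freak-out.
    return {"markdown": f"# 😱 {req.topic}?! I CAN'T EVEN!!!"}
```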

---

## 📜 License

MIT. Go freak out and teach some AI emotions! 🤯❤️🔥