---
license: mit
title: Freeekyyy-chatBot
sdk: streamlit
sdk_version: 1.44.1
---
# 🤖 Freeekyyy ChatBot

Freeekyyy is an over-the-top, emotional AI chatbot that FREAKS OUT (in Markdown!) on any topic you provide.

It uses LangChain + OpenRouter to generate expressive, explosive Markdown responses - perfect for dramatic, chaotic, and wildly informative outputs.

🔥 Now powered with a RAG (Retrieval-Augmented Generation) pipeline to respond using your own PDFs and documents!

Check it out live 👉 MKCL/Freeekyyy-chatBot on Hugging Face 🤯
## 🧠 How It Works

- Uses LangChain's `ChatPromptTemplate` to inject emotional few-shot prompts.
- Connects to DeepSeek-R1-Zero via OpenRouter.
- Uses vector search (via `ChromaDB`) and HuggingFace embeddings for document retrieval (RAG).
- Outputs responses in beautiful Markdown (`.md`) format.
- Works as a Streamlit app or a FastAPI backend.
## 📚 Retrieval-Augmented Generation (RAG)

The chatbot now includes a smart document-processing pipeline:

- **Document Ingestion**: Parses your uploaded PDF files.
- **Chunking**: Splits them into overlapping text chunks.
- **Embeddings**: Generates vector embeddings using `BAAI/bge-small-en`.
- **Vector Store**: Stores chunks in `ChromaDB`.
- **Context Injection**: Relevant chunks are inserted into the LLM prompt for context-aware responses!
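The chunking step can be illustrated with a minimal, dependency-free sketch (the real pipeline would typically use a LangChain text splitter such as `RecursiveCharacterTextSplitter`; the sizes below are illustrative, not the app's actual settings):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks so context isn't lost at boundaries."""
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks
```

Each chunk shares `overlap` characters with its predecessor, so a sentence cut at one boundary still appears intact in a neighboring chunk.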
## 🖥️ Streamlit Integration

To display Markdown output in Streamlit:

```python
import streamlit as st

# Assuming `md_output` contains your model's response
st.markdown(md_output, unsafe_allow_html=True)
```
## 🚀 Installation

**Option 1: Using `uv`**

```shell
uv pip install -r requirements.txt
```

**Option 2: Using regular `pip`**

```shell
pip install -r requirements.txt
```
## 📦 Requirements

```text
langchain
langchain-community
langchain-openai
openai
chromadb
python-dotenv
huggingface_hub
sentence-transformers
streamlit
uvicorn
fastapi
```
## 🛠️ Environment Variables

Create a `.env` file in the root directory:

```env
OPENROUTER_API_KEY=your_openrouter_key_here
HUGGINGFACE_API_KEY=your_huggingface_key_here
```
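At startup the app can load these variables with `python-dotenv`'s `load_dotenv()`. As a dependency-free illustration of what that call does, a minimal loader might look like:

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader sketch (the app can simply call python-dotenv's
    load_dotenv() instead). Expects KEY=value lines; '#' starts a comment."""
    if not os.path.exists(path):
        return
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault: real environment variables take precedence over .env
            os.environ.setdefault(key.strip(), value.strip())
```

After loading, the keys are read with `os.getenv("OPENROUTER_API_KEY")` wherever the client is constructed.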
## 🧪 Example Prompt Structure

```python
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You're an extremely emotional AI. Always freak out in Markdown."),
    ("user", "Topic: Volcanoes"),
])
```
## 🔍 RAG Query with Vector Search

```python
# Sample retrieval pipeline: fetch the top-4 most similar chunks
relevant_chunks = db.similarity_search(query, k=4)
context = "\n\n".join([doc.page_content for doc in relevant_chunks])

final_prompt = f"""
You are an emotional assistant. Respond dramatically using Markdown.

Context:
{context}

Question:
{query}
"""
```
## 🧑‍💻 Want to Use It as an API?

Run your backend like this:

```shell
uvicorn main:app --reload
```
## 📄 License

MIT - go freak out and teach some AI emotions! 🤯❤️🔥