---
title: FAQ Chatbot Using RAG
emoji: 💬
colorFrom: blue
colorTo: indigo
sdk: streamlit
sdk_version: "1.30.0"
app_file: app.py
pinned: false
---

# FAQ Chatbot Using RAG for Customer Support - Setup Instructions

Follow these steps to set up and run the e-commerce FAQ chatbot, optimized for hardware with 16-19GB RAM and 8-11GB GPU.
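The core of the chatbot is a retrieval-augmented generation loop: embed the user's question, find the most similar FAQ entries, and feed them to the language model as context. The sketch below illustrates only the retrieval step with a toy bag-of-words embedding; the function names and the FAQ entries are illustrative, and the real app would use a proper sentence-embedding model.

```python
# Minimal sketch of the retrieval step in a RAG FAQ chatbot.
# embed() is a toy stand-in (word counts), NOT the app's actual embedder.
from collections import Counter
import math

FAQ = [
    ("How do I track my order?", "Use the tracking link in your confirmation email."),
    ("What is the return policy?", "Items can be returned within 30 days of delivery."),
    ("Do you ship internationally?", "Yes, we ship to most countries worldwide."),
]

def embed(text):
    # Toy embedding: lowercase word counts.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Rank FAQ entries by similarity between query and stored question.
    q = embed(query)
    scored = sorted(FAQ, key=lambda qa: cosine(q, embed(qa[0])), reverse=True)
    return scored[:k]

print(retrieve("how can I track my order?")[0][1])
# → Use the tracking link in your confirmation email.
```

In the full app, the retrieved answers are inserted into the prompt before the question so the language model grounds its response in the FAQ content.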
This implementation includes several optimizations for systems with 16-19GB RAM:

- The embedding and retrieval components work efficiently even on limited hardware
- Response generation speed depends on the model size and available GPU memory
- For optimal performance with an 8GB GPU, stick with the Phi-2 model
- For faster responses with less accuracy, use TinyLlama-1.1B
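The model-choice rule above can be captured in a small helper. `pick_model` is a hypothetical function, not part of the app's actual code; the Hugging Face model ids shown are the public repositories for Phi-2 and TinyLlama-1.1B-Chat.

```python
# Sketch of the model-selection rule from the notes above.
# pick_model is an illustrative helper, not an existing API in this repo.
def pick_model(gpu_memory_gb: float) -> str:
    """Choose a chat model id based on available GPU memory (in GB)."""
    if gpu_memory_gb >= 8:
        # Phi-2 fits on an 8GB GPU and gives better answers.
        return "microsoft/phi-2"
    # Fall back to the smaller, faster (but less accurate) TinyLlama.
    return "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

print(pick_model(8))  # → microsoft/phi-2
print(pick_model(4))  # → TinyLlama/TinyLlama-1.1B-Chat-v1.0
```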