---
title: Phi 2 Fine Tuned Chatbot
emoji: π
colorFrom: purple
colorTo: pink
sdk: gradio
sdk_version: 5.22.0
app_file: app.py
pinned: false
short_description: Phi-2 Conversational Assistant
---

# Phi-2 Conversational Assistant

A fine-tuned version of Microsoft's Phi-2 model for conversational AI, deployed with a Gradio web interface.

## Features

- Built on Microsoft's Phi-2 language model
- Fine-tuned for conversational tasks
- Interactive web interface using Gradio
- CPU-based inference for accessibility (see the loading sketch after this list)
- Configurable response parameters
- Context-aware conversations
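
A minimal sketch of how this setup can be loaded for CPU inference, assuming the PEFT adapter saved in `phi2-finetuned-final/` (see Project Structure below); `app.py` holds the actual loading code:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "microsoft/phi-2"        # base model on the Hugging Face Hub
ADAPTER_DIR = "phi2-finetuned-final"  # fine-tuned adapter + tokenizer files

# Load the tokenizer saved alongside the adapter.
tokenizer = AutoTokenizer.from_pretrained(ADAPTER_DIR)

# Load the base model in float32; without a device_map it stays on the CPU.
base_model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.float32)

# Apply the fine-tuned adapter weights on top of the base model.
model = PeftModel.from_pretrained(base_model, ADAPTER_DIR)
model.eval()
```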

## Installation

1. Clone the repository (see the example below)
2. Install dependencies:

```bash
pip install -r requirements.txt
```
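
For step 1, a Hugging Face Space can be cloned with plain `git`; the URL here is a placeholder, so substitute this Space's actual path:

```bash
# Placeholder path -- replace <username>/<space-name> with the real Space.
git clone https://huggingface.co/spaces/<username>/<space-name>
cd <space-name>
pip install -r requirements.txt
```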

## Usage

Run the application:

```bash
python app.py
```

The application will launch a local web server. You can interact with the chatbot through the Gradio interface.
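
A rough sketch of how the Gradio side can be wired up, assuming the `tokenizer` and `model` objects from the loading sketch above and a simple `User:`/`Assistant:` prompt format; treat `app.py` as the authoritative implementation:

```python
import gradio as gr

def generate_reply(message, history):
    # Fold prior turns plus the new message into one prompt, then generate a
    # reply (decoding settings are listed under Model Configuration below).
    prompt = ""
    for turn in history:  # with type="messages", history is a list of role/content dicts
        role = "User" if turn["role"] == "user" else "Assistant"
        prompt += f"{role}: {turn['content']}\n"
    prompt += f"User: {message}\nAssistant:"

    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=96)
    # Return only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

demo = gr.ChatInterface(fn=generate_reply, type="messages",
                        title="Phi-2 Conversational Assistant")

if __name__ == "__main__":
    demo.launch()  # serves the chat UI locally (default: http://127.0.0.1:7860)
```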

## Model Configuration

The chatbot uses the following generation settings (see the sketch after this list):

- Maximum new tokens: 96
- Temperature: 0.6
- Top-p sampling: 0.7
- No-repeat n-gram size: 3
- Repetition penalty: 1.2
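
These map directly onto keyword arguments of the Hugging Face `generate()` API; an illustrative call, again assuming the `tokenizer` and `model` objects from the loading sketch (the prompt text is only an example):

```python
prompt = "User: Explain what Phi-2 is in one sentence.\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")

output_ids = model.generate(
    **inputs,
    max_new_tokens=96,          # cap on the length of each reply
    do_sample=True,             # enables temperature / top-p sampling
    temperature=0.6,
    top_p=0.7,
    no_repeat_ngram_size=3,
    repetition_penalty=1.2,
    pad_token_id=tokenizer.eos_token_id,  # Phi-2 has no dedicated pad token
)

reply = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:],  # strip the echoed prompt
    skip_special_tokens=True,
)
print(reply)
```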

## Project Structure

```plaintext
.
├── app.py                   # Main application file
├── requirements.txt         # Project dependencies
└── phi2-finetuned-final/    # Fine-tuned model files
    ├── adapter_config.json
    ├── adapter_model.safetensors
    └── tokenizer files
```