Update README.md
README.md
@@ -38,15 +38,15 @@ That's it! Head to the next step and start building your application!
<details>
<summary>🏗️ Building Your GPT-4 Turbo Application with DALL-E 3 Image Generation</summary>

-1. Clone [this](https://github.com/AI-Maker-Space/

``` bash
-git clone https://github.com/AI-Maker-Space/
```

2. Navigate into this repo

``` bash
-cd
```

3. Install the packages required for this Python environment in `requirements.txt`.

@@ -54,12 +54,12 @@ That's it! Head to the next step and start building your application!
``` bash
pip install -r requirements.txt
```

-4.

``` bash
-OPENAI_API_KEY=
```

-
``` bash
chainlit run app.py -w
```

@@ -91,7 +91,7 @@ Awesome! Time to throw it into a docker container and prepare it for shipping!
2. Run and test the Docker image locally using the `run` command. The `-p` parameter maps our **host port** (left of the `:`) to our **container port** (right of the `:`).

``` bash
-docker run -p 7860:7860 llm-app
```

3. Visit http://localhost:7860 in your browser to see if the app runs correctly.

<details>
<summary>🏗️ Building Your GPT-4 Turbo Application with DALL-E 3 Image Generation</summary>

+1. Clone [this](https://github.com/AI-Maker-Space/GPT4AppWithDALLE3) repo.

``` bash
+git clone https://github.com/AI-Maker-Space/GPT4AppWithDALLE3.git
```

2. Navigate into this repo

``` bash
+cd GPT4AppWithDALLE3
```

3. Install the packages required for this Python environment in `requirements.txt`.

``` bash
pip install -r requirements.txt
```

+4. Export your OpenAI API key to your local environment using:

``` bash
+export OPENAI_API_KEY=XXXX
```

+5. Let's try deploying it locally. Make sure you're in the Python environment where you installed Chainlit and OpenAI. Run the app using Chainlit. This may take a minute to run.

``` bash
chainlit run app.py -w
```
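
Once exported, the key is read from the shell environment at runtime, so the `export` must happen in the same session that runs `chainlit run app.py -w`. A minimal pre-flight sketch (the `KEY_STATUS` variable is illustrative, not part of this repo):

``` bash
# Illustrative check (not from the repo): confirm the key from the export
# step is visible in the current shell before launching the app.
if [ -n "${OPENAI_API_KEY:-}" ]; then
  KEY_STATUS="set"
else
  KEY_STATUS="missing"
fi
echo "OPENAI_API_KEY is ${KEY_STATUS}"
```

If this prints `missing`, re-run the `export` in the same terminal before starting Chainlit.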

2. Run and test the Docker image locally using the `run` command. The `-p` parameter maps our **host port** (left of the `:`) to our **container port** (right of the `:`).

``` bash
+docker run -p 7860:7860 -e OPENAI_API_KEY=XXXX llm-app
```

3. Visit http://localhost:7860 in your browser to see if the app runs correctly.
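
The `HOST:CONTAINER` reading of `-p` can be sketched in plain shell (the `PORT_MAP` variable here is illustrative, not something the repo defines):

``` bash
# -p maps HOST:CONTAINER: the left side is the port on your machine,
# the right side is the port the app listens on inside the container.
PORT_MAP="7860:7860"
HOST_PORT="${PORT_MAP%%:*}"      # everything before the first ':'
CONTAINER_PORT="${PORT_MAP##*:}" # everything after the last ':'
echo "visit http://localhost:${HOST_PORT} (container listens on ${CONTAINER_PORT})"
```

Both sides happen to be 7860 here, but only the left number controls the URL you open in the browser.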