Update README.md
README.md CHANGED
@@ -564,7 +564,7 @@ The HyperCLOVA X SEED Think model is built on a custom LLM architecture based on
 After downloading the model checkpoint to a local path (`/path/to/hyperclova-x-seed-think-14b`), you can perform text inference by running the following commands on a GPU environment with A100 or higher.
 
 ```bash
-python -m vllm.entrypoints.api_server --model=/path/to/hyperclova-x-seed-think-14b --trust_remote_code --port=8000
+python -m vllm.entrypoints.openai.api_server --model=/path/to/hyperclova-x-seed-think-14b --trust_remote_code --port=8000
 
 curl http://localhost:8000/v1/completions \
   -H "Content-Type: application/json" \