Update README.md
README.md
CHANGED
````diff
@@ -5,8 +5,8 @@ Quick start:
 ```
 pip install huggingface-hub[cli] openvino-genai==2025.2
 curl -O https://raw.githubusercontent.com/helena-intel/snippets/refs/heads/main/llm_chat/python/llm_chat_manual.py
-huggingface-cli download helenai/Phi-4-mini-
-python llm_chat_manual.py Phi-4-mini-
+huggingface-cli download helenai/Phi-4-mini-instruct-ov-sym --local-dir Phi-4-mini-instruct-ov-sym
+python llm_chat_manual.py Phi-4-mini-instruct-ov-sym CPU
 ```
 
 In the last line, change CPU to GPU or NPU to run on Intel GPU or NPU. For NPU inference, make sure that the latest version of the NPU driver is installed ([Windows](https://www.intel.com/content/www/us/en/download/794734/intel-npu-driver-windows.html), [Linux](https://github.com/intel/linux-npu-driver/releases))
````
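The last command in the updated quick start passes two positional arguments: the model directory and the OpenVINO device name (`CPU`, `GPU`, or `NPU`). A minimal sketch of that command-line contract follows; it is hypothetical (the real `llm_chat_manual.py` is not reproduced here), and only illustrates how such a script might validate its arguments before handing the device string to OpenVINO:

```python
import sys

# OpenVINO device identifiers mentioned in the README
SUPPORTED_DEVICES = {"CPU", "GPU", "NPU"}

def parse_args(argv):
    """Parse `<model_dir> <device>` as in `python llm_chat_manual.py <model_dir> <device>`.

    Hypothetical helper: the real script's internals are not shown in the diff;
    this only sketches the invocation documented in the README.
    """
    if len(argv) != 2:
        raise SystemExit("usage: llm_chat_manual.py <model_dir> <device>")
    model_dir, device = argv
    device = device.upper()
    if device not in SUPPORTED_DEVICES:
        raise SystemExit(
            f"unsupported device {device!r}; choose one of {sorted(SUPPORTED_DEVICES)}"
        )
    return model_dir, device

print(parse_args(["Phi-4-mini-instruct-ov-sym", "CPU"]))
# → ('Phi-4-mini-instruct-ov-sym', 'CPU')
```

In OpenVINO GenAI, a device string like this is typically passed straight to the pipeline constructor (e.g. `openvino_genai.LLMPipeline(model_dir, device)`), which is why switching from CPU to GPU or NPU is a one-argument change.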