Update `README.md` to use TEI v1.7 instead

#18
opened by alvarobartt (HF Staff)
Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -203,13 +203,13 @@ print(scores.tolist())
 You can either run / deploy TEI on NVIDIA GPUs as:
 
 ```bash
-docker run --gpus all -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:1.7.2 --model-id Qwen/Qwen3-Embedding-8B --dtype float16
+docker run --gpus all -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:1.7 --model-id Qwen/Qwen3-Embedding-8B --dtype float16
 ```
 
 Or on CPU devices as:
 
 ```bash
-docker run -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.7.2 --model-id Qwen/Qwen3-Embedding-8B --dtype float16
+docker run -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.7 --model-id Qwen/Qwen3-Embedding-8B --dtype float16
 ```
 
 And then, generate the embeddings sending a HTTP POST request as:
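For context on the step the README introduces last ("generate the embeddings sending a HTTP POST request"), a minimal sketch of such a request against TEI's `/embed` endpoint is shown below. It assumes a container started with one of the commands above is listening on `localhost:8080`; the `build_embed_request` helper name is ours, not part of TEI.

```python
import json
import urllib.request

def build_embed_request(texts, base_url="http://localhost:8080"):
    """Build the URL, JSON body, and headers for a TEI /embed request.

    TEI's /embed endpoint accepts a JSON object with an "inputs" field
    holding the text(s) to embed.
    """
    body = json.dumps({"inputs": texts}).encode("utf-8")
    headers = {"Content-Type": "application/json"}
    return f"{base_url}/embed", body, headers

def embed(texts, base_url="http://localhost:8080"):
    """POST texts to a running TEI instance and return embedding vectors."""
    url, body, headers = build_embed_request(texts, base_url)
    req = urllib.request.Request(url, data=body, headers=headers)
    with urllib.request.urlopen(req) as resp:
        # Response is a JSON list with one embedding vector per input text.
        return json.loads(resp.read())

# Usage (requires a running TEI container on localhost:8080):
# vectors = embed(["What is the capital of France?"])
```

Note that only the image tag changes in this PR; the request shape is unaffected by pinning `1.7` versus `1.7.2`.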