DevShubham committed on
Commit d7db01b · verified · 1 Parent(s): 8679f12

Update README.md

Files changed (1)
  1. README.md +0 -6
README.md CHANGED
@@ -110,9 +110,6 @@ The following clients/libraries will automatically download models for you, prov
 - LoLLMS Web UI
 - Faraday.dev
 
-### In `text-generation-webui`
-
-Under Download Model, you can enter the model repo: TheBloke/CodeLlama-13B-Instruct-GGUF and below it, a specific filename to download, such as: codellama-13b-instruct.q4_K_M.gguf.
 
 Then click Download.
 
@@ -149,9 +146,6 @@ pip3 install hf_transfer
 
 And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
 
-```shell
-HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/CodeLlama-13B-Instruct-GGUF codellama-13b-instruct.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
-```
 
 Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
 </details>
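
For reference, the deleted lines dropped the README's example download command. A minimal sketch of that usage, taken from the removed lines and assuming `huggingface-cli` and `hf_transfer` are already installed:

```shell
# Enable hf_transfer for this command only, then download a single GGUF file
# into the current directory (repo and filename as given in the removed README lines)
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/CodeLlama-13B-Instruct-GGUF codellama-13b-instruct.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

On Windows CMD, per the line that remains in the README, run `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` first and then the same `huggingface-cli download` command.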
 