Tags: Text Generation · Transformers · GGUF · English · llama
TheBloke committed · Commit 5919d36 · 1 Parent(s): f1163be

Upload README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -180,7 +180,7 @@ CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
 from ctransformers import AutoModelForCausalLM
 
 # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
-llm = AutoModelForCausalLM.from_pretrained("TheBloke/orca_mini_v3_13B-GGML", model_file="orca_mini_v3_13b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
+llm = AutoModelForCausalLM.from_pretrained("TheBloke/orca_mini_v3_13B-GGUF", model_file="orca_mini_v3_13b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
 
 print(llm("AI is going to"))
 ```
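The one-line change above points the README at the GGUF repo instead of the legacy GGML one, matching the `.gguf` model file it already loads. As a quick sanity check when mixing the two formats locally, a minimal sketch (the `is_gguf` helper is hypothetical, not part of ctransformers) can inspect a file's magic bytes: per the GGUF specification, every GGUF file begins with the 4-byte ASCII magic "GGUF".

```python
def is_gguf(path: str) -> bool:
    """Return True if the file at `path` starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        # GGUF files open with the ASCII magic "GGUF"; legacy GGML files do not.
        return f.read(4) == b"GGUF"
```

This avoids a failed (and slow) model load when a stale GGML file is passed where a GGUF file is expected.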