ibrahim-khadraoui-TII committed b3e477c (verified) · 1 parent: ccd44d8

Update README.md

Files changed (1): README.md (+1, −1)
README.md CHANGED
@@ -78,7 +78,7 @@ For vLLM, simply start a server by executing the command below:
 vllm serve tiiuae/Falcon-H1-1B-Instruct --tensor-parallel-size 2 --data-parallel-size 1
 ```
 
-### `llama.cpp`
+### 🦙 llama.cpp
 
 While we are working on integrating our architecture directly into `llama.cpp` library, you can install our fork of the library and use it directly: https://github.com/tiiuae/llama.cpp-Falcon-H1
 Use the same installing guidelines as `llama.cpp`.
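For context, the fork referenced in the diff is expected to build the same way as upstream `llama.cpp`. Below is a minimal sketch, assuming the standard CMake build and the current upstream `llama-cli` binary name and output path (which may differ in the fork); the GGUF file path is a placeholder, not a file shipped with the repository:

```bash
# Clone the Falcon-H1 fork of llama.cpp (URL from the README above)
git clone https://github.com/tiiuae/llama.cpp-Falcon-H1
cd llama.cpp-Falcon-H1

# Build with CMake, following the upstream llama.cpp build instructions
cmake -B build
cmake --build build --config Release

# Run inference against a local Falcon-H1 GGUF file
# (./falcon-h1-1b-instruct.gguf is a hypothetical path; binary name/location
#  assumes current upstream conventions and may differ in this fork)
./build/bin/llama-cli -m ./falcon-h1-1b-instruct.gguf -p "Hello, Falcon-H1!"
```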