DeepQ Collection
Small LMs with reasoning ability, built on the Qwen base model.
We advise you to clone llama.cpp and install it following the official guide; we track the latest version of llama.cpp. In the demonstration below, we assume you are running commands from inside the llama.cpp repository.
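As a rough sketch of the setup step above (an assumption based on llama.cpp's CMake-based build; consult the official guide for platform-specific backends such as CUDA or Metal), the clone-and-build steps can be wrapped in a function for review before running:

```shell
# Sketch, assuming llama.cpp builds with CMake as described in its README.
# Wrapped in a function so the steps can be inspected; call build_llama_cpp
# yourself to actually clone and compile (this may take several minutes).
build_llama_cpp() {
  git clone https://github.com/ggml-org/llama.cpp &&
  cd llama.cpp &&
  cmake -B build &&
  cmake --build build --config Release
}
```

After a successful build, binaries such as `llama-cli` appear under `build/bin/`.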
Since cloning the entire model repository may be inefficient, you can manually download just the GGUF file you need, or use huggingface-cli:
Install:
pip install -U huggingface_hub
Download:
huggingface-cli download Linzes/DeepQ-1.5B-gguf DeepQ-1.5B_Q5_K_M.gguf --local-dir . --local-dir-use-symlinks False
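Before loading the file, it can be worth a quick sanity check. This is a sketch, not part of the official workflow; it assumes (per the GGUF format) that a valid file begins with the ASCII magic bytes "GGUF":

```shell
# Sanity-check the downloaded file (filename from the command above).
# Assumption: a valid GGUF file starts with the ASCII magic bytes "GGUF".
FILE="DeepQ-1.5B_Q5_K_M.gguf"
if [ -f "$FILE" ]; then
  MAGIC="$(head -c 4 "$FILE")"
  if [ "$MAGIC" = "GGUF" ]; then
    echo "ok: $FILE looks like a GGUF file"
  else
    echo "warning: $FILE does not start with the GGUF magic bytes"
  fi
else
  echo "missing: $FILE (download it first)"
fi
```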
For a chatbot-like experience, it is recommended to start llama-cli in conversation mode:
./llama-cli -m <gguf-file-path> \
-co -cnv -p "You are a helpful assistant." \
-fa -ngl 80 -n 512
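To make the flags above easier to adapt, here is a small hypothetical helper (a sketch, not part of llama.cpp) that assembles the same command and annotates each option; the default model path and the echo-instead-of-exec behavior are assumptions:

```shell
# Hypothetical helper: builds the conversation-mode command shown above so
# each flag can be annotated. Prints the command rather than executing it.
MODEL="${1:-DeepQ-1.5B_Q5_K_M.gguf}"            # path to the downloaded GGUF
CMD="./llama-cli -m $MODEL"
CMD="$CMD -co"                                   # colorized output
CMD="$CMD -cnv"                                  # conversation (chat) mode
CMD="$CMD -p \"You are a helpful assistant.\""   # system prompt
CMD="$CMD -fa"                                   # flash attention, if supported by the build
CMD="$CMD -ngl 80"                               # offload up to 80 layers to the GPU
CMD="$CMD -n 512"                                # max tokens to generate per turn
echo "$CMD"                                      # review, then run it manually
```

Lower `-ngl` (or drop it) if you have little or no GPU memory; generation then runs on the CPU.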
Base model
Qwen/Qwen2.5-1.5B