Vicuna 13b v1.3 German GGML
These files are GGML format model files for Vicuna 13b v1.3 German. Please find all information about the model in the original repository.
GGML files are for CPU + GPU inference using llama.cpp and libraries and UIs which support this format, such as text-generation-webui.
Prompt template:
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: Hello!
ASSISTANT: Hello!</s>
USER: How are you?
ASSISTANT: I am good.</s>
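For reference, here is one way the template can be wired into a single llama.cpp prompt. This is only a minimal sketch, not part of the original instructions: it reuses the q4_0 file name from the table below and the same flags as the command in the "How to run in llama.cpp" section, and the German example question is made up. Adjust paths and settings to your setup.

```bash
# Assemble the Vicuna-style prompt: system line, then a USER turn, ending with "ASSISTANT:".
PROMPT="A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: Wie ist das Wetter heute?
ASSISTANT:"

# Same invocation as in the "How to run in llama.cpp" section, with the template passed via -p.
./main -t 10 -ngl 32 -m vicuna-13b-v1.3-ger.ggmlv3.q4_0.bin --color -c 2048 \
  --temp 0.7 --repeat_penalty 1.1 -n -1 -p "$PROMPT"
```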
Compatibility
So far I have only quantized `q4_0` and `q5_1` versions for my own use. Please let me know if there is demand for other quantizations.
The provided files should be compatible with any UIs, tools and libraries released since late May.
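If another quantization turns out to be needed, it can in principle be produced with llama.cpp's own quantize tool. A minimal sketch, assuming you already have an FP16 GGML conversion of the model (the f16 file name below is a hypothetical placeholder):

```bash
# Quantize the FP16 GGML file to e.g. q5_0; the last argument selects the quant type.
./quantize vicuna-13b-v1.3-ger.ggmlv3.f16.bin vicuna-13b-v1.3-ger.ggmlv3.q5_0.bin q5_0
```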
Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
|---|---|---|---|---|---|
| vicuna-13b-v1.3-ger.ggmlv3.q4_0.bin | q4_0 | 4 | 7.37 GB | ~9.8 GB | Original llama.cpp quant method, 4-bit. |
| vicuna-13b-v1.3-ger.ggmlv3.q5_1.bin | q5_1 | 5 | 9.78 GB | ~12.3 GB | Original quant method, 5-bit. Higher accuracy than q4_0, at the cost of more RAM and slower inference. |
Note: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
How to run in llama.cpp
I use the following command line; adjust for your tastes and needs:
./main -t 10 -ngl 32 -m vicuna-13b-v1.3-ger.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are a story writing assistant who writes very long, detailed and interesting stories\n\nUser:\nWrite a story about llamas\nAssistant:\n"
If you're able to use full GPU offloading, you should use `-t 1` to get best performance.

If you are not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance.

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
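Putting that together, a chat-style invocation could look like the following. This is just a sketch that applies the substitution described above to the earlier command:

```bash
# Interactive/instruct mode: llama.cpp prompts for input instead of taking a fixed -p string.
./main -t 10 -ngl 32 -m vicuna-13b-v1.3-ger.ggmlv3.q4_0.bin --color -c 2048 \
  --temp 0.7 --repeat_penalty 1.1 -i -ins
```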
How to run in text-generation-webui
Further instructions here: text-generation-webui/docs/llama.cpp-models.md.
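As a rough sketch only (flag names and the exact workflow vary between text-generation-webui versions, so treat this as an assumption and defer to the linked docs): place the .bin file in the models directory and start the server pointing at it.

```bash
# Hypothetical example; check text-generation-webui/docs/llama.cpp-models.md for the exact steps.
cp vicuna-13b-v1.3-ger.ggmlv3.q4_0.bin text-generation-webui/models/
cd text-generation-webui
python server.py --model vicuna-13b-v1.3-ger.ggmlv3.q4_0.bin --n-gpu-layers 32 --threads 10
```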
Thanks
Special thanks to LMSYS for the great Vicuna base model and to TheBloke for his great work quantizing billions of models (and for his template for this README).