MaziyarPanahi/Llama-3-Smaug-8B-GGUF contains GGUF format model files for abacusai/Llama-3-Smaug-8B.
You MUST follow the prompt template provided by Llama-3:

```sh
./llama.cpp/main -m Llama-3-Smaug-8B.Q2_K.gguf \
  -r '<|eot_id|>' \
  --in-prefix "\n<|start_header_id|>user<|end_header_id|>\n\n" \
  --in-suffix "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" \
  -p "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.<|eot_id|>\n<|start_header_id|>user<|end_header_id|>\n\nHi! How are you?<|eot_id|>\n<|start_header_id|>assistant<|end_header_id|>\n\n" \
  -n 1024
```
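For programmatic use, the same template can be assembled as a string. A minimal Python sketch — the helper function and message structure here are illustrative, not a llama.cpp API; the special tokens match those in the command above:

```python
# Sketch: build a Llama-3 chat prompt string from a system message and a
# list of chat messages. build_llama3_prompt() is a hypothetical helper;
# only the special tokens themselves come from the Llama-3 template.

def build_llama3_prompt(system: str, messages: list[dict]) -> str:
    parts = ["<|begin_of_text|>"]
    parts.append(f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>")
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Leave the assistant header open so the model generates the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt(
    "You are a helpful, smart, kind, and efficient AI assistant.",
    [{"role": "user", "content": "Hi! How are you?"}],
)
```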
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
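One practical consequence: a GGUF file can be recognized by its header, which begins with the four ASCII magic bytes `GGUF`. A minimal sketch of checking this — `is_gguf()` is an illustrative helper, not part of llama.cpp:

```python
# Sketch: detect the GGUF format by reading a file's magic bytes.
# GGUF files start with the ASCII bytes b"GGUF"; older GGML files do not.

def is_gguf(path: str) -> bool:
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"
```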
Here is an incomplete list of clients and libraries that are known to support GGUF:
- llama.cpp (the source project for GGUF, offering a CLI and a server option)
- text-generation-webui
- KoboldCpp
- GPT4All
- LM Studio
- llama-cpp-python (a Python library with an OpenAI-compatible API server)
- candle (a Rust ML framework)
- ctransformers
Quantized model files are provided at the following precisions: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit.
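These bit widths trade file size for accuracy. As a rough lower bound, an 8B-parameter model at n-bit quantization occupies about 8e9 × n / 8 bytes; a quick illustrative calculation (approximate — real GGUF quant formats such as Q2_K also store per-block scales, so actual files are somewhat larger):

```python
# Sketch: rough file-size lower bound for an 8B-parameter model at a
# given bit width. Ignores per-block scale overhead in real quant formats.

def approx_size_gb(n_params: float, bits: int) -> float:
    return n_params * bits / 8 / 1e9  # bytes -> decimal GB

for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_size_gb(8e9, bits):.1f} GB")
```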
Base model: abacusai/Llama-3-Smaug-8B