# Supa-AI/llama3-8b-cpt-sahabatai-v1-instruct-q8_0-gguf
This model was converted to GGUF format from `GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct` using llama.cpp. Refer to the original model card for more details on the model.
## Use with llama.cpp
CLI:

```bash
llama-cli --hf-repo Supa-AI/llama3-8b-cpt-sahabatai-v1-instruct-q8_0-gguf --hf-file llama3-8b-cpt-sahabatai-v1-instruct.q8_0.gguf -p "Your prompt here"
```
Server:

```bash
llama-server --hf-repo Supa-AI/llama3-8b-cpt-sahabatai-v1-instruct-q8_0-gguf --hf-file llama3-8b-cpt-sahabatai-v1-instruct.q8_0.gguf -c 2048
```
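Once the server is running, it exposes an OpenAI-compatible `/v1/chat/completions` endpoint. A minimal client sketch using only the Python standard library (the host and port assume llama-server's defaults, `http://localhost:8080`; adjust to your setup):

```python
import json
import urllib.request

# Default llama-server address; an assumption -- change if you passed --host/--port.
URL = "http://localhost:8080/v1/chat/completions"

def chat(prompt: str) -> str:
    """Send a single-turn chat request to llama-server and return the reply text."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    req = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # OpenAI-style response: first choice's message content.
    return data["choices"][0]["message"]["content"]
```

Call it as `chat("Your prompt here")` once the server is up; the server applies the model's chat template automatically.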
## Model Details
- Quantization Type: q8_0
- Original Model: GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct
- Format: GGUF
## Model tree for Supa-AI/llama3-8b-cpt-sahabatai-v1-instruct-q8_0-gguf

- Base model: meta-llama/Meta-Llama-3-8B-Instruct
- Finetuned: aisingapore/Llama-SEA-LION-v2-8B
- Finetuned: aisingapore/Llama-SEA-LION-v2-8B-IT