lee-ite/Llama-3.1-8B-PHQ

This LoRA adapter was converted to GGUF format from lee-ite/Llama-3.1-8B-PHQ-lora using llama.cpp. The base model is meta-llama/Meta-Llama-3.1-8B-Instruct.

Use with llama.cpp

To use this adapter, either merge the LoRA GGUF into the base model with llama.cpp, or load it as an adapter at inference time.
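A minimal sketch of both options with the llama.cpp command-line tools (`llama-export-lora` for merging, `llama-cli` for runtime loading). The GGUF file names below are assumptions; substitute the paths to your downloaded base model and adapter files.

```shell
# Option 1: merge the LoRA adapter into the base model, producing a
# standalone merged GGUF (file names are placeholders).
./llama-export-lora \
    -m Meta-Llama-3.1-8B-Instruct.gguf \
    --lora Llama-3.1-8B-PHQ-lora.gguf \
    -o Llama-3.1-8B-PHQ-merged.gguf

# Option 2: keep the files separate and apply the adapter at inference time.
./llama-cli \
    -m Meta-Llama-3.1-8B-Instruct.gguf \
    --lora Llama-3.1-8B-PHQ-lora.gguf \
    -p "Hello"
```

Merging (option 1) yields a single file that runs without the adapter present; option 2 avoids an extra merge step but applies the adapter on every load.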

GGUF
Model size: 8.03B params
Architecture: llama
Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit

