Llamacpp imatrix Quantizations of meta-llama/Llama-3.1-8B

Using llama.cpp release b3878 for quantization.

Original model: https://huggingface.co/meta-llama/Llama-3.1-8B

Run it in LM Studio

Prompt format

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
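
As a rough sketch, the filled-in template can be passed straight to llama.cpp's llama-cli (the GGUF filename below is a placeholder for whichever quant you download, and -e tells llama-cli to process the \n escapes):

llama-cli -m ./Llama-3.1-8B-Q4_K_M.gguf -e -p "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nWhat is an imatrix quant?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"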

Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

pip install -U "huggingface_hub[cli]"

Then, you can target the specific file you want:
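
For example (the filename here is an assumption, not a guaranteed file in this repo; substitute whichever quant you want):

huggingface-cli download boapro/WRT_II --include "Llama-3.1-8B-Q4_K_M.gguf" --local-dir ./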

If the model is bigger than 50GB, it will have been split into multiple files. To download them all to a local folder, run:
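
For example (a sketch; the split-file folder name is an assumption and will match the quant you chose):

huggingface-cli download boapro/WRT_II --include "Llama-3.1-8B-Q8_0/*" --local-dir ./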

You can either specify a new local-dir (e.g. boapro/WRT_II) or download them all in place (./).

Q4_0_X_X

If you're using an ARM chip, the Q4_0_X_X quants offer a substantial speedup. Check out the Q4_0_4_4 speed comparisons on the original pull request.

To see which one would work best for your ARM chip, you can check AArch64 SoC features (thanks EloyOn!).
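
On Linux, one rough way to check is to grep the CPU feature flags directly (an unofficial sketch; as a rule of thumb, sve points to Q4_0_8_8, i8mm to Q4_0_4_8, and plain asimddp to Q4_0_4_4):

grep -oE 'sve|i8mm|asimddp' /proc/cpuinfo | sort -u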

If you want to get more into the weeds, you can check out this extremely useful feature chart:

llama.cpp feature matrix
