This model is compatible with tensor parallelism: the RHT (randomized Hadamard transform) runs per-GPU instead of across GPUs, so the transform requires no cross-GPU communication. The q, k, v, up, and gate projections are split along the output channel, and the o and down projections are split along the input channel. This model has slightly worse quality than the non-TP8 model.
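The split described above follows the usual column-/row-parallel pattern: output-channel shards each compute a slice of the activation, while input-channel shards compute partial sums that are reduced across GPUs. A minimal NumPy sketch (toy shapes and names, not the real 405B dimensions or sharding code):

```python
import numpy as np

TP = 8  # tensor-parallel world size

def shard_output_channels(w, rank, tp=TP):
    # q/k/v/up/gate: weight [out, in] split along the output channel (axis 0).
    return np.array_split(w, tp, axis=0)[rank]

def shard_input_channels(w, rank, tp=TP):
    # o/down: weight [out, in] split along the input channel (axis 1).
    return np.array_split(w, tp, axis=1)[rank]

rng = np.random.default_rng(0)
w_up = rng.standard_normal((64, 32))    # hypothetical up_proj weight
w_down = rng.standard_normal((32, 64))  # hypothetical down_proj weight
x = rng.standard_normal(32)

# Output-channel shards each produce a slice of the activation;
# concatenating the slices recovers the full result.
y = np.concatenate([shard_output_channels(w_up, r) @ x for r in range(TP)])
assert np.allclose(y, w_up @ x)

# Input-channel shards consume matching slices of the activation and
# produce partial sums; summing (an all-reduce in practice) recovers
# the full result.
h = w_up @ x
h_parts = np.array_split(h, TP)
z = sum(shard_input_channels(w_down, r) @ h_parts[r] for r in range(TP))
assert np.allclose(z, w_down @ h)
```

Because each GPU holds only a `1/8` slice of every weight, a per-GPU RHT acts on each shard independently rather than mixing channels across the full dimension, which is the likely source of the small quality gap versus the non-TP8 model.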

Safetensors model, 54.5B params; tensor types: BF16, F32, FP16, I16.

Part of a Hugging Face collection including relaxml/Llama-3.1-405B-Instruct-QTIP-2Bit-TP8.