This is the deepseek-ai/DeepSeek-R1-Distill-Llama-70B model quantized to FP8.

Downloads last month: 106
Model size: 70.6B params (Safetensors)
Tensor types: BF16, F8_E4M3
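F8_E4M3 is an 8-bit floating-point format with 1 sign bit, 4 exponent bits (bias 7), and 3 mantissa bits; it has no infinities, and its largest finite value is 448. A minimal pure-Python sketch of the format, assuming the standard E4M3 encoding rules (illustrative only; the function names are hypothetical, and the actual checkpoint is produced with optimized quantization kernels, not a brute-force search like this):

```python
import math


def fp8_e4m3_decode(code: int) -> float:
    """Decode an 8-bit E4M3 code (1 sign, 4 exponent, 3 mantissa bits) to a float."""
    sign = -1.0 if (code >> 7) & 1 else 1.0
    exp = (code >> 3) & 0xF
    man = code & 0x7
    if exp == 0xF and man == 0x7:
        return float("nan")  # E4M3 reserves S.1111.111 for NaN; there is no infinity
    if exp == 0:
        # Subnormal: no implicit leading 1, fixed exponent 2**-6
        return sign * (man / 8.0) * 2.0 ** -6
    # Normal: implicit leading 1, biased exponent
    return sign * (1.0 + man / 8.0) * 2.0 ** (exp - 7)


def fp8_e4m3_quantize(x: float) -> int:
    """Round a float to the nearest finite E4M3 code by exhaustive search over all 256 codes."""
    best, best_err = 0, float("inf")
    for code in range(256):
        v = fp8_e4m3_decode(code)
        if math.isnan(v):
            continue
        err = abs(x - v)
        if err < best_err:
            best, best_err = code, err
    return best
```

For example, 1.0 round-trips exactly, while 0.3 snaps to the nearest representable value, 0.3125; this coarse 3-bit mantissa is why FP8 checkpoints typically keep a per-tensor or per-channel scale alongside the quantized weights.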
·

Model tree for jsbaicenter/r1-1776-distill-llama-70b-FP8-Dynamic: quantized (56), including this model.