Legion-V2.1-LLaMa-70B-EXL2

Released by Tarek07

Quant(s) by FrenzyBiscuit

5.00 BPW H6 - Fits in 72GB VRAM with 64k FP16 context and speculative decoding

5.35 BPW H6 - Fits in 72GB VRAM with 64k FP16 context

6.70 BPW H6 - Fits in 72GB VRAM with 32k FP16 context
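A rough sanity check on the quant sizes above: a quantized model's weight footprint is approximately parameter count times bits-per-weight divided by 8. The sketch below assumes 70B parameters and ignores KV-cache and runtime overhead, which is why the actual VRAM figures listed (72GB with long context) exceed the raw weight sizes.

```python
# Rough weight-size estimate for an EXL2 quant: params * BPW / 8 bytes.
# Assumption: 70e9 parameters; KV-cache and framework overhead not included,
# so real VRAM use at 32k-64k context is noticeably higher than these numbers.
PARAMS = 70e9

def weight_gb(bpw: float) -> float:
    """Approximate weight footprint in decimal GB for a given bits-per-weight."""
    return PARAMS * bpw / 8 / 1e9

for bpw in (5.00, 5.35, 6.70):
    print(f"{bpw:.2f} BPW ~= {weight_gb(bpw):.1f} GB of weights")
```

This estimate covers weights only; the remaining headroom in a 72GB budget goes to the FP16 KV-cache (which scales with context length) and, for the 5.00 BPW quant, the draft model used for speculative decoding.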


Model tree for ReadyArt/Tarek07_Legion-V2.1-LLaMa-70B-EXL2
