Molmo-7B-O BnB 4-bit quant

Checkpoint size: 30 GB -> 7 GB

Approx. 12 GB of VRAM required for inference.
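As a rough sanity check on the VRAM figure, a back-of-envelope estimate for a 7B-parameter model with 4-bit weights looks like this. The overhead term (activations, KV cache, CUDA context, and any layers kept in higher precision) is an illustrative assumption, not a measurement from this card:

```python
# Back-of-envelope VRAM estimate for a 7B model with 4-bit quantized weights.
# The overhead figure is a hypothetical placeholder, not a measured value.

def estimate_vram_gb(n_params_billion: float,
                     bits_per_weight: float = 4.0,
                     overhead_gb: float = 4.0) -> float:
    """Weight memory at `bits_per_weight`, plus a rough allowance for
    activations, KV cache, CUDA context, and unquantized layers."""
    weights_gb = n_params_billion * bits_per_weight / 8  # GB per billion params
    return weights_gb + overhead_gb

print(round(estimate_vram_gb(7.0), 1))  # 7B params at 4 bits -> ~3.5 GB of weights
```

In practice the requirement lands higher than the bare weight math (hence the ~12 GB above), since parts of a multimodal model such as the vision encoder typically stay in higher precision.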

See the base model for more information:

https://huggingface.co/allenai/Molmo-7B-O-0924

Example code:

https://github.com/cyan2k/molmo-7b-bnb-4bit
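For a minimal starting point before consulting the repo above, loading the quantized checkpoint with `transformers` can be sketched as follows. This is an assumption-laden sketch (it presumes `transformers`, `torch`, and `bitsandbytes` are installed and follows the loading pattern from the base Molmo card), not the repo's exact example:

```python
# Hedged sketch: load the 4-bit quantized Molmo checkpoint with transformers.
# Assumes transformers + torch + bitsandbytes are available; the repo id is
# taken from this card. Molmo ships custom modeling code, so
# trust_remote_code=True is required.

def load_model(repo: str = "cyan2k/molmo-7B-O-bnb-4bit"):
    from transformers import AutoModelForCausalLM, AutoProcessor

    kwargs = dict(trust_remote_code=True, torch_dtype="auto", device_map="auto")
    processor = AutoProcessor.from_pretrained(repo, **kwargs)
    model = AutoModelForCausalLM.from_pretrained(repo, **kwargs)
    return model, processor
```

The function mirrors the `AutoProcessor` / `AutoModelForCausalLM` pattern shown on the allenai/Molmo-7B-O-0924 card; the quantized weights load directly since quantization metadata is stored in the checkpoint.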

Performance metrics and benchmarks comparing against the base model will follow over the next week.
