
FimbulHermes-15B-v0.1 EXL2 6.5bpw

This is a 6.5bpw EXL2 quant of steinzer-narayan/fimbulhermes-15B-v0.1.

