# AWQ quantization of Entropicengine/Pinecone-Titan-70b

```yaml
quantization_config:
  bits: 4
  group_size: 128
  quant_method: awq
  version: gemm
  zero_point: true
```
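The config above describes 4-bit asymmetric (zero-point) quantization applied in groups of 128 weights. A minimal pure-Python sketch of per-group zero-point quantization, for illustration only: the function names are hypothetical, and real AWQ additionally applies activation-aware scaling and packs the 4-bit values for GEMM kernels.

```python
def quantize_group(weights, bits=4):
    # Asymmetric (zero-point) quantization of one group of weights,
    # mirroring `bits: 4` and `zero_point: true` in the config above.
    qmax = (1 << bits) - 1                      # 15 for 4-bit
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / qmax or 1.0             # avoid zero scale for constant groups
    zero_point = round(-lo / scale)             # integer offset so lo maps near 0
    q = [max(0, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize_group(q, scale, zero_point):
    # Reconstruct approximate float weights from 4-bit codes.
    return [(v - zero_point) * scale for v in q]
```

With `group_size: 128`, a weight matrix is split into runs of 128 values, each carrying its own scale and zero point; the per-group reconstruction error is bounded by roughly half a quantization step.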

Model size: 11.3B params (Safetensors)
Tensor types: I32, BF16, F16

Model tree for Sinensis/Pinecone-Titan-70b-AWQ

Quantized from Entropicengine/Pinecone-Titan-70b (this model is one of 4 quantized versions of the base model).