AWQ quantization of Entropicengine/Pinecone-Titan-70b
```yaml
quantization_config:
  bits: 4
  group_size: 128
  quant_method: awq
  version: gemm
  zero_point: true
```
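The config above implies an effective storage cost slightly above 4 bits per weight, since each group of 128 weights also carries its own scale and zero point. A rough estimate, assuming fp16 scales and packed 4-bit zero points per group (an assumption about the on-disk layout, not stated on this card) and ignoring any layers left unquantized:

```python
# Back-of-the-envelope size estimate for a 4-bit, group_size=128 AWQ
# checkpoint. Assumptions: one fp16 scale (16 bits) and one packed 4-bit
# zero point per group of 128 weights; unquantized layers are ignored.

def awq_bits_per_weight(bits: int = 4, group_size: int = 128,
                        scale_bits: int = 16, zero_bits: int = 4) -> float:
    """Effective bits per weight, amortizing per-group scale and zero point."""
    return bits + (scale_bits + zero_bits) / group_size

def approx_weight_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GiB for n_params parameters."""
    return n_params * bits_per_weight / 8 / 2**30

bpw = awq_bits_per_weight()                    # 4 + 20/128 = 4.15625
print(round(bpw, 5))                            # 4.15625
print(round(approx_weight_gib(70e9, bpw), 1))   # ~33.9 GiB for 70B params
```

By this sketch the 70B model's quantized weights come to roughly 34 GiB, versus about 130 GiB at fp16 — the usual motivation for 4-bit AWQ on a model of this size.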
Model tree for Sinensis/Pinecone-Titan-70b-AWQ
- Base model: Entropicengine/Pinecone-Titan-70b