LLMs quantized with GPTQ
Irina Proskurina (iproskurina)
AI & ML interests: LLMs: quantization, pre-training
Recent Activity
Updated a model 5 days ago: iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g11-s0
Published a model 5 days ago: iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g11-s0
Updated a model 5 days ago: iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g562-s0
Collections: 4
Models: 52
iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g11-s0 • Text Generation • Updated • 5
iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g562-s0 • Text Generation • Updated • 3
iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g12-s0 • Text Generation • Updated • 1
iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g563-s0 • Text Generation • Updated • 5
iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g0-s0 • Text Generation • Updated • 1
iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g13-s0 • Text Generation • Updated • 1
iproskurina/opt-test • Text Generation • Updated • 5
iproskurina/opt-125m-gptq2 • Text Generation • Updated • 14
iproskurina/Mistral-7B-v0.3-GPTQ-4bit-G2S0 • Text Generation • Updated • 4
iproskurina/Mistral-7B-v0.3-GPTQ-4bit-G3S0 • Text Generation • Updated
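The repositories listed above are standard Hugging Face model repos, so a 4-bit GPTQ checkpoint from this collection can typically be loaded through the Transformers GPTQ integration. The snippet below is a minimal illustrative sketch, not taken from the profile itself: it assumes transformers, accelerate, optimum, and a GPTQ backend (auto-gptq or gptqmodel) are installed, and it reuses one repo id from the list purely as an example.

    # Minimal sketch: load one of the listed GPTQ 4-bit checkpoints and generate text.
    # Assumes transformers + optimum + a GPTQ backend (auto-gptq or gptqmodel) are installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo_id = "iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g11-s0"  # one entry from the list above

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    # The quantization config stored in the repo should let Transformers pick the
    # GPTQ kernels automatically, so no extra quantization arguments are passed here.
    model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

    inputs = tokenizer("GPTQ quantization reduces", return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))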