Spaces: Running on A10G
More storage
#151 opened 2 days ago by noNyve

Can't convert Linq-Embed-Mistral
#150 opened 2 days ago by Hoshino-Yumetsuki

Add IQ2_(some-letter) quantization
#149 opened 2 days ago by noNyve

No support for making GGUF of HuggingFaceTB/SmolVLM-500M-Instruct (1 comment)
#148 opened 26 days ago by TimexPeachtree

Unable to convert Senqiao/LISA_Plus_7b (1 comment)
#147 opened about 1 month ago by PlayAI

Unable to convert ostris/Flex.1-alpha
#146 opened about 1 month ago by fullsoftwares

Crashes on watt-ai/watt-tool-70B
#145 opened about 2 months ago by ejschwartz

Update app.py (1 comment)
#144 opened 2 months ago by gghfez

Unable to convert Phi-3 Vision
#143 opened 2 months ago by venkatsriram

Accessing own private repos (2 comments)
#141 opened 3 months ago by themex1380

Why can't I log in? (5 comments)
#139 opened 3 months ago by safe049

If generating model card READMEs, consider adding support for these extra authorship parameters (2 comments)
#137 opened 3 months ago by mofosyne

Add F16 and BF16 quantization (1 comment)
#129 opened 4 months ago by andito

Update README for card generation (4 comments)
#128 opened 5 months ago by ariG23498

[Bug] Asymmetric T5 models fail to quantize
#126 opened 5 months ago by pszemraj

[Bug] Extra files with related names were uploaded to the resulting repository
#125 opened 5 months ago by Felladrin

Issue converting a PEFT LoRA fine-tuned model to GGUF (3 comments)
#124 opened 6 months ago by AdnanRiaz107

Issue converting nvidia/NV-Embed-v2 to GGUF
#123 opened 6 months ago by redshiva

Issue converting FLUX.1-dev model to GGUF format (5 comments)
#122 opened 6 months ago by cbrescia

Add Llama 3.1 license
#121 opened 6 months ago by jxtngx

Add an option to put all quantization variants in the same repo
#120 opened 6 months ago by A2va

Phi-3.5-MoE-instruct (6 comments)
#117 opened 6 months ago by goodasdgood

Fails to quantize T5 (XL and XXL) models (1 comment)
#116 opened 6 months ago by girishponkiya

ARM-optimized quants (2 comments)
#113 opened 7 months ago by SaisExperiments

DeepseekForCausalLM is not supported (1 comment)
#112 opened 7 months ago by nanowell

Please update the conversion script: llama.cpp added support for the Nemotron and Minitron architectures (3 comments)
#111 opened 7 months ago by NikolayKozloff

Allow the created repo name to omit the quantization type
#110 opened 7 months ago by A2va

I think I broke the Space quantizing a 4-bit model with Q4L
#106 opened 7 months ago by hellork

Authorship metadata support was added to the converter script; you may want to add the ability to set metadata overrides (3 comments)
#104 opened 8 months ago by mofosyne

Please support this method: (7 comments)
#96 opened 8 months ago by ZeroWw

Support Q2 imatrix quants (1 comment)
#95 opened 8 months ago by Dampfinchen

Maybe impose a max model size? (3 comments)
#33 opened 11 months ago by pcuenq