Robert Sinclair
ZeroWw
AI & ML interests
LLM optimization (model quantization and back-end optimizations), so that LLMs can run on the computers of people who still have both kidneys. Discord: https://discord.com/channels/@robert_46007
Recent Activity
- New activity about 10 hours ago on PrimeIntellect/INTELLECT-2: "Please create also 4B and 8B models"
- Updated a model 1 day ago: ZeroWw/Seed-Coder-8B-Reasoning-GGUF
- Published a model 2 days ago: ZeroWw/Seed-Coder-8B-Reasoning-GGUF
ZeroWw's activity
- Please create also 4B and 8B models · #6 opened about 10 hours ago by ZeroWw
- This one works. · #1 opened 2 days ago by ZeroWw
- GGUF model with architecture gemma3 is not supported yet (3) · #2 opened about 2 months ago by kieransmith
- Ads baked into AI Outputs = garbage (7, 6) · #2 opened 28 days ago by Athlon-X
- There is a big error! (1, 11) · #2 opened 14 days ago by ZeroWw
- Please create a smaller reasoning model. · #72 opened 14 days ago by ZeroWw
- Model Architecture Details (1) · #24 opened about 1 month ago by nbaligar
- Please release also Gemini Flash 1.5 weights. (2) · #31 opened about 2 months ago by ZeroWw
- Please release the weights of Gemini 1.5 Flash (2) · #43 opened about 2 months ago by ZeroWw
- Add llama.cpp support (13, 2) · #19 opened about 2 months ago by KeilahElla
- gemma-3-4b-it-abliterated.q8q4.gguf is very much appreciated. (2) · #1 opened about 2 months ago by twoxfh
- Thanks! (2, 1) · #1 opened about 2 months ago by erichartford
- Please do 8B and 4B too. (2) · #1 opened about 2 months ago by ZeroWw
- This is by far the best model I have seen until now. (1, 2) · #8 opened 10 months ago by ZeroWw
- Brainstorm 40x method developed by David_AU (4) · #1 opened 7 months ago by ZeroWw
- My quants and silly experiment. (1, 2) · #1 opened 7 months ago by ZeroWw
- Any chance of a 1B/2B/3B/4B model? (2) · #5 opened 7 months ago by ZeroWw
- Request (1) · #1 opened 7 months ago by Zeldazackman
- Question about your quantization method (3) · #1 opened 7 months ago by rollercoasterX
- 1B and 3B are nice. Please make also an 8B so we can compare it to gemini flash 8B. (3) · #20 opened 7 months ago by ZeroWw