Falcon H1
They uploaded their own GGUFs, but maybe you want to make better ones ;)
https://huggingface.co/collections/tiiuae/falcon-h1-6819f2795bc406da60fab8df
The Falcon H1 series of models are currently my absolute favourite LLMs. They are by far the most intelligent and knowledgeable models below 100B. Them not being overfitted makes using them a pleasure. I'm saying this after having extensively tested over 100 models in the past month. By extensively tested, I mean asking each of them all of the over 100 questions of my private RealWorld Q&A benchmark at least 20 times.
The reason we currently have not provided GGUFs for them is that they are not yet supported by mainline llama.cpp, only by their llama.cpp fork.
Their fork was broken. They were working on llama.cpp support and today it was merged. Please update llama.cpp :)
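For anyone wanting to roll their own quants once they're on the latest llama.cpp, the workflow might look roughly like this. The checkout path, model path, output filenames and build location below are placeholders, not the exact setup used here:

```python
# Minimal sketch: convert an HF Falcon-H1 checkpoint to GGUF, then quantize it,
# using a freshly updated llama.cpp checkout. All paths are assumptions.
import subprocess

LLAMA_CPP = "/path/to/llama.cpp"                  # assumed checkout with Falcon-H1 support merged
MODEL_DIR = "/path/to/Falcon-H1-34B-Instruct"     # local HF snapshot of the model
F16_GGUF = "falcon-h1-34b-instruct-f16.gguf"
Q4_GGUF = "falcon-h1-34b-instruct-Q4_K_M.gguf"

# 1) Convert the HF checkpoint to a full-precision GGUF.
subprocess.run(
    ["python", f"{LLAMA_CPP}/convert_hf_to_gguf.py", MODEL_DIR,
     "--outfile", F16_GGUF, "--outtype", "f16"],
    check=True,
)

# 2) Quantize it, e.g. to Q4_K_M. The binary location assumes a cmake build.
subprocess.run(
    [f"{LLAMA_CPP}/build/bin/llama-quantize", F16_GGUF, Q4_GGUF, "Q4_K_M"],
    check=True,
)
```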
I love Falcon H1 so much I even created my own abliterated version of it, which is currently my absolute favourite model and what I use myself for most of my LLM needs: https://huggingface.co/nicoboss/Falcon-H1-34B-Instruct-abliterated
> Their fork was broken. They were working on llama.cpp support and today it was merged. Please update llama.cpp :)
Oh wow I totally missed that. What massive news!
Yes, now there is also Jamba, but maybe start from the Falcon ;)
@mradermacher I updated our fork, so please update to the latest and let's queue all newly supported FalconH1ForCausalLM, JambaForCausalLM and SmolLM3ForCausalLM based models!
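In case it helps with queueing, here is a rough sketch of filtering repos by those architectures via their config.json. The repo id is only an example:

```python
# Quick sketch: check whether a Hugging Face repo declares one of the
# newly supported architectures before queueing it for quantization.
import json
from huggingface_hub import hf_hub_download

SUPPORTED = {"FalconH1ForCausalLM", "JambaForCausalLM", "SmolLM3ForCausalLM"}

def newly_supported(repo_id: str) -> bool:
    # Download just the config.json and inspect its "architectures" field.
    config_path = hf_hub_download(repo_id=repo_id, filename="config.json")
    with open(config_path) as f:
        archs = json.load(f).get("architectures", [])
    return any(a in SUPPORTED for a in archs)

# Example repo id, purely illustrative.
print(newly_supported("tiiuae/Falcon-H1-34B-Instruct"))
```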
@nicoboss I take it you'll do your abliterated model? I know I'd be interested in a GGUF of it if you do.
EDIT: Sorry, this was an open tab, I didn't see you'd already requested it!