Probably the new best "base" model to use for merging
#1 opened by paulml
The other ones become unusable once they are quantized: https://huggingface.co/CultriX/NeuralTrix-7B-dpo/discussions/1
Did you manage to make it work with Monarch-7B?
I tried the Q4_K_M here: https://huggingface.co/seyf1elislam/Monarch-7B-GGUF
It worked great, so this is probably the best quantized 7B model!
Oh, that's funny: I renamed that model to Beagle4 (https://huggingface.co/mlabonne/Beagle4). It is not the same model as this new Monarch.