MiniMaxAI/M1 models

#1070
by piloponth - opened

It really baffles me why there aren’t any quantizations, so I’ll try…
kindly asking for quants for these models.

piloponth changed discussion title from MiniMaxAI/M1 model to MiniMaxAI/M1 models

The MiniMaxM1ForCausalLM architecture is unfortunately not currently supported by llama.cpp, and llama.cpp support is a prerequisite for producing GGUF quants. That’s a real shame, as I would love to try out this massive 456B model.
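For anyone curious why the conversion fails: llama.cpp's converter decides what it can handle from the `architectures` field in the model's `config.json`. A minimal sketch of that check (the supported set below is a small illustrative subset, not llama.cpp's actual registry, which lives in `convert_hf_to_gguf.py`):

```python
# Illustrative subset of architectures the llama.cpp converter recognizes.
# This list is an assumption for demonstration, not the real registry.
SUPPORTED = {"LlamaForCausalLM", "MistralForCausalLM", "Qwen2ForCausalLM"}

def can_convert(config: dict) -> bool:
    """Return True if any architecture declared in config.json is supported."""
    return any(arch in SUPPORTED for arch in config.get("architectures", []))

# MiniMax-M1 declares MiniMaxM1ForCausalLM in its config.json,
# so there is no GGUF conversion path for it yet.
print(can_convert({"architectures": ["MiniMaxM1ForCausalLM"]}))  # → False
print(can_convert({"architectures": ["LlamaForCausalLM"]}))      # → True
```

Until llama.cpp adds a model class for this architecture, the convert script will reject the checkpoint regardless of which quantization type is requested.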

Thank you for the clarification of the situation.

Best, Pilo.

piloponth changed discussion status to closed
