phi-4-mini and phi-4-multimodal
https://huggingface.co/microsoft/Phi-4-mini-instruct and https://huggingface.co/microsoft/Phi-4-multimodal-instruct
Two new models from Microsoft with promising performance: text-to-text for the -mini one, and speech/image-to-text for the multimodal one.
The mini one uses the Phi-3 architecture, so it should be easy to quantize; I'm not so sure about the multimodal one.
Well, we can try, but multimodal capabilities are not currently supported in our pipeline (other than for qwen2vl). I don't see high chances for either, though, as there would normally be other quants available already for such a high-profile model. They are queued nevertheless, and you can check progress at http://hf.tst.eu/status.html.
The multimodal one isn't supported, and the mini one is not understood by the converter:
raise ValueError(f'The length of rope long and short factors must be {rope_dims / 2}')
ValueError: The length of rope long and short factors must be 64.0
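The check that fails compares the length of the `rope_scaling` `long_factor`/`short_factor` lists in the model's config.json against half the rotary dimension (the "64.0" suggests the converter computes it with float division). Below is a minimal sketch of that validation, assuming Phi-3-style config fields (`hidden_size`, `num_attention_heads`, `rope_scaling`); `check_rope_factors` is a hypothetical helper, not the converter's actual code. If Phi-4-mini ships factor lists shorter than head_dim / 2 (e.g. because of a partial rotary factor the converter doesn't account for), a check like this would raise exactly this kind of error.

```python
def check_rope_factors(config: dict) -> int:
    """Validate rope long/short factor lengths against head_dim / 2.

    Hypothetical sketch of the converter's check; assumes Phi-3-style
    config keys. Returns the expected factor-list length.
    """
    # Rotary dims are typically the per-head dimension:
    # hidden_size / num_attention_heads.
    rope_dims = config["hidden_size"] // config["num_attention_heads"]
    expected = rope_dims // 2
    for name in ("long_factor", "short_factor"):
        factors = config["rope_scaling"][name]
        if len(factors) != expected:
            raise ValueError(
                f"The length of rope {name} must be {expected}, "
                f"got {len(factors)}"
            )
    return expected


# Toy config with Phi-4-mini-like dimensions: head_dim = 3072 / 24 = 128,
# so each factor list must have 128 / 2 = 64 entries.
ok_config = {
    "hidden_size": 3072,
    "num_attention_heads": 24,
    "rope_scaling": {
        "long_factor": [1.0] * 64,
        "short_factor": [1.0] * 64,
    },
}
print(check_rope_factors(ok_config))  # → 64

# A shorter factor list (e.g. 48 entries) trips the same ValueError
# the converter reports.
bad_config = {
    "hidden_size": 3072,
    "num_attention_heads": 24,
    "rope_scaling": {
        "long_factor": [1.0] * 48,
        "short_factor": [1.0] * 48,
    },
}
try:
    check_rope_factors(bad_config)
except ValueError as e:
    print(e)
```

So the mismatch is between the factor-list length the model actually ships and the length the converter derives from the config's dimensions; fixing it would need the converter to learn whatever convention Phi-4-mini uses.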