Phi 3 not supported
#3 by rpeinl - opened
I tested 3 models from mlx-community today:

```python
from mlx_lm import load

model, tokenizer = load("mlx-community/Qwen2-57B-A14B-4bit")
model, tokenizer = load("mlx-community/Phi-3-medium-128k-instruct-4bit")
model, tokenizer = load("mlx-community/Llama-3-8B-16K-4bit")
```
None of the three worked. For Phi 3 and Qwen, the loader says the model type is not supported. For Llama 3 it fails with:

"Received parameters not in model: model.embed_tokens.biases model.embed_tokens.scales."
I don't know why you are publishing models that do not work with your own framework, even though they are tailored specifically for it.