---
language:
- multilingual
license: mit
license_link: https://huggingface.co/microsoft/Phi-3.5-vision-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
- vision
- mlx
inference:
  parameters:
    temperature: 0.7
widget:
- messages:
  - role: user
    content: <|image_1|>Can you describe what you see in the image?
---
# mlx-community/Phi-3.5-vision-instruct-bf16
This model was converted to MLX format from [`microsoft/Phi-3.5-vision-instruct`](https://huggingface.co/microsoft/Phi-3.5-vision-instruct) using mlx-vlm version **0.0.13**.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3.5-vision-instruct) for more details on the model.

## Use with mlx
```bash
pip install -U mlx-vlm
```

```bash
python -m mlx_vlm.generate --model mlx-community/Phi-3.5-vision-instruct-bf16 --max-tokens 100 --temp 0.0
```
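
The model can also be driven from Python via the `mlx_vlm` API. The snippet below is a minimal sketch, not an official example: it assumes the `load`/`generate` functions, the `apply_chat_template` prompt helper, and the `load_config` utility found in recent mlx-vlm releases (exact names and argument order may differ in version 0.0.13), and `example.jpg` is a placeholder image path.

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "mlx-community/Phi-3.5-vision-instruct-bf16"

# Load the converted weights along with the matching processor and config.
model, processor = load(model_path)
config = load_config(model_path)

# One local or remote image; the prompt mirrors the widget example above.
image = ["example.jpg"]
prompt = "Can you describe what you see in the image?"

# Wrap the prompt in the model's chat template, declaring how many images follow.
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=len(image))

# Run generation and print the decoded answer.
output = generate(model, processor, formatted_prompt, image, verbose=False)
print(output)
```

For the command-line invocation above, an image and prompt can likewise be supplied via the `--image` and `--prompt` flags of `mlx_vlm.generate` (flag names as documented for recent mlx-vlm releases).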