Phi-4-mini-instruct-int4-ov

Description

This is the Phi-4-mini-instruct model converted to the OpenVINO™ IR (Intermediate Representation) format, with weights compressed to INT4 by NNCF.

With the following pyproject.toml:

[project]
name = "export"
version = "0.1.0"
description = "Export models"
readme = "README.md"
requires-python = "==3.12.*"
dependencies = [
    "openvino==2025.2.0",
    "optimum[openvino]",
    "optimum-intel",
    "openvino-genai",
    "huggingface-hub==0.33.0",
    "tokenizers==0.21.1"
]

Then run the export:

uv sync
uv run optimum-cli export openvino --model microsoft/Phi-4-mini-instruct --task text-generation-with-past --weight-format int4 --group-size -1 --ratio 1.0 --sym --trust-remote-code phi-4-mini-instruct/INT4-NPU_compressed_weights
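As a quick sanity check that the export succeeded, the IR can be loaded back with optimum-intel on CPU; a minimal sketch (the prompt is illustrative, and the directory matches the export command above):

# Sanity-check the exported IR by loading it through optimum-intel.
# OVModelForCausalLM runs the OpenVINO model behind the standard
# transformers generate() API.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_dir = "phi-4-mini-instruct/INT4-NPU_compressed_weights"
model = OVModelForCausalLM.from_pretrained(model_dir)
tokenizer = AutoTokenizer.from_pretrained(model_dir)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))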

Compatibility

The provided OpenVINO™ IR model is compatible with:

  • OpenVINO version 2025.2.0 and higher
  • Optimum Intel 1.23.0 and higher
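To confirm that the installed versions meet these requirements, you can query package metadata with the standard library; a minimal sketch:

# Check installed package versions against the compatibility list above.
from importlib.metadata import version

print(version("openvino"))        # expect 2025.2.0 or newer
print(version("optimum-intel"))   # expect 1.23.0 or newer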

Running Model Inference with OpenVINO GenAI

  1. Install the packages required for using OpenVINO GenAI:
pip install -U openvino openvino-tokenizers openvino-genai
pip install huggingface_hub
  2. Download the model from the Hugging Face Hub:
import huggingface_hub as hf_hub
model_id = "bweng/phi-4-mini-instruct-int4-ov-npu"
model_path = "phi-4-mini-instruct-int4-ov"
hf_hub.snapshot_download(model_id, local_dir=model_path)
  3. Run model inference:
import openvino_genai as ov_genai

device = "NPU"
# MAX_PROMPT_LEN and CACHE_DIR are NPU-specific pipeline properties
pipe = ov_genai.LLMPipeline(model_path, device, MAX_PROMPT_LEN=4096, CACHE_DIR="./cache")

# Create a proper GenerationConfig object
gen_config = ov_genai.GenerationConfig(apply_chat_template=True, max_new_tokens=1024)

# Now call generate with the correct config object
output = pipe.generate("How are you doing?", generation_config=gen_config)
print(output)
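For interactive output, the same pipeline accepts a streamer callback; a minimal sketch reusing the pipe and gen_config objects above (in recent releases, returning False from the callback means keep generating):

# Stream decoded chunks as they are produced instead of waiting
# for the full completion. The prompt is illustrative.
def streamer(subword):
    print(subword, end="", flush=True)
    return False  # False/None: continue generating

pipe.generate("Tell me a short story.", generation_config=gen_config, streamer=streamer)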

More GenAI usage examples can be found in the OpenVINO GenAI library docs and samples.
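One such pattern is multi-turn chat via the pipeline's built-in chat mode; a short sketch reusing the objects above (the prompts are illustrative):

# start_chat()/finish_chat() keep conversation history between
# generate() calls, so follow-up prompts see earlier turns.
pipe.start_chat()
print(pipe.generate("What is OpenVINO?", generation_config=gen_config))
print(pipe.generate("Summarize that in one sentence.", generation_config=gen_config))
pipe.finish_chat()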

You can find more detailed usage examples in the OpenVINO Notebooks.

Limitations

Check the original model card for limitations.

Legal information

The original model is distributed under the MIT license. More details can be found in the original model card.

Disclaimer

Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See Intel’s Global Human Rights Principles. Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
