OpenVINO NPU Collection

Models converted to run directly on Intel NPUs through OpenVINO. The models in this collection were tested for compatibility on an Intel Core Ultra 7 consumer laptop.
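A quick way to confirm that OpenVINO sees the NPU on a given machine is to list the available inference devices (a minimal sketch using the standard OpenVINO Python API; it assumes the NPU driver is installed):

import openvino as ov

core = ov.Core()
# Prints the devices OpenVINO detects, e.g. ['CPU', 'GPU', 'NPU']
# on a Core Ultra laptop with a working NPU driver.
print(core.available_devices)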
Model creator: openai
Original model: https://huggingface.co/openai/whisper-large-v3-turbo

The model was converted to OpenVINO IR with int8 weight compression using the following command:
optimum-cli export openvino --trust-remote-code --model openai/whisper-large-v3-turbo --weight-format int8 --disable-stateful whisper-large-v3-turbo
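As a quick smoke test of the export (a sketch, not part of the original card), the resulting local folder, named whisper-large-v3-turbo in the command above, can be loaded back with Optimum Intel:

from optimum.intel.openvino import OVModelForSpeechSeq2Seq

# Loads the exported OpenVINO IR from the local export directory.
model = OVModelForSpeechSeq2Seq.from_pretrained("whisper-large-v3-turbo")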
The provided OpenVINO™ IR model is compatible with both Optimum Intel and OpenVINO GenAI, as shown below.
Running Model Inference with Optimum Intel

1. Install the packages required for using Optimum Intel with the OpenVINO backend:

pip install optimum[openvino]

2. Run model inference:
from datasets import load_dataset
from transformers import AutoProcessor
from optimum.intel.openvino import OVModelForSpeechSeq2Seq

model_id = "bweng/whisper-large-v3-turbo-int8-ov"
processor = AutoProcessor.from_pretrained(model_id)
model = OVModelForSpeechSeq2Seq.from_pretrained(model_id)

dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation", trust_remote_code=True)
sample = dataset[0]

# Extract log-mel input features from the raw audio.
input_features = processor(
    sample["audio"]["array"],
    sampling_rate=sample["audio"]["sampling_rate"],
    return_tensors="pt",
).input_features

outputs = model.generate(input_features)
text = processor.batch_decode(outputs, skip_special_tokens=True)[0]
print(text)
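Optimum Intel compiles the model for CPU by default. Since this collection targets NPUs, the target device can be selected at load time; a sketch, assuming your OpenVINO version and NPU driver support this pipeline on NPU:

# Compile the model for the NPU instead of the default CPU device.
model = OVModelForSpeechSeq2Seq.from_pretrained(model_id, device="NPU")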
Running Model Inference with OpenVINO GenAI

1. Install the packages required for using OpenVINO GenAI:

pip install huggingface_hub
pip install -U --pre --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly openvino openvino-tokenizers openvino-genai

2. Download the model from the HuggingFace Hub:
import huggingface_hub as hf_hub

model_id = "bweng/whisper-large-v3-turbo-int8-ov"
model_path = "whisper-large-v3-turbo-int8-ov"
hf_hub.snapshot_download(model_id, local_dir=model_path)
3. Run model inference:

import openvino_genai as ov_genai
from datasets import load_dataset

device = "NPU"
pipe = ov_genai.WhisperPipeline(model_path, device)

dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation", trust_remote_code=True)
sample = dataset[0]["audio"]["array"]
print(pipe.generate(sample))
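pipe.generate also accepts Whisper generation parameters. A minimal sketch (parameter names follow OpenVINO GenAI's WhisperGenerationConfig; exact support may vary between releases):

# Force English transcription and cap the output length.
result = pipe.generate(sample, max_new_tokens=100, language="<|en|>", task="transcribe")
print(result)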
More GenAI usage examples can be found in the OpenVINO GenAI library docs and samples.
Check the original model card for limitations.
Base model: openai/whisper-large-v3