My project OpenArc, an inference engine for OpenVINO, now supports this model and serves inference over OpenAI-compatible endpoints for text-to-text and text-with-vision!
We have a growing Discord community of others interested in using Intel for AI/ML.
- Find documentation on the Optimum-CLI export process here
- Use my HF space Echo9Zulu/Optimum-CLI-Tool_tool to build commands and execute locally
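For reference, an Optimum-CLI export to OpenVINO IR might look like the following. This is a sketch only: the exact flags, dataset, and values used to produce the weights in this repo are assumptions inferred from the variant name (int4 asymmetric, AWQ, scale estimation).

```shell
# Sketch: export to OpenVINO IR with int4 weights, AWQ, and scale
# estimation. The dataset and output directory are illustrative.
optimum-cli export openvino \
  --model deepseek-ai/DeepSeek-R1-0528-Qwen3-8B \
  --weight-format int4 \
  --awq \
  --scale-estimation \
  --dataset wikitext2 \
  DeepSeek-R1-0528-Qwen3-8B-int4_asym-awq-se-ov
```

Note that `--awq` and `--scale-estimation` are data-aware compression methods, which is why a calibration `--dataset` is passed alongside them.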
This repo contains OpenVINO quantized versions of DeepSeek-R1-0528-Qwen3-8B.

I recommend starting with DeepSeek-R1-0528-Qwen3-8B-int4_asym-awq-se-ov.
To download individual models from this repo, use the provided snippet:
```python
from huggingface_hub import snapshot_download

repo_id = "Echo9Zulu/DeepSeek-R1-0528-Qwen3-8B-OpenVINO"

# Choose the weights you want (the folder name inside the repo)
repo_directory = "DeepSeek-R1-0528-Qwen3-8B-int4_asym-awq-se-ov"

# Where you want to save it
local_dir = "./Echo9Zulu_DeepSeek-R1-0528-Qwen3-8B/DeepSeek-R1-0528-Qwen3-8B-int4_asym-awq-se-ov"

snapshot_download(
    repo_id=repo_id,
    allow_patterns=[f"{repo_directory}/*"],
    local_dir=local_dir,
)

print("Download complete!")
```
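Once downloaded, the weights can be loaded through Optimum-Intel. A minimal sketch, assuming a local CPU device and illustrative generation settings (the path must point at the variant folder you downloaded):

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

# Path to the downloaded OpenVINO IR weights (example variant)
model_dir = "./Echo9Zulu_DeepSeek-R1-0528-Qwen3-8B/DeepSeek-R1-0528-Qwen3-8B-int4_asym-awq-se-ov"

# Compile the model for the chosen OpenVINO device ("CPU", "GPU", ...)
model = OVModelForCausalLM.from_pretrained(model_dir, device="CPU")
tokenizer = AutoTokenizer.from_pretrained(model_dir)

inputs = tokenizer("Explain OpenVINO in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This runs the model directly in-process; for serving over OpenAI-compatible endpoints, use OpenArc as described above.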
Base model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B