
# Llama-2-7b-chat ONNX models for DirectML

This repository hosts optimized ONNX versions of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) to accelerate inference with ONNX Runtime and the DirectML execution provider.

## Usage on Windows (Intel / AMD / NVIDIA / Qualcomm)

```powershell
# Create and activate a Python environment
conda create -n onnx python=3.10
conda activate onnx

# Install Git LFS, used by the Hugging Face download
winget install -e --id GitHub.GitLFS

# Download the optimized model with the Hugging Face CLI
pip install "huggingface-hub[cli]"
huggingface-cli download EmbeddedLLM/llama-2-7b-chat-int4-onnx-directml --local-dir .\llama-2-7b-chat

# Install dependencies and fetch the sample chat script
pip install numpy==1.26.4
Invoke-WebRequest -Uri "https://raw.githubusercontent.com/microsoft/onnxruntime-genai/main/examples/python/phi3-qa.py" -OutFile "phi3-qa.py"
pip install onnxruntime-directml
pip install --pre onnxruntime-genai-directml
conda install conda-forge::vs2015_runtime

# Run an interactive question-answer loop against the local model
python phi3-qa.py -m .\llama-2-7b-chat
```
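
The `phi3-qa.py` script is a generic onnxruntime-genai chat loop, so it works with this Llama-2 model directory despite its name. If you would rather call onnxruntime-genai directly, the sketch below shows a minimal generation loop. It is an illustration, not part of the official instructions: it assumes the API surface used by the example scripts of that era (`GeneratorParams`, `compute_logits`, `generate_next_token`), which has changed across releases, and the prompt string is a hypothetical example using Llama-2-chat's `[INST]` template.

```python
import onnxruntime_genai as og

# Load the DirectML-optimized model directory downloaded above.
model = og.Model(r".\llama-2-7b-chat")
tokenizer = og.Tokenizer(model)
tokenizer_stream = tokenizer.create_stream()

# Llama-2-chat models expect the [INST] prompt template.
prompt = "<s>[INST] What is DirectML? [/INST]"  # hypothetical example prompt

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)
params.input_ids = tokenizer.encode(prompt)

generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()
    # Stream each newly generated token to stdout as it is produced.
    print(tokenizer_stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
print()
```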

## What is DirectML?

DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning. It provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers, including all DirectX 12-capable GPUs from vendors such as AMD, Intel, NVIDIA, and Qualcomm.
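
As a quick sanity check (not part of the official instructions above), you can confirm that your `onnxruntime-directml` install exposes the DirectML backend by listing the available execution providers:

```python
import onnxruntime as ort

# With the onnxruntime-directml package installed, the list printed
# here should include 'DmlExecutionProvider'.
print(ort.get_available_providers())
```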
