LLM On-Device Deployment
In this tutorial we will show an end-to-end workflow for deploying large language models (LLMs) to Snapdragon® platforms such as Snapdragon® 8 Elite, Snapdragon® 8 Gen 3 (e.g., the Samsung Galaxy S24 family), and Snapdragon® X Elite (e.g., Snapdragon®-based Microsoft Surface Pro). We will use Qualcomm AI Hub to compile the models to QAIRT context binaries and run them with Genie from the QAIRT SDK.
We will use Llama 3 8B as a running example. Other LLMs from AI Hub Models will work with the same flow.
Overview
We will walk you through the following steps:
- Get access to Llama 3 weights from Hugging Face.
- Use Qualcomm AI Hub Models to export Llama 3 using AI Hub.
- Prepare assets required by Qualcomm Genie, the inference runtime for LLMs.
- Run the LLM on device with an example prompt on Android / Windows PC with Snapdragon®.
Note that because this is a large model, it may take 4-6 hours to generate required assets.
If you have any questions, please feel free to post on the AI Hub Slack channel.
Device Requirements
Model name | Minimum QAIRT SDK version for compilation | Supported devices |
---|---|---|
Llama-v2-7B-Chat | 2.27.0 | Snapdragon® 8 Elite, Snapdragon® 8 Gen 3, Snapdragon® X Elite, Snapdragon® X Plus |
Llama-v3-8B-Instruct | 2.27.0 | Snapdragon® 8 Elite, Snapdragon® X Elite, Snapdragon® X Plus |
Llama-v3.1-8B-Instruct | 2.27.7 | Snapdragon® 8 Elite |
Llama-v3.1-8B-Instruct | 2.28.0 | Snapdragon® X Elite, Snapdragon® X Plus |
Llama-v3.2-3B-Instruct | 2.27.7 | Snapdragon® 8 Elite, Snapdragon® 8 Gen 3 (context length 2048) |
Llama-v3.2-3B-Instruct | 2.28.0 | Snapdragon® X Elite, Snapdragon® X Plus |
Llama-SEA-LION-v3.5-8B-R | 2.28.0 | Snapdragon® 8 Elite, Snapdragon® X Elite, Snapdragon® X Plus |
Llama3-TAIDE-LX-8B-Chat-Alpha1 | 2.27.0 | Snapdragon® 8 Elite, Snapdragon® X Elite, Snapdragon® X Plus |
Baichuan2-7B | 2.27.7 | Snapdragon® 8 Elite |
Qwen2-7B-Instruct | 2.27.7 | Snapdragon® 8 Elite |
Mistral-7B-Instruct-v0.3 | 2.27.7 | Snapdragon® 8 Elite |
Phi-3.5-Mini-Instruct | 2.29.0 | Snapdragon® 8 Elite, Snapdragon® X Elite, Snapdragon® 8 Gen 3 |
IBM-Granite-v3.1-8B-Instruct | 2.30.0 | Snapdragon® 8 Elite, Snapdragon® X Elite |
Device requirements:
- Android 15
- Genie from QAIRT (or QNN) SDK 2.29.0 or later (earlier versions have issues with long prompts).
- Hexagon architecture v73 or above (please see the Devices list).
- 16GB of memory or more for 7B+ models or a 4096-token context length.
- 12GB of memory or more for 3B+ models (you may need to reduce the context length).
Please make sure device requirements are met before proceeding.
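For Android targets, you can quickly confirm the OS version and chipset over ADB before proceeding (a minimal check, assuming adb is installed and the device is connected):
# Android version (should be 15)
adb shell getprop ro.build.version.release
# SoC identifier, to confirm the Snapdragon generation
adb shell getprop ro.soc.model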
Required Software
The following packages are required:
- QAIRT SDK (see QNN SDK for versions prior to 2.32)
- qai-hub-models and any extras for your desired model.
- qai-hub
QAIRT Installation
Typically we recommend using the same QAIRT SDK version that AI Hub used to compile the assets. You can find this version by clicking the job links printed by the export command.
Go to QAIRT SDK (or QNN SDK for older versions) and follow the installation instructions. Note that the first time you log in, you will be redirected to the QPM home page. Click on the link again to get to the QAIRT download page.
If you are on a Mac laptop, we recommend using Docker to install qpm-cli to extract the .qik file.
If successful, you should see a message with the install path. This will depend on the platform and can look like this:
/opt/qcom/aistack/qairt/<version>
C:\Qualcomm\AIStack\QAIRT\<version>
Set your QNN_SDK_ROOT environment variable to point to this directory. On Linux or Mac you would run:
export QNN_SDK_ROOT=/opt/qcom/aistack/qairt/<version>
On Windows, you can search the taskbar for "Edit the system environment variables".
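To sanity-check the variable on Linux or Mac, list the SDK's top-level folders (the exact layout can vary by SDK version):
ls "$QNN_SDK_ROOT"       # expect folders such as bin, lib, and include
ls "$QNN_SDK_ROOT/bin"   # per-target subfolders containing genie-t2t-run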
Python Packages
Following standard best practices, we recommend creating a virtual environment specifically for exporting AI Hub models. The following steps can be performed on Windows, Linux, or Mac. On Windows, you can either install x86-64 Python (since package support is limited on native ARM64 Python) or use Windows Subsystem for Linux (WSL).
Create Virtual Environment
Create a virtual environment for qai-hub-models with Python 3.10 (you can also use conda):
python3.10 -m venv llm_on_genie_venv
Install qai-hub-models
In a shell session, install qai-hub-models
in the virtual environment:
source llm_on_genie_venv/bin/activate
pip install -U "qai-hub-models[llama-v3-8b-instruct]"
Replace llama-v3-8b-instruct with the desired Llama model from AI Hub Models. Note that underscores in the model ID are replaced with hyphens in the extra name (e.g., llama_v3_8b_instruct -> llama-v3-8b-instruct).
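The export step below uploads the model to AI Hub for compilation, so your AI Hub API token must be configured. If you have not done this before (the token is available from your AI Hub account settings):
# One-time client setup; paste the API token from your AI Hub account
qai-hub configure --api_token <your-api-token>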
Make sure Git is installed in your environment. This command should work:
git --version
Ensure at least 80GB of memory (RAM + swap). On Ubuntu (including through WSL) you can check this with:
free -h
Increase swap size if needed.
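For example, on Ubuntu you can add a swap file along these lines (a sketch, assuming sufficient free disk space; size it so RAM + swap reaches 80GB):
# Create and enable a 64 GB swap file (requires root)
sudo fallocate -l 64G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
free -h   # verify the new total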
We use qai-hub-models to adapt Hugging Face Llama models for on-device inference.
Acquire Genie-Compatible QNN Binaries from AI Hub
[Llama Only] Set Up Hugging Face Token
Setting up a Hugging Face token is required only for the Llama model family. Request model access on Hugging Face for Llama models. For instance, you can apply here for access to the Llama 3.2 3B model.
Set up your Hugging Face token locally by following the instructions here.
Download or Generate Genie Compatible QNN Binaries
Some of the models can be downloaded directly from AI Hub. Llama models have to be exported through AI Hub Models.
To generate the Llama assets, we will run a single command that performs the following steps:
- Download model weights from Hugging Face. You will need to sign the Llama license if you haven't already done so.
- Upload models to AI Hub for compilation.
- Download compiled context binaries. Note that there are multiple binaries, as we have split up the model.
Make a directory to hold all deployable assets. For this example we use:
mkdir -p genie_bundle
[Optional] Upgrade PyTorch
The export command below may take 4-6 hours. It takes an additional 1-2 hours on PyTorch versions earlier than 2.4.0. We recommend upgrading PyTorch first:
pip install torch==2.4.0
This version is not yet generally supported by AI Hub Models, but it will work for the export command below.
Note that the export also requires a lot of memory (RAM + swap) on the host device (for Llama 3, we recommend 80 GB). If we detect that you have less memory than recommended, the export command will print a warning with instructions on how to increase your swap space.
For Android on Snapdragon® 8 Elite
python -m qai_hub_models.models.llama_v3_8b_instruct.export --device "Snapdragon 8 Elite QRD" --skip-inferencing --skip-profiling --output-dir genie_bundle
For Snapdragon 8 Gen 3, please use --device "Snapdragon 8 Gen 3 QRD".
For Windows on Snapdragon® X Elite
python -m qai_hub_models.models.llama_v3_8b_instruct.export --device "Snapdragon X Elite CRD" --skip-inferencing --skip-profiling --output-dir genie_bundle
Note: For older devices, you may need to adjust the context length using --context-length <context-length>.
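For example, a hypothetical export of Llama 3.2 3B for Snapdragon® 8 Gen 3 with its supported 2048-token context length (the model ID llama_v3_2_3b_instruct is an assumption; substitute the ID of your chosen model):
# Hypothetical model ID; adjust to your model and device
python -m qai_hub_models.models.llama_v3_2_3b_instruct.export \
    --device "Snapdragon 8 Gen 3 QRD" \
    --context-length 2048 \
    --skip-inferencing --skip-profiling \
    --output-dir genie_bundle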
The genie_bundle directory will now contain both the intermediate models (token, prompt) and the final context binaries (*.bin). Remove the intermediate models to get a smaller deployable artifact:
# Remove intermediate assets
rm -rf genie_bundle/{prompt,token}
Prepare Genie Configs
Tokenizer
To download the tokenizer, go to the source model's Hugging Face page and open "Files and versions." You can find a Hugging Face link through the model card on AI Hub. This will take you to the Qualcomm Hugging Face page, which in turn will have a link to the source Hugging Face page. The file is named tokenizer.json and should be downloaded to the genie_bundle directory. The tokenizers are only hosted on the source Hugging Face page.
Model name | Tokenizer | Notes |
---|---|---|
Llama-v2-7B-Chat | tokenizer.json | |
Llama-v3-8B-Instruct | tokenizer.json | |
Llama-v3.1-8B-Instruct | tokenizer.json | |
Llama-SEA-LION-v3.5-8B-R | tokenizer.json | |
Llama-v3.2-3B-Instruct | tokenizer.json | |
Llama3-TAIDE-LX-8B-Chat-Alpha1 | tokenizer.json | |
Baichuan2-7B | tokenizer.json | |
Qwen2-7B-Instruct | tokenizer.json | |
Phi-3.5-Mini-Instruct | tokenizer.json | To see appropriate spaces in the output, remove lines 193-196 (the Strip rule) in the tokenizer file. |
Mistral-7B-Instruct-v0.3 | tokenizer.json | |
IBM-Granite-v3.1-8B-Instruct | tokenizer.json | |
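Alternatively, if you have the Hugging Face CLI installed and have already accepted the model license, you can fetch the tokenizer from the command line (shown for Llama 3 8B; substitute your source model's repo ID):
# Downloads only tokenizer.json into genie_bundle/
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct tokenizer.json --local-dir genie_bundle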
[Optional] Use the Windows PowerShell LLM Runner
Do not use this script to create your Genie bundle if you are building the Windows ChatApp. Continue with the rest of the tutorial instead.
The easiest path to running an LLM on a Windows on Snapdragon® device is to use the PowerShell implementation of the rest of this tutorial. It will automatically generate the appropriate configuration files and execute genie-t2t-run.exe on a prompt of your choosing.
Genie Config
Check out the AI Hub Apps repository using Git:
git clone https://github.com/quic/ai-hub-apps.git
Now run (replacing llama_v3_8b_instruct with the desired model ID):
cp ai-hub-apps/tutorials/llm_on_genie/configs/genie/llama_v3_8b_instruct.json genie_bundle/genie_config.json
For Windows laptops, please set use-mmap to false.
If you customized the context length by adding --context-length to the export command, please open genie_config.json and modify the "size" option (under "dialog" -> "context") to be consistent.
In genie_bundle/genie_config.json, also ensure that the list of bin files in ctx-bins matches the bin files under genie_bundle. Genie will look for the QNN binaries specified here.
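A quick way to cross-check both settings (a sketch; it assumes the genie_config.json layout described above):
# List the context binaries that must appear in "ctx-bins"
ls genie_bundle/*.bin
# Hypothetical helper: set the context size to match --context-length (here, 2048)
python3 - <<'EOF'
import json
path = "genie_bundle/genie_config.json"
with open(path) as f:
    cfg = json.load(f)
cfg["dialog"]["context"]["size"] = 2048  # keep consistent with the export command
with open(path, "w") as f:
    json.dump(cfg, f, indent=4)
EOF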
HTP Backend Config
Copy the HTP config template:
cp ai-hub-apps/tutorials/llm_on_genie/configs/htp/htp_backend_ext_config.json.template genie_bundle/htp_backend_ext_config.json
Edit soc_model and dsp_arch in genie_bundle/htp_backend_ext_config.json depending on your target device (they should be consistent with the --device you specified in the export command):
Generation | soc_model | dsp_arch |
---|---|---|
Snapdragon® 8 Gen 2 | 43 | v73 |
Snapdragon® 8 Gen 3 | 57 | v75 |
Snapdragon® 8 Elite | 69 | v79 |
Snapdragon® X Elite | 60 | v73 |
Snapdragon® X Plus | 60 | v73 |
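After editing, you can confirm the values took effect (assuming the field names in the config match the column headers above):
grep -E '"soc_model"|"dsp_arch"' genie_bundle/htp_backend_ext_config.json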
Collect & Finalize Genie Bundle
When finished with the above steps, your bundle should look like this:
genie_bundle/
genie_config.json
htp_backend_ext_config.json
tokenizer.json
<model_id>_part_1_of_N.bin
...
<model_id>_part_N_of_N.bin
where <model_id> is the name of the model; it matches the name of the JSON file you copied from configs/genie/<model_name>.json.
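Before copying the bundle to a device, a quick sanity check that the required files are present (a minimal sketch):
# Report any missing bundle files
for f in genie_config.json htp_backend_ext_config.json tokenizer.json; do
  [ -f "genie_bundle/$f" ] || echo "missing: $f"
done
ls genie_bundle/*.bin > /dev/null || echo "missing: context binaries"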
Run LLM on Device
You have three options to run the LLM on device:
- Use the genie-t2t-run CLI command.
- Use the CLI Windows ChatApp (Windows only).
- Use the Android ChatApp.
Prompt Formats
These LLMs all use different prompt formats. To get sensible output, it is important to use the correct prompt format for the model. The formats can also be found in the Hugging Face repository for each model. Samples for a few models are given below.
Model name | Sample Prompt |
---|---|
Llama-v2-7B-Chat | <s>[INST] <<SYS>>You are a helpful AI Assistant.<</SYS>>[/INST]</s><s>[INST]What is France's capital?[/INST] |
Llama-v3-8B-Instruct, Llama-v3.1-8B-Instruct, Llama-v3.2-3B-Instruct | <\|begin_of_text\|><\|start_header_id\|>user<\|end_header_id\|>\n\nWhat is France's capital?<\|eot_id\|><\|start_header_id\|>assistant<\|end_header_id\|> |
Llama3-TAIDE-LX-8B-Chat-Alpha1 | <\|begin_of_text\|><\|start_header_id\|>system<\|end_header_id\|>\n\n你是一個來自台灣的AI助理,你的名字是 TAIDE,樂於以台灣人的立場幫助使用者,會用繁體中文回答問題<\|eot_id\|>\n<\|start_header_id\|>user<\|end_header_id\|>\n\n介紹台灣特色<\|eot_id\|>\n<\|start_header_id\|>assistant<\|end_header_id\|> |
Llama-SEA-LION-v3.5-8B-R (non-thinking mode) | <\|begin_of_text\|><\|start_header_id\|>system<\|end_header_id\|>\n\ndetailed thinking off<\|eot_id\|><\|start_header_id\|>user<\|end_header_id\|>\n\nThủ đô của Việt Nam là thành phố nào?<\|eot_id\|><\|start_header_id\|>assistant<\|end_header_id\|>\n\n<think>\n\n</think>\n\n |
Llama-SEA-LION-v3.5-8B-R (thinking mode) | <\|begin_of_text\|><\|start_header_id\|>system<\|end_header_id\|>\n\ndetailed thinking on<\|eot_id\|><\|start_header_id\|>user<\|end_header_id\|>\n\nThủ đô của Việt Nam là thành phố nào?<\|eot_id\|><\|start_header_id\|>assistant<\|end_header_id\|>\n\n<think>\nHere is my thinking:\n |
Qwen2-7B-Instruct | <\|im_start\|>system\nYou are a helpful AI Assistant<\|im_end\|><\|im_start\|>user\nWhat is France's capital?\n<\|im_end\|>\n<\|im_start\|>assistant\n |
Phi-3.5-Mini-Instruct | <\|system\|>\nYou are a helpful assistant. Be helpful but brief.<\|end\|>\n<\|user\|>What is France's capital?\n<\|end\|>\n<\|assistant\|>\n |
Mistral-7B-Instruct-v0.3 | <s>[INST] You are a helpful assistant\n\nTranslate 'Good morning, how are you?' into French.[/INST] |
IBM-Granite-v3.1-8B-Instruct | <\|start_of_role\|>system<\|end_of_role\|>You are a helpful AI assistant.<\|end_of_text\|>\n<\|start_of_role\|>user<\|end_of_role\|>What is France's capital?<\|end_of_text\|>\n<\|start_of_role\|>assistant<\|end_of_role\|>\n |
1. Run Genie On-Device via genie-t2t-run
Genie on Windows with Snapdragon® X
Copy Genie's shared libraries and executable to our bundle. (Note: you can skip this step if you used the PowerShell script to prepare your bundle.)
cp $QNN_SDK_ROOT/lib/hexagon-v73/unsigned/* genie_bundle
cp $QNN_SDK_ROOT/lib/aarch64-windows-msvc/* genie_bundle
cp $QNN_SDK_ROOT/bin/aarch64-windows-msvc/genie-t2t-run.exe genie_bundle
In PowerShell, navigate to the bundle directory and run:
./genie-t2t-run.exe -c genie_config.json -p "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nWhat is France's capital?<|eot_id|><|start_header_id|>assistant<|end_header_id|>"
Note that this prompt format is specific to Llama 3.
Genie on Android
Copy Genie's shared libraries and executable to our bundle.
# For 8 Gen 2
cp $QNN_SDK_ROOT/lib/hexagon-v73/unsigned/* genie_bundle
# For 8 Gen 3
cp $QNN_SDK_ROOT/lib/hexagon-v75/unsigned/* genie_bundle
# For 8 Elite
cp $QNN_SDK_ROOT/lib/hexagon-v79/unsigned/* genie_bundle
# For all devices
cp $QNN_SDK_ROOT/lib/aarch64-android/* genie_bundle
cp $QNN_SDK_ROOT/bin/aarch64-android/genie-t2t-run genie_bundle
Copy genie_bundle from the host machine to the target device using ADB and open an interactive shell on the target device:
adb push genie_bundle /data/local/tmp
adb shell
On device, navigate to the bundle directory:
cd /data/local/tmp/genie_bundle
Set LD_LIBRARY_PATH and ADSP_LIBRARY_PATH to the current directory:
export LD_LIBRARY_PATH=$PWD
export ADSP_LIBRARY_PATH=$PWD
Then run:
./genie-t2t-run -c genie_config.json -p "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nWhat is France's capital?<|eot_id|><|start_header_id|>assistant<|end_header_id|>"
Sample Output
Using libGenie.so version 1.1.0
[WARN] "Unable to initialize logging in backend extensions."
[INFO] "Using create From Binary List Async"
[INFO] "Allocated total size = 323453440 across 10 buffers"
[PROMPT]: <|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nWhat is France's capital?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
[BEGIN]: \n\nFrance's capital is Paris.[END]
[KPIS]:
Init Time: 6549034 us
Prompt Processing Time: 196067 us, Prompt Processing Rate : 86.707710 toks/sec
Token Generation Time: 740568 us, Token Generation Rate: 12.152884 toks/sec
2. Sample C++ Chat App Powered by Genie SDK
We provide a sample C++ app to show how to build an application using the Genie SDK. See CLI Windows ChatApp for more details.
3. Sample Android Chat App Powered by Genie SDK
We provide a sample Android app (Java and C++) to show how to build an application using the Genie SDK for mobile. See Android ChatApp for more details.