# GPU deployment using Vulkan
IREE can accelerate model execution on GPUs via Vulkan, a low-overhead graphics and compute API. Vulkan is cross-platform: it is available on many operating systems, including Android, Linux, and Windows. Vulkan is also cross-vendor: it is supported by most GPU vendors, including AMD, ARM, Intel, NVIDIA, and Qualcomm.
## Support matrix
As IREE and the compiler ecosystem it operates within mature, more target-specific optimizations will be implemented. At this stage, expect reasonable performance across all GPUs, with improvements over time for specific vendors and architectures.
| GPU Vendor | Category | Performance | Focus Architecture |
| --- | --- | --- | --- |
| ARM Mali GPU | Mobile | Good | Valhall+ |
| Qualcomm Adreno GPU | Mobile | Reasonable | 640+ |
| AMD GPU | Desktop/server | Good | RDNA+ |
| NVIDIA GPU | Desktop/server | Reasonable | Turing+ |
## Prerequisites
In order to use Vulkan to drive the GPU, you need a functional Vulkan environment. IREE requires Vulkan 1.1 on Android and 1.2 elsewhere. You can verify this as follows:
- **Android**: Android has mandated Vulkan 1.1 support since Android 10. You just need to make sure the device's Android version is 10 or higher.
- **Linux and Windows**: Run the following command in a shell:

    ```shell
    vulkaninfo | grep apiVersion
    ```

    If `vulkaninfo` does not exist, you will need to install the latest Vulkan SDK. On Linux, installing via LunarG's package repository is recommended, as it places Vulkan libraries and tools under system paths so they are easy to discover. If the listed version is lower than Vulkan 1.2, you will need to update the driver for your GPU.
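For the Android case, a quick way to check a connected device's OS version is via `adb` (a sketch, assuming the Android platform tools are installed and the device is connected):

```shell
# Print the Android version of the attached device; expect 10 or higher.
adb shell getprop ro.build.version.release
```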
## Get the IREE compiler
Vulkan expects programs running on the GPU to be expressed in the SPIR-V binary exchange format, so the model must first be compiled into it.
### Download the compiler from a release
Python packages are distributed through multiple channels. See the Python Bindings page for more details. The core `iree-base-compiler` package includes the SPIR-V compiler.

Stable release packages are published to PyPI:

```shell
python -m pip install iree-base-compiler
```
Nightly pre-releases are published on GitHub releases:

```shell
python -m pip install \
  --find-links https://iree.dev/pip-release-links.html \
  --upgrade --pre iree-base-compiler
```
Development packages are built at every commit and on pull requests, for limited configurations. On Linux with Python 3.11, development packages can be installed into a Python venv using the `build_tools/pkgci/setup_venv.py` script:

```shell
# Install packages from a specific commit ref.
# See also the `--fetch-latest-main` and `--fetch-gh-workflow` options.
python ./build_tools/pkgci/setup_venv.py /tmp/.venv --fetch-git-ref=8230f41d
source /tmp/.venv/bin/activate
```
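Whichever channel you use, a quick sanity check is to confirm that the package imports and that the `iree-compile` tool is reachable (a sketch; adjust paths for your environment):

```shell
# Confirm the compiler package resolves and its CLI tool is on PATH.
python -c "import iree.compiler"
iree-compile --help
```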
Tip

`iree-compile` and other tools are installed to your Python module installation path. If you `pip install` in user mode, that is `${HOME}/.local/bin`, or `%APPDATA%\Python` on Windows. You may want to include the path in your system's `PATH` environment variable:

```shell
export PATH=${HOME}/.local/bin:${PATH}
```
### Build the compiler from source
Please make sure you have followed the Getting started page to build IREE for your host platform. The SPIR-V compiler backend is compiled in by default on all platforms, though you should ensure that the `IREE_TARGET_BACKEND_VULKAN_SPIRV` CMake option is `ON` when configuring.
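For example, a minimal configure-and-build sketch following the Getting started page (the `../iree-build/` directory name is just a convention, not a requirement):

```shell
# Configure with the SPIR-V backend explicitly enabled, then build.
cmake -G Ninja -B ../iree-build/ -S . \
    -DCMAKE_BUILD_TYPE=RelWithDebInfo \
    -DIREE_TARGET_BACKEND_VULKAN_SPIRV=ON
cmake --build ../iree-build/
```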
Tip

`iree-compile` will be built under the `iree-build/tools/` directory. You may want to include this path in your system's `PATH` environment variable.
## Get the IREE runtime
Next you will need to get an IREE runtime that supports the Vulkan HAL driver.
You can check for Vulkan support by looking for a matching driver and device:
```console
$ iree-run-module --list_drivers

# ============================================================================
# Available HAL drivers
# ============================================================================
# Use --list_devices={driver name} to enumerate available devices.

cuda: NVIDIA CUDA HAL driver (via dylib)
hip: HIP HAL driver (via dylib)
local-sync: Local execution using a lightweight inline synchronous queue
local-task: Local execution using the IREE multithreading task system
vulkan: Vulkan 1.x (dynamic)

$ iree-run-module --list_devices

hip://GPU-00000000-1111-2222-3333-444444444444
local-sync://
local-task://
vulkan://00000000-1111-2222-3333-444444444444
```
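As the listing above hints, you can also enumerate devices for just the Vulkan driver:

```shell
# List only devices exposed by the Vulkan HAL driver.
iree-run-module --list_devices=vulkan
```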
### Download the runtime from a release
Python packages are distributed through multiple channels. See the Python Bindings page for more details. The core `iree-base-runtime` package includes the Vulkan HAL driver.

Stable release packages are published to PyPI:

```shell
python -m pip install iree-base-runtime
```
Nightly pre-releases are published on GitHub releases:

```shell
python -m pip install \
  --find-links https://iree.dev/pip-release-links.html \
  --upgrade --pre iree-base-runtime
```
Development packages are built at every commit and on pull requests, for limited configurations. On Linux with Python 3.11, development packages can be installed into a Python venv using the `build_tools/pkgci/setup_venv.py` script:

```shell
# Install packages from a specific commit ref.
# See also the `--fetch-latest-main` and `--fetch-gh-workflow` options.
python ./build_tools/pkgci/setup_venv.py /tmp/.venv --fetch-git-ref=8230f41d
source /tmp/.venv/bin/activate
```
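As with the compiler, a quick sanity check (a sketch) is to confirm the runtime package imports and that the Vulkan HAL driver is present:

```shell
# Confirm the runtime package resolves and the Vulkan driver is listed.
python -c "import iree.runtime"
iree-run-module --list_drivers | grep vulkan
```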
### Build the runtime from source
Please make sure you have followed one of the Building from source pages to build IREE for your target platform. The Vulkan HAL driver is compiled in by default on supported platforms, though you should ensure that the `IREE_HAL_DRIVER_VULKAN` CMake option is `ON` when configuring.
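A configure sketch mirroring the compiler build above, with the Vulkan HAL driver explicitly enabled:

```shell
# Configure with the Vulkan HAL driver explicitly enabled, then build.
cmake -G Ninja -B ../iree-build/ -S . \
    -DCMAKE_BUILD_TYPE=RelWithDebInfo \
    -DIREE_HAL_DRIVER_VULKAN=ON
cmake --build ../iree-build/
```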
## Compile and run a program
With the requirements out of the way, we can now compile a model and run it.
### Compile a program
The IREE compiler transforms a model into its final deployable format in several sequential steps. A model authored in a Python ML framework must first be converted, using the corresponding framework's import tool, into a format the IREE compiler expects (i.e., MLIR).
Using a MobileNet model as an example, import it using IREE's ONNX importer:

```shell
# Download the model you want to compile and run.
wget https://github.com/onnx/models/raw/refs/heads/main/validated/vision/classification/mobilenet/model/mobilenetv2-10.onnx

# Import to MLIR using IREE's ONNX importer.
pip install iree-base-compiler[onnx]
iree-import-onnx mobilenetv2-10.onnx --opset-version 17 -o mobilenetv2.mlir
```
Then run the following command to compile with the `vulkan-spirv` target:

```shell
iree-compile \
    --iree-hal-target-backends=vulkan-spirv \
    --iree-vulkan-target=<...> \
    mobilenetv2.mlir -o mobilenet_vulkan.vmfb
```
Tip - Vulkan targets

The `--iree-vulkan-target` flag specifies the GPU architecture to target. It accepts a few schemes:

- LLVM CodeGen backend style: LLVM AMDGPU/NVPTX CodeGen targets like `gfx1100` for the AMD RX 7900XTX and `sm_86` for NVIDIA RTX 3090 GPUs.
- Architecture code name style: names like `rdna3`/`valhall4`/`ampere`/`adreno` for AMD/ARM/NVIDIA/Qualcomm GPUs.
- Product name style: e.g., `rx7900xtx`/`a100` for the corresponding GPUs.
Here are a few examples showing how you can target various recent common GPUs:
| GPU | Target Architecture | Architecture Code Name | Product Name |
| --- | --- | --- | --- |
| AMD RX7900XTX | `gfx1100` | `rdna3` | `rx7900xtx` |
| AMD RX7900XT | `gfx1100` | `rdna3` | `rx7900xt` |
| AMD RX7800XT | `gfx1101` | `rdna3` | `rx7800xt` |
| AMD RX7700XT | `gfx1101` | `rdna3` | `rx7700xt` |
| AMD RX6000 series | | `rdna2` | |
| AMD RX5000 series | | `rdna1` | |
| ARM Mali G715 | | `valhall4` | e.g., `mali-g715` |
| ARM Mali G510 | | `valhall3` | e.g., `mali-g510` |
| ARM GPUs | | `valhall` | |
| NVIDIA RTX40 series | `sm_89` | `ada` | e.g., `rtx4090` |
| NVIDIA RTX30 series | `sm_86` | `ampere` | e.g., `rtx3080ti` |
| NVIDIA RTX20 series | `sm_75` | `turing` | e.g., `rtx2070super` |
| Qualcomm GPUs | | `adreno` | |
If no target is specified, then a safe but more limited default will be used.
Note that we don't support the full spectrum of GPUs here, and it is impossible to capture all details of a Vulkan implementation with a target triple, given the allowed variances in extensions, properties, limits, etc. The target triple is thus just an approximation; it is more of a mechanism to help us develop IREE itself. In the long term we want to perform multi-targeting and generate code for multiple architectures when no target is given.
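Putting it together, here is a concrete invocation targeting an AMD RDNA3 GPU, using the architecture code name from the table above:

```shell
# Compile the imported MobileNet model for an AMD RDNA3 GPU.
iree-compile \
    --iree-hal-target-backends=vulkan-spirv \
    --iree-vulkan-target=rdna3 \
    mobilenetv2.mlir -o mobilenet_vulkan.vmfb
```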
### Run a compiled program
To run the compiled program:

```shell
iree-run-module \
    --device=vulkan \
    --module=mobilenet_vulkan.vmfb \
    --function=torch-jit-export \
    --input="1x3x224x224xf32=0"
```
The above assumes the exported function in the model is named `torch-jit-export` and that it expects one 224x224 RGB image. We are feeding in an image with all 0 values here for brevity; see `iree-run-module --help` for the format used to specify concrete values.
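To feed a real image instead, one option (a sketch, assuming you have preprocessed the image and saved it as `input.npy` with shape 1x3x224x224 and dtype float32) is to load the input from a NumPy file:

```shell
# Run with an input loaded from a .npy file instead of an all-zeros tensor.
iree-run-module \
    --device=vulkan \
    --module=mobilenet_vulkan.vmfb \
    --function=torch-jit-export \
    --input=@input.npy
```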