GPU deployment using CUDA
IREE can accelerate model execution on Nvidia GPUs using CUDA.
Prerequisites
To drive the GPU with CUDA, you need a functional CUDA environment. You can verify this by running:
nvidia-smi | grep CUDA
If nvidia-smi does not exist, you will need to install the latest CUDA Toolkit SDK.
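If you also want to know the GPU's compute capability, which is handy later when choosing a compilation target, reasonably recent drivers support a query like the following (a convenience check, not a requirement):
# Query the GPU name and compute capability, e.g. "8.0" for an A100.
nvidia-smi --query-gpu=name,compute_cap --format=csv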
Get the IREE compiler
Download the compiler from a release
Python packages are distributed through multiple channels. See the
Python Bindings page for more details.
The core iree-base-compiler package includes the CUDA compiler:
Stable release packages are published to PyPI.
python -m pip install iree-base-compiler
Nightly pre-releases are published on GitHub releases.
python -m pip install \
--find-links https://iree.dev/pip-release-links.html \
--upgrade --pre iree-base-compiler
Development packages are built at every commit and on pull requests, for limited configurations.
On Linux with Python 3.11, development packages can be installed into a Python venv using the build_tools/pkgci/setup_venv.py script:
# Install packages from a specific commit ref.
# See also the `--fetch-latest-main` and `--fetch-gh-workflow` options.
python ./build_tools/pkgci/setup_venv.py /tmp/.venv --fetch-git-ref=8230f41d
source /tmp/.venv/bin/activate
Tip
iree-compile and other tools are installed to your Python module installation path. If you pip install in user mode, that is under ${HOME}/.local/bin, or %APPDATA%\Python on Windows. You may want to include the path in your system's PATH environment variable:
export PATH=${HOME}/.local/bin:${PATH}
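Once the tools are on your PATH, a quick sanity check is to confirm that the compiler runs and that the cuda target backend is registered (the listing flag below should be available in recent releases; consult iree-compile --help otherwise):
# Confirm the compiler is installed and the cuda backend is registered.
iree-compile --version
iree-compile --iree-hal-list-target-backends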
Build the compiler from source
Please make sure you have followed the Getting started page to build the IREE compiler, then enable the CUDA compiler target with the IREE_TARGET_BACKEND_CUDA option.
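For example, a configure-and-build step might look like this (a minimal sketch assuming the Ninja-based setup from the Getting started page, run from the IREE source checkout; adjust paths to your layout):
# Configure with the CUDA compiler target enabled, then build.
cmake -G Ninja -B ../iree-build/ -S . \
    -DCMAKE_BUILD_TYPE=RelWithDebInfo \
    -DIREE_TARGET_BACKEND_CUDA=ON
cmake --build ../iree-build/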
Tip
iree-compile will be built under the iree-build/tools/ directory. You may want to include this path in your system's PATH environment variable.
Get the IREE runtime
Next you will need to get an IREE runtime that includes the CUDA HAL driver.
You can check for CUDA support by looking for a matching driver and device:
$ iree-run-module --list_drivers
# ============================================================================
# Available HAL drivers
# ============================================================================
# Use --list_devices={driver name} to enumerate available devices.
cuda: NVIDIA CUDA HAL driver (via dylib)
hip: HIP HAL driver (via dylib)
local-sync: Local execution using a lightweight inline synchronous queue
local-task: Local execution using the IREE multithreading task system
vulkan: Vulkan 1.x (dynamic)
$ iree-run-module --list_devices
cuda://GPU-00000000-1111-2222-3333-444444444444
local-sync://
local-task://
vulkan://00000000-1111-2222-3333-444444444444
Download the runtime from a release
Python packages are distributed through multiple channels. See the
Python Bindings page for more details.
The core iree-base-runtime package includes the CUDA HAL driver:
Stable release packages are published to PyPI.
python -m pip install iree-base-runtime
Nightly pre-releases are published on GitHub releases.
python -m pip install \
--find-links https://iree.dev/pip-release-links.html \
--upgrade --pre iree-base-runtime
Development packages are built at every commit and on pull requests, for limited configurations.
On Linux with Python 3.11, development packages can be installed into a Python venv using the build_tools/pkgci/setup_venv.py script:
# Install packages from a specific commit ref.
# See also the `--fetch-latest-main` and `--fetch-gh-workflow` options.
python ./build_tools/pkgci/setup_venv.py /tmp/.venv --fetch-git-ref=8230f41d
source /tmp/.venv/bin/activate
Build the runtime from source
Please make sure you have followed the Getting started page to build IREE from source, then enable the CUDA HAL driver with the IREE_HAL_DRIVER_CUDA option.
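As with the compiler, the configure step could look like the following (a sketch under the same Getting started assumptions):
# Configure with the CUDA HAL driver enabled, then build the runtime tools.
cmake -G Ninja -B ../iree-build/ -S . \
    -DCMAKE_BUILD_TYPE=RelWithDebInfo \
    -DIREE_HAL_DRIVER_CUDA=ON
cmake --build ../iree-build/ --target iree-run-module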
Compile and run a program
With the requirements out of the way, we can now compile a model and run it.
Compile a program
The IREE compiler transforms a model into its final deployable format in several sequential steps. A model authored in Python with an ML framework should first be converted, using the corresponding framework's import tool, into a format the IREE compiler expects (i.e., MLIR).
Using a MobileNet model as an example, import using IREE's ONNX importer:
# Download the model you want to compile and run.
wget https://github.com/onnx/models/raw/refs/heads/main/validated/vision/classification/mobilenet/model/mobilenetv2-10.onnx
# Import to MLIR using IREE's ONNX importer.
pip install iree-base-compiler[onnx]
iree-import-onnx mobilenetv2-10.onnx --opset-version 17 -o mobilenetv2.mlir
Then run the following command to compile with the cuda target:
iree-compile \
--iree-hal-target-backends=cuda \
--iree-cuda-target=<...> \
mobilenetv2.mlir -o mobilenet_cuda.vmfb
Tip - CUDA targets
Canonically, a CUDA target (iree-cuda-target) matching the LLVM NVPTX backend, of the form sm_<arch_number>, is needed to compile for each GPU architecture. If no architecture is specified, the compiler defaults to sm_60.
Here is a table of commonly used architectures:
CUDA GPU | Target Architecture | Architecture Code Name
---|---|---
NVIDIA P100 | sm_60 | pascal
NVIDIA V100 | sm_70 | volta
NVIDIA A100 | sm_80 | ampere
NVIDIA H100 | sm_90 | hopper
NVIDIA RTX20 series | sm_75 | turing
NVIDIA RTX30 series | sm_86 | ampere
NVIDIA RTX40 series | sm_89 | ada
In addition to the canonical sm_<arch_number> scheme, iree-cuda-target also supports two additional schemes for a better developer experience:
- Architecture code names like volta or ampere
- GPU product names like a100 or rtx3090
These two schemes are translated into the canonical form under the hood. We add support for common code/product names without aiming to be exhaustive; if the ones you want are missing, please use the canonical form.
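As an illustration, the following three invocations should be equivalent ways to target an A100-class GPU, one per supported scheme (adjust the value to match your hardware, e.g. using the table above):
# Canonical architecture form.
iree-compile --iree-hal-target-backends=cuda --iree-cuda-target=sm_80 \
    mobilenetv2.mlir -o mobilenet_cuda.vmfb
# Architecture code name.
iree-compile --iree-hal-target-backends=cuda --iree-cuda-target=ampere \
    mobilenetv2.mlir -o mobilenet_cuda.vmfb
# GPU product name.
iree-compile --iree-hal-target-backends=cuda --iree-cuda-target=a100 \
    mobilenetv2.mlir -o mobilenet_cuda.vmfb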
Run a compiled program
To run the compiled program:
iree-run-module \
--device=cuda \
--module=mobilenet_cuda.vmfb \
--function=torch-jit-export \
--input="1x3x224x224xf32=0"
The above assumes the exported function in the model is named torch-jit-export and that it expects one 224x224 RGB image. We are feeding in an image with all 0 values here for brevity; see iree-run-module --help for the format to specify concrete values.
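To feed real data instead of zeros, iree-run-module can also read an input tensor from a NumPy file via the @ syntax (a sketch; image.npy is a hypothetical file containing a 1x3x224x224 float32 array saved with numpy.save):
# Run with the input tensor loaded from a .npy file.
iree-run-module \
    --device=cuda \
    --module=mobilenet_cuda.vmfb \
    --function=torch-jit-export \
    --input=@image.npy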