GPU deployment using CUDA
IREE can accelerate model execution on Nvidia GPUs using CUDA.
Prerequisites
In order to use CUDA to drive the GPU, you need a functional CUDA environment. You can verify it with the following command:
nvidia-smi | grep CUDA
If nvidia-smi does not exist, you will need to install the latest CUDA Toolkit SDK.
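On a working setup, the command prints the header line containing the driver and CUDA versions, for example (version numbers are illustrative and will differ on your machine):

| NVIDIA-SMI 535.104.05    Driver Version: 535.104.05    CUDA Version: 12.2    |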
Get the IREE compiler
Download the compiler from a release
Python packages are regularly published to PyPI. See the Python Bindings page for more details.
The core iree-compiler package includes the CUDA compiler:
Stable release packages are published to PyPI.
python -m pip install iree-compiler
Nightly releases are published on GitHub releases.
python -m pip install \
--find-links https://iree.dev/pip-release-links.html \
--upgrade iree-compiler
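As an optional sanity check, you can confirm the package is present and that the compiler tool runs (if iree-compile is not found, see the tip below about PATH):

python -m pip show iree-compiler
iree-compile --help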
Tip
iree-compile is installed to your Python module installation path. If you pip install in user mode (--user), it is under ${HOME}/.local/bin, or %APPDATA%\Python on Windows. You may want to include the path in your system's PATH environment variable:
export PATH=${HOME}/.local/bin:${PATH}
Build the compiler from source
Please make sure you have followed the Getting started page to build the IREE compiler, then enable the CUDA compiler target with the IREE_TARGET_BACKEND_CUDA option.
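For example, a minimal configure-and-build sketch, assuming the ../iree-build/ directory layout used on the Getting started page:

cmake -G Ninja -B ../iree-build/ -S . \
    -DCMAKE_BUILD_TYPE=RelWithDebInfo \
    -DIREE_TARGET_BACKEND_CUDA=ON
cmake --build ../iree-build/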
Tip
iree-compile will be built under the iree-build/tools/ directory. You may want to include this path in your system's PATH environment variable.
Get the IREE runtime
Next you will need to get an IREE runtime that includes the CUDA HAL driver.
Build the runtime from source
Please make sure you have followed the Getting started page to build IREE from source, then enable the CUDA HAL driver with the IREE_HAL_DRIVER_CUDA option.
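For example, another sketch following the same Getting started build layout:

cmake -G Ninja -B ../iree-build/ -S . \
    -DCMAKE_BUILD_TYPE=RelWithDebInfo \
    -DIREE_HAL_DRIVER_CUDA=ON
cmake --build ../iree-build/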
Compile and run a program
With the compiler and runtime ready, we can now compile programs and run them on GPUs.
Compile a program
The IREE compiler transforms a model into its final deployable format in many sequential steps. A model authored with Python in an ML framework should first be converted with the corresponding framework's import tool into a format (i.e., MLIR) that the IREE compiler understands.
Using MobileNet v2 as an example, you can download the SavedModel with trained weights from TensorFlow Hub and convert it using IREE's TensorFlow importer, as sketched below.
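A minimal import sketch; the importer flag values and the SavedModel path are illustrative and depend on how the model was exported, so see IREE's TensorFlow importer documentation for details:

iree-import-tf \
  --tf-import-type=savedmodel_v1 \
  --tf-savedmodel-exported-names=predict \
  /path/to/mobilenet_v2_savedmodel \
  -o mobilenet_iree_input.mlir

Then run the following command to compile: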
iree-compile \
--iree-hal-target-backends=cuda \
--iree-hal-cuda-llvm-target-arch=<...> \
mobilenet_iree_input.mlir -o mobilenet_cuda.vmfb
Note that a CUDA target architecture (iree-hal-cuda-llvm-target-arch) of the form sm_<arch_number> is needed to compile for each GPU architecture. If no architecture is specified, the compiler defaults to sm_35.
Here is a table of commonly used architectures:
| CUDA GPU | Target Architecture |
| --- | --- |
| Nvidia K80 | sm_35 |
| Nvidia P100 | sm_60 |
| Nvidia V100 | sm_70 |
| Nvidia A100 | sm_80 |
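For example, to compile for an Nvidia A100:

iree-compile \
  --iree-hal-target-backends=cuda \
  --iree-hal-cuda-llvm-target-arch=sm_80 \
  mobilenet_iree_input.mlir -o mobilenet_cuda.vmfb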
Run a compiled program
Run the following command:
iree-run-module \
--device=cuda \
--module=mobilenet_cuda.vmfb \
--function=predict \
--input="1x224x224x3xf32=0"
The above assumes the exported function in the model is named predict and that it expects one 224x224 RGB image. We are feeding in an image with all 0 values here for brevity; see iree-run-module --help for the format used to specify concrete values.
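If you want to feed real data instead, iree-run-module can also read --input values from NumPy .npy files via the @file.npy syntax. A minimal sketch, assuming the file name input.npy and a zero-filled sample tensor:

python -c "import numpy as np; np.save('input.npy', np.zeros((1, 224, 224, 3), dtype=np.float32))"
iree-run-module \
  --device=cuda \
  --module=mobilenet_cuda.vmfb \
  --function=predict \
  --input=@input.npy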