CPU deployment
IREE supports efficient program execution on CPU devices by using LLVM to compile all dense computations in each program into highly optimized CPU native instruction streams, which are embedded in one of IREE's deployable formats.
To compile a program for CPU execution:
- Pick a CPU target supported by LLVM. By default, IREE includes these LLVM targets:

    - X86
    - ARM
    - AArch64
    - RISCV

    Other targets may work, but in-tree test coverage and performance work are focused on that list.
- Pick one of IREE's supported executable formats (a compile-time choice; see the sketch after this list):

    | Executable format | Description |
    | ----------------- | ----------- |
    | Embedded ELF (default) | Portable, high-performance dynamic library |
    | System library | Platform-specific dynamic library (.so, .dll, etc.) |
    | VMVX | Reference target |
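As a sketch, the llvm-cpu backend emits the embedded ELF format by default, and a system library can be requested with a compiler flag (treat the exact flag spelling as an assumption and confirm against iree-compile --help for your IREE version):

# Compile to a platform-specific system library instead of the default
# embedded ELF (flag spelling is an assumption; check iree-compile --help).
iree-compile \
    --iree-hal-target-backends=llvm-cpu \
    --iree-llvmcpu-link-embedded=false \
    program.mlir -o program_system.vmfb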
At runtime, CPU executables can be loaded using one of IREE's CPU HAL devices:

- local-task: an asynchronous, multithreaded device built on IREE's "task" system
- local-sync: a synchronous, single-threaded device that executes work inline
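For example, the same compiled module can be run on either device by changing a single runtime flag (the module and function names here are placeholders):

# Multithreaded execution via IREE's task system:
iree-run-module --device=local-task --module=program.vmfb --function=main
# Inline, single-threaded execution (useful for debugging):
iree-run-module --device=local-sync --module=program.vmfb --function=main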
Prerequisites
Get the IREE compiler
Download the compiler from a release
Python packages are distributed through multiple channels. See the Python Bindings page for more details.

The core iree-base-compiler package includes the compiler tools:
Stable release packages are published to PyPI.
python -m pip install iree-base-compiler
Nightly pre-releases are published on GitHub releases.
python -m pip install \
--find-links https://iree.dev/pip-release-links.html \
--upgrade --pre iree-base-compiler
Development packages are built at every commit and on pull requests, for limited configurations.
On Linux with Python 3.11, development packages can be installed into a Python venv using the build_tools/pkgci/setup_venv.py script:
# Install packages from a specific commit ref.
# See also the `--fetch-latest-main` and `--fetch-gh-workflow` options.
python ./build_tools/pkgci/setup_venv.py /tmp/.venv --fetch-git-ref=8230f41d
source /tmp/.venv/bin/activate
Tip

iree-compile and other tools are installed to your Python module installation path. If you pip install in user mode, they are placed under ${HOME}/.local/bin (or %APPDATA%\Python on Windows). You may want to include that path in your system's PATH environment variable:

export PATH=${HOME}/.local/bin:${PATH}
Build the compiler from source
Please make sure you have followed the Getting started page to build IREE for your host platform. The llvm-cpu compiler backend is compiled in by default on all platforms, though you should ensure that the IREE_TARGET_BACKEND_LLVM_CPU CMake option is ON when configuring.
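As a minimal sketch, an explicit configure step might look like the following (the generator, build directory, and build type are illustrative choices, not requirements):

# Configure IREE with the llvm-cpu compiler backend explicitly enabled.
cmake -G Ninja -B ../iree-build/ -S . \
    -DCMAKE_BUILD_TYPE=RelWithDebInfo \
    -DIREE_TARGET_BACKEND_LLVM_CPU=ON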
Tip

iree-compile will be built under the iree-build/tools/ directory. You may want to include this path in your system's PATH environment variable.
Get the IREE runtime
You will need to get an IREE runtime that supports the local CPU HAL driver, along with the appropriate executable loaders for your application.
You can check for CPU support by looking for the local-sync and local-task drivers and devices:
$ iree-run-module --list_drivers
# ============================================================================
# Available HAL drivers
# ============================================================================
# Use --list_devices={driver name} to enumerate available devices.
cuda: NVIDIA CUDA HAL driver (via dylib)
hip: HIP HAL driver (via dylib)
local-sync: Local execution using a lightweight inline synchronous queue
local-task: Local execution using the IREE multithreading task system
vulkan: Vulkan 1.x (dynamic)
$ iree-run-module --list_devices
hip://GPU-00000000-1111-2222-3333-444444444444
local-sync://
local-task://
vulkan://00000000-1111-2222-3333-444444444444
Download the runtime from a release
Python packages are distributed through multiple channels. See the Python Bindings page for more details.

The core iree-base-runtime package includes the local CPU HAL drivers:
Stable release packages are published to PyPI.
python -m pip install iree-base-runtime
Nightly pre-releases are published on GitHub releases.
python -m pip install \
--find-links https://iree.dev/pip-release-links.html \
--upgrade --pre iree-base-runtime
Development packages are built at every commit and on pull requests, for limited configurations.
On Linux with Python 3.11, development packages can be installed into a Python venv using the build_tools/pkgci/setup_venv.py script:
# Install packages from a specific commit ref.
# See also the `--fetch-latest-main` and `--fetch-gh-workflow` options.
python ./build_tools/pkgci/setup_venv.py /tmp/.venv --fetch-git-ref=8230f41d
source /tmp/.venv/bin/activate
Build the runtime from source
Please make sure you have followed one of the Building from source pages to build IREE for your target platform. The local CPU HAL drivers and devices are compiled in by default on all platforms, though you should ensure that the IREE_HAL_DRIVER_LOCAL_TASK and IREE_HAL_EXECUTABLE_LOADER_EMBEDDED_ELF (or other executable loader) CMake options are ON when configuring.
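A minimal sketch of such a configure step for a runtime-focused build (disabling the compiler with IREE_BUILD_COMPILER=OFF is a common choice for runtime deployments, but is not required):

# Configure an IREE runtime build with the local CPU HAL driver and the
# embedded ELF executable loader explicitly enabled.
cmake -G Ninja -B ../iree-build/ -S . \
    -DIREE_BUILD_COMPILER=OFF \
    -DIREE_HAL_DRIVER_LOCAL_TASK=ON \
    -DIREE_HAL_EXECUTABLE_LOADER_EMBEDDED_ELF=ON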
Compile and run a program
With the requirements out of the way, we can now compile a model and run it.
Compile a program
The IREE compiler transforms a model into its final deployable format in several sequential steps. A model authored in Python with an ML framework should first be converted with the corresponding framework's import tool into a format (i.e., MLIR) that the IREE compiler expects.
Using a MobileNet model as an example, import using IREE's ONNX importer:
# Download the model you want to compile and run.
wget https://github.com/onnx/models/raw/refs/heads/main/validated/vision/classification/mobilenet/model/mobilenetv2-10.onnx
# Import to MLIR using IREE's ONNX importer.
pip install iree-base-compiler[onnx]
iree-import-onnx mobilenetv2-10.onnx --opset-version 17 -o mobilenetv2.mlir
Then run the following command to compile with the llvm-cpu target:
iree-compile \
--iree-hal-target-backends=llvm-cpu \
--iree-llvmcpu-target-cpu=host \
mobilenetv2.mlir -o mobilenet_cpu.vmfb
Tip - Target CPUs and CPU features

By default, the compiler will use a generic CPU target, which will result in poor performance. A target CPU or target CPU feature set should be selected using one of these options:

--iree-llvmcpu-target-cpu=...
--iree-llvmcpu-target-cpu-features=...

When not cross compiling, passing --iree-llvmcpu-target-cpu=host is usually sufficient on most devices.
Tip - CPU targets

The --iree-llvmcpu-target-triple flag tells the compiler to generate code for a specific type of CPU. You can see the list of supported targets with iree-compile --iree-llvmcpu-list-targets, or use the default value of "host" to let LLVM infer the triple from your host machine (e.g. x86_64-linux-gnu).
$ iree-compile --iree-llvmcpu-list-targets
Registered Targets:
aarch64 - AArch64 (little endian)
aarch64_32 - AArch64 (little endian ILP32)
aarch64_be - AArch64 (big endian)
arm - ARM
arm64 - ARM64 (little endian)
arm64_32 - ARM64 (little endian ILP32)
armeb - ARM (big endian)
riscv32 - 32-bit RISC-V
riscv64 - 64-bit RISC-V
wasm32 - WebAssembly 32-bit
wasm64 - WebAssembly 64-bit
x86 - 32-bit X86: Pentium-Pro and above
x86-64 - 64-bit X86: EM64T and AMD64
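Putting the two tips together, a cross-compilation invocation might look like the following sketch (the triple and feature string are illustrative; pick values matching your target hardware):

# Cross compile for a 64-bit Arm Linux target with the dot product extension.
iree-compile \
    --iree-hal-target-backends=llvm-cpu \
    --iree-llvmcpu-target-triple=aarch64-linux-gnu \
    --iree-llvmcpu-target-cpu-features=+dotprod \
    mobilenetv2.mlir -o mobilenet_cpu_aarch64.vmfb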
Run a compiled program
To run the compiled program:
iree-run-module \
--device=local-task \
--module=mobilenet_cpu.vmfb \
--function=torch-jit-export \
--input="1x3x224x224xf32=0"
The above assumes the exported function in the model is named torch-jit-export and that it expects one 224x224 RGB image. We are feeding in an image with all 0 values here for brevity; see iree-run-module --help for the format to specify concrete values.
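For real inputs, iree-run-module can also read tensors from NumPy .npy files. A sketch, assuming you have saved a preprocessed 1x3x224x224 float32 image to input.npy (the file name is a placeholder):

# Pass a concrete tensor from a .npy file instead of a zero splat.
iree-run-module \
    --device=local-task \
    --module=mobilenet_cpu.vmfb \
    --function=torch-jit-export \
    --input=@input.npy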