Glossary
IREE exists in an ecosystem of projects and acts as a bridge between machine learning frameworks and a variety of hardware platforms. This glossary outlines some of those projects and technologies.
Something missing?
Don't see a project or technology here that you think should be included? We welcome contributions on our GitHub page!
JAX
JAX is a Python framework for high-performance machine learning research that bridges automatic differentiation and ML compilers like XLA and IREE.
See the JAX Integration guide for details on how to use JAX programs with IREE.
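A minimal sketch of the kind of JAX program this refers to: a differentiable, jit-staged function that an ML compiler can lower to hardware targets. This uses only standard JAX APIs; how the staged program is actually handed to IREE is covered by the JAX Integration guide.

```python
import jax
import jax.numpy as jnp

def predict(w, x):
    # Simple compute: a matrix product followed by a nonlinearity.
    return jnp.tanh(x @ w)

def loss(w, x, y):
    return jnp.mean((predict(w, x) - y) ** 2)

# Automatic differentiation plus compiler staging: grad computes the gradient
# of the loss w.r.t. the weights, and jit stages it for the ML compiler.
grad_fn = jax.jit(jax.grad(loss))

w = jnp.ones((3, 2))
x = jnp.ones((4, 3))
y = jnp.zeros((4, 2))
print(grad_fn(w, x, y).shape)  # (3, 2)
```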
MLIR
Multi-Level Intermediate Representation (MLIR) is the compiler framework that IREE is built around. Beyond the tooling, MLIR provides a set of common dialects and transformations that IREE builds on for its code generation system.
For general discussion on MLIR see the project's discourse forum.
Linalg
Linalg is an MLIR dialect that defines Linear Algebra operations in a generalized fashion by modeling iteration spaces together with compute payloads. Linalg includes a set of commonly used operations as well as generic interfaces.
IREE uses the Linalg dialect during its code generation pipeline to define tensor operations and then generate loop structures for its various backend targets.
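To make the "iteration space plus compute payload" idea concrete, here is a conceptual Python sketch (not Linalg syntax): the nested loops are the iteration space, and the multiply-accumulate in the innermost body is the scalar payload applied at every point of that space. IREE's code generation rewrites such loop nests (tiling, fusing, vectorizing) for each backend target.

```python
import numpy as np

def generic_matmul(A, B):
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N), dtype=A.dtype)
    # Iteration space: (i, j) are parallel dimensions, k is a reduction.
    for i in range(M):
        for j in range(N):
            for k in range(K):
                # Compute payload applied at each point of the iteration space.
                C[i, j] += A[i, k] * B[k, j]
    return C

print(generic_matmul(np.eye(2), np.arange(4.0).reshape(2, 2)))
```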
OpenXLA
OpenXLA is a community-driven, open source ML compiler ecosystem.
IREE interfaces with some of the OpenXLA projects, such as StableHLO.
PyTorch
PyTorch is an optimized tensor library for deep learning.
PyTorch uses the Torch-MLIR project to interface with projects like IREE. See the PyTorch Integration guide for details on how to use PyTorch programs with IREE.
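For illustration, a small PyTorch module of the kind that can be exported through Torch-MLIR into IREE's input dialects. This snippet only defines and runs the model eagerly with standard PyTorch APIs; the actual export and compile flow is described in the PyTorch Integration guide.

```python
import torch

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = TinyModel().eval()
example_input = torch.randn(1, 4)
print(model(example_input))
```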
SPIR-V
SPIR-V is a shader and kernel intermediate language for expressing parallel computation, typically used for GPUs. It serves as a hardware-agnostic assembly format for distributing complex, computationally intensive programs.
IREE uses the SPIR-V MLIR Dialect in its code generation pipeline for Vulkan and other compute APIs.
StableHLO
StableHLO is a set of versioned high-level operations (HLOs) for ML models with backward and forward compatibility guarantees. StableHLO aims to improve interoperability between frameworks (such as TensorFlow, JAX, and PyTorch) and ML compilers.
StableHLO has both a specification and an MLIR dialect.
IREE uses the StableHLO MLIR Dialect as one of its input formats.
TOSA
Tensor Operator Set Architecture (TOSA) provides a set of tensor operations commonly employed by Deep Neural Networks. TOSA defines accuracy and compatibility constraints so frameworks that use it can trust that applications will produce similar results on a variety of hardware targets.
TOSA has both a specification and an MLIR dialect.
IREE uses the TOSA MLIR dialect as one of its input formats.
TFLite
TensorFlow Lite (TFLite) is a library for deploying models on mobile and other edge devices.
IREE supports running TFLite programs that have been imported into MLIR using the TOSA dialect. See the TFLite Integration guide for details on how to use TFLite programs with IREE.
IREE also has bindings for the TFLite C API; see the runtime/bindings/tflite/ directory for details.