DirectML Operator Support

Windows Development Documentation

Supported Operators

DirectML provides a comprehensive set of operators to enable hardware-accelerated machine learning inference on Windows. This page details the operators currently supported by DirectML, categorized for clarity.
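
All of the operators described below are created and compiled through an IDMLDevice, which is layered on top of a Direct3D 12 device. The following sketch shows the minimal setup that later examples on this page assume; it targets the default adapter and omits error handling for brevity.

```cpp
// Minimal DirectML device setup (error handling omitted).
// Link against d3d12.lib and directml.lib.
#include <d3d12.h>
#include <DirectML.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<IDMLDevice> CreateDmlDevice()
{
    // Create a Direct3D 12 device on the default adapter.
    ComPtr<ID3D12Device> d3d12Device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&d3d12Device));

    // Layer a DirectML device on top of it. All operator creation,
    // compilation, and capability queries go through this interface.
    ComPtr<IDMLDevice> dmlDevice;
    DMLCreateDevice(d3d12Device.Get(), DML_CREATE_DEVICE_FLAG_NONE, IID_PPV_ARGS(&dmlDevice));
    return dmlDevice;
}
```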

Core Operators

These are fundamental operations used across many machine learning models.

| Operator | Description | Category | Supported Types |
| --- | --- | --- | --- |
| Add | Element-wise addition of two tensors. | Arithmetic | FP32, FP16, INT32 |
| Sub | Element-wise subtraction of two tensors. | Arithmetic | FP32, FP16, INT32 |
| Mul | Element-wise multiplication of two tensors. | Arithmetic | FP32, FP16, INT32 |
| Div | Element-wise division of two tensors. | Arithmetic | FP32, FP16 |
| Relu | Rectified Linear Unit activation (max(0, x)). | Activation | FP32, FP16 |
| Sigmoid | Sigmoid activation function. | Activation | FP32, FP16 |
| Tanh | Hyperbolic tangent activation function. | Activation | FP32, FP16 |
| Conv2D | 2D convolution operation. | Convolution | FP32, FP16 |
| MaxPool2D | 2D max pooling operation. | Pooling | FP32, FP16 |
| AvgPool2D | 2D average pooling operation. | Pooling | FP32, FP16 |
| Gemm | General Matrix Multiply (often used for fully connected layers). | Linear Algebra | FP32, FP16 |
| Softmax | Softmax activation function, typically for classification. | Activation | FP32, FP16 |
| BatchNormalization | Normalizes activations across the batch dimension. | Normalization | FP32, FP16 |
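
Each operator in this table is exposed through the DirectML API as an operator-specific desc struct that is passed to IDMLDevice::CreateOperator. The sketch below shows this for the element-wise Add operator over two packed FP32 tensors of shape {1, 1, 2, 3}. It is a minimal illustration that stops at operator creation; compiling, initializing, binding, and dispatching the operator are not shown, and error handling is omitted.

```cpp
#include <d3d12.h>
#include <DirectML.h>
#include <wrl/client.h>

// Creates an element-wise Add operator for two packed FP32 tensors of
// shape {1, 1, 2, 3}. Assumes dmlDevice was created as shown earlier.
Microsoft::WRL::ComPtr<IDMLOperator> CreateAddOperator(IDMLDevice* dmlDevice)
{
    const UINT sizes[4] = { 1, 1, 2, 3 };

    // Describe one packed FP32 buffer tensor (6 elements * 4 bytes).
    DML_BUFFER_TENSOR_DESC bufferDesc = {};
    bufferDesc.DataType = DML_TENSOR_DATA_TYPE_FLOAT32;
    bufferDesc.Flags = DML_TENSOR_FLAG_NONE;
    bufferDesc.DimensionCount = 4;
    bufferDesc.Sizes = sizes;
    bufferDesc.Strides = nullptr;  // nullptr means a packed layout
    bufferDesc.TotalTensorSizeInBytes = 1 * 1 * 2 * 3 * sizeof(float);

    DML_TENSOR_DESC tensorDesc = { DML_TENSOR_TYPE_BUFFER, &bufferDesc };

    // Both inputs and the output share the same shape and data type here.
    DML_ELEMENT_WISE_ADD_OPERATOR_DESC addDesc = {};
    addDesc.ATensor = &tensorDesc;
    addDesc.BTensor = &tensorDesc;
    addDesc.OutputTensor = &tensorDesc;

    DML_OPERATOR_DESC opDesc = { DML_OPERATOR_ELEMENT_WISE_ADD, &addDesc };

    Microsoft::WRL::ComPtr<IDMLOperator> op;
    dmlDevice->CreateOperator(&opDesc, IID_PPV_ARGS(&op));
    return op;
}
```

The same pattern, a typed operator desc wrapped in DML_OPERATOR_DESC and passed to IDMLDevice::CreateOperator, applies to the other operators in this table; only the desc struct and its fields change.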

Commonly Used Operators

These operators are frequently found in popular deep learning models.

| Operator | Description | Category | Supported Types |
| --- | --- | --- | --- |
| Concat | Concatenates tensors along a specified axis. | Tensor Manipulation | FP32, FP16, INT32 |
| Reshape | Changes the shape of a tensor without changing its data. | Tensor Manipulation | FP32, FP16, INT32 |
| Slice | Extracts a slice from a tensor. | Tensor Manipulation | FP32, FP16, INT32 |
| ReduceMean | Computes the mean of elements across dimensions of a tensor. | Reduction | FP32, FP16 |
| ReduceSum | Computes the sum of elements across dimensions of a tensor. | Reduction | FP32, FP16, INT32 |
| MatMul | Matrix multiplication of two tensors. | Linear Algebra | FP32, FP16 |
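
In the DirectML API, several of the operators above share a single parameterized desc. ReduceMean and ReduceSum, for example, are both expressed with DML_REDUCE_OPERATOR_DESC and differ only in the DML_REDUCE_FUNCTION they select. The sketch below creates a ReduceMean (DML_REDUCE_FUNCTION_AVERAGE) over the last axis of a {1, 1, 2, 3} FP32 tensor; as before, it is a minimal example that stops at operator creation and omits error handling.

```cpp
#include <d3d12.h>
#include <DirectML.h>
#include <wrl/client.h>

// Creates a ReduceMean (DML_REDUCE_FUNCTION_AVERAGE) over the last axis of a
// {1, 1, 2, 3} FP32 tensor, producing a {1, 1, 2, 1} output.
// Assumes dmlDevice was created as shown earlier.
Microsoft::WRL::ComPtr<IDMLOperator> CreateReduceMeanOperator(IDMLDevice* dmlDevice)
{
    const UINT inputSizes[4]  = { 1, 1, 2, 3 };
    const UINT outputSizes[4] = { 1, 1, 2, 1 };
    const UINT axes[1] = { 3 };  // reduce over the last dimension

    DML_BUFFER_TENSOR_DESC inputBuffer = {};
    inputBuffer.DataType = DML_TENSOR_DATA_TYPE_FLOAT32;
    inputBuffer.DimensionCount = 4;
    inputBuffer.Sizes = inputSizes;
    inputBuffer.TotalTensorSizeInBytes = 1 * 1 * 2 * 3 * sizeof(float);
    DML_TENSOR_DESC inputDesc = { DML_TENSOR_TYPE_BUFFER, &inputBuffer };

    // The output keeps the same dimension count, with the reduced axis set to 1.
    DML_BUFFER_TENSOR_DESC outputBuffer = inputBuffer;
    outputBuffer.Sizes = outputSizes;
    outputBuffer.TotalTensorSizeInBytes = 1 * 1 * 2 * 1 * sizeof(float);
    DML_TENSOR_DESC outputDesc = { DML_TENSOR_TYPE_BUFFER, &outputBuffer };

    DML_REDUCE_OPERATOR_DESC reduceDesc = {};
    reduceDesc.Function = DML_REDUCE_FUNCTION_AVERAGE;
    reduceDesc.InputTensor = &inputDesc;
    reduceDesc.OutputTensor = &outputDesc;
    reduceDesc.AxisCount = 1;
    reduceDesc.Axes = axes;

    DML_OPERATOR_DESC opDesc = { DML_OPERATOR_REDUCE, &reduceDesc };

    Microsoft::WRL::ComPtr<IDMLOperator> op;
    dmlDevice->CreateOperator(&opDesc, IID_PPV_ARGS(&op));
    return op;
}
```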

Advanced Operators

These operators cover more specialized or complex operations.

| Operator | Description | Category | Supported Types |
| --- | --- | --- | --- |
| LayerNormalization | Normalizes activations across the feature dimension. | Normalization | FP32, FP16 |
| LSTM | Long Short-Term Memory recurrent layer. | Recurrent | FP32, FP16 |
| GRU | Gated Recurrent Unit recurrent layer. | Recurrent | FP32, FP16 |
| LSTMCell | Single LSTM cell operation. | Recurrent | FP32, FP16 |

Note: Support for specific operator versions, data types (for example, INT8 and UINT8), and combinations may vary with the DirectML version you target and the capabilities of the underlying hardware and driver. Always refer to the latest DirectML documentation, or query capabilities at runtime with IDMLDevice::CheckFeatureSupport, for the most up-to-date information.
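
A minimal sketch of such a runtime query, using IDMLDevice::CheckFeatureSupport to read the device's maximum supported feature level and to test whether FP16 tensors are available. It assumes a DirectML device created as shown earlier and a DirectML header recent enough to define the feature-level enums; error handling is omitted.

```cpp
#include <d3d12.h>
#include <DirectML.h>

// Queries the device's maximum supported feature level and whether FP16
// tensors are supported.
void QueryCapabilities(IDMLDevice* dmlDevice)
{
    // Ask which of these feature levels the device/driver supports.
    const DML_FEATURE_LEVEL requested[] = {
        DML_FEATURE_LEVEL_1_0,
        DML_FEATURE_LEVEL_2_0,
        DML_FEATURE_LEVEL_3_0,
    };

    DML_FEATURE_QUERY_FEATURE_LEVELS levelsQuery = {};
    levelsQuery.RequestedFeatureLevelCount =
        static_cast<UINT>(sizeof(requested) / sizeof(requested[0]));
    levelsQuery.RequestedFeatureLevels = requested;

    DML_FEATURE_DATA_FEATURE_LEVELS levelsData = {};
    dmlDevice->CheckFeatureSupport(
        DML_FEATURE_FEATURE_LEVELS,
        sizeof(levelsQuery), &levelsQuery,
        sizeof(levelsData), &levelsData);
    // levelsData.MaxSupportedFeatureLevel now holds the highest supported level.

    // Check whether a specific tensor data type (here FLOAT16) is supported.
    DML_FEATURE_QUERY_TENSOR_DATA_TYPE_SUPPORT typeQuery = { DML_TENSOR_DATA_TYPE_FLOAT16 };
    DML_FEATURE_DATA_TENSOR_DATA_TYPE_SUPPORT typeData = {};
    dmlDevice->CheckFeatureSupport(
        DML_FEATURE_TENSOR_DATA_TYPE_SUPPORT,
        sizeof(typeQuery), &typeQuery,
        sizeof(typeData), &typeData);
    // typeData.IsSupported is TRUE if FLOAT16 tensors can be used.
}
```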

Operator Support Matrix

The table below provides a high-level overview of operator support across different DirectML feature levels. Check the DirectML Release Notes for detailed version compatibility.

Operator Feature Level 1.0 Feature Level 1.1 Feature Level 1.2 Feature Level 1.3
Add
Sub
Mul
Div
Conv2D
Relu
MaxPool2D
Gemm
Softmax
LSTM

✓ Supported, ✗ Not supported

Contributing to Operator Support

We are continuously expanding DirectML operator support. If you need support for an operator not listed here, please visit the DirectML GitHub repository to submit a feature request or contribute.