Leveraging Windows' hardware-accelerated machine learning
Explore how DirectML can accelerate Natural Language Processing (NLP) tasks on Windows. These samples demonstrate how to integrate DirectML with popular NLP models and frameworks.
DirectML provides a high-performance, low-latency path for running AI inference and training on a wide range of DirectX 12-capable hardware, including GPUs from NVIDIA, AMD, and Intel. This is crucial for demanding NLP workloads like text generation, sentiment analysis, and translation.
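In practice, applications often reach DirectML through ONNX Runtime's `DmlExecutionProvider`. The helper below is a hypothetical sketch (not part of the samples) showing one way to prefer DirectML when the `onnxruntime-directml` package is installed and fall back to CPU otherwise; the provider names are the real ONNX Runtime identifiers.

```python
def pick_providers():
    """Prefer the DirectML execution provider when available, else CPU.

    Hypothetical helper for illustration; assumes ONNX Runtime is the
    inference runtime (as in the onnxruntime-directml package).
    """
    try:
        import onnxruntime as ort
        available = ort.get_available_providers()
    except ImportError:
        # onnxruntime not installed at all: CPU-only fallback
        available = ["CPUExecutionProvider"]
    if "DmlExecutionProvider" in available:
        # Keep CPU as a fallback after DirectML
        return ["DmlExecutionProvider", "CPUExecutionProvider"]
    return ["CPUExecutionProvider"]

print(pick_providers())
```

A session would then be created with something like `ort.InferenceSession(model_path, providers=pick_providers())`, so the same code runs on DirectX 12 GPUs and on machines without one.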
This sample demonstrates how to load and run a pre-trained BERT model for text classification tasks, such as sentiment analysis, using DirectML for accelerated inference.
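A sentiment classifier of this kind ultimately reduces to post-processing the model's logits. The sketch below shows that post-processing step in plain Python; the DirectML-backed session call is shown only as a comment because it requires the `onnxruntime-directml` package and a BERT model file (the model path and input names are assumptions, not from the samples).

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, labels=("negative", "positive")):
    """Map classifier logits to a label and its probability."""
    probs = softmax(logits)
    best = probs.index(max(probs))
    return labels[best], probs[best]

# Hypothetical DirectML-accelerated inference (requires onnxruntime-directml
# and a converted BERT model; names below are illustrative):
# import onnxruntime as ort
# session = ort.InferenceSession("bert_sentiment.onnx",
#                                providers=["DmlExecutionProvider"])
# logits = session.run(None, {"input_ids": ids,
#                             "attention_mask": mask})[0][0]

label, prob = classify([-1.2, 3.4])
print(label, round(prob, 3))
```

The heavy transformer forward pass runs on the GPU via DirectML; only this lightweight softmax/argmax step stays on the CPU.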
Implement and accelerate sequence-to-sequence models for machine translation. This sample showcases efficient tensor operations for transformer-based architectures.
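The tensor operation at the heart of those transformer architectures is scaled dot-product attention. A minimal pure-Python sketch of it follows, purely to illustrate what DirectML accelerates; a real implementation would run this as batched matrix multiplies on the GPU.

```python
import math

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V.

    Q, K, V are lists of row vectors; returned value is one output
    row per query. Illustrative only: real workloads do this as
    batched GPU matrix multiplies.
    """
    d_k = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query against every key, scaled by sqrt(d_k)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Weighted sum of the value rows
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

rows = scaled_dot_product_attention([[1.0, 0.0]],
                                    [[1.0, 0.0], [0.0, 1.0]],
                                    [[1.0, 0.0], [0.0, 1.0]])
print(rows)
```

Each attention weight row sums to 1, and a query attends most strongly to the key it is most similar to.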
Utilize DirectML to speed up Named Entity Recognition models, identifying and categorizing entities in text. This sample focuses on efficient execution of recurrent or convolutional layers.
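Whatever layers produce the per-token predictions, an NER pipeline still needs to turn per-token BIO tags into entity spans on the CPU. A small sketch of that decoding step, assuming the common BIO tagging scheme (stray `I-` tags are simply treated as ending the current span, a deliberate simplification):

```python
def bio_to_spans(tokens, tags):
    """Collapse BIO tags into (entity_type, text) spans.

    tokens: list of word strings; tags: parallel list like
    "B-PER", "I-PER", "O". Simplification: an I- tag that does not
    continue the current entity type closes the open span.
    """
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:
                spans.append((etype, " ".join(tokens[start:i])))
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and etype == tag[2:]:
            continue  # entity continues
        else:
            if start is not None:
                spans.append((etype, " ".join(tokens[start:i])))
            start, etype = None, None
    if start is not None:
        spans.append((etype, " ".join(tokens[start:])))
    return spans

print(bio_to_spans(["Satya", "Nadella", "leads", "Microsoft"],
                   ["B-PER", "I-PER", "O", "B-ORG"]))
```

The GPU does the tagging; this inexpensive decode runs afterwards on the host.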
Explore running a fine-tuned GPT-2 model for text generation. This sample highlights the performance gains achieved by offloading complex generative models to DirectML.
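Generative models like GPT-2 are run in an autoregressive loop: each new token is appended to the sequence and fed back in. The sketch below shows that loop with greedy decoding against a toy stand-in for the model; in the sample, `next_logits` would instead be a DirectML-backed session call (the function names here are illustrative, not the sample's API).

```python
def generate_greedy(prompt_ids, next_logits, eos_id, max_new=8):
    """Greedy autoregressive decoding.

    next_logits(ids) returns a logit per vocabulary token for the
    next position; in the real sample this would be the model's
    DirectML-accelerated forward pass.
    """
    ids = list(prompt_ids)
    for _ in range(max_new):
        logits = next_logits(ids)
        tok = max(range(len(logits)), key=logits.__getitem__)
        if tok == eos_id:
            break  # end-of-sequence token: stop generating
        ids.append(tok)
    return ids

# Toy "model" over a 5-token vocabulary: always prefers (last + 1) % 5
toy = lambda ids: [1.0 if t == (ids[-1] + 1) % 5 else 0.0
                   for t in range(5)]
print(generate_greedy([0], toy, eos_id=4))  # → [0, 1, 2, 3]
```

Because every step depends on the previous one, per-step latency dominates generation time, which is why offloading the forward pass to the GPU pays off here.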
To run these samples, you'll typically need a Windows machine with DirectX 12-capable hardware, such as a supported NVIDIA, AMD, or Intel GPU.
For detailed setup instructions and an overview of DirectML concepts, please refer to the official DirectML Documentation.