Azure Synapse Analytics Spark Pool Reference

This document provides a comprehensive reference for understanding and managing Apache Spark pools within Azure Synapse Analytics. Learn about configurations, APIs, and best practices for optimizing your big data workloads.

Overview

Azure Synapse Analytics offers powerful Apache Spark capabilities for big data processing and machine learning. Spark pools provide a managed, scalable, and cost-effective environment for running your Spark jobs.

Key features include:

  • Managed Environment: No need to worry about infrastructure management.
  • Scalability: Dynamically scale your Spark cluster resources up or down based on demand.
  • Integration: Seamless integration with other Azure services like Azure Data Lake Storage, Azure Cosmos DB, and Power BI.
  • Languages: Support for Scala, Python, SQL, and .NET (a short PySpark sketch follows this list).
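
For example, a notebook cell attached to a Spark pool can read data directly from Azure Data Lake Storage Gen2 using PySpark. The sketch below assumes the notebook's built-in spark session and a hypothetical storage account ("mydatalake") and container ("raw"); substitute your own paths.

```python
# Minimal PySpark sketch for a Synapse notebook cell attached to a Spark pool.
# "mydatalake" (storage account) and "raw" (container) are placeholder names.
df = spark.read.parquet(
    "abfss://raw@mydatalake.dfs.core.windows.net/sales/2023/"
)

# Simple aggregation executed on the pool's worker nodes.
df.groupBy("region").count().show()
```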

Spark Pool Configurations

When creating or configuring a Spark pool in Synapse Analytics, several parameters can be adjusted to optimize performance and cost.

| Parameter | Description | Default value | Allowed values |
| --- | --- | --- | --- |
| Node size | The compute size for each Spark node. Determines vCores, memory, and local storage per node. | Small | Small, Medium, Large, XLarge, XXLarge |
| Number of nodes | The initial number of worker nodes. | 3 | 1 - 100 |
| Autoscale | Enable or disable automatic scaling of worker nodes. | Enabled | Enabled / Disabled |
| Min nodes | Minimum number of worker nodes when autoscale is enabled. | 1 | 1 - 100 |
| Max nodes | Maximum number of worker nodes when autoscale is enabled. | 10 | 1 - 100 |
| Spark version | The version of Apache Spark to run. | 3.2 | 3.1, 3.2, 3.3, etc. |
| Python version | The Python version bundled with the selected Spark runtime. | 3.8 | 3.8, 3.9, 3.10, etc. |
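
To illustrate how these settings surface in the management API, the following sketch shows a pool definition as it might appear in a request payload. All values are placeholders, and nodeSizeFamily is an additional property the API expects alongside nodeSize.

```python
# Hypothetical Spark pool definition mirroring the parameters above.
# Values are examples only; adjust the region, sizes, and versions as needed.
spark_pool_definition = {
    "location": "westeurope",
    "properties": {
        "nodeSize": "Small",                  # compute size per node
        "nodeSizeFamily": "MemoryOptimized",  # node size family used by Synapse pools
        "autoScale": {                        # scale between 3 and 10 workers
            "enabled": True,
            "minNodeCount": 3,
            "maxNodeCount": 10,
        },
        "sparkVersion": "3.3",                # Spark runtime version
    },
}
```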

Spark Pool Management APIs

Manage your Spark pools programmatically using the Azure Synapse Analytics REST API or Azure SDKs.

Create Spark Pool

PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Synapse/workspaces/{workspaceName}/bigDataPools/{bigDataPoolName}?api-version=2021-06-01
Creates a new Apache Spark pool (or updates an existing pool with the same name).
Request Body Parameters:
  • location: String (Required) - The geo-location where the resource lives.
  • properties: Object (Required) - Properties of the Spark Pool.
    • nodeCount: Integer (Required) - The number of nodes to allocate for this Spark pool.
    • nodeSize: String (Required) - The node size of the Spark pool.
    • autoScale: Object (Optional) - Auto-scaling configuration for the Spark pool.
      • minNodeCount: Integer (Optional)
      • maxNodeCount: Integer (Optional)
    • sparkVersion: String (Required) - The Spark version for the Spark pool.
    • libraryRequirements: Object (Optional) - Library version requirements for the Spark pool.
Successful Response:

A 200 OK status code (or 202 Accepted while provisioning completes asynchronously) with the Spark pool resource representation.
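
As a minimal sketch, the pool can be created from Python with the requests library and a token from azure-identity. The subscription, resource group, workspace, and pool names below are placeholders.

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholder identifiers; substitute your own values.
subscription_id = "00000000-0000-0000-0000-000000000000"
resource_group = "my-rg"
workspace = "my-synapse-ws"
pool_name = "sparkpool01"

url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.Synapse/workspaces/{workspace}"
    f"/bigDataPools/{pool_name}?api-version=2021-06-01"
)

# Acquire an ARM bearer token (assumes an Azure identity is available locally).
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
headers = {"Authorization": f"Bearer {token}"}

body = {
    "location": "westeurope",
    "properties": {
        "nodeSize": "Small",
        "nodeSizeFamily": "MemoryOptimized",
        "autoScale": {"enabled": True, "minNodeCount": 3, "maxNodeCount": 10},
        "sparkVersion": "3.3",
    },
}

response = requests.put(url, headers=headers, json=body)
print(response.status_code)
if response.ok:
    # provisioningState moves from Provisioning to Succeeded once the pool is ready.
    print(response.json().get("properties", {}).get("provisioningState"))
```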

Get Spark Pool

GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Synapse/workspaces/{workspaceName}/bigDataPools/{bigDataPoolName}?api-version=2021-06-01
Gets a Spark pool.
Successful Response:

A 200 OK status code with the Spark pool resource representation.
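
A short sketch of the corresponding call in Python, reusing the url and headers constructed in the create example above:

```python
import requests

# `url` and `headers` are built exactly as in the create sketch above.
response = requests.get(url, headers=headers)
pool = response.json()

# Inspect a few commonly checked properties.
print(pool.get("properties", {}).get("provisioningState"))
print(pool.get("properties", {}).get("nodeSize"))
```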

Update Spark Pool

PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Synapse/workspaces/{workspaceName}/bigDataPools/{bigDataPoolName}?api-version=2021-06-01
Updates a Spark pool.
Request Body Parameters:

The body uses the same shape as Create Spark Pool, but only the values being changed need to be supplied. In the 2021-06-01 management API, the PATCH operation is mainly used to update resource tags; other configuration changes go through the create-or-update (PUT) request.

Successful Response:

A 200 OK status code with the updated Spark pool resource representation.
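
A minimal sketch of a PATCH call, again reusing the url and headers from the create example; this body updates only resource tags, which is the typical use of the update endpoint.

```python
import requests

# `url` and `headers` are built exactly as in the create sketch above.
# This PATCH body updates only the resource tags.
response = requests.patch(url, headers=headers, json={"tags": {"environment": "dev"}})
print(response.status_code)
```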

Delete Spark Pool

DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Synapse/workspaces/{workspaceName}/bigDataPools/{bigDataPoolName}?api-version=2021-06-01
Deletes a Spark pool.
Successful Response:

A 200 OK status code if the deletion completes successfully; a 202 Accepted status code may be returned while an asynchronous deletion is still in progress.
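
And a matching deletion sketch, reusing the same url and headers:

```python
import requests

# `url` and `headers` are built exactly as in the create sketch above.
response = requests.delete(url, headers=headers)

# 200 indicates completion; 202 means the deletion continues asynchronously.
print(response.status_code)
```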

Best Practices

  • Right-size your nodes: Choose VM sizes that match your workload's CPU and memory requirements.
  • Configure autoscale: Use autoscale to handle fluctuating workloads and optimize costs.
  • Monitor performance: Regularly monitor Spark job metrics and cluster utilization to identify bottlenecks.
  • Optimize data partitioning: Ensure your data is partitioned effectively for faster query processing (see the short sketch after this list).
  • Manage dependencies: Use a consistent approach for managing Spark libraries and dependencies across your projects.
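
As a small illustration of the partitioning guidance above, the following PySpark sketch writes data partitioned by a frequently filtered column. The storage paths and column name are placeholders, and the notebook's built-in spark session is assumed.

```python
# Hypothetical example: partition sales data by region so that
# region-filtered queries only read the relevant folders.
df = spark.read.parquet("abfss://raw@mydatalake.dfs.core.windows.net/sales/")

(
    df.repartition("region")        # group rows for each region together
      .write.mode("overwrite")
      .partitionBy("region")        # one folder per distinct region value
      .parquet("abfss://curated@mydatalake.dfs.core.windows.net/sales_by_region/")
)
```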