Azure Functions Table Storage Bindings
Table storage bindings let your Azure Functions interact with Azure Table Storage, a NoSQL key-value store that organizes data into tables of schemaless entities. You can use these bindings to read, write, and query Table Storage data from within your functions.
Supported Operations
Table storage bindings support the following operations:
- Input Binding: Read a single entity or a collection of entities from a table.
- Output Binding: Insert or update single or multiple entities in a table.
Input Binding
The input binding for Table Storage allows you to retrieve data from a table and make it available to your function as a parameter.
Configuration
In your function.json file, you can define an input binding with the type set to table.
{
  "bindings": [
    {
      "name": "inputEntity",
      "type": "table",
      "direction": "in",
      "tableName": "MyTable",
      "partitionKey": "{partitionKey}",
      "rowKey": "{rowKey}",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "req",
      "type": "httpTrigger",
      "direction": "in",
      "methods": [
        "get"
      ]
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    }
  ]
}
Parameters
- name: The name of the parameter in your function code.
- type: Must be table.
- direction: Must be in.
- tableName: The name of the Azure Table Storage table.
- partitionKey: The partition key of the entity to retrieve. Can be a literal value or a binding expression (e.g., from an HTTP request parameter).
- rowKey: The row key of the entity to retrieve. Can be a literal value or a binding expression.
- connection: The name of an app setting that contains the Azure Storage connection string. Defaults to AzureWebJobsStorage.
Example (C#)
using Azure.Data.Tables;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class GetTableEntity
{
    [FunctionName("GetTableEntity")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "entity/{partitionKey}/{rowKey}")] HttpRequest req,
        string partitionKey,
        string rowKey,
        // TableEntity is dictionary-like, so no fixed schema is required.
        [Table("MyTable", "{partitionKey}", "{rowKey}")] TableEntity inputEntity,
        ILogger log)
    {
        log.LogInformation($"C# HTTP trigger function processed a request to get entity: PartitionKey='{partitionKey}', RowKey='{rowKey}'");

        if (inputEntity == null)
        {
            return new NotFoundResult();
        }

        return new OkObjectResult(inputEntity);
    }
}
When retrieving a single entity, both partitionKey and rowKey are required. If you omit them and bind the parameter to a collection type such as IEnumerable&lt;T&gt;, the binding retrieves multiple entities instead: all entities in the table, or only one partition's entities if partitionKey is set, as shown in the sketch below.
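For example, here is a minimal sketch of reading all entities in one partition, assuming the same Microsoft.Azure.WebJobs.Extensions.Tables binding used above; the ListTableEntities function name and route are illustrative:

using System.Collections.Generic;
using System.Linq;
using Azure.Data.Tables;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class ListTableEntities
{
    [FunctionName("ListTableEntities")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "entities/{partitionKey}")] HttpRequest req,
        // No rowKey here, so the binding returns every entity whose PartitionKey matches the route value.
        [Table("MyTable", "{partitionKey}")] IEnumerable<TableEntity> entities,
        ILogger log)
    {
        var results = entities.ToList();
        log.LogInformation($"Retrieved {results.Count} entities from MyTable.");
        return new OkObjectResult(results);
    }
}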
Output Binding
The output binding for Table Storage allows you to write data to a table.
Configuration
In your function.json file, you can define an output binding with the type set to table.
{
  "bindings": [
    {
      "name": "outputEntity",
      "type": "table",
      "direction": "out",
      "tableName": "MyTable",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "req",
      "type": "httpTrigger",
      "direction": "in",
      "methods": [
        "post"
      ]
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    }
  ]
}
}
Parameters
- name: The name of the parameter in your function code that the output binding is assigned to (for example, a collector in C#).
- type: Must be table.
- direction: Must be out.
- tableName: The name of the Azure Table Storage table to write to.
- connection: The name of an app setting that contains the Azure Storage connection string. Defaults to AzureWebJobsStorage.
Example (C#)
This example shows how to insert a new entity into a table.
using System.IO;
using System.Threading.Tasks;
using Azure.Data.Tables;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

public static class AddTableEntity
{
    [FunctionName("AddTableEntity")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = "entity")] HttpRequest req,
        [Table("MyTable", Connection = "AzureWebJobsStorage")] IAsyncCollector<TableEntity> outputEntity,
        ILogger log)
    {
        log.LogInformation("C# HTTP trigger function to add a new entity processed a request.");

        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
        JObject data = JsonConvert.DeserializeObject<JObject>(requestBody);

        if (data?["partitionKey"] == null || data?["rowKey"] == null)
        {
            return new BadRequestObjectResult("Please pass partitionKey and rowKey in the request body.");
        }

        var entity = new TableEntity
        {
            PartitionKey = data["partitionKey"].ToString(),
            RowKey = data["rowKey"].ToString()
        };

        // Copy the remaining properties from the request body onto the entity.
        foreach (JProperty property in data.Properties())
        {
            if (property.Name != "partitionKey" && property.Name != "rowKey")
            {
                entity.Add(property.Name, property.Value.ToString());
            }
        }

        await outputEntity.AddAsync(entity);

        return new OkObjectResult($"Entity with PartitionKey='{entity.PartitionKey}' and RowKey='{entity.RowKey}' added to MyTable.");
    }
}
You can use ICollector&lt;T&gt; or IAsyncCollector&lt;T&gt; for output bindings, where T is the entity type (for example, TableEntity or a POCO class that implements ITableEntity).
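As a sketch of the strongly typed approach, assuming the same Azure.Data.Tables types used above; the TodoItem class, its properties, and the route are illustrative:

using System;
using Azure;
using Azure.Data.Tables;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

// A strongly typed entity: ITableEntity requires PartitionKey, RowKey, Timestamp, and ETag.
public class TodoItem : ITableEntity
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Description { get; set; }
    public DateTimeOffset? Timestamp { get; set; }
    public ETag ETag { get; set; }
}

public static class AddTodoItem
{
    [FunctionName("AddTodoItem")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = "todo")] HttpRequest req,
        [Table("MyTable")] ICollector<TodoItem> items)
    {
        // The collector queues the insert; it is flushed when the function completes.
        items.Add(new TodoItem
        {
            PartitionKey = "todo",
            RowKey = Guid.NewGuid().ToString(),
            Description = "Created from an HTTP trigger"
        });

        return new OkResult();
    }
}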
Batch Operations
For efficiency, you can perform batch operations (inserting or updating multiple entities at once) using the Table Storage client library directly within your function.
Example (Python)
This example demonstrates a batch insert using the Azure SDK for Python.
import logging
import os
from collections import defaultdict

import azure.functions as func
from azure.data.tables import TableServiceClient


def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function to perform batch operation processed a request.')

    try:
        # Get connection string from app settings
        conn_str = os.environ["AzureWebJobsStorage"]
        table_name = "MyBatchTable"

        # Create a table client (creating the table if it does not exist)
        table_service_client = TableServiceClient.from_connection_string(conn_str=conn_str)
        table_client = table_service_client.create_table_if_not_exists(table_name=table_name)

        # Define entities for the batch operation
        entities_to_insert = [
            {"PartitionKey": "batch1", "RowKey": "item1", "Description": "First item in batch"},
            {"PartitionKey": "batch1", "RowKey": "item2", "Description": "Second item in batch"},
            {"PartitionKey": "batch2", "RowKey": "itemA", "Description": "Another item"},
        ]

        # All entities in a single batch (transaction) must share the same PartitionKey,
        # so group the operations by partition before submitting them.
        operations_by_partition = defaultdict(list)
        for entity in entities_to_insert:
            operations_by_partition[entity["PartitionKey"]].append(("create", entity))

        batch_responses = []
        for operations in operations_by_partition.values():
            batch_responses.extend(table_client.submit_transaction(operations))

        logging.info(f"Batch operation completed with {len(batch_responses)} responses.")
        return func.HttpResponse(
            "Batch operation successful.",
            status_code=200
        )
    except Exception as e:
        logging.error(f"An error occurred: {e}")
        return func.HttpResponse(
            f"An error occurred: {str(e)}",
            status_code=500
        )
Key Concepts
- PartitionKey and RowKey: These form the unique identifier for an entity in Table Storage. Your bindings will often use these to specify which data to retrieve or where to store data.
- Flexible Schema: Dictionary-like entity types (such as TableEntity in C#) and plain Python dictionaries allow for a flexible schema, which is common in NoSQL stores like Table Storage.
- Connection Strings: Ensure your storage account connection string is stored securely in your Azure Function app settings.