Introduction
Azure Table Storage is a NoSQL key-attribute store that allows you to store large amounts of structured, non-relational data. Azure Functions provides powerful bindings to integrate seamlessly with Table Storage, enabling you to read from and write to tables with minimal code.
This document covers how to use Table Storage input and output bindings in Azure Functions, along with a polling pattern that approximates trigger behavior.
Table Storage Trigger
Unlike Queue Storage and Blob Storage, Azure Table Storage does not emit events that the Functions runtime can subscribe to, so Azure Functions does not provide a native Table Storage trigger. A common substitute is a timer-triggered function that polls the table on a schedule and reads new entities through a Table Storage input binding.
Configuration
To configure this polling pattern, you define a timer trigger alongside a table input binding in your function.json file (for JavaScript and other languages) or use attributes in C#.
Key properties of the table input binding include:
- type: Set to table.
- direction: Set to in.
- name: The name of the parameter that will receive the table entity data.
- tableName: The name of the Azure Table Storage table to poll.
- connection: The name of the application setting that contains the Azure Storage connection string.
{
  "scriptFile": "index.js",
  "bindings": [
    {
      "name": "pollTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 */5 * * * *"
    },
    {
      "name": "entities",
      "type": "table",
      "direction": "in",
      "tableName": "MyTable",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
Detecting New Entities
Because the table itself raises no events, each polling run must decide which entities are new, for example by remembering the latest Timestamp seen in the previous run (a watermark) and processing only entities newer than it.
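Detecting which entities are new on each read typically relies on the Timestamp system property as a watermark. Below is a minimal sketch in plain JavaScript; filterNewEntities is a hypothetical helper, not part of any Azure SDK:

```javascript
// Hypothetical helper: given entities read via a table input binding and the
// watermark from the previous run, return only entities inserted since then.
function filterNewEntities(entities, lastSeenTimestamp) {
  const watermark = new Date(lastSeenTimestamp).getTime();
  return entities.filter(e => new Date(e.Timestamp).getTime() > watermark);
}

// Example: one entity older than the watermark, one newer.
const entities = [
  { PartitionKey: "P1", RowKey: "a", Timestamp: "2024-01-01T00:00:00Z" },
  { PartitionKey: "P1", RowKey: "b", Timestamp: "2024-06-01T00:00:00Z" }
];
const fresh = filterNewEntities(entities, "2024-03-01T00:00:00Z");
```

In practice the watermark itself must be persisted between runs, for example in a dedicated bookkeeping entity in the same storage account.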
Table Storage Input Binding
The Table Storage input binding allows you to retrieve a single entity or a collection of entities from a table as input to your function.
Configuration
Configure the input binding with type set to table and direction set to in.
You can specify a PartitionKey and RowKey to retrieve a specific entity, or omit them to retrieve all entities (use with caution for large tables).
- type: Set to table.
- direction: Set to in.
- name: The name of the parameter holding the entity (or entities).
- tableName: The name of the table to read from.
- partitionKey: (Optional) The PartitionKey to filter by.
- rowKey: (Optional) The RowKey to filter by.
- connection: The name of the application setting for the connection string.
{
  "scriptFile": "index.js",
  "bindings": [
    {
      "name": "inputEntity",
      "type": "table",
      "direction": "in",
      "tableName": "MyTable",
      "partitionKey": "{partitionKeyParam}",
      "rowKey": "{rowKeyParam}",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
Usage
The input parameter will contain the requested entity, or an array of entities when rowKey is omitted. If no entity matches the specified keys, the parameter is empty (undefined in JavaScript, null in C#).
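When you need more than a point lookup, Table Storage queries are expressed as OData filter strings, in which embedded single quotes must be escaped by doubling them. A hypothetical helper for composing such a filter (buildKeyFilter is illustrative, not an SDK function):

```javascript
// Hypothetical helper: build an OData filter string for a Table Storage
// query. Single quotes inside values are escaped by doubling them.
function buildKeyFilter(partitionKey, rowKey) {
  const esc = v => v.replace(/'/g, "''");
  let filter = `PartitionKey eq '${esc(partitionKey)}'`;
  if (rowKey !== undefined) {
    filter += ` and RowKey eq '${esc(rowKey)}'`;
  }
  return filter;
}

const f = buildKeyFilter("Orders", "O'Brien");
// Omitting rowKey yields a partition-wide filter.
const partitionOnly = buildKeyFilter("P1");
```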
Table Storage Output Binding
The Table Storage output binding allows your function to write entities to an Azure Table Storage table.
Configuration
Configure the output binding with type set to table and direction set to out.
- type: Set to table.
- direction: Set to out.
- name: The name of the parameter used to write data.
- tableName: The name of the table to write to.
- connection: The name of the application setting for the connection string.
{
  "scriptFile": "index.js",
  "bindings": [
    {
      "name": "outputEntity",
      "type": "table",
      "direction": "out",
      "tableName": "MyTable",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
Usage
In your function code, you assign the entity object (or an array of objects) to the output parameter.
The entity object must have at least a PartitionKey and RowKey property.
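Beyond being present, PartitionKey and RowKey values must not contain the characters /, \, #, or ? (or control characters), and each key is limited to 1 KiB. A hypothetical pre-write check (isValidKey and validateEntity are illustrative names):

```javascript
// Hypothetical validator: Table Storage forbids /, \, #, ? and control
// characters in PartitionKey and RowKey; each key is limited to 1 KiB.
function isValidKey(key) {
  if (typeof key !== "string") return false;
  if (Buffer.byteLength(key, "utf8") > 1024) return false;
  return !/[\/\\#?\u0000-\u001f\u007f-\u009f]/.test(key);
}

function validateEntity(entity) {
  return isValidKey(entity.PartitionKey) && isValidKey(entity.RowKey);
}
```

Running such a check before assigning to the output binding turns a service-side write failure into an immediate, easily logged error.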
Code Examples
JavaScript Example
A function that reads an entity using an input binding and writes a new entity using an output binding.
module.exports = async function (context, req, inputEntity) {
    context.log('JavaScript HTTP trigger function processed a request.');

    // Keys used by the input binding (passed in the HTTP request)
    const partitionKey = req.query.partitionKey || (req.body && req.body.partitionKey);
    const rowKey = req.query.rowKey || (req.body && req.body.rowKey);

    if (inputEntity) {
        context.log(`Found entity: ${JSON.stringify(inputEntity)}`);
    } else {
        context.log(`No entity found for PartitionKey: ${partitionKey}, RowKey: ${rowKey}`);
    }

    // Output binding: create a new entity. Timestamp is a system property
    // maintained by the service, so use a custom property for our own clock.
    const newEntity = {
        PartitionKey: "Processed",
        RowKey: context.invocationId,
        Message: `Processed request for ${partitionKey}:${rowKey}`,
        ProcessedAt: new Date().toISOString()
    };
    context.bindings.outputEntity = newEntity;
    context.log(`Writing new entity: ${JSON.stringify(newEntity)}`);

    context.res = {
        status: 200,
        body: "Entity processed and new entity written to table."
    };
};
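If the input binding omits rowKey, the bound parameter is an array rather than a single object. A hypothetical helper for summarizing such a result set (countByPartition is illustrative, not an SDK function):

```javascript
// Hypothetical helper: count entities per partition in an array returned by
// a table input binding with no rowKey filter.
function countByPartition(entities) {
  const counts = {};
  for (const e of entities) {
    counts[e.PartitionKey] = (counts[e.PartitionKey] || 0) + 1;
  }
  return counts;
}

const counts = countByPartition([
  { PartitionKey: "A", RowKey: "1" },
  { PartitionKey: "A", RowKey: "2" },
  { PartitionKey: "B", RowKey: "1" }
]);
```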
C# Example
A function using attributes for input and output bindings.
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class TableStorageFunction
{
    [FunctionName("ProcessTableEntity")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "entity/{partitionKey}/{rowKey}")] HttpRequest req,
        string partitionKey,
        string rowKey,
        [Table("MyTable", "{partitionKey}", "{rowKey}")] MyEntity inputEntity,
        [Table("MyTable")] IAsyncCollector<MyEntity> outputEntity,
        ILogger log)
    {
        log.LogInformation("C# HTTP trigger function processed a request.");

        if (inputEntity != null)
        {
            log.LogInformation($"Found entity: PartitionKey={inputEntity.PartitionKey}, RowKey={inputEntity.RowKey}, Data={inputEntity.Data}");
        }
        else
        {
            log.LogInformation($"No entity found for PartitionKey: {partitionKey}, RowKey: {rowKey}");
        }

        // Output binding: create a new entity
        var newEntity = new MyEntity
        {
            PartitionKey = "Processed",
            RowKey = Guid.NewGuid().ToString(),
            Data = $"Processed request for {partitionKey}:{rowKey}"
        };
        await outputEntity.AddAsync(newEntity);
        log.LogInformation($"Writing new entity: PartitionKey={newEntity.PartitionKey}, RowKey={newEntity.RowKey}");

        return new OkObjectResult("Entity processed and new entity written to table.");
    }
}

// POCO bound by the Table attribute. Timestamp and ETag are system
// properties managed by the service, so they are omitted here.
public class MyEntity
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Data { get; set; }
}
Best Practices
- PartitionKey and RowKey: Always ensure your entities have valid PartitionKey and RowKey properties when writing.
- Large Tables: Be mindful when querying entire tables without filters. Consider implementing strategies to retrieve data in smaller chunks if necessary.
- Connection Strings: Store your Azure Storage connection strings securely in application settings, not directly in your code.
- Error Handling: Implement robust error handling and logging to diagnose issues effectively.
- Trigger vs. Polling: Azure Functions has no native Table Storage trigger, so detecting inserts relies on polling (for example, a timer trigger combined with an input binding); detecting updates or deletions likewise requires custom change-tracking logic.
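One chunking strategy for large writes: Table Storage entity group transactions accept at most 100 operations, and every entity in a batch must share the same PartitionKey. A hypothetical helper to pre-group writes accordingly (toBatches is illustrative, not an SDK function):

```javascript
// Hypothetical helper: split entities into batches that satisfy Table
// Storage's entity group transaction rules (same PartitionKey, max 100 ops).
function toBatches(entities, maxBatchSize = 100) {
  const byPartition = {};
  for (const e of entities) {
    (byPartition[e.PartitionKey] = byPartition[e.PartitionKey] || []).push(e);
  }
  const batches = [];
  for (const group of Object.values(byPartition)) {
    for (let i = 0; i < group.length; i += maxBatchSize) {
      batches.push(group.slice(i, i + maxBatchSize));
    }
  }
  return batches;
}
```

For example, 150 entities in partition "A" plus 1 in partition "B" yield three batches: two for "A" (100 and 50 entities) and one for "B".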