---
title: Azure Table storage bindings for Azure Functions
description: Understand how to use Azure Table storage bindings in Azure Functions.
author: craigshoemaker
ms.topic: reference
ms.date: 09/03/2018
ms.author: cshoe
ms.custom: tracking-python
---

Azure Table storage bindings for Azure Functions

This article explains how to work with Azure Table storage bindings in Azure Functions. Azure Functions supports input and output bindings for Azure Table storage.

[!INCLUDE intro]

Packages - Functions 1.x

The Table storage bindings are provided in the Microsoft.Azure.WebJobs NuGet package, version 2.x. Source code for the package is in the azure-webjobs-sdk GitHub repository.

[!INCLUDE functions-package-auto]

[!INCLUDE functions-storage-sdk-version]

Packages - Functions 2.x and higher

The Table storage bindings are provided in the Microsoft.Azure.WebJobs.Extensions.Storage NuGet package, version 3.x. Source code for the package is in the azure-webjobs-sdk GitHub repository.

[!INCLUDE functions-package-v2]

Input

Use the Azure Table storage input binding to read a table in an Azure Storage account.

One entity

The following example shows a C# function that reads a single table row. The function is triggered by each message sent to the queue.

The row key value "{queueTrigger}" indicates that the row key comes from the queue message string.

public class TableStorage
{
    public class MyPoco
    {
        public string PartitionKey { get; set; }
        public string RowKey { get; set; }
        public string Text { get; set; }
    }

    [FunctionName("TableInput")]
    public static void TableInput(
        [QueueTrigger("table-items")] string input, 
        [Table("MyTable", "MyPartition", "{queueTrigger}")] MyPoco poco, 
        ILogger log)
    {
        log.LogInformation($"PK={poco.PartitionKey}, RK={poco.RowKey}, Text={poco.Text}");
    }
}

IQueryable

The following example shows a C# function that reads multiple table rows where the MyPoco class derives from TableEntity.

public class TableStorage
{
    public class MyPoco : TableEntity
    {
        public string Text { get; set; }
    }

    [FunctionName("TableInput")]
    public static void TableInput(
        [QueueTrigger("table-items")] string input, 
        [Table("MyTable", "MyPartition")] IQueryable<MyPoco> pocos, 
        ILogger log)
    {
        foreach (MyPoco poco in pocos)
        {
            log.LogInformation($"PK={poco.PartitionKey}, RK={poco.RowKey}, Text={poco.Text}");
        }
    }
}

CloudTable

IQueryable isn't supported in the Functions v2 runtime. An alternative is to use a CloudTable method parameter to read the table by using the Azure Storage SDK. Here's an example of a function that queries an Azure Functions log table:

using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;
using Microsoft.WindowsAzure.Storage.Table;
using System;
using System.Threading.Tasks;

namespace FunctionAppCloudTable2
{
    public class LogEntity : TableEntity
    {
        public string OriginalName { get; set; }
    }
    public static class CloudTableDemo
    {
        [FunctionName("CloudTableDemo")]
        public static async Task Run(
            [TimerTrigger("0 */1 * * * *")] TimerInfo myTimer, 
            [Table("AzureWebJobsHostLogscommon")] CloudTable cloudTable,
            ILogger log)
        {
            log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");

            TableQuery<LogEntity> rangeQuery = new TableQuery<LogEntity>().Where(
                TableQuery.CombineFilters(
                    TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, 
                        "FD2"),
                    TableOperators.And,
                    TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.GreaterThan, 
                        "t")));

            // Execute the query and loop through the results
            foreach (LogEntity entity in 
                await cloudTable.ExecuteQuerySegmentedAsync(rangeQuery, null))
            {
                log.LogInformation(
                    $"{entity.PartitionKey}\t{entity.RowKey}\t{entity.Timestamp}\t{entity.OriginalName}");
            }
        }
    }
}
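
In the example above, ExecuteQuerySegmentedAsync is called with a null continuation token, so only the first segment of results is read. To walk the table segment by segment, you can loop until the returned continuation token is null. Here's a minimal sketch of that loop, reusing the cloudTable, rangeQuery, and log variables from the function body above:

TableContinuationToken token = null;
do
{
    // Read one page of results and capture the token that points to the next page.
    TableQuerySegment<LogEntity> segment =
        await cloudTable.ExecuteQuerySegmentedAsync(rangeQuery, token);
    token = segment.ContinuationToken;

    foreach (LogEntity entity in segment.Results)
    {
        log.LogInformation(
            $"{entity.PartitionKey}\t{entity.RowKey}\t{entity.Timestamp}\t{entity.OriginalName}");
    }
} while (token != null);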

For more information about how to use CloudTable, see Get started with Azure Table storage.

If you try to bind to CloudTable and get an error message, make sure that you have a reference to the correct Storage SDK version.

One entity

The following example shows a table input binding in a function.json file and C# script code that uses the binding. The function uses a queue trigger to read a single table row.

The function.json file specifies a partitionKey and a rowKey. The rowKey value "{queueTrigger}" indicates that the row key comes from the queue message string.

{
  "bindings": [
    {
      "queueName": "myqueue-items",
      "connection": "MyStorageConnectionAppSetting",
      "name": "myQueueItem",
      "type": "queueTrigger",
      "direction": "in"
    },
    {
      "name": "personEntity",
      "type": "table",
      "tableName": "Person",
      "partitionKey": "Test",
      "rowKey": "{queueTrigger}",
      "connection": "MyStorageConnectionAppSetting",
      "direction": "in"
    }
  ],
  "disabled": false
}

The configuration section explains these properties.

Here's the C# script code:

public static void Run(string myQueueItem, Person personEntity, ILogger log)
{
    log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
    log.LogInformation($"Name in Person entity: {personEntity.Name}");
}

public class Person
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Name { get; set; }
}

IQueryable

The following example shows a table input binding in a function.json file and C# script code that uses the binding. The function reads entities for a partition key that is specified in a queue message.

Here's the function.json file:

{
  "bindings": [
    {
      "queueName": "myqueue-items",
      "connection": "MyStorageConnectionAppSetting",
      "name": "myQueueItem",
      "type": "queueTrigger",
      "direction": "in"
    },
    {
      "name": "tableBinding",
      "type": "table",
      "connection": "MyStorageConnectionAppSetting",
      "tableName": "Person",
      "direction": "in"
    }
  ],
  "disabled": false
}

The configuration section explains these properties.

The C# script code adds a reference to the Azure Storage SDK so that the entity type can derive from TableEntity:

#r "Microsoft.WindowsAzure.Storage"
using Microsoft.WindowsAzure.Storage.Table;
using Microsoft.Extensions.Logging;

public static void Run(string myQueueItem, IQueryable<Person> tableBinding, ILogger log)
{
    log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
    foreach (Person person in tableBinding.Where(p => p.PartitionKey == myQueueItem).ToList())
    {
        log.LogInformation($"Name: {person.Name}");
    }
}

public class Person : TableEntity
{
    public string Name { get; set; }
}

CloudTable

IQueryable isn't supported in version 2.x and higher of the Functions runtime. An alternative is to use a CloudTable method parameter to read the table by using the Azure Storage SDK. Here's an example of a function that queries an Azure Functions log table:

{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 */1 * * * *"
    },
    {
      "name": "cloudTable",
      "type": "table",
      "connection": "AzureWebJobsStorage",
      "tableName": "AzureWebJobsHostLogscommon",
      "direction": "in"
    }
  ],
  "disabled": false
}
#r "Microsoft.WindowsAzure.Storage"
using Microsoft.WindowsAzure.Storage.Table;
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;

public static async Task Run(TimerInfo myTimer, CloudTable cloudTable, ILogger log)
{
    log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");

    TableQuery<LogEntity> rangeQuery = new TableQuery<LogEntity>().Where(
    TableQuery.CombineFilters(
        TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, 
            "FD2"),
        TableOperators.And,
        TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.GreaterThan, 
            "a")));

    // Execute the query and loop through the results
    foreach (LogEntity entity in 
    await cloudTable.ExecuteQuerySegmentedAsync(rangeQuery, null))
    {
        log.LogInformation(
            $"{entity.PartitionKey}\t{entity.RowKey}\t{entity.Timestamp}\t{entity.OriginalName}");
    }
}

public class LogEntity : TableEntity
{
    public string OriginalName { get; set; }
}

For more information about how to use CloudTable, see Get started with Azure Table storage.

If you try to bind to CloudTable and get an error message, make sure that you have a reference to the correct Storage SDK version.

The following example shows a table input binding in a function.json file and JavaScript code that uses the binding. The function uses a queue trigger to read a single table row.

The function.json file specifies a partitionKey and a rowKey. The rowKey value "{queueTrigger}" indicates that the row key comes from the queue message string.

{
  "bindings": [
    {
      "queueName": "myqueue-items",
      "connection": "MyStorageConnectionAppSetting",
      "name": "myQueueItem",
      "type": "queueTrigger",
      "direction": "in"
    },
    {
      "name": "personEntity",
      "type": "table",
      "tableName": "Person",
      "partitionKey": "Test",
      "rowKey": "{queueTrigger}",
      "connection": "MyStorageConnectionAppSetting",
      "direction": "in"
    }
  ],
  "disabled": false
}

The configuration section explains these properties.

Here's the JavaScript code:

module.exports = function (context, myQueueItem) {
    context.log('Node.js queue trigger function processed work item', myQueueItem);
    context.log('Person entity name: ' + context.bindings.personEntity.Name);
    context.done();
};

Single table row

The following example shows a table input binding in a function.json file and Python code that uses the binding. The function uses an HTTP trigger to read a single table row, with the row key taken from the route.

Here's the function.json file:

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "messageJSON",
      "type": "table",
      "tableName": "messages",
      "partitionKey": "message",
      "rowKey": "{id}",
      "connection": "AzureWebJobsStorage",
      "direction": "in"
    },
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [
        "get",
        "post"
      ],
      "route": "messages/{id}"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ],
  "disabled": false
}
Here's the Python code:

import json

import azure.functions as func

def main(req: func.HttpRequest, messageJSON) -> func.HttpResponse:

    message = json.loads(messageJSON)
    return func.HttpResponse(f"Table row: {messageJSON}")

The following example shows an HTTP-triggered function that returns a list of person objects that are in a specified partition in Table storage. In the example, the partition key is extracted from the HTTP route, and the tableName and connection are taken from the function app settings.

public class Person {
    private String PartitionKey;
    private String RowKey;
    private String Name;

    public String getPartitionKey() { return this.PartitionKey; }
    public void setPartitionKey(String key) { this.PartitionKey = key; }
    public String getRowKey() { return this.RowKey; }
    public void setRowKey(String key) { this.RowKey = key; }
    public String getName() { return this.Name; }
    public void setName(String name) { this.Name = name; }
}

@FunctionName("getPersonsByPartitionKey")
public Person[] get(
        @HttpTrigger(name = "getPersons", methods = {HttpMethod.GET}, authLevel = AuthorizationLevel.FUNCTION, route="persons/{partitionKey}") HttpRequestMessage<Optional<String>> request,
        @BindingName("partitionKey") String partitionKey,
        @TableInput(name="persons", partitionKey="{partitionKey}", tableName="%MyTableName%", connection="MyConnectionString") Person[] persons,
        final ExecutionContext context) {

    context.getLogger().info("Got query for person related to persons with partition key: " + partitionKey);

    return persons;
}

The TableInput annotation can also extract the bindings from the JSON body of the request, as the following example shows.

@FunctionName("GetPersonsByKeysFromRequest")
public HttpResponseMessage get(
        @HttpTrigger(name = "getPerson", methods = {HttpMethod.GET}, authLevel = AuthorizationLevel.FUNCTION, route="query") HttpRequestMessage<Optional<String>> request,
        @TableInput(name="persons", partitionKey="{partitionKey}", rowKey = "{rowKey}", tableName="%MyTableName%", connection="MyConnectionString") Person person,
        final ExecutionContext context) {

    if (person == null) {
        return request.createResponseBuilder(HttpStatus.NOT_FOUND)
                    .body("Person not found.")
                    .build();
    }

    return request.createResponseBuilder(HttpStatus.OK)
                    .header("Content-Type", "application/json")
                    .body(person)
                    .build();
}

The following example uses a filter to query for persons with a specific name in an Azure Table and limits the number of possible matches to 10 results.

@FunctionName("getPersonsByName")
public Person[] get(
        @HttpTrigger(name = "getPersons", methods = {HttpMethod.GET}, authLevel = AuthorizationLevel.FUNCTION, route="filter/{name}") HttpRequestMessage<Optional<String>> request,
        @BindingName("name") String name,
        @TableInput(name="persons", filter="Name eq '{name}'", take = "10", tableName="%MyTableName%", connection="MyConnectionString") Person[] persons,
        final ExecutionContext context) {

    context.getLogger().info("Got query for person related to persons with name: " + name);

    return persons;
}

Input - attributes and annotations

In C# class libraries, use the following attributes to configure a table input binding:

  • TableAttribute

    The attribute's constructor takes the table name, partition key, and row key. The attribute can be used on the method parameter that receives the entity, as shown in the following example:

    [FunctionName("TableInput")]
    public static void Run(
        [QueueTrigger("table-items")] string input, 
        [Table("MyTable", "Http", "{queueTrigger}")] MyPoco poco, 
        ILogger log)
    {
        ...
    }

    You can set the Connection property to specify the storage account to use, as shown in the following example:

    [FunctionName("TableInput")]
    public static void Run(
        [QueueTrigger("table-items")] string input, 
        [Table("MyTable", "Http", "{queueTrigger}", Connection = "StorageConnectionAppSetting")] MyPoco poco, 
        ILogger log)
    {
        ...
    }

    For a complete example, see Input - C# example.

  • StorageAccountAttribute

    Provides another way to specify the storage account to use. The constructor takes the name of an app setting that contains a storage connection string. The attribute can be applied at the parameter, method, or class level. The following example shows class level and method level:

    [StorageAccount("ClassLevelStorageAppSetting")]
    public static class AzureFunctions
    {
        [FunctionName("TableInput")]
        [StorageAccount("FunctionLevelStorageAppSetting")]
        public static void Run( //...
    {
        ...
    }
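
    The class-level and method-level forms are shown above. The attribute can also be applied at the parameter level, next to the Table attribute on the parameter it applies to. Here's a minimal sketch of that form (the setting name is a placeholder):

    [FunctionName("TableInput")]
    public static void Run(
        [QueueTrigger("table-items")] string input, 
        [StorageAccount("ParameterLevelStorageAppSetting")]
        [Table("MyTable", "MyPartition", "{queueTrigger}")] MyPoco poco, 
        ILogger log)
    {
        // The Table binding on this parameter resolves its connection string
        // from the app setting named by ParameterLevelStorageAppSetting.
        log.LogInformation($"PK={poco.PartitionKey}, RK={poco.RowKey}");
    }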

The storage account to use is determined in the following order:

  • The Table attribute's Connection property.
  • The StorageAccount attribute applied to the same parameter as the Table attribute.
  • The StorageAccount attribute applied to the function.
  • The StorageAccount attribute applied to the class.
  • The default storage account for the function app ("AzureWebJobsStorage" app setting).

Attributes are not supported by C# Script.

Attributes are not supported by JavaScript.

Attributes are not supported by Python.

In the Java functions runtime library, use the @TableInput annotation on parameters whose value would come from Table storage. This annotation can be used with native Java types, POJOs, or nullable values using Optional<T>.


Input - configuration

The following table explains the binding configuration properties that you set in the function.json file and the Table attribute.

| function.json property | Attribute property | Description |
|---|---|---|
| type | n/a | Must be set to table. This property is set automatically when you create the binding in the Azure portal. |
| direction | n/a | Must be set to in. This property is set automatically when you create the binding in the Azure portal. |
| name | n/a | The name of the variable that represents the table or entity in function code. |
| tableName | TableName | The name of the table. |
| partitionKey | PartitionKey | Optional. The partition key of the table entity to read. See the usage section for guidance on how to use this property. |
| rowKey | RowKey | Optional. The row key of the table entity to read. See the usage section for guidance on how to use this property. |
| take | Take | Optional. The maximum number of entities to read in JavaScript. See the usage section for guidance on how to use this property. |
| filter | Filter | Optional. An OData filter expression for table input in JavaScript. See the usage section for guidance on how to use this property. |
| connection | Connection | The name of an app setting that contains the Storage connection string to use for this binding. The setting can be the name of an "AzureWebJobs"-prefixed app setting or a connection string name. For example, if your setting name is "AzureWebJobsMyStorage", you can specify "MyStorage" here, and the Functions runtime automatically looks for an app setting named "AzureWebJobsMyStorage". If you leave connection empty, the Functions runtime uses the default Storage connection string in the app setting that is named AzureWebJobsStorage. |

[!INCLUDE app settings to local.settings.json]

Input - usage

  • Read one row

    Set partitionKey and rowKey. Access the table data by using a method parameter T <paramName>. In C# script, paramName is the value specified in the name property of function.json. T is typically a type that implements ITableEntity or derives from TableEntity. The filter and take properties are not used in this scenario.

  • Read one or more rows

    Access the table data by using a method parameter IQueryable<T> <paramName>. In C# script, paramName is the value specified in the name property of function.json. T must be a type that implements ITableEntity or derives from TableEntity. You can use IQueryable methods to do any filtering required. The partitionKey, rowKey, filter, and take properties are not used in this scenario.

    [!NOTE] IQueryable isn't supported in the Functions v2 runtime. An alternative is to use a CloudTable paramName method parameter to read the table by using the Azure Storage SDK. If you try to bind to CloudTable and get an error message, make sure that you have a reference to the correct Storage SDK version.

Set the filter and take properties. Don't set partitionKey or rowKey. Access the input table entity (or entities) using context.bindings.<BINDING_NAME>. The deserialized objects have RowKey and PartitionKey properties.

Table data is passed to the function as a JSON string. De-serialize the message by calling json.loads as shown in the input example.

The TableInput annotation gives you access to the table row that is read into the function.


Output

Use an Azure Table storage output binding to write entities to a table in an Azure Storage account.

Note

This output binding does not support updating existing entities. Use the TableOperation.Replace operation from the Azure Storage SDK to update an existing entity.
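
For example, here's a minimal sketch of updating an existing row through a CloudTable binding by retrieving the entity and then executing TableOperation.Replace. It uses the Microsoft.WindowsAzure.Storage.Table namespace as in the CloudTable examples above; the queue name, table name, and Person entity type are placeholders for this illustration:

[FunctionName("UpdatePersonName")]
public static async Task Run(
    [QueueTrigger("person-updates")] string rowKey,
    [Table("Person")] CloudTable cloudTable,
    ILogger log)
{
    // Retrieve the existing entity; the result carries the ETag that Replace requires.
    TableResult result = await cloudTable.ExecuteAsync(
        TableOperation.Retrieve<Person>("Test", rowKey));

    if (result.Result is Person person)
    {
        person.Name = "Updated name";
        await cloudTable.ExecuteAsync(TableOperation.Replace(person));
    }
    else
    {
        log.LogInformation($"No entity found with row key {rowKey}");
    }
}

public class Person : TableEntity
{
    public string Name { get; set; }
}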

The following example shows a C# function that uses an HTTP trigger to write a single table row.

public class TableStorage
{
    public class MyPoco
    {
        public string PartitionKey { get; set; }
        public string RowKey { get; set; }
        public string Text { get; set; }
    }

    [FunctionName("TableOutput")]
    [return: Table("MyTable")]
    public static MyPoco TableOutput([HttpTrigger] dynamic input, ILogger log)
    {
        log.LogInformation($"C# http trigger function processed: {input.Text}");
        return new MyPoco { PartitionKey = "Http", RowKey = Guid.NewGuid().ToString(), Text = input.Text };
    }
}

The following example shows a table output binding in a function.json file and C# script code that uses the binding. The function writes multiple table entities.

Here's the function.json file:

{
  "bindings": [
    {
      "name": "input",
      "type": "manualTrigger",
      "direction": "in"
    },
    {
      "tableName": "Person",
      "connection": "MyStorageConnectionAppSetting",
      "name": "tableBinding",
      "type": "table",
      "direction": "out"
    }
  ],
  "disabled": false
}

The configuration section explains these properties.

Here's the C# script code:

public static void Run(string input, ICollector<Person> tableBinding, ILogger log)
{
    for (int i = 1; i < 10; i++)
        {
            log.LogInformation($"Adding Person entity {i}");
            tableBinding.Add(
                new Person() { 
                    PartitionKey = "Test", 
                    RowKey = i.ToString(), 
                    Name = "Name" + i.ToString() }
                );
        }

}

public class Person
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Name { get; set; }
}

The following example shows a table output binding in a function.json file and a JavaScript function that uses the binding. The function writes multiple table entities.

Here's the function.json file:

{
  "bindings": [
    {
      "name": "input",
      "type": "manualTrigger",
      "direction": "in"
    },
    {
      "tableName": "Person",
      "connection": "MyStorageConnectionAppSetting",
      "name": "tableBinding",
      "type": "table",
      "direction": "out"
    }
  ],
  "disabled": false
}

The configuration section explains these properties.

Here's the JavaScript code:

module.exports = function (context) {

    context.bindings.tableBinding = [];

    for (var i = 1; i < 10; i++) {
        context.bindings.tableBinding.push({
            PartitionKey: "Test",
            RowKey: i.toString(),
            Name: "Name " + i
        });
    }
    
    context.done();
};

The following example demonstrates how to use the Table storage output binding. The table binding is configured in the function.json by assigning values to name, tableName, partitionKey, and connection:

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "message",
      "type": "table",
      "tableName": "messages",
      "partitionKey": "message",
      "connection": "AzureWebJobsStorage",
      "direction": "out"
    },
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [
        "get",
        "post"
      ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}

The following function generates a unique UUID for the rowKey value and persists the message to Table storage.

import logging
import uuid
import json

import azure.functions as func

def main(req: func.HttpRequest, message: func.Out[str]) -> func.HttpResponse:

    rowKey = str(uuid.uuid4())

    data = {
        "Name": "Output binding message",
        "PartitionKey": "message",
        "RowKey": rowKey
    }

    message.set(json.dumps(data))

    return func.HttpResponse(f"Message created with the rowKey: {rowKey}")

The following example shows a Java function that uses an HTTP trigger to write a single table row.

public class Person {
    private String PartitionKey;
    private String RowKey;
    private String Name;

    public String getPartitionKey() {return this.PartitionKey;}
    public void setPartitionKey(String key) {this.PartitionKey = key; }
    public String getRowKey() {return this.RowKey;}
    public void setRowKey(String key) {this.RowKey = key; }
    public String getName() {return this.Name;}
    public void setName(String name) {this.Name = name; }
}

public class AddPerson {

    @FunctionName("addPerson")
    public HttpResponseMessage get(
            @HttpTrigger(name = "postPerson", methods = {HttpMethod.POST}, authLevel = AuthorizationLevel.FUNCTION, route="persons/{partitionKey}/{rowKey}") HttpRequestMessage<Optional<Person>> request,
            @BindingName("partitionKey") String partitionKey,
            @BindingName("rowKey") String rowKey,
            @TableOutput(name="person", partitionKey="{partitionKey}", rowKey = "{rowKey}", tableName="%MyTableName%", connection="MyConnectionString") OutputBinding<Person> person,
            final ExecutionContext context) {

        Person outPerson = new Person();
        outPerson.setPartitionKey(partitionKey);
        outPerson.setRowKey(rowKey);
        outPerson.setName(request.getBody().get().getName());

        person.setValue(outPerson);

        return request.createResponseBuilder(HttpStatus.OK)
                        .header("Content-Type", "application/json")
                        .body(outPerson)
                        .build();
    }
}

The following example shows a Java function that uses an HTTP trigger to write multiple table rows.

public class Person {
    private String PartitionKey;
    private String RowKey;
    private String Name;

    public String getPartitionKey() {return this.PartitionKey;}
    public void setPartitionKey(String key) {this.PartitionKey = key; }
    public String getRowKey() {return this.RowKey;}
    public void setRowKey(String key) {this.RowKey = key; }
    public String getName() {return this.Name;}
    public void setName(String name) {this.Name = name; }
}

public class AddPersons {

    @FunctionName("addPersons")
    public HttpResponseMessage get(
            @HttpTrigger(name = "postPersons", methods = {HttpMethod.POST}, authLevel = AuthorizationLevel.FUNCTION, route="persons/") HttpRequestMessage<Optional<Person[]>> request,
            @TableOutput(name="person", tableName="%MyTableName%", connection="MyConnectionString") OutputBinding<Person[]> persons,
            final ExecutionContext context) {

        persons.setValue(request.getBody().get());

        return request.createResponseBuilder(HttpStatus.OK)
                        .header("Content-Type", "application/json")
                        .body(request.getBody().get())
                        .build();
    }
}

Output - attributes and annotations

In C# class libraries, use the TableAttribute.

The attribute's constructor takes the table name. The attribute can be used on an out parameter or on the return value of the function, as shown in the following example:

[FunctionName("TableOutput")]
[return: Table("MyTable")]
public static MyPoco TableOutput(
    [HttpTrigger] dynamic input, 
    ILogger log)
{
    ...
}

You can set the Connection property to specify the storage account to use, as shown in the following example:

[FunctionName("TableOutput")]
[return: Table("MyTable", Connection = "StorageConnectionAppSetting")]
public static MyPoco TableOutput(
    [HttpTrigger] dynamic input, 
    ILogger log)
{
    ...
}

For a complete example, see Output - C# example.

You can use the StorageAccount attribute to specify the storage account at class, method, or parameter level. For more information, see Input - attributes.

Attributes are not supported by C# Script.

Attributes are not supported by JavaScript.

Attributes are not supported by Python.

In the Java functions runtime library, use the TableOutput annotation on parameters to write values into table storage.

See the example for more detail.


Output - configuration

The following table explains the binding configuration properties that you set in the function.json file and the Table attribute.

| function.json property | Attribute property | Description |
|---|---|---|
| type | n/a | Must be set to table. This property is set automatically when you create the binding in the Azure portal. |
| direction | n/a | Must be set to out. This property is set automatically when you create the binding in the Azure portal. |
| name | n/a | The variable name used in function code that represents the table or entity. Set to $return to reference the function return value. |
| tableName | TableName | The name of the table. |
| partitionKey | PartitionKey | The partition key of the table entity to write. See the usage section for guidance on how to use this property. |
| rowKey | RowKey | The row key of the table entity to write. See the usage section for guidance on how to use this property. |
| connection | Connection | The name of an app setting that contains the Storage connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For example, if you set connection to "MyStorage", the Functions runtime looks for an app setting that is named "AzureWebJobsMyStorage". If you leave connection empty, the Functions runtime uses the default Storage connection string in the app setting that is named AzureWebJobsStorage. |

[!INCLUDE app settings to local.settings.json]

Output - usage

Access the output table entity by using a method parameter ICollector<T> paramName or IAsyncCollector<T> paramName where T includes the PartitionKey and RowKey properties. These properties are often accompanied by implementing ITableEntity or inheriting TableEntity.
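
For example, in a C# class library, a queue-triggered function could add a row through IAsyncCollector<T>, as in this minimal sketch (the queue name, table name, and Person type are placeholders):

[FunctionName("TableOutputCollector")]
public static async Task Run(
    [QueueTrigger("person-items")] string name,
    [Table("Person")] IAsyncCollector<Person> tableBinding,
    ILogger log)
{
    // AddAsync hands the entity to the binding, which writes it to the table.
    await tableBinding.AddAsync(new Person
    {
        PartitionKey = "Test",
        RowKey = Guid.NewGuid().ToString(),
        Name = name
    });

    log.LogInformation($"Added person {name}");
}

public class Person
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Name { get; set; }
}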

Alternatively you can use a CloudTable method parameter to write to the table by using the Azure Storage SDK. If you try to bind to CloudTable and get an error message, make sure that you have a reference to the correct Storage SDK version.
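
Here's a minimal sketch of the CloudTable approach, using a DynamicTableEntity so no entity class is needed. The queue and table names are placeholders, and the code uses the Microsoft.WindowsAzure.Storage.Table namespace as in the earlier CloudTable examples:

[FunctionName("CloudTableOutput")]
public static async Task Run(
    [QueueTrigger("person-items")] string name,
    [Table("Person")] CloudTable cloudTable,
    ILogger log)
{
    var entity = new DynamicTableEntity("Test", Guid.NewGuid().ToString());
    entity.Properties["Name"] = EntityProperty.GeneratePropertyForString(name);

    // Insert writes a new row; TableOperation.InsertOrReplace would overwrite an existing one.
    await cloudTable.ExecuteAsync(TableOperation.Insert(entity));

    log.LogInformation($"Wrote row {entity.RowKey} to the Person table");
}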

Access the output table entity by using a method parameter ICollector<T> paramName or IAsyncCollector<T> paramName where T includes the PartitionKey and RowKey properties. These properties are often accompanied by implementing ITableEntity or inheriting TableEntity. The paramName value is specified in the name property of function.json.

Alternatively you can use a CloudTable method parameter to write to the table by using the Azure Storage SDK. If you try to bind to CloudTable and get an error message, make sure that you have a reference to the correct Storage SDK version.

Access the output table entity (or entities) by using context.bindings.<name> where <name> is the value specified in the name property of function.json.

There are two options for outputting a Table storage row from a function:

  • Return value: Set the name property in function.json to $return. With this configuration, the function's return value is persisted as a Table storage row.

  • Imperative: Pass a value to the set method of the parameter declared as an Out type. The value passed to set is persisted as a Table storage row.

There are two options for outputting a Table storage row from a function by using the TableOutput annotation:

  • Return value: By applying the annotation to the function itself, the return value of the function is persisted as a Table storage row.

  • Imperative: To explicitly set the row value, apply the annotation to a specific parameter of the type OutputBinding<T>, where T includes the PartitionKey and RowKey properties. These properties are often provided by implementing ITableEntity or inheriting TableEntity.


Exceptions and return codes

| Binding | Reference |
|---|---|
| Table | Table Error Codes |
| Blob, Table, Queue | Storage Error Codes |
| Blob, Table, Queue | Troubleshooting |

Next steps

[!div class="nextstepaction"] Learn more about Azure Functions triggers and bindings