Transactions in Azure Cosmos DB with the .NET SDK

Introduction

Azure Cosmos DB supports transactions, which means you can run a set of insert, update, and delete operations as one single operation, guaranteed to either succeed or fail together. This is, of course, quite different from bulk execution (see my previous post), where each individual operation succeeds or fails independently. The traditional technique for implementing transactions in Cosmos DB is to write a server-side stored procedure in JavaScript to perform the updates. But with transactional batch in the .NET SDK, you can implement transactions in C# right inside your client application.

Transactions are supported per partition key, which means they are scoped to a single logical partition. A transaction cannot span multiple partition keys. So you need to supply the partition key for the transaction, and then you can insert, update, and delete documents within that partition key, inside the transaction. You just batch up your operations in a single TransactionalBatch object, and the SDK ships it off in a single request to Cosmos DB, where it runs as a transaction that succeeds or fails as a whole.

Demo: Transactional Batch

In this example, we want to update a customer document and a customer order document as a single transaction. In our project, we have a shared class that provides us with a CosmosClient from the .NET SDK:

public static class Shared
{
    public static CosmosClient Client { get; private set; }

    static Shared()
    {
        var config = new ConfigurationBuilder().AddJsonFile("appsettings.json").Build();
        var endpoint = config["CosmosEndpoint"];
        var masterKey = config["CosmosMasterKey"];

        Client = new CosmosClient(endpoint, masterKey);
    }
}

First let’s create a container with 400 RU/s of throughput, partitioned on customer ID:

var containerDef = new ContainerProperties
{
    Id = "batchDemo",
    PartitionKeyPath = "/customerId",
};

var database = Shared.Client.GetDatabase("myDatabase");
await database.CreateContainerAsync(containerDef, 400);
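If you run this demo more than once, the create call will fail because the container already exists. An idempotent alternative (a small sketch using the same ContainerProperties and throughput value from above) is CreateContainerIfNotExistsAsync:

```csharp
// Idempotent variant: succeeds whether or not the container already exists,
// so re-running the demo doesn't throw on container creation.
var database = Shared.Client.GetDatabase("myDatabase");
await database.CreateContainerIfNotExistsAsync(containerDef, 400);
```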

So we’re storing a customer document and all of that customer’s order documents in the same logical partition, keyed on customer ID. That means we can update all of those documents together, in one transaction. This illustration shows the grouping of documents within a single logical partition (dashed blue lines), with logical partitions spread across two physical partitions:

Next, let’s create a customer with some orders. We’ll give this customer an ID of cust1, and then create a Customer object and two Order objects.

var newCustomerDoc = new Customer { Id = "cust1", CustomerId = "cust1", Name = "John Doe", OrderCount = 2 };
var newOrderDoc1 = new Order { Id = "order1", CustomerId = "cust1", Item = "Surface Pro", Quantity = 1 };
var newOrderDoc2 = new Order { Id = "order2", CustomerId = "cust1", Item = "Surface Book", Quantity = 4 };

The Customer and Order classes are simple POCOs that I’ve defined for this demo, with JsonProperty attributes to convert our properties to camelCase for the documents stored in Cosmos DB:

public class Customer
{
    [JsonProperty("id")]
    public string Id { get; set; }
    [JsonProperty("customerId")]
    public string CustomerId { get; set; }
    [JsonProperty("name")]
    public string Name { get; set; }
    [JsonProperty("orderCount")]
    public int OrderCount { get; set; }
}

public class Order
{
    [JsonProperty("id")]
    public string Id { get; set; }
    [JsonProperty("customerId")]
    public string CustomerId { get; set; }
    [JsonProperty("item")]
    public string Item { get; set; }
    [JsonProperty("quantity")]
    public int Quantity { get; set; }
}

Both classes have an ID, of course, as well as the customer ID, which is the partition key. The new customer document uses the same value for both its ID and customer ID, while the two new order documents have IDs that are unique to each order belonging to the same customer (order1 and order2 in this case). And of course, all three documents have the same partition key in the customer ID property: cust1.
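As a quick sanity check of those JsonProperty attributes, here’s a sketch (my own addition, not part of the demo project) that serializes one of these POCOs with Newtonsoft.Json, which the v3 .NET SDK uses by default, so you can see the camelCase document shape before it ever reaches Cosmos DB:

```csharp
using Newtonsoft.Json;

// Serialize a Customer (class defined above) to see the camelCase JSON
// that will actually be stored as the document in Cosmos DB.
var customer = new Customer { Id = "cust1", CustomerId = "cust1", Name = "John Doe", OrderCount = 2 };
var json = JsonConvert.SerializeObject(customer);
Console.WriteLine(json);
// {"id":"cust1","customerId":"cust1","name":"John Doe","orderCount":2}
```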

Now that we’ve got our three objects, we call CreateTransactionalBatch on the container, supply the partition key, and chain one CreateItem method after another for each of the three documents:

var container = Shared.Client.GetContainer("myDatabase", "batchDemo");

var batch = container.CreateTransactionalBatch(new PartitionKey("cust1"))
    .CreateItem(newCustomerDoc)
    .CreateItem(newOrderDoc1)
    .CreateItem(newOrderDoc2);

Alright, that gives us a transactional batch, which we can now execute by passing it to ExecuteBatch:

await ExecuteBatch(batch);

Here’s the code for ExecuteBatch, and you can see that it’s pretty simple:

private static async Task ExecuteBatch(TransactionalBatch batch)
{
    var batchResponse = await batch.ExecuteAsync();

    using (batchResponse)
    {
        if (batchResponse.IsSuccessStatusCode)
        {
            Console.WriteLine("Transactional batch succeeded");
            for (var i = 0; i < batchResponse.Count; i++)
            {
                var result = batchResponse.GetOperationResultAtIndex<dynamic>(i);
                Console.WriteLine($"Document {i + 1}:");
                Console.WriteLine(result.Resource);
            }
        }
        else
        {
            Console.WriteLine("Transactional batch failed");
            for (var i = 0; i < batchResponse.Count; i++)
            {
                var result = batchResponse.GetOperationResultAtIndex<dynamic>(i);
                Console.WriteLine($"Document {i + 1}: {result.StatusCode}");
            }
        }
    }
}

We just call ExecuteAsync on the batch and check the response for success. Remember, it’s an all-or-nothing transaction, so we’ll either have three new documents or no new documents. In this case, the transaction succeeds, so we can iterate the response and call GetOperationResultAtIndex to get back each new document. Here in the container, you can see the three new documents.

Notice that we’ve got an orderCount property in the customer document, showing two orders for John Doe. This right here is a bit of denormalization, where we always know how many orders a customer has without having to run a separate count query on their order documents. We’ll always increment this orderCount in the customer document, any time we create a new order for that customer, so it’s really important to make sure that those operations are always performed using a transaction.
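For comparison, here’s a sketch of the aggregate query that the denormalized orderCount lets us skip. The IS_DEFINED(c.item) filter is my own assumption for telling order documents apart from the customer document (which has no item property); it assumes the container variable from earlier:

```csharp
// Count this customer's orders the hard way, with an aggregate query.
// IS_DEFINED(c.item) excludes the customer document from the count.
var query = new QueryDefinition(
        "SELECT VALUE COUNT(1) FROM c WHERE c.customerId = @customerId AND IS_DEFINED(c.item)")
    .WithParameter("@customerId", "cust1");

var iterator = container.GetItemQueryIterator<int>(query);
var page = await iterator.ReadNextAsync();
Console.WriteLine($"Order count: {page.First()}");   // requires System.Linq
```

Every such query costs RUs, which is exactly why keeping the counter on the customer document (updated transactionally) is attractive.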

Let’s create a third order for this customer. First, we’ll read the customer document by calling ReadItemAsync, which needs the ID and partition key for the customer – and again, for a customer document, that’s the same value for both: cust1. Then we’ll increment that orderCount property (we know the current value is 2 right now, so this increments it to 3):

var container = Shared.Client.GetContainer("myDatabase", "batchDemo");
var result = await container.ReadItemAsync<Customer>("cust1", new PartitionKey("cust1"));
var existingCustomerDoc = result.Resource;
existingCustomerDoc.OrderCount++;

Finally, we’ll create the new order document for three Surface mice.

var newOrderDoc = new Order { Id = "order3", CustomerId = "cust1", Item = "Surface Mouse", Quantity = 3 };

OK, let’s create another transactional batch. This time we’re doing a ReplaceItem for the existing customer document, along with a CreateItem for the new order document.

var batch = container.CreateTransactionalBatch(new PartitionKey("cust1"))
    .ReplaceItem(existingCustomerDoc.Id, existingCustomerDoc)
    .CreateItem(newOrderDoc);

Now let’s call ExecuteBatch once again to run this transaction.

await ExecuteBatch(batch);

This batch also succeeds. And back in the portal, sure enough, the order count in the customer document is now three, to match the three order documents:

Alright, now let’s make the transaction fail with a bad order.

Once again, we get the customer document by calling ReadItemAsync with the same customer ID for the document’s ID and partition key. And then, once again, increment the orderCount, this time changing it from 3 to 4.

var container = Shared.Client.GetContainer("myDatabase", "batchDemo");
var result = await container.ReadItemAsync<Customer>("cust1", new PartitionKey("cust1"));
var existingCustomerDoc = result.Resource;
existingCustomerDoc.OrderCount++;

Now for the new order:

var newOrderDoc = new Order { Id = "order3", CustomerId = "cust1", Item = "Surface Keyboard", Quantity = 2 };

It’s for two Surface keyboards, but the order ID order3 is a duplicate of an existing order document in the container. Order IDs must be unique within each customer’s logical partition, so this is a bad order. Meanwhile, we’ve already incremented the customer’s order count to four. Let’s see what happens now when we wrap this up inside another transactional batch. This is the same code we used to create the previous order, but this time we’re expecting the transaction to fail:

var batch = container.CreateTransactionalBatch(new PartitionKey("cust1"))
    .ReplaceItem(existingCustomerDoc.Id, existingCustomerDoc)
    .CreateItem(newOrderDoc);

await ExecuteBatch(batch);

Now when we execute the batch, the response tells us that the transaction was unsuccessful, and returns a StatusCode for each operation to tell us what went wrong:

Transactional batch failed
Document 1: FailedDependency
Document 2: Conflict

This shows us that there was really nothing wrong with the first operation to replace the customer document with an incremented order count. But the second operation to create the new order failed with a conflict, as we expected, and so that first operation also failed as a dependency.
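A small helper like this (hypothetical, not part of the SDK) can make those per-operation HttpStatusCode values from the batch response a bit more readable:

```csharp
using System.Net;

// Hypothetical helper (my own, not part of the SDK): translate the per-operation
// status codes that a failed transactional batch reports for each document.
static string Explain(HttpStatusCode code) => code switch
{
    HttpStatusCode.FailedDependency => "operation was fine, but another operation in the batch failed",  // 424
    HttpStatusCode.Conflict => "create failed; an item with this id already exists in the partition",    // 409
    HttpStatusCode.NotFound => "replace/delete failed; no item with this id exists in the partition",    // 404
    _ => $"unexpected status: {code}"
};

Console.WriteLine(Explain(HttpStatusCode.FailedDependency));
Console.WriteLine(Explain(HttpStatusCode.Conflict));
```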

Jumping over once more to the Data Explorer, you can see that there’s been no change at all to the container – the customer’s order count is still three, and there are still only three orders.

That’s it for transactional batch. As you can see, it’s pretty simple to implement transactions in your client application using the .NET SDK, which provides a nice alternative to writing JavaScript stored procedures to achieve the same functionality.

Enjoy, and happy coding!
