Multiple Partition Keys in Azure Cosmos DB (Part 3) – Azure Functions with Change Feed Trigger

Welcome to part 3 in this three-part series on using the Azure Cosmos DB change feed to implement a synchronized solution between multiple containers that store the same documents, each with a different partition key.

Start with part 1 for an overview of the problem we’re trying to solve. In a nutshell, we’re using the change feed to synchronize two containers, so that changes made to one container are replicated to the other. Thus the two containers have the same data in them, but each one is partitioned by a different partition key to best suit one of two common data access patterns. Part 1 presented a solution that queries the change feed directly. In part 2, I showed you how to use the Change Feed Processor (CFP) library instead, which provides numerous advantages over the low-level approach in part 1.

This post concludes the three-part series, and shows how to deploy the solution to execute in a serverless cloud environment using Azure Functions.

Building the Azure Functions Project

As you learned in part 2, the benefits of using the CFP library are clear. But we still need a way to deploy our host to run in Azure. We could certainly refactor the CfpLibraryHost console application as a web job. But the simplest way to achieve this is by using Azure Functions with a Cosmos DB trigger.

If you’re not already familiar with Azure Functions, they let you write individual methods (functions) that you deploy for execution in a serverless environment hosted on Azure. The term “serverless” here means you don’t have to create and manage an Azure app service just to host your code.

In the current ChangeFeedDemos solution, create a new Azure Functions project and name it CfpLibraryFunction. Visual Studio will offer up a selection of “trigger” templates to choose from, including the Cosmos DB Trigger that we’ll be using. However, we’ll just choose the Empty template and configure the project manually:

Next, add the Cosmos DB WebJob Extensions to the new project from the NuGet package Microsoft.Azure.WebJobs.Extensions.CosmosDB:

Now create a new class named MultiPkCollectionFunction in the CfpLibraryFunction project with the following code:

using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
using System;
using System.Collections.Generic;

namespace CosmosDbDemos.MultiPkCollection.ChangeFeedFunction
{
  public static class MultiPkCollectionFunction
  {
    private const string CosmosEndpoint = "https://<account-name>.documents.azure.com:443/";
    private const string CosmosMasterKey = "<account-key>";

    private static readonly DocumentClient _client;

    static MultiPkCollectionFunction()
    {
      _client = new DocumentClient(new Uri(CosmosEndpoint), CosmosMasterKey);
    }

    [FunctionName("MultiPkCollectionFunction")]
    public static void Run(
      [CosmosDBTrigger(
        databaseName: "multipk",
        collectionName: "byCity",
        ConnectionStringSetting = "CosmosDbConnectionString",
        LeaseCollectionName = "lease")]
      IReadOnlyList<Document> documents,
      ILogger log)
    {
      foreach (var document in documents)
      {
        var jDocument = JsonConvert.DeserializeObject<JObject>(document.ToString());

        if (jDocument["ttl"] == null)
        {
          var stateCollUri = UriFactory.CreateDocumentCollectionUri("multipk", "byState");
          _client.UpsertDocumentAsync(stateCollUri, document).Wait();
          log.LogInformation($"Upserted document id {document.Id} in byState collection");
        }
        else
        {
          var stateDocUri = UriFactory.CreateDocumentUri("multipk", "byState", document.Id);
          _client.DeleteDocumentAsync(stateDocUri, new RequestOptions
          {
            PartitionKey = new PartitionKey(jDocument["state"].Value<string>())
          }).Wait();
          log.LogInformation($"Deleted document id {document.Id} in byState collection");
        }
      }
    }
  }
}

Note that this is all the code we need. While the CFP library host project required less code than querying the change feed directly, it still required additional code to build and host the processor – plus you need a full Azure app service to deploy in the cloud.

Using an Azure Function, it’s super lean. We have the same minimal business logic as before, and still leverage all the CFP library goodness as before, but being “serverless,” we’re ready to deploy this as a standalone function in the cloud, without creating a full Azure app service. Plus, you can test the Azure Function locally using Visual Studio and the debugger, so that you can deploy to the cloud with confidence. Folks, it just doesn’t get much better than this!

The CosmosDBTrigger binding in the signature of the Run method causes this code to fire on every notification that gets raised by the CFP library watching the byCity container for changes. The binding tells the trigger to monitor the byCity collection in the multipk database, and to persist leases to the container named lease. It also references a connection string setting named CosmosDbConnectionString to obtain the necessary endpoint and master key, which we’ll configure in a moment.
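
The trigger also exposes optional settings that tune the behavior of the underlying change feed processing. As a minimal sketch (these particular values are illustrative, not something the demo requires), you could have the trigger create the lease container for you instead of running the CL command, and adjust how it polls the feed:

[FunctionName("MultiPkCollectionFunction")]
public static void Run(
  [CosmosDBTrigger(
    databaseName: "multipk",
    collectionName: "byCity",
    ConnectionStringSetting = "CosmosDbConnectionString",
    LeaseCollectionName = "lease",
    CreateLeaseCollectionIfNotExists = true,  // provision the lease container automatically on first run
    StartFromBeginning = true,                // consume the change feed from the start
    FeedPollDelay = 5000)]                    // poll for new changes every 5 seconds (value is in milliseconds)
  IReadOnlyList<Document> documents,
  ILogger log)
{
  // ... same synchronization logic as shown above ...
}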

The actual implementation of the Run method is virtually identical to the ProcessChangesAsync method in our CFP library host project from part 2. The only difference is that we log output using the ILogger instance passed into our Azure Function, rather than using Console.WriteLine to log output in our CFP library host project.

Setting the connection string

Next we need to set the connection string for the trigger that we are referencing in the signature of the Run method. Azure Functions supports a configuration file similar to the appsettings.json, app.config, or web.config files used by typical .NET applications. For Azure Functions, this file is named local.settings.json. Open this file and add the CosmosDbConnectionString setting that the trigger will use to monitor the byCity collection for changes:

{
  "IsEncrypted": false,
  "Values": {
    "CosmosDbConnectionString": "AccountEndpoint=https://<account-name>.documents.azure.com:443/;AccountKey=<account-key>;",
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  }
} 

Testing locally

It’s very easy to test and debug our code locally, before deploying it to Azure.

Let’s see it work by running the two projects ChangeFeedDirect and CfpLibraryFunction at the same time. The ChangeFeedDirect console app provides an interactive menu for creating the database with the byCity, byState, and lease containers, and also for inserting, updating, and deleting documents in the byCity container. The CfpLibraryFunction app, meanwhile, sits and waits to be notified of changes as they occur in the byCity container, so that they can be synchronized to the byState container.

Start the first console app (ChangeFeedDirect) and run the DB command to drop and recreate the database with empty byCity and byState containers. Then run CL to create the lease container.

Without closing the first console app, start the second one (CfpLibraryFunction). To do this, right-click the CfpLibraryFunction project in Solution Explorer and choose Debug, Start New Instance. You will see a console app open up which hosts the local webjob for our Azure function:

Wait a few moments until the local Azure Functions host indicates that the application has started:

Back in the first console app (ChangeFeedDirect), run the CD command to create three documents in the byCity container. Then run the UD command to update a document, and the DD command to delete another document (by setting its ttl property). Do not run SC to manually sync with the direct change feed queries from part 1. Instead, sit back and watch it happen automatically with the CFP library under control of the local Azure Functions webjob host:

Before we deploy to Azure, let’s re-initialize the database. Back in the first console app (ChangeFeedDirect), run the DB command to drop and recreate the database with empty byCity and byState containers. Then run CL to create the lease container.

Deploying to Azure

Satisfied that our code is working properly, we can now deploy the Azure function to run in the cloud.

Right-click the CfpLibraryFunction project and choose Publish.

Select Create New and then click Publish.

Assign a name for the deployed function, and select the desired subscription, resource group, hosting plan, and storage account (or just accept the defaults):

Now click Create and wait for the deployment to complete.

Over in the Azure portal, navigate to All Resources, filter on types for App Services, and locate your new Azure Functions service:

Click on the App Service, and then click Configuration:

Next, click New Application Setting:

This is the cloud equivalent of the local.settings.json file, where you need to plug in the same CosmosDbConnectionString setting that you provided earlier when testing locally:

Now click Update, Save, and close the application settings blade.

You’re all done! To watch it work, click on MultiPkCollectionFunction, scroll down, and expand the Logs panel:

Back in the ChangeFeedDirect console app, run the CD command to create three documents in the byCity container. Then run the UD command to update a document, and the DD command to delete another document (by setting its ttl property). Do not run SC to manually sync with the direct change feed queries from part 1. Instead, sit back and watch it happen automatically with the CFP library running in our Azure function. Within a few moments, the Azure function trigger fires, and our code synchronizes them to the byState container:

What’s Next?

A few more pointers before concluding:

Conclusion

This concludes the three-post series on using the Cosmos DB change feed to synchronize two containers with different partition keys over the same data. Thanks for reading, and happy coding!


Multiple Partition Keys in Azure Cosmos DB (Part 2) – Using the Change Feed Processor Library

Welcome to part 2 in this series of blog posts on using the Azure Cosmos DB change feed to implement a synchronized solution between multiple containers that store the same documents, each with a different partition key.

Start with part 1 for an overview of the problem we’re trying to solve. In a nutshell, we’re using the change feed to synchronize two containers, so that changes made to one container are replicated to the other. Thus the two containers have the same data in them, but each one is partitioned by a different partition key to best suit one of two common data access patterns. Part 1 presented a solution that queries the change feed directly. In this post, we’ll use the Change Feed Processor (CFP) library instead, which provides numerous advantages over the low-level approach we took in part 1.

What is the CFP Library?

For the solution we built in part 1 to consume the change feed at scale, much more work needs to be done. For example, the change feed on each partition key range of the container can be consumed concurrently, so we could add multithreading logic to parallelize those queries. Long change feeds can also be consumed in chunks, using continuation tokens that we could persist as a “lease,” so that new clients can resume consumption where previous clients left off. We also want the sync automated, so that we don’t need to poll manually.

Fortunately, the Change Feed Processor (CFP) library handles all these details for you. In most cases, unless you have very custom requirements, the CFP library is the way to go, rather than querying the change feed directly yourself.

The CFP library automatically discovers the container’s partition key ranges and parallelizes change feed consumption across each of them internally. This means that you only consume one logical change feed for the entire container, and don’t worry at all about the individual change feeds behind the individual partition key ranges. The library relies on a dedicated “lease” container to persist lease documents that maintain state for each partition key range. Each lease represents a checkpoint in time which tracks continuation tokens for chunking long change feeds across multiple clients. Thus, as you increase the number of clients, the overall change feed processing gets divided across them evenly.

Using the lease container also means we won’t need to rely on our own “sync” document that we were using to track the last time we queried the change feed directly. Therefore, our code will be simpler than the previous solution from part 1, yet we will simultaneously inherit all the aforementioned capabilities that the CFP library provides.

Building on the ChangeFeedDemos solution we created in part 1, open the Program.cs file in the ChangeFeedDirect project. In the Run method, add a line to display a menu item for creating the lease container:

Console.WriteLine("CL - Create lease collection"); 

Further down in the method, add an “else if” condition for the new menu item that calls CreateLeaseCollection:

else if (input == "CL") await CreateLeaseCollection();

Now add the CreateLeaseCollection method:

private static async Task CreateLeaseCollection()
{
  using (var client = new DocumentClient(new Uri(CosmosEndpoint), CosmosMasterKey))
  {
    var dbUri = UriFactory.CreateDatabaseUri("multipk");

    var partitionKeyDefinition = new PartitionKeyDefinition();
    partitionKeyDefinition.Paths.Add("/id");
    await client.CreateDocumentCollectionAsync(dbUri, new DocumentCollection
    {
      Id = "lease",
      PartitionKey = partitionKeyDefinition
    }, new RequestOptions { OfferThroughput = 400 });
  }

  Console.WriteLine("Created lease collection");
} 

This code creates a container named lease provisioned for 400 RUs per second. Also notice that the lease container uses the id property itself as its partition key.

Next, create the host project, which is essentially the client. When an instance of this host runs, it will listen for and consume changes across the entire container. Then, as you spin up additional instances of the host, the work to consume changes across the entire container will be distributed uniformly across all the host instances by the CFP library.

In the current ChangeFeedDemos solution, create a new .NET Core console application and name the project CfpLibraryHost. Next, add the CFP library to the new project from the NuGet package Microsoft.Azure.DocumentDB.ChangeFeedProcessor:

Replace all the code in Program.cs as follows:

using System;
using System.Threading.Tasks;

namespace CfpLibraryHost
{
  class Program
  {
    static void Main(string[] args)
    {
      var host = new ChangeFeedProcessorHost();
      Task.Run(host.RunProcessor).Wait();

      Console.ReadKey();
    }
  }
} 

This startup code initializes and runs the host. It then enters a wait state during which the host will be notified automatically by the CFP library whenever changes are available for consumption.

Next, to implement the host, create a new class named ChangeFeedProcessorHost with the following code:

using Microsoft.Azure.Documents.ChangeFeedProcessor;
using System;
using System.Threading.Tasks;

namespace CfpLibraryHost
{
  public class ChangeFeedProcessorHost
  {
    public async Task RunProcessor()
    {
      var monitoredCollection = new DocumentCollectionInfo
      {
        Uri = new Uri(Constants.CosmosEndpoint),
        MasterKey = Constants.CosmosMasterKey,
        DatabaseName = "multipk",
        CollectionName = "byCity",
      };

      var leaseCollection = new DocumentCollectionInfo
      {
        Uri = new Uri(Constants.CosmosEndpoint),
        MasterKey = Constants.CosmosMasterKey,
        DatabaseName = "multipk",
        CollectionName = "lease",
      };

      var builder = new ChangeFeedProcessorBuilder();

      builder
        .WithHostName($"Change Feed Processor Host - {Guid.NewGuid()}")
        .WithFeedCollection(monitoredCollection)
        .WithLeaseCollection(leaseCollection)
        .WithObserverFactory(new MultiPkCollectionObserverFactory());

      var processor = await builder.BuildAsync();
      await processor.StartAsync();

      Console.WriteLine("Started change feed processor - press any key to stop");
      Console.ReadKey();

      await processor.StopAsync();
    }
  }

} 

The host builds a processor configured to monitor the byCity container, using the lease container to track internal state. The processor also references an observer factory which, in turn, supplies an observer to consume the changes as they happen.
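
The builder also accepts an optional ChangeFeedProcessorOptions object if you need to tune how the processor behaves. Here’s a minimal sketch (the specific values are illustrative only, not required by this demo) that adds a lease prefix and a polling interval to the same builder configuration shown above:

var options = new ChangeFeedProcessorOptions
{
  LeasePrefix = "multipk-sync",             // namespace the lease documents if the lease container is shared
  FeedPollDelay = TimeSpan.FromSeconds(5),  // how often to poll each partition key range for new changes
  StartFromBeginning = true                 // read the change feed from the start on the very first run
};

builder
  .WithHostName($"Change Feed Processor Host - {Guid.NewGuid()}")
  .WithFeedCollection(monitoredCollection)
  .WithLeaseCollection(leaseCollection)
  .WithProcessorOptions(options)
  .WithObserverFactory(new MultiPkCollectionObserverFactory());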

The host code relies on defined constants for the URI and master key of the Cosmos DB account. Define them as follows in a new Constants class (Constants.cs):

namespace CfpLibraryHost
{
  public class Constants
  {
    public const string CosmosEndpoint = "https://<account-name>.documents.azure.com:443/";
    public const string CosmosMasterKey = "<account-key>";
  }
} 

The observer factory is super-simple. Just create a class that implements IChangeFeedObserverFactory, and return an observer instance (where the real logic will go) that implements IChangeFeedObserver. Name the class MultiPkCollectionObserverFactory, as referenced by WithObserverFactory in the host.

using Microsoft.Azure.Documents.ChangeFeedProcessor.FeedProcessing;

namespace CfpLibraryHost
{
  public class MultiPkCollectionObserverFactory : IChangeFeedObserverFactory
  {
    public IChangeFeedObserver CreateObserver() =>
      new MultiPkCollectionObserver();
  }
} 

Finally, create the observer class itself. Name this class MultiPkCollectionObserver, as referenced by the CreateObserver method in the factory class:

using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.ChangeFeedProcessor.FeedProcessing;
using Microsoft.Azure.Documents.Client;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

namespace CfpLibraryHost
{
  public class MultiPkCollectionObserver : IChangeFeedObserver
  {
    private static DocumentClient _client;

    static MultiPkCollectionObserver() =>
      _client = new DocumentClient(new Uri(Constants.CosmosEndpoint), Constants.CosmosMasterKey);

    public async Task OpenAsync(IChangeFeedObserverContext context)
    {
      Console.WriteLine($"Start observing partition key range {context.PartitionKeyRangeId}");
    }

    public async Task CloseAsync(IChangeFeedObserverContext context, ChangeFeedObserverCloseReason reason)
    {
      Console.WriteLine($"Stop observing partition key range {context.PartitionKeyRangeId} because {reason}");
    }

    public async Task ProcessChangesAsync(IChangeFeedObserverContext context, IReadOnlyList<Document> documents, CancellationToken cancellationToken)
    {
      foreach (var document in documents)
      {
        var jDocument = JsonConvert.DeserializeObject<JObject>(document.ToString());

        if (jDocument["ttl"] == null)
        {
          var stateCollUri = UriFactory.CreateDocumentCollectionUri("multipk", "byState");
          await _client.UpsertDocumentAsync(stateCollUri, document);
          Console.WriteLine($"Upserted document id {document.Id} in byState collection");
        }
        else
        {
          var stateDocUri = UriFactory.CreateDocumentUri("multipk", "byState", document.Id);
          await _client.DeleteDocumentAsync(stateDocUri, new RequestOptions
          {
            PartitionKey = new PartitionKey(jDocument["state"].Value<string>())
          });
          Console.WriteLine($"Deleted document id {document.Id} in byState collection");
        }
      }
    }
  }
} 

Our synchronization logic is embedded inside the ProcessChangesAsync method. The benefits of using the CFP library should really stand out now, compared to our solution from part 1. This code is significantly simpler yet dramatically more robust than the SyncCollections method we wrote in the ChangeFeedDirect project. We are no longer concerned with discovering and iterating partition key ranges, nor do we need to track the last sync time ourselves. Instead, the CFP library handles those details, plus it parallelizes the work and uniformly distributes large-scale processing across multiple client instances. Our code is now limited to just the business logic we require, which is the same logic as before: Upsert new or changed documents into the byState container as they occur in the byCity container, or delete them if their TTL property is set.

Let’s see it work. We’ll want to run both projects at the same time. The ChangeFeedDirect console app provides an interactive menu for creating the database with the byCity, byState, and lease containers, and also for inserting, updating, and deleting documents in the byCity container. The CfpLibraryHost app, meanwhile, sits and waits to be notified of changes as they occur in the byCity container, so that they can be synchronized to the byState container.

Start the first console app (ChangeFeedDirect) and run the DB command to drop and recreate the database with empty byCity and byState containers. Then run CL to create the lease container.

Without closing the first console app, start the second one (CfpLibraryHost). To do this, right-click the CfpLibraryHost project in Solution Explorer and choose Debug, Start New Instance. The host fires up, and now just sits and waits for changes from the CFP library:

Started change feed processor - press any key to stop
Start observing partition key range 0

Back in the first console app (ChangeFeedDirect), run the CD command to create three documents in the byCity container. Then run the UD command to update a document, and the DD command to delete another document (by setting its ttl property). Do not run SC to manually sync with the direct change feed queries from part 1. Instead, sit back and watch it happen automatically with the CFP library. Within a few moments, the host is notified of the changes, and our observer class synchronizes them to the byState container:

Started change feed processor - press any key to stop
Start observing partition key range 0
Upserted document id 63135c76-d5c6-44a8-acd3-67dfc54faae6 in byState collection
Upserted document id 06296937-de8b-4d72-805d-f3120bf06403 in byState collection
Upserted document id 0b222279-06df-487e-b746-7cb347acdf18 in byState collection
Upserted document id 63135c76-d5c6-44a8-acd3-67dfc54faae6 in byState collection
Deleted document id 0b222279-06df-487e-b746-7cb347acdf18 in byState collection 

The beauty here is that the client does not need to manually poll the change feed for updates. And we can just spin up as many clients as needed to scale out the processing for very large containers with many changes.
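
If you want a concrete signal for when to add (or remove) host instances, the CFP library can also build a remaining-work estimator from the same feed and lease collection definitions. A minimal sketch, reusing the monitoredCollection and leaseCollection objects from the host code above, assuming the estimator support in the 2.x CFP library, and running inside an async method:

// Estimate how many changes are still waiting to be processed across all partition key ranges
var estimatorBuilder = new ChangeFeedProcessorBuilder()
  .WithHostName("Change Feed Estimator")
  .WithFeedCollection(monitoredCollection)
  .WithLeaseCollection(leaseCollection);

var estimator = await estimatorBuilder.BuildEstimatorAsync();
var remainingWork = await estimator.GetEstimatedRemainingWork();
Console.WriteLine($"Estimated changes not yet processed: {remainingWork}");

A large or growing estimate suggests it’s time to start another host instance; the CFP library will rebalance the leases across the instances automatically.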

What’s Next?

The benefits of using the CFP library are clear, but we still need to deploy our host to run in Azure. We could certainly deploy the CfpLibraryHost console application to a web app in App Service as an Azure WebJob. But the simplest way to achieve this is by using Azure Functions with a Cosmos DB trigger.

So tune in to part 3, where I’ll conclude this three-part series by showing you how to deploy our solution to Azure using Azure Functions.

Multiple Partition Keys in Azure Cosmos DB (Part 1) – Querying the Change Feed Directly

To begin, let’s be clear that an Azure Cosmos DB container can have only one partition key. I say this from the start in case “multiple partition keys” in the title is somehow misinterpreted to imply otherwise. You always need to come up with a single property to serve as the partition key for every container and choosing the best property for this can sometimes be difficult.

Understanding the Problem

Making the right choice requires intimate knowledge about the data access patterns of your users and applications. You also need to understand how horizontal partitioning works behind the scenes in terms of storage and throughput, how queries and stored procedures are scoped by partition key, and the performance implications of running cross-partition queries and fan-out queries. So there are many considerations to take into account before settling on the one property to partition a container on. I discuss all this at length in my previous blog post Horizontal Partitioning in Azure Cosmos DB.

But here’s the rub. What if, after all the analysis, you come to realize that you simply cannot settle on a single property that serves as an ideal partition key for all scenarios? Let’s say for example, from a write perspective, you find one property will best distribute writes uniformly across all the partitions in the container. But from a query perspective, you find that using the same partition key results in too much fanning out. Or, you might identify two categories of common queries, where it’s roughly 50/50; meaning, about half of all the queries are of one type, and half are of the other. What do you do if the two query categories would each benefit from different partition keys?

Your brain can get caught in an infinite loop over this until you wind up in that state of “analysis paralysis,” where you recognize that there’s just no single best property to choose as the partition key. To break free, you need to think outside the box. Or, let’s say, think outside the container. Because the solution here is to simply create another container that’s a complete “replica” of the first. This second container holds the exact same set of documents as the first but defines a different partition key.

I placed quotes around the word “replica” because this second container is not technically a replica in the true Cosmos DB sense of the word (where, internally, Cosmos DB automatically maintains replicas of the physical partitions in every container). Rather, it’s a manual replica that you maintain yourself. Thus, it’s your job to keep it in sync with changes when they happen in the first container, which is partitioned by a property that’s optimized for writes. As those writes occur in real time, you need to respond by updating the second collection, which is partitioned by a property that’s optimized for queries.

Enter Change Feed

Fortunately, change feed comes to the rescue here. Cosmos DB maintains a persistent record of changes for every container that can be consumed using the change feed. This gives you a reliable mechanism for retrieving changes made to any container, all the way back to the beginning of time. For an introduction to change feed, have a look at my previous blog post Change Feed – Unsung Hero of Azure Cosmos DB.

In this three-part series of blog posts, I’ll dive into different techniques you can use for consuming the change feed to synchronize containers:

Part 1 (this post) – querying the change feed directly using the SDK
Part 2 – using the Change Feed Processor (CFP) library
Part 3 – running the solution in the cloud with Azure Functions and the Cosmos DB trigger

Let’s get started. Assume that we’ve done our analysis and established that city is the ideal partition key for writes, as well as roughly half of the most common queries our users will be running. But we’ve also determined that state is the ideal partition key for the other (roughly half) commonly executed queries. This means we’ll want one container partitioned by city, and another partitioned by state. And we’ll want to consume the city-partitioned container’s change feed to keep the state-partitioned container in sync with changes as they occur. We’ll then be able to direct our city-based queries to the first container, and our state-based queries to the second container, which then eliminates fan-out queries in both cases.

Setting Up

If you’d like to follow along, you’ll need to be sure your environment is set up properly. First, of course, you’ll need to have a Cosmos DB account. The good news here is that you can get a free 30-day account with the “try cosmos” offering, which doesn’t even require a credit card or Azure subscription (just a free Microsoft account). Even better, there’s no limit to the number of times you can start a new 30-day trial. Create your free account at http://azure.microsoft.com/try/cosmosdb.

You’ll need your account’s endpoint URI and master key to connect to Cosmos DB from C#. To obtain them, head over to your Cosmos DB account in the Azure portal, open the Keys blade, and keep it open so that you can handily copy/paste them into the project.

You’ll also need Visual Studio. I’ll be using Visual Studio 2019, but the latest version of Visual Studio 2017 is fine as well. You can download the free community edition at https://visualstudio.microsoft.com/downloads.

Querying the Change Feed Directly

We’ll begin with the raw approach, which is to query the change feed directly using the SDK. The reality is that you’ll almost never want to go this route, except for the simplest small-scale scenarios. Still, it’s worth taking some time to examine this approach first, as I think you’ll benefit from learning how the change feed operates at a low level, and it will enhance your appreciation of the Change Feed Processor (CFP) library which I’ll cover in the next blog post of this series.

Fire up Visual Studio and create a new .NET Core console application, naming the project ChangeFeedDirect and the solution ChangeFeedDemos (we’ll be adding more projects to this solution in parts 2 and 3 of this blog series). Next, add the SDK to the ChangeFeedDirect project from the NuGet package Microsoft.Azure.DocumentDB.Core:

We’ll write some basic code to create a database with the two containers, with additional methods to create, update, and delete documents in the first container (partitioned by city). Then we’ll write our “sync” method that directly queries the change feed on the first container, in order to update the second container (partitioned by state) and reflect all the changes made.

Note: Our code (and the SDK) refers to containers as collections.

We’ll write all our code inside the Program.cs file. First, update the using statements at the very top of the file to get the right namespaces imported:

using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

Next, at the very top of the Program class, set two private string constants for your Cosmos DB account’s endpoint and master key. You can simply copy them from the Keys blade in the Azure portal, and paste them right into the code:

private const string CosmosEndpoint = "https://<account-name>.documents.azure.com:443/";
private const string CosmosMasterKey = "<account-key>";

Now edit the Main method with one line to call the Run method:

static void Main(string[] args)
{
  Task.Run(Run).Wait();
}

This one-liner invokes the Run method and waits for it to complete. Add the Run method next, which displays a menu and calls the various methods defined for each menu option:

private static async Task Run()
{
  while (true)
  {
    Console.WriteLine("Menu");
    Console.WriteLine();
    Console.WriteLine("DB - Create database");
    Console.WriteLine("CD - Create documents");
    Console.WriteLine("UD - Update document");
    Console.WriteLine("DD - Delete document");
    Console.WriteLine("SC - Sync collections");
    Console.WriteLine("Q - Quit");
    Console.WriteLine();
    Console.Write("Selection: ");

    var input = Console.ReadLine().ToUpper().Trim();
    if (input == "Q") break;
    else if (input == "DB") await CreateDatabase();
    else if (input == "CD") await CreateDocuments();
    else if (input == "UD") await UpdateDocument();
    else if (input == "DD") await DeleteDocument();
    else if (input == "SC") await SyncCollections();
    else Console.WriteLine("\nInvalid selection; try again\n");
  }
}

We’ll add the menu option methods one at a time, starting with CreateDatabase:

private static async Task CreateDatabase()
{
  using (var client = new DocumentClient(new Uri(CosmosEndpoint), CosmosMasterKey))
  {
    // Create the database

    var dbUri = UriFactory.CreateDatabaseUri("multipk");

    try { await client.DeleteDatabaseAsync(dbUri); } catch { }

    await client.CreateDatabaseAsync(new Database { Id = "multipk" });

    // Create the two collections

    var partitionKeyDefinition = new PartitionKeyDefinition();
    partitionKeyDefinition.Paths.Add("/city");
    await client.CreateDocumentCollectionAsync(dbUri, new DocumentCollection
    {
      Id = "byCity",
      PartitionKey = partitionKeyDefinition,
      DefaultTimeToLive = -1
    }, new RequestOptions { OfferThroughput = 400 });

    partitionKeyDefinition = new PartitionKeyDefinition();
    partitionKeyDefinition.Paths.Add("/state");
    await client.CreateDocumentCollectionAsync(dbUri, new DocumentCollection
    {
      Id = "byState",
      PartitionKey = partitionKeyDefinition
    }, new RequestOptions { OfferThroughput = 400 });

    // Load the sync document into the first collection

    var collUri = UriFactory.CreateDocumentCollectionUri("multipk", "byCity");
    var syncDocDef = new
    {
      city = "sync",
      state = "sync",
      lastSync = default(DateTime?)
    };
    await client.CreateDocumentAsync(collUri, syncDocDef);
  }

  Console.WriteLine("Created database for change feed demo");
}

Most of this code is intuitive, even if you’ve never done any Cosmos DB programming before. We first use our endpoint and master key to create a DocumentClient, and then we use the client to create a database named multipk with the two containers in it.

This works by first calling DeleteDatabaseAsync wrapped in a try block with an empty catch block. This effectively results in “delete if exists” behavior to ensure that the multipk database does not exist when we call CreateDatabaseAsync to create it.

Next, we call CreateDocumentCollectionAsync twice to create the two containers (again, a collection is a container). We name the first container byCity and assign it a partition key of /city, and we name the second container byState and assign /state as the partition key. Both containers reserve 400 request units (RUs) per second, which is the lowest throughput you can provision.

Notice the DefaultTimeToLive = -1 option applied to the first container. At the time of this writing, change feed does not support deletes. That is, if you delete a document from a container, it does not get picked up by the change feed. This may be supported in the future, but for now, the TTL (time to live) feature provides a very simple way to cope with deletions. Rather than physically deleting documents from the first container, we’ll just update them with a TTL of 60 seconds. That gives us 60 seconds to detect the update in the change feed, so that we can physically delete the corresponding document in the second container. Then, 60 seconds later, Cosmos DB will automatically physically delete the document from the first container by virtue of its TTL setting. You’ll see all this work in a moment when we run the code.

The other point to call out is the creation of our sync document, which is a special metadata document that won’t get copied over to the second container. Instead, we’ll use it to persist a timestamp to keep track of the last time we synchronized the containers. This way, each time we sync, we can request the correct point in time from which to consume changes that have occurred since the previous sync. The document is initialized with a lastSync value of null so that our first sync will consume the change feed from the beginning of time. Then lastSync is updated so that the next sync picks up precisely where the first one left off.

Now let’s implement CreateDocuments. This method simply populates three documents in the first container:

private static async Task CreateDocuments()
{
  using (var client = new DocumentClient(new Uri(CosmosEndpoint), CosmosMasterKey))
  {
    var collUri = UriFactory.CreateDocumentCollectionUri("multipk", "byCity");

    var dataDocDef = new
    {
      city = "Brooklyn",
      state = "NY",
      slogan = "Kings County"
    };
    await client.CreateDocumentAsync(collUri, dataDocDef);

    dataDocDef = new
    {
      city = "Los Angeles",
      state = "CA",
      slogan = "Golden"
    };
    await client.CreateDocumentAsync(collUri, dataDocDef);

    dataDocDef = new
    {
      city = "Orlando",
      state = "FL",
      slogan = "Sunshine"
    };
    await client.CreateDocumentAsync(collUri, dataDocDef);
  }

  Console.WriteLine("Created 3 documents in city collection");
}

Notice that all three documents have city and state properties, where the city property is the partition key for the container that we’re creating these documents in. The state property is the partition key for the second container, where our sync method will create copies of these documents as it picks them up from the change feed. The slogan property is just an ordinary document property. And although we aren’t explicitly supplying an id property, the SDK will automatically generate one for each document with a GUID as the id value.

We’ll also have an UpdateDocument method to perform a change on one of the documents:

private static async Task UpdateDocument()
{
  var collUri = UriFactory.CreateDocumentCollectionUri("multipk", "byCity");

  using (var client = new DocumentClient(new Uri(CosmosEndpoint), CosmosMasterKey))
  {
    // Update a document
    var brooklynDoc = client
      .CreateDocumentQuery(collUri, "SELECT * FROM c WHERE c.city = 'Brooklyn'")
      .AsEnumerable()
      .FirstOrDefault();

    brooklynDoc.slogan = "Fuhgettaboutit! " + Guid.NewGuid().ToString();
    await client.ReplaceDocumentAsync(brooklynDoc._self, brooklynDoc);
  }

  Console.WriteLine("Updated Brooklyn document in city collection");
}

This code retrieves the Brooklyn document, updates the slogan property, and calls ReplaceDocumentAsync to persist the change back to the container.

Next comes the DeleteDocument method:

private static async Task DeleteDocument()
{
  var collUri = UriFactory.CreateDocumentCollectionUri("multipk", "byCity");

  using (var client = new DocumentClient(new Uri(CosmosEndpoint), CosmosMasterKey))
  {
    // Delete a document (set time-to-live at 60 seconds)
    var orlandoDoc = client
      .CreateDocumentQuery(collUri, "SELECT * FROM c WHERE c.city = 'Orlando'")
      .AsEnumerable()
      .FirstOrDefault();

    orlandoDoc.ttl = 60;
    await client.ReplaceDocumentAsync(orlandoDoc._self, orlandoDoc);
  }

  Console.WriteLine("Deleted Orlando document in city collection");
}

Remember that (currently) the change feed doesn’t capture deleted documents, so we’re using the TTL (time to live) technique to keep our deletions in sync. Rather than calling DeleteDocumentAsync to physically delete the Orlando document, we’re simply updating it with a ttl property set to 60 and saving it back to the container with ReplaceDocumentAsync. To the change feed, this is just another update, so our sync method will pick it up normally as you’ll see in a moment. Meanwhile, Cosmos DB will physically delete the Orlando document from the first container in 60 seconds, giving our sync method up to one minute to pick it up from the change feed and delete it from the second container.

And finally, the sync method, which is what this whole discussion is all about. Here’s the code for SyncCollections:

private static async Task SyncCollections()
{
  using (var client = new DocumentClient(new Uri(CosmosEndpoint), CosmosMasterKey))
  {
    var cityCollUri = UriFactory.CreateDocumentCollectionUri("multipk", "byCity");

    // Obtain last sync time

    var syncDoc = client
      .CreateDocumentQuery(cityCollUri, "SELECT * FROM c WHERE c.city = 'sync'")
      .AsEnumerable()
      .FirstOrDefault();

    var lastSync = (DateTime?)syncDoc.lastSync;

    // Step 1: Gather all the partition key ranges (physical partitions)

    var continuationToken = default(string);
    var partitionKeyRanges = new List<PartitionKeyRange>();
    var loop = true;

    while (loop)
    {
      var partitionKeyRange = await client
        .ReadPartitionKeyRangeFeedAsync(
          cityCollUri,
          new FeedOptions { RequestContinuation = continuationToken });

      partitionKeyRanges.AddRange(partitionKeyRange);
      continuationToken = partitionKeyRange.ResponseContinuation;
      loop = continuationToken != null;
    }

    // Step 2: Consume the change feed for each partition key range

    // (simple demo doesn't scale when continuation tokens are needed)
    foreach (var partitionKeyRange in partitionKeyRanges)
    {
      var options = new ChangeFeedOptions
      {
        PartitionKeyRangeId = partitionKeyRange.Id,
        StartFromBeginning = (lastSync == null),
        StartTime = (lastSync == null ? null : lastSync),
      };

      var query = client.CreateDocumentChangeFeedQuery(cityCollUri, options);

      while (query.HasMoreResults)
      {
        var readChangesResponse = await query.ExecuteNextAsync();
        foreach (var changedDocument in readChangesResponse)
        {
          if (changedDocument.city != "sync")
          {
            if (JsonConvert.DeserializeObject(changedDocument.ToString())["ttl"] == null)
            {
              var stateCollUri = UriFactory.CreateDocumentCollectionUri("multipk", "byState");
              await client.UpsertDocumentAsync(stateCollUri, changedDocument);
              Console.WriteLine($"Upserted document id {changedDocument.id} in byState collection");
            }
            else
            {
              var stateDocUri = UriFactory.CreateDocumentUri("multipk", "byState", changedDocument.id);
              await client.DeleteDocumentAsync(stateDocUri, new RequestOptions
              {
                PartitionKey = new PartitionKey(changedDocument.state)
              });
              Console.WriteLine($"Deleted document id {changedDocument.id} in byState collection");
            }
          }
        }
      }
    }

    // Update last sync time
    syncDoc.lastSync = DateTime.UtcNow;
    await client.ReplaceDocumentAsync(syncDoc._self, syncDoc);
  }

}

Let’s break this down. First, we grab the last sync time from the sync document in the first container. Remember, this will be null the very first time we run this method. Then, we’re ready to query the change feed, which is a two-step process.

For step 1, we need to discover all the partition key ranges in the container. A partition key range is essentially a set of partition keys. In our small demo, where we have only one document each across three distinct partition keys (cities), Cosmos DB will host all three of these documents inside a single partition key range.

Although there is conceptually only one change feed per container, there is actually one change feed for each partition key range in the container. So step 1 calls ReadPartitionKeyRangeFeedAsync to discover the partition key ranges, with a loop that utilizes a continuation token from the response so that we retrieve all of the partition key ranges into a list.

Then, in step 2, we iterate the list to consume the change feed on each partition key range. Notice the ChangeFeedOptions object that we set on each iteration, which identifies the partition key range in PartitionKeyRangeId, and then sets either StartFromBeginning or StartTime, depending on whether lastSync is null or not. If it’s null (which will be true only on the very first sync), then StartFromBeginning will be set to true and StartTime will be set to null. Otherwise, StartFromBeginning gets set to false, and StartTime gets set to the timestamp from the last sync.

After preparing the options, we call CreateDocumentChangeFeedQuery that returns an iterator. As long as the iterator’s HasMoreResults property is true, we call ExecuteNextAsync on it to fetch the next set of results from the change feed. And here, ultimately, is where we plug in our sync logic.

Each result is a changed document. We know this will always include the sync document, because we’ll be updating it after every sync. This is metadata that we don’t need copied over to the second container each time, so we filter out the sync document by testing the city property for “sync.”

For all other changed documents, it now becomes a matter of performing the appropriate create, update, or delete operation on the second container. First, we check to see if there is a ttl property on the document. Remember that this is our indication of whether this is a delete or not. If the ttl property isn’t present, then it’s either a create or an update. In either case, we handle the change by calling UpsertDocumentAsync on the second container (upsert means “update or insert”).

Otherwise, if we detect the ttl property, then we call DeleteDocumentAsync to delete the document from the second container, knowing that Cosmos DB will delete its counterpart from the first container when the ttl expires.

Let’s test it out. Start the console app and run the DB (create database) and CD (create documents) commands. Then navigate to the Data Explorer in the Azure portal to verify that the database exists with the two containers, and that the byCity container has three documents in it, plus the sync document with a lastSync value of null indicating that no sync has yet occurred:

The byState container should be empty at this point, because we haven’t run our first sync yet:

Back in the console app, run the SC command to sync the containers. This copies all three documents from the first container’s change feed over to the second container, skipping the sync document which we excluded in our code:

Upserted document id 01501a99-e8df-4c3b-9892-ed2eadb81180 in byState collection
Upserted document id fb8d41ae-3aae-4892-bfe9-8c34bc8138d2 in byState collection
Upserted document id 5f5600f3-9f34-4a4d-bdb4-28061a5ab35a in byState collection

Returning to the portal, refresh the data explorer to confirm that the second container now has the same three documents as the first, although here they are partitioned by state rather than city:

Both containers are now in sync. Refresh the first container view, and you’ll see that the lastSync property has been changed from null to a timestamp from when the previous sync ran.

Run the UD command in the console app to update the first container with a change to the slogan property of the Brooklyn, NY document. Now run SC to sync the containers again:

Upserted document id 01501a99-e8df-4c3b-9892-ed2eadb81180 in byState collection

Refresh the second container view, and you’ll see that the slogan property has now been updated there as well.

Finally, run the DD command in the console app to delete the Orlando, FL document from the first container. Remember that this doesn’t actually delete the document, but rather updates it with a ttl property set to 60 seconds. Then run SC to sync the containers again:

Deleted document id 5f5600f3-9f34-4a4d-bdb4-28061a5ab35a in byState collection

You can now confirm that the Orlando, FL document is deleted from the second container, and within a minute (upon ttl expiration), you’ll see that it gets deleted from the first container as well.

However, don’t wait longer than a minute after setting the ttl before running the sync or you will run out of time. Cosmos DB will delete the document from the first container when the ttl expires, at which point it will disappear from the change feed and you will lose your chance to delete it from the second container.

What’s Next?

It didn’t take that much effort to consume the change feed, but that’s only because we have a tiny container with just a handful of changes, and we’re manually invoking each sync. To consume the change feed at scale, much more work needs to be done. For example, the change feed on each partition key range of the container can be consumed concurrently, so we could add multithreading logic to parallelize those queries. Long change feeds can also be consumed in chunks, using continuation tokens that we could persist as a “lease,” so that new clients can resume consumption where previous clients left off. We also want the sync automated, so that we don’t need to poll manually.

Fortunately, the Change Feed Processor (CFP) library handles all these details for you. It was certainly beneficial to start by querying the change feed directly, since exploring that option first is a great way to learn how change feed works internally. However, unless you have very custom requirements, the CFP library is the way to go.

So tune in to part 2, and we’ll see how much easier it is to implement our multiple partition key solution much more robustly using the CFP library.

Demystifying the Multi-Model Capabilities in Azure Cosmos DB

When someone casually asks me, “Hey, what is Cosmos DB?” I casually respond, “Well, that’s Microsoft’s globally distributed, massively scalable, horizontally partitioned, low latency, fully indexed, multi-model NoSQL database, of course.” One of two things happens then. I’ll often get a long weird look, after which the other person politely excuses themselves and moves along. But every now and again, I’ll hear “wow, that sounds awesome, tell me more!” If you’re still reading this, then I’m guessing you’re in the latter category.

If you start to elaborate on each of the bullet points in my soundbite response, there’s a lot to discuss before you get to “multi-model NoSQL” at the tail end. Starting with “globally distributed,” Cosmos DB is – first and foremost – a database designed for modern web and mobile applications, which are (typically) global applications in nature. Simply by clicking the mouse on a map in the portal, your Cosmos DB database is instantly replicated anywhere and everywhere Microsoft hosts a data center (there are nearly 50 of them worldwide, to date). This delivers high availability and low latency to users wherever they’re located.

Cosmos DB also delivers virtually unlimited scale, both in terms of storage – via server-side horizontal partitioning, and throughput – by provisioning a prescribed number of request units (RUs) per second. This ensures low latency, and is backed by comprehensive SLAs to yield predictable database performance for your applications. And its unique “inverted indexing” scheme enables automatic indexing of every property that you store, with minimal overhead.

Whew. That’s quite a lot to digest before we even start pondering Cosmos DB’s multi-model support, mentioned there at the end of my lengthy description. In fact, it’s very deliberately placed at the end. Because regardless of which data model you choose, it actually makes no difference to Cosmos DB. The capabilities around global distribution, horizontal partitioning, provisioned throughput, and automatic indexing apply – these are durable concepts that transcend whatever data model you choose. So you get to pick and choose among any of the supported data models, without compromising any of the core features of the Cosmos DB engine.

Which segues right into the topic of this blog post. What exactly is “multi-model”? And specifically, what does it mean for a database platform like Cosmos DB to support multiple data models?

It all boils down to how you’d like to treat your data – and this is where the developer comes in. Because, while massive scale is clearly important (if not critical), developers don’t really care about such details as long as it all “just works.” And it’s Cosmos DB’s job to make sure that this all works. When it comes to actually building applications – well, that’s the developer’s job, and this is where the decision of which data model to choose comes into play.

Depending on the type of application being built, it could be more appropriate to use one data model and not another. For example, if the application focuses more on relationships between entities than the entities themselves, then a graph data model may work better than a document model. In other cases, a developer may want to migrate an existing NoSQL application to Cosmos DB; for example, an existing Mongo DB or Cassandra application. In these scenarios, the Cosmos DB data model will be pre-determined; depending on the back-end database dependency of the application being ported, the developer would choose either the Mongo DB-compatible or Cassandra-compatible data model. Such a migration would require minimal (to no) changes to the existing application code. And yet, in other “green field” situations, developers that are very opinionated about how data should be modeled are free to choose whichever data model they prefer.

Each data model has an API for developers to work with in Cosmos DB. Put another way, the developer chooses an API for their database, and that determines the data model that is used. So, let’s break it down:


Document Data Model (SQL & MongoDB APIs)

The first thing to point out is that the SQL API is, essentially, the original DocumentDB programming model from the days when Cosmos DB was called DocumentDB. This is arguably the most robust and capable of all the APIs, because it is the only one that exposes a server-side programming model that lets you build fully transactional stored procedures, triggers, and user-defined functions.

So both the SQL and MongoDB APIs give you a document data model, but the two APIs themselves are radically different. Yes, they are similar from a data modeling perspective; you store complete denormalized entities as a hierarchical key-value document model; pure JSON in the case of the SQL API, or BSON in the case of the MongoDB API (BSON is MongoDB’s special binary-encoded version of JSON that extends JSON with additional data types and multi-language support).

The critical difference between the two APIs is the programming interface itself. The SQL API uses Microsoft’s innovative variant of structured query language (SQL) that is tailored for searching across hierarchical JSON documents. It also supports the server-side programming model (for example, stored procedures), which none of the other APIs do.
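
To give a sense of that dialect, here’s a hedged example (the container and property names are hypothetical, and client is assumed to be a DocumentClient like the ones used throughout this series) of the kind of query the SQL API accepts against hierarchical JSON:

// Query nested properties and arrays inside JSON documents (hypothetical "families" container)
var familiesUri = UriFactory.CreateDocumentCollectionUri("mydb", "families");
var query = client.CreateDocumentQuery(
  familiesUri,
  "SELECT c.id, c.address.city FROM c WHERE c.address.state = 'NY' AND ARRAY_LENGTH(c.children) > 1",
  new FeedOptions { EnableCrossPartitionQuery = true });

foreach (var family in query)
{
  Console.WriteLine(family);
}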

In contrast, the MongoDB API provides wire-level support, which is a compatibility layer that understands the protocol used by the MongoDB driver for sending packets over the network. MongoDB has a built-in find method used for querying documents (unlike the SQL support found in the SQL API). So the MongoDB API appeals to existing MongoDB developers, because they now enjoy the scale-out, throughput, and global distribution capabilities of Cosmos DB without deserting the MongoDB ecosystem: the MongoDB API in Cosmos DB gives you full compatibility with existing MongoDB application code and lets you continue working with familiar MongoDB tools.

Key-Value Data Model (Table API)

You can also model your data as a key-value store, using the Table API. This API is actually the evolution of Azure Table Storage – one of the very first NoSQL databases available on Azure. In fact, all existing Azure Table Storage customers will eventually be migrated over to Cosmos DB and the Table API.

With this data model, each entity consists of a key and a value, but the value itself is a set of key-value pairs. So this is nothing like a table in a relational database, where each row has the same columns; with the Table API in Cosmos DB, each entity’s value can have a different set of key-value pairs.
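
As a rough illustration using the Microsoft.Azure.Cosmos.Table SDK (the entity and property names here are hypothetical, and table is assumed to be a CloudTable reference you’ve already created), two entities in the same table can carry entirely different sets of key-value pairs:

using System;
using Microsoft.Azure.Cosmos.Table;

// Two entities in the same table, each with a completely different set of key-value pairs
var store = new DynamicTableEntity("stores", "store-001");
store.Properties["city"] = EntityProperty.GeneratePropertyForString("Brooklyn");
store.Properties["openedOn"] = EntityProperty.GeneratePropertyForDateTimeOffset(DateTimeOffset.UtcNow);

var employee = new DynamicTableEntity("employees", "emp-042");
employee.Properties["firstName"] = EntityProperty.GeneratePropertyForString("Ada");
employee.Properties["badgeNumber"] = EntityProperty.GeneratePropertyForInt(1042);

// Both inserts succeed against the same table, despite the different "columns"
// (table is assumed to be an existing CloudTable reference)
await table.ExecuteAsync(TableOperation.Insert(store));
await table.ExecuteAsync(TableOperation.Insert(employee));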

The Table API appeals primarily to existing Azure Table Storage customers, because it emulates the Azure Table Storage API. Using this API, existing Azure Table Storage apps can be migrated quickly and easily to Cosmos DB. For a new project, though, there is little reason to consider the Table API, since the SQL API is far more capable.

So then, when would you actually choose the Table API? Again, the primary use case is to migrate an existing Azure Table Storage account over to Cosmos DB without having to change any code in your applications. Remember that Microsoft plans to do this for every customer as part of a long-term migration, but there’s no reason to wait for them if you don’t want to. You can migrate the data yourself now and immediately start enjoying the benefits of Cosmos DB as a back end, without making any changes whatsoever to your existing Azure Table Storage applications. You just change the connection string to point to Cosmos DB, and the application continues to work seamlessly against the Cosmos DB Table API.

Graph Data Model (Gremlin API)

You can also choose the Gremlin API, which gives you a graph database based on the Apache TinkerPop open-source project. Graph databases are becoming increasingly popular in the NoSQL world.

What do you put in a graph? One of two things: either a vertex or an edge. Don’t let these terms intimidate you; they’re just fancy words for entities and relationships, respectively. So a vertex is an entity, and an edge is a one-way relationship between any two vertices. And that’s it, nothing more and nothing less. These are the building blocks of any graph database. Whether you’re storing a vertex or an edge, you can attach any number of arbitrary properties to it, much like the arbitrary key-value pairs you can define for a row using the Table API, or a flat JSON document using the SQL API.

The Gremlin API provides a succinct graph traversal language that enables you to efficiently query across the many relationships that exist in a graph database. For example, in a social networking application, you could easily find a user, then look for all of that user’s posts where the location is NY, and of those, find all the relationships where some other user has commented on or liked those posts.
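
Here’s a hedged sketch of what that might look like using the open source Gremlin.Net client. The endpoint, database, graph, and property names are all assumptions for illustration (and a real Cosmos DB graph also has partition key requirements not shown here):

using System;
using System.Threading.Tasks;
using Gremlin.Net.Driver;
using Gremlin.Net.Structure.IO.GraphSON;

public static class GremlinApiDemo
{
  public static async Task RunAsync()
  {
    // Hypothetical account, database ("socialdb"), and graph ("relationships") names
    var server = new GremlinServer(
      "<account-name>.gremlin.cosmosdb.azure.com", 443, enableSsl: true,
      username: "/dbs/socialdb/colls/relationships",
      password: "<account-key>");

    using (var client = new GremlinClient(
      server, new GraphSON2Reader(), new GraphSON2Writer(), GremlinClient.GraphSON2MimeType))
    {
      // Vertices (entities) and an edge (relationship), each with arbitrary properties
      await client.SubmitAsync<dynamic>(
        "g.addV('user').property('id', 'jane').property('city', 'NY')");
      await client.SubmitAsync<dynamic>(
        "g.addV('post').property('id', 'post1').property('location', 'NY')");
      await client.SubmitAsync<dynamic>(
        "g.V('jane').addE('liked').to(g.V('post1'))");

      // Traverse the graph: find jane, then the NY posts she liked
      var results = await client.SubmitAsync<dynamic>(
        "g.V('jane').outE('liked').inV().has('location', 'NY').values('id')");

      foreach (var result in results)
      {
        Console.WriteLine(result);
      }
    }
  }
}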

Columnar Data Model (Cassandra API)

There’s a fourth option for choosing a data model, and that’s columnar, using the Cassandra API. Columnar is yet another way of modeling your data, where – in a departure from the typical way of dealing with schema-free data in the NoSQL world – you actually do define the schema of your data up-front. However, data is still stored physically in a column-oriented fashion, so it’s still OK to have sparse columns, and it has good support for aggregations. Columnar is somewhat similar to the Key-Value data model with the Table API, except that every item in the container is required to adhere to the schema defined for the container. And in that sense, columnar is really most similar to columnstore in SQL Server, except of course that it is implemented using a NoSQL architecture, so it’s distributed and partitioned to massively scale out big data.
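
As a sketch, here’s roughly what working with the Cassandra API looks like from the DataStax C# driver, with made-up keyspace, table, and column names (the endpoint format and SSL settings are assumptions; check your account’s connection details). Note how the schema is declared up front, yet individual rows can still omit columns:

using Cassandra;

public static class CassandraApiDemo
{
  public static void Run()
  {
    // Hypothetical Cosmos DB Cassandra API endpoint; port 10350 with SSL
    var cluster = Cluster.Builder()
      .AddContactPoint("<account-name>.cassandra.cosmosdb.azure.com")
      .WithPort(10350)
      .WithCredentials("<account-name>", "<account-key>")
      .WithSSL()
      .Build();

    var session = cluster.Connect();

    // Unlike the other data models, you declare the schema up front...
    session.Execute(
      "CREATE KEYSPACE IF NOT EXISTS demo WITH replication = " +
      "{'class': 'SimpleStrategy', 'replication_factor': 1}");
    session.Execute(
      "CREATE TABLE IF NOT EXISTS demo.readings (sensorid text, day date, time timestamp, " +
      "temperature double, humidity double, PRIMARY KEY ((sensorid, day), time))");

    // ...but rows can still be sparse: this insert supplies no humidity value
    session.Execute(
      "INSERT INTO demo.readings (sensorid, day, time, temperature) " +
      "VALUES ('s1', '2018-06-01', toTimestamp(now()), 72.5)");
  }
}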

Atom Record Sequence (ARS)

The fact of the matter is, these APIs merely project your data as different data models; whereas internally, your data is always stored as ARS – or Atom Record Sequence – a Microsoft creation that defines the persistence layer for key-value pairs. Now you don’t need to know anything about ARS; you don’t even need to know that it’s there. But it is there, under the covers, storing all your data as key-value pairs in a manner that’s agnostic to the data model you’ve chosen to work with.

At the end of the day, it’s all just keys and values: not just the key-value data model, but all of these data models are some form of keys and values. A JSON or BSON document is a collection of keys and values, where values can either be simple values or contain nested key-value pairs. The key-value model is clearly based on keys and values, but so are graph and columnar. The relationships you define in a graph database are expressed as annotations that are, themselves, key-value pairs, and all the columns defined for a columnar data model can be viewed as key-value pairs as well.
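
Purely as an illustration of that point, walking a JSON document with Newtonsoft.Json shows that it really is nothing more than nested key-value pairs (the document below is invented for the example):

using System;
using Newtonsoft.Json.Linq;

public static class KeysAndValuesDemo
{
  public static void Run()
  {
    // A document is just nested key-value pairs; walking a JObject makes that explicit
    var doc = JObject.Parse(@"{
      'id': 'jane',
      'city': 'NY',
      'address': { 'street': '123 Main St', 'zip': '10001' }
    }");

    foreach (var property in doc.Properties())
    {
      Console.WriteLine($"{property.Name} = {property.Value}");
    }
  }
}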

So these APIs are here to broaden your choices in how you treat your data; they have no bearing on your ability to scale your database. For example, if you want to write SQL queries, you would choose the SQL API rather than the Table API. But if you want MongoDB or Azure Table Storage compatibility, then you’d go with the MongoDB or Table API, respectively.

Switching Between Data Models

As you’ve seen, when you choose an API, you are also choosing a data model. And today (since the release of Cosmos DB in May 2017), you choose an API when you create a Cosmos DB account. This means that today, a Cosmos DB account is tied to one API, which ties it to one data model:

MM2

But again, each data model is merely a projection of the same underlying ARS format, and so eventually you will be able to create a single account and then switch freely between different APIs within that account. You’ll then be able to access one database as graph, key-value, document, or columnar, all at once, if you wish:

MM3

You can also expect to see additional APIs in the future, as Cosmos DB broadens its compatibility support for other database systems. This will enable an even wider range of developers to stick with their database of choice, while leveraging Cosmos DB as a back end for horizontal partitioning, provisioned throughput, global distribution, and automatic indexing.

Summary

Azure Cosmos DB has multiple APIs and supports multiple data models. In this blog post, we explored the multi-API, multi-model capabilities of Cosmos DB, including the document data model with either the SQL or MongoDB APIs, key-value with the Table API, graph with the Gremlin API, and columnar with the Cassandra API.

Regardless of which data model you choose, however, Cosmos DB stores everything in ARS, and merely projects different data models, based on the different APIs. This provides developers with a wide range of choices for how they’d like to model their data, without making any compromises in scale, partitioning, throughput, indexing, or global distribution.

Building Your First Windows Azure Cloud Application with Visual Studio 2008

In a previous post, I introduced the Azure Services Platform, and provided step-by-step procedures for getting started with Azure. It can take a bit of time downloading the tools and SDKs and redeeming invitation tokens, but once that’s all done, it’s remarkably simple and fast to build, debug, run, and deploy applications and services to the cloud. In this post, I’ll walk through the process for creating a simple cloud application with Visual Studio 2008.

You don’t need to sign up for anything or request any invitation tokens to walk through the steps in this post. Once you’ve installed the Windows Azure Tools for Microsoft Visual Studio July 2009 CTP and related hotfixes as described in my previous post, you’ll have the necessary templates and runtime components for creating and testing Windows Azure projects. (Of course, your Azure account and services must be set up to actually deploy and run in the cloud, which I’ll cover in a future post.)

Installing the Windows Azure Tools for Microsoft Visual Studio July 2009 CTP provides templates that simplify the process of developing a Windows Azure application. It also installs the Windows Azure SDK, which provides the cloud on your desktop. This means you can build applications on your local machine and debug them as if they were running in the cloud. It does this by providing Development Fabric and Development Storage services that simulate the cloud environment on your local machine. When you’re ready, it then takes just a few clicks to deploy to the real cloud.

Let’s dive in!

Start Visual Studio 2008, and begin a new project. Scroll to the new Cloud Service project type, select the Cloud Service template, and name the project HelloCloud.

NewCloudServiceProject

When you choose the Cloud Service template, you are creating at least two projects for your solution: the cloud service project itself, and any number of hosted role projects which Visual Studio prompts for with the New Cloud Service Project dialog. There are three types of role projects you can have, but the one we’re interested in is the ASP.NET Web Role. Add an ASP.NET Web Role to the solution from the Visual C# group and click OK.

AddWebRole

We now have two separate projects in our solution: a Cloud Service project named HelloCloud, and an ASP.NET Web Role project named WebRole1:

CloudSolution

The HelloCloud service project just holds configuration information for hosting one or more role projects in the cloud. Its Roles node in Solution Explorer presently indicates that it’s hosting one role, which is our WebRole1 ASP.NET Web Role. Additional roles can be added to the service, including ASP.NET Web Roles that host WCF services in the cloud, but we’ll cover that in a future post. Note also that it’s set as the solution’s startup project.

The project contains two XML files named ServiceDefinition.csdef and ServiceConfiguration.cscfg. Together, these two files define the roles hosted by the service. Again, for our first cloud application, they currently reflect the single ASP.NET Web Role named WebRole1:

ServiceDefinition.csdef

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="HelloCloud" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1" enableNativeCodeExecution="false">
    <InputEndpoints>
      <!-- Must use port 80 for http and port 443 for https when running in the cloud -->
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
    <ConfigurationSettings />
  </WebRole>
</ServiceDefinition>

ServiceConfiguration.cscfg

<?xml version="1.0"?>
<ServiceConfiguration serviceName="HelloCloud" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <Instances count="1" />
    <ConfigurationSettings />
  </Role>
</ServiceConfiguration>

The second project, WebRole1, is nothing more than a conventional ASP.NET application that holds a reference to the Azure runtime assembly Microsoft.ServiceHosting.ServiceRuntime. From your perspective as an ASP.NET developer, an ASP.NET Web Role is an ASP.NET application, but one that can be hosted in the cloud. You can add any Web components to it that you would typically include in a Web application, including HTML pages, ASPX pages, ASMX or WCF services, images, media, etc.
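
Just to make that concrete, here’s a hypothetical Default.aspx code-behind for WebRole1. It’s a sketch based on the CTP-era runtime API (the RoleManager class in Microsoft.ServiceHosting.ServiceRuntime), which, like everything in the CTP, is subject to change; everything else is plain ASP.NET:

using System;
using Microsoft.ServiceHosting.ServiceRuntime;  // CTP-era Azure runtime assembly

namespace WebRole1
{
  public partial class _Default : System.Web.UI.Page
  {
    protected void Page_Load(object sender, EventArgs e)
    {
      // Ordinary ASP.NET code; the only cloud-specific touch is an optional call
      // into the Azure runtime, which logs through the fabric when hosted there
      if (RoleManager.IsRoleManagerRunning)
      {
        RoleManager.WriteToLog("Information", "Default.aspx was served from the fabric");
      }
    }
  }
}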

For our exercise, we’ll just set the title text and add some HTML content in the Default.aspx page created by Visual Studio for the WebRole1 project.

<html xmlns="http://www.w3.org/1999/xhtml">

<head runat="server">
  <title>Hello Windows Azure</title>
</head>
<body>
  <form id="form1" runat="server">
    <div>
      <h1>Hello From The Cloud!</h1>
    </div>
  </form>
</body>
</html>

We’re ready to debug/run our application, but unlike debugging a conventional ASP.NET Web application:

  • The ASP.NET Web Role project is not the startup project; the Cloud Service project is
  • The ASP.NET Web Role project won’t run on the Development Server (aka Cassini) or IIS

So debugging cloud services locally means starting the Cloud Service project, which in turn will start all the role projects that the service project hosts. And instead of Cassini or IIS, the ASP.NET Web Role projects will be hosted by two special services that simulate the cloud on your local machine: Development Fabric and Development Storage. The Development Fabric service provides the Azure computational services used in the cloud, and the Development Storage service provides the Azure storage services used in the cloud.

There are still a few things you need to ensure before you hit F5:

  • You must have started Visual Studio as an administrator. If you haven’t, you’ll get an error message complaining that “The Development Fabric must be run elevated.” You’ll need to restart Visual Studio as an administrator and try again.
  • SQL Server Express Edition (2005 or 2008) must be running as the .\SQLEXPRESS instance, your Windows account must have a login in .\SQLEXPRESS, and that login must be a member of the sysadmin role. If SQL Express isn’t configured properly, you’ll get a permissions error.

Enough talk! Let’s walk through a few build-and-run cycles to get a sense of how these two services work. Go ahead and hit F5 and give it a run.

If this is the very first time you are building a cloud service, Visual Studio will prompt you to initialize the Development Storage service (this won’t happen again for future builds). Click Yes, and wait a few moments while Visual Studio sets up the SQL Express database.

Although it uses SQL Server Express Edition as its backing store, make sure you don’t confuse Development Storage with SQL Azure, which offers a full SQL Server relational database environment in the cloud. Rather, Development Storage uses the local SQL Express database to persist table (dictionary), BLOB (file system), and queue storage, just as the Windows Azure Storage Services do in the real cloud.

Once the build is complete, Internet Explorer should launch and display our Hello Cloud page.

HelloFromTheCloud

While this may seem awfully anti-climactic, realize that it’s supposed to be. The whole idea is that the development experience for building cloud-based ASP.NET Web Role projects is no different than it is for building conventional on-premises ASP.NET projects. Our Hello Cloud application is actually running on the Development Fabric service, which emulates the real cloud fabric provided by Azure on your local machine.

In the tray area, the Development Fabric service appears as a gears icon. Click on the gears icon to display the context menu:

DevelopmentFabricTray

Click Show Development Fabric UI to display the service’s user interface. In the Service Deployments treeview on the left, drill down to the HelloCloud service. Beneath it, you’ll see the WebRole1 project is running. Expand the WebRole1 project to see the number of fabric instances that are running:

DevelopmentFabric1Instance

At present, and by default, only one instance is running. But you can scale out to increase the capacity of your application simply by changing one parameter in the ServiceConfiguration.cscfg file.

Close the browser and open ServiceConfiguration.cscfg in the HelloCloud service project. Change the value of the count attribute in the Instances element from 1 to 4:

<Instances count="4" />

Now hit F5 again and take another look at the Development Fabric UI. This time, it shows 4 instances hosting WebRole1:

DevelopmentFabric4Instances

As you can see, it’s easy to instantly increase the capacity of our applications and services. The experience would be the same in the cloud.

Congratulations! You’ve just built your first Windows Azure application. It may not do much, but it clearly demonstrates the transparency of the Azure runtime environment. From your perspective, it’s really no different than building conventional ASP.NET applications. The Visual Studio debugger attaches to the process being hosted by the Development Fabric service, giving you the same ability to set breakpoints, single-step, etc., that you are used to, so that you can be just as productive building cloud applications.

You won’t get full satisfaction, of course, until you deploy your application to the real cloud. To do that, get your Windows Azure account and invitation tokens set up (as described in my previous post), and stay tuned for a future post that walks you through the steps for Windows Azure deployment.

SQL Azure CTP1 Released

Get Introduced to The Cloud

Read my previous post for a .NET developer’s introduction to the Azure Services Platform, and the detailed steps to get up and running quickly with Azure.

Get Your SQL Azure Token

If you’ve been waiting for a SQL Azure token to test-drive Microsoft’s latest cloud-based incarnation of SQL Server, your wait will soon be over. Just this morning, Microsoft announced the release of SQL Azure CTP1, and over the next several weeks they should be sending invitation tokens to everyone that requested them (if you’ve not yet requested a SQL Azure token, go to http://msdn.microsoft.com/en-us/sqlserver/dataservices/default.aspx and click Register for the CTP).

Get the Azure Training Kit

Microsoft has also just released the August update to the Azure training kit that has a lot of new SQL Azure content in it. Be sure to download it here: http://www.microsoft.com/downloads/details.aspx?FamilyID=413E88F8-5966-4A83-B309-53B7B77EDF78.

Read the Documentation

The SQL Azure documentation, which was posted on MSDN on August 4, can be found here: http://msdn.microsoft.com/en-us/library/ee336279.aspx.

Get Your Head In The Clouds: Introducing the Azure Services Platform

Azure is coming, and it’s coming soon. If you’ve not yet gotten familiar with Azure (that is, the overall Azure Services Platform), you owe it to yourself to start learning about it now, especially since it remains free until RTM in November. This post, and many that will follow, will help you get acquainted with Microsoft’s new cloud computing platform so that you can leverage your hard-earned .NET programming skills quickly and effectively as your applications and services begin moving to the cloud (and they will).

There are many pieces to Azure, and it would be overwhelming to dive into them all at once. So this post will just give the high-level overview, and future posts will target the individual components one at a time (so that you can chew and swallow like a normal person).

Cloud Computing: The Concept

Provisioning servers on your own is difficult. You need to first acquire and physically install the hardware. Then you need to get the necessary software license(s), install the OS, and deploy and configure your application. For medium-to-enterprise scale applications, you’ll also need to implement some form of load balancing and redundancy (mirroring, clustering, etc.) to ensure acceptable performance levels and continuous uptime in the event of unexpected hardware, software, or network failures. You’ll have to come up with a backup strategy and attend to it religiously as part of an overall disaster recovery plan that you’ll also need to establish. And once all of that is set up, you’ll need to maintain everything throughout the lifetime of your application. It’s no exaggeration to assert that moving to the cloud eliminates all of these burdens.

In short, the idea of applications and services running in “the cloud” means you’re dealing with intangible hardware resources. In our context, intangible translates to a maintenance-free runtime infrastructure. You sign up with a cloud hosting company for access, pay them for how much power your applications need (RAM, CPU, storage, scale-out load balancing, etc.), and let them worry about the rest.

Azure Virtualization and Fabric

Azure is Microsoft’s cloud computing offering. With Azure, your applications, services, and data reside in the Azure cloud. The Azure cloud is backed by large, geographically dispersed Microsoft data centers equipped with powerful servers, massive storage capacities, and very high redundancy to ensure continuous uptime.

However, this infrastructure is much more than just a mere Web hosting facility. Your cloud-based applications and services don’t actually run directly on these server machines. Instead, sophisticated virtualization technology manufactures a “fabric” that runs on top of all this physical hardware. Your “code in the cloud,” in turn, runs on that fabric. So scaling out during peak usage periods becomes a simple matter of changing a configuration setting that increases the number of instances running on the fabric to meet the higher demand. Similarly, when the busy season is over, it’s the same simple change to drop the instance count and scale back down. Azure manages the scaling by dynamically granting more or less hardware processing power to the fabric running the virtualized runtime environment. The process is virtually instantaneous.

Now consider the same scenario with conventional infrastructure. You’d need to provision servers, bring them online as participants in a load-balanced farm, and then take them offline to be decommissioned later when the extra capacity is no longer required. That requires a great deal of work and time (either for you directly, or for your hosting company) compared to tweaking some configuration with just a few mouse clicks.

The Azure Services Platform

The Azure Services Platform is the umbrella term for the comprehensive set of cloud-based hosting services and developer tools provided by Microsoft. It is still presently in beta, and is scheduled to RTM this November at the Microsoft PDC in Los Angeles. Even as it launches, Microsoft is busy expanding the platform with additional services for future releases.

Warning: During the past beta release cycles of Azure, there have been many confusing product brand name changes. Names used in this post (and any future posts between now and RTM) are based on the Azure July 2009 Community Technology Preview (CTP), and still remain subject to change.

At present, Azure is composed of the following components and services, each of which I’ll cover individually in future posts.

  • Windows Azure
    • Deploy, host, and manage applications and services in the cloud
    • Storage Services provides persisted storage for table (dictionary-style), BLOB, and queue data
  • SQL Azure
    • A SQL Server relational database in the cloud
    • With a few exceptions, supports the full SQL Server 2008 relational database feature set
  • Microsoft .NET Services
    • Service Bus enables easy bi-directional communication through firewall and NAT barriers via new WCF relay bindings
    • Access Control provides a claims-based security model that eliminates virtually all the gnarly security plumbing code typically used in business applications
  • Live Services
    • A set of user-centric services focused primarily on social applications and experiences
    • Mesh Services, Identity Services, Directory Services, User-Data Storage Services, Communication and Presence Services, Search Services, Geospatial Services

Getting Started with Azure

Ready to roll up your sleeves and dive in? Here’s a quick checklist for you to start building .NET applications for the Azure cloud using Visual Studio:

  • Ensure that your development environment meets the pre-requisites
    • Vista or Windows Server 2008 with the latest service packs, or Windows 7 (sorry, Windows XP not supported)
    • Visual Studio 2008 (or Visual Web Developer Express Edition) SP1
    • .NET Framework 3.5 SP1
    • SQL Server 2005/2008 Express Edition (workaround available for using non-Express editions)
  • Install Windows Azure Tools for Microsoft Visual Studio July 2009 CTP and related hotfixes
    • Download from http://msdn.microsoft.com/en-us/vstudio/cc972640.aspx
    • Installing the tools also installs the Windows Azure SDK
    • Provides Development Fabric and Development Storage services to simulate the cloud on your local development machine
    • Provides Visual Studio templates for quickly creating cloud-based ASP.NET applications and WCF services
  • Sign up for an Azure portal account using your Windows Live ID

Azure cloud services are free during the CTP, but Windows Azure and SQL Azure require invitation tokens that you need to request before you can use them. Also be aware that there could be a waiting period, if invitation requests during the CTP are in high demand, and that PDC attendees get higher priority in the queue. As of the July 2009 CTP, invitation tokens are no longer required for .NET Services or Live Services, and you can proceed directly to the Azure portal at http://windows.azure.com to start configuring those services.

The process to request invitation tokens for Windows Azure and SQL Azure can be a little confusing with the current CTP, so I’ve prepared the following step-by-step instructions for you:

  • To request an invitation token for Windows Azure:
    • Go to http://www.microsoft.com/azure/register.mspx and click Register for Azure Services
    • A new browser window will launch to the Microsoft Connect Web site
    • If you’re not currently logged in with your Windows Live ID, you’ll be prompted to do so now
    • You’ll then be taken through a short wizard-like registration process and asked to provide some profile information
    • You’ll then arrive at the Applying to the Azure Services Invitation Program page, which you need to complete and submit
    • Finally, you should receive a message that invitation tokens for both Windows Azure and SQL Azure will be sent to your Connect email account. Note that this is incorrect, and you will only be sent a Windows Azure token.
  • To request an invitation token for SQL Azure, you need to join the mailing list on the SQL Azure Dev Center:
    • Go to http://msdn.microsoft.com/en-us/sqlserver/dataservices/default.aspx
    • Click Register for the CTP
    • If you’re not currently logged in with your Windows Live ID, you’ll be prompted to do so now
    • You’ll then arrive at the mailing list signup page, which you need to complete and submit
    • Finally, you should receive a message that you will be notified when your SQL Azure access becomes available
    • Note: SQL Azure tokens are limited; you may have to be patient to receive one during the CTP

Once you receive your invitation tokens, you can log in to the Azure portal at http://windows.azure.com and redeem them.

You don’t need to wait for your Windows Azure invitation code to begin building cloud applications and services with Visual Studio. Once you’ve installed the Windows Azure Tools for Microsoft Visual Studio July 2009 CTP and related hotfixes as described earlier, you’ll have the necessary templates and runtime components for creating and testing Windows Azure projects. Using the Development Fabric service that emulates the cloud on your desktop, you can run and debug Windows Azure projects locally (of course, you won’t be able to deploy them to the real cloud until you receive and redeem your Windows Azure invitation token). It’s easy to do, and in an upcoming post, I’ll show you how.
