Introducing Azure DocumentDB

On April 8, 2015, Microsoft officially launched Azure DocumentDB. At first glance, it can be characterized as a typical NoSQL document database: a massively scalable database that works with schema-free JSON documents. Beyond this, however, DocumentDB stands out with some truly unique capabilities.

SQL Queries Over Schema-Free JSON

Of course, DocumentDB works with schema-free JSON. But unlike some platforms that require you to define index paths in advance of being able to query on specific properties, DocumentDB automatically indexes every property in a document as soon as the document is added to the database. Simply put, every document is instantly queryable the moment it’s created, and you can search on any property anywhere within the document hierarchy. Furthermore, documents are queryable using SQL, or I should say, using a special flavor of SQL that anyone with SQL experience should immediately find intuitive.
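For example, a query over a hypothetical collection of customer documents might look like this (the collection alias and property names are illustrative, not part of DocumentDB itself):

SELECT c.name, c.address.city
FROM c
WHERE c.address.state = "NY"

Notice how the WHERE clause filters on a property nested inside the document hierarchy, even though no index or schema was ever defined for it.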

ACID Transactions Updating Multiple Documents

DocumentDB provides a server-side environment inside which you can write JavaScript code to update multiple documents with full transactional processing. This is an easy and powerful way to ensure data consistency across multiple documents, because DocumentDB ensures that all updates made on the server are committed together, or will roll everything back together in the event of an error.
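To give a rough idea of what this looks like, here is a minimal sketch of a server-side stored procedure (the procedure name and document shapes are hypothetical). The entire script executes within a single transaction: if anything throws, every write made by the script is rolled back.

function createOrderWithAudit(order, auditEntry) {
    var collection = getContext().getCollection();

    // First write; throwing anywhere in the script aborts the whole transaction
    var accepted = collection.createDocument(collection.getSelfLink(), order, function (err) {
        if (err) throw err;

        // Second write participates in the same transaction
        collection.createDocument(collection.getSelfLink(), auditEntry, function (err2) {
            if (err2) throw err2;
            getContext().getResponse().setBody('both documents committed');
        });
    });

    if (!accepted) throw new Error('document was not accepted');
}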

Tunable Performance

There are many ways to tune DocumentDB for the performance needed by your application. For example, throughput can be scaled up or down instantly across three different performance tiers. And although DocumentDB indexes every property on every document, you can take control and fine-tune an indexing policy that reduces storage and processing overhead for specific documents and/or properties that never need to be indexed. And while DocumentDB supports both strong and eventual consistency, it also has two additional options in between (bounded staleness and session consistency) to give you even greater control over the tradeoff between performance and consistency.

Runs on Azure

Finally, DocumentDB is available as a fully managed, cloud-based platform as a service (PaaS) running on Azure. There's just nothing for you to install or manage. No servers, cables, operating systems, updates, or replicas to set up – Microsoft does all that work and keeps the service running. Azure guarantees availability as well as predictable performance based on the service tier that you sign up for. Within minutes, using just a browser and an Azure subscription, you can get started working with DocumentDB.

Stay tuned for upcoming posts, where I’ll dig into all of these exciting capabilities in greater detail.

Relational Databases vs. NoSQL Document Databases

In this post, we’ll take a close look at some of the differences between a traditional relational store and a NoSQL document store.

Rows vs. Documents

To begin with, a document database stores entities as documents – meaning JSON documents – and this is very different from the way relational databases store data as rows in a table.

[Figure: rows in a relational table compared with JSON documents in a document database]

Columns vs. Properties

While rows in the relational world are made up of columns, documents contain properties.

[Figure: columns of a relational row compared with properties of a JSON document]

Schema vs. Schema-Free

In the relational world, every table has a schema that defines the columns and data types that every row in the table must conform to. In contrast, a document database has no defined schema, and every document can be structured differently.

[Figure: a relational table schema defining column names, data types, and nullability]

In this example, there are four columns defined for a table, and it would be necessary to alter the table schema if we wanted a fifth column, or if we wanted to change the maximum length of the name column, or if we wanted to allow nulls in date-of-birth – you get the idea. But because document databases are schema-free, they aren't subject to these constraints. This makes them ideal when you have a rapidly evolving schema, as is usually the case in software development today.

[Figure: three documents whose name property varies in name and shape]

Here, the first document has several properties, one of which is called name. Yet this doesn't prevent us from adding a second document that uses the property fullName, rather than name, because there are no rules on schema. Similarly, we can add a third document that stores fullName as an object with distinct first and last name properties embedded within it. The database is more than happy to accept all these variations. Of course, this is just an example, and it would normally make no sense to mix up property names and shapes like this just because you can. In this case, we can speculate that at some point it was determined that fullName was a better choice than name, and that at some later point the fullName property was enhanced to distinguish between first and last names. This is a big selling point of document databases, because they let you evolve your schema as your needs dictate, whereas keeping up with an evolving schema in a relational database is far more disruptive.

Of course, when you are supporting varying schemas like this, it falls on you to present a unified view of the data. In this scenario, you could easily devise a query that returns the full name: first, by testing for the presence of a property called name (falling back to fullName if there is no name property), and then by testing whether fullName is an object. If it is, you'd concatenate the embedded first and last name properties; otherwise, you'd just return the fullName property.
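Here is one way that query might look in DocumentDB's SQL dialect – just a sketch, using the property names from the example above along with the IS_DEFINED, IS_OBJECT, and CONCAT built-in functions and the conditional (?:) operator:

SELECT
    (IS_DEFINED(c.name)
        ? c.name
        : (IS_OBJECT(c.fullName)
            ? CONCAT(c.fullName.first, " ", c.fullName.last)
            : c.fullName)) AS displayName
FROM c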

Normalized vs. Denormalized

Another significant difference between relational databases and document databases has to do with data normalization. In the relational world, we strive to normalize data as much as possible. This means avoiding data duplication, and maintaining separate tables for each related entity that can be joined together to produce a complete view of the data.

[Figure: a normalized User table and Holdings table related on User ID]

In this example, we have a User table with a row for John, and we have a separate Holdings table with multiple rows for all of John's stock holdings. The two tables are related on User ID (which is 1 for John), so that all individual users are stored in the User table and each user's related holdings are stored in the Holdings table. Not only must the tables be joined when retrieving data for an application, but the application must know how to take a single modified object containing a user and their holdings and persist the changes by updating the User and Holdings tables separately. This takes significant effort, even if you leverage assistance from an Object-Relational Mapping (ORM) framework like Entity Framework.

In a document database, we typically do the opposite.

[Figure: a single denormalized document containing the user and their holdings]

Here we see a single document that contains both the user and their holdings. Suddenly, there is no need to join when running a query, nor is there any need to shred the object into different places when saving changes – in fact, there is no need for an ORM layer at all. This, together with the schema-free model, eliminates a tremendous amount of friction that you normally have with relational databases when designing schemas, joining tables, and maintaining an ORM layer on top of your database.
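Such a document might look something like this (the values are purely illustrative):

{
    "id": "1",
    "name": "John",
    "holdings": [
        { "symbol": "MSFT", "shares": 100 },
        { "symbol": "AAPL", "shares": 50 }
    ]
}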

Of course, this isn’t a silver bullet that works well in every scenario. Here we see a one-to-many relationship implemented in a JSON document using an array of embedded objects. This is fine when you expect a reasonable maximum number of child objects, but what about so-called “unbounded” data? For example, imagine a blog post document with an ever-growing number of related comments. Because every document database imposes a maximum document size, it would not be possible to store a single blog post with all of its comments inside one document. This is an example of where you actually might implement a model similar to a relational database, with each blog post in its own document and each comment in a separate related document that is tied logically to the blog post ID. You can certainly do this, but it will fall on you and your application to store and retrieve blog posts and comments separately, because the document database won’t join them for you. You’re also free to model things however you’d like, so you could create blog post documents that include the first 100 comments, and then related documents that each contain the next 100 comments. There’s no one approach to data modeling that works in every scenario, so it’s up to you to experiment with what works best.

Data duplication is another aspect of this. With document databases, it’s not uncommon to duplicate data across multiple documents so that each document has the data it needs without having to locate other documents. Of course, if the duplicated data changes frequently, then you face another question: is it better to update multiple documents whenever a single piece of duplicated data changes, or to extract the duplicated data out of each document and maintain it in a single shared document? Once again, how you model your data is entirely up to you. But at the end of the day, it’s important to understand that document databases work best when dealing with rich hierarchical documents that are entirely, or almost entirely, self-contained. Yes, you can model related documents when you need to, but if you find yourself modeling a database that contains many related documents, and/or your documents have mostly flat structures, then that is a clear sign that a document database may not be the right tool for the job.

Strong Consistency vs. Eventual Consistency

Relational databases also enforce strong consistency on write operations. After updating the balance of a bank account, for example, we must be guaranteed that queries immediately show the updated balance – it’s entirely unacceptable to continue showing the old balance any time after it’s been changed.

One reason that document databases perform as well as they do is because when you write to the database, the changes are propagated to multiple replicas in the background. Then, read requests can be satisfied by any replica, making it possible to satisfy a high volume of client queries just by maintaining enough replicas. However, because not all replicas may be up-to-date at the point in time that a client issues a query, it is possible to receive inconsistent query results. Eventually, of course, all the replicas will be updated, and queries will return consistent results, which is why this behavior is called “eventual consistency.”

Simple vs. Complex

As I started explaining earlier, NoSQL databases are simple by design, which is the primary reason they are able to achieve scale and performance that surpass relational databases. This is also the reason they do not, and cannot, replace relational databases, which are often better at handling more complex requirements that don’t necessarily need to achieve massive scale.

Scale-Up vs. Scale-Out

And finally, speaking of scale, relational databases simply don’t scale out easily. You can do it, but it’s hard and expensive, whereas scale-out is a fundamental design goal with document databases.

What is a NoSQL Document Database?

In this blog post, we’ll discuss the major concepts around NoSQL document databases. In future posts, I’ll introduce Azure DocumentDB, Microsoft’s newest NoSQL document database, and discuss the major differences between relational databases and document databases.

What is NoSQL?

The best way to start is to clear up some terminology, where the industry has unfortunately adopted a couple of terms that are arguably misleading. SQL means Structured Query Language – meaning it’s just a language; a way of expressing a request to “go find something from someplace, where some condition is true, and give me back the result in the shape that I want it.” And so SQL per se doesn’t really define a specific technology. Again, it’s just a language, a dialect; but because SQL is the traditional query language of relational databases, the two terms are often equated. So it’s really more helpful to think of a NoSQL database as a “non-relational” database – however you go about querying it, it’s a database that abandons many of the concepts of relational databases. And so we wound up with the term “NoSQL,” and by now many NoSQL databases have emerged, with Azure DocumentDB being the latest NoSQL contender from Microsoft. But unlike most other NoSQL databases, the primary way to query DocumentDB is – oddly enough – by using SQL, or at least, a version of SQL that’s been adapted to the non-relational world of NoSQL databases. I’ll be talking a lot about DocumentDB in upcoming posts.

OK, so NoSQL really means non-relational. Now that’s a really broad definition – saying a database isn’t relational just says it could be anything else. And that’s true, which is why there are in fact different types of NoSQL databases. These include key-value stores such as Azure Table Storage, column-based stores like Cassandra, graph databases like Neo4j, and document databases like MongoDB and Azure DocumentDB. While there are key differences between these types, all NoSQL database platforms share several common characteristics.

Huge amounts of data

First, they are designed to scale out, not just up. While relational databases scale up easily enough – simply by adding more hardware – it’s much more difficult to scale them out horizontally, that is, to spread relational data across multiple partitions once you hit the ceiling on CPU, disk, and memory and can no longer scale up. In contrast, NoSQL databases are designed to scale out – virtually without limit – making it much easier to achieve internet scale for modern applications.

Schema-free data

Another common characteristic among NoSQL databases is the concept of schema-free data. That is, unlike relational databases, a NoSQL database does not enforce any schema. Every item in the database is free to store information that may or may not be structured the same as other items – even other items of the same type. This means that you can simply introduce new elements in your data as they become pertinent, without requiring any design changes in the database, such as adding or dropping columns, or changing data types. Similarly, you can stop including elements in new data as they start becoming irrelevant, again, without maintaining a schema in the database.
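For instance, two items of the same logical type are free to carry different elements – a purely illustrative example:

{ "id": "101", "name": "Widget", "price": 12.50 }
{ "id": "102", "name": "Gadget", "price": 15.00, "discontinued": false }

The second item simply introduces a discontinued element; nothing in the database had to change to allow it.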

Simplicity Rules

By design, NoSQL databases are simple. They are not nearly as robust as traditional relational database platforms like SQL Server and Oracle, and there are two reasons for this. First, NoSQL databases don’t try to provide the complete functionality available in a relational database; they are specifically designed to be simpler than relational databases, which is how they are able to outperform relational databases at large scale. In other cases, NoSQL databases lack features simply because they are much younger than their relational counterparts, which reached maturity long ago. So while you can expect to see improvements in areas of missing functionality as NoSQL databases evolve, these platforms won’t attempt to replace the full feature set available in relational databases. Despite the negative connotation in the name “NoSQL,” eliminating relational databases in favor of NoSQL is certainly not a stated goal. But at the same time, a scalable, schema-free, and easy-to-use database platform is rather compelling, and gives us more choices for our applications than we had before. Relational databases are definitely here to stay, but they no longer enjoy the monopoly they once had as the back-end platform of choice for new applications, now that a variety of NoSQL alternatives are available.

Document Database

I mentioned that Azure DocumentDB is a NoSQL database, and specifically a NoSQL document database. This is yet another unfortunate term, because when most people think of a document, they think of a file – a Word document, a spreadsheet, a PDF – and that is definitely not the type of document we mean when we say “document database.” In NoSQL terms, a document is more like an object graph: a complete representation of an entity and its related entities. While object graphs can be serialized and deserialized in any format, NoSQL databases typically leverage JSON (JavaScript Object Notation) as the format for storing, projecting, and transporting data. Given the pervasiveness of JSON in today’s world – particularly among web applications – it should take no one by surprise that NoSQL document databases have generally embraced JSON as their native data format. It’s a simple, lightweight format, yet expressive enough to support many different data modeling scenarios.

Integrating Document BLOB Storage with SQL Server

NoSQL platforms can support highly scalable databases with BLOB attachments (images, documents, and other files), but if you think you need to embrace a NoSQL solution in lieu of SQL Server just because you have a high volume of BLOBs in your database, then think again. Sure, if you have good reasons to go with NoSQL anyway – for example, if you want the flexibility of schema-free tables, and you can accept the compromises of eventual consistency – then NoSQL can fit the bill nicely.

But critical line-of-business applications often can’t afford the relaxed constraints of NoSQL databases, and usually require strongly typed schemas with full transactional integrity; that is, a full-fledged relational database management system (RDBMS). However, relational database platforms like SQL Server were originally designed and optimized to work primarily with structured data, not BLOBs. And so, historically, it has never been feasible to store large amounts of BLOB data directly in the database. That is, until FILESTREAM.

With FILESTREAM, Microsoft addresses the dilemma of storing BLOBs within the relational database. My new Pluralsight course, SQL Server 2012-2014 Native File Streaming, explains this innovative feature in detail, and in this blog post, I’ll discuss how FILESTREAM (and its related technologies) can be used to implement a highly-scalable BLOB storage solution that’s fully integrated with a relational SQL Server database. You’ll also find live demos on everything covered by this post in the course.

Introducing FILESTREAM

Although SQL Server was never originally intended to handle BLOBs at large scale, that changed with FILESTREAM (introduced in SQL Server 2008). Before FILESTREAM, SQL Server was forced to shove BLOBs into the standard database filegroups, which are really optimized for storing structured row data in 8 KB pages. Because BLOBs don’t fit naturally within this structure, they must be pushed into off-row storage, which bloats the structured filegroups and ultimately kills performance.

[Figure: before FILESTREAM – BLOBs force-fit into the standard database filegroups as off-row storage]

FILESTREAM changes all that. First, to be clear, FILESTREAM is not actually a data type. Rather, it’s an attribute that you apply to the varbinary(max) data type, the same data type that you would use to store BLOBs directly inside the row. But by merely appending the FILESTREAM attribute to the varbinary(max) data type, SQL Server takes a radically different approach to physical BLOB storage. Rather than inundating the standard database filegroups with BLOBs, SQL Server stores BLOB content as files in the file system – where they belong; the file system being a native environment optimized for storing and streaming unstructured binary content. At the same time, it establishes and maintains reference pointers between the rows in the standard filegroups and the files in the file system that are tied to varbinary(max) columns in those rows. All this magic occurs behind the scenes, and is totally transparent to any existing code that works with ordinary varbinary(max) columns.
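For example, the PhotoAlbum table used later in this post might be defined along these lines. This is just a sketch – it assumes FILESTREAM has been enabled on the instance and a FILESTREAM filegroup has been added to the database, and the RowId column is included here because every table with FILESTREAM data needs a unique ROWGUIDCOL:

CREATE TABLE PhotoAlbum(
    PhotoId int PRIMARY KEY,
    RowId uniqueidentifier ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWSEQUENTIALID(),
    PhotoDescription varchar(200),
    Photo varbinary(max) FILESTREAM)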

[Figure: with FILESTREAM – row data in the standard filegroups, with BLOB content stored as files in the NTFS file system and linked back to the rows]

In this manner, the BLOB data is physically stored separately from structured row data, but it is logically an integral part of the database. So for example, backing up the database includes the BLOB data, with the option of performing a partial backup that excludes the FILESTREAM filegroup when you want to create smaller backups that don’t include BLOB data.

Furthermore, this solution provides full transactional consistency – because FILESTREAM integrates with the NTFS file system, and NTFS is a transactional file system. So when you start a database transaction and insert a row, and that row includes BLOB data stored in a varbinary(max) FILESTREAM column, then SQL Server automatically initiates an NTFS file system transaction over that BLOB data. Then, the fate of the file system transaction hinges on the fate of the database transaction. If and when the database transaction commits, then SQL Server will also commit the NTFS file system transaction; similarly, rolling back the database transaction automatically rolls back the NTFS transaction.
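As a quick sketch of that behavior (again using the hypothetical PhotoAlbum table from above):

BEGIN TRANSACTION

-- SQL Server opens an NTFS file system transaction over the new BLOB file
INSERT INTO PhotoAlbum(PhotoId, PhotoDescription, Photo)
  VALUES(4, 'Lake', 0x)

-- Rolling back the database transaction also rolls back the NTFS transaction,
-- so the file created for the BLOB is discarded
ROLLBACK TRANSACTION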

Accessing BLOBs with T-SQL

With FILESTREAM, you can treat BLOBs as ordinary varbinary(max) columns in T-SQL. For example, you can use the OPENROWSET function with the BULK provider to import an external file into a varbinary(max) column, and if that column is decorated with the FILESTREAM attribute, then SQL Server will automatically store a copy of that file as a BLOB in the NTFS file system behind the scenes, rather than force-fitting it into the standard database filegroups.

For example:

INSERT INTO PhotoAlbum(PhotoId, PhotoDescription, Photo)
  VALUES(
    3,
    'Mountains',
    (SELECT BulkColumn FROM OPENROWSET(BULK 'C:\Demo\Ascent.jpg', SINGLE_BLOB) AS x))

To retrieve BLOBs, it’s a simple SELECT:

SELECT * FROM PhotoAlbum

[Figure: the SELECT resultset, with the Photo column returning the full BLOB content for each row]

Using SqlFileStream and the Streaming API

Although FILESTREAM delivers scalable storage by leveraging the NTFS file system behind the scenes, BLOB access needs to scale as well. It’s great that you can enjoy total transparency by just using T-SQL access, but stop for a moment and think about what SQL Server needs to do when retrieving BLOBs with T-SQL. In order to serve up the Photo column in the resultset shown above for the SELECT statement, for example, SQL Server needed to read the entire contents of each BLOB from the NTFS file system that it’s managing internally, and this can easily and suddenly place a great deal of memory pressure on the server.

To address this concern, FILESTREAM exposes the streaming API. When you use this API, SQL Server still manages the file system behind the scenes, only it shifts the burden and memory requirements of actually reading and writing BLOBs in the file system off of itself and onto the client application. This keeps the memory requirements on SQL Server very lean, regardless of how large your BLOBs may be.

The SqlFileStream class is a managed code wrapper around the streaming API, which makes it extremely easy to use from .NET. In C# or VB .NET, you start a database transaction and issue an INSERT statement, but you don’t actually include the BLOB content with the INSERT statement. Instead, SQL Server passes back the information you need to create a SqlFileStream object – a logical path name for the BLOB and a token for the current file system transaction. This object inherits from the base System.IO.Stream class, meaning that it supports all the standard read/write methods of the standard .NET stream classes, including memory streams, HTTP request/response streams, and local file streams. So it’s easy to stream your BLOBs in and out, using buffers in memory allocated to your application – not to SQL Server. Then you just commit the database transaction, and SQL Server automatically commits the NTFS file system transaction at the same time.
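On the T-SQL side, obtaining that information looks roughly like this (a sketch using the earlier PhotoAlbum example; the client opens a SqlFileStream over the returned path and transaction context, streams the BLOB, and then the transaction is committed):

BEGIN TRANSACTION

SELECT Photo.PathName() AS FilePath,
       GET_FILESTREAM_TRANSACTION_CONTEXT() AS TxContext
  FROM PhotoAlbum
 WHERE PhotoId = 3

-- ...client streams the BLOB via SqlFileStream here...

COMMIT TRANSACTION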

In my course, I show you SqlFileStream up close, and demonstrate how to program against the streaming API from a thick client application, a client/server (Web) application, and in an n-tier (WCF) scenario as well.

Introducing FileTable

The FILESTREAM story only gets better with FileTable, added in SQL Server 2012. While FILESTREAM revolutionizes BLOB storage in SQL Server, it’s only accessible to developers and administrators. What about ordinary users? They’re certainly not going to write T-SQL or streaming API code to access BLOBs. And there’s also no way for ordinary client applications to access FILESTREAM data.

The solution is FileTable, which combines FILESTREAM with the hierarchyid data type to furnish an “emulated” file system; that is, a file system that users and applications can work with, but which is really just a table (a FileTable) in a SQL Server database. A FileTable is just a regular table except that it has a fixed schema; specifically, it has these pre-determined columns for the metadata of the emulated file system:

[Figure: the fixed FileTable schema – pre-determined columns for file and folder metadata]

Every row in a FileTable represents either a file or a folder (depending on the is_directory column), and the hierarchyid value in the path_locator column is what implements the folder structure of the emulated file system. The hierarchyid data type has methods that you can use to query and manipulate the structure; for example, you can programmatically move entire subtrees from one parent to another.
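For instance, assuming a FileTable named Documents (a hypothetical name), a query like this lists each item with its depth in the folder hierarchy and its relative path within the emulated file system:

SELECT name,
       is_directory,
       path_locator.GetLevel() AS FolderDepth,
       file_stream.GetFileNamespacePath() AS RelativePath
  FROM dbo.Documents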

For rows that represent files, the file_stream column holds the actual BLOB, and this is a varbinary(max) FILESTREAM column. So behind the scenes, it is stored in the NTFS file system just like a varbinary(max) FILESTREAM column of an ordinary table (a non-FileTable) would be.

And so, in addition to being able to use T-SQL or the streaming API with a FileTable, the emulated file system that a FileTable represents also gets exposed to users and client applications via a Windows file share. As a result, changes made to the table in the database are reflected in the emulated file system, and conversely, changes made to the emulated file system by users or client applications are reflected automatically in the database, which ultimately pushes down into the physical NTFS file system being used for BLOB storage behind the scenes.

[Figure: a FileTable exposed to users and applications as a Windows file share, backed by FILESTREAM storage in the database]

Summary

This blog post explained FILESTREAM, and its related feature, FileTable. We first saw how FILESTREAM supports scalable BLOB storage using the NTFS file system behind the scenes, and provides transparent T-SQL access using the varbinary(max) data type. We also learned about the streaming API and SqlFileStream, which shifts the burden and memory requirements for streaming BLOBs off of SQL Server and onto client applications, providing scalable BLOB access. And we finally saw how FileTable combines FILESTREAM with the hierarchyid data type to furnish an emulated file system on the front end that users and client applications can interact with, but which in actuality is just a table in the database.

And so, with FILESTREAM, line-of-business applications can embrace scalable BLOB integration without being forced to consider a NoSQL alternative to SQL Server.

Bidirectional Communication Between Directives and Controllers in Angular

In Angular, it’s very easy for a directive to call into a controller. Working in the other direction – that is, calling a directive function from the controller – is not quite as intuitive. In this blog post, I’ll show you an easy way for your controllers to call functions defined in your directives in your Angular applications.

Like I said, calling a controller function from a directive is straightforward. You simply define a “callback” function in the controller and pass it to the directive (using the ‘&’ symbol in the isolated scope definition). It’s then trivial for the directive to invoke the function, which calls into the controller. To put things in .NET terms, this is akin to a user control (the directive) raising an event, which the user control’s host (the controller) can handle.

For example, you may want your directive to call your controller when the user clicks a button defined inside the directive’s template:

myPage.html

<div ng-controller="myController as vm">
    <my-directive on-button-click="vm.directiveButtonClicked()"></my-directive>
</div>

myController.js

function myController($scope) {
    var vm = this;
    vm.directiveButtonClicked = function () {
        // Controller reacting to call initiated by directive
        alert('Button was clicked in directive');
    };
}

myDirectiveTemplate.html

<button ng-click="buttonClicked()">Click Me</button>

myDirective.js

function myDirective() {
    return {
        restrict: 'E',
        templateUrl: '/Templates/myDirectiveTemplate.html',
        scope: {
            onButtonClick: '&'
        },
        link: link
    };

    function link(scope, element, attrs, controller) {
        scope.buttonClicked = function () {
            // Button was clicked in the directive
            // Invoke callback function on the controller
            scope.onButtonClick();
        };
    }
}

Unfortunately, there is no clearly established pattern in Angular for communicating in the opposite direction (calling functions of the directive from the controller). Again, in .NET terms, it’s easy for a user control’s host (the controller) to invoke public or internal methods defined by the user control (the directive). But there is no native way to achieve the same thing in Angular, which is certainly curious, because this is not an uncommon requirement.

Several solutions to this problem can be found on the web, but most of them carry caveats and/or add unwanted complexity. Some work by using $watch, but $watch adds work to every digest cycle and should generally be avoided when possible. Others work, but not with isolated scope, which means you won’t achieve isolation across multiple instances of the directive.

Fret not! I’m going to show you a simple, lightweight technique that will enable your controllers to call functions on your directives, without resorting to $watch, and with full support for isolated scope.

Here’s how it works:

  1. The controller defines a view-model object named “accessor” with no members
  2. The page passes this object to the directive, via an attribute also named “accessor”
  3. The directive receives the accessor, and attaches a function to it
  4. The controller is now able to call the directive function via the accessor

Let’s demonstrate with an example. The directive template has two text boxes for input, but no button. Instead, there is a button on the page that is wired to a handler on the page’s controller. When the user clicks the button, the controller calls the directive. In response, the directive prepares an object with data entered by the user in the text boxes and returns it to the controller.

myPage.html

<div ng-controller="myController as vm">
    <my-directive accessor="vm.accessor"></my-directive>
    <button ng-click="vm.callDirective()">Get Data</button>
</div>

myController.js

function myController($scope) {
    var vm = this;
    vm.accessor = {};
    vm.callDirective = function () {
        if (vm.accessor.getData) {
            var data = vm.accessor.getData();
            alert('Data from directive: ' + JSON.stringify(data));
        }
    };
}

myDirectiveTemplate.html

Name: <input type="text" ng-model="name" /><br />
Credit: <input type="text" ng-model="credit" /><br />

myDirective.js

function myDirective() {
    return {
        restrict: 'E',
        templateUrl: '/Templates/myDirectiveTemplate.html',
        scope: {
            accessor: '='
        },
        link: link
    };

    function link(scope, element, attrs, controller) {
        if (scope.accessor) {
            scope.accessor.getData = function () {
                return {
                    name: scope.name,
                    credit: scope.credit
                };
            };
        }
    }
}

Notice how the controller defines vm.accessor as a new object with no members. The controller’s expectation is that the directive will attach a getData function to this object, and the directive’s expectation is that the controller has defined and passed in the accessor object specifically for this purpose. Both expectations are guarded by defensive coding; that is, we ensure that no runtime error is raised in case the controller doesn’t define and pass in the expected accessor object, or in case the directive doesn’t attach the expected function to it.

The accessor pattern described in this blog post simplifies the task of bi-directional communication, making it just as easy to call your directive from your controller as it is to call in the other direction.

Happy coding!

Just Published: Microsoft Azure SQL Database Step by Step

I’m extremely pleased to announce that my new book on Microsoft Azure SQL Database has just been published! The book, part of the Microsoft Press “Step By Step” series, is designed for readers to quickly get productive with Microsoft Azure SQL Database — the cloud version of SQL Server.

I’m especially lucky to have worked with Eric Boyd, who has co-authored the book with me. Eric has great knowledge and experience with the Azure platform, which really shines through in his chapters.

So who is this book for? Well, anyone interested in quickly getting up and running with SQL Database on Microsoft Azure. This includes not only those experienced with SQL Server, but readers having general experience with other database technologies, and even those with little to no experience at all. The Step By Step series follows an inviting format that’s chock full of quick rewards — small bits of conceptual information are presented, and that information is then immediately put to practical use by walking through a relatively short procedure, one step at a time.

Here are just some of the great things we cover:

  • Quick-Start, Setup, and Configuration
  • Security in the cloud
  • Reporting Services in the cloud
  • SQL Data Sync
  • Migration and Backup
  • Using the online management portal, and familiar tools like SSMS and SSDT
  • Programming using such tools as the Entity Framework ORM layer
  • Scalability and Performance
  • Differences from on-premise SQL Server

Of course, even as these pages come hot off the press, Azure continues to evolve. On to the next edition?…

Observations of a SQL Saturday Speaker

I enjoy speaking at SQL Saturday events every chance I get. I’ve spoken at well over a dozen of them by now, both in the U.S. and abroad. Every SQL Saturday is different, and that’s part of the charm of course. Organizers (all volunteers) work hard to make sure everything runs smoothly. Still, on occasion, problems can arise, and in some cases, these are problems that can be avoided. For you SQL Saturday organizers out there planning your next event, here are some tips based on personal experience that can help ensure success:

  • Notify speakers of agenda changes
    • On occasion, the agenda needs to be changed, which is understandable. However, if this happens, always be sure to notify the affected speakers (or all speakers). This is especially important when the changes are last-minute. Don’t assume speakers double-check the website for last-minute changes the day before your event
  • Don’t assume all speakers are local
    • When providing directions, don’t use general references like “the east entrance of the parking garage.” Out-of-towners tend to be directionally challenged!
  • Include proper location details
    • Be as specific as possible and include all relevant details, such as the floor, when pertinent. If the event is being hosted at a university, don’t just advertise the general address for the entire campus; specify the exact building address or building name.
  • Provide organizer contact phone number
    • Having no one to call in the event of a problem right before (or during) the event can pile more stress on an already stressful situation. Make sure that every speaker has a name and phone number they can contact if necessary.
  • Don’t assume everyone shows up in the morning
    • Very often, SQL Saturday signs are placed near and around the venue. These signs are very helpful, so they shouldn’t be taken down in the middle of the day. Some attendees (and speakers) may only be able to arrive for the latter part of the day, and they also need to benefit from the signs.
