Integrating Document BLOB Storage with SQL Server

NoSQL platforms can support highly scalable databases with BLOB attachments (images, documents, and other files), but if you think you need to embrace a NoSQL solution in lieu of SQL Server just because you have a high volume of BLOBs in your database, then think again. Sure, if you have good reasons to go with NoSQL anyway – for example, if you want the flexibility of schema-free tables, and you can accept the compromises of eventual transactional consistency – then NoSQL can fit the bill nicely.

But critical line-of-business applications often can’t afford the relaxed constraints of NoSQL databases, and usually require schemas that are strongly typed, with full transactional integrity; that is, a full-fledged relational database system (RDBMS). However, relational database platforms like SQL Server were originally designed and optimized to work primarily with structured data, not BLOBs. And so historically, it’s never been feasible to store large amounts of BLOB data directly in the database. That is, until FILESTREAM.

With FILESTREAM, Microsoft addresses the dilemma of storing BLOBs within the relational database. My new Pluralsight course, SQL Server 2012-2014 Native File Streaming, explains this innovative feature in detail, and in this blog post, I’ll discuss how FILESTREAM (and its related technologies) can be used to implement a highly-scalable BLOB storage solution that’s fully integrated with a relational SQL Server database. You’ll also find live demos on everything covered by this post in the course.

Introducing FILESTREAM

Although SQL Server was never originally intended to handle BLOBs at scale, that changed with FILESTREAM (introduced in SQL Server 2008). Before FILESTREAM, SQL Server was forced to shove BLOBs into the standard database filegroups, which are optimized for storing structured row data in 8K pages. Because BLOBs don’t fit naturally within this structure, they must be pushed into off-row storage, which bloats the structured filegroups and ultimately kills performance.


FILESTREAM changes all that. First, to be clear, FILESTREAM is not actually a data type. Rather, it’s an attribute that you apply to the varbinary(max) data type, the same data type that you would use to store BLOBs directly inside the row. But by merely appending the FILESTREAM attribute to the varbinary(max) data type, SQL Server takes a radically different approach to physical BLOB storage. Rather than inundating the standard database filegroups with BLOBs, SQL Server stores BLOB content as files in the file system – where they belong; the file system being a native environment optimized for storing and streaming unstructured binary content. At the same time, it establishes and maintains reference pointers between the rows in the standard filegroups and the files in the file system that are tied to varbinary(max) columns in those rows. All this magic occurs behind the scenes, and is totally transparent to any existing code that works with ordinary varbinary(max) columns.
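As a concrete sketch (the table and column names here are illustrative, and the script assumes FILESTREAM is enabled on the instance and that the database has a FILESTREAM filegroup), declaring a FILESTREAM column looks like this:

```sql
-- FILESTREAM requires a ROWGUIDCOL column with a unique constraint
CREATE TABLE PhotoAlbum(
    PhotoId int PRIMARY KEY,
    RowId uniqueidentifier ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWSEQUENTIALID(),
    PhotoDescription varchar(max),
    Photo varbinary(max) FILESTREAM NULL)
```

Aside from the FILESTREAM attribute (and the required ROWGUIDCOL column), this is an ordinary table with an ordinary varbinary(max) column.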


In this manner, the BLOB data is physically stored separately from structured row data, but it is logically an integral part of the database. So for example, backing up the database includes the BLOB data, with the option of performing a partial backup that excludes the FILESTREAM filegroup when you want to create smaller backups that don’t include BLOB data.

Furthermore, this solution provides full transactional consistency – because FILESTREAM integrates with the NTFS file system, and NTFS is a transactional file system. So when you start a database transaction and insert a row, and that row includes BLOB data stored in a varbinary(max) FILESTREAM column, then SQL Server automatically initiates an NTFS file system transaction over that BLOB data. Then, the fate of the file system transaction hinges on the fate of the database transaction. If and when the database transaction commits, then SQL Server will also commit the NTFS file system transaction; similarly, rolling back the database transaction automatically rolls back the NTFS transaction.
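For example (a sketch using a hypothetical PhotoAlbum table), rolling back the database transaction discards the BLOB’s file along with the row:

```sql
BEGIN TRANSACTION

INSERT INTO PhotoAlbum(PhotoId, PhotoDescription, Photo)
    VALUES(1, 'Test photo', 0x4A46494600)  -- BLOB lands in an NTFS file behind the scenes

-- Rolling back the database transaction also rolls back the NTFS
-- file system transaction, so the new file is discarded as well
ROLLBACK TRANSACTION
```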

Accessing BLOBs with T-SQL

With FILESTREAM, you can treat BLOBs as ordinary varbinary(max) columns in T-SQL. For example, you can use the OPENROWSET function with the BULK provider to import an external file into a varbinary(max) column, and if that column is decorated with the FILESTREAM attribute, then SQL Server will automatically store a copy of that file as a BLOB in the NTFS file system behind the scenes, rather than force-fitting it into the standard database filegroups.

For example:

INSERT INTO PhotoAlbum(PhotoId, PhotoDescription, Photo)
    SELECT 1, 'Mountain ascent', BulkColumn
    FROM OPENROWSET(BULK 'C:\Demo\Ascent.jpg', SINGLE_BLOB) AS x

To retrieve BLOBs, it’s a simple SELECT:

SELECT * FROM PhotoAlbum


Using SqlFileStream and the Streaming API

Although FILESTREAM delivers scalable storage by leveraging the NTFS file system behind the scenes, BLOB access needs to scale as well. It’s great that you can enjoy total transparency by just using T-SQL access, but stop for a moment and think about what SQL Server needs to do when retrieving BLOBs with T-SQL. In order to serve up the Photo column for the SELECT statement shown above, for example, SQL Server needs to read the entire contents of each BLOB from the NTFS file system that it’s managing internally, and this can easily and suddenly place a great deal of memory pressure on the server.

To address this concern, FILESTREAM exposes the streaming API. When you use this API, SQL Server still manages the file system behind the scenes, only it shifts the burden and memory requirements of actually reading and writing BLOBs in the file system off of itself and onto the client application. This keeps the memory requirements on SQL Server very lean, regardless of how large your BLOBs may be.

The SqlFileStream class is a managed code wrapper around the streaming API, which makes it extremely easy to use from .NET. In C# or VB .NET, you start a database transaction and issue an INSERT statement, but you don’t actually include the BLOB content with the INSERT statement. Instead, SQL Server passes you back the information you need to create a SqlFileStream object. This object inherits from the base System.IO.Stream class, meaning that it supports all the standard read/write methods of standard .NET stream classes, including memory streams, HTTP request/response streams, and local file streams. So it’s easy to then stream your BLOBs in and out, using buffers in memory allocated to your application – not SQL Server. Then, you just commit the database transaction, and SQL Server automatically commits the NTFS file system transaction at the same time.
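Here is a minimal C# sketch of the read path (the PhotoAlbum table, connection string, and file names are assumptions, and error handling is omitted for brevity):

```csharp
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using System.IO;

string connectionString = "...";  // your connection string here

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlTransaction tx = conn.BeginTransaction())
    {
        // Ask SQL Server for the BLOB's file path and the transaction context
        var cmd = new SqlCommand(
            @"SELECT Photo.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT()
              FROM PhotoAlbum WHERE PhotoId = @id", conn, tx);
        cmd.Parameters.AddWithValue("@id", 1);

        string path;
        byte[] txContext;
        using (SqlDataReader rdr = cmd.ExecuteReader(CommandBehavior.SingleRow))
        {
            rdr.Read();
            path = rdr.GetString(0);
            txContext = rdr.GetSqlBytes(1).Value;
        }

        // Stream the BLOB straight from the NTFS file system, buffering
        // in this application's memory rather than SQL Server's
        using (var src = new SqlFileStream(path, txContext, FileAccess.Read))
        using (var dest = File.Create(@"C:\Demo\AscentCopy.jpg"))
        {
            src.CopyTo(dest);
        }

        tx.Commit();  // also commits the NTFS file system transaction
    }
}
```

Writing follows the same shape: issue the INSERT without the BLOB content, obtain the path and transaction context, and open the SqlFileStream with FileAccess.Write.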

In my course, I show you SqlFileStream up close, and demonstrate how to program against the streaming API from a thick client application, a client/server (Web) application, and in an n-tier (WCF) scenario as well.

Introducing FileTable

The FILESTREAM story only gets better with FileTable, added in SQL Server 2012. While FILESTREAM revolutionizes BLOB storage in SQL Server, it’s only accessible to developers and administrators. What about ordinary users? They’re certainly not going to write T-SQL or streaming API code to access BLOBs. And there’s also no way for ordinary client applications to access FILESTREAM data.

The solution is FileTable, which combines FILESTREAM with the hierarchyid data type to furnish an “emulated” file system; that is, a file system that users and applications can work with, but which is really a FileTable in a SQL Server database. A FileTable is just a regular table, except that it has a fixed schema; specifically, it has pre-determined columns for the metadata of the emulated file system, including stream_id, file_stream (the BLOB content), name, path_locator and parent_path_locator, file_type, cached_file_size, timestamps such as creation_time and last_write_time, and attribute flags such as is_directory.


Every row in a FileTable represents either a file or a folder (depending on the is_directory column), and the hierarchyid value in the path_locator column is what implements the folder structure of the emulated file system. The hierarchyid data type has methods that you can use to query and manipulate the structure; for example, you can programmatically move entire subtrees from one parent to another.
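For example (a sketch; the Documents FileTable and folder names are assumptions), hierarchyid’s IsDescendantOf and GetReparentedValue methods make it possible to move an entire subtree in a single UPDATE:

```sql
DECLARE @oldParent hierarchyid =
    (SELECT path_locator FROM Documents WHERE name = 'OldFolder' AND is_directory = 1);
DECLARE @newParent hierarchyid =
    (SELECT path_locator FROM Documents WHERE name = 'NewFolder' AND is_directory = 1);

-- Recompute the path of every item beneath the old folder
UPDATE Documents
SET path_locator = path_locator.GetReparentedValue(@oldParent, @newParent)
WHERE path_locator.IsDescendantOf(@oldParent) = 1
  AND path_locator <> @oldParent;
```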

For rows that represent files, the file_stream column holds the actual BLOB, and this is a varbinary(max) FILESTREAM column. So behind the scenes, it is stored in the NTFS file system just like a varbinary(max) FILESTREAM column of an ordinary table (a non-FileTable) would be.

And so, in addition to being able to use T-SQL or the streaming API with a FileTable, the emulated file system that a FileTable represents also gets exposed to users and client applications via a Windows file share. As a result, changes made to the table in the database are reflected in the emulated file system, and conversely, changes made to the emulated file system by users or client applications are reflected automatically in the database, which ultimately pushes down into the physical NTFS file system being used for BLOB storage behind the scenes.
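For example (again assuming a FileTable named Documents), T-SQL can report the Windows share path that users and client applications see:

```sql
-- UNC path of the FileTable's root folder on the exposed file share
SELECT FileTableRootPath(N'dbo.Documents');

-- Full UNC path of every file and folder row in the FileTable
SELECT name, FileTableRootPath() + file_stream.GetFileNamespacePath() AS UncPath
FROM Documents;
```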



This blog post explained FILESTREAM, and its related feature, FileTable. We first saw how FILESTREAM supports scalable BLOB storage using the NTFS file system behind the scenes, and provides transparent T-SQL access using the varbinary(max) data type. We also learned about the streaming API and SqlFileStream, which shifts the burden and memory requirements for streaming BLOBs off of SQL Server and onto client applications, providing scalable BLOB access. And we finally saw how FileTable combines FILESTREAM with the hierarchyid data type to furnish an emulated file system on the front end that users and client applications can interact with, but which in actuality is just a table in the database.

And so, with FILESTREAM, line-of-business applications can embrace scalable BLOB integration without being forced to consider a NoSQL alternative to SQL Server.

Bidirectional Communication Between Directives and Controllers in Angular

In Angular, it’s very easy for a directive to call into a controller. Working in the other direction – that is, calling a directive function from the controller – is not quite as intuitive. In this blog post, I’ll show you an easy way for your controllers to call functions defined in your directives in your Angular applications.

Like I said, calling a controller function from a directive is straightforward. You simply define a “callback” function in the controller and pass it to the directive (using the ‘&’ symbol in the isolated scope definition). It’s then trivial for the directive to invoke the function, which calls into the controller. To put things in .NET terms, this is akin to a user control (the directive) raising an event, which the user control’s host (the controller) can handle.

For example, you may want your directive to call your controller when the user clicks a button defined inside the directive’s template:


<div ng-controller="myController as vm">
    <my-directive on-button-click="vm.directiveButtonClicked()"></my-directive>
</div>


function myController($scope) {
    var vm = this;
    vm.directiveButtonClicked = function () {
        // Controller reacting to call initiated by directive
        alert('Button was clicked in directive');
    };
}


<button ng-click="buttonClicked()">Click Me</button>


function myDirective() {
    return {
        restrict: 'E',
        templateUrl: '/Templates/myDirectiveTemplate.html',
        scope: {
            onButtonClick: '&'
        },
        link: link
    };

    function link(scope, element, attrs, controller) {
        scope.buttonClicked = function () {
            // Button was clicked in the directive;
            // invoke the callback function on the controller
            scope.onButtonClick();
        };
    }
}

Unfortunately, there is no clearly established pattern in Angular for communicating in the opposite direction (calling functions of the directive from the controller). Again, in .NET terms, it’s easy for a user control’s host (the controller) to invoke public or internal methods defined by the user control (the directive). But there is no native way to achieve the same thing in Angular, which is certainly curious, because this is not an uncommon requirement.

Several solutions to this problem can be found on the web, but most of them carry caveats and/or add unwanted complexity. Some rely on $watch, which adds digest-cycle overhead and should generally be avoided when possible. Others work, but not with isolated scope, which means you won’t achieve isolation across multiple instances of the directive.

Fret not! I’m going to show you a simple, lightweight technique that will enable your controllers to call functions on your directives, without resorting to $watch, and with full support for isolated scope.

Here’s how it works:

  1. The controller defines a view-model object named “accessor” with no members
  2. The page passes this object to the directive, via an attribute also named “accessor”
  3. The directive receives the accessor, and attaches a function to it
  4. The controller is now able to call the directive function via the accessor

Let’s demonstrate with an example. The directive template has two text boxes for input, but no button. Instead, there is a button on the page that is wired to a handler on the page’s controller. When the user clicks the button, the controller calls the directive. In response, the directive prepares an object with data entered by the user in the text boxes and returns it to the controller.


<div ng-controller="myController as vm">
    <my-directive accessor="vm.accessor"></my-directive>
    <button ng-click="vm.callDirective()">Get Data</button>
</div>


function myController($scope) {
    var vm = this;
    vm.accessor = {};
    vm.callDirective = function () {
        if (vm.accessor.getData) {
            var data = vm.accessor.getData();
            alert('Data from directive: ' + JSON.stringify(data));
        }
    };
}


Name: <input type="text" ng-model="name" /><br />
Credit: <input type="text" ng-model="credit" /><br />


function myDirective() {
    return {
        restrict: 'E',
        templateUrl: '/Templates/myDirectiveTemplate.html',
        scope: {
            accessor: '='
        },
        link: link
    };

    function link(scope, element, attrs, controller) {
        if (scope.accessor) {
            scope.accessor.getData = function () {
                return {
                    name: scope.name,
                    credit: scope.credit
                };
            };
        }
    }
}

Notice how the controller defines vm.accessor as a new object with no members. The controller’s expectation is that the directive will attach a getData function to this object, and the directive’s expectation is that the controller has defined and passed in the accessor object for exactly this purpose. Defensive coding covers both expectations; that is, we ensure that no runtime error is raised if the controller doesn’t define and pass in the expected accessor object, or if the directive doesn’t attach the expected function to it.
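Stripped of the Angular specifics, the accessor pattern is just a shared object that one party decorates with a function and the other calls defensively. A framework-free sketch (the names and data values are illustrative):

```javascript
// The "controller" creates an empty accessor object...
const accessor = {};

// ...the "directive" attaches a function to it (guarding against a missing accessor)...
function linkDirective(acc) {
    if (acc) {
        acc.getData = function () {
            return { name: 'Alice', credit: 100 };
        };
    }
}
linkDirective(accessor);

// ...and the "controller" calls it, guarding against the function not being attached.
function callDirective(acc) {
    return acc.getData ? acc.getData() : null;
}

console.log(callDirective(accessor)); // { name: 'Alice', credit: 100 }
```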

The accessor pattern described in this blog post simplifies the task of bi-directional communication, making it just as easy to call your directive from your controller as it is to call in the other direction.

Happy coding!



Just Published: Microsoft Azure SQL Database Step by Step

I’m extremely pleased to announce that my new book on Microsoft Azure SQL Database has just been published! The book, part of the Microsoft Press “Step By Step” series, is designed for readers to quickly get productive with Microsoft Azure SQL Database — the cloud version of SQL Server.


I’m especially lucky to have worked with Eric Boyd, who has co-authored the book with me. Eric has great knowledge and experience with the Azure platform, which really shines through in his chapters.

So who is this book for? Well, anyone interested in quickly getting up and running with SQL Database on Microsoft Azure. This includes not only those experienced with SQL Server, but readers having general experience with other database technologies, and even those with little to no experience at all. The Step By Step series follows an inviting format that’s chock full of quick rewards — small bits of conceptual information are presented, and that information is then immediately put to practical use by walking through a relatively short procedure, one step at a time.

Here are just some of the great things we cover:

  • Quick-Start, Setup, and Configuration
  • Security in the cloud
  • Reporting Services in the cloud
  • SQL Data Sync
  • Migration and Backup
  • Using the online management portal, and familiar tools like SSMS and SSDT
  • Programming using such tools as the Entity Framework ORM layer
  • Scalability and Performance
  • Differences from on-premises SQL Server

Of course, even as these pages come hot off the press, Azure continues to evolve. On to the next edition?…


Observations of a SQL Saturday Speaker

I enjoy speaking at SQL Saturday events every chance I get. I’ve spoken at well over a dozen of them by now, both in the U.S. and abroad. Every SQL Saturday is different, and that’s part of the charm of course. Organizers (all volunteers) work hard to make sure everything runs smoothly. Still, on occasion, problems can arise, and in some cases, these are problems that can be avoided. For you SQL Saturday organizers out there planning your next event, here are some tips based on personal experience that can help ensure success:

  • Notify speakers of agenda changes
    • On occasion, the agenda needs to change, which is understandable. When this happens, always be sure to notify the affected speakers (or all speakers). This is especially important when the changes are last-minute. Don’t assume speakers double-check the website for last-minute changes the day before your event.
  • Don’t assume all speakers are local
    • When providing directions, don’t use general references like “the east entrance of the parking garage.” Out-of-towners tend to be directionally challenged!
  • Include proper location details
    • Be as specific as possible and include all relevant details, such as the floor, when pertinent. If the event is being hosted at a university, don’t just advertise the general address for the entire campus; specify the exact building address or building name.
  • Provide organizer contact phone number
    • Having no one to call in the event of a problem right before (or during) the event can pile more stress on an already stressful situation. Make sure that every speaker has a name and phone number they can contact if necessary.
  • Don’t assume everyone shows up in the morning
    • Very often, SQL Saturday signs are placed near and around the venue. These signs are very helpful, so they shouldn’t be taken down in the middle of the day. Some attendees (and speakers) may only be able to arrive for the latter part of the day, and they also need to benefit from the signs.


Using PowerShell to Manage Windows Azure SQL Database

PowerShell is Microsoft’s modern scripting language for system administration that supports a wide variety of tasks by executing commands (known as cmdlets, pronounced command-lets) from the PowerShell command line. Microsoft has also developed PowerShell cmdlets for managing Microsoft Azure, including a number of useful SQL Database commands. In this post, I’ll show you how to get started using these cmdlets.

Even if you already have PowerShell installed, these special cmdlets for Microsoft Azure need to be installed separately. The following instructions describe how to download and install the cmdlets.

Installing the Microsoft Azure PowerShell cmdlets

To install the Microsoft Azure PowerShell cmdlets, follow these steps:

  1. Navigate your web browser to
  2. Scroll down to Command-Line Tools, and click the Install link beneath Windows PowerShell.
  3. When prompted to run or save, click Run.
  4. If the User Account Control dialog appears, click Yes.
  5. In the Web Platform Installer dialog, click Install.
  6. Click I Accept to start the installation.
  7. When installation completes, click Finish.
  8. Click Exit to close the Web Platform Installer dialog.

Using the PowerShell Integrated Scripting Environment

Although you can use a simple text editor (even Notepad) to write PowerShell scripts, the PowerShell Integrated Scripting Environment (ISE) is a much more productive environment. As you’ll see, it offers some nice features, such as syntax highlighting and IntelliSense-style auto-completion.

PowerShell can be launched from the command line. To start the PowerShell ISE and view help information for the Azure SQL Database cmdlets, follow these steps:

  1. Open a command prompt.
  2. At the command prompt, type powershell_ise.
  3. At the PowerShell ISE prompt, type get-help get-azuresql, and then pause. In a moment, a popup window appears showing all the cmdlets that start with get-azuresql.
  4. Double-click on any of the cmdlets to complete the command, and then press Enter to view help for the selected cmdlet.

Configuring PowerShell for your Microsoft account

Before you can start using PowerShell to manage Azure, you need to configure it for your account. This is simply a matter of logging in to the portal, and then running a few PowerShell commands to retrieve your account information from Azure and import it into PowerShell. This section walks you through the process.

To configure PowerShell for your Microsoft account, follow these steps:

  1. Log in to the Microsoft Azure portal at This step is necessary for PowerShell to identify your account. If you don’t log in first, you will be prompted to log in when you try to retrieve your account settings in PowerShell.
  2. Start the PowerShell ISE, as explained in the previous procedure.
  3. Type Get-AzurePublishSettingsFile to retrieve your account settings. Internet Explorer will open up automatically and download a .publishsettings file with your account information.
  4. Click Save to save the .publishsettings file to your default Downloads folder.
    Important: The .publishsettings file should be kept safe and private, because it effectively provides access to the Azure subscriptions on your Microsoft account.
  5. Back in the PowerShell ISE, type Import-AzurePublishSettingsFile <.publishsettings file>, where <.publishsettings file> is the complete file name (with path) of the account settings file you just saved to your default Downloads folder. (This is typically C:\Users\<username>\Downloads\<subscription-name>.publishsettings.)

The PowerShell ISE doesn’t display a confirmation message when the settings are imported successfully; you’ll only see a message if the import fails. Otherwise, you’ll know that all went well when you are silently returned to the PowerShell command-line prompt.

You are now ready to create a server, and then you can create a database on that server.

Creating a new server

First, create a new server and add a firewall rule for your IP address so that the server will allow you to connect to it using PowerShell. To do this, follow these steps in the PowerShell ISE:

  1. Type New-AzureSqlDatabaseServer -Location "East US" -AdministratorLogin "<new-login>" -AdministratorLoginPassword "<new-password>", where <new-login> and <new-password> are the credentials you want to assign for the new server. The server is created, and PowerShell responds by displaying the new server name.
  2. Type New-AzureSqlDatabaseServerFirewallRule -ServerName <server-name> -RuleName <any-name> -StartIpAddress <your-ip-address> -EndIpAddress <your-ip-address>, where <server-name> is the name of the new server created in step 1, <any-name> is an arbitrary name for the new rule (no spaces permitted), and <your-ip-address> is the IP address of your machine. This command creates a new firewall rule to allow PowerShell access to the server from your IP address. If you don’t know your IP address, you can find out what it is by visiting

Creating a new database

The New-AzureSqlDatabase cmdlet creates a new database. Before you can use this cmdlet, you must first create an object with your credentials, and then use those credentials to create a context associated with the server on which you want to create the new database. You store that server context in a variable, and then you specify the context variable with the New-AzureSqlDatabase cmdlet to create the database (and with any other cmdlets you run against that particular server).

To create a new database now, follow these steps in the PowerShell ISE:

  1. Type $creds = New-Object System.Management.Automation.PSCredential("<login-name>", ("<login-password>" | ConvertTo-SecureString -AsPlainText -Force)), where <login-name> and <login-password> are the administrator login and password you assigned when you created the server in the previous procedure. This stores those administrator credentials as a credential object named $creds.
  2. Type $context = New-AzureSqlDatabaseServerContext -ServerName <server-name> -Credential $creds, where <server-name> is the name of the server you created in the previous procedure (the server name is displayed when the server is created). This creates a context associated with the credentials you created in step 1 and the server you created in the previous procedure, and it stores that context in a variable named $context.
  3. Type New-AzureSqlDatabase -Context $context -DatabaseName MyNewDb. This creates a new database named MyNewDb. The database is created on the server associated with $context, using the credentials associated with $creds. When the database is created, PowerShell displays information about the new database.
  4. It’s often useful to view all the databases that exist on the server. To do so, type Get-AzureSqlDatabase -Context $context. As shown by this cmdlet’s output, the server includes a master database, just as an on-premises SQL Server instance does.

The database you just created with New-AzureSqlDatabase is, by default, a Web edition database with a maximum size of 1 GB and the default collation. This is the same type of database that gets created when you use Quick Create in the Microsoft Azure management portal. To override these defaults, specify the -Edition, -MaxSizeGB, and -Collation switches with an edition, maximum size, and collation of your own choosing. For example, the following statement creates a Business edition database with a maximum size of 150 GB (the largest possible):

New-AzureSqlDatabase -Context $context -DatabaseName MyBigDb -Edition Business -MaxSizeGB 150

You can also change the edition and maximum size (but not the collation) of an existing database by using the Set-AzureSqlDatabase cmdlet with the -Edition and -MaxSizeGB switches. For example, you can use the following command to reconfigure the MyNewDb database you just created as a Business edition database with a maximum size of 20 GB:

Set-AzureSqlDatabase -Context $context -DatabaseName MyNewDb -Edition Business -MaxSizeGB 20

Deleting a database

The Remove-AzureSqlDatabase cmdlet deletes a SQL Database. To delete the MyNewDb database you just created in the previous section, follow these steps:

  1. Type Remove-AzureSqlDatabase -Context $context -DatabaseName MyNewDb.
  2. When prompted to confirm, click Yes.

If you are using Remove-AzureSqlDatabase to write scripts you intend to run with no user intervention, you can include the -Force switch. This switch causes the database to be deleted immediately, without a confirmation prompt.
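For example, the following command deletes the database immediately, with no confirmation prompt:

```powershell
Remove-AzureSqlDatabase -Context $context -DatabaseName MyNewDb -Force
```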

There’s a lot more you can do with these PowerShell cmdlets of course, so be sure to explore all the help topics to learn about all the other supported functionality!


Downloading SQL Server Express Edition

I’m often asked how to install SQL Server Express Edition, and what the different download options are. So I finally decided to explain it all in this blog post.

There are several SQL Server Express Edition downloads available on the Microsoft site, and they are available in both 32-bit and 64-bit versions. You can choose to install just the SQL Server Express database engine (and nothing else), or you can choose one of two other (larger) downloads: Express With Tools (which includes SQL Server Management Studio [SSMS]) or Express With Advanced Services (which includes SSMS, Full Text Search, and Reporting Services). There are also separate downloads for SSMS and LocalDB, but these do not include the SQL Server Express database engine needed to host local databases.

To install the SQL Server Express Edition database engine, follow these steps:

  1. Open Internet Explorer, and navigate to
  2. Click the large orange Download button.
  3. Select the appropriate download for your system.
    1. For 64-bit systems, choose ENU\x64\SQLEXPR_x64_ENU.exe.
    2. For 32-bit or 64-bit WoW systems, choose ENU\x86\SQLEXPR32_x86_ENU.exe.
    3. For 32-bit systems, choose ENU\x86\SQLEXPR_x86_ENU.exe.
       Note: If you need to download SQL Server Management Studio (SSMS) as well, choose the Express With Tools file instead, which is the one that includes WT in the filename.
  4. Click Next.
  5. If you receive a pop-up warning, click Allow Once.
  6. When prompted to run or save the file, choose Run. This starts and runs the download.
  7. If the User Account Control dialog appears after the download files are extracted, click Yes.
  8. In the SQL Server Installation Center, click New SQL Server Stand-Alone Installation.
  9. In the SQL Server 2012 Setup wizard, do the following:
    1. On the License Terms page, select I Accept The License Terms and click Next.
    2. On the Product Updates page, allow the wizard to scan for updates, and then click Next.
    3. On the Install Setup Files page, wait for the installation to proceed.
    4. On the Feature Selection page, click Next.
    5. Continue clicking Next through all the remaining pages until the Installation Progress page, and wait for the installation to proceed.
    6. On the Complete page indicating a successful setup, click Close.

I hope these instructions help you choose the right installation option for SQL Server Express Edition!





Creating a New SQL Database on Windows Azure

In my previous post, I showed you how to sign up for a Windows Azure subscription. Once you’ve got your subscription, it’s easy to create a SQL Database, which is very much like an ordinary SQL Server database hosted in the cloud. In this post, I’ll show you how. Essentially, you first need to create a server on Windows Azure, and then you can create the database on that server.

Creating a server

It’s easy to create a server, which is akin to an instance of SQL Server in the sense that it can host multiple databases. All you need to do is create an administrator account user name with a strong password, and specify the geographical region where the server should be located physically. To achieve the best performance, you should choose the region closest to your consumers. You will also want to be sure that any Windows Azure cloud Web sites and services you create are hosted in the same region as the SQL Database servers they communicate with. By locating both in the same region, you will avoid the bandwidth-based fee that gets incurred when your cloud sites, services, and databases communicate across different Azure regions. You will also reduce latency, which results in perceivably better performance.

SQL Database also has special firewall rules you can set to control exactly which computer or computers can access your database server in the cloud. Minimally, you’ll need to add a rule granting access to the IP address of your computer so that you can access the server from your local machine. For production, you might need to add rules granting access to blocks of IP addresses.

Follow these steps to create a new server:

  1. Log in to the Windows Azure portal at This brings you to the main portal page showing ALL ITEMS.
  2. Click SQL DATABASES in the vertical navigation pane on the left, then click SERVERS at the top of the page, and then click CREATE A SQL DATABASE SERVER.
  3. Provide a new server login name—for example, saz.
  4. Supply a password for the new server, and then reenter it to confirm. Typical strong password guidelines apply, which require you to use a combination of mixed case, numbers, and symbols.
  5. Choose a region from the drop-down list—for example, East US. For best performance, pick the region you are located in or nearest to.
  6. Be sure to leave the ALLOW WINDOWS AZURE SERVICES TO ACCESS THE SERVER check box selected. This makes the server accessible to other Windows Azure cloud services that you can create.
  7. Click the checkmark icon on the lower-right side of the dialog to complete the settings. After just a few moments, the new server is provisioned and ready to use.

If you’ve ever prepared a new server from scratch yourself, you can really appreciate the time and effort you just saved. This server is now available and ready to host databases in the cloud, and SQL Database has automatically assigned it a randomly generated (but relatively short) unique name by which it can be accessed. But before access is granted, the server firewall must be configured. So the next step is to add a firewall rule so that you can connect to the server from your local machine.

The check box mentioned in step 6 added a special firewall rule that allows cloud services running on Windows Azure to access the SQL Database server. However, you still need to add the IP address of your local machine to access the server from the SQL Database management portal and other tools (such as SQL Server Management Studio and SQL Server Data Tools in Microsoft Visual Studio).

To add a firewall rule for the IP address of your local machine, follow these steps:

  1. Click the server name, and then click the CONFIGURE link at the top of the page.
  2. To the right of your current detected IP address, click ADD TO THE ALLOWED IP ADDRESSES. A new firewall rule for your IP address is added.
  3. Click SAVE at the bottom of the page.
  4. Click the back icon (the large back-pointing arrow) to return to the SQL DATABASES page for the new server.

You might need to wait a few moments for the new firewall rule to take effect, although it typically happens very quickly (often within five to ten seconds). Until the rule has taken effect, you will not be able to connect to the server from your local machine.
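Once the firewall rule is in place, you connect to the server using its fully qualified name and your administrator login. A quirk of SQL Database is that the login is given in user@server form. The helper below sketches the shape of such a connection string; the server name abc123xyz and the credentials are placeholders standing in for the random name Azure assigned to your server and the administrator account you created.

```python
# Sketch of an ADO.NET-style connection string for a SQL Database server.
# "abc123xyz", "saz", and the password are hypothetical placeholders.
def azure_sql_connection_string(server: str, database: str,
                                login: str, password: str) -> str:
    # SQL Database expects the login in user@server form, TCP port 1433,
    # and (as a best practice) an encrypted connection.
    return (
        f"Server=tcp:{server}.database.windows.net,1433;"
        f"Database={database};"
        f"User ID={login}@{server};"
        f"Password={password};"
        f"Encrypt=True;TrustServerCertificate=False;"
    )

cs = azure_sql_connection_string("abc123xyz", "MyDatabase", "saz", "P@ssw0rd!")
print(cs)
```

The same server name and user@server login are what you’d type into the connection dialog of SQL Server Management Studio or SQL Server Data Tools.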

Creating a SQL Database instance

Creating a database is just about as easy as creating the server. You simply need to choose a name for the new database, an edition, a database size, a default collation, and of course, the server to host the database on.

These settings can be easily changed later on. As part of the elastic scaling provided by SQL Database, you can freely switch back and forth between the Web and Business editions. You can also switch up and down between the sizes (1 GB or 5 GB for the Web edition, or 10 GB through 150 GB for the Business edition) as your changing needs dictate. And if 150 GB is still too small for you, you can partition your database using special sharding techniques.
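The sharding idea mentioned above amounts to routing each row to one of several databases based on a shard key. Here’s a minimal hash-based sketch, assuming hypothetical shard database names; it isn’t the only (or an official) partitioning scheme, just an illustration of the stable key-to-database mapping sharding relies on.

```python
# Hash-based sharding sketch: when one database is too small, rows are
# routed to one of several databases by hashing a shard key.
# The shard database names are hypothetical.
import hashlib

shards = ["MyDb_Shard0", "MyDb_Shard1", "MyDb_Shard2", "MyDb_Shard3"]

def shard_for(customer_id: str) -> str:
    # Use a stable hash (not Python's randomized hash()) so the same key
    # always maps to the same shard across processes and restarts.
    digest = hashlib.md5(customer_id.encode("utf-8")).hexdigest()
    return shards[int(digest, 16) % len(shards)]

print(shard_for("customer-1001"))
print(shard_for("customer-1001"))  # same key -> same shard, every time
```

The essential property is determinism: every reader and writer must agree on which database holds a given key, which is why a stable hash is used rather than Python’s process-randomized built-in hash.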

Follow these steps to create a new SQL Database:

  1. In the Windows Azure portal, click the DATABASES link at the top of the page.
  2. Click CREATE A SQL DATABASE. This opens the NEW SQL DATABASE dialog.
  3. Type the name for the new database.
  4. Leave the default settings to create a Web edition database up to 1 GB in size using the SQL_Latin1_General_CP1_CI_AS collation.
  5. Choose the server you created in the previous procedure from the drop-down list.
  6. Click the checkmark icon in the lower right of the dialog to complete the settings.

After a few more moments, the new database is created and ready to use.


And that’s all there is to creating a new SQL Database on Windows Azure!





