Building Your First Windows Azure Cloud Application with Visual Studio 2008

In a previous post, I introduced the Azure Services Platform, and provided step-by-step procedures for getting started with Azure. It can take a bit of time downloading the tools and SDKs and redeeming invitation tokens, but once that’s all done, it’s remarkably simple and fast to build, debug, run, and deploy applications and services to the cloud. In this post, I’ll walk through the process for creating a simple cloud application with Visual Studio 2008.

You don’t need to sign up for anything or request any invitation tokens to walk through the steps in this post. Once you’ve installed the Windows Azure Tools for Microsoft Visual Studio July 2009 CTP and related hotfixes as described in my previous post, you’ll have the necessary templates and runtime components for creating and testing Windows Azure projects. (Of course, your Azure account and services must be set up to actually deploy and run in the cloud, which I’ll cover in a future post.)

Installing the Windows Azure Tools for Microsoft Visual Studio July 2009 CTP provides templates that simplify the process of developing a Windows Azure application. It also installs the Windows Azure SDK, which provides the cloud on your desktop. This means you can build applications on your local machine and debug them as if they were running in the cloud. It does this by providing Development Fabric and Development Storage services that simulate the cloud environment on your local machine. When you’re ready, it then takes just a few clicks to deploy to the real cloud.

Let’s dive in!

Start Visual Studio 2008, and begin a new project. Scroll to the new Cloud Service project type, select the Cloud Service template, and name the project HelloCloud.

[Screenshot: New Cloud Service project in the New Project dialog]

When you choose the Cloud Service template, you are creating at least two projects for your solution: the cloud service project itself, and any number of hosted role projects which Visual Studio prompts for with the New Cloud Service Project dialog. There are three types of role projects you can have, but the one we’re interested in is the ASP.NET Web Role. Add an ASP.NET Web Role to the solution from the Visual C# group and click OK.

[Screenshot: New Cloud Service Project dialog with an ASP.NET Web Role added]

We now have two separate projects in our solution: a Cloud Service project named HelloCloud, and an ASP.NET Web Role project named WebRole1:

[Screenshot: HelloCloud solution in Solution Explorer]

The HelloCloud service project just holds configuration information for hosting one or more role projects in the cloud. Its Roles node in Solution Explorer presently indicates that it’s hosting one role, which is our WebRole1 ASP.NET Web Role. Additional roles can be added to the service, including ASP.NET Web Roles that host WCF services in the cloud, but we’ll cover that in a future post. Note also that it’s set as the solution’s startup project.

The project contains two XML files named ServiceDefinition.csdef and ServiceConfiguration.cscfg. Together, these two files define the roles hosted by the service: the .csdef file declares the roles and their endpoints, while the .cscfg file supplies per-deployment settings such as the instance count for each role. Again, for our first cloud application, they currently reflect the single ASP.NET Web Role named WebRole1:

ServiceDefinition.csdef

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="HelloCloud" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1" enableNativeCodeExecution="false">
    <InputEndpoints>
      <!-- Must use port 80 for http and port 443 for https when running in the cloud -->
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
    <ConfigurationSettings />
  </WebRole>
</ServiceDefinition>

ServiceConfiguration.cscfg

<?xml version="1.0"?>
<ServiceConfiguration serviceName="HelloCloud" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <Instances count="1" />
    <ConfigurationSettings />
  </Role>
</ServiceConfiguration>

The second project, WebRole1, is nothing more than a conventional ASP.NET application that holds a reference to the Azure runtime assembly Microsoft.ServiceHosting.ServiceRuntime. From your perspective as an ASP.NET developer, an ASP.NET Web Role is simply an ASP.NET application that can be hosted in the cloud. You can add any Web components to it that you would typically include in a Web application, including HTML pages, ASPX pages, ASMX or WCF services, images, media, etc.

For our exercise, we’ll just set the title text and add some HTML content in the Default.aspx page created by Visual Studio for the WebRole1 project.

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Hello Windows Azure</title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <h1>Hello From The Cloud!</h1>
    </div>
    </form>
</body>
</html>
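
Nothing stops you from adding ordinary code-behind logic either. Here’s a minimal sketch (assuming the default WebRole1 namespace and _Default class that the Visual Studio template generates; the title suffix is purely illustrative) that appends the server’s machine name to the page title at runtime, just as you would in any ASP.NET application:

using System;

namespace WebRole1
{
  public partial class _Default : System.Web.UI.Page
  {
    protected void Page_Load(object sender, EventArgs e)
    {
      // Plain ASP.NET code-behind; nothing Azure-specific is required here.
      Page.Title += " - served by " + Environment.MachineName;
    }
  }
}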

We’re ready to debug/run our application, but unlike debugging a conventional ASP.NET Web application:

  • The ASP.NET Web Role project is not the startup project; the Cloud Service project is
  • The ASP.NET Web Role project won’t run on the Development Server (aka Cassini) or IIS

So debugging cloud services locally means starting the Cloud Service project, which in turn will start all the role projects that the service project hosts. And instead of Cassini or IIS, the ASP.NET Web Role projects will be hosted by two special services that simulate the cloud on your local machine: Development Fabric and Development Storage. The Development Fabric service provides the Azure computational services used in the cloud, and the Development Storage service provides the Azure storage services used in the cloud.

There are still a few things you need to ensure before you hit F5:

  • You must have started Visual Studio as an administrator. If you haven’t, you’ll get an error message complaining that “The Development Fabric must be run elevated.” You’ll need to restart Visual Studio as an administrator and try again.
  • SQL Server Express Edition (2005 or 2008) must be running as the .\SQLEXPRESS instance, your Windows account must have a login on that instance, and that login must be a member of the sysadmin role. If SQL Express isn’t configured properly, you’ll get a permissions error.

Enough talk! Let’s walk through a few build-and-run cycles to get a sense of how these two services work. Go ahead and hit F5 and give it a run.

If this is the very first time you are building a cloud service, Visual Studio will prompt you to initialize the Development Storage service (this won’t happen again for future builds). Click Yes, and wait a few moments while Visual Studio sets up the SQL Express database.

Although it uses SQL Server Express Edition as its backing store, don’t confuse Development Storage with SQL Azure, which offers a full SQL Server relational database environment in the cloud. Rather, Development Storage uses the local SQL Express database to simulate the table (dictionary-style), BLOB (file), and queue Storage Services that Windows Azure provides in the real cloud.

Once the build is complete, Internet Explorer should launch and display our Hello Cloud page.

[Screenshot: “Hello From The Cloud!” page in the browser]

While this may seem awfully anti-climactic, realize that it’s supposed to be. The whole idea is that the development experience for building cloud-based ASP.NET Web Role projects is no different than it is for building conventional on-premises ASP.NET projects. Our Hello Cloud application is actually running on the Development Fabric service, which emulates the real Azure cloud fabric on your local machine.

In the tray area, the Development Fabric service appears as a gears icon. Click on the gears icon to display the context menu:

[Screenshot: Development Fabric icon in the system tray]

Click Show Development Fabric UI to display the service’s user interface. In the Service Deployments treeview on the left, drill down to the HelloCloud service. Beneath it, you’ll see the WebRole1 project is running. Expand the WebRole1 project to see the number of fabric instances that are running:

[Screenshot: Development Fabric UI showing one instance of WebRole1]

At present, and by default, only one instance is running. But you can scale out to increase the capacity of your application simply by changing one parameter in the ServiceConfiguration.cscfg file.

Close the browser and open ServiceConfiguration.cscfg in the HelloCloud service project. Change the value of the count attribute in the Instances tag from 1 to 4:

<Instances count="4" />

Now hit F5 again, and view the Development Fabric UI again. This time, it shows 4 instances hosting WebRole1:

[Screenshot: Development Fabric UI showing four instances of WebRole1]

As you can see, it’s easy to instantly increase the capacity of our applications and services. The experience would be the same in the cloud.

Congratulations! You’ve just built your first Windows Azure application. It may not do much, but it clearly demonstrates the transparency of the Azure runtime environment. From your perspective, it’s really no different than building conventional ASP.NET applications. The Visual Studio debugger attaches to the process being hosted by the Development Fabric service, giving you the same ability to set breakpoints, single-step, etc., that you are used to, so that you can be just as productive building cloud applications.

You won’t get full satisfaction, of course, until you deploy your application to the real cloud. To do that, get your Windows Azure account and invitation tokens set up (as described in my previous post), and stay tuned for a future post that walks you through the steps for Windows Azure deployment.


SQL Azure CTP1 Released

Get Introduced to The Cloud

Read my previous post for a .NET developer’s introduction to the Azure Services Platform, and the detailed steps to get up and running quickly with Azure.

Get Your SQL Azure Token

If you’ve been waiting for a SQL Azure token to test-drive Microsoft’s latest cloud-based incarnation of SQL Server, your wait will soon be over. Just this morning, Microsoft announced the release of SQL Azure CTP1, and over the next several weeks they should be sending invitation tokens to everyone that requested them (if you’ve not yet requested a SQL Azure token, go to http://msdn.microsoft.com/en-us/sqlserver/dataservices/default.aspx and click Register for the CTP).

Get the Azure Training Kit

Microsoft has also just released the August update to the Azure training kit that has a lot of new SQL Azure content in it. Be sure to download it here: http://www.microsoft.com/downloads/details.aspx?FamilyID=413E88F8-5966-4A83-B309-53B7B77EDF78.

Read the Documentation

The SQL Azure documentation, which was posted on MSDN on August 4, can be found here: http://msdn.microsoft.com/en-us/library/ee336279.aspx.

Get Your Head In The Clouds: Introducing the Azure Services Platform

Azure is coming, and it’s coming soon. If you’ve not yet gotten familiar with Azure (that is, the overall Azure Services Platform), you owe it to yourself to start learning about it now, especially since it remains free until RTM in November. This post, and many that will follow, will help you get acquainted with Microsoft’s new cloud computing platform so that you can leverage your hard-earned .NET programming skills quickly and effectively as your applications and services begin moving to the cloud (and they will).

There are many pieces to Azure, and it would be overwhelming to dive into them all at once. So this post will just give the high-level overview, and future posts will target the individual components one at a time (so that you can chew and swallow like a normal person).

Cloud Computing: The Concept

Provisioning servers on your own is difficult. You need to first acquire and physically install the hardware. Then you need to get the necessary software license(s), install the OS, and deploy and configure your application. For medium-to-enterprise scale applications, you’ll also need to implement some form of load balancing and redundancy (mirroring, clustering, etc.) to ensure acceptable performance levels and continuous uptime in the event of unexpected hardware, software, or network failures. You’ll have to come up with a backup strategy and attend to it religiously as part of an overall disaster recovery plan that you’ll also need to establish. And once all of that is set up, you’ll need to maintain everything throughout the lifetime of your application. It’s no exaggeration to say that moving to the cloud eliminates all of these burdens.

In short, the idea of applications and services running in “the cloud” means you’re dealing with intangible hardware resources. In our context, intangible translates to a maintenance-free runtime infrastructure. You sign up with a cloud hosting company for access, pay them for how much power your applications need (RAM, CPU, storage, scale-out load balancing, etc.), and let them worry about the rest.

Azure Virtualization and Fabric

Azure is Microsoft’s cloud computing offering. With Azure, your applications, services, and data reside in the Azure cloud. The Azure cloud is backed by large, geographically dispersed Microsoft data centers equipped with powerful servers, massive storage capacities, and very high redundancy to ensure continuous uptime.

However, this infrastructure is much more than just a mere Web hosting facility. Your cloud-based applications and services don’t actually run directly on these server machines. Instead, sophisticated virtualization technology manufactures a “fabric” that runs on top of all this physical hardware. Your “code in the cloud,” in turn, runs on that fabric. So scaling out during peak usage periods becomes a simple matter of changing a configuration setting that increases the number of instances running on the fabric to meet the higher demand. Similarly, when the busy season is over, it’s the same simple change to drop the instance count and scale back down. Azure manages the scaling by dynamically granting more or less hardware processing power to the fabric running the virtualized runtime environment. The process is virtually instantaneous.

Now consider the same scenario with conventional infrastructure. You’d need to provision servers, bring them online as participants in a load-balanced farm, and then take them offline to be decommissioned later when the extra capacity is no longer required. That requires a great deal of work and time — either for you directly, or for your hosting company — compared to tweaking some configuration with just a few mouse clicks.

The Azure Services Platform

The Azure Services Platform is the umbrella term for the comprehensive set of cloud-based hosting services and developer tools provided by Microsoft. It is presently in beta, and is scheduled to RTM this November at the Microsoft PDC in Los Angeles. Even as it launches, Microsoft is busy expanding the platform with additional services for future releases.

Warning: During the past beta release cycles of Azure, there have been many confusing product brand name changes. Names used in this post (and any future posts between now and RTM) are based on the Azure July 2009 Community Technology Preview (CTP), and still remain subject to change.

At present, Azure is composed of the following components and services, each of which I’ll cover individually in future posts.

  • Windows Azure
    • Deploy, host, and manage applications and services in the cloud
    • Storage Services provides persisted storage for table (dictionary-style), BLOB, and queue data
  • SQL Azure
    • A SQL Server relational database in the cloud
    • With a few exceptions, supports the full SQL Server 2008 relational database feature set
  • Microsoft .NET Services
    • Service Bus enables easy bi-directional communication through firewall and NAT barriers via new WCF relay bindings
    • Access Control provides a claims-based security model that eliminates virtually all the gnarly security plumbing code typically used in business applications
  • Live Services
    • A set of user-centric services focused primarily on social applications and experiences
    • Mesh Services, Identity Services, Directory Services, User-Data Storage Services, Communication and Presence Services, Search Services, Geospatial Services

Getting Started with Azure

Ready to roll up your sleeves and dive in? Here’s a quick checklist for you to start building .NET applications for the Azure cloud using Visual Studio:

  • Ensure that your development environment meets the pre-requisites
    • Vista or Windows Server 2008 with the latest service packs, or Windows 7 (sorry, Windows XP not supported)
    • Visual Studio 2008 (or Visual Web Developer Express Edition) SP1
    • .NET Framework 3.5 SP1
    • SQL Server 2005/2008 Express Edition (workaround available for using non-Express editions)
  • Install Windows Azure Tools for Microsoft Visual Studio July 2009 CTP and related hotfixes
    • Download from http://msdn.microsoft.com/en-us/vstudio/cc972640.aspx
    • Installing the tools also installs the Windows Azure SDK
    • Provides Development Fabric and Development Storage services to simulate the cloud on your local development machine
    • Provides Visual Studio templates for quickly creating cloud-based ASP.NET applications and WCF services
  • Sign up for an Azure portal account using your Windows Live ID

Azure cloud services are free during the CTP, but Windows Azure and SQL Azure require invitation tokens that you need to request before you can use them. Also be aware that there could be a waiting period, if invitation requests during the CTP are in high demand, and that PDC attendees get higher priority in the queue. As of the July 2009 CTP, invitation tokens are no longer required for .NET Services or Live Services, and you can proceed directly to the Azure portal at http://windows.azure.com to start configuring those services.

The process to request invitation tokens for Windows Azure and SQL Azure can be a little confusing with the current CTP, so I’ve prepared the following step-by-step instructions for you:

  • To request an invitation token for Windows Azure:
    • Go to http://www.microsoft.com/azure/register.mspx and click Register for Azure Services
    • A new browser window will launch to the Microsoft Connect Web site
    • If you’re not currently logged in with your Windows Live ID, you’ll be prompted to do so now
    • You’ll then be taken through a short wizard-like registration process and asked to provide some profile information
    • You’ll then arrive at the Applying to the Azure Services Invitation Program page, which you need to complete and submit
    • Finally, you should receive a message that invitation tokens for both Windows Azure and SQL Azure will be sent to your Connect email account. Note that this is incorrect, and you will only be sent a Windows Azure token.
  • To request an invitation token for SQL Azure, you need to join the mailing list on the SQL Azure Dev Center:
    • Go to http://msdn.microsoft.com/en-us/sqlserver/dataservices/default.aspx
    • Click Register for the CTP
    • If you’re not currently logged in with your Windows Live ID, you’ll be prompted to do so now
    • You’ll then arrive at the mailing list signup page, which you need to complete and submit
    • Finally, you should receive a message that you will be notified when your SQL Azure access becomes available
    • Note: SQL Azure tokens are limited; you may have to be patient to receive one during the CTP

Once you receive your invitation tokens, you can log in to the Azure portal at http://windows.azure.com and redeem them.

You don’t need to wait for your Windows Azure invitation code to begin building cloud applications and services with Visual Studio. Once you’ve installed the Windows Azure Tools for Microsoft Visual Studio July 2009 CTP and related hotfixes as described earlier, you’ll have the necessary templates and runtime components for creating and testing Windows Azure projects. Using the Development Fabric service that emulates the cloud on your desktop, you can run and debug Windows Azure projects locally (of course, you won’t be able to deploy them to the real cloud until you receive and redeem your Windows Azure invitation token). It’s easy to do, and in an upcoming post, I’ll show you how.


Optimizing Factory Methods with Static Delegate Arrays

In my last post, I explained the benefits of using factory classes to achieve polymorphism in your business applications, and demonstrated how implementing such an architecture greatly improves the design and maintainability of your code. In this post (part 2 if you will) I’ll first quickly review the benefits of the factory pattern, and then demonstrate how to refactor the typical switch/case factory implementation to achieve ultra-high performance using a static array of delegates.

Here is the TenderType enum and TenderFactory code from the last post:

public enum TenderType
{
  Cash,
  DebitCard,
  CreditCard,
   //
   // more tender types
   //
}

public static class TenderFactory
{
  public static ITender CreateTender(TenderType tenderType)
  {
    switch (tenderType)
    {
      case TenderType.Cash:
        return new CashTender();

      case TenderType.DebitCard:
        return new DebitCardTender();

      case TenderType.CreditCard:
        return new CreditCardTender();
       //
       // more case statements here
       //
      default:
        throw new Exception("Unsupported tender: " + (int)tenderType);
    }
  }
}

In this example, we have a factory method that accepts a TenderType enum and uses it to create a concrete tender object corresponding to that enum. All the different tender objects implement the ITender interface, so that’s the type returned by the factory. Because the different tender behaviors are encapsulated in the set of concrete tender classes, client code can simply call this factory method to retrieve an ITender object for any tender type and work with that object without knowing the actual type. That is, you can write polymorphic code that doesn’t need to be maintained as concrete type behaviors change or new concrete types are added in the future.
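
For instance, determining whether a PIN is required for a debit card purchase becomes a two-liner (a minimal usage sketch):

ITender tender = TenderFactory.CreateTender(TenderType.DebitCard);
bool isPinRequired = tender.IsPinRequired;   // true for debit cards; no switch/case in the client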

It’s easy to recognize when you should be using this pattern in your own applications. When you find yourself coding the same switch/case ladder repeatedly, that’s a sure indication that your architecture can be improved significantly by using factories and polymorphism. It’s important to sensitize yourself so that you detect this condition early in your development efforts and apply the factory pattern then. It will be a much greater burden to refactor your code to use the factory pattern later on, once you have allowed a proliferation of switch/case statements to spread throughout your code.

So now we’ve got a single switch/case ladder in a centralized factory method, which eliminates the multitudes of duplicate switch/case ladders that would have otherwise been scattered throughout our application’s code. A huge improvement, to be certain, but can we improve it even more? You bet!

The Need For Speed

Because object instantiation is a very common occurrence, factory methods in general need to perform well. A performance bottleneck in the factory will quickly translate to a serious application-wide problem. If you’re dealing with a small set of concrete types (like our tenders example), and/or your factory code is running on a desktop or other single-user environment, the switch/case implementation of the factory method shown above is probably perfectly suitable and won’t require optimization.

In the real world, however, a factory method frequently supports dozens or even hundreds of different concrete types, which means coding a very long switch/case ladder in that method. Next consider the fact that the switch/case ladder is evaluated sequentially at runtime, top-to-bottom. This means that a matching case statement further down the ladder takes longer to reach than one further up the ladder. Again, for a handful of types in a single-user environment, that differential is imperceptible. But if your factory code supports many types through a long switch/case ladder, and runs on a multi-user application server that is servicing many concurrent client requests for new object instances in high demand, then it becomes critical that your factory code executes as quickly as possible.

A Golden Rule of Optimization: Let The Compiler Do The Work

The following alternative implementation is vastly superior to the switch/case version:

public static class TenderFactory
{
  private delegate ITender CreateTenderMethod();

  // Important: The order of delegates in this static array must match the
  //  order that the TenderType enums are defined.
  private static CreateTenderMethod[] CreateTenderMethods = new CreateTenderMethod[]
  {
    delegate() { return new CashTender(); },
    delegate() { return new DebitCardTender(); },
    delegate() { return new CreditCardTender(); },
     //
     // more delegates here
     //
  };

  public static ITender CreateTender(TenderType tenderType)
  {
    var tender = CreateTenderMethods[(int)tenderType].Invoke();
    return tender;
  }
}

This factory code takes a novel approach. Let’s dissect it.

At the top of the class we define a delegate named CreateTenderMethod, which takes no parameters and returns an ITender object. We then declare a static array of CreateTenderMethod delegates and populate it with anonymous methods, one returning a new instance of each concrete tender type. Stop and think about what this means. The compiler emits each anonymous method as an ordinary compiled method, and the static array is built exactly once, the first time the factory class is used. From that point on, every call finds the array already sitting in memory with each element mapped to the method that returns the corresponding concrete type: there is no per-call cost for allocating storage from the heap, populating delegate instances, or walking a ladder of case comparisons. Having the compiler do the work up front, so it doesn’t have to be repeated at runtime, is one of the golden rules of optimization.

The CreateTender method itself is now ridiculously simple. It takes the incoming enum and converts it to an integer which it uses as an index into the static array. That instantaneously retrieves the correct delegate which points to the method that will return the concrete tender type specified by the enum. In an array of 250 elements, it won’t take any longer to retrieve the delegate for the 250th element than it will for the first. The Invoke method on the delegate actually runs the method and returns the correct ITender derivative to the requesting client. The only important thing to remember when using this technique, which you may have already realized on your own, is that the order of delegates in the array must match the order that the enums are defined (as mentioned in the code comments). A mismatch will obviously manifest itself as a serious runtime error.
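
If that ordering dependency makes you nervous, one variation worth knowing about (a sketch of a drop-in alternative, not the implementation shown above) keys the delegates on the enum value itself using a Dictionary<TenderType, Func<ITender>>, trading a small amount of lookup speed for immunity to reordering:

using System;
using System.Collections.Generic;

public static class TenderFactory
{
  // Keyed by TenderType, so the declaration order of the enum values no longer matters.
  private static readonly Dictionary<TenderType, Func<ITender>> CreateTenderMethods =
    new Dictionary<TenderType, Func<ITender>>
  {
    { TenderType.Cash,       () => new CashTender() },
    { TenderType.DebitCard,  () => new DebitCardTender() },
    { TenderType.CreditCard, () => new CreditCardTender() },
  };

  public static ITender CreateTender(TenderType tenderType)
  {
    Func<ITender> createTender;
    if (!CreateTenderMethods.TryGetValue(tenderType, out createTender))
    {
      throw new Exception("Unsupported tender: " + (int)tenderType);
    }
    return createTender();
  }
}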

What’s really nice here is that anonymous methods, added in C# 2.0, greatly reduce the amount of code this solution requires. Before anonymous methods, you’d need to explicitly create a one-line method for each concrete tender type, and then reference each of those one-line methods from an explicit delegate constructor in the static array. So this implementation is now not only significantly faster than the typical switch/case approach, it’s also just as easy to implement and maintain. Don’t you just love it when there are no downsides?
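
If you’d like to see the difference for yourself, here’s a rough micro-benchmark sketch. It assumes the two implementations have been given distinct class names so they can live side by side (say, SwitchTenderFactory and ArrayTenderFactory — names invented for this test), and that both contain a realistically long list of tender types; with only the three tenders shown above, the gap will be negligible:

using System;
using System.Diagnostics;

public static class FactoryBenchmark
{
  public static void Run()
  {
    const int iterations = 10000000;

    // Repeatedly create a tender that sits near the bottom of the switch/case ladder.
    Stopwatch timer = Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++)
    {
      ITender tender = SwitchTenderFactory.CreateTender(TenderType.CreditCard);
    }
    Console.WriteLine("switch/case:    {0} ms", timer.ElapsedMilliseconds);

    // Same work through the static delegate array.
    timer = Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++)
    {
      ITender tender = ArrayTenderFactory.CreateTender(TenderType.CreditCard);
    }
    Console.WriteLine("delegate array: {0} ms", timer.ElapsedMilliseconds);
  }
}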

Leveraging Polymorphism with Factory Classes

In this post, I’ll explain the benefits of using the factory pattern, and show you how to code a typical factory implementation to achieve polymorphism in your .NET applications. In my next post, I’ll show you how to optimize the factory method for ultra-high performance by using a static array of delegates.

Working with Different Types

Let’s use a typical POS (Point Of Sale) scenario to illustrate the factory pattern. POS applications run on checkout counters in retail stores, where the customer can pay with a variety of tender types, such as cash, credit, debit, voucher, etc. Each of these types exhibits different behaviors; for example, paying with cash pops open the cash drawer, paying by debit card requires a PIN, paying by credit card requires a signature, etc.

Without using a factory pattern to manage the different tender types, code to determine if a customer signature is required on checkout for a given tender type might look like this:

bool isSignatureRequired;
switch (tenderType)
{
  case TenderType.Cash:
    isSignatureRequired = false;
    break;


  case TenderType.DebitCard:
    isSignatureRequired = false;
    break;


  case TenderType.CreditCard:
    isSignatureRequired = true;   // credit cards require signature
    break;

   //
   // more case statements here
   //

  default:
    throw new Exception("Unsupported tender: " + (int)tenderType);
}

Similarly, to determine if a PIN is required for any given tender type:

bool isPinRequired;
switch (tenderType)
{
  case TenderType.Cash:
    isPinRequired = false;
    break;


  case TenderType.DebitCard:
    isPinRequired = true;   // debit cards require PIN
    break;


  case TenderType.CreditCard:
    isPinRequired = false;
    break;

   //
   // more case statements here
   //

  default:
    throw new Exception("Unsupported tender: " + (int)tenderType);
}

With this approach, the same switch/case “ladder” will appear numerous times scattered throughout your application, each one dealing with a different aspect of each possible tender type. In short order, maintenance will become a nightmare. Changing the behavior of any type means hunting down all the pertinent switch/case blocks and modifying them. Creating a new type means adding a new case to many existing switch/case blocks—wherever they are (good luck finding them all). It’s nearly impossible to gain a clear picture of how the tender types compare to one another, because bits of information about each tender are fragmented in switch/case blocks across the application’s codebase.

Creating a Factory

The factory pattern solves this problem by eliminating all these switch/case blocks from your application, and consolidating the logic for each type in its own class instead. All the classes for each type implement the same interface, so that you can work with an instance of any type without knowing or caring what the concrete type is, and get the information you need.

With a factory pattern in place, you won’t have to search for or modify one line of code in your core application logic when the behavior of an existing tender changes or a new tender type is added in the future. This represents a huge improvement over the scattered switch/case approach.

Here are the steps to implement the pattern:

1) Create the ITender interface:

With this interface, we have defined certain things that we know about every tender type, such as IsSignatureRequired, IsPinRequired, AllowCashBack, etc.

public interface ITender
{
  bool IsSignatureRequired { get; }
  bool IsPinRequired { get; }
  bool AllowCashBack { get; }
  bool PopOpenCashDrawer { get; }
   //
   // more members
   //
}

2) Define an enum with a value for each ITender derivative:

public enum TenderType
{
  Cash,
  DebitCard,
  CreditCard,
   //
   // more tender types
   //
}

3) Create concrete ITender classes. Here are three concrete classes that implement ITender:

public class CashTender : ITender
{
  bool ITender.IsSignatureRequired { get { return false; } }
  bool ITender.IsPinRequired       { get { return false; } }
  bool ITender.AllowCashBack       { get { return false; } }
  bool ITender.PopOpenCashDrawer   { get { return true; } }
}

public class DebitCardTender : ITender
{
  bool ITender.IsSignatureRequired { get { return false; } }
  bool ITender.IsPinRequired       { get { return true; } }
  bool ITender.AllowCashBack       { get { return true; } }
  bool ITender.PopOpenCashDrawer   { get { return false; } }
}

public class CreditCardTender : ITender
{
  bool ITender.IsSignatureRequired { get { return true; } }
  bool ITender.IsPinRequired       { get { return false; } }
  bool ITender.AllowCashBack       { get { return false; } }
  bool ITender.PopOpenCashDrawer   { get { return false; } }
}

4) Create the factory class:

public static class TenderFactory
{
  public static ITender CreateTender(TenderType tenderType)
  {
    switch (tenderType)
    {
      case TenderType.Cash:
        return new CashTender();


      case TenderType.DebitCard:
        return new DebitCardTender();


      case TenderType.CreditCard:
        return new CreditCardTender();


       //
       // more case statements here
       //

      default:
        throw new Exception("Unsupported tender: " + (int)tenderType);
    }
  }
}

With these elements in place, you can handle any ITender derivative throughout your application easily and elegantly. Given an enum for any particular tender type, a single call to the CreateTender factory method will return a new instance of the correct concrete ITender object. You can then work with the properties and methods of the returned instance and get the results expected for the specific tender type, without needing to know the specific tender type or testing for different tender types. This is the essence of polymorphism.

For example, to determine if a signature is required, it’s as simple as:

ITender tender = TenderFactory.CreateTender(tenderType);
bool isSignatureRequired = tender.IsSignatureRequired;

Unlike the code at the top of the post that retrieved this information without using the factory pattern, this code will never change, even as new tenders are added in the future. Adding support for a new tender (for example, food stamps) now involves only adding a new enum value, creating a new concrete class that implements all the ITender members for the new tender type, and adding a single case statement to the factory method’s switch/case block. Doing so allows you to instantly plug in new tender types without touching one line of code throughout the body of your core application logic. The code above requires no modifications to determine if a signature is required for a newly added tender.
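
To make that concrete, here’s what adding a hypothetical food stamp tender might look like (the name and the property values are illustrative only):

// 1) Add the new enum value.
public enum TenderType
{
  Cash,
  DebitCard,
  CreditCard,
  FoodStamp,
}

// 2) Encapsulate the new tender's behavior in its own concrete class.
public class FoodStampTender : ITender
{
  bool ITender.IsSignatureRequired { get { return false; } }
  bool ITender.IsPinRequired       { get { return false; } }
  bool ITender.AllowCashBack       { get { return false; } }
  bool ITender.PopOpenCashDrawer   { get { return false; } }
}

// 3) Add a single case to the factory method:
//      case TenderType.FoodStamp:
//        return new FoodStampTender();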

Enhancing and Optimizing the Factory Method

So now your application calls a method in the factory class to “new up” an ITender instance rather than explicitly instantiating a new ITender object through one of its constructors. You can enhance this pattern by encapsulating all the concrete classes in a separate assembly and scoping their constructors as internal (Friend, in VB) so that clients cannot circumvent the factory method and are instead forced to create new ITender instances by calling the factory method.

Another logical next step would be to create an abstract base class named TenderBase which would house common functionality that all tender types require. TenderBase would implement ITender, and all the concrete tender classes would inherit TenderBase instead of directly implementing ITender (though they will still implement ITender implicitly of course, since TenderBase implements ITender).
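
Here’s a rough sketch of that refactoring (the member defaults are illustrative; it also shows the internal-constructor idea from the previous paragraph):

public abstract class TenderBase : ITender
{
  // Conservative defaults; each concrete tender overrides only what differs.
  public virtual bool IsSignatureRequired { get { return false; } }
  public virtual bool IsPinRequired       { get { return false; } }
  public virtual bool AllowCashBack       { get { return false; } }
  public virtual bool PopOpenCashDrawer   { get { return false; } }

  // Common functionality shared by all tenders would live here as well.
}

public class CreditCardTender : TenderBase
{
  // Internal constructor forces clients to obtain instances through the factory.
  internal CreditCardTender() { }

  public override bool IsSignatureRequired { get { return true; } }
}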

It’s important to ensure that your factory methods execute as quickly as possible, especially in scenarios where there is a high-volume demand to create new objects. In my next post, I’ll show you an improved version of the factory method that significantly out-performs the switch/case approach shown here (especially when dealing with a great many entity types), by replacing the switch/case logic with a static array of delegates.

Rethinking the Dynamic SQL vs. Stored Procedure Debate with LINQ

Dynamic SQL is evil, right? We should only be using stored procedures to access our tables, right? That will protect us from SQL injection attacks and boost performance because stored procedures are parameterized and compiled, right?

Well, think again! The landscape of this debate has changed dramatically with the advent of Language-Integrated Query (LINQ) technologies for relational databases, such as LINQ to SQL (L2S) against SQL Server and LINQ to Entities (L2E) against the Entity Framework’s (EF) Entity Data Model (EDM) over any RDBMS with an ADO.NET provider for EF. Although both of these ORM technologies support the use of stored procedures, they are designed primarily to generate dynamic SQL.

For simplicity, the examples below only use LINQ to SQL, but the same principles apply to LINQ to Entities.

Comparing Dynamic SQL and Stored Procedures in LINQ Queries

Let’s compare the behavior of LINQ queries that generate dynamic SQL with LINQ queries that invoke stored procedures. In our example, we’ll use the Sales.Currency table in the AdventureWorks 2008 database, which contains a list of world currencies. The following L2S query generates dynamic SQL to select just those currencies with codes beginning with the letter B:

// LINQ to SQL query using dynamic SQL
var q =
    from currency in ctx.Currencies
    where currency.CurrencyCode.StartsWith("B")
    select currency;

The ctx variable is the L2S DataContext, and its Currencies property is a collection mapped to the Sales.Currency table in the database (it’s common practice to singularize table names and pluralize collection names).

Modifying this query to use a stored procedure is easy. Assuming we create a stored procedure named SelectCurrencies that executes SELECT * FROM Sales.Currency and then import that stored procedure into either our L2S (.dbml) or EF (.edmx) model, the LINQ query requires only a slight modification to have it call the stored procedure instead of generating dynamic SQL against the underlying table:

// LINQ to SQL using a stored procedure
var q =
    from currency in ctx.SelectCurrencies()
    where currency.CurrencyCode.StartsWith("B")
    select currency;

The only change made to the query is the substitution of the SelectCurrencies method (which is a function that maps to the stored procedure we imported into the data model) for the Currencies property (which is a collection that maps directly to the underlying Sales.Currency table). 

Examining the Generated SQL

There is a major performance problem with this new version of the query, however, which may not be immediately apparent. To understand, take a look at the generated SQL for both queries:

Query 1 (dynamic SQL):

SELECT [t0].[CurrencyCode], [t0].[Name], [t0].[ModifiedDate]
FROM [Sales].[Currency] AS [t0]
WHERE ([t0].[CurrencyCode] LIKE @p0)
-- @p0: Input NVarChar (Size = 2; Prec = 0; Scale = 0) [B%]

Query 2 (stored procedure):

EXEC @RETURN_VALUE = [dbo].[SelectCurrencies]
-- @RETURN_VALUE: Output Int (Size = 0; Prec = 0; Scale = 0) [Null]

Now the problem should be blatantly obvious. The first query executes the filter condition for currencies with codes that start with the letter B in the WHERE clause of the SELECT statement on the database server (by setting parameter @p0 to ‘B%’ and testing with LIKE in the WHERE clause). Only results of interest are returned to the client across the network. The second query executes the SelectCurrencies stored procedure which returns the entire table to the client across the network. Only then does it get filtered by the where clause of the LINQ query to reduce that resultset and obtain only currencies with codes that start with B, while all the other (“non-B”) rows that just needlessly traversed the network from SQL Server are immediately discarded. That clearly amounts to wasted processing, and is a serious performance penalty for the use of stored procedures with LINQ.

Of course, one obvious solution to this problem is to modify the SelectCurrencies stored procedure to accept a @CurrencyCodeFilter parameter and change its SELECT statement to test against that parameter as follows: SELECT * FROM Sales.Currency WHERE CurrencyCode LIKE @CurrencyCodeFilter. That will ensure that only the rows of interest are returned from the server, just like the dynamic SQL version behaves. The LINQ query would then look like this:

// LINQ to SQL using a parameterized stored procedure
var q =
    from currency in ctx.SelectCurrencies("B%")
    select currency;

Performance problem solved, but this solution raises an obvious question: “where’s the WHERE?” – in the stored procedure, or in the LINQ query? It needs to be in the stored procedure to prevent unneeded rows from being returned across the network, but then it’s not in the LINQ query any more. So LINQ queries that you need optimized for stored procedures won’t have a where clause in them, and in my humble opinion, that seriously undermines the effectiveness and expressiveness of LINQ queries because the query is now far less “language-integrated”.

Revisiting the Big Debate

Clearly LINQ to SQL and Entity Framework want us to embrace hitting the database server with dynamic SQL, but many database professionals live by the creed “dynamic SQL is the devil, and thou shalt only use stored procedures.” So let’s re-visit the heated “dynamic SQL vs. stored procedure” debate.

Proponents of stored procedures cite the following primary reasons against using dynamic SQL:

1) Security: Your application becomes vulnerable to SQL injection attacks.

2) Performance: Dynamic SQL doesn’t get compiled like stored procedures do, so it’s slower.

3) Maintainability: Spaghetti code results, as server-side T-SQL code is then tightly coupled and interwoven with client-side C# or VB .NET.

Let’s address these concerns:

1) Security: Vulnerability to SQL injection attacks results from building T-SQL statements using string concatenation. Even stored procedures are vulnerable in this respect if they generate dynamic SQL by concatenating strings. The primary line of defense against SQL injection attacks is to parameterize the query, which is easily done with dynamic SQL. That is, instead of concatenating strings to build the T-SQL, compose a single string that has one or more parameters, and then populate the Parameters collection of the SqlCommand object with the parameter values (this is how the L2S query above prepared the command for SQL Server, as evidenced by the generated T-SQL that uses the parameter @p0 in the WHERE clause; see the sketch after this list for a hand-rolled equivalent).

2) Performance: You’re a little behind the times on this one. It’s true that stored procedures would get partially compiled to speed multiple executions in SQL Server versions 6.5 (released in 1996) and earlier. But as of SQL Server 7.0 (released in 1999), that is no longer the case. Instead, SQL Server 7.0 (and later) compiles and caches the query execution plan to speed multiple executions of the same query (where only parameter values vary), and that’s true whether executing a stored procedure or a SQL statement built with dynamic SQL.

3) Maintainability: This remains a concern if you are embedding T-SQL directly in your .NET code. But with LINQ, that’s not the case because you’re only composing a LINQ query (designed to be part of your .NET code); the translation to T-SQL occurs on the fly at runtime when you execute your application.
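
To illustrate the point made in item 1, here’s a hand-rolled ADO.NET sketch of a parameterized query (the connection string and filter value are placeholders); the user-supplied value travels as a parameter, never as concatenated T-SQL, so it cannot alter the structure of the statement:

using System;
using System.Data.SqlClient;

public static class CurrencyQueries
{
  public static void PrintCurrencies(string connectionString, string codePrefix)
  {
    const string sql =
      "SELECT CurrencyCode, Name FROM Sales.Currency WHERE CurrencyCode LIKE @prefix";

    using (SqlConnection connection = new SqlConnection(connectionString))
    using (SqlCommand command = new SqlCommand(sql, connection))
    {
      // The value is bound to @prefix as a parameter, not concatenated into the SQL text.
      command.Parameters.AddWithValue("@prefix", codePrefix + "%");

      connection.Open();
      using (SqlDataReader reader = command.ExecuteReader())
      {
        while (reader.Read())
        {
          Console.WriteLine("{0}: {1}", reader[0], reader[1]);
        }
      }
    }
  }
}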

Making a Good Compromise

These facts should change your perspective somewhat. But if you’re a die-hard stored procedure proponent who finds it hard to change your ways (like me), consider this compromise: allow dynamic SQL for SELECT only, but continue using stored procedures for INSERT, UPDATE, and DELETE (which can be imported into your data model just like a SELECT stored procedure can). This is a good strategy because LINQ queries only generate SELECT statements to retrieve data; they can’t update data. Only the SubmitChanges method on the L2S DataContext (or the SaveChanges method on the L2E ObjectContext) generates commands for updating data, and for those operations there is none of the downside to using stored procedures that there is for SELECT.

So you can (and should) stick to using stored procedures for INSERT, UPDATE, and DELETE operations, while denying direct access to the underlying tables (except for SELECT). Doing so allows you to continue using stored procedures to perform additional validation that cannot be bypassed by circumventing the application layer and communicating directly with the database server.
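
To close with a concrete (if hypothetical) illustration of the compromise, assume a LINQ to SQL DataContext named AdventureWorksDataContext whose insert, update, and delete behaviors have been mapped to stored procedures in the .dbml designer:

using (AdventureWorksDataContext ctx = new AdventureWorksDataContext())
{
  // SELECT: LINQ to SQL generates parameterized dynamic SQL, filtered on the server.
  Currency currency = ctx.Currencies.Single(c => c.CurrencyCode == "BZD");

  // UPDATE: because the update behavior is mapped to a stored procedure,
  // SubmitChanges calls that procedure instead of emitting a dynamic UPDATE statement.
  currency.Name = "Belize Dollar";
  ctx.SubmitChanges();
}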