Streaming Into LINQ to XML Using C# Custom Iterators and XmlReader

In this post, I’ll show you how to create LINQ to XML queries that don’t require you to first load and cache XML content into the in-memory LINQ to XML DOM (that is, without first populating an XDocument or XElement query source), but instead operate against an input stream implemented with a C# custom iterator method and an old-fashioned XmlReader object.

LINQ to XML

LINQ to XML, introduced with the .NET Framework 3.5, is a huge win for developers working with XML in any shape or form. Whether XML is being queried, parsed, or transformed, LINQ to XML can almost always be used as an easier alternative to previous technologies that are based on XML-specific languages (XPath, XQuery, XSLT).

At the center of the LINQ to XML stage lies a new DOM for caching XML data in memory. This object model, based on either a root XDocument object or independent XElement objects, represents a major improvement over the older XmlDocument-based DOM in numerous ways (details of which will serve as the topic for a future post). In terms of querying, the XDocument and XElement objects provide methods (such as Descendants) that expose collections of nodes which can be iterated by a LINQ to XML query.
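
For example, a small XML tree can be constructed functionally and queried immediately. Here's a minimal sketch (the element and attribute names are invented for illustration):

var root = new XElement("Customers",
  new XElement("Customer", new XAttribute("Country", "UK")),
  new XElement("Customer", new XAttribute("Country", "US")));

// Descendants returns an IEnumerable<XElement> that LINQ can iterate
foreach (var c in root.Descendants("Customer"))
  Console.WriteLine((string)c.Attribute("Country"));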

Consuming Sequences with LINQ

A common misconception among developers learning LINQ is that LINQ only consumes collections. That's not surprising, since one of the first benefits developers come to understand is that LINQ can easily query an input collection without coding a foreach loop. While this is certainly true, it's a very limited view of what can really be accomplished with LINQ.

The broader view is that LINQ works against sequences, which are often — but certainly not always — collections. A sequence can be generated by any object that implements IEnumerable<T> or any method that returns an IEnumerable<T>. Such objects and methods provide iterators that LINQ calls to retrieve one element after another from the source sequence being queried. For a collection, the enumerator simply walks the elements of the collection and returns them one at a time. But you can create your own class that implements IEnumerable<T> or your own custom iterator method which serves as the enumerator that returns a sequence based on something other than a collection.
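
To make that concrete, here is a minimal sketch of a custom iterator method that produces a sequence with no backing collection at all; LINQ queries it just as happily as it queries a List<int> (the method name is mine, purely for illustration):

// Generates values on demand; nothing is ever stored in a collection
private static IEnumerable<int> Squares(int count)
{
  for (var i = 1; i <= count; i++)
    yield return i * i;  // produced one at a time, as the consumer pulls
}

// Consumed by LINQ exactly like any collection-based sequence
var bigSquares = from n in Squares(10) where n > 50 select n;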

The point? LINQ is not limited to querying in-memory collections. It can be used to query any sequence, which is fed to the LINQ query by enumerator methods exposed by all classes that implement IEnumerable<T>. What this means is that you don’t necessarily need to load an entire XML document into an in-memory cache before you can query it using LINQ to XML. If you are querying very large documents in order to extract just a few elements of interest, you can achieve better performance by having your LINQ to XML query stream through the XML—without ever caching the XML in memory.

Querying Cached XML vs. Streamed XML

How do you write LINQ to XML queries that consume a read-only, forward-only input stream instead of a pre-populated in-memory cache? Easy. Refer to a custom iterator method instead of a pre-populated XDocument or XElement in the from clause of your query.

For example, consider the following query:

var xd = XDocument.Load("Customers.xml");
var domQuery =
  from c in xd.Descendants("Customer")
  where (string)c.Attribute("Country") == "UK"
  select c;

Now compare that query with this version:

var streamQuery =
  from c in StreamElements("Customers.xml", "Customer")
  where (string)c.Attribute("Country") == "UK"
  select c;

Both versions query the same XML and produce the same result (a sequence of customer nodes from the UK), but they consume their input sequences in completely different ways. The first version loads the XML content into an XDocument (an in-memory cache) and then queries the collection of nodes returned by the Descendants method (all <Customer> nodes) for those in the UK. The larger the XML document being queried, the more memory this approach consumes. The second version queries directly against an input stream that feeds the XML content in as a sequence, using a custom iterator method named StreamElements. This version consumes no more memory querying a huge XML file than it does a tiny one. Here's the implementation of the custom iterator method:

private static IEnumerable<XElement> StreamElements(
  string fileName,
  string elementName)
{
  using (var rdr = XmlReader.Create(fileName))
  {
    rdr.MoveToContent();
    while (rdr.Read())
    {
      // After XElement.ReadFrom, the reader is already positioned on the
      // node that follows the element, so test again here rather than
      // calling Read (which could skip an immediately adjacent match)
      while ((rdr.NodeType == XmlNodeType.Element) && (rdr.Name == elementName))
      {
        var e = XElement.ReadFrom(rdr) as XElement;
        yield return e;
      }
    }
  }
}

Understanding C# Custom Iterators

By definition, this method is a custom iterator since it returns an IEnumerable<T> and has a yield return statement in it. By returning IEnumerable<XElement> specifically (just like the Descendants method in the cached DOM version of the query does), this custom iterator is suitable as the source for a LINQ to XML query. The implementation of this method is a beautiful demonstration of how seamlessly new technology (LINQ to XML) integrates with old technology (the XmlReader object has been around since .NET 1.0). Let’s examine the method piece by piece to understand exactly how it works.

When the query first begins to execute and LINQ needs to start scanning the input sequence, the StreamElements custom iterator method is called with two parameters. The first parameter is the name of the input XML file (“Customers.xml”) and the second parameter is the element of interest to be queried (“Customer”, for each <Customer> node in the XML file). The method then opens an “old-fashioned” XmlReader against the XML file and advances the stream to the beginning of its content by invoking MoveToContent. It then enters a loop that reads through the stream node by node, testing each element against the second parameter (“Customer”, in this example). Each matching element (that is, every <Customer> element) is converted into an XElement object by invoking the static XElement.ReadFrom method against the reader. The XElement object representing the matching node is then yield returned to the LINQ query.

As the query continues to execute, and LINQ needs to continue scanning the input sequence for additional elements, the StreamElements custom iterator method resumes execution right after the point at which it yield returned the previous element, rather than entering at the top of the method like an ordinary method would. In this manner, the input stream advances and returns one matching XElement after another, while the LINQ query consumes that sequence and filters for UK to produce a new output sequence with the results. When the end of the input stream is reached, the reader’s Read method returns false, which ends the loop and exits the method, at which point the using block disposes (and thereby closes) the reader. When the custom iterator method exits, that signals to the LINQ query that there are no more input elements in the sequence, and the query completes execution at that point.
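
You can see this pull-based behavior by terminating the query early. A quick sketch (using the streamQuery from above): if only the first match is requested, the iterator is never resumed past that point, and the rest of the file is never read:

// Deferred execution: nothing is read from the file until enumeration begins
var firstUkCustomer = streamQuery.First();

// First() stops pulling after one match, so StreamElements never advances
// the XmlReader past that element; disposing the enumerator then triggers
// the using block, which closes the reader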

One important point that may seem obvious but is worth calling out anyway: running a streaming query more than once reads through the entire stream each time. So to best apply this technique, try to extract everything you need from the XML content in one pass (that is, with one LINQ query). If it turns out that you need to query the stream multiple times, reconsider whether you'd be better off caching the XML content once and then querying the in-memory cache multiple times.
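
If you do need the results more than once, one simple option is to materialize the streamed results a single time, for example with ToList, and then run any further operations against the in-memory list (a sketch):

// One pass over the file; the results are cached in a List<XElement>
var ukCustomers = streamQuery.ToList();

var count = ukCustomers.Count;    // no re-read of the file
var first = ukCustomers.First();  // still no re-read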

Custom iterators are a very powerful C# language feature, and they're not limited to use with LINQ. Streaming input into LINQ to XML queries as an alternative to a cached input source is just one example; there are numerous other ways to leverage custom iterators. To learn how they can be used with SQL Server 2008 Table-Valued Parameters to marshal an entire business object collection to a stored procedure in a single round-trip, view my earlier blog post that explains the details: SQL Server 2008 Table-Valued Parameters and C# Custom Iterators: A Match Made In Heaven!

SQL Server 2008 Table-Valued Parameters and C# Custom Iterators: A Match Made In Heaven!

The new Table-Valued Parameter (TVP) feature is perhaps the most significant T-SQL enhancement in SQL Server 2008. Many people have already discovered how to use this feature for passing entire sets of rows, with a typed schema definition, from one stored procedure to another on the server. This was not previously possible with either table variables or temp tables (although temp tables can, of course, be shared between multiple stored procedures executing on the same connection). Conveniently, TVPs can be passed freely between stored procedures and user-defined functions (UDFs) on the server.

A less widely-known capability of TVPs—yet arguably their most compelling facet—is the ability to marshal an entire set of rows across the network, from your ADO.NET client to your SQL Server 2008 database, with a single stored procedure call (only one round-trip) that accepts a single table-valued parameter. Developers have resorted to a variety of clever hacks over the years to reduce multiple round-trips for processing multiple rows—including XML, delimited text, or even (gasp) accepting hundreds (up to 2100!) of parameters. But special logic then needs to be implemented for packaging and unpackaging the parameter values on both sides of the wire. Worse, the code to implement that logic is often gnarly, and tends to impair developer productivity. None of those techniques even come close to the elegance and simplicity of using TVPs as a native solution to this problem.

I won’t cover the basics of TVPs, since many folks know them already and there are numerous blog posts on the Web that explain them (not to mention Chapter 2 in my book). What I will show is how to implement TVPs in ADO.NET client code for passing an entire DataTable object (all its rows and columns) to SQL Server with one stored procedure call.

If you’re a business-object person and not a DataSet/DataTable person, I’ll also show you how to do the same thing with an entire business object collection using C# custom iterators. There are few places (if any) other than this blog post that show the necessary steps to implement TVPs against business object collections (it’s not even covered in my book!). So let’s get started.

Preparing the SQL Server Database

Before diving into the C# code, we need to set up our database for a simple Order Entry scenario. That means creating an Order table and a related OrderDetail table. We’ll also need an OrderUdt and OrderDetailUdt user-defined table type (UDTT) to base our TVPs on, and a single InsertOrders stored procedure that accepts header and detail TVPs containing typed rows to be bulk inserted into the underlying tables:

CREATE TABLE [Order](
 OrderId int NOT NULL,
 CustomerId int NOT NULL,
 OrderedAt date NOT NULL,
 CreatedAt datetime2(0) NOT NULL,
  CONSTRAINT PK_Order PRIMARY KEY CLUSTERED (OrderId ASC))
GO

CREATE TABLE [OrderDetail](
 OrderId int NOT NULL,
 LineNumber int NOT NULL,
 ProductId int NOT NULL,
 Quantity int NOT NULL,
 Price money NOT NULL,
 CreatedAt datetime2(0) NOT NULL,
  CONSTRAINT PK_OrderDetail PRIMARY KEY CLUSTERED (OrderId ASC, LineNumber ASC))
GO

CREATE TYPE OrderUdt AS TABLE(
 OrderId int,
 CustomerId int,
 OrderedAt date)
GO

CREATE TYPE OrderDetailUdt AS TABLE(
 OrderId int,
 LineNumber int,
 ProductId int,
 Quantity int,
 Price money)
GO

CREATE PROCEDURE InsertOrders(
 @OrderHeaders AS OrderUdt READONLY,
 @OrderDetails AS OrderDetailUdt READONLY)
AS
 BEGIN

    -- Bulk insert order header rows from TVP
    INSERT INTO [Order]
     SELECT *, SYSDATETIME() FROM @OrderHeaders

    -- Bulk insert order detail rows from TVP
    INSERT INTO [OrderDetail]
     SELECT *, SYSDATETIME() FROM @OrderDetails

 END
GO

Notice how the schemas of the UDTTs and the actual tables are almost, but not quite, identical. The CreatedAt column in the two tables is not present in the UDTTs, because we won't accept the client's date and time; we can't trust it, since we don't know the client's time zone and its clock isn't in sync with the server or with other clients. So we'll accept all required column values for orders and order details from the client except CreatedAt, for which our stored procedure calls the SYSDATETIME function to supply the current date and time from the server clock (just like the GETDATE function, but it returns a datetime2 value rather than datetime). Independent of CreatedAt, the header table has an OrderedAt column that is accepted from the client as a date value (the “time-less” date and the enhanced datetime2 are new data types in SQL Server 2008 that I'll cover in a future post).

Passing a DataTable to SQL Server

The easiest way to call this stored procedure from a .NET client is to use a DataTable. You simply pass the entire populated DataTable object as the parameter value and set the parameter’s SqlDbType property to SqlDbType.Structured. Everything else is plain old vanilla ADO.NET:

var headers = new DataTable();
headers.Columns.Add("OrderId", typeof(int));
headers.Columns.Add("CustomerId", typeof(int));
headers.Columns.Add("OrderedAt", typeof(DateTime));

var details = new DataTable();
details.Columns.Add("OrderId", typeof(int));
details.Columns.Add("LineNumber", typeof(int));
details.Columns.Add("ProductId", typeof(int));
details.Columns.Add("Quantity", typeof(decimal));
details.Columns.Add("Price", typeof(int));

headers.Rows.Add(new object[] { 6, 51, DateTime.Today });
details.Rows.Add(new object[] { 6, 1, 12, 2, 15.95m });
details.Rows.Add(new object[] { 6, 2, 57, 1, 59.99m });
details.Rows.Add(new object[] { 6, 3, 36, 10, 8.50m });

headers.Rows.Add(new object[] { 7, 51, DateTime.Today });
details.Rows.Add(new object[] { 7, 1, 23, 2, 79.50m });
details.Rows.Add(new object[] { 7, 2, 78, 1, 3.25m });

using (var conn = new SqlConnection("Data Source=.;Initial Catalog=MyDb;Integrated Security=True;"))
{
  conn.Open();
  using (var cmd = new SqlCommand("InsertOrders", conn))
  {
    cmd.CommandType = CommandType.StoredProcedure;

    var headersParam = cmd.Parameters.AddWithValue("@OrderHeaders", headers);
    var detailsParam = cmd.Parameters.AddWithValue("@OrderDetails", details);

    headersParam.SqlDbType = SqlDbType.Structured;
    detailsParam.SqlDbType = SqlDbType.Structured;

    cmd.ExecuteNonQuery();
  }
}

The magic here is that the one line of code that invokes ExecuteNonQuery sends all the data for two orders with five details from the client over to the InsertOrders stored procedure on the server. The data is structured as a strongly-typed set of rows on the client, and winds up as a strongly-typed set of rows on the server in a single round-trip, without any extra work on our part.

This example passes a generic DataTable object to a TVP, but you can also pass a strongly typed DataTable or even a DbDataReader object (that is connected to another data source) to a TVP in exactly the same way. The only requirement is that the columns of the DataTable or DbDataReader correspond to the columns defined for the UDTT that the TVP is declared as.
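
For example, rows can be streamed from one database into a TVP on another without ever being buffered on the client. Here's a sketch, assuming a hypothetical InsertOrderHeaders stored procedure that accepts a single OrderUdt parameter, and a hypothetical SourceOrders table whose columns line up with it:

// InsertOrderHeaders and SourceOrders are invented for illustration
using (var srcConn = new SqlConnection("...source connection string..."))
using (var destConn = new SqlConnection("...destination connection string..."))
{
  srcConn.Open();
  destConn.Open();

  using (var srcCmd = new SqlCommand(
    "SELECT OrderId, CustomerId, OrderedAt FROM SourceOrders", srcConn))
  using (var rdr = srcCmd.ExecuteReader())
  using (var destCmd = new SqlCommand("InsertOrderHeaders", destConn))
  {
    destCmd.CommandType = CommandType.StoredProcedure;
    var tvpParam = destCmd.Parameters.AddWithValue("@OrderHeaders", rdr);
    tvpParam.SqlDbType = SqlDbType.Structured;
    destCmd.ExecuteNonQuery();  // rows flow from the reader to the server
  }
}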

So TVPs totally rock, eh? But hold on, the TVP story gets even better…

Passing a Collection of Objects to SQL Server

What if you’re working with collections populated with business objects rather than DataTable objects populated with DataRow objects? You might not think at first that business objects and TVPs could work together, but the fact is that they can… and quite gracefully, too. All you need to do is implement the IEnumerable<SqlDataRecord> interface in your collection class. This interface requires your collection class to supply a custom iterator method named GetEnumerator (sorry, VB .NET doesn’t support custom iterators), which ADO.NET calls to fetch each record from the collection when you invoke ExecuteNonQuery. Here are the detailed steps:

First we’ll define the OrderHeader and OrderDetail classes and properties, similar to the way we created the DataTable objects and columns before:

public class OrderHeader
{
  public int OrderId { get; set; }
  public int CustomerId { get; set; }
  public DateTime OrderedAt { get; set; }
}

public class OrderDetail
{
  public int OrderId { get; set; }
  public int LineNumber { get; set; }
  public int ProductId { get; set; }
  public int Quantity { get; set; }
  public decimal Price { get; set; }
}

Ordinarily, List<OrderHeader> and List<OrderDetail> objects might be suitable for serving as collections of OrderHeader and OrderDetail objects in our application. But these collections won’t suffice on their own as input values for TVPs, because List<T> doesn’t implement IEnumerable<SqlDataRecord>. We need to add that ourselves. So we’ll define OrderHeaderCollection and OrderDetailCollection classes that inherit List<OrderHeader> and List<OrderDetail> respectively, and also implement IEnumerable<SqlDataRecord> to “TVP-enable” them:

public class OrderHeaderCollection : List<OrderHeader>, IEnumerable<SqlDataRecord>
{
  IEnumerator<SqlDataRecord> IEnumerable<SqlDataRecord>.GetEnumerator()
  {
    var sdr = new SqlDataRecord(
     new SqlMetaData("OrderId", SqlDbType.Int),
     new SqlMetaData("CustomerId", SqlDbType.Int),
     new SqlMetaData("OrderedAt", SqlDbType.Date));

    foreach (OrderHeader oh in this)
    {
      sdr.SetInt32(0, oh.OrderId);
      sdr.SetInt32(1, oh.CustomerId);
      sdr.SetDateTime(2, oh.OrderedAt);

      yield return sdr;
    }
  }
}

public class OrderDetailCollection : List<OrderDetail>, IEnumerable<SqlDataRecord>
{
  IEnumerator<SqlDataRecord> IEnumerable<SqlDataRecord>.GetEnumerator()
  {
    var sdr = new SqlDataRecord(
     new SqlMetaData("OrderId", SqlDbType.Int),
     new SqlMetaData("LineNumber", SqlDbType.Int),
     new SqlMetaData("ProductId", SqlDbType.Int),
     new SqlMetaData("Quantity", SqlDbType.Int),
     new SqlMetaData("Price", SqlDbType.Money));

    foreach (OrderDetail od in this)
    {
      sdr.SetInt32(0, od.OrderId);
      sdr.SetInt32(1, od.LineNumber);
      sdr.SetInt32(2, od.ProductId);
      sdr.SetInt32(3, od.Quantity);
      sdr.SetDecimal(4, od.Price);

      yield return sdr;
    }
  }
}

I’ll explain only the OrderHeaderCollection class; you can then infer how the OrderDetailCollection class (or any collection class of your own) implements the custom iterator needed to support TVPs.

First, again, it inherits List<OrderHeader>, so an OrderHeaderCollection object is everything that a List<OrderHeader> object is. This means, by the way, that it implicitly implements IEnumerable<OrderHeader>, which is what makes any sequence “foreach-able” or “LINQ-able”. But in addition, it explicitly implements IEnumerable<SqlDataRecord>, which means it also has a custom iterator method for ADO.NET to consume when an instance of this collection class is assigned to a SqlDbType.Structured parameter for piping over to SQL Server with a TVP.
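
In other words, the same collection exposes two different enumerators, and which one you get depends on the type you enumerate it as (a minimal sketch):

var orders = new OrderHeaderCollection();
orders.Add(new OrderHeader { OrderId = 6, CustomerId = 51, OrderedAt = DateTime.Today });

// The inherited List<OrderHeader> enumerator yields OrderHeader objects
foreach (OrderHeader oh in orders) { /* ... */ }

// The explicit implementation yields SqlDataRecord objects; this is the
// enumerator ADO.NET uses for a SqlDbType.Structured parameter
foreach (SqlDataRecord rec in (IEnumerable<SqlDataRecord>)orders) { /* ... */ }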

Every enumerable class requires a matching enumerator method, so not surprisingly, implementing IEnumerable<SqlDataRecord> means providing a GetEnumerator method that returns an IEnumerator<SqlDataRecord>. This method first initializes a new SqlDataRecord object with a schema that matches the UDTT that the TVP is declared as. It then enters a loop that iterates all the elements in the collection (possible because List<OrderHeader> implicitly implements IEnumerable<OrderHeader>). On each iteration, it sets the column values of the SqlDataRecord object to the property values of the current OrderHeader element, and then issues the magic yield return statement. By definition, any method (like this one) that returns IEnumerator<T> and contains a yield return statement is a custom iterator method that returns a sequence of T objects, one at a time, until the method's execution path completes (in this case, when the foreach loop finishes).

The crux of this is that we never call this method directly. Instead, when we invoke ExecuteNonQuery to run a stored procedure with a SqlDbType.Structured parameter (that is, a TVP), ADO.NET expects the collection passed as the parameter value to implement IEnumerable<SqlDataRecord>, so that IEnumerable<SqlDataRecord>.GetEnumerator can be called internally to fetch each new record for piping over to the server. When the first element is fetched from the collection, GetEnumerator is entered, and the SqlDataRecord is initialized and then populated with values using the SetInt32 and SetDateTime methods (there's a SetXxx method for each data type). That SqlDataRecord “row” is then pushed into the pipeline to the server by yield return. When the next element is fetched, the GetEnumerator method resumes from the point where it yield returned the previous element, rather than entering again from the top. This means the SqlDataRecord gets initialized with schema information only once, while its population with one element after another is orchestrated by the ADO.NET code behind ExecuteNonQuery, which actually ships one SqlDataRecord after another to the server.

The actual ADO.NET code is 100% identical to the code at the top of the post that works with DataTable objects. Substituting a collection for a DataTable object requires no code changes and works flawlessly, provided the collection implements IEnumerable<SqlDataRecord> and provides a GetEnumerator method that maps each object instance to a SqlDataRecord and each object property to a column defined by the UDTT that the TVP is declared as.
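
To illustrate, only the population code changes; the parameter assignment and the ExecuteNonQuery call are exactly the same as in the DataTable version shown earlier (a sketch, reusing that snippet's cmd object):

var headers = new OrderHeaderCollection
{
  new OrderHeader { OrderId = 6, CustomerId = 51, OrderedAt = DateTime.Today }
};
var details = new OrderDetailCollection
{
  new OrderDetail { OrderId = 6, LineNumber = 1,
                    ProductId = 12, Quantity = 2, Price = 15.95m }
};

// Identical ADO.NET code; the collections simply replace the DataTables
var headersParam = cmd.Parameters.AddWithValue("@OrderHeaders", headers);
var detailsParam = cmd.Parameters.AddWithValue("@OrderDetails", details);
headersParam.SqlDbType = SqlDbType.Structured;
detailsParam.SqlDbType = SqlDbType.Structured;
cmd.ExecuteNonQuery();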

How freaking c-o-o-l is that?