Sergio and the sigil

Designing With Lambdas - Part II

Posted by Sergio on 2008-04-14

In my last post I went through a very simple example of applying lambdas to achieve DRYer code.

In this installment I'll cheat a little and rehash a previous article I wrote before this blog existed. The article fits rather nicely in this series.

Creating XML with .Net

In .Net, two of the most popular ways of creating XML are the System.Xml.XmlDocument class, which implements the XML DOM, and System.Xml.XmlTextWriter. There's an interesting new way in VB9 using XML Literals, but it is hardly popular at the time of this writing.

These APIs are obviously old-timers in .Net and were created before lambdas were available. For the sake of comparison, let's see how we would write the following XML document using these two APIs.

<?xml version="1.0" encoding="utf-8"?>
<children>
    <!--Children below...-->
    <child age="1" referenceNumber="ref-1">child &amp; content #1</child>

    <child age="2" referenceNumber="ref-2">child &amp; content #2</child>
    <child age="3" referenceNumber="ref-3">child &amp; content #3</child>
    <child age="4" referenceNumber="ref-4">child &amp; content #4</child>

    <child age="5" referenceNumber="ref-5">child &amp; content #5</child>
    <child age="6" referenceNumber="ref-6">child &amp; content #6</child>
    <child age="7" referenceNumber="ref-7">child &amp; content #7</child>

    <child age="8" referenceNumber="ref-8">child &amp; content #8</child>
    <child age="9" referenceNumber="ref-9">child &amp; content #9</child>
</children>

With the good ol' DOM, this document could be produced using something like this.

XmlDocument xml = new XmlDocument();
XmlElement root = xml.CreateElement("children");
xml.AppendChild(root);

XmlComment comment = xml.CreateComment("Children below...");
root.AppendChild(comment);

for(int i = 1; i < 10; i++)
{
	XmlElement child = xml.CreateElement("child");
	child.SetAttribute("age", i.ToString());
	child.SetAttribute("referenceNumber", "ref-" + i);
	child.InnerText = "child & content #" + i;
	root.AppendChild(child);
}

string s = xml.OuterXml;

Nothing too dramatic here. But my argument is that the only thing the DOM API has going for it is its ubiquity, which is no small feat considering how clunky the API is. Look at all those set-this and append-that calls. Remember when you were first learning the DOM and could never recall how attributes were set?
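To illustrate that last point, here's the attribute dance many of us went through before discovering SetAttribute (a small hypothetical snippet, reusing the xml and child variables from the code above):

// the long way: create the attribute node, set its value, attach it
XmlAttribute age = xml.CreateAttribute("age");
age.Value = "1";
child.Attributes.Append(age);

// the shortcut everyone eventually memorizes
child.SetAttribute("age", "1");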

Now it's the XmlTextWriter's turn. Here's the code to write the same XML document.

StringWriter sw = new StringWriter();
XmlTextWriter wr = new XmlTextWriter(sw);

wr.WriteStartDocument();
wr.WriteStartElement("children");
wr.WriteComment("Children below...");

for(int i=1; i<10; i++)
{
	wr.WriteStartElement("child");
	wr.WriteAttributeString("age", i.ToString());
	wr.WriteAttributeString("referenceNumber", "ref-" + i);
	wr.WriteString("child & content #" + i);
	wr.WriteEndElement();
}

wr.WriteEndElement();
wr.WriteEndDocument();


wr.Flush();
wr.Close();
string s = sw.ToString();

The XmlTextWriter API is rather efficient but, golly, is it a b!tch to use. No kidding, folks. Miss one of those WriteEndXXXXXX and you're toast. Good luck in your debugging session.
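To see why, here's a minimal sketch (my illustration, not code from the original article) of what happens when a single WriteEndElement goes missing: the next element silently becomes a child instead of a sibling.

wr.WriteStartElement("a");
wr.WriteString("first");
// oops, forgot wr.WriteEndElement() here...
wr.WriteStartElement("b"); // <b> now nests inside <a>
wr.WriteEndElement();
// output: <a>first<b /></a> instead of <a>first</a><b />

No compiler error, no exception, just quietly wrong XML.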

But enough bashing our favorite APIs. The point here is just to sketch what an API like this could look like if it were designed in the era of lambdas.

XmlBuilder - let the lambdas in

What if we could somehow wrap the XmlTextWriter in a way that we could never forget to close an element? Remember how we wrapped the code in FileUtil.EachLine in the first installment of this series? We wrote that method in such a way that the file will never be left open by accident. I think we could do the same with the XmlTextWriter API.

Take a moment to inspect the following code. Put yourself in the shoes of a developer who is trying to write XML for the first time and needs to choose an XML API.

string s = XmlBuilder.Build(xml =>
{
	xml.Root(children =>
	{
		children.Comment("Children below...");

		for(int i = 1; i < 10; i++)
		{
			children.Element(child =>
			{
				child["age"] = i.ToString();
				child["referenceNumber"] = "ref-" + i;
				child.AppendText("child & content #" + i);
			});
		}
	});
});

Did you notice how the code structure maps nicely to the XML document structure? See how there's no way for you to forget one of those AppendChild calls from the DOM or WriteEndElement from the XmlTextWriter?

I particularly like the way the attributes are defined using the indexer syntax. Do you see how I chose to format the lambdas so that they look like C# control blocks? Placing the opening brace of the lambda on the next line creates an indented block of code that defines some form of context. The context in this case is "inside this block I'll be building one XML element. When the block ends, the element ends."

You can download the code and play with it. It's only a proof of concept and there's a lot of missing functionality that I hope to implement one day, probably when I decide to use it in some real project.

Explanation of the code

Below you can see an excerpt from the code, showing how the Element() method was implemented. Let's discuss it.

public virtual void Element(Action<XmlElementBuilder> build)
{
	string name = build.Method.GetParameters()[0].Name;
	Element(name, new Dictionary<string, string>(), build);
}

public virtual void Element(string localName, 
			Action<XmlElementBuilder> build)
{
	Element(localName, new Dictionary<string, string>(), build);
}

public virtual void Element(string localName, 
			IDictionary<string, string> attributes, 
			Action<XmlElementBuilder> build)
{
	XmlElementBuilder child = new XmlElementBuilder(localName, Writer);
	
	Writer.WriteStartElement(localName);
	child._tagStarted = true;

	foreach(var att in attributes)
		child[att.Key] = att.Value;

	build(child);// <-- element content is generated here
	Writer.WriteEndElement();
	_contentAdded = true;
}

Looking at the various overloads of this method we can see how the lambda comes into play, plus at least one more trick. The very first overload reflects into the given delegate (the lambda) to determine the name used for the single parameter of Action<XmlElementBuilder>. That's how we did not need to specify the children and child node names. Of course this is not always desirable or possible, because the naming rules for XML elements are different from those for C# identifiers, so the other overloads let us specify the node name.
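In other words, these two calls produce the same <child> element (a hypothetical snippet, assuming a builder variable named children as in the earlier example):

// name inferred by reflecting on the lambda parameter
children.Element(child => child.AppendText("hi"));

// name passed explicitly; the parameter name no longer matters
children.Element("child", el => el.AppendText("hi"));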

The last overload of Element() is where the real work happens. Writer.WriteStartElement(localName) opens the element; build(child) invokes the lambda, passing a builder instance for whatever goes inside the element; and Writer.WriteEndElement() keeps us in sync with the element we started, ending it before the method exits.

For easier reference I'm including the code for XmlElementBuilder and its base class.

public class XmlElementBuilder : XmlBuilderBase
{
	internal XmlElementBuilder(string localName, XmlTextWriter writer)
		: base(writer)
	{
		Name = localName;
	}

	public string Name { get; protected set; }

	public void AppendText(string text)
	{
		Writer.WriteString(text);
	}
}
public abstract class XmlBuilderBase
{
	protected XmlBuilderBase(XmlTextWriter writer)
	{
		Writer = writer;
	}

	internal XmlTextWriter Writer { get; set; }
	private bool _contentAdded = false;
	private bool _tagStarted = false;

	public virtual void Comment(string comment)
	{
		Writer.WriteComment(comment);
		_contentAdded = true;
	}

	public virtual void Element(Action<XmlElementBuilder> build)
	{
		string name = build.Method.GetParameters()[0].Name;
		Element(name, new Dictionary<string, string>(), build);
	}

	public virtual void Element(string localName, Action<XmlElementBuilder> build)
	{
		Element(localName, new Dictionary<string, string>(), build);
	}

	public virtual void Element(string localName, IDictionary<string, string> attributes, Action<XmlElementBuilder> build)
	{
		XmlElementBuilder child = new XmlElementBuilder(localName, Writer);
		
		Writer.WriteStartElement(localName);
		child._tagStarted = true;

		foreach(var att in attributes)
			child[att.Key] = att.Value;

		build(child);// <-- element content is generated here
		Writer.WriteEndElement();
		_contentAdded = true;
	}

	Dictionary<string, string> _attributes = new Dictionary<string, string>();
	
	public string this[string attributeName] 
	{
		get
		{
			if(_attributes.ContainsKey(attributeName))
				return _attributes[attributeName];
			return null;
		}
		set
		{
			if(_contentAdded)
				throw new InvalidOperationException(
					"Cannot add attributes after" + 
					" content has been added to the element.");

			_attributes[attributeName] = value;

			if(_tagStarted)
				Writer.WriteAttributeString(attributeName, value);
		}
	}
}

I realize the code I'm providing here is not a complete solution for all XML creation needs, but that's also not the point of this series. The idea here is to explore ways to incorporate lambdas in the API. When you think about it, this design has been possible all along via delegates, since .Net 1.0. Anonymous delegates made it much, much better. But only with the expressive lambda syntax are we seeing an explosion of this type of delegate usage.
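Just to make that evolution concrete, here's the same hypothetical Element call written both ways:

// C# 2.0 anonymous delegate: workable, but noisy
children.Element("child", delegate(XmlElementBuilder child)
{
	child.AppendText("content");
});

// C# 3.0 lambda: identical behavior, minus the ceremony
children.Element("child", child => child.AppendText("content"));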

Designing With Lambdas - Part I

Posted by Sergio on 2008-04-12

When our programming language of choice gets a new feature, it's usually not that hard to start using that feature right away from a consumer's point of view.

I could use the introduction of generics in .Net 2.0 as an example. When I wrote my first C# 2.0 piece of code, it already made use of the existing generic classes and methods, especially the ones in the System.Collections.Generic namespace such as List<T> and Dictionary<TKey,TValue>.

But it took a little more time until I learned how to design my own classes offering generic functionality. Reaching the balance of when to create generic classes, when to create generic methods, or when not to use generics only comes with some exercise.

I think this will be the case with lambdas for many people, including myself. Detecting opportunities to apply lambdas can make all the difference between a class that is a joy to use and one that is just the same old thing.

Processing lines in a file

My first example will be a more concise and less error-prone way of processing lines in a text file. Consider this hopefully familiar piece of code.

using(StreamReader rd = File.OpenText("Data.txt"))
{
	string line = rd.ReadLine();
	while(line != null)
	{
		DoSomething(line);
		// do more stuff with the line text

		//move on
		line = rd.ReadLine();
	}
}

How many times have you written something like this over and over? I know I did. If I were to compare the various times I implemented this, I would probably notice that the only thing that is different is the logic inside the while block. This should be a clue that a delegate or lambda could help make this pattern reusable.

But how do we create a reusable method that performs this common task without providing the logic inside the while? The last paragraph gave away the answer: delegates.

Let's create a helper class with a method to encapsulate the pattern at hand.

public static class FileUtil
{
	public static void EachLine(string fileName, Action<string> process)
	{
		using(StreamReader rd = File.OpenText(fileName))
		{
			string line = rd.ReadLine();
			while(line != null)
			{
				process(line);
				line = rd.ReadLine();
			}
		}
	}
}

The body of the EachLine method is almost the same as the original implementation we started with. The difference, as expected, is that we replaced the DoSomething(line) with a call to process, which is a delegate of type Action<string>, meaning that it expects a function that accepts a single parameter of type string and does not have a return value.

Using our new method, we can rewrite the original example like this.

FileUtil.EachLine("Data.txt", line => DoSomething(line));

Not bad. In this particular case, because we are just forwarding the line parameter to DoSomething, the call can be simplified even further by passing the method name directly (a method group conversion).

FileUtil.EachLine("Data.txt", DoSomething);
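Because a lambda can capture local variables, the same method also handles stateful processing without any extra API. A hypothetical line-counting example:

int errorCount = 0;
FileUtil.EachLine("Data.txt", line =>
{
	if (line.Contains("ERROR"))
		errorCount++;
});
Console.WriteLine("{0} error lines found", errorCount);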

There you have it. Hopefully this assists someone in their journey in this new lambda thing.

Trying to get rid of MS Access

Posted by Sergio on 2008-04-09

I have this small personal organizer application that helps me keep track of where my hard-earned money is going. There's nothing special about this application other than that it was designed to be used only by myself and it works exactly the way I think it should.

This application has been a trusty companion for the last 10 years and it needs its well-deserved retirement. It is the last piece of VB6 that I have installed on my system, and since I stopped installing Visual Studio 6 years ago, I have been living with a couple of known bugs. I also have not added any new feature in a long time (maybe since the year 2000).

This year I decided to finally rewrite this app in .Net and I have an interesting choice to make. The old app uses MS Access for its database and, while I know I could very well keep using Access, I just don't want to deal with Access anymore. It's a technology from the last century and I think there must be something better out there.

A few things I need the database to support:

  • I need to be able to carry the app on a thumb drive and run it on any system that has only .Net installed
  • Must be in-process (no service; sorry, SQL Express and mySQL)
  • Must be as compatible with SQL-92 as possible
  • Even better if it's supported by the popular ORM tools

After some brief research I chose a few candidates that seemed convenient: SQLce (because of my familiarity with SQL Server) and SQLite (because it's everywhere: it comes on the Mac, it's trivial to install on Linux, and it's the new default database in Rails 2.0).

As I'm increasingly living in a multi-platform environment, I think I'm leaning towards SQLite, but I'll welcome other suggestions that fit in the requirements.

I'll return to this topic with my findings and overall development experience with the chosen database in the near future. For now I'll leave these useful SQLite links.

The new data types in SQL Server 2008

Posted by Sergio on 2008-04-06

I can't wait to use SQL 2008. I wish I could convince my DBA to jump on it as soon as it goes RTM, but the data peeps don't suffer from the same short attention span as us developers.

Reading a recent Technet Magazine article and attending the launch event in Chicago made me drool for the new features. And I don't usually care that much for database technologies.

The new features that are more within reach for developers are the new data types.

Spatial Data Types

A good part of my work involves GIS databases and overlaying business intelligence on top of it (or deriving BI from it). As such, it's with great interest that I see the addition of spatial data types in SQL 2008.

With SQL 2008 I'll be able to treat all those latitude/longitude pairs as first-class citizens in my tables and stored procedures. Add to that all the new supporting functions that come with these data types. That means I won't have to write my own function to determine the bounding rectangle around a collection of points or shapes, nor one to calculate the distance between two coordinate points taking the Earth's shape into account. Believe me, this can be huge.
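As a taste of what that enables, here's a minimal sketch using the round-earth geography type (the coordinates are made up for illustration):

DECLARE @chicago geography = geography::Point(41.88, -87.63, 4326)
DECLARE @seattle geography = geography::Point(47.61, -122.33, 4326)
-- STDistance on the geography type returns meters,
-- computed along the Earth's surface
SELECT @chicago.STDistance(@seattle) / 1000 AS DistanceInKm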

Suppose I have some oddly-shaped polygon that represents a geometric region like a mining field, a flood zone, or a pizza delivery service area (or anything small enough that can be considered flat, discarding the Earth's curvature).

DECLARE @shape geometry
SET @shape = geometry::STPolyFromText('POLYGON ((
                 47.653 -122.358, 
                 47.653 -122.354, 
                 47.657 -122.354, 
                 47.653 -122.350, 
                 47.645 -122.350, 
                 ... (snip)
                 47.651 -122.355, 
                 47.653 -122.358))',  0)

First of all, it's nice to have a data type to represent this polygon, without resorting to my own parsing mechanism or to extra tables to store the points. Another good thing is that these data types follow the OGC standards.

To determine the bounding rectangle for the shape defined above, it's as simple as calling a method on that shape object.

SELECT @shape.STEnvelope().ToString()
-- outputs something like
/*
 POLYGON (( 
	47.657 -122.358,
	47.657 -122.350,
	47.645 -122.350,
	47.645 -122.358,
	47.657 -122.358))
*/

Did I say object? Doesn't the syntax above look like plain old .Net? Exactly. The spatial data types were implemented in .Net, leveraging the support for .Net user-defined types that was introduced in SQL 2005.

Spatial data types are a large topic; maybe I'll come back to them with a longer post once I have a chance to build an actual application with them. For now, I'll just point you to this nice series from Jason Follas.

The DATE type

When you don't need the time portion of a DATETIME, you can use the new 3-byte DATE type to store dates from 1/1/0001 to 12/31/9999. This coincides with the range (minus time) of the .Net System.DateTime structure.

DECLARE @d DATE
SET @d = GETDATE()
SELECT @d -- outputs '2008-04-06' (no time portion)

SET @d = '1234-03-22 11:25:09'
SELECT @d  -- outputs '1234-03-22' (the time is dropped)

The TIME type

There are also scenarios when we only need the time portion of a DATETIME, that's where the new TIME type comes in handy.

The TIME data type's size can be 3, 4, or 5 bytes, depending on the chosen precision. The default precision is 7 digits, but you can specify the precision you need when declaring the column or variable.

CREATE TABLE #times(
	T 	TIME,
	T1	TIME(1),
	T5	TIME(5),
	T7	TIME(7)
	)

INSERT INTO #times VALUES (
	'01:02:03.1234567',
	'01:02:03.1234567',
	'01:02:03.1234567',
	'01:02:03.1234567')

SELECT * FROM #times

T                T1               T5               T7
---------------- ---------------- ---------------- ----------------
01:02:03.1234567 01:02:03.1000000 01:02:03.1234600 01:02:03.1234567

The DATETIME2 type

If you liked the increased precision of the TIME type and the increased date range of the DATE type, you'll be happy to learn that the new DATETIME2 stores date and time with greater precision and range, basically combining those other two new data types.

DECLARE @dateA DATETIME2 = '2008-04-05 01:02:03.12345'
PRINT @dateA -- outputs 2008-04-05 01:02:03.1234500
DECLARE @dateB DATETIME2(4) = '2008-04-05 01:02:03.12345'
PRINT @dateB -- outputs 2008-04-05 01:02:03.1235

With this new data type your dates can range from 0001-01-01 00:00:00.0000000 to 9999-12-31 23:59:59.9999999. Again, this works well with .Net applications. You can initialize a DATETIME2 with string literals as shown above. These literals can be in either the ODBC format or ISO-8601 (the sortable date/time format, the same as DateTime.ToString("s") in .Net).

The DATETIMEOFFSET type

With this new data type, SQL Server learns about time-zones. This is of particular interest to me because of the globally distributed data and users that I have to deal with.

The DATETIMEOFFSET type ranges in size from 8 to 10 bytes. Its precision is also defined at declaration time, just like the other new types shown above.

-- time stamp on Central Daylight Time
DECLARE @today DATETIMEOFFSET = '2008-04-05T01:02:03.1234567-05:00'
PRINT @today -- outputs 2008-04-05 01:02:03.1234567 -05:00
DECLARE @today2 DATETIMEOFFSET(2) = '2008-04-05T01:02:03.1234567-05:00'
PRINT @today2 -- outputs 2008-04-05 01:02:03.12 -05:00

We can initialize the DATETIMEOFFSET values using literal strings in ISO-8601 YYYY-MM-DDThh:mm:ss[.nnnnnnn][{+|-}hh:mm] or YYYY-MM-DDThh:mm:ss[.nnnnnnn]Z (for times exclusively in UTC.)
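SQL 2008 also adds companion functions for these values. Here's a small illustrative example using SYSDATETIMEOFFSET and SWITCHOFFSET to view the same instant under a different offset:

DECLARE @now DATETIMEOFFSET = SYSDATETIMEOFFSET()
-- the same point in time, re-expressed in UTC
SELECT SWITCHOFFSET(@now, '+00:00')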

The HIERARCHYID type

About time, thank you. That's all I'm going to say. After having endured the pain of representing, traversing, and querying hierarchical information stored in flat tables, I plan to use this data type extensively and never look back.

You probably know what I'm talking about. All those foreign keys that point to the same table, like ParentCategoryID in a Categories table or ReportsToID in an Employees table.

Now we can simply define a column of type HIERARCHYID that will keep track of the record's position within the hierarchy being managed.

-- our Categories table
CREATE TABLE #Categories (
	CategoryID INT IDENTITY(1,1),
	CategoryNode HIERARCHYID NOT NULL,
	CategName NVARCHAR(40) NOT NULL
	)

We will need to populate the CategoryNode field with the correct hierarchy information. The first element (the root) is the odd man out; after the root node, the process is quite repetitive.

-- the root category
DECLARE @root HIERARCHYID = hierarchyid::GetRoot()
INSERT INTO #Categories (CategoryNode, CategName) 
	VALUES (@root, 'All #Categories')

-- insert the 'Electronics' category
DECLARE @electronics HIERARCHYID
SELECT @electronics = @root.GetDescendant(NULL, NULL)
INSERT INTO #Categories (CategoryNode, CategName) 
	VALUES (@electronics, 'Electronics')

-- insert the 'Music' category after 'Electronics'
DECLARE @music HIERARCHYID
SELECT @music = @root.GetDescendant(NULL, @electronics)
INSERT INTO #Categories (CategoryNode, CategName) 
	VALUES (@music, 'Music')

-- insert the 'Apparel' category between 'Electronics' and 'Music'
SELECT @music = @root.GetDescendant(@music, @electronics)
INSERT INTO #Categories (CategoryNode, CategName) 
	VALUES (@music, 'Apparel')

-- insert some children under 'Electronics'
DECLARE @video HIERARCHYID
--   We could do a simple @category.GetDescendant() but, let's
--      show something that is more likely to happen
SELECT @video = CategoryNode.GetDescendant(NULL, NULL)
  FROM #Categories WHERE CategName ='Electronics'
INSERT INTO #Categories (CategoryNode, CategName) 
	VALUES (@video, 'Video Equipment')

-- insert some children under 'Video Equipment'
DECLARE @tvs HIERARCHYID
SELECT @tvs = @video.GetDescendant(NULL, NULL)
INSERT INTO #Categories (CategoryNode, CategName) 
	VALUES (@tvs, 'Televisions')

DECLARE @players HIERARCHYID
SELECT @players = @video.GetDescendant(NULL, @tvs)
INSERT INTO #Categories (CategoryNode, CategName) 
	VALUES (@players, 'DVD - BluRay')

When we query the table, the output from the CategoryNode column reflects the position of the record in the hierarchy, similar to directory paths.

SELECT 
	CategoryID, CategName, 
	CategoryNode, 
	CategoryNode.ToString() AS Path
FROM #Categories
Output:
CategoryID  CategName         CategoryNode   Path
----------- ----------------- -------------- ---------
1           All #Categories   0x             /
2           Electronics       0x58           /1/
3           Music             0x48           /0/
4           Apparel           0x52C0         /0.1/
5           Video Equipment   0x5AC0         /1/1/
6           Televisions       0x5AD6         /1/1/1/
7           DVD - BluRay      0x5AD2         /1/1/0/

Note that the numbers separated by / in the Path column are not the CategoryID values. They represent each record's position among its siblings, and the value is directly related to where we positioned the node when we called GetDescendant. Internally, the HIERARCHYID data type stores a compact binary value, as shown in the output above.
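HIERARCHYID carries a few other handy methods besides GetDescendant. For instance (a small illustrative query against the same table), GetLevel returns each node's depth in the tree:

SELECT CategName, CategoryNode.GetLevel() AS Depth
FROM #Categories
-- 'All #Categories' comes back at depth 0, 'Televisions' at depth 3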

Now let's see how we would return all the items under a given category, recursively.

DECLARE @electronics_Categ HIERARCHYID
SELECT @electronics_Categ=CategoryNode 
	FROM #Categories WHERE CategoryID=2
SELECT CategoryID, CategName, CategoryNode.ToString() AS Path 
	FROM #Categories 
	WHERE CategoryNode.IsDescendantOf(@electronics_Categ)=1
Output:
CategoryID  CategName       Path
----------- --------------- --------
2           Electronics     /1/
5           Video Equipment /1/1/
6           Televisions     /1/1/1/
7           DVD - BluRay    /1/1/0/

Being able to perform these sorts of queries without having to resort to Common Table Expressions makes the queries so much simpler. I need this now.
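For contrast, here is roughly what the same recursive lookup looks like with a common table expression over a conventional ParentCategoryID column (a hypothetical schema, shown only to illustrate the extra ceremony):

WITH Subtree AS (
	SELECT CategoryID, CategName, ParentCategoryID
	FROM Categories WHERE CategoryID = 2
	UNION ALL
	SELECT c.CategoryID, c.CategName, c.ParentCategoryID
	FROM Categories c
	INNER JOIN Subtree s ON c.ParentCategoryID = s.CategoryID
)
SELECT CategoryID, CategName FROM Subtree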

You can get more details on the HIERARCHYID data type here.

Wrapping up

This post is by no means an extensive overview of these data types, but I hope it serves as a brief introduction to what is available in terms of data in the upcoming release of SQL 2008.

Creating Windows Services

Posted by Sergio on 2008-03-31

It's not uncommon for an enterprise application to need some form of background process to do continuous work. These could be tasks such as

  • Clean up abandoned shopping carts
  • Delete temporary files left behind by some report or image generation feature
  • Send email notifications
  • Create daily reports and send them out
  • Check an email inbox
  • Pull messages from a queue
  • Perform daily or monthly archiving
  • etc

For many of these things there are dedicated tools, like a reporting service (SSRS or BO,) scripts that run on the email server, or even simple executables fired by the Windows Task Scheduler. When you have only one or two background tasks, using something like the Task Scheduler may be OK, but administration quickly becomes painful as the number of tasks grows. Dedicated services like SSRS or BO can be overkill depending on the size of your application or organization.

One approach I like to take is to create a Windows Service for the application, grouping all the different background tasks under a single project, a single .exe, and a single configuration file. Visual Studio has always had a Windows Service project type, but the process of creating a working service is not as simple as you would hope, especially when your service performs more than one independent task.

After creating a couple of services, I realized that I definitely needed to stash all that monkey work somewhere I could just reuse later. I decided to create a helper library to assist creating and maintaining Windows services.

The library doesn't help with all kinds of Windows services, but has helped me a lot with the type of tasks I explained above. The key to the library is the ITask interface.

public interface ITask: IDisposable
{
    bool Started { get; }
    string Name { get; }
    void Start();
    void Stop();
    void Execute();
}

This interface shows all that is needed to create a task that can be started, stopped, and invoked by the service process. But it has several members, and many tasks are similar enough that most of those members would be implemented almost identically. For example, tasks that execute on a regular interval will differ only in their Execute method. That's why the library comes with some handy base classes, such as PeriodicalTask and ScheduledTask.
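The library's base classes aren't listed in this post, but here is a rough sketch of how something like PeriodicalTask could fulfill the interface while leaving only Execute abstract (my illustration, not the library's actual code):

using System;
using System.Threading;

public abstract class PeriodicalTask : ITask
{
    private Timer _timer;

    public bool Started { get; private set; }
    public string Name { get { return GetType().Name; } }

    // the real library reads the interval from the .config file;
    // a settable property stands in for that here
    public TimeSpan Interval { get; set; }

    public void Start()
    {
        // fire Execute repeatedly on the given interval
        _timer = new Timer(state => Execute(), null, Interval, Interval);
        Started = true;
    }

    public void Stop()
    {
        if (_timer != null) _timer.Dispose();
        Started = false;
    }

    public abstract void Execute();

    public void Dispose() { Stop(); }
}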

Now when I need a task that runs repeatedly I simply derive it from PeriodicalTask or ScheduledTask, as seen below. These task classes become part of my service project, from which I remove all the other classes that were added by default.

class CleanupTask : PeriodicalTask
{
    readonly static log4net.ILog Log =
        log4net.LogManager.GetLogger(
           System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

    public override void Execute()
    {
        //time to run....
        //TODO: write the actual code here
        // ShoppingCart.DeleteAbandonedCarts();
        Log.InfoFormat("Executed: {0}", this.GetType().Name);
    }
}


class DailyReportTask : ScheduledTask
{
    readonly static log4net.ILog Log =
        log4net.LogManager.GetLogger(
              System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

    protected override void Execute(DateTime scheduledDate)
    {
        //time to run....
        //TODO: write the actual code here
        // SalesReport.SendDailySummary();
        Log.InfoFormat("Executed: {0}", this.GetType().Name);
    }
}

Instead of hard coding the interval or the scheduled time of the above tasks, we use the service's .config file for that:

<WindowsService>
    <tasks>
        <task name="CleanupTask" interval="600"  />
        <task name="DailyReportTask" time="21:30"  />
    </tasks>
</WindowsService>

There are only a few more things we need to do to get this service ready. First we need to add a new WindowsService item. Here we are naming it MyAppService and making it inherit from SPServiceBase.

partial class MyAppService : SPServiceBase
{
    public const string MyAppSvcName = "MyAppSVC";
    public MyAppService()
    {
        InitializeComponent();
        //Important.. use the constant here AFTER 
        //   the call to InitializeComponent()
        this.ServiceName = MyAppSvcName;
    }
}

We also need to add an Installer class, which I'll name simply Installer and which will be invoked during the service installation phase to add the appropriate registry entries so the service gets listed in the Services applet. Here's what this class looks like. Note that it inherits from another base class from the library.

[RunInstaller(true)]
public class Installer : SergioPereira.WindowsService.ServiceInstaller
{
    //That's all we need. Hooray!
}

I mentioned that the installer will add the necessary registry information. Among that information are the name and description of the service. We provide those with an assembly attribute that you can put in AssemblyInfo.cs or anywhere you like in a .cs file (outside any class or namespace.)

[assembly: ServiceRegistration(
    SampleService.MyAppService.MyAppSvcName, // <-- just a string constant
    "MyApp Support Service",
    "Supports the MyApp application performing several " + 
           "critical background tasks.")
]

A Windows service is compiled as an .exe, so it needs an entry point, a static Main function. Let's add a Program.cs like this:

class Program
{
    static void Main(string[] args)
    {
        if (!SelfInstaller.ProcessIntallationRequest(args))
        {

            MyAppService svc = new MyAppService();

            svc.AddTask(new CleanupTask());
            svc.AddTask(new DailyReportTask());
            //add more tasks if you have them

            svc.Run();
        }
    }
}

The code above is pretty simple: we create the tasks and tell our service to take care of them, then run the service. The interesting part is the call to ProcessIntallationRequest. This is where the self-installing capability of the service comes in. If you have written a service in the past, you know that services get installed using InstallUtil.exe. One potential problem is that InstallUtil.exe may not be present on the server, or not in the PATH, making a scripted installation a little more complicated. Instead, by using that call from SelfInstaller, we enable our service to be invoked like the following to install or uninstall itself (remember to execute as an Administrator).

SampleService.exe -i[nstall]
SampleService.exe -u[ninstall]

After installing it, you should see the service in the Services applet.

Here's the final structure of our project.

If you want, download the library source code along with a sample service project. There's more in the library than I have time to explain here, including an auto-update task and extra configuration properties for each task.