OpenNETCF IoC Project Templates

I’m trying my hand at making some of our stuff a bit easier to use.  Today I burned some time trying to understand the project template infrastructure, and the result was the creation of a couple of VSIX files for installing IoC templates into Visual Studio.



Right now it only supports desktop projects (IoC supports Windows Phone, Mono for Android, MonoTouch and the Compact Framework). I’ve also not figured out how to actually deploy the IoC and Extensions binaries with the templates, so when you create your project, the References section will contain IoC references, but they’ll be broken.  Still, as a first cut it greatly simplifies setting up a new IoC UI (SmartClientApplication) or IoC Module project.


You can install the templates in one of three ways:


Software Development: An Intro to ORMs

This post is part of my “Software Development” series. The TOC for the entire series can be found here.



Let’s start this post with a couple of questions.  What is an ORM, and why would you use one?  If you can’t answer both of these questions, then this post is for you.  If you already know the answers, feel free to skip to the next post (after I’ve created it, of course).


First, let’s look at what an ORM is. An ORM is an object-relational mapping.  According to Wikipedia, which we all know is the infallible and definitive resource for all knowledge, an ORM is “a programming technique for converting data between incompatible type systems in object-oriented programming languages.”  Ok, what the hell does that mean?  My definition is a little less “scholarly.”  I’d say that



An ORM is a way to abstract your data storage into simple class objects (POCOs if you’d like) so that you don’t have to worry about all the crap involved in actually writing data fields, rows, tables and the like.  As a developer, I want to deal with a “Person” class.  I don’t want to think about SQL statements.  I don’t want to have to worry about indexes or tables or even how my “Person” gets stored to or read from disk.  I just want to say “Save this Person instance” and have it done.  THAT is what an ORM is.  A framework that lets me concentrate on solving my business problem instead of spending days writing bullshit, mind-numbing, error-prone data access layer code.


Why would we use an ORM?  I think I’ve been fairly upfront that I consider myself to be a lazy developer.  No, I don’t mean that I take shortcuts or do shoddy work, I mean that I hate doing things more than once.  I hate having to write reams of code to solve problems that have already been solved.  I’ve been writing applications that consume data for years, and as a consequence, I’ve been writing data access code for years.  If you’re not using an ORM, you probably know down in your soul that this type of work flat-out sucks.  Anything that simplifies data access to me is a win.


Of course there are other, more tangible benefits as well.  If you’re using an ORM, it’s often possible to swap out data stores – so maybe you could write to SQL Server, then swap a line or two in configuration and write to MySQL or an XML file.  It also allows you to mock things or create stubs to remove data access entirely (really handy when someone tells you that your data access is what’s slowing things down and you’re pretty sure it’s not).
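
To make that concrete, here’s a generic sketch (not tied to any particular ORM): if the rest of the application only talks to a small storage interface, the real store and an in-memory stub become interchangeable, which is exactly what you need when you want to take data access out of the performance equation.

using System.Collections.Generic;

public class Person
{
    public int ID { get; set; }
    public string Name { get; set; }
}

// the only thing the rest of the app knows about storage
public interface IPersonStore
{
    void Save(Person person);
    IEnumerable<Person> GetAll();
}

// test stub: no database, no disk, no SQL
public class InMemoryPersonStore : IPersonStore
{
    private readonly List<Person> m_people = new List<Person>();

    public void Save(Person person)
    {
        m_people.Add(person);
    }

    public IEnumerable<Person> GetAll()
    {
        return m_people;
    }
}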


I can see that some of you still need convincing.  That’s good – you shouldn’t ever just take someone’s word for it that they know what they’re doing.  Ask for proof.  So let’s look at a case from my own code: why I built the OpenNETCF ORM, and how I was recently reminded why it’s a good thing.


A couple years ago I wrote an application for a time and attendance device (i.e. a time clock) that, not surprisingly, stored data about employees, punches, schedules, etc. on the device.  It also had the option to store the data on a server and synchronize it from clock to server, but that’s not core to our discussion today.  The point is that we were storing a fair bit of data, and this was a project I did right before I created the ORM framework.  It was, in fact, the project that made it clear to me that I needed to write an ORM.


Just about a month ago the customer wanted to extend the application, adding a couple features to the time clock that required updates to how the data was stored.  It took me very little time to realize that the existing DAL code was crap.  Crap that I architected and wrote.  Sure, it works.  They’ve shipped thousands of these devices and I’ve heard no complaints and had no bug reports, so functionally it’s fine and it does exactly what was required.  Nonetheless the code is crap and here’s why.


First, let’s look at a table class in the DAL, like an Employee (the fact that the DAL knows about tables is the first indication there’s a problem):

internal class EmployeeTable : Table
{
    public override string Name { get { return "Employees"; } }

    public override ColumnInfo KeyField
    {
        get
        {
            return new ColumnInfo { Name = "EmployeeID", DataType = SqlDbType.Int };
        }
    }

    internal protected override ColumnInfo[] GetColumnInfo()
    {
        return new ColumnInfo[]
        {
            KeyField,
            new ColumnInfo { Name = "BadgeNumber", DataType = SqlDbType.NVarChar, Size = 50 },
            new ColumnInfo { Name = "BasePay", DataType = SqlDbType.Money },
            new ColumnInfo { Name = "DateOfHire", DataType = SqlDbType.DateTime },
            new ColumnInfo { Name = "FirstName", DataType = SqlDbType.NVarChar, Size = 50 },
            new ColumnInfo { Name = "LastName", DataType = SqlDbType.NVarChar, Size = 50 },
            // ... lots more column definitions ...
        };
    }

    public override Index[] GetIndexDefinitions()
    {
        return new Index[]
        {
            new Index
            {
                Name = "IDX_EMPLOYEES_EMPLOYEEID_GUID",
                SQL = "CREATE INDEX IDX_EMPLOYEES_EMPLOYEEID_GUID ON Employees (EmployeeID, GUID DESC)"
            },
        };
    }
}


So every table for an entity derives from Table, which looks like this (shortened a lot for brevity in this post):

public abstract class Table : ITable
{
    internal protected abstract ColumnInfo[] GetColumnInfo();

    public abstract string Name { get; }
    public abstract ColumnInfo KeyField { get; }

    public virtual string GetCreateTableSql()
    {
        // default implementation - override for different versions
        StringBuilder sb = new StringBuilder();
        sb.Append(string.Format("CREATE TABLE {0} (", Name));

        ColumnInfo[] infoset = GetColumnInfo();
        int i;
        for (i = 0; i < (infoset.Length - 1); i++)
        {
            sb.Append(string.Format("{0},", infoset[i].ToColumnDeclaration()));
        }

        sb.Append(string.Format("{0}", infoset[i].ToColumnDeclaration()));
        sb.Append(")");

        return sb.ToString();
    }

    public virtual Index[] GetIndexDefinitions()
    {
        return null;
    }

    public virtual IDbCommand GetInsertCommand()
    {
        SqlCeCommand cmd = new SqlCeCommand();

        // long, manual generation of the SQL and then creating the command

        return cmd;
    }

    public IDbCommand GetUpdateCommand()
    {
        SqlCeCommand cmd = new SqlCeCommand();

        // long, manual generation of the SQL and then creating the command

        return cmd;
    }

    public IDbCommand GetDeleteCommand()
    {
        SqlCeCommand cmd = new SqlCeCommand();

        // long, manual generation of the SQL and then creating the command

        return cmd;
    }
}


Sure, I get a few bonus points for using inheritance so that each table doesn’t have to do all of this work, but it’s still a pain.  Adding a new Table required that I understand all of this goo, get the ColumnInfo definitions right, know what the index stuff is, and so on.  And what happens when I need to add a field to an existing table?  It’s not so clear.


Now how about consuming this from the app?  When the app needs to get an Employee, you have code like this:

public IEmployee[] GetAllEmployees(IDbConnection connection)
{
    List<IEmployee> list = new List<IEmployee>();

    using (SqlCeCommand command = new SqlCeCommand())
    {
        command.CommandText = EMPLOYEES_SELECT_SQL2;
        command.Connection = connection as SqlCeConnection;

        using (var rs = command.ExecuteResultSet(ResultSetOptions.Scrollable | ResultSetOptions.Insensitive))
        {
            GetEmployeeFieldOrdinals(rs, false);

            while (rs.Read())
            {
                IEmployee employee = EntityService.CreateEmployee();

                employee.BadgeNumber = rs.IsDBNull(m_employeeFieldOrdinals["BadgeNumber"])
                    ? null : rs.GetString(m_employeeFieldOrdinals["BadgeNumber"]);
                employee.BasePay = rs.IsDBNull(m_employeeFieldOrdinals["BasePay"])
                    ? 0 : rs.GetDecimal(m_employeeFieldOrdinals["BasePay"]);
                employee.DateOfHire = rs.IsDBNull(m_employeeFieldOrdinals["DateOfHire"])
                    ? DateTime.MinValue : rs.GetDateTime(m_employeeFieldOrdinals["DateOfHire"]);
                employee.EmployeeID = rs.GetInt32(m_employeeFieldOrdinals["EmployeeID"]);
                employee.FirstName = rs.GetString(m_employeeFieldOrdinals["FirstName"]);
                employee.LastName = rs.GetString(m_employeeFieldOrdinals["LastName"]);
                string middleInitialStr = !rs.IsDBNull(m_employeeFieldOrdinals["MiddleInitial"])
                    ? rs.GetString(m_employeeFieldOrdinals["MiddleInitial"]) : String.Empty;
                employee.MiddleInitial = String.IsNullOrEmpty(middleInitialStr) ? ' ' : middleInitialStr[0];

                // on and on for another 100 lines of code

                list.Add(employee);
            }
        }
    }

    return list.ToArray();
}


Never mind the fact that this could be improved a little with GetFields – the big issues here are that you have to hard-code the SQL to get the data, then parse the results and fill out the Employee entity instance.  You do this for every table.  You change a table, and you then have to go change the SQL and every method that touches the table.  The process is error-prone, time-consuming and just not fun.  It also makes me uneasy because the test surface area needs to be big.  How do I ensure that all the places that access the table were fixed?  Unit tests give me some comfort, but really it has to go through full integration testing of all features (since Employees are used by just about every feature on the clock).


Now what would the ORM do for me here?  Without going into too much detail on exactly how to use the ORM (we’ll look at that in another blog entry), let’s just look at what the ORM version of things would look like.


We’d not have any “Table” crap.  No SQL.  No building Commands and no parsing Resultsets.  We’d just define an Entity like this:

[Entity]
internal class Employee
{
    [Field(IsPrimaryKey = true)]
    public int EmployeeID { get; set; }
    [Field]
    public DateTime DateOfHire { get; set; }
    [Field]
    public string FirstName { get; set; }
    [Field]
    public string LastName { get; set; }

    // etc.
}


Note how much cleaner this is than the Table code I had previously.  Also note that this one class replaces *both* the Table class and the Business Object class, so overall the code is much shorter.


What about all of that create table, insert, update and delete SQL and index garbage I had to know about, write and maintain?  Well, it’s replaced with this:

m_store = new DataStore(databasePath);

if (!m_store.StoreExists)
{
    m_store.CreateStore();
}

m_store.AddType<Employee>();


That’s it.  Adding another Entity simply requires adding just one more line of code – a call to AddType for the new Entity type.  In fact the ORM can auto-detect all Entity types in an assembly with a single call if you want.  So that’s another big win.  The base class garbage gets shifted into a framework that’s already tested.  Less code for me to write means more time to solve my real problems and less chance for me to add bugs.
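
For the assembly-wide registration, the call is roughly this (I’m going from memory on the exact method name, so check the Codeplex source if it doesn’t match):

// register every class in the assembly that's decorated with [Entity]
// (method name from memory - verify against the current ORM source)
m_store.DiscoverTypes(System.Reflection.Assembly.GetExecutingAssembly());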


What about the long, ugly, unmaintainable query though?  Well that’s where the ORM really, really pays off.  Getting all Employees becomes stupid simple.

var allEmployees = m_store.Select<Employee>();  

Yep, that’s it. There are overloads that let you do filtering.  There are other methods that allow you to do paging.  Creates, updates and deletes are similarly easy. 
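
To give a flavor of the rest, here’s roughly what the other operations look like (these calls are from memory, so treat the exact overload signatures as approximate and check the Codeplex source):

// insert, update and delete take the entity instance itself
var newHire = new Employee { FirstName = "Jane", LastName = "Doe" };
m_store.Insert(newHire);

newHire.LastName = "Smith";
m_store.Update(newHire);

m_store.Delete(newHire);

// filtered select: a field name and a value instead of hand-written SQL
var smiths = m_store.Select<Employee>("LastName", "Smith");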


Why did I create my own instead of using one that already exists?  Simple – there isn’t one for the Compact Framework.  I also find that existing ORMs, like many existing IoC frameworks, try to be everything to everyone and end up overly complex.  Another benefit of the OpenNETCF ORM is that it is fully supported on both the Compact and Full Frameworks, so I can use it in desktop and device projects and not have to cloud my brain with knowing multiple frameworks.  I even have a partial port to Windows Phone (it just needs a little time to work around my use of TableDirect in the SQL Compact implementation).


Oh, and it’s fast.  Really fast.  Since my initial target was a device with limited resources, I wrote the code for that environment.  The SQL Compact implementation avoids using the query parser whenever possible because experience has taught me that as soon as you write actual SQL, you’re going to pay an order of magnitude performance penalty (yes, it really is that bad).  It uses TableDirect whenever possible.  It caches type info so reflection use is kept to a bare minimum.  It caches common commands so if SQL was necessary, it at least can reuse query plans.
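
To give a feel for the command caching piece (this is an illustration of the technique only, not the ORM’s actual internals), the pattern boils down to keeping one prepared command per SQL string:

using System.Collections.Generic;
using System.Data.SqlServerCe;

// Illustration only - not the ORM source. Keep one prepared SqlCeCommand per
// SQL string so SQL Compact builds the plan once and reuses it afterward.
internal class CommandCache
{
    private readonly SqlCeConnection m_connection;
    private readonly Dictionary<string, SqlCeCommand> m_cache =
        new Dictionary<string, SqlCeCommand>();

    public CommandCache(SqlCeConnection connection)
    {
        m_connection = connection;
    }

    public SqlCeCommand GetCommand(string sql)
    {
        SqlCeCommand command;
        if (!m_cache.TryGetValue(sql, out command))
        {
            command = new SqlCeCommand(sql, m_connection);
            command.Prepare(); // the expensive parse happens here, once
            m_cache.Add(sql, command);
        }
        return command;
    }
}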


So that’s why I use an ORM. Doing data access in any other way has become insanity.

OpenNETCF Extensions: Eliminating Control.Invoke

Marshaling UI access to the proper thread is a very common task in app development, yet it still tends to be a pain in the ass.  You have to check whether InvokeRequired is true (well, you probably don’t *have* to, but not doing it feels dirty to me), and even using an anonymous delegate tends to be verbose.  And then there are the simple bugs that aren’t always easy to spot.

So, like any good developer, I stole someone else’s idea and put it into the OpenNETCF Extensions.  Now, instead of doing this:

void MyMethod()
{
    if (this.InvokeRequired)
    {
        this.Invoke(new EventHandler(delegate
            {
                MyMethod();
            }));
        return;
    }

    // do stuff with the UI
}

You do this:

void MyMethod()
{
    Invoker.InvokeIfRequired(i =>
    {
        // do stuff with the UI
    });
}

Less code.  Less potential for error.  More readable.  What’s not to like?
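
For the curious, the idea behind the helper is straightforward.  This is a sketch of the pattern only, not the actual OpenNETCF Extensions source (the library exposes it through the static Invoker class shown above rather than as a Control extension method):

using System;
using System.Windows.Forms;

// Sketch: run the callback directly if we're already on the UI thread,
// otherwise marshal it over to the thread that created the control.
public static class ControlInvokeExtensions
{
    public static void InvokeIfRequired(this Control control, Action<Control> action)
    {
        if (control.InvokeRequired)
        {
            control.Invoke(new Action(() => action(control)));
        }
        else
        {
            action(control);
        }
    }
}

Either way, the lambda ends up running on the UI thread, which is all the original InvokeRequired/Invoke dance was ever doing.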


The OpenNETCF Extensions library is a collection of extension methods and helper classes that I find useful in a lot of different projects. It’s compatible with the Compact Framework, the full framework and Windows Phone.

On Software Development

This post is part of my “Software Development” series.  The TOC for the entire series can be found here.






Developing good software is hard.  Really hard.  Sure, anyone can buy a book on writing software or pull up some code samples and get something that compiles and runs, but that’s not really developing software.  A lot of code in the wild – I’d bet a vast majority of it – just plain sucks.


It’s hard to point out where the blame lies.  It seems that most developers are environmentally or institutionally destined to write bad code. Schools teach how to write code, but not how to architect it or to follow reasonable design practices.  In the zeal for clarity, publishers churn out books, blogs and samples that show bad practices (when is it ever a good idea to access a data model from your UI event handler?).  Managers and customers alike push hard to get things done now, not necessarily done right – only to find months or years later that doing it right would have saved a boatload of time and money.  And let’s face it – many developers are simply showing up to work to pull a paycheck.  You know who they are.  You’ve no doubt worked with them in the past.  You’re probably working with them now.


I was watching Gordon Ramsay the other day and it occurred to me that he and I are alike in our own peculiar way.  I’m not saying that I see myself as the “Gordon Ramsay of Software Development” – hardly – but we share a common trait.  Just as Gordon gets angry and starts spewing colorful language when he walks into a crap kitchen, it bothers the hell out of me to see complete idiots in my chosen field out there just making a mess of things.  When I see bad code – not necessarily minor errors, or code that could be refactored and made better, but outright shit code that should never have occurred to a developer in the first place – it pisses me off.  By the nature of my work, often getting called in only when a project is off the rails, I see it all the time. Code that, on review, a peer or mentor should have seen and said, “Whoa!  There’s no way that’s going into our code base.”  Code that just makes it harder for the next person to do their job.


In an effort to simplify things for my own code, for my customers’ code, and for anyone who is willing to listen to my ravings, I’ve spent a lot of time building, testing, fixing and extending tools and frameworks, many of which I turn around and give away.  This isn’t out of altruism; no, it’s largely because I’m a lazy developer.  I hate writing the same thing twice.  When I start a project, I don’t want to spend large amounts of time building up the same infrastructure that every project needs. Building up a framework for handling UI navigation isn’t what I’d call interesting, but just about every project needs it.  Handling object dependencies and events is common.  Writing a DAL for serializing and deserializing entities is not just drudgery; I find it’s highly susceptible to errors because you end up doing a lot of copy and paste.


I have these cool tools and frameworks that I use in literally every project I work on now.  That’s great for me, but it doesn’t really help others, right?  Without reasonable documentation or explanation, only a small handful of people are going to go through the effort of getting the tools and trying to understand them – even if they are deceptively simple and could potentially save you weeks of effort. 


So I’ve decided to put together a series of blogs over the coming weeks and months that explain, hopefully in simple terms, what these frameworks do, how to use them, and most importantly, why they are worth using.  There’s nothing groundbreaking here.  I didn’t invent some new way to do things.  I’ve simply appropriated other peoples’ ideas and extended them to work in the environments that I work.


Generally I’ll be covering the following topics and frameworks:



  • Dependency Injection and Inversion of Control (using OpenNETCF IoC)
  • Event Aggregation (using OpenNETCF IoC)
  • Plug-in Architectures and interface-based programming (using OpenNETCF IoC)
  • Software features as services (using OpenNETCF IoC)
  • Data Access through an ORM (using OpenNETCF ORM)
  • Parameter Checking (using OpenNETCF Extensions)
  • Exposing data services over HTTP (using Padarn)
  • Whatever else I think of

If there’s a topic you’d like me to talk about, feel free to send me an email.  I may turn on comments here and let you post ideas, but I find that when I enable comments on my blog, I start getting more comment spam than I really want to deal with, so if comments are turned off just drop me a line.

OpenNETCF.IoC: New Release

We’ve been heavily dogfooding the IoC project (and others) lately, and I finally took the time today to back-port the fixes and updates to the public code. This is being used for a solution that runs on both the desktop and the Compact Framework, so it’s been heavily tested in both of those environments. The new release (1.0.11235) is now available on Codeplex.

Improving your DWR: Calling events

I’m a big fan of writing less code, or even actively deleting existing code.  Less code means fewer bugs, and who doesn’t like shortcuts to getting around writing mind-numbing, repetitive code?  One place I find a whole lot of repetition in my code is in calling event delegates.  Before calling an event delegate you always have to make sure it’s not null, and a generally accepted practice is to make a copy of the delegate in case a subscriber unhooks the event after you’ve checked.  So a typical call to raise an event looks like this:

protected virtual void OnMyEvent(EventArgs args)
{
    var handler = MyEvent;
    if (handler == null)
        return;

    handler(this, args);
}


Sure, it’s small enough, but when you have a lot of events in a large solution, you have a whole lot of repetitive code.  Admit it – you’ve used CTRL-C/CTRL-V to copy this for a new event. There’s got to be a better way, right? 


Well how about an extension method (well a pair, since you might have EventHandler<T>)?


public static class EventHandlerExtensions
{
    public static void Fire(this EventHandler h, object sender, EventArgs args)
    {
        var handler = h;
        if (handler == null) return;
        handler(sender, args);
    }

    public static void Fire<T>(this EventHandler<T> h, object sender, T args) where T : EventArgs
    {
        var handler = h;
        if (handler == null) return;
        handler(sender, args);
    }
}

Once you’ve done this, your code becomes a simple one-liner:


protected virtual void OnMyEvent(EventArgs args)
{
    MyEvent.Fire(this, args);
}

OpenNETCF.ORM update

I’ve been using the OpenNETCF.ORM library on a shipping project for a while now.  As expected, as I add features to the product, I’ve found problems and limitations with the ORM that I’ve addressed.  This morning I merged that branch back with the trunk available on Codeplex, so there’s a whole new set of code available.  New features include:



  • Better handling of reference fields
  • Cascading inserts
  • Cascading deletes
  • Expanded capabilities for filtering on deletes
  • Added support for more data types, including the “object” type
  • Support for ROWGUID column

What it really needs now is a definitive sample application and documentation.  If you’d like to volunteer to work on either, I’d really appreciate it.