Benchmarking OpenNETCF’s IoC Framework

Last night I was browsing around the Net, back-tracking a visit to the OpenNETCF Inversion of Control (IoC) project, when I came upon a page where someone had done a quick set of benchmarks for many of the popular IoC engines out there.  This got me wondering: how would OpenNETCF IoC stack up against these?  After all, I’ve never done any performance comparisons against another framework.

Well, I downloaded the benchmarking code the original author had put together and added an IoC version of the test (which was really easy to do).  I then ran the singleton test, which essentially registers a set of dependent classes by interface name (in the RootWorkItem.Services collection) and then uses the IoC engine to extract them from the store 1,000,000 times.  It then reports how long the extraction took.  Simple enough.
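For readers who haven’t used the framework, here is a minimal sketch of what such a singleton-resolution benchmark might look like.  It assumes OpenNETCF IoC’s RootWorkItem.Services API (AddNew/Get); the ILogger and ConsoleLogger types are illustrative stand-ins, not the actual types from the benchmark suite.

```csharp
using System;
using System.Diagnostics;
using OpenNETCF.IoC;

// Illustrative service types -- not the ones used in the original benchmark.
public interface ILogger { void Log(string message); }
public class ConsoleLogger : ILogger
{
    public void Log(string message) { /* intentionally empty for benchmarking */ }
}

public static class SingletonBenchmark
{
    public static void Main()
    {
        // Register the concrete type against its interface in the root
        // work item's service collection (the setup cost measured separately below).
        RootWorkItem.Services.AddNew<ConsoleLogger, ILogger>();

        const int iterations = 1000000;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            // Resolve the same singleton instance from the container each pass.
            var logger = RootWorkItem.Services.Get<ILogger>();
        }
        sw.Stop();

        Console.WriteLine("Resolved {0:N0} times in {1} ms",
            iterations, sw.ElapsedMilliseconds);
    }
}
```

The tight loop measures pure resolution cost, which is exactly where container lookup overhead shows up relative to the `new` baseline mentioned below.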

Here are the results:

The hard numbers are not really important – the actual figures will depend on your hardware – what is interesting is how they look relative to one another.  You can see that if you exclude the new operator (which isn’t IoC, it’s just there for reference), then the Dynamo engine (which is the test author’s engine) performs the best.  I know nothing about that engine, so if you’re interested, I’ll let you investigate it further.

What was interesting to me is that OpenNETCF IoC is three times faster than the next fastest IoC engine (AutoFac), five times faster than StructureMap, roughly six times faster than Castle Windsor or Unity, and a whopping 25 times faster than the ever-popular Ninject engine.  Evidently my focus on performance when developing the engine paid off.

How about the time it takes to register all of the types that get created?  Are we paying the piper at registration time in order to save it at resolution time?  Here are the timing results for setup (instantiation of singletons):

Again, OpenNETCF IoC is near the front of the pack.

Binary size isn’t overly relevant any longer – if you’re using managed code, you’ve already decided that file sizes aren’t critical – but it’s still interesting to look at.  Here’s a breakdown of the sizes of each of the libraries.  The libraries marked with an asterisk also have configuration files that are *not* included in the reported size (so deployment would actually be larger than what the graph shows).

All in all, I’m pretty pleased with the results.  We dogfood the OpenNETCF IoC framework for all of our projects, both desktop and Compact Framework, so it gets a fair amount of testing and beating, and you can be assured that if there are bugs I’ll do my best to resolve them.  If you’ve not taken a look at the project, give it a try.

