New Blog Page: Archives

I’ve been scrubbing through all of the 404 errors the new blog migration has led to, trying to wire up old links with content.  That’s not too difficult – it’s just a long, manual process of wading through the logs, finding the proper target and adding redirects.  Some items, however, aren’t as simple – things like links to articles in the old Community site.  For those, I’ve been digging through old backups trying to find the content.  For the items I’m finding, I’m putting links to them in a new Page called “Archives” (you’ll see a link in the menu above as well).

If there’s some content you’d like to see that I’m missing, just let me know.

Windows CE: Risks v. Benefits

Jan 30, 2014 Update: A few vendors are now supporting, or planning support for, Wi-Fi for WEC, which would likely change a significant portion of the analysis below.  It indicates that not only can we get new Wi-Fi hardware, but also that third-party vendors are once again taking interest in Windows CE.  See my comments and links here.

I’ve been developing applications for Windows CE for more than 12 years now, and for a pretty good chunk of that time I also created OS builds themselves for custom hardware. Still, if you’re running a business it’s often a good idea to occasionally do a risk analysis of your dependencies, even if those dependencies define a core part of your business or expertise. I recently took a hard look at both the Compact Framework and Windows CE (or Windows Embedded Compact, as it’s now known) to identify some of the risks and benefits of choosing them as the foundation of a product or business.

Since I like dealing with concrete examples, instead of abstract “what if’s” let’s look at it from a specific point of view – specifically a customer of mine that wanted to get a turnkey upgrade for a product platform. This customer builds and ships tens of thousands of custom point-of-sale devices each year. A large portion of their current platforms use a home-brewed Compact Framework wrapper sitting on top of Silverlight for Windows Embedded, with a pretty large infrastructure code base written in C# below that. Historically they have been running on x86 hardware.

Recently they’ve decided to create and release a new product: an ARM-based tablet that will run the same application, with the goal being that the new hardware will provide them room to expand and grow into new markets and use cases. They designed the hardware, but contracted a third party to deliver a Windows Embedded Compact 7 OS for the platform. Once they got everything up and running, it turned out that the existing application ran, but performance was right on the edge of acceptable. Adding more to the application would likely lead to customer dissatisfaction. This was not a good place to start with a new product, so they had me come in to see if I could help improve the application performance and make some recommendations.

The application code base had some areas that could be improved, but that was expected.  Pretty much every code base can be improved, including our own, and anything that had gone through several revisions over many years is always going to have room for improvement.  That said, the code wasn’t bad – in fact it was pretty good.  There really was no low-hanging fruit to be picked.  Any performance gains were going to take a fair amount of work to achieve, and we’d be measuring improvements in milliseconds.  With a month of work, I guessed that we might be able to get a 5% improvement, and that’s probably best-case.

So how could we get large improvements?  Well, WEC 2013 has been generally available since June.  It has a new SWE engine which, if the past is any indicator, probably has a decent performance improvement over previous versions. It includes Compact Framework 3.9, which includes lots of optimizations from Microsoft’s Windows Phone 7 work as well as – probably most importantly – multi-core support.  The new platform has 4 processor cores.  My guess is that moving to WEC 2013 and CF 3.9 on this hardware would improve the application performance by 20%, conservatively.

So the decision to move is simple, right? They already have WEC7 running, so porting the OS shouldn’t be too much work, and managed code should just transfer right over.  They asked for a quote to do the whole thing, and that’s when I started doing a real risk analysis.  The decision, it turns out, is far from simple.

The Question

So what’s the big question here?  What, exactly, do we need to understand to be able to estimate the work involved, the cost to the customer, the risks associated, and the recommendations to present?  In this case there are really three “options” to look at.  The first option, which is almost always an option (though not necessarily a good one) in these cases, is to just do nothing.  Keep the hardware as-is, keep the OS as-is and the application as-is.  The cost is zero and the schedule impact is zero, so you shouldn’t just hand-wave it away.  The problem here is that the performance, and therefore the quality, of the “do nothing” option isn’t desirable, so we’d really like another option.

Option 2 is to port the OS and application to WEC 2013 and CF 3.9.  What are the benefits of doing so? What are the risks to the schedule, the budget and the project as a whole?

Option 3 would be a nebulous “port it to another platform altogether” option.  Maybe Android or some other embedded Linux platform.  I wasn’t tasked to do this, or even quote it, but it’s certainly an option for them and we should at least consider the risks and benefits of the choice if we’re to make a recommendation that is good for their business.  While my core competency is in Microsoft technologies, I don’t have any qualms about recommending other technologies if it makes sense for the customer.  We integrate Linux, Android and iOS support into some of our products because they make sense in many cases.

Really, though, we need to come up with a cost/benefit analysis of Option 2 and then present that while keeping in mind that Option 1 or 3 are on the table.

The Benefits

So let’s look at what Option 2 brings to the table. Some of the benefits I’ve already talked about, but it’s a good idea to look at the larger picture and see what shakes out.

  • We get multi-core support.  This is likely to give us a significant performance boost, especially considering that their application makes heavy use of multiple threads, and a code analysis shows that the UI rendering thread is really what’s eating a large amount of the processor time on the existing system.
  • We get the CF 3.9 optimizations. Microsoft spent a lot of time squeezing performance out of the CF for Windows Phone 7 (before abandoning it for Windows Phone 8), and those performance benefits look promising – probably double-digit performance increases in several areas.
  • The existing application code base ports easily – after all, it’s managed code.  The expectation is that no rewriting should have to happen (more on this expectation in a bit).  Yes, some optimization could be done, but the app should just run.
  • The tools are first class, meaning developers are more productive.  Yes, we could argue for days about Eclipse versus Studio, or Platform Builder versus whatever tool chain another OS supports, but I have experience in all of them and my experience is that Studio is flat out better.  The debugging experience is better at the app and OS level, and this saves a lot of developer time.
  • The current team is experienced with Microsoft technologies.  They know Windows. They know Studio.  They know Silverlight.  They know Win32.  They know C#.  Having to learn a new OS, a new development environment, a new driver and kernel architecture and a new development language would be very, very costly – especially when time to market is important.

The Risks

And now the fun part.  The part where there are surprises, and the reason you’re likely reading this article.  What are the risks involved with the migration?  After all, it’s just a port of an OS to a newer version and the application shouldn’t need changing, so what could possibly be a scary enough risk to make us even question the port?  It turns out there are lots of them.

  • When moving to WEC 2013, you get code compatibility, but not binary compatibility.  <— Read that again. This is a huge, huge risk in many cases. Let’s look at why.  The application is managed code, so it compiles down to MSIL, which is portable, right?  Actually, yes.  In fact, I call bullshit on Microsoft’s statement that CF applications aren’t binary compatible, because the only way to break that would be to deviate from the ECMA spec or to remove opcode support – I think they just say it’s not supported because they don’t test it and their legal department therefore tells them they can’t say it’s supported.  However, the lack of compatibility makes a huge difference on the native side. What exactly broke binary compatibility anyway?  Well, WEC 2013 ships with a new C runtime, and all older binaries were built against the old runtime, which is baked right into the OS (so you can’t ship the old runtime with your app).  What this means is that all native code has to be recompiled.  If your application uses *any* P/Invokes to something outside the OS (so not in coredll), that DLL must be recompiled for WEC 2013 (coredll is already going to be recompiled).  If that library came from a third party, then you either have to get the full source and recompile it, or convince the vendor to compile for WEC 2013 (and they probably don’t have an SDK, since that’s exported from a BSP – hello, schedule risk!).  So if you’re using a third-party native component and you don’t have (or can’t get) full source and the vendor won’t provide WEC 2013 support, then you must either find another vendor, recreate the capability yourself, or the project is at a dead stop.  That is a big risk.  Maybe not insurmountable, but big.
  • Processor and BSP support.  The new hardware is using a processor that WEC 2013 doesn’t support out of the box.  It’s not an esoteric processor – it’s actually a pretty common Freescale processor – but the point is that support isn’t in the box.  Microsoft has said that support for it is “coming soon”, but they’ve not quantified “soon”, and their record on ship dates isn’t something I’d want to bet a business-critical project on.  So we must assume that we’d have to build BSP support for this processor ourselves.  This isn’t an impossible task – in fact it’s not even a new task – but it takes time, and it takes experience.  It also requires full source code to the existing BSP, including all of their existing drivers.  Again, because of the lack of binary compatibility, all drivers must be recompiled.  It’s very rare in CE to have source code for all peripherals – many silicon vendors just don’t supply it (no idea why, since it’s only useful with their silicon).  That means you need to get vendor support for WEC 2013 drivers, and my experience so far is that there aren’t many chip vendors supporting WEC 2013.  This is especially true for WiFi.  The selection of WiFi drivers for WEC 2013 is very, very sparse.  To make matters worse, Microsoft changed the NDIS driver model in WEC 2013 (from 5 to 6) *and* they completely dropped WZC support in favor of native WiFi.  So it’s a lot of work for WiFi chip vendors to add support, and if your application happened to use WZC for network status and control, you now get to rewrite all of that code.  Driver availability here is probably the biggest risk.  If you can’t get a driver for silicon that’s already on your board, especially for something as critical as WiFi, the project is at a dead stop (unless you want to respin the board for different silicon).
  • General OS support.  WEC 2013 is supported by Microsoft.  It’s not been out long, and they guarantee at least 10 years of support.  On paper that’s great, but really you need more than just Microsoft’s support to be successful – OS support without any peripherals isn’t overly helpful.  Yes, if you’re on an x86 and all you need is a serial port, then you can probably get by, but if you’re using ARM and other peripherals, some confidence that when parts go EOL you’ll be able to get replacements would be nice.  It’s only anecdotal evidence and a small subset of the market, but I know of exactly zero customers using WEC 2013.  In fact I can count on one hand the number of WEC 7 installations I’ve ever seen.  The general consensus that I see is that CE 6.0 R3 was solid, covered 90% of use cases and has peripheral drivers.  I’m still working with customers who are rolling out new CE applications and still adding new features – it’s just that they’re all based on CE 6.0.
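The P/Invoke risk is easiest to see in code.  Here’s a sketch of what a dependency audit looks for – note that vendorscan.dll and ScannerOpen are hypothetical stand-ins for any third-party native library, not real names from the project.  Only the coredll import is safe, because coredll is rebuilt as part of the WEC 2013 OS image:

```csharp
using System;
using System.Runtime.InteropServices;

public static class NativeDependencies
{
    // Safe: coredll.dll ships with (and is rebuilt for) every OS image.
    [DllImport("coredll.dll")]
    public static extern uint GetTickCount();

    // The risk: "vendorscan.dll" (hypothetical) was built against the old
    // C runtime.  Until the vendor - or you, with full source - recompiles
    // it for WEC 2013, this call can never succeed on the new OS.
    [DllImport("vendorscan.dll")]
    public static extern int ScannerOpen(int port);
}

public static class Program
{
    public static void Main()
    {
        // Enumerate the declared native imports; every target outside
        // coredll.dll is a recompile (and schedule) risk for the port.
        foreach (var method in typeof(NativeDependencies).GetMethods())
        {
            var attrs = method.GetCustomAttributes(typeof(DllImportAttribute), false);
            if (attrs.Length > 0)
            {
                var dll = ((DllImportAttribute)attrs[0]).Value;
                Console.WriteLine("{0} -> {1}", method.Name, dll);
            }
        }
    }
}
```

Walking the code base (or the assemblies, via reflection as above) and listing every DllImport target is a cheap first step before quoting any migration work.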

The Conclusion

So where does that leave us?  In this particular case, I can’t in good conscience recommend that the customer move to WEC 2013.  If it were my business, I definitely wouldn’t – the risks are just way too scary (unless they can get all of that source code).  What about all of the existing source code?  Well, Mono would help them migrate a lot of the business logic, but the UI is going to have to be completely rewritten whether they go iOS, Android or some other flavor of Linux.  Maybe HTML5 would make sense (though in general I think it’s overhyped, performs poorly compared to native apps, and sucks at getting to any actual device resources, which they need), but since theirs is a POS system, which is all about UI, a UI rewrite is going to be expensive.

Does this mean that WEC 2013 won’t ever be a good choice?  I honestly have no idea.  As I’ve mentioned before, I have zero visibility into what Microsoft is thinking or planning.  I do know that we still build all of our code to support the Compact Framework – though that’s because it generally makes porting to Mono simple.  I also know that I’ve been spending a lot more time working on porting our products to Mono and ramping up on Linux – more specifically Wind River Linux, but I’ve also been playing with Fedora, OpenSUSE and Ubuntu.  Just like our customers, we have to be forward-looking and mitigate risks if we’re going to keep paying the bills, and from my perspective WEC 2013 is a high-risk, low-reward path.  I’d say it’s telling that the anchor of our product line, Solution Engine, which is based on a Microsoft technology (C#), is tested and supported on Wind River Linux, but I’ve not even bothered to try running it on WEC 2013.

An end to Comment Spam?

One of the major pain points I had with the last blog engine – and really with the entire failed experiment that was the OpenNETCF Community Server, which replaced the OpenNETCF Forums – was SPAM.  Comment spam caused me to shut off comments in the blog.  Forum spam caused me to turn on Forum moderation.  I was getting thousands of spam posts a day, which caused me to say “screw this, I can’t wade through all of this for the few valid posts”, so I quit bothering to even moderate it; the site quickly went stale and eventually I just turned it off. I still have all of the content, though it’s spam-riddled and I’m not even sure what the value of old Compact Framework related questions is (more on this thought later).

At any rate, WordPress has a filtering plug-in called Akismet that says it has some form of algorithm that takes info from all sorts of WordPress sites, puts it together, and uses it to block comment and pingback SPAM.  It’s $5 a month, which is a bargain if it works and allows me to not think about dealing with spammers.  We’ll see how it goes.

A New Blog Engine, and hopefully more frequent posts

If you’ve followed this blog at all in the past, or even if you just look at the post history, you’ll see that my activity has been very, very slow over the past couple of years.  That’s really been the result of a few things.  First, I’ve been pretty busy doing actual work, but that’s a pretty poor excuse.

The larger problem was that the blog engine I was using – a version of dasBlog from probably 10 years ago – was just really, really clunky to use.  It was painful any time I wanted to post anything – especially if it had images or code.  And something in its pages would often cause client browsers to just hang.  Nothing like getting halfway through a long post and having the browser seize up, losing everything and leaving me cussing.

So over the past couple of days, I installed PHP, MySQL and WordPress on my server, then migrated all of the old dasBlog content to the new engine. At least I hope it all got migrated – if you are searching for something and can’t find it, let me know.  I still have full copies of the old install.

My plan now is to post more frequently.  You’ll still see code problems that I think are interesting, and I plan to do more parts of the “Software Development Series“, but I think I’ll also be posting more thoughts on business and strategy: why we make the decisions we do, and what I think I see in the tea leaves of our industry.

New IoC Sample : A Basic Wizard

I’ve been asked before about creating a Wizard framework using the IoC Library.  We’ve created several wizards in the past for both the desktop and the compact framework using IoC, but I never formally created a reference project.  Well now that someone asked again over on Stack Overflow, I decided I should actually put one together.

The Wizard reference app follows reasonably good coding practices and uses a Model-View-Presenter/Model-View-Controller pattern (though since it uses data binding, you might argue it’s more MVVM).  A Service handles storing all state info, and a Presenter handles the state-to-UI and UI-to-state data exchanges.
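The state/presenter split can be sketched in a few lines.  Note that these type names (WizardStateService, IWizardPageView and so on) are illustrative stand-ins, not the actual types from the reference project – get the Codeplex source for the real thing:

```csharp
using System;
using System.Collections.Generic;

// The Service owns all wizard state; pages come and go, state persists.
public class WizardStateService
{
    private readonly Dictionary<string, object> _state = new Dictionary<string, object>();
    public void Set(string key, object value) { _state[key] = value; }
    public object Get(string key) { return _state.ContainsKey(key) ? _state[key] : null; }
}

// A View exposes only what the Presenter needs - a Form would implement this.
public interface IWizardPageView
{
    string UserName { get; set; }
}

// The Presenter marshals data between state and UI in both directions.
public class WizardPagePresenter
{
    private readonly WizardStateService _service;
    private readonly IWizardPageView _view;

    public WizardPagePresenter(WizardStateService service, IWizardPageView view)
    {
        _service = service;
        _view = view;
    }

    public void Load() { _view.UserName = (string)_service.Get("UserName") ?? ""; }
    public void Save() { _service.Set("UserName", _view.UserName); }
}

// A trivial "view" standing in for an actual Form.
public class FakePageView : IWizardPageView
{
    public string UserName { get; set; }
}

public static class Program
{
    public static void Main()
    {
        var service = new WizardStateService();

        // Page one saves what the user typed...
        var page1 = new FakePageView { UserName = "alice" };
        new WizardPagePresenter(service, page1).Save();

        // ...and a later page (or a Back navigation) reloads it from the Service.
        var page2 = new FakePageView();
        new WizardPagePresenter(service, page2).Load();
        Console.WriteLine(page2.UserName); // alice
    }
}
```

Because the Presenter only talks to an interface, the same pattern works unchanged on the desktop and the Compact Framework, and the pages are unit-testable without any UI.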


Here are some screen shots of the end product:


To get the source code for this wizard, get the latest change set from the Codeplex project.

New Release: OpenNETCF Virtual Agent

I’ve published a release of the OpenNETCF Virtual Agent.  The Virtual Agent is a .NET implementation of an application using the MTConnect protocol for publishing data.  It uses the MTConnect Managed SDK for the publisher interface and comes with a Machine Simulator showing a simple, yet real-world example of how you might integrate it into your own factory floor.


OpenNETCF.ORM: Updated Release

I’ve published a new release of the OpenNETCF ORM framework.  This release has some minor bug fixes and adds support for a few new things (TimeSpan data, field defaults, etc).  See the check-in notes for more details.

This release supports SQL Compact 3.5 for both the CF and full (desktop) frameworks and is in heavy use in released commercial applications, so I’m very confident in its stability.  I still don’t have an implementation for Windows Phone yet (I’ve had a few volunteers, but no one has actually ever delivered anything).  If you’d like to help me out, I’d love to see some community involvement here so contact me and let me know.

Disable/Enable Network Connections under Vista

Note: This is content from a blog originally published by Neil Cowburn in June of 2008, recovered with the aid of the wayback machine.

I got an email last week asking about how to disable a particular network connection under Vista. The specific scenario, how to disable an active 3G connection, is not something I’m going to cover, but what I present below could be used as basis for that scenario.

With Vista, Microsoft introduced two new methods to the Win32_NetworkAdapter class under WMI: Enable and Disable. Before we can call either of those methods, we need to know how to enumerate the network connections.

The .NET Framework SDK provides a helpful utility called mgmtclassgen.exe, which can be used to create .NET-friendly wrappers of the WMI classes. Open up a Visual Studio command prompt and enter the following:

mgmtclassgen Win32_NetworkAdapter -p NetworkAdapter.cs

This will generate a file called NetworkAdapter.cs which will contain a C# representation of the WMI Win32_NetworkAdapter class. You can add this source code file to your C# project and then access all the properties without too much extra effort.

To filter and disable the specific adapters, you do something like this:

SelectQuery query = new SelectQuery("Win32_NetworkAdapter", "NetConnectionStatus=2");
ManagementObjectSearcher search = new ManagementObjectSearcher(query);
foreach (ManagementObject result in search.Get())
{
    NetworkAdapter adapter = new NetworkAdapter(result);

    // Identify the adapter you wish to disable here. 
    // In particular, check the AdapterType and 
    // Description properties.

    // Here, we're selecting the LAN adapters.
    if (adapter.AdapterType.Equals("Ethernet 802.3"))
    {
        adapter.Disable();
    }
}

Don’t forget to add a reference to System.Management.dll!

New in OpenNETCF SDF 2.0 – Imaging API Wrapper – part II

NOTE: This entry is originally from Feb 3, 2006 and was recovered from the Internet Wayback Machine on May 27, 2014 due to demand for the content. Some links, etc may no longer be valid.

As promised, here are some details on the OpenNETCF.Drawing.Imaging namespace. I’m going to demonstrate how to accomplish several tasks listed in the previous post as not supported by the CF Bitmap class.

0. Preface: helper classes

In the wrapper we introduce 2 helper classes – StreamOnFile and ImageUtils. The latter is simply a collection of high-level image processing methods. The former is an IStream implementation over a .NET Stream (including FileStream). The implementation is not complete, but it is sufficient for the Imaging API methods that expect an IStream parameter.

1. Thumbnails, loading parts of the large image

Loading an image in the Imaging API is achieved via calls to decoders – COM objects implementing the IImageDecoder interface. The basic imaging interface IImage uses decoders to load image data. Most of the decoders support loading a partial image, discarding the unnecessary data. E.g. if you need to load a 3000×2000 image into a 300×200 PictureBox control, it is obvious that you don’t need all 6MP of data taking a whopping 18 MB of RAM (24bpp). Moreover, most devices will simply throw an OutOfMemoryException if you try something like this. The decoder can be instructed to load an image of the required size so that it will skip over those pixels that don’t make it (or factor them into the interpolation process to scale the image more smoothly). Here is how we achieve it.

static public IBitmapImage CreateThumbnail(Stream stream, Size size)
{
    IBitmapImage imageBitmap;
    ImageInfo ii;
    IImage image;

    ImagingFactory factory = new ImagingFactoryClass();
    factory.CreateImageFromStream(new StreamOnFile(stream), out image);
    image.GetImageInfo(out ii);
    factory.CreateBitmapFromImage(image, (uint)size.Width, (uint)size.Height,
        ii.PixelFormat, InterpolationHint.InterpolationHintDefault, out imageBitmap);
    return imageBitmap;
}

Once we have the IBitmapImage object, we can convert it to a .NET Bitmap:

Bitmap bm = ImageUtils.IBitmapImageToBitmap(imageBitmap);

2. Image transformation (flip, rotate, gamma/brightness/contrast controls)

Imaging library offers a limited set of the image operations exposed via interface IBasicBitmapOps. These are also wrapped in the ImageUtils class so that you get the following methods:

public Bitmap RotateFlip(Bitmap bitmap, RotateFlipType type)
public Bitmap Rotate(Bitmap bitmap, float angle)
public Bitmap Flip(Bitmap bitmap, bool flipX, bool flipY)

Of course you are welcome to use the IBasicBitmapOps directly.

3. Image tags


4. Transparency and alpha blending

If you have a PNG image with alpha channel information and you load it into a Bitmap object, the transparency is immediately lost. Not so if using the IImage class.

IImage img;
ImagingFactory factory = new ImagingFactoryClass();
factory.CreateImageFromFile("rgba8.png", out img);

Bitmap imageBackground = new Bitmap("MyImage.bmp");
Graphics g = Graphics.FromImage(imageBackground);

IntPtr hDC = g.GetHdc();

// width and height are the dimensions to draw the image at
RECT rc = RECT.FromXYWH(200, 200, width, height);
img.Draw(hDC, rc, null); // null: no source rectangle, draw the whole image
g.ReleaseHdc(hDC);

The above code will transparently draw rgba8.png over the specified bitmap.