ORM Updates

Generally speaking, the OpenNETCF ORM library gets worked on offline.  We use it extensively in our Solution Engine product, so the “working” copy of the code resides on a separate source tree.  The good part of this is that it’s always getting tested, extended and fixed.  The downside is that I have to manually merge those changes back to the public source tree, so it probably doesn’t happen as often as it should.

At any rate, I did a large merge this morning and there are some cool new features that have been added, along with a long list of bug fixes.

The general bug fixes and enhancements are largely SQL Server related, since we did a large SQL Server push for a customer this year, and the ORM benefited from that.

The new features are twofold:

1. I added some rudimentary Connection Pool support.  The ORM itself can now keep a pool of connections to your target data store and will re-use them in a round-robin fashion.  It’s nothing overly complex, but if you’ve got a few different threads doing different work on the database, it improves performance (there’s a conceptual sketch of round-robin pooling below, after this list).

2. I added what I call a “Recovery Service” because I couldn’t come up with a better name.  This doesn’t restore data; instead, it attempts to recover from failures when it can.  It’s really useful for remotely connected data stores.  For example, if you are using a SQL Server or DreamFactory DSP and the server or network goes down, your data actions are going to fail and you have to add a load of handling logic for that.  The Recovery Service is designed to help you out here.  It will look for those types of failures and cache your operations until the store comes back online, at which time it will apply the changes.  Sort of a “store-and-forward” capability, but at the command level and in RAM (the second sketch below illustrates the idea).
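First, the round-robin pooling idea in sketch form.  To be clear, this is *not* the ORM’s actual API – the class name, factory delegate and pool size are all invented for illustration:

    using System;
    using System.Data;

    // Conceptual sketch of round-robin connection pooling -- not the ORM's API.
    public class ConnectionPoolSketch
    {
        private readonly IDbConnection[] m_connections;
        private readonly object m_syncRoot = new object();
        private int m_next;

        public ConnectionPoolSketch(Func<IDbConnection> factory, int poolSize)
        {
            // open a fixed set of connections up front
            m_connections = new IDbConnection[poolSize];
            for (int i = 0; i < poolSize; i++)
            {
                m_connections[i] = factory();
                m_connections[i].Open();
            }
        }

        // each caller gets the next connection in the ring, so several threads
        // can work against the store without serializing on a single connection
        public IDbConnection GetConnection()
        {
            lock (m_syncRoot)
            {
                var connection = m_connections[m_next];
                m_next = (m_next + 1) % m_connections.Length;
                return connection;
            }
        }
    }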
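And here’s the store-and-forward idea behind the Recovery Service, again purely as an illustration and *not* the real API – failure detection is simplified down to a bare catch:

    using System;
    using System.Collections.Generic;

    // Illustration of the store-and-forward idea only -- not the real Recovery Service API.
    public class RecoverySketch
    {
        private readonly Queue<Action> m_pending = new Queue<Action>();

        public void Execute(Action storeOperation)
        {
            try
            {
                storeOperation();
            }
            catch (Exception)
            {
                // a real implementation would look specifically for connectivity failures
                m_pending.Enqueue(storeOperation); // cache the command in RAM
            }
        }

        // called once the store is detected to be back online
        public void ReplayPending()
        {
            while (m_pending.Count > 0)
            {
                m_pending.Dequeue()();
            }
        }
    }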

The new release is over on CodePlex in source and binary format.

OpenNETCF Scanner Compatibility Library

Some days I think I have too much code “lying around”.  As you would expect from many years as a developer, I have utility libraries for all sorts of tasks.  Generally, when I think something is likely to be useful to others, I like to make it publicly available for anyone to use – just take a look at the list of CodePlex projects I admin.

This morning I saw a question on StackOverflow about intelligently detecting a platform and loading the proper binaries for it.  In this case it was specific to doing so with Windows Mobile barcode scanners.  I immediately thought, “hey, I have a library for that” and went to answer and give a link.  Except the link didn’t exist.  I never created the open source project for it, so the code has just been sitting here doing nothing.

Yes, this code is probably 5 years or more past its prime useful period due to the decline in use of Windows Mobile, but hey, I just used it on a project last week, so it’s got some life left in it.

So, here’s yet another open source library from my archive – the OpenNETCF Barcode Scanner Compatibility Library.

MJPEG (and other camera work)

Back in 2009 I was doing a fair bit of work for some customers in the security field.  I ended up doing some proof-of-concept stuff and ended up with some code that, while not groundbreaking, might at least be useful to others.  It’s really too small to bother starting a CodePlex project for, unless I get some pull requests, in which case I’ll turn it into a full project.  In the meantime, feel free to download the source.

Developing Compact Framework App in Visual Studio 2013

A friend, colleague and fellow MVP, Pete Vickers, brought an interesting product to my attention this weekend.  iFactr has a Compact Framework plug-in for Studio 2013.  I’ve not tried the plug-in, so this isn’t an endorsement, just a bit of information.  I also don’t know how they’re pulling it off.  It looks like they have WinMo 6.5 and emulator support, and it requires an MSDN subscription.  I suspect that it requires you to install Studio 2008 so you get the compilers, emulators and all of that goodness on your development system, and it then hooks into those pieces from Studio 2013.

It most certainly is not adding any new language features – you’re still going to be targeting CF 3.5 in all its glory – but the ability to use a newer toolset is a welcome addition.  If they’re somehow pulling it off without requiring Visual Studio 2008, that will be really nice.  If you’ve tried the plug-in, let me know how it went in the comments.

Windows CE on Arduino?

If you do much “maker” stuff, you’re probably aware of the Netduino, an Arduino-compatible board that runs the .NET Micro Framework.  Cool stuff and it allows you to run C# code on a low-cost device that could replace a lot of microcontroller solutions out there.

It just came to my attention today that there’s a new game in town – 86duino, an Arduino-compatible x86 board.  Say what?!  Basically we have an Arduino-sized, Arduino-cost ($39 quantity-1 retail price, hello!) device that can run any OS that runs on x86.  Let’s see, an OS that runs on x86, does well in a headless environment, runs managed code, can be real-time, and has a small footprint and low resource utilization?  How about Windows CE?  There’s no BSP for it yet that I can see, but it’s x86, so the CEPC BSP is probably most of what you need for bring-up.

I’ll be looking to build up a managed code library to access all of the I/O on this and some popular shields.  Any requests/thoughts on “must-have” shield support?

Getting Mouse Events in a Compact Framework TextBox

Yesterday I got a support request for the Smart Device Framework.  The user was after a seemingly simple behavior – they wanted to get a Click or MouseDown event for a TextBox in their Compact Framework application so they could select the full contents of the TextBox when a user tapped on it.

Of course on the desktop this is pretty simple: you’d just add a handler to the Click event and party on.  Well, of course the Compact Framework can’t be that simple.  The TextBox has no Click, MouseUp or MouseDown events.  Thanks, CF team.  There are some published workarounds – one on CodeProject and one on MSDN – but they involve subclassing the underlying native control to get the WM_LBUTTONDOWN and WM_LBUTTONUP messages, and honestly that’s just not all that fun.  Nothing like making your code look like C++ to kill readability.

For whatever reason (I can’t give a good one offhand) the TextBox2 in the Smart Device Framework also doesn’t give any of the mouse events, *but* it does offer a really easy way to add them, since it allows simple overriding of WndProc.  Basically you just have to create a new derived control like this:

    // TextBox2 comes from OpenNETCF.Windows.Forms; the WM message enum is in OpenNETCF.Win32
    public class ClickableTextBox : TextBox2
    {
        public event EventHandler MouseDown;

        protected override void WndProc(ref Microsoft.WindowsCE.Forms.Message m)
        {
            base.WndProc(ref m);

            switch ((WM)m.Msg)
            {
                // do this *after* the base so it can do the focus, etc. for us
                case WM.LBUTTONDOWN:
                    var handler = MouseDown;
                    if (handler != null) handler(this, EventArgs.Empty);
                    break;
            }
        }
    }

And then using it becomes as simple as this:

    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();

            // textBox2 here is a ClickableTextBox (added in the designer or by hand)
            textBox2.Text = "lorem ipsum";
            textBox2.MouseDown += new EventHandler(textBox2_MouseDown);
        }

        void textBox2_MouseDown(object sender, EventArgs e)
        {
            textBox2.SelectAll();
        }
    }

Being an MVP

I’ve been a Microsoft “Most Valuable Professional,” or MVP, for a long time now.  So long that I actually had to go look up when I first got the award (2002).  Over those years, the program has changed, the technology for which I am an MVP has changed, and I’m certain that I’ve changed.

When I first got my MVP status, it was not long after I co-authored a book on embedded Visual Basic with a friend, Tim Bassett, and at the time I was being pretty prolific in that extremely niche segment of the development community, publishing articles on DevBuzz and answering questions in the now-defunct Usenet groups for embedded development (as a side note, Microsoft has tried many incarnations to replace those groups, and has never found one that was as easy to use or navigate).

I remember that I felt a bit out of place at my first MVP Summit – the annual meeting of all MVPs in Redmond – because I was seeing and meeting all sorts of people whose work I had been relying on for information since I started developing.

It wasn’t long before my focus changed from eVB to the .NET Compact Framework – largely because Microsoft killed eVB as a product.  I embraced the CF and continued writing articles for MSDN, answering loads of questions, and even doing the speaking circuit at what was then called Embedded DevCon.  I helped “start” OpenNETCF, which at the time was really just a collection of Compact Framework MVPs trying to answer questions, cover the stuff Microsoft was missing, and not duplicate effort.

In those early days of the CF, being an MVP was fantastic – it really was.  A few times a year the MVPs and the product teams would actually get together in the same room.  They would tell us what they were planning.  They would ask for feedback.  They’d listen to our ideas and our criticisms.  You could count on seeing direct results from those meetings in future releases of products coming out of Redmond, and so the MVPs continued to pour effort into keeping the community vibrant and well-informed.

Back in those days I knew both PMs and developers from most of the teams involved in the products I used.  I knew people on the CF team.  The test team.  The CE kernel and drivers teams.  The CE tools team.  The Visual Studio for Devices team.  And when I say I “knew” them, I don’t mean that I could point them out at a conference; I mean that at the conference we went to the bars together.  I had their phone numbers and email addresses, and they would respond if I needed to get a hold of them.

Those, I now know, were the golden days of the program, at least from the embedded perspective.  It could well be that C# MVPs or some other group still sees that kind of interaction, but the embedded side doesn’t see much of that any longer.  In fact, I know very few people on any of the teams, and I guess that most of those people probably wouldn’t answer my emails.

What’s changed, then?  Well, Microsoft is a business, of course.  A large one at that.  As such, they have constant internal churn.  Most of the people I once knew are still at Microsoft – they’ve just moved on to other things, other groups and other positions – and the people that came in to replace them didn’t necessarily have the mechanism (or maybe the desire) to get to know the community of experts in the field.  The teams also shifted a lot – Embedded got moved from one group to another and to another inside of Microsoft, and the business unit got less and less priority for quite some time – especially when it and the group shipping quantity (Windows Mobile) were separated.  The Embedded DevCon became the Mobile and Embedded DevCon, then it went away.  Budgets shrank.  Community interaction receded.

I can’t say I fault Microsoft for this.  After all, they are a business, and they make decisions based on what provides shareholder value.  I may disagree with some, or even many of their decisions on the embedded side, but I don’t work at Microsoft, and I’m definitely not in their management team, so my opinions simply do not matter.

So why do I bother mentioning all of this, if not to complain?  Because I want you, as a reader, to understand where I come from in some of the articles I’ll be posting over the coming weeks.  I no longer have any inside information about the future of Windows CE or the Compact Framework.  I don’t know what Microsoft intends to do or not do.  I find out things about the technologies at the same time as the rest of the world.

This means that if you see me write something about the future of Windows CE, now called Windows Embedded Compact (and yes, expect to see something on that soon), it’s all my personal opinion, based on history, *not* on any insider information.  If what I predict happens, it’s only because my guesses were educated and/or lucky.  If it doesn’t, it’s not because I was trying to be misleading; it’s because I simply got it wrong.  Basically, as with anything you read, I expect you to use your own critical thinking skills, and I fully encourage discussion and debate.

Raw Flash Access from a CF app?

A new customer came to me late last week with an interesting problem.  They have hundreds of Motorola/Symbol MC70 barcode scanners in the field and occasionally the flash memory on the devices gets corrupted.

The current “solution” these guys are using involves removing the flash chip from the board, reprogramming it in a separate device, re-balling the BGA pins, and re-soldering it to the board.  That explains why they desperately want an application that can do it in software.

They know the range where the corruption occurs and wanted an application that would allow a user to read, update and rewrite the corrupted bytes in flash.  They had talked with five separate developers before they found me, and all five had agreed that it was impossible, so naturally I took that as a challenge.

First, there are lots of things to know about how flash access works.  Most importantly, it’s not like RAM.  You can’t just map to it and then go about your merry way doing 32-bit reads and writes.  You can read it that way, sure, but writing is a whole new game.  Flash is broken up into “blocks” (which aren’t even always the same size – in the case of the MC70, the first 4 blocks are 64k long and the rest are 256k long), and writes must be done following this general procedure (sketched in code below):

  1. Read the *entire* block that contains the byte you want to change into RAM
  2. Change the flash to lock mode (a flash register write)
  3. Unlock the block of flash (another register write)
  4. Change the flash to erase mode (register write)
  5. Erase the *entire* block of flash (which writes all FFs to it)
  6. Change the flash to write mode (register write)
  7. Update the RAM buffer with your changes
  8. Write the *entire* block back to flash
  9. Tell the flash to commit (register write)
  10. Wait for the flash to finish (register read)
  11. Put the flash back into read mode (register write)

Oh, and if you get any of this wrong, you’ve made yourself an expensive brick.  The only solution at that point is the de-soldering and reprogramming route, and I don’t have that kind of hardware in my office.
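For the curious, here’s roughly what that sequence looks like in code.  This is a heavily simplified sketch, not the code I shipped: the command values come from the common Intel StrataFlash command set, and the Flash helper class (ReadBlock, WriteCommand, ProgramBlock, ReadStatus) is hypothetical – the real work of mapping the flash’s physical address into the process and getting the register access exactly right is where the “brick” risk lives:

    // Heavily simplified sketch of the block-write sequence.  Command values follow
    // the common Intel StrataFlash command set; the Flash helper class is hypothetical.
    public static void UpdateBlock(uint blockAddress, int offset, byte[] newBytes)
    {
        var block = Flash.ReadBlock(blockAddress);            // 1. read the *entire* block into RAM

        Flash.WriteCommand(blockAddress, 0x60);               // 2. lock mode
        Flash.WriteCommand(blockAddress, 0xD0);               // 3. unlock this block
        Flash.WriteCommand(blockAddress, 0x20);               // 4. erase mode
        Flash.WriteCommand(blockAddress, 0xD0);               // 5. erase the *entire* block (all FFs)

        Flash.WriteCommand(blockAddress, 0x40);               // 6. program (write) mode
        Buffer.BlockCopy(newBytes, 0, block, offset, newBytes.Length); // 7. apply edits to the RAM copy
        Flash.ProgramBlock(blockAddress, block);              // 8/9. write the whole block and commit

        while ((Flash.ReadStatus(blockAddress) & 0x80) == 0)  // 10. poll the status register until ready
        {
        }

        Flash.WriteCommand(blockAddress, 0xFF);               // 11. back to read-array mode
    }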

So I started writing the app Monday morning, using C# since I had to create a UI for the editor, and on Wednesday morning this is what I delivered:

FlashEdit

So, in just 2 days I did what was “impossible”. I not only wrote all of the flash access code, I also wrote a hex editor control and an app UI to make use of the flash library.

On Software Development: Moving from statics or instances to a DI container

I’ve recently started refactoring a customer’s code base for a working application.  They recognize the need to make their code more extensible and maintainable so I’m helping to massage the existing code into something that they will be able to continue shipping and upgrading for years to come without ending up backed into a corner.

One of my first suggestions was to start eliminating the abundance of static variables in the code base.  In this case, static classes and methods abound, and it looks like they were used as a quick-and-dirty mechanism to provide singleton behavior.  Now I’m not going to go into depth on why an actual singleton might have been better, or on the pitfalls of all of these statics.  Write-ups on that kind of thing abound in books and online.

Instead, let’s look at what it means to migrate from a static, an instance or a singleton over to using a DI container, specifically OpenNETCF’s IoC framework.

First, let’s look at a “service” class that exposes a single integer and how we might consume it.

    class MyStaticService
    {
        public static int MyValue = 1;
    }

And how we’d get a value from it:

    var staticValue = MyStaticService.MyValue;

Simple enough.  Some of the down sides here are:

  • There’s no way to protect the field’s value from unwanted changes
  • To use the value, I have to have a reference to the assembly containing the class
  • It’s really hard to mock and cannot be moved into an interface

Now let’s move that from a static to an instance property on a constructed class:

    class MyInstanceService
    {
        public MyInstanceService()
        {
            MyValue = 1;
        }

        public int MyValue { get; set; }
    }

Now we have to create the class instance and later retrieve the value.

    var service = new MyInstanceService();

    // and at a later point....
    var instanceValue = service.MyValue;

We’ve got some benefit from doing this.  We can now control access to the underlying value, making the setter protected or private, and we’re able to do bounds checking, etc.  All good things.  Still, there are downsides:

  • I have to keep track of the instance I created, passing it between consumers or maintaining a reachable reference
  • I have no protection from multiple copies being created
  • The consumer must have a reference to the assembly containing the class (making run-time plug-ins very hard)

Well, let’s see what the Singleton pattern buys us:

    class MySingletonService
    {
        private static MySingletonService m_instance;

        private MySingletonService()
        {
            MyValue = 1;
        }

        public static MySingletonService Instance
        {
            get
            {
                if (m_instance == null)
                {
                    m_instance = new MySingletonService();
                }
                return m_instance;
            }
        }

        public int MyValue { get; set; }
    }

And now the consumer code:

    var singletonValue = MySingletonService.Instance.MyValue;

That looks nice from a consumer perspective.  Very clean.  I’m not overly thrilled about having the Instance accessor property, but it’s not all that painful.  Still, there are drawbacks:

  • The consumer must have a reference to the assembly containing the class (making run-time plug-ins very hard)
  • If I want to mock this or swap implementations, I’ll have to go all over my code base replacing the calls to use the new instance (or implement a factory).

How would all of this look with a DI container?

    interface IService
    {
        int MyValue { get; }
    }

    class MyDIService : IService
    {
        public MyDIService()
        {
            MyValue = 1;
        }

        public int MyValue { get; set; }
    }

Note that the class is interface-based and we register the instance with the DI container by *interface* type.  This allows us to pull it back out of the container later by that interface type.  The consumer doesn’t need to know anything about the actual implementation.

    // Note that the Services collection holds (conceptually) singletons.
    // Only one instance per registered type is allowed.  If you need multiple
    // instances, use the Items collection, which requires a unique string key for each instance.
    RootWorkItem.Services.AddNew<MyDIService, IService>();

    // and at a later point....
    var diValue = RootWorkItem.Services.Get<IService>().MyValue;

Mocks, implementation changes based on environment (like different hardware) and testing become very easy.  Plug-in and run-time feature additions based on configuration or license level also are simplified.
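As a quick illustration of that last point, swapping in a test double is just a different registration – no consumer code changes.  MockService below is hypothetical, and I’m assuming an Add overload for registering an existing instance, alongside the AddNew shown above:

    // Hypothetical stand-in for testing; it just needs to implement IService.
    class MockService : IService
    {
        public int MyValue { get { return 42; } }
    }

    // In test setup, register the mock under the same interface type
    // (assumes an Add overload for existing instances, next to AddNew above).
    RootWorkItem.Services.Add<IService>(new MockService());

    // Consumers still resolve by interface and never know the difference
    var testValue = RootWorkItem.Services.Get<IService>().MyValue;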