Content Migration Update

It’s been an insanely busy week.  You know – one of those weeks where you burn 60 hours but feel like you maybe got 8 hours of productive time in?  I’m fighting the good fight trying to understand Mono and Linux, and it’s sure to be a topic here soon, but for now I want to give an update on the blog content migration.

I’m pleased to say that I’m only seeing about 1200 404s a day (that’s a huge improvement) and I think nearly all of the 404s to my blog are taken care of.  There are still some permalinks that I have to resolve – dasBlog used GUIDs, so tracking them back to something meaningful is a real pain in the ass – but for the most part it’s done.  I still get a lot of 404s for the blogs of Alex (both Feinman and Yakhnin), Neil Cowburn and Mark Arteaga, none of which have been active in years, but which evidently contain some worthwhile content.  I have to figure out how to get either a redirect set up or a side-by-side site install working (I’m not a server admin – in fact I hate server administration almost as much as accounting or installing fiberglass insulation), at which time those will come back online.

In the meantime, feedback on anything is welcome, and if you want to know more about what we’re up to, drop us a line.  I’m taking tomorrow off to install some doors – really, hanging pre-hung physical doors in a building – but will be back at it Friday.  Tschüss.

Windows Phone in a LOB Application

Microsoft recently published some news about Windows Phone being used in a line-of-business app for Delta Airlines.  That’s great news for the platform, and I’d like to say congrats to the team that delivered it.  However (isn’t there always a however?), it brings up some questions.  The card reader peripheral is not complicated – it’s just using the mic input as a sort of old-school modem.
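For the curious: audio-jack readers like this one put the magstripe signal straight onto the mic line as audio.  Track data is F2F (Aiken biphase) encoded – a bit cell with a single transition is a 0, a cell with an extra mid-cell transition is a 1 – so “demodulating” it is mostly a matter of timing zero crossings.  I have no idea what’s actually inside the vendor’s SDK, but a toy decoder for that class of signal (names and thresholds are mine) looks something like this:

using System.Collections.Generic;

static class SwipeDecoder
{
    // Measure the gaps (in samples) between successive zero crossings of
    // the mic signal.  Each gap is either a full bit cell or half of one.
    static List<int> CrossingIntervals(float[] samples)
    {
        var gaps = new List<int>();
        int last = -1;
        for (int i = 1; i < samples.Length; i++)
        {
            if ((samples[i - 1] < 0f) == (samples[i] < 0f)) continue;
            if (last >= 0) gaps.Add(i - last);
            last = i;
        }
        return gaps;
    }

    // F2F decode: a '0' is one long gap, a '1' is two short gaps.  The
    // cell length is tracked as we go because swipe speed isn't constant;
    // magstripe tracks lead with a run of zeros, which bootstraps it.
    public static IEnumerable<int> DecodeBits(float[] samples)
    {
        var gaps = CrossingIntervals(samples);
        if (gaps.Count == 0) yield break;

        double cell = gaps[0];
        for (int i = 0; i < gaps.Count; i++)
        {
            if (gaps[i] < cell * 0.75 && i + 1 < gaps.Count)
            {
                yield return 1;               // two half-cells = 1
                cell = gaps[i] + gaps[++i];
            }
            else
            {
                yield return 0;               // one full cell = 0
                cell = gaps[i];
            }
        }
    }
}

From there it’s just framing – find the start sentinel, group the bits into characters, and verify parity and the LRC – which is exactly the sort of thing an old-school modem would have done.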

The big questions are around device lock-down.  Generally speaking, in an LOB deployment you don’t want users messing around with – or even being offered access to – applications other than your own.  The way Windows Phone works, at least for consumers, you can’t lock down the device.  This means the flight attendants can use the device to play games.  They could install other apps (apps that could also read the card data from the reader – hello, security problem?).  They could uninstall the LOB app itself.  The industrious might even be able to swap out the existing app for one that looks the same but sends the card data somewhere else.  How is the device protected against these types of scenarios?

So yes, this is cool, but I’d really like to know more about how they technically solved some of these concerns, if they did at all.

Update

For those interested, the card reader is from Anywhere Commerce.  They do have off-the-shelf hardware that they say has a Windows Phone SDK.  I’ve not used it, so I can’t say anything about it, but if you’re in the market for a magstripe reader for Windows Phone, they’re probably worth a look.

RestSharp: Don’t Serialize null Properties

I’m working on building a .NET API for the DreamFactory Cloud Service, with the intent of creating a full ORM implementation for it.  They use a pretty straightforward REST interface for the service, so I decided that I’d give RestSharp a try for the communications layer.  As seems to be typical for me, I ran into trouble almost immediately.  First, the API docs weren’t abundantly clear.  The DreamFactory support team, however, was fantastically responsive, and we got past the documentation confusion.

The next problem I ran into was the API for creating tables.  The DreamFactory service has, as you would expect, a boatload of parameters for each Field in a Table.  Many of the fields are interdependent – for example, you would only set the Scale property if the DataType is numeric, and MaxLength only makes sense for strings.

[SerializeAs(Name = "field")]
internal class FieldDescriptor
{
    public string name { get; set; }
    public string label { get; set; }
    public string type { get; set; }
    public string db_type { get; set; }
    public int length { get; set; }
    public int precision { get; set; }
    public int scale { get; set; }
    public bool required { get; set; }
    public bool allow_null { get; set; }
    public bool fixed_length { get; set; }
    public bool supports_multibyte { get; set; }
    public bool auto_increment { get; set; }
    public bool is_primary_key { get; set; }
    public bool is_foreign_key { get; set; }
    public string ref_table { get; set; }
    public string ref_fields { get; set; }
    public string validation { get; set; }
}

Well, RestSharp serializes *all* properties in your class to JSON before shipping it across the wire, and the service was not appreciative of me sending empty values (or worse, zeros for numerics and false for booleans).

I ended up forking the RestSharp code base and adding in a SerializerOptions class, and the plumbing for it, that allows you to ignore/skip serialization of null values. So I just changed the class definition to use nullable properties:

[SerializeAs(Name = "field")]
internal class FieldDescriptor
{
    public string name { get; set; }
    public string label { get; set; }
    public string type { get; set; }
    public string db_type { get; set; }
    public int? length { get; set; }
    public int? precision { get; set; }
    public int? scale { get; set; }
    public bool? required { get; set; }
    public bool? allow_null { get; set; }
    public bool? fixed_length { get; set; }
    public bool? supports_multibyte { get; set; }
    public bool? auto_increment { get; set; }
    public bool? is_primary_key { get; set; }
    public bool? is_foreign_key { get; set; }
    public string ref_table { get; set; }
    public string ref_fields { get; set; }
    public string validation { get; set; }
}

And then when you serialize, you simply set the option (it’s off by default so as to not change the existing default behavior) and go about serializing like normal.

// build up a request to create the table
var request = new RestRequest("/rest/schema", Method.POST);
request.AddHeader("X-DreamFactory-Application-Name", Session.ApplicationName);
request.AddHeader("X-DreamFactory-Session-Token", Session.ID);
request.RequestFormat = DataFormat.Json;
request.JsonSerializer.ContentType = "application/json; charset=utf-8";
request.JsonSerializer.Options = new SerializerOptions()
{
    SkipNullProperties = true
};
request.AddBody(tableDescriptor);

If you’re interested, I’ve pushed this back up to GitHub as pull request #404.
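
As an aside, if running a forked RestSharp feels too heavy, the stock library will also accept a custom serializer – request.JsonSerializer is just an ISerializer.  Here’s a minimal sketch (the wrapper class name is mine) that gets the same null-skipping behavior out of Json.NET’s NullValueHandling:

using Newtonsoft.Json;
using RestSharp.Serializers;

// Hypothetical wrapper: hands serialization off to Json.NET, which is
// told to leave null properties out of the payload entirely.
public class NullSkippingSerializer : ISerializer
{
    private readonly JsonSerializerSettings m_settings = new JsonSerializerSettings
    {
        NullValueHandling = NullValueHandling.Ignore
    };

    public string ContentType { get; set; }
    public string DateFormat { get; set; }
    public string Namespace { get; set; }
    public string RootElement { get; set; }

    public NullSkippingSerializer()
    {
        ContentType = "application/json; charset=utf-8";
    }

    public string Serialize(object obj)
    {
        return JsonConvert.SerializeObject(obj, m_settings);
    }
}

// usage, in place of the Options property from my fork:
// request.JsonSerializer = new NullSkippingSerializer();

One caveat: Json.NET knows nothing about RestSharp’s [SerializeAs] attribute, so if you rely on it for property naming you’d need the equivalent [JsonProperty] attributes instead.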

New Blog Page: Archives

I’ve been scrubbing through all of the 404 errors the blog migration has generated, trying to wire up old links with content.  That’s not too difficult – it’s just a long, manual process of wading through the logs, finding the proper target and adding redirects.  Some items, however, aren’t as simple – things like links to articles on the old Community site.  For those, I’ve been digging through old backups trying to find the content.  For the items I’m finding, I’m putting links in a new page called “Archives” (you’ll see a link in the menu above as well).

If there’s some content you’d like to see that I’m missing, just let me know.

Windows CE: Risks v. Benefits

Jan 30, 2014 Update: A few vendors are now supporting, or planning support for, Wi-Fi for WEC, which would likely change a significant portion of the analysis below.  It indicates that not only can we get new Wi-Fi hardware, but also that third-party vendors are once again taking interest in Windows CE.  See my comments and links here.

I’ve been developing applications for Windows CE for more than 12 years now, and for a pretty good chunk of that time I also created the OS builds themselves for custom hardware.  Still, if you’re running a business it’s a good idea to occasionally do a risk analysis of your dependencies, even if those dependencies define a core part of your business or expertise.  I recently took a hard look at both the Compact Framework and Windows CE (or Windows Embedded Compact, as it’s now known) to identify some of the risks and benefits of choosing them as the foundation of a product or business.

Since I like dealing with concrete examples instead of abstract what-ifs, let’s look at it from a specific point of view – a customer of mine that wanted a turnkey upgrade for a product platform.  This customer builds and ships tens of thousands of custom point-of-sale devices each year.  A large portion of their current platforms use a home-brewed Compact Framework wrapper sitting on top of Silverlight for Windows Embedded, with a pretty large infrastructure code base written in C# below that.  Historically they have been running on x86 hardware.

Recently they decided to create and release a new product: an ARM-based tablet that will run the same application, the goal being that the new hardware will give them room to expand into new markets and use cases.  They designed the hardware, but contracted a third party to deliver a Windows Embedded Compact 7 OS for the platform.  Once they got everything up and running, it turned out that the existing application ran, but right on the edge of acceptable performance.  Adding more to the application would likely lead to customer dissatisfaction.  That’s not a good place to start with a new product, so they had me come in to see if I could help improve the application’s performance and make some recommendations.

The application code base had some areas that could be improved, but that was expected.  Pretty much every code base can be improved, including our own, and anything that has gone through several revisions over many years is always going to have room for improvement.  That said, the code wasn’t bad – in fact it was pretty good.  There really was no low-hanging fruit to be picked.  Any performance gains were going to take a fair amount of work to achieve, and we’d be measuring the improvements in milliseconds.  With a month of work, I guessed that we might be able to get a 5% improvement, and that’s probably a best case.

So how could we get large improvements?  Well, WEC 2013 has been generally available since June.  It has a new Silverlight for Windows Embedded engine which, if the past is any indicator, probably has a decent performance improvement over previous versions.  It includes Compact Framework 3.9, which picked up lots of optimizations from Microsoft’s Windows Phone 7 work as well as – probably most importantly – multi-core support.  The new platform has 4 processor cores.  My guess is that moving to WEC 2013 and CF 3.9 on this hardware would conservatively improve the application’s performance by 20%.

So the decision to move is simple, right?  They already have WEC 7 running, so porting the OS shouldn’t be too much work, and managed code should just transfer right over.  They asked for a quote to do the whole thing, and that’s when I started doing a real risk analysis.  The decision, it turns out, is far from simple.

The Question

So what’s the big question here?  What, exactly, do we need to understand to be able to estimate the work involved, the cost to the customer, the risks, and the recommendations to present?  In this case there are really three options to look at.  The first option, which is almost always an option in these cases (though not necessarily a good one), is to do nothing: keep the hardware as-is, the OS as-is and the application as-is.  The cost is zero and the schedule impact is zero, so you shouldn’t just hand-wave it away.  The problem here is that the performance, and therefore the quality, of the “do nothing” option isn’t acceptable, so we’d really like another option.

Option 2 is to port the OS and application to WEC 2013 and CF 3.9.  What are the benefits of doing so?  What are the risks to the schedule, the budget and the project as a whole?

Option 3 would be a nebulous “port it to another platform altogether” option – maybe Android or some other embedded Linux platform.  I wasn’t tasked to do this, or even to quote it, but it’s certainly an option for them, and we should at least consider the risks and benefits of the choice if we’re to make a recommendation that’s good for their business.  While my core competency is in Microsoft technologies, I don’t have any qualms about recommending other technologies if it makes sense for the customer.  We integrate Linux, Android and iOS support into some of our products because they make sense in many cases.

Really, though, we need to come up with a cost/benefit analysis of Option 2 and then present it while keeping in mind that Options 1 and 3 are still on the table.

The Benefits

So let’s look at what Option 2 brings to the table. Some of the benefits I’ve already talked about, but it’s a good idea to look at the larger picture and see what shakes out.

  • We get multi-core support.  This is likely to give us a significant performance boost, especially considering that the application makes heavy use of multiple threads, and a code analysis shows that the UI rendering thread is what’s eating a large amount of the processor time on the existing system.  (There’s a sketch of what this means for application code after this list.)
  • We get the CF 3.9 optimizations.  Microsoft spent a lot of time squeezing performance out of the CF for Windows Phone 7 (before abandoning it for Windows Phone 8), and those performance benefits look promising – probably double-digit performance increases in several areas.
  • The existing application code base ports easily – after all, it’s managed code.  The expectation is that no rewriting should have to happen (more on this expectation in a bit).  Yes, some optimization could be done, but the app should just run.
  • The tools are first class, meaning developers are more productive.  Yes, we could argue for days about Eclipse versus Studio, or Platform Builder versus whatever tool chain another OS supports, but I have experience in all of them and my experience is that Studio is flat out better.  The debugging experience is better at the app and OS level, and this saves a lot of developer time.
  • The current team is experienced with Microsoft technologies.  They know Windows. They know Studio.  They know Silverlight.  They know Win32.  They know C#.  Having to learn a new OS, a new development environment, a new driver and kernel architecture and a new development language would be very, very costly – especially when time to market is important.
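
As a concrete illustration of that first bullet: nothing in the application has to change to benefit from multiple cores, but the standard CF discipline of keeping work off the UI thread pays off even more when the scheduler has four cores to spread threads across.  A generic sketch of the pattern (hypothetical names, plain WinForms/CF idiom – not their actual SWE wrapper code):

using System;
using System.Threading;
using System.Windows.Forms;

public class SalesForm : Form
{
    private readonly Label m_totalLabel = new Label();

    public SalesForm()
    {
        Controls.Add(m_totalLabel);
    }

    public void RefreshTotal()
    {
        // Push CPU-bound work onto a pool thread so the UI/render thread
        // keeps a core to itself...
        ThreadPool.QueueUserWorkItem(delegate
        {
            decimal total = ComputeOrderTotal();

            // ...then marshal just the result back to the UI thread.
            Invoke(new EventHandler(delegate
            {
                m_totalLabel.Text = total.ToString("C");
            }));
        });
    }

    private decimal ComputeOrderTotal()
    {
        // stand-in for the real (expensive) pricing logic
        decimal total = 0m;
        for (int i = 0; i < 100000; i++) total += 0.01m;
        return total;
    }
}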

The Risks

And now the fun part.  The part where there are surprises, and the reason you’re likely reading this article.  What are the risks involved with the migration?  After all, it’s just a port of an OS to a newer version and the application shouldn’t need changing, so what could possibly be a scary enough risk to make us even question the port?  It turns out there are lots of them.

  • When moving to WEC 2013, you get code compatibility, but not binary compatibility.  <— Read that again.  This is a huge, huge risk in many cases.  Let’s look at why.  The application is managed code, so it compiles down to MSIL, which is portable, right?  Actually, yes.  In fact, I call bullshit on Microsoft’s statement that CF applications aren’t binary compatible, because the only way to break that would be to deviate from the ECMA spec or to remove opcode support – I think they just say it’s not supported because they don’t test it, and their legal department therefore tells them they can’t say it’s supported.  However, the lack of compatibility makes a huge difference on the native side.  What exactly broke binary compatibility, anyway?  Well, WEC 2013 ships with a new C runtime, and all older binaries were built against the old runtime, which is baked right into the OS (so you can’t ship the old runtime with your app).  What this means is that all native code has to be recompiled.  If your application uses *any* P/Invokes to something outside the OS (so not in coredll), that DLL must be recompiled for WEC 2013 (coredll itself is already going to be recompiled) – there’s a sketch of the distinction after this list.  If that library came from a third party, then you either have to get the full source and recompile it, or convince the vendor to compile for WEC 2013 (and they probably don’t have an SDK, since that’s exported from a BSP – hello, schedule risk!).  So if you’re using a third-party native component and you don’t have (or can’t get) full source, and the vendor won’t provide WEC 2013 support, then you must either find another vendor or recreate the capability yourself, or the project is at a dead stop.  That is a big risk.  Maybe not insurmountable, but big.
  • Processor and BSP support.  The new hardware is using a processor that WEC 2013 doesn’t support out of the box.  It’s not an esoteric processor – it’s actually a pretty common Freescale processor – but the point is that support isn’t in the box.  Microsoft has said that support for it is “coming soon”, but they’ve not quantified “soon”, and their record on ship dates isn’t something I’d want to bet a business-critical project on.  So we must assume that we’d have to build BSP support for this processor ourselves.  This isn’t an impossible task – in fact it’s not even a new task – but it takes time, and it takes experience.  It also requires full source code to the existing BSP, including all of their existing drivers.  Again, because of the lack of binary compatibility, all drivers must be recompiled.  It’s very rare in CE to have source code for all peripherals – many silicon vendors just don’t supply it (no idea why, since it’s only useful with their silicon).  That means you need vendor support for WEC 2013 drivers, and my experience so far is that there aren’t many chip vendors supporting WEC 2013.  This is especially true for WiFi.  The selection of WiFi drivers for WEC 2013 is very, very sparse.  To make matters worse, Microsoft changed the NDIS driver model in WEC 2013 (from 5 to 6) *and* they completely dropped WZC support in favor of native WiFi.  So it’s a lot of work for WiFi chip vendors to add support, and if your application happened to use WZC for network status and control, you now get to rewrite all of that code.  Binary compatibility here is probably the biggest risk.  If you can’t get a driver for silicon that’s already on your board, especially for something as critical as WiFi, the project is at a dead stop (unless you want to respin the board for different silicon).
  • General OS support.  WEC 2013 is supported by Microsoft.  It’s not been out long, and they guarantee at least 10 years of support.  On paper that’s great, but really you need more than just Microsoft’s support to be successful – OS support without any peripherals isn’t overly helpful.  Yes, if you’re on x86 and all you need is a serial port, then you can probably get by, but if you’re using ARM and other peripherals, some confidence that you’ll be able to get replacements when parts go EOL would be nice.  It’s only anecdotal evidence from a small subset of the market, but I know of exactly zero customers using WEC 2013.  In fact, I can count on one hand the number of WEC 7 installations I’ve ever seen.  The general consensus I see is that CE 6.0 R3 was solid, covered 90% of use cases and has peripheral drivers.  I’m still working with customers who are rolling out new CE applications and adding new features – it’s just that they’re all based on CE 6.0.
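
To make the P/Invoke risk above concrete, here’s a sketch (VendorScanner.dll is a made-up third-party library; coredll is real).  The first declaration survives the move because coredll ships inside the OS image and gets rebuilt for WEC 2013 along with everything else.  The second is the land mine – that vendor binary was linked against the old C runtime, and until someone recompiles it the call dies at load time:

using System;
using System.Runtime.InteropServices;

internal static class NativeMethods
{
    // Safe after the port: coredll.dll is part of the OS image, so it is
    // recompiled for WEC 2013 when the OS is.
    [DllImport("coredll.dll")]
    public static extern uint GetTickCount();

    // The risk: a third-party native DLL (hypothetical name).  Without a
    // rebuild against the new C runtime – which means vendor cooperation
    // or full source – this P/Invoke fails on WEC 2013.
    [DllImport("VendorScanner.dll")]
    public static extern int ScannerOpen(out IntPtr handle);
}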

The Conclusion

So where does that leave us?  In this particular case, I can’t in good conscience recommend that the customer move to WEC 2013.  If it were my business, I definitely wouldn’t – the risks are just way too scary (unless they can get all of that source code).  What about all of the existing source code?  Well, Mono would help them migrate a lot of the business logic, but the UI is going to have to be completely rewritten whether they go iOS, Android or some other flavor of Linux.  Maybe HTML5 would make sense (though in general I think it’s overhyped, performs poorly compared to native apps, and sucks at getting to any actual device resources, which they need), but since this is a POS product, which is all about UI, a UI rewrite is going to be expensive.

Does this mean that WEC 2013 won’t ever be a good choice?  Honestly, I have no idea.  As I’ve mentioned before, I have zero visibility into what Microsoft is thinking or planning.  I do know that we still build all of our code to support the Compact Framework – though that’s largely because it makes porting to Mono simple.  I also know that I’ve been spending a lot more time porting our products to Mono and ramping up on Linux – more specifically Wind River Linux, though I’ve also been playing with Fedora, OpenSUSE and Ubuntu.  Just like our customers, we have to be forward-looking and mitigate risks if we’re going to keep paying the bills, and from my perspective WEC 2013 is a high-risk, low-reward path.  I’d say it’s telling that the anchor of our product line, Solution Engine, which is based on a Microsoft technology (C#), is tested and supported on Wind River Linux, but I’ve not even bothered to try running it on WEC 2013.

An end to Comment Spam?

One of the major pain points with the last blog engine – and really with the entire failed experiment that was the OpenNETCF Community Server, which replaced the OpenNETCF Forums – was spam.  Comment spam caused me to shut off comments on the blog.  Forum spam forced me to turn on forum moderation.  I was getting thousands of spam posts a day, which caused me to say “screw this, I can’t wade through all of this for the few valid posts”, so I quit bothering to moderate it; the site quickly went stale and eventually I just turned it off.  I still have all of the content, though it’s spam-riddled, and I’m not even sure what the value of old Compact Framework questions is (more on this thought later).

At any rate, WordPress has a filtering plug-in called Akismet that uses some form of algorithm to pull info from all sorts of WordPress sites, put it together, and block comment and pingback spam.  It’s $5 a month, which is a bargain if it works and lets me stop thinking about dealing with spammers.  We’ll see how it goes.

A New Blog Engine, and hopefully more frequent posts

If you’ve followed this blog at all in the past, or even if you just look at the post history, you’ll see that my activity has been very, very slow over the past couple of years.  That’s really been the result of a few things.  First, I’ve been pretty busy doing actual work, but that’s a pretty poor excuse.

The larger problem was that the blog engine I was using – a version of dasBlog from probably 10 years ago – was just really, really clunky to use.  It was painful any time I wanted to post anything, especially if it had images or code.  And something in its pages would often cause client browsers to just hang.  Nothing like getting halfway through a long post only to have the browser seize up, everything get lost, and the cussing begin.

So over the past couple of days I installed PHP, MySQL and WordPress on my server, then migrated all of the old dasBlog content to the new engine.  At least I hope it all got migrated – if you’re searching for something and can’t find it, let me know.  I still have full copies of the old install.

My plan now is to post more frequently.  You’ll still see code problems that I think are interesting, and I plan to do more parts of the “Software Development Series”, but I think I’ll also be posting more thoughts on business and strategy: why we make the decisions we do, and what I think I see in the tea leaves of our industry.