New Open Source Projects

I’m working on a new application that’s going to be open source.  Once it’s a bit further along I’ll provide more details, but essentially it’s a paperless forms solution that encompasses a WPF desktop app, a self-hosted ASP.NET WebAPI service and a Xamarin Forms mobile app all working together.

In the process of getting the foundation up and running, I’m moving a lot of stuff over to Github, but more importantly I’m updating, extending, and creating whole new libraries that can be used for other stuff.  These, too, are open source.

What’s being updated?

  • OpenNETCF.ORM
    I’m primarily working with SQLite, but I’ve already uncovered and fixed some issues around one-to-many entity relationships
  • OpenNETCF.IoC
    Work and updates to make it more Xamarin/PCL friendly
  • OpenNETCF Extensions
    This thing has been stable and in heavy production use for years.  Support continues for it with some minor changes and improvements in the Validation namespace so far.

What’s New?

  • OpenNETCF.Xamarin.Forms
    Adding things that I’m finding missing or hard in Xamarin’s BCL.  Things like “hey, scale this label to the right size depending on my screen resolution, on any platform”.  Thanks go out to Peter Foot for pointing me in the right direction there.
    No idea why Xamarin didn’t provide a basic navigation framework.  I created one.  Is it awesome?  I don’t know – but it works for my use case (plus 3 other apps I’ve done with it).
  • OpenNETCF.Google.Analytics (moving to its own repo soon)
    Again, seems like low-hanging fruit here.  Why isn’t it in the box?  I don’t know, but there’s now a simple, open source library for it.

One note – these are all in active development, so don’t expect NuGet package updates on them for at least a little while (end of May?).  I’d like to get features in and stable before rolling them out.

Feedback welcome. Testers wanted.  Enjoy.

OpenNETCF Scanner Compatibility Library

Some days I think I have too much code “lying around”.  As you would expect from many years as a developer, I have utility libraries for all sorts of tasks.  Generally when I think something is likely to be useful for others I like to make it publicly available for anyone to use – just take a look at the list of Codeplex projects I admin.

This morning I saw a question on StackOverflow about intelligently detecting a platform and loading the proper binaries for it.  In this case it was specific to doing so with Windows Mobile barcode scanners.  I immediately thought, “hey, I have a library for that” and went to answer and give a link.  Except the link didn’t exist.  I never created the open source project for it, so the code has just been sitting here doing nothing.
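The idea behind the library is simple: figure out at run time which OEM device you’re running on, then load the vendor-specific scanner assembly that matches. A rough sketch of the approach – and to be clear, the OEM strings, file names, and type names below are purely illustrative, not the library’s actual API:

```csharp
using System;
using System.Reflection;

// Hypothetical sketch only - the real library's types and assembly names differ.
public static class ScannerFactory
{
    public static object Create()
    {
        // ask the OS who made this device (on Windows Mobile this comes from
        // SystemParametersInfo with SPI_GETOEMINFO via P/Invoke)
        string oem = GetOemInfo();

        // pick the adapter DLL that wraps that vendor's scanner SDK
        string adapterFile;
        if (oem.IndexOf("Intermec") >= 0)
            adapterFile = "Scanner.Intermec.dll";
        else if (oem.IndexOf("Symbol") >= 0)
            adapterFile = "Scanner.Symbol.dll";
        else
            adapterFile = "Scanner.Null.dll"; // no-op fallback

        // load it and hand back an instance of the shared scanner interface
        var asm = Assembly.LoadFrom(adapterFile);
        return asm.CreateInstance("Vendor.Scanner"); // class name is illustrative
    }

    private static string GetOemInfo()
    {
        // placeholder - a real implementation P/Invokes into the OS
        return "Symbol";
    }
}
```

The payoff is that application code only ever talks to the common interface; the vendor SDK binaries never have to ship on devices that don’t need them.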

Yes, this code is probably 5 years or more past its prime useful period due to the decline in use of Windows Mobile, but hey, I just used it on a project last week, so it’s got some life left in it.

So, here’s yet another open source library from my archive – the OpenNETCF Barcode Scanner Compatibility Library.

OpenNETCF ORM Updates: Dream Factory and Azure Tables

We’ve been busy lately.  Very, very busy with lots of IoT work.  A significant amount of that work has been using the Dream Factory DSP for cloud storage, and as such we’ve done a lot of work to make the Dream Factory implementation of the OpenNETCF ORM more solid and reliable (as well as a pretty robust, stand-alone .NET SDK for the Dream Factory DSP as a byproduct).  It also shook out a few more bugs and added a few more features to the ORM core itself.

I’ve pushed a set of code updates (though not an official release yet) up to the ORM Codeplex project that includes these changes, plus an older Azure Table Service implementation I had been working on a while back in case anyone is interested and wanted to play with it, use it or extend it.  The interesting thing about the Azure implementation is that it includes an Azure Table Service SDK that is Compact Framework-compatible.

As always, feel free to provide feedback, suggestions, patches or whatever over on the project site.

Using Jenkins to Build from Git without the Git plug-in

A few months ago we decided to upgrade our source control provider and moved everything over to Visual Studio Online.  It’s been working great for source control, though getting used to using Git instead of the TFS source control is a bit of work.  For a few reasons we’re not using the build features of Visual Studio Online, but instead are using a Jenkins build server.  Jenkins is really nice and can do just about anything you could want, which is a big plus.  The only down side is that it’s all in Java.  Why is that a down side, you may wonder?  Well if things get broken, you’re in a pickle.

We were running all well and good for over a month.  Nightly builds were running.  Version numbers were auto-incrementing. Releases for Windows and Mono were getting auto-generated and getting FTPed up to the public release directory.  Things were great.  Until about a week before Christmas, when an update for the Git plug-in was released.  The Git plug-in is what allows you to configure Jenkins to easily connect to a Git server and pull your source code.  Well the plug-in update broke the ability to get code from a Git server on Windows.  Now Jenkins has a rollback feature, and had I understood what the failure actually was (it wasn’t obvious that it was a plug-in failure) then I could have rolled back and probably been fine.  But I didn’t know.  And in my effort to “fix” things, I wiped out the rolled-back archived version.

So the option was to either install a Java environment and try to learn how Jenkins works and fix it, or to wait for the community to fix the problem.  I opted to do the latter, because it surely would break things for other people and would get straightened out quickly, right?  Hmm, not so much it seems.  I found a reported bug and asked for a time estimate.  I waited a few days.  No fix.  I left the office for a week of “unplugged” vacation and came back.  No fix.  I then learned that you can access the nightly builds for the plug-ins themselves (which is actually pretty cool) so I tried manually installing the latest builds of the plug-in.  Turns out it was still broken.

While I was trying to figure out what was broken, I also appear to have broken something in the Git workspace on the server too, so it was hard to tell if the plug-in was failing, or if Git was confused.  I know that I was confused.  So today I decided that I really needed to get this stuff working again.  I changed the Job to no longer use source control, but instead to just run Windows batch files.

REM make sure nothing is hidden 
attrib -H /S
REM recursively remove child folders 
for /d %%X in (*.*) do rd /s /q "%%X"
REM delete files in root folder 
del /q /f *
REM get the source 
git init 
git clone https://[username]:[password]
REM check out
git checkout master
git add ./Common/SolutionFamily.VersionInfo.cs
REM increment the build number
powershell -File "%WORKSPACE%\Utility\SetFamilyVersion.ps1" 2.1.%BUILD_NUMBER%.0
REM commit the change
git commit -a -m auto-version
git push https://[username]:[password]

Once that was done, the MSBUILD plug-in was then able to build from the workspace, though the source code directory had changed one level as compared to where the Git plug-in had been pulling code.  If I had wanted to, I could have had my command do the build as well and not even used the MSBUILD plug-in by adding this to the end:

C:\Windows\Microsoft.NET\Framework\v4.0.30319\msbuild.exe "/p:Configuration=Debug;Platform=Any CPU" /m "%WORKSPACE%\SolutionFamily\SolutionEngine.FFx.sln" && exit %%ERRORLEVEL%%

Once the Git plug-in is actually fixed, I’ll post how to use it to connect to Visual Studio Online.  It actually seems to be working “somewhat” this morning.  I say “somewhat” because while it actually is pulling the code and behaving properly, when you do the configuration you get an error, which makes it look like it’s going to fail.  Until that’s ironed out I’m going to wait.

Lots of ORM Updates

We use the OpenNETCF ORM in the Solution Family products.  Unfortunately I haven’t figured out a good way to keep the code base for the ORM stuff we use in Solution Family in sync with the public code base on CodePlex, so occasionally I have to go in and use Araxis Merge to push changes into the public tree, then check them into the public source control server.  What that means to you is that you’re often working with stale code.  Sorry, that’s just how the cookie crumbles, and until I figure out how to clone myself Multiplicity-style, it’s not likely to change.

At any rate, we’re pretty stable on the Solution Family side of things, so I did a large merge back into the public tree this evening.  I still have to do a full release package, but the code is at least up to date as of change set 104901 and all of the projects (at least I hope) properly build.

Most of the changes revolve around work I’ve been doing with the Dream Factory cloud implementation, so there are lots of changes there, but I also have been doing more with DynamicEntities, so some changes were required for that too.  Of course there are assorted bug fixes as well, most of them in the SQLite implementation.  I leave it to you and your own diff skills if you really, really want to know what they are.

Go get it.  Use it.  And for Pete’s sake, quit writing SQL statements!
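For anyone who hasn’t tried the ORM, the basic usage pattern looks roughly like this – I’m writing this from memory, so treat the attribute and method names as a sketch and check the project source for the exact API:

```csharp
using OpenNETCF.ORM;

// an entity is just an attributed class - no SQL anywhere
[Entity(KeyScheme.Identity)]
public class Person
{
    [Field(IsPrimaryKey = true)]
    public int PersonID { get; set; }

    [Field]
    public string Name { get; set; }
}

// ... elsewhere, at startup
var store = new SQLiteDataStore("people.db");
store.AddType<Person>();   // register the entity
store.CreateStore();       // create the backing tables

// insert and query without writing a line of SQL
store.Insert(new Person { Name = "Ada" });
var everyone = store.Select<Person>();
```

Swapping SQLite for the Dream Factory or Azure Table implementations is then a matter of changing which data store you construct, which is the whole point of the abstraction.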

Our New Cross-Platform Build, Test, Store and Deploy Architecture

First, let me preface this by saying no, I’ve not migrated any Compact Framework application to Visual Studio 2013. We’re still using Visual Studio 2008 for CF apps, so don’t get too excited. That said, we’ve done some pretty interesting work over the last week, so please, read further.

Microsoft recently announced the availability of not just Visual Studio 2013, but also Visual Studio Online, which is effectively a hosted version of Team Foundation Server 2013.  We use the older TFS 2010 internally as our source control provider as well as for running unit tests, but it’s got some significant limitations for our use case.

The biggest problem is that our flagship Solution Engine product runs on a lot of platforms – Windows CE, Windows Desktop and several Linux variants.  For Linux we’re using Mono as the runtime, which means we’re using XBuild to compile and Xamarin Studio for code editing and debugging.  Well Mono, XBuild and Xamarin Studio don’t really play well with TFS 2010.  To put it bluntly, it’s a pain in the ass using them together.  You have to train yourself to have Visual Studio and Xamarin Studio open side by side and to absolutely always do code edits in Visual Studio so the file gets checked out, but do the debugging in Xamarin Studio.  Needless to say, we lose a lot of time dealing with conflicts, missing files, missing edits and the like when we go to do builds.

TFS 2013 supports not just the original TFS SCC, it also supports Git as an SCC, which is huge, since Xamarin Studio also supports Git. The thinking was that this would solve this cross-platform source control problem, so even if everything else stayed the same, we’d end up net positive.

I decided that if we were going to move to TFS 2013, we might as well look at having it hosted by Microsoft at the same time.  The server we’re running TFS 2010 on is pretty old, and to be honest I hate doing server maintenance.  I loathe it.  I don’t want to deal with getting networking set up.  I don’t like doing Hyper-V machines.  I don’t like dealing with updates, potential outages and all the other crap associated with having a physical server.  Even worse, that server isn’t physically where I am (all of the other servers we have are) so I have to deal with all of that remotely.  So I figured I’d solve problem #2 at the same time by moving to the hosted version of TFS 2013.

Of course I like challenges, and Solution Engine is a mission-critical product for us.  We have to be able to deliver updates routinely. It’s effectively a continuously updated application – features are constantly rolling into the product instead of defined periodic releases with a set number of features.  We’ll add bits and pieces of a feature incrementally over weeks to allow us to get feedback from end users and to allow feature depth to grow organically based on actual usage.  What this means is that the move had to happen pretty seamlessly – we can’t tolerate more than probably 2 or 3 days of down time.  So how did I handle that?  Well, by adding more requirements to my plate, of course!

If I was going to stop putting my attention toward architecting and developing features and shift to our build and deployment system, I decided it was an excellent opportunity to implement some other things I’ve wanted to do.  So my list of goals started to grow:

  1. Move all projects (that are migratable) to Visual Studio 2013
  2. Move source control to Visual Studio Online
  3. Abandon our current InstallAware installer and move to NSIS, which meant:
    1. Learn more about NSIS than just how to spell it
    2. Split each product into separate installers with selectable features
  4. Automate the generation of a tarball for installation on Linux
  5. Automate FTPing all of the installers to a public FTP
  6. Setting up that FTP server
  7. Setting up a nightly build for each product on each platform that would also do the installers and the FTP actions
  8. Setting up a Continuous Integration build for each product on each platform with build break notifications

Once I had my list, I started looking at the hosted TFS features and what it could do to help me get some of the rest of the items on my list done.  Well it turns out that it does have a Build service and a Test service, so it could do the CI builds for me – well the non-Mono CI builds anyway.  The nightly builds could be done, but no installer and FTP actions would be happening.  And it looked like I was only going to get 60 minutes of build time per month for free.  Considering that a build of just Engine and Builder for Windows takes roughly 6 minutes, nightly builds alone would burn around 180 minutes a month (30 × 6) – triple the free allotment – so I probably needed to think outside the box.

I did a little research and ended up installing Jenkins on a local server here in my office (yes, I was trying to get away from a server and ended up just trading our SCC server for a Build server).  The benefit here is that I’ve now got it configured to pull code for each product as check-ins happen and then to do CI builds to check for regression.  If a check-in breaks any platform, everyone gets an email.  So if a Mono change breaks the CF build, we know.  If a CF change breaks the desktop build, we know.  That’s a powerful feature that we didn’t have before.

Jenkins also does our nightly builds, compiles the NSIS installers and builds the Linux tarballs.  It FTPs them to our web site so a new installation is available to us or customers every morning just like clockwork, and it emails us if there’s a problem.

It was not simple or straightforward to set all of this up – it was actually a substantial learning curve for me on a whole lot of disparate technologies.  But it’s working and working well, and only took about 6 days to get going.  We had a manual workaround for being able to generate builds after only 2 days, so there was no customer impact.  The system isn’t yet “complete” – I still have some more Jobs I want to put into Jenkins, and I need to do some other housekeeping like getting build numbers auto-incrementing and showing up in the installers, but it’s mostly detail work that’s left.  All of the infrastructure is set up and rolling.  I plan to document some of the Jenkins work here in the next few days, since it’s not straightforward, especially if you’re not familiar with Git or Jenkins, plus I found a couple of bugs along the way that you have to work around.

In the end, though, what we ended up with is an extremely versatile cross-platform infrastructure.  I’m really liking the power and flexibility it has already brought to our process, and I’ve already got a lot of ideas for additions to it.  If you’re looking to set up something similar, here’s the checklist of what I ended up with (hopefully I’m not missing anything).

Developer workstations with:

  • Visual Studio 2013 for developing Windows Desktop and Server editions of apps
  • Xamarin Studio for developing Linux, Android and iOS editions
  • Visual Studio 2008 for developing Compact Framework editions

A server with:

  • Visual Studio 2013 and 2008 (trying to get msbuild and mstest running without Studio proved too frustrating inside my time constraints)
  • Mono 3.3
  • 7-Zip (for making the Linux tarballs)
  • NSIS (for making the desktop installers)
  • Jenkins with the following plug-ins
    • msbuild
    • mstest
    • git
    • ftp

Should You Develop Software for Free?

I just read an excellent opinion piece over on the New York Times site. I’d recommend you read it, but if you’ve not got the time, or are generally just too lazy, let me summarize. The author is, not unexpectedly, a writer. Often he is asked to do writing or give speeches and offered $0.00 in return. Basically people want him to give away the output of his “craft” yet those same people would never ask or expect “a keychain or a Twizzler” for free. What gives? How do you tell them to pound sand, but in a polite way? And should you?

Well it’s not so greatly different here in the computer world. Many people feel software should be free. After all, it’s just electrons, and I can shuffle my feet across the carpet and get electrons for free, so why should you buy electrons from someone else? In fact the idea is so popular that there’s an entire sect that firmly believes that all software should be free. Amazingly, these people even tend to be software developers.

Yes, I’m somewhat of a contributor to this. I have written a boatload of software that I give away. I even encourage people to go get it for free. But let’s be clear, that software was paid for. It was developed over many years and many projects. The customers whom I was working for at the time paid for me to solve their problem, and part of that solution involved adding to or fixing some of those core libraries. Still, it was paid for. I simply reserved a small portion of the work as a part of a “free” library that carried over to the next project.

And here’s another dirty little secret. I didn’t do it out of altruism. No, I didn’t do it for “exposure”. That’s a line that someone trying to wheedle free stuff from you would use. I kept those libraries “free” so that a customer can’t say “hey, you did that work on my dime, I own it!” No, sir, you own the solution to your problem, part of which uses a set of free libraries.  You’ve gained from work done for countless other projects in the past. Those libraries allowed me to solve your problem much faster than if I had to start them from scratch. So the reason I provide free software is to save me from doing repetitive work and to, generally, keep a set of base libraries that I’m free to use anywhere, on any project.

So why do I also give those libraries away? I don’t know, maybe I am an altruist. Maybe I have the delusion that some day when I’m called in to fix a broken project they’ll already be using my libraries and it will save me the headache and frustration of coming up to speed. Maybe I just hate seeing people reinvent the wheel. Maybe I’m just crazy. Maybe it’s a bit of all of the above, but let’s be clear: I don’t do work for free.

I don’t feel that anyone has a right to ask for software that is not paid for. You may be able to sway me somewhat with the argument about you being free to run and modify the software you paid for, but you’ll have a harder time on the copy and distribute side of the argument.  We, as software developers, still have to pay our bills.  We’ve spent years honing our skills to be able to solve these problems (admittedly, some have done a better job of honing than others) and it takes actual time and work solving problems.  Sometimes I work on problems or with customers where it, most decidedly, is work.  Even drudgery.  So no, I’m not a believer that software, music, writing, photographs or really any other form of “art” should be free.  You wouldn’t expect an electrician to work for free, don’t expect it of a “content creator” either.

Memory Management in the Compact Framework

Way back in 2006 I presented a talk on Memory Management in the Compact Framework. Well the CF, at least up through 3.5, is no different than it was back then, and the content from that talk is still valid. I had also done the talk as a Webcast that Microsoft had hosted, but evidently that content has been purged from their servers. Unfortunately I don’t have any backup of the actual talk, but I do have a backup of the presentation PowerPoint. You can get it here.

    Update! (Oct 28, 2013)

Hooray! Microsoft recovered the talk from a backup. Not certain if/when they’ll post it back on their site, but as an added back-up I’m also keeping a backup copy. You can download it directly here.

Padarn: Dynamic Authentication

Our Padarn Web Server largely uses a subset of the IIS object model, so for many features and capabilities how you do things in IIS is paralleled in Padarn. Authentication follows that rule, but with a couple exceptions.

For good old-fashioned, built-into-the-browser Basic or Digest authentication it follows the same steps. Set the Authentication in your app.config file and the user will get a browser-provided authentication popup.
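The configuration fragment ends up looking something along these lines – I’m writing the element and attribute names from memory here, so treat this as a sketch and check the Padarn documentation for the exact schema:

```xml
<configuration>
  <WebServer>
    <!-- mode can be Basic or Digest; the browser provides the credential prompt -->
    <Authentication Mode="Digest" Enabled="true" Realm="Padarn" />
  </WebServer>
</configuration>
```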

But what if we want to do authentication ourselves against our own user store? In this case Padarn has a few “custom” extensions to make things easier.

The first mechanism is to add users to the underlying user cache when you start up the Padarn Server. Something along these lines:

var server = new WebServer();
// add a user to Padarn's underlying user cache
// (the collection name here is assumed - check the Padarn docs for the exact member)
server.Configuration.Authentication.Users.Add(
    new OpenNETCF.Web.Configuration.User { Name = "username", Password = "password" });

You could also hard-code the user identities right into the app.config file.

This isn’t all that friendly, though – at least in my thinking. Another, more robust, way is to use Padarn’s AuthenticationCallback. This allows you to send a delegate into Padarn and have it call your custom authentication algorithm every time it needs to do authentication. It’s slightly more complex, but really it’s still not too terrible, and it supports Basic or Digest authentication.

// somewhere in startup
var server = new WebServer();
server.Configuration.Authentication.AuthenticationCallback = VerifyUsername;

// then somewhere else - in this case in the same class
bool VerifyUsername(IAuthenticationCallbackInfo info)
{
    // no username is invalid
    if (string.IsNullOrEmpty(info.UserName)) return false;

    // first do a lookup of the password - this might come from a database, file, etc.
    string password = GetPasswordForUser(info.UserName);
    if (password == null) return false;

    // determine the authentication type
    var basic = info as BasicAuthInfo;
    if (basic != null)
    {
        // we're using basic auth
        return (basic.Password == password);
    }

    // it wasn't basic, so it must be digest (the only two supported options)
    var digest = info as DigestAuthInfo;
    // let the DigestAuthInfo check the password for us (it's hashed)
    return digest.MatchCredentials(password);
}

Being an MVP

I’ve been a Microsoft “Most Valuable Professional,” or MVP, for a long time now.  So long that I had to actually go look up when I first got the award (2002).  Over those years, the program has changed, the technology for which I am an MVP has changed, and I’m certain that I’ve changed.

When I first got my MVP status, it was not long after I co-authored a book on embedded Visual Basic with a friend, Tim Bassett, and at the time I was being pretty prolific in that extremely niche segment of the development community, publishing articles on DevBuzz and answering questions in the now-defunct Usenet groups for embedded development (as a side note, Microsoft has tried many incarnations to replace those groups, and has never found one that was as easy to use or navigate).

I remember that I felt a bit out of place at my first MVP Summit – the annual meeting of all MVPs in Redmond – because I was seeing and meeting all sorts of people whose work I had been relying on for information since I had started developing.

It wasn’t long, and my focus changed from eVB to the .NET Compact Framework – largely because Microsoft killed eVB as a product.  I embraced the CF and continued writing articles for MSDN and answering loads of questions and even doing the speaking circuit at what was then called Embedded DevCon.  I helped “start” OpenNETCF, which at the time was really just a collection of Compact Framework MVPs trying to answer questions, cover the stuff Microsoft was missing, and not duplicate effort.

In those early days of the CF, being an MVP was fantastic – it really was.  A few times a year the MVPs and the product teams would actually get together in the same room.  They would tell us what they were planning.  They would ask for feedback.  They’d listen to our ideas and our criticisms.  You could count on seeing direct results from those meetings in future releases of products coming out of Redmond, and so the MVPs continued to pour effort into keeping the community vibrant and well-informed.

Back in those days I knew both PMs and developers from most of the teams involved in the products I used.  I knew people on the CF team.  The test team.  The CE kernel and drivers teams.  The CE tools team.  The Visual Studio for Devices team.  And when I say I “knew” them, I don’t mean that I could point them out at a conference; I mean that at the conference we went to the bars together.  I had their phone numbers and email addresses, and they would respond if I needed to get a hold of them.

Those, I now know, were the golden days of the program, at least from the embedded perspective. It could well be that C# MVPs or some other group still sees that kind of interaction, but the embedded side doesn’t see much of that any longer.  In fact, I know very few people on any of the teams, and I guess that most of those people probably wouldn’t answer my emails.

What’s changed, then?  Well, Microsoft is a business, of course.  A large one at that.  As such, they have constant internal churn.  Most of the people I once knew are still at Microsoft – they’ve just moved on to other things, other groups and other positions, and the people that came in to replace them didn’t necessarily have the mechanism (or maybe the desire) to get to know the community of experts in the field.  The teams also shifted a lot – Embedded got moved from one group to another and to another inside of Microsoft, and the business unit got less and less priority for quite some time – especially when it and the group shipping quantity (Windows Mobile) were separated.  The Embedded DevCon became the Mobile and Embedded DevCon, then it went away.  Budgets shrank.  Community interaction receded.

I can’t say I fault Microsoft for this.  After all, they are a business, and they make decisions based on what provides shareholder value.  I may disagree with some, or even many of their decisions on the embedded side, but I don’t work at Microsoft, and I’m definitely not in their management team, so my opinions simply do not matter.

So why do I bother mentioning all of this, if not to complain?  Because I want you, as a reader, to understand where I come from in some of the articles I’ll be posting over the coming weeks.  I no longer have any inside information about the future of Windows CE or the Compact Framework.  I don’t know what Microsoft intends to do or not do.  I find out things about the technologies at the same time as the rest of the world.

This means that if you see me write something about the future of Windows CE, now called Windows Embedded Compact (and yes, expect to see something on that soon), it’s all based on my personal opinion and thoughts based on history *not* on any insider information.  If what I predict happens, it’s only because my guesses were educated and/or lucky.  If it doesn’t, it’s not because I was trying to be misleading, it’s because I simply got it wrong.  Basically, as with anything you read, I expect you to use your own critical thinking skills, and I fully encourage discussion and debate.