First, let me preface this by saying no, I’ve not migrated any Compact Framework application to Visual Studio 2013. We’re still using Visual Studio 2008 for CF apps, so don’t get too excited. That said, we’ve done some pretty interesting work over the last week, so please, read further.
Microsoft recently announced the availability of not just Visual Studio 2013, but also Visual Studio Online, which is effectively a hosted version of Team Foundation Server 2013. We use the older TFS 2010 internally as our source control provider as well as for running unit tests, but it’s got some significant limitations for our use case.
The biggest problem is that our flagship Solution Engine product runs on a lot of platforms – Windows CE, Windows Desktop and several Linux variants. For Linux we’re using Mono as the runtime, which means we’re using XBuild to compile and Xamarin Studio for code editing and debugging. Well, Mono, XBuild and Xamarin Studio don’t really play well with TFS 2010. To put it bluntly, it’s a pain in the ass using them together. You have to train yourself to keep Visual Studio and Xamarin Studio open side by side and to absolutely always do code edits in Visual Studio so the file gets checked out, but do the debugging in Xamarin Studio. Needless to say, we lose a lot of time dealing with conflicts, missing files, missing edits and the like when we go to do builds.
TFS 2013 supports not just the original TFS source control, but also Git, which is huge since Xamarin Studio supports Git as well. The thinking was that this would solve our cross-platform source control problem, so even if everything else stayed the same, we’d end up net positive.
I decided that if we were going to move to TFS 2013, we might as well look at having it hosted by Microsoft at the same time. The server we’re running TFS 2010 on is pretty old, and to be honest I hate doing server maintenance. I loathe it. I don’t want to deal with getting networking set up. I don’t like doing Hyper-V machines. I don’t like dealing with updates, potential outages and all the other crap associated with having a physical server. Even worse, that server isn’t physically where I am (all of the other servers we have are), so I have to deal with all of that remotely. So I figured I’d solve problem #2 at the same time by moving to the hosted version of TFS 2013.
Of course I like challenges, and Solution Engine is a mission-critical product for us. We have to be able to deliver updates routinely. It’s effectively a continuously updated application – features are constantly rolling into the product instead of defined periodic releases with a set number of features. We’ll add bits and pieces of a feature incrementally over weeks to allow us to get feedback from end users and to allow feature depth to grow organically based on actual usage. What this means is that the move had to happen pretty seamlessly – we can’t tolerate more than probably two or three days of downtime. So how did I handle that? Well, by adding more requirements to my plate, of course!
If I was going to stop putting my attention toward architecting and developing features and shift to our build and deployment system, I decided it was an excellent opportunity to implement some other things I’ve wanted to do. So my list of goals started to grow:
- Move all projects (that are migratable) to Visual Studio 2013
- Move source control to Visual Studio Online
- Abandon our current InstallAware installer and move to NSIS, which meant:
  - Learning more about NSIS than just how to spell it
  - Splitting each product into separate installers with selectable features
- Automate the generation of a tarball for installation on Linux
- Automate FTPing all of the installers to a public FTP, which meant:
  - Setting up that FTP server
- Set up a nightly build for each product on each platform that would also do the installers and the FTP actions
- Set up a Continuous Integration build for each product on each platform with build break notifications
Once I had my list, I started looking at the hosted TFS features to see which of the remaining items it could help me get done. It turns out it does have a Build service and a Test service, so it could do the CI builds for me – well, the non-Mono CI builds anyway. The nightly builds could be done too, but with no installer or FTP actions. And it looked like I was only going to get 60 minutes of build time per month for free. Considering that a build of just Engine and Builder for Windows takes roughly 6 minutes, nightly builds alone would burn through those 60 minutes in well under two weeks, so I needed to think outside the box.
I did a little research and ended up installing Jenkins on a local server here in my office (yes, I was trying to get away from a server and ended up just trading our SCC server for a build server). The benefit is that I’ve now got it configured to pull code for each product as check-ins happen and then run CI builds to check for regressions. If a check-in breaks any platform, everyone gets an email. So if a Mono change breaks the CF build, we know. If a CF change breaks the desktop build, we know. That’s a powerful capability we didn’t have before.
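To make the CI jobs concrete: each platform’s build step boils down to calling the right compiler driver and letting the exit code tell Jenkins whether to fire off the break-notification email. Here’s a minimal Python sketch of that dispatch – the solution file name, platform labels and function names are placeholders of mine, not anything from our actual job configuration:

```python
import subprocess
import sys

def build_command(platform, solution="Solution.sln"):
    """Pick the build tool for a target platform.

    "Solution.sln" is a placeholder for the real solution file.
    """
    if platform == "mono":
        # Linux builds go through Mono's xbuild
        return ["xbuild", "/p:Configuration=Release", solution]
    # Desktop and Compact Framework builds both use msbuild
    # (the CF job just points at the VS2008-era solution file).
    return ["msbuild", "/p:Configuration=Release", solution]

def run_ci_build(platforms):
    for platform in platforms:
        # A non-zero exit code fails the Jenkins build, which
        # is what triggers the email to the whole team.
        result = subprocess.call(build_command(platform))
        if result != 0:
            sys.exit(result)

# In the Jenkins job's build step, something like:
# run_ci_build(["desktop", "compact", "mono"])
```

The point of routing every platform through one step is exactly the cross-breakage detection described above: one check-in, three compiles, and any failure emails everyone.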
Jenkins also does our nightly builds, compiles the NSIS installers and builds the Linux tarballs. It FTPs them to our web site so a new installation is available to us and to customers every morning, just like clockwork, and it emails us if there’s a problem.
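The nightly job is really three scripted steps: compile the installer with NSIS, pack the Linux tarball with 7-Zip, and push everything to the FTP site. A hedged sketch of those steps follows – the file names, output directory, FTP host and credentials are all placeholders, not our real ones:

```python
import subprocess
from datetime import date
from ftplib import FTP

def nightly_artifact_name(product, build_date=None):
    """Name each nightly tarball after the product and build date."""
    build_date = build_date or date.today()
    return "%s-%s.tar.gz" % (product, build_date.isoformat())

def package_and_upload(product, host, user, password):
    tarball = nightly_artifact_name(product)
    # NSIS compiles the Windows installer from its script file.
    subprocess.check_call(["makensis", product + ".nsi"])
    # 7-Zip builds the Linux tarball in two passes: tar, then gzip.
    subprocess.check_call(["7z", "a", "-ttar", product + ".tar", "output/"])
    subprocess.check_call(["7z", "a", "-tgzip", tarball, product + ".tar"])
    # Push both artifacts to the public FTP site.
    ftp = FTP(host)
    ftp.login(user, password)
    for name in (product + "-setup.exe", tarball):
        with open(name, "rb") as f:
            ftp.storbinary("STOR " + name, f)
    ftp.quit()

# Run nightly from a Jenkins job, e.g.:
# package_and_upload("SolutionEngine", "ftp.example.com", "build", "secret")
```

Any `check_call` failure raises and fails the Jenkins job, which is what produces the "emails us if there's a problem" behavior without any extra error-handling code.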
It was not simple or straightforward to set all of this up – it was actually a substantial learning curve for me across a whole lot of disparate technologies. But it’s working, and working well, and it only took about six days to get going. We had a manual workaround for generating builds after only two days, so there was no customer impact. The system isn’t yet “complete” – I still have some more Jobs I want to put into Jenkins, and I need to do some other housekeeping like getting build numbers auto-incrementing and showing up in the installers – but it’s mostly detail work that’s left. All of the infrastructure is set up and rolling. I plan to document some of the Jenkins work here in the next few days, since it’s not straightforward, especially if you’re not familiar with Git or Jenkins, plus I found a couple of bugs along the way that you have to work around.
In the end, though, what we ended up with is an extremely versatile cross-platform infrastructure. I’m really liking the power and flexibility it has already brought to our process, and I’ve already got a lot of ideas for additions to it. If you’re looking to set up something similar, here’s the checklist of what I ended up with (hopefully I’m not missing anything).
Developer workstations with:
- Visual Studio 2013 for developing Windows Desktop and Server editions of apps
- Xamarin Studio for developing Linux, Android and iOS editions
- Visual Studio 2008 for developing Compact Framework editions
A server with:
- Visual Studio 2013 and 2008 (trying to get msbuild and mstest running without Studio installed proved too frustrating within my time constraints)
- Mono 3.3
- 7-Zip (for making the Linux tarballs)
- NSIS (for making the desktop installers)
- Jenkins with the following plug-ins