New Open Source Projects

I’m working on a new application that’s going to be open source.  Once it’s a bit further along I’ll provide more details, but essentially it’s a paperless forms solution that encompasses a WPF desktop app, a self-hosted ASP.NET Web API service and a Xamarin.Forms mobile app, all working together.

In the process of getting the foundation up and running, I’m moving a lot of stuff over to Github, but more importantly I’m updating, extending, and creating whole new libraries that can be used in other projects.  These, too, are open source.

What’s being updated?

  • OpenNETCF.ORM
    I’m primarily working with SQLite, but I’ve already uncovered and fixed some issues around one-to-many entity relationships.
  • OpenNETCF.IoC
    Work and updates to make it more Xamarin/PCL friendly
  • OpenNETCF Extensions
    This thing has been stable and in heavy production use for years.  Support continues for it with some minor changes and improvements in the Validation namespace so far.

What’s New?

  • OpenNETCF.Xamarin.Forms
    Adding things that I’m finding missing or hard in Xamarin’s BCL.  Things like “hey, scale this label to the right size depending on my screen resolution, on any platform”.  Thanks go out to Peter Foot for pointing me in the right direction there.
  • OpenNETCF.MVVM
    No idea why Xamarin didn’t provide a basic navigation framework.  I created one.  Is it awesome?  I don’t know – but it works for my use case (plus 3 other apps I’ve done with it).
  • OpenNETCF.Google.Analytics (moving to its own repo soon)
    Again, seems like low-hanging fruit here.  Why isn’t it in the box?  I don’t know, but there’s now a simple, open source library for it.

One note – these are all in active development, so don’t expect NuGet package updates on them for at least a little while (end of May?).  I’d like to get features in and stable before rolling them out.

Feedback welcome. Testers wanted.  Enjoy.

ORM Migration to Github

With the announced mothballing of Codeplex, I’m working to migrate at least some of the open source projects I have to Github.

It turns out I own 29 projects over on Codeplex. Some of them, like the barcode decoding library, were simply learning exercises.  Some of them were ideas that I never found time to actually work on.  Many are for technologies that are now dead (I’m looking at you, Compact Framework!). But some are actually useful, and I still use them quite regularly.

As I pull them into Github, I’m also taking the time to merge in local forks I have from other projects, as well as to do NuGet releases.  It turns out that this is a fairly large undertaking, but it’s forcing me to do some cleanup that’s long overdue.

So yesterday I pulled over the OpenNETCF ORM.  For those who don’t know, it’s a lightweight ORM (way, way, way lighter than Entity Framework) that supports code first, data first *and* “I don’t know my data structure until run time” – that last one being something no other ORM I’ve seen supports.  It has support for MySQL, SQLite, SQL Server, SQL Compact, Oracle, Azure Table Service and Dream Factory out of the box.  Adding new implementations, especially for anything that already has an ADO.NET provider, is really easy.
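
If you’re curious what the “structure at run time” case looks like, here’s a minimal sketch using the ORM’s DynamicEntity support.  I’m writing the calls from memory of the public source, so treat the exact signatures as illustrative rather than gospel:

// a minimal sketch of defining a table at run time, then inserting into it
// (illustrative; see the DynamicEntity types in the source for specifics)
var store = new SqlCeDataStore("runtime.sdf");
if (!store.StoreExists)
{
    store.CreateStore();
}

// build the entity definition from field metadata discovered at run time
var fields = new List<FieldAttribute>()
{
    new FieldAttribute() { FieldName = "ID", DataType = DbType.Int32, IsPrimaryKey = true },
    new FieldAttribute() { FieldName = "Name", DataType = DbType.String }
};
store.RegisterDynamicEntity(new DynamicEntityDefinition("People", fields));

// insert a row with no compile-time entity class at all
var person = new DynamicEntity("People");
person.Fields["ID"] = 1;
person.Fields["Name"] = "Ada";
store.Insert(person);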

So, if you’re a current ORM user, or looking for a simple ORM for a .NET project, take a look.

Another ORM Update

One of the laws of software development is “a better way of finding bugs than testing is to release,” and sure enough, that happened.  I rolled a release of the ORM yesterday, and no sooner had I done so than someone hit a bug in the new connection pooling in the Compact Framework.  It turns out that the SqlCeConnection exposes a Disposed event in the CF, but that event never fires.  The connection pooling implementation relied on that event to know when to remove connections from the pool, and voilà – a bug is born.  Is it a bug in the ORM, or a bug in the SqlCeConnection?  I’d say that’s debatable both ways, but since I only have control over one of them, I made the fix where I could.

There’s a workaround checked into the source tree over on Codeplex (change set 107904).  This fix is *not* yet rolled into a release package (sure would be nice if that were automated).

ORM Updates

Generally speaking, the OpenNETCF ORM library gets worked on offline.  We use it extensively in our Solution Engine product, so the “working” copy of the code resides in a separate source tree.  The good part of this is that it’s always getting tested, extended and fixed.  The down side is that I have to manually merge those changes back to the public source tree, so it probably doesn’t happen as often as it should.

At any rate, I did a large merge this morning and there are some cool new features that have been added, along with a long list of bug fixes.

The general bug fixes and enhancements are largely SQL Server related, since we did a large SQL Server push for a customer this year, and the ORM benefitted from that.

The new features are twofold:

1. I added some rudimentary Connection Pool support.  The ORM itself can now keep a pool of connections to your target data store, and it will re-use them in a round-robin fashion.  It’s nothing overly complex, but if you’ve got a few different threads doing different work against the database, it improves performance (there’s a sketch of the idea right after this list).

2. I added what I call a “Recovery Service,” because I couldn’t come up with a better name.  This doesn’t restore data; instead, it attempts to recover from failures when it can.  It’s really useful for remotely connected data stores.  For example, if you are using a SQL Server or Dream Factory DSP and the server or network goes down, your data actions are going to fail, and you have to add a load of handling logic for that.  The Recovery Service is designed to help you out here.  It will look for those types of failures and cache your operations until the store comes back online, at which point it will apply the changes.  Sort of a “store-and-forward” capability, but at the command level and in RAM (also sketched below).
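
For the curious, the round-robin idea is roughly the sketch below.  This is an illustration of the concept, not the ORM’s actual pool code – the ConnectionPool class here is invented for the example:

// a simplified, illustrative sketch of round-robin connection pooling:
// connections are handed out in rotation so threads don't serialize
// on a single connection
public class ConnectionPool
{
    private readonly List<IDbConnection> m_connections = new List<IDbConnection>();
    private readonly Func<IDbConnection> m_factory;
    private readonly object m_syncRoot = new object();
    private int m_nextIndex;

    public ConnectionPool(Func<IDbConnection> factory, int size)
    {
        m_factory = factory;
        for (int i = 0; i < size; i++)
        {
            var connection = m_factory();
            connection.Open();
            m_connections.Add(connection);
        }
    }

    // hand out the next connection in the rotation
    public IDbConnection GetConnection()
    {
        lock (m_syncRoot)
        {
            var index = m_nextIndex;
            m_nextIndex = (m_nextIndex + 1) % m_connections.Count;
            var connection = m_connections[index];

            // don't trust the Disposed event to detect a dead connection
            // (on the Compact Framework it never fires); check state instead
            if (connection.State != ConnectionState.Open)
            {
                connection.Dispose();
                connection = m_factory();
                connection.Open();
                m_connections[index] = connection;
            }

            return connection;
        }
    }
}

The Recovery Service is conceptually a store-and-forward queue sitting in front of the store.  Again, this is a sketch of the idea only – RecoveryQueue and its methods are invented names, not the ORM’s API:

// a conceptual, illustrative sketch of store-and-forward recovery:
// failed operations are cached in RAM and replayed when the store returns
public class RecoveryQueue
{
    private readonly Queue<Action> m_pending = new Queue<Action>();
    private readonly object m_syncRoot = new object();

    // run a store operation; if it fails, cache it for a later retry
    public void Execute(Action storeOperation)
    {
        try
        {
            storeOperation();
        }
        catch (Exception) // a real version would filter for connectivity failures
        {
            lock (m_syncRoot)
            {
                m_pending.Enqueue(storeOperation);
            }
        }
    }

    // call periodically (e.g. from a timer) to replay cached operations
    public void TryFlush()
    {
        lock (m_syncRoot)
        {
            while (m_pending.Count > 0)
            {
                try
                {
                    var operation = m_pending.Peek();
                    operation();
                    m_pending.Dequeue();
                }
                catch (Exception)
                {
                    return; // the store is still down; try again later
                }
            }
        }
    }
}

You’d wrap your store calls in something like recoveryQueue.Execute(() => store.Insert(item)); and pump TryFlush from a timer.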

The new release is over on CodePlex in source and binary format.

Row filtering in the ORM

For a while now we’ve had an unwanted behavior in our Solution Engine product. The larger the on-device database got, the longer the app took to load. To the point that some devices in the field were taking nearly 5 minutes to boot (up from roughly 1 minute under normal circumstances). This morning we decided to go figure out what was causing it.

First, we pulled a database file from a device that is slow to boot. It turns out that the database was largely empty except for about 50k rows in a log table where we record general boot/run information on the device for diagnostics.

At startup the logging service pulls the last hour of log information and outputs it to the console, which has proven to be very helpful in diagnosing bad behaviors and crashes. Looking at the code that gets that last hour of data, we saw the following:

var lastHourEntries = m_store.Select<SFTraceEntry>(a => a.TimeStamp >= selectFromTime);

Now let’s look at this call in the context of having 50k rows in the table. What it effectively says is “retrieve every row from the SFTraceEntry table, hydrate an SFTraceEntry class for each row, then walk through that entire list checking the TimeStamp field. If the TimeStamp is less than an hour old, copy that item to a new list, and when you’re done, return the filtered list.” Ouch. This falls into the category of “using a tool wrong.” The ORM supports FilterConditions that, depending on the backing store, it will translate into a SQL statement, an index walk or something else more efficient than “return all rows.” In this case, the change was as simple as this:

var dateFilter = new FilterCondition("TimeStamp", selectFromTime, FilterCondition.FilterOperator.GreaterThan);
var lastHourEntries = m_store.Select<SFTraceEntry>(new FilterCondition[] { dateFilter });
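
As a side note, Select takes a collection of conditions, so you can filter on more than one field in a single call. The Severity field below is hypothetical, just to show the shape of it; as I recall, multiple conditions are combined with AND where the store supports it:

// illustrative only: "Severity" is a hypothetical field on SFTraceEntry
var filters = new FilterCondition[]
{
    new FilterCondition("TimeStamp", selectFromTime, FilterCondition.FilterOperator.GreaterThan),
    new FilterCondition("Severity", 2, FilterCondition.FilterOperator.GreaterThan)
};
var recentErrors = m_store.Select<SFTraceEntry>(filters);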

OpenNETCF ORM Updates: Dream Factory and Azure Tables

We’ve been busy lately.  Very, very busy with lots of IoT work.  A significant amount of that work has been using the Dream Factory DSP for cloud storage, and as such we’ve done a lot of work to make the Dream Factory implementation of the OpenNETCF ORM more solid and reliable (as well as producing a pretty robust, stand-alone .NET SDK for the Dream Factory DSP as a byproduct).  It also shook out a few more bugs and added a few more features to the ORM core itself.

I’ve pushed a set of code updates (though not an official release yet) up to the ORM Codeplex project that includes these changes, plus an older Azure Table Service implementation I had been working on a while back, in case anyone is interested and wants to play with it, use it or extend it.  The interesting thing about the Azure implementation is that it includes an Azure Table Service SDK that is Compact Framework-compatible.

As always, feel free to provide feedback, suggestions, patches or whatever over on the project site.

New ORM Release: v1.0.14007

I’ve finally gotten around to wrapping up all of the changes I’ve made in the last year (has it really been that long since the last release?) to the OpenNETCF ORM library.  The changes have always been available in the change set browser, but I actually have them as binary and source downloads now.  I probably should find the time to create a NuGet package for it (and IoC) now.

Lots of ORM Updates

We use the OpenNETCF ORM in the Solution Family products.  Unfortunately I haven’t figured out a good way to keep the code base for the ORM stuff we use in Solution Family in sync with the public code base on CodePlex, so occasionally I have to go in and use Araxis Merge to push changes into the public tree, then check them into the public source control server.  What that means to you is that you’re often working with stale code.  Sorry, that’s just how the cookie crumbles, and until I figure out how to clone myself Multiplicity-style, it’s not likely to change.

At any rate, we’re pretty stable on the Solution Family side of things, so I did a large merge back into the public tree this evening.  I still have to do a full release package, but the code is at least up to date as of change set 104901 and all of the projects (at least I hope) properly build.

Most of the changes revolve around work I’ve been doing with the Dream Factory cloud implementation, so there are lots of changes there, but I also have been doing more with DynamicEntities, so some changes were required for that too.  Of course there are assorted bug fixes as well, most of them in the SQLite implementation.  I leave it to you and your own diff skills if you really, really want to know what they are.

Go get it.  Use it.  And for Pete’s sake, quit writing SQL statements!

Storing Cloud Data Over Transient Connections

An interesting challenge in many M2M scenarios is that your network connection is often far from good. If you’re trying to collect engine data from an over-the-road truck going 80 miles an hour across rural Montana, it’s a pretty safe bet that there will be places where you have no access to a network. Even in urban areas you have dead spots, plus there’s always the old “driver put a tuna can over the antenna” scenario to throw a wrench into things. Just because we lose our connection doesn’t mean we should start throwing data onto the floor, though. We need a data storage mechanism that’s robust enough to deal with these kinds of problems.

What you need is a local data store for the times when you don’t have connectivity and a remote store when you do. Or maybe a local store that does store-and-forward or replication. Yes, you could roll your own data storage service that can do these things, but why would you when there’s a perfectly good, already written solution out there? Again, you should be abstracting your application’s data services so you can focus on the business problems you’re good at. Solve the problems your company is hired to solve – not the grunt work of putting data into storage.

I added a new feature today to the OpenNETCF ORM called Replication (it’s only in the source downloads right now, not the latest release). A Replicator attaches to any DataStore and ties it to any other DataStore. It doesn’t matter what the actual storage is – there’s the beauty of abstraction; it works with any supported store – it can take data from one store and push it to another behind the scenes for you. So you can store to a local SQLite data file and have a Replicator push that data off to an Azure table. And it requires no change to your data Insert logic at all. Zero.

Currently, Replicators are simplistic in capability. They can only replicate Inserts, and they only do a “Replicate and Delete,” meaning that during replication the data is “moved” from the local store to the remote store. But that’s typically all you need, and the typical case is all I’m trying to solve in the first pass.

So what does it look like, you ask? Below is an example of a working test that stores locally to a SQL Compact database and, when the network is up, moves those rows off to a Dream Factory cloud table. Notice that the only “new” things you do here are define the DataStore where the replicated data goes, define which entities will get replicated (it’s opt-in, on a per-table basis), and add the Replicator to the source DataStore’s new Replicators collection. Yes, that means you could even replicate different tables to different target Stores.

[TestMethod()]
public void BasicLocalReplicationTest()
{
    var source = new SqlCeDataStore("source.sdf");
    if (!source.StoreExists)
    {
        source.CreateStore();
    }
    source.AddType<TestItem>();

    var destination = new DreamFactoryDataStore(
        "https://dsp-mydsp.mycompany.dreamfactory.com/",
        "ORM", 
        "MyUID",
        "MyPWD");

    if (!destination.StoreExists)
    {
        destination.CreateStore();
    }

    // build a replicator to send data to the destination store
    var replicator = new Replicator(destination, ReplicationBehavior.ReplicateAndDelete);

    // replication is opt-in, so tell it what type(s) we want to replicate
    replicator.RegisterEntity<TestItem>();

    // add the replicator to the source
    source.Replicators.Add(replicator);

    // watch an event for when data batches go out
    replicator.DataReplicated += delegate
    {
        // get a count
        Debug.WriteLine(string.Format("Sent {0} rows", replicator.GetCount<TestItem>()));
    };

    var rows = 200;

    // put some data in the source
    for (int i = 0; i < rows; i++)
    {
        var item = new TestItem(string.Format("Item {0}", i));
        source.Insert(item);
    }

    int remaining = 0;
    // loop until the source table is empty
    do
    {
        Thread.Sleep(500);
        remaining = source.Count<TestItem>();
    } while(remaining > 0);

    // make sure the destination has all rows
    Assert.AreEqual(rows, destination.Count<TestItem>());
}

Sending M2M data to The Cloud

If you’re doing M2M work, it’s a pretty good bet that at some point you’ll need to send data off of a device for storage somewhere else (it better not be all of the data you have, though!).  Maybe it’s off to a MySQL server inside your network.  Maybe it’s off to The Cloud.  Regardless, you should expect that the storage location requirement could change, and that you might even need to send data to multiple locations.  What you should not do is code in a hard dependency on any particular storage form.  From your app’s perspective storage should be a transparent service.  Your app should say “Hey storage service, here’s some aggregated data.  Save it for me,” and that should be it.  The app shouldn’t even tell it “save it to The Cloud” or “save it to a local server.”  It should be up to the service to determine where the data should go, and that should be easily configurable and changeable.

This is pretty easy to do with an ORM, and naturally I think that the OpenNETCF ORM is really well suited for the job (I may be biased, but I doubt it).  It supports a boatload of storage mechanisms, from local SQLite to enterprise SQL Server to the newest Dream Factory DSP cloud.  And the code to actually store the data doesn’t change at all from the client perspective.

For example, let’s say I have a class called Temperatures that holds temps for several apartments that I’m monitoring.  Using the ORM, this is what the code to store those temperatures from Windows CE to a local SQL Compact database would look like:

store.Insert(currentTemps);

This is what the code to store those temperatures from Wind River Linux running Mono to an Oracle database would look like:

store.Insert(currentTemps);

And this is what the code to store those temperatures from Windows Embedded Standard to the Dream Factory DSP cloud would look like:

store.Insert(currentTemps);

Notice any pattern? The key here is to decouple your code. Make storage a ubiquitous service that you configure once, and you can spend your time writing code that’s interesting and actually solves your business problem.
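
The “configure once” part can be as simple as a little factory that picks the store from configuration. The store classes below are real ORM implementations, but the factory and the “StoreType” setting are made up for illustration:

// a hypothetical sketch of choosing a backing store from configuration;
// the factory and the "StoreType" key are invented for this example
public static class StorageFactory
{
    public static IDataStore Create()
    {
        switch (ConfigurationManager.AppSettings["StoreType"])
        {
            case "SqlCe":
                return new SqlCeDataStore("temps.sdf");
            case "DreamFactory":
                return new DreamFactoryDataStore(
                    "https://dsp-mydsp.mycompany.dreamfactory.com/",
                    "ORM",
                    "MyUID",
                    "MyPWD");
            default:
                return new SQLiteDataStore("temps.db");
        }
    }
}

And the application code never changes, no matter where the data ends up:

var store = StorageFactory.Create();
store.Insert(currentTemps);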