OpenNETCF ORM Updates: Dream Factory and Azure Tables

We’ve been busy lately.  Very, very busy with lots of IoT work.  A significant amount of that work has been using the Dream Factory DSP for cloud storage, and as such we’ve done a lot of work to make the Dream Factory implementation of the OpenNETCF ORM more solid and reliable (as well as a pretty robust, stand-alone .NET SDK for the Dream Factory DSP as a byproduct).  It also shook out a few more bugs and added a few more features to the ORM core itself.

I’ve pushed a set of code updates (though not an official release yet) up to the ORM Codeplex project that includes these changes, plus an older Azure Table Service implementation I had been working on a while back, in case anyone is interested and wants to play with it, use it or extend it.  The interesting thing about the Azure implementation is that it includes an Azure Table Service SDK that is Compact Framework-compatible.

As always, feel free to provide feedback, suggestions, patches or whatever over on the project site.

Lots of ORM Updates

We use the OpenNETCF ORM in the Solution Family products.  Unfortunately I haven’t figured out a good way to keep the code base for the ORM stuff we use in Solution Family in sync with the public code base on CodePlex, so occasionally I have to go in and use Araxis Merge to push changes into the public tree, then check them into the public source control server.  What that means to you is that you’re often working with stale code.  Sorry, that’s just how the cookie crumbles, and until I figure out how to clone myself Multiplicity-style, it’s not likely to change.

At any rate, we’re pretty stable on the Solution Family side of things, so I did a large merge back into the public tree this evening.  I still have to do a full release package, but the code is at least up to date as of change set 104901 and all of the projects (at least I hope) properly build.

Most of the changes revolve around work I’ve been doing with the Dream Factory cloud implementation, so there are lots of changes there, but I also have been doing more with DynamicEntities, so some changes were required for that too.  Of course there are assorted bug fixes as well, most of them in the SQLite implementation.  I leave it to you and your own diff skills if you really, really want to know what they are.

Go get it.  Use it.  And for Pete’s sake, quit writing SQL statements!

Storing Cloud Data Over Transient Connections

An interesting challenge in many M2M scenarios is that your network connection is often far from good. If you’re trying to collect engine data from an over-the-road truck going 80 miles an hour across rural Montana, it’s a pretty safe bet that you’re going to have places where you have no access to a network. Even in urban areas you have dead spots, plus there’s always the old “driver put a tuna can over the antenna” scenario to throw a wrench into things. Just because we lose our connection doesn’t mean we should start throwing data onto the floor, though. We need a data storage mechanism that’s robust enough to deal with these kinds of problems.

What you need is a local data store for the times when you don’t have connectivity and a remote store when you do. Or maybe a local store that does store-and-forward or replication. Yes, you could roll your own data storage service that can do these things, but why would you when there’s a perfectly good, already written solution out there? Again, you should be abstracting your application’s data services so you can focus on the business problems you’re good at. Solve the problems your company is hired to solve – not the grunt work of putting data into storage.

I added a new feature today to the OpenNETCF ORM called Replication (it’s only in the source downloads right now, not the latest release). A Replicator attaches to any DataStore and ties it to any other DataStore. It doesn’t matter what the actual storage is – there’s the beauty of abstraction, it works with any supported stores – it can take data from one store and push it to another behind the scenes for you. So you can store to a local SQLite data file and have a Replicator push that data off to an Azure table. And it requires no change in your data Insert logic at all. Zero.

Currently Replicators are simplistic in capability. They can only replicate Inserts, and they only do a “Replicate and Delete”, meaning that during replication the data is “moved” from the local store to the remote store. That’s typically all you need, though, and the typical case is all I’m trying to solve in the first pass.

So what does it look like, you ask? Below is an example of a working test that stores locally to a SQL Compact database, and when the network is up, those rows get moved off to a DreamFactory Cloud table. Notice that the only “new” things you do here are to define the DataStore where the replicated data goes, define which Entities will get replicated (it’s opt-in on a per-table basis), and add the Replicator to the source DataStore’s new Replicators collection. Yes, that means you could even replicate different tables to different target Stores.

public void BasicLocalReplicationTest()
{
    var source = new SqlCeDataStore("source.sdf");
    source.AddType<TestItem>();
    if (!source.StoreExists)
    {
        source.CreateStore();
    }

    // connection parameters were elided in the original listing;
    // these are placeholders for your DSP address and credentials
    var destination = new DreamFactoryDataStore(
        "https://my-dsp.example.com", "MyApplication", "user", "password");

    destination.AddType<TestItem>();
    if (!destination.StoreExists)
    {
        destination.CreateStore();
    }

    // build a replicator to send data to the destination store
    var replicator = new Replicator(destination, ReplicationBehavior.ReplicateAndDelete);

    // replication is opt-in, so tell it what type(s) we want to replicate
    replicator.RegisterEntity<TestItem>();

    // add the replicator to the source
    source.Replicators.Add(replicator);

    // watch an event for when data batches go out
    replicator.DataReplicated += delegate
    {
        // get a count of rows sent so far
        Debug.WriteLine(string.Format("Sent {0} rows", replicator.GetCount<TestItem>()));
    };

    var rows = 200;

    // put some data in the source
    for (int i = 0; i < rows; i++)
    {
        var item = new TestItem(string.Format("Item {0}", i));
        source.Insert(item);
    }

    int remaining = 0;
    // loop until the source table is empty
    do
    {
        Thread.Sleep(500);
        remaining = source.Count<TestItem>();
    } while (remaining > 0);

    // make sure the destination has all rows
    Assert.AreEqual(rows, destination.Count<TestItem>());
}

New ORM Implementation: Dream Factory

As the chief architect for the Solution Family, I’m always looking for new places to store M2M data.  I like to investigate anything that might make sense for either ourselves or our customers.  A few months ago I came across Dream Factory, which is an open-source, standards-based Cloud Services Platform.  They provide more than just storage, but my first foray was into the storage side of things.

First I had to understand exactly how to use their REST APIs to do the basic CRUD operations.  On the surface, their API documentation looked good, but it actually lacked some of the fundamental pieces of information on how to initiate a session and start working.  I suspect that this is because they’re small and new, and let’s face it, documentation isn’t that fun to generate.  Fortunately, their support has been fantastic – I actually can’t praise it enough.  Working with their engineers I was able to piece together everything necessary to build up a .NET SDK to hit their DSP service.  To be fair, the documentation has also improved a bit since I started my development, so several of the questions I had have been clarified for future developers.  Again, this points to their excellent support and responsiveness to customer feedback.
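To give a flavor of what “initiating a session” involved, here’s a minimal sketch of posting credentials to a DSP and getting back a session response.  The endpoint path and JSON field names here are assumptions from memory, not the documented Dream Factory API – treat this as an illustration of the pattern, not a reference.

```csharp
// Hypothetical sketch of a DSP session handshake. The "/rest/user/session"
// path and the "email"/"password" field names are assumptions, not
// verified against the Dream Factory documentation.
using System.IO;
using System.Net;
using System.Text;

public static class DreamFactorySession
{
    // POST credentials to the (assumed) session endpoint and return the raw
    // JSON response, which carries the session token to reuse on later calls
    public static string Create(string dspUrl, string email, string password)
    {
        var request = (HttpWebRequest)WebRequest.Create(dspUrl + "/rest/user/session");
        request.Method = "POST";
        request.ContentType = "application/json";

        var body = string.Format(
            "{{\"email\":\"{0}\",\"password\":\"{1}\"}}", email, password);
        var data = Encoding.UTF8.GetBytes(body);

        using (var stream = request.GetRequestStream())
        {
            stream.Write(data, 0, data.Length);
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }
}
```

Once you have the session token, every subsequent CRUD request carries it in a header, which is what the SDK manages for you behind the scenes.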

Once I had a basic SDK, I then wrapped it in an implementation of the ORM that, for now, supports all of your basic CRUD operations.  The code is still wet, doesn’t support all ORM features, and is likely to be a bit brittle still, but I wanted to get it published in case anyone else wanted to start playing with it as well.  So far I’m pretty pleased with it, and once I have it integrated into Solution Engine, it will get a whole lot of use.
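For a sense of what “basic CRUD” looks like through the ORM wrapper, here’s a minimal sketch.  The DreamFactoryDataStore constructor arguments are placeholders, and the entity and property names are illustrative, not from the actual test suite.

```csharp
// Illustrative CRUD round-trip through the ORM abstraction. Connection
// parameters are placeholders, not real DSP credentials, and TestItem
// is assumed to be a simple ORM entity with a Name property.
var store = new DreamFactoryDataStore(
    "https://my-dsp.example.com", "MyApplication", "user", "password");

// register the entity type and make sure the backing table exists
store.AddType<TestItem>();
if (!store.StoreExists)
{
    store.CreateStore();
}

// Create
var item = new TestItem("First");
store.Insert(item);

// Read
var items = store.Select<TestItem>();

// Update
item.Name = "Updated";
store.Update(item);

// Delete
store.Delete(item);
```

The point of the abstraction is that this code is identical whether the store behind it is Dream Factory, SQL Compact, or SQLite.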

I’ve published the SDK and the ORM implementations up in CodePlex, all under the OpenNETCF.DreamFactory branch of the source tree.  Head over to Dream Factory to learn more and set up your free DSP account, then jump in.

It’s worth noting that at this point it’s a Windows Desktop implementation only.  Mono, Compact Framework and Windows Phone will follow at some point.