New ORM Implementation: Dream Factory

As the chief architect for the Solution Family, I’m always looking for new places to store M2M data.  I like to investigate anything that might make sense for either ourselves or our customers.  A few months ago I came across Dream Factory, which is an open-source, standards-based Cloud Services Platform.  They provide more than just storage, but my first foray was into the storage side of things.

First I had to understand exactly how to use their REST APIs to do the basic CRUD operations.  On the surface, their API documentation looked good, but it actually lacked some of the fundamental pieces of information on how to initiate a session and start working.  I suspect that this is because they’re small and new and, let’s face it, documentation isn’t that fun to generate.  Fortunately, their support has been fantastic – I actually can’t praise it enough.  Working with their engineers, I was able to piece together everything necessary to build up a .NET SDK to hit their DSP service.  To be fair, the documentation has also improved a bit since I started my development, so several of the questions I had have been clarified for future developers.  Again, this points to their excellent support and responsiveness to customer feedback.

Once I had a basic SDK, I wrapped it in an implementation of the ORM that, for now, supports all of your basic CRUD operations.  The code is still wet, doesn’t support all ORM features, and is likely to be a bit brittle, but I wanted to get it published in case anyone else wanted to start playing with it as well.  So far I’m pretty pleased with it, and once I have it integrated into Solution Engine, it will get a whole lot of use.

I’ve published the SDK and the ORM implementation on CodePlex, all under the OpenNETCF.DreamFactory branch of the source tree.  Head over to Dream Factory to learn more and set up your free DSP account, then jump in.
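For a sense of what the result looks like in practice, here’s a minimal sketch of pointing the ORM at a DSP.  The class name `DreamFactoryDataStore`, its constructor arguments, and the DSP URL are illustrative assumptions on my part, not necessarily the exact published API – check the OpenNETCF.DreamFactory source for the real signatures.

```csharp
using OpenNETCF.ORM;

// A plain ORM entity; storage lands in the DSP instead of a local database.
[Entity(KeyScheme.Identity)]
public class Reading
{
    [Field(IsPrimaryKey = true)]
    public int ID { get; set; }

    [Field]
    public double Value { get; set; }
}

public class Example
{
    public static void Run()
    {
        // Store-specific setup: the DSP address, app name and credentials
        // are hypothetical here. This is the only part that differs from
        // any other ORM backing store.
        var store = new DreamFactoryDataStore(
            "https://my-dsp.example.com",
            "MyApp", "user@example.com", "password");

        store.AddType<Reading>();
        store.CreateOrUpdateStore();

        // From here on, it's standard ORM CRUD.
        store.Insert(new Reading { Value = 42.0 });
    }
}
```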

It’s worth noting that at this point it’s a Windows Desktop implementation only.  Mono, Compact Framework and Windows Phone will follow at some point.

Stop Using “The Cloud”

We’ve all been in a meeting, conference or seminar where the talk goes something along these lines:

“We collect the data at the device, then we send it up to The Cloud where we can do analysis and other things on it.”

This is usually accompanied by the vague graphic of a line – sometimes solid, sometimes dotted – pointing to the generic PowerPoint cloud image.

I hate this statement.  I hate this cloud.  This is usually a huge red flag that the speaker or author really has no idea what they’re talking about.  It’s a sales or marketing cop-out that is the technical equivalent of the popular “then a miracle happens” cartoon.

What does “send it to The Cloud” even mean?  What cloud?  How is the data going to get there?  How will I get it back out? I’ve got other questions about “the cloud” that I’ll address in another post, but these three are the biggies that you should always raise your hand and ask.

1. What is “The Cloud?”

Here’s the little secret – there is no single “cloud”.  There are a boatload of them, and the list seems to be continually growing as it gets cheaper to stand up servers with a lot of storage and as more and more use-cases make use of off-device storage.  “The Cloud” can mean a whole panoply of things, including (but definitely not limited to):

Again, there are plenty of others; these are just most of the ones I’ve dealt with in the past year or so.

And bear in mind that not all customers are going to be amenable to all clouds.  Some customers aren’t so comfortable putting their data on servers they don’t control.  Some aren’t comfortable putting their data on servers in particular countries where governmental agencies are known to mine them.  Some customers simply have predispositions to different services for different reasons.

Maybe they like Azure because it provides a simple interface for their .NET enterprise apps.  Maybe they like Amazon’s scale.  Maybe they like Brand X just because they do.  The point is that if you have more than one customer, you’re probably going to need to look at more than one cloud provider.  We’ve got customers that use multiple clouds due to the benefits each provides for certain types of data, data retention policies or total cost of use.

2. How do I get my data into “The Cloud”?

Yeeaaahhh, about that…since there are a boatload of cloud services, there are a boatload of ways that an app gets data into them.  Network databases might use ODBC or a proprietary/specific API set.  Services on the web typically use a web service interface.  Maybe REST, maybe OData, maybe something else.  The point here is that none of them are the same.  None of them.

So that means if you have to support multiple clouds, you will have to support multiple API sets.  Of course you can abstract them all back to a common interface – I’d certainly recommend it; it’s what we do using the OpenNETCF ORM – but there’s still work to be done to actually implement each.
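As a sketch of that abstraction (the interface below is illustrative; the real OpenNETCF ORM `IDataStore` contract is considerably richer), each provider hides its own API behind one shared contract:

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative common contract. One implementation exists per cloud;
// application code only ever sees ICloudStore.
public interface ICloudStore
{
    void Insert<T>(T item);
    IEnumerable<T> Select<T>();
}

// This implementation stores in memory so the example is self-contained.
// A real one would make the ODBC, REST or OData calls for its cloud.
public class InMemoryStore : ICloudStore
{
    private readonly List<object> _items = new List<object>();

    public void Insert<T>(T item)
    {
        _items.Add(item);
    }

    public IEnumerable<T> Select<T>()
    {
        return _items.OfType<T>();
    }
}
```

The application codes against `ICloudStore` only, so adding a new cloud means writing one new implementation rather than touching every data access call site.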

3. How do I get my data back out of “The Cloud”?

Just like putting the data in, getting it out requires an API, and again, they are all different.  Another thing to consider in the “getting it out” part of the equation is how you actually use it.

Some clouds allow you to run services and/or applications on the servers as well, so data access is direct.  Sometimes you have to pull it back out to another server.  Once again, this means more work from a development perspective.  And again, you’ve got to multiply that by the number of clouds you need to support.

And how about data retention? Some clouds are not “forever” storage.  The data gets purged on a time basis.  If you need to keep the data archived for later data mining then add that work onto your plate too.

So the next time you see that slide with the cloud on it and the speaker says “just send the data to the cloud” raise your hand.  Take them to task over it.  We build software, and while that seems like magic to some, we don’t do miracles.

OpenNETCF ORM Implementation Update

I’ve been maintaining and expanding the OpenNETCF ORM code base for quite some time now and it’s becoming pretty robust.  We dogfood it heavily and have a variety of application installs using it for all sorts of things, from local apps to M2M solutions.  One key tenet I’ve been following is that I opt for portability over expansive feature support.  Some features that you might expect an ORM to have are very difficult to do in a generic way that supports both RDBMS systems and object or cloud databases.  It becomes easier for an application to handle those relationships itself, or for you to (gasp) denormalize your data.  For example, composite primary keys are a common request, but they’re a pretty complex thing to implement for an RDBMS, and for an object database, they’re a friggin’ nightmare.  It’s a lot easier for me to go do a whole new store implementation and just tell users that they should use surrogate keys.  We’re not the only ORM that feels that way, and honestly, I think composite keys are generally a bad idea anyway.
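To make the surrogate-key recommendation concrete, here’s roughly what it looks like with the ORM’s attribute model (treat the exact attribute spellings as approximate to the public source):

```csharp
using OpenNETCF.ORM;

// Instead of a composite primary key on (OrderID, LineNumber), use a
// single identity-generated surrogate key and keep the natural fields
// as ordinary searchable columns.
[Entity(KeyScheme.Identity)]
public class OrderLine
{
    [Field(IsPrimaryKey = true)]
    public int ID { get; set; }         // surrogate key

    [Field]
    public int OrderID { get; set; }    // former composite-key member

    [Field]
    public int LineNumber { get; set; } // former composite-key member
}
```

The uniqueness of the (OrderID, LineNumber) pair then becomes an application-level concern, which is exactly the trade the post describes: a little application logic in exchange for an entity model that ports cleanly to object and cloud stores.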

Features have largely been need-driven, and by “need” I mean what I need at any given time.  I’ve also taken some time to experiment with different backing stores, which has led me to have a variety of implementations in different states of “doneness.”  For example, I have SQL Compact, Oracle and SQLite at a point I’d call complete, but I have a variety of others that aren’t quite so done.  Some are in the public source tree on CodePlex; some haven’t found their way there yet, but probably will when they get further along.

Here’s a complete list of implementations I have worked on, and a rough guess on the state of completion of each.  If you’d like to see me work on any one in particular, let me know:

A majority of these (I think all but Oracle) work cross-platform, meaning I’ve tested them on big Windows, Windows CE and Mono under Linux.

A key point here, beyond the cross-platform capability (which was no small effort), is the fact that identical data access code in your application can perform the good old standard CRUD operations on *any* of those data stores.  The only code changes needed are setup bits (providing credentials, file paths, etc) that are specific to each store type.  Show me any other existing ORM that has even close to this kind of coverage.
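A quick illustration of that claim (constructor signatures approximate): swapping backing stores changes one line of setup, and every CRUD call below it stays identical.

```csharp
using OpenNETCF.ORM;

[Entity(KeyScheme.Identity)]
public class Person
{
    [Field(IsPrimaryKey = true)]
    public int ID { get; set; }

    [Field]
    public string Name { get; set; }
}

public class Example
{
    public static void Run()
    {
        // Only this line changes per backing store
        // (constructor signatures approximate):
        var store = new SqlCeDataStore("data.sdf");
        // var store = new SQLiteDataStore("data.db");

        store.AddType<Person>();
        store.CreateOrUpdateStore();

        // Identical application code on every supported store.
        store.Insert(new Person { Name = "John" });
        var everyone = store.Select<Person>();
    }
}
```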

OpenNETCF ORM: Dynamic Entities

There’s no doubt in my mind that code libraries and frameworks are fantastic for saving time and work – after all, I want to spend my time solving my business problem, not writing infrastructure.  Nowhere is this more true than with Object-Relational Mapping, or ORM, libraries.

Broadly speaking, if you’re still writing application code that requires that you also write SQL, you’re wasting your time.  Wasting time thinking about SQL syntax.  Wasting time writing it.  Wasting time testing, debugging and maintaining it.

I believe this so strongly, and was so dissatisfied with every existing ORM offering, that I wrote my own.  It wasn’t a trivial task, but I have something that does exactly what I need, on the platforms I need, and does it at a speed that I consider more than acceptable.  Occasionally, though, I hit a data storage requirement that ORMs, even my own, aren’t so good at.

ORM usage is usually viewed as one of two approaches: code first or data first.  With the code-first approach, a developer defines the storage entity classes and the ORM generates a backing database from them.  With data first, the developer feeds a database into the ORM or an ORM tool, and it then generates the entity classes for you.

But this doesn’t cover all scenarios – which is something no ORM that I’m aware of seems to acknowledge.  Consider the following use-case (and this is a real-world use case that I had to design for, not some mental exercise).

I have an application that allows users to define storage for data at run time in a completely ad-hoc manner.  They get to choose what data items they want to save, but even those data items are dynamically available so they are available only at run time.

So we need to store effectively a flat table of data with an unknown set of columns.  The column names and data types are unknown until after the application is running on the user’s machine.

So the entities are neither data first nor code first.  I’ve not thought of a catchy term for these types of scenarios, so for now I’ll just call it “user first” since the user has the idea of what they want to store and we have to accommodate that.  This is why I created support in the OpenNETCF ORM for the DynamicEntity.

Let’s assume that the user decides they want to store a FirstName and a LastName for a person.  For convenience, we also want to store a generated ID for the Person entities that get stored.

At run time, we generate some FieldAttributes that define the Person:

var fieldList = new List<FieldAttribute>();

fieldList.Add(new FieldAttribute()
{
    FieldName = "ID",
    IsPrimaryKey = true,
    DataType = System.Data.DbType.Int32
});

fieldList.Add(new FieldAttribute()
{
    FieldName = "FirstName",
    DataType = System.Data.DbType.String
});

fieldList.Add(new FieldAttribute()
{
    FieldName = "LastName",
    DataType = System.Data.DbType.String,
    AllowsNulls = false
});

And then we create and register a DynamicEntityDefinition with the DataStore:

var definition = new DynamicEntityDefinition("Person", fieldList);
store.RegisterDynamicEntity(definition);

Now, any time we want to store an entity instance, we simply create a DynamicEntity and pass that to the Insert method, just like any other Entity instance, and the ORM handles storage for us.

var entity = new DynamicEntity("Person");
entity.Fields["FirstName"] = "John";
entity.Fields["LastName"] = "Doe";
store.Insert(entity);

entity = new DynamicEntity("Person");
entity.Fields["FirstName"] = "Jim";
entity.Fields["LastName"] = "Smith";
store.Insert(entity);

The rest of the CRUD operations are similar; we simply have to name the definition type where appropriate.  For example, retrieving looks like this:

var people = store.Select("Person");

Updating like this:

var person = people.First();
person.Fields["FirstName"] = "Joe";
person.Fields["LastName"] = "Satriani";
store.Update(person);

And deleting like this:

store.Delete("Person", people.First().Fields["ID"]);

We’re no longer bound by the either-or box of traditional ORM thinking, and it leads to offering users some really interesting and powerful capabilities that before were relegated to only those who wanted to abandon an ORM and hand-roll the logic.

ORM: Transactions are now supported

I’ve just checked in new code changes and rolled a full release for the OpenNETCF ORM.  The latest code changes add transaction support.  This new release adds a load of features since the last (the last was way back in February), most notably full support for SQLite on all of the following platforms: Windows Desktop, Windows CE, Windows Phone and Mono for Android.

ORM Update: SQLite for Compact Framework now supported

The OpenNETCF ORM has supported SQLite for a while now – ever since I needed an ORM for Android – but somehow I’d overlooked adding SQLite support for the Compact Framework.  That oversight has been addressed and support is now in the latest change set (99274).  I’ve not yet rolled it into the Release as it doesn’t support Dynamic Entities yet, but that’s on the way.

ORM Update: Dynamic Entities

The Data Collector feature in our Solution Family product line is one of the oldest (if not the oldest) sections of code, and as such it’s in need of a refactor to improve how it works.  We’ve updated just about everything else that uses data storage to use the OpenNETCF ORM framework, but Data Collectors have languished, largely because they are complex.  The Data Collector lets a user create an ad-hoc data collection definition that translates into a SQL Compact Table at run time.  The problem with migrating to ORM is that the ORM requires compile-time definitions of all Entities.  At least that was the problem until today.

I just checked in a new change set that supports the concept of a DynamicEntity (along with all of the old goodness that is ORM).  Now you can create and register a DynamicEntityDefinition with your IDataStore at run time and it will generate a table for you in the back end.  New overloads for all of the typical CRUD commands (Select, Update, Insert, Delete) allow you to just ask for the DynamicEntity by name and it returns an array of DynamicEntity instances that hold the field names and data values.

It’s in a “beta” state right now, but there’s a test method in the change set that shows general usage of all CRUD operations.  Give it a spin, and if you find any bad behavior, report it on Codeplex.

ORM Update: Added events

We recently needed the ability to do most-recently-updated data caching in our Solution Family products.  Since the products use the OpenNETCF ORM Framework, it only made sense to update the framework itself to include events that fire whenever an Insert, Update or Delete occurs.  In fact I added Before and After versions for each. While I was at it, I also added a full complement of virtual On[Before|After][Insert|Update|Delete] methods to the DataStore base, allowing DataStore implementers to hook into the process as well.  I’m thinking I’ll use those at some point in the future to add some form of Trigger capabilities.
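The caching scenario described above ends up looking something like this; the event wiring and argument shapes are approximate rather than copied from the source tree.

```csharp
using System.Collections.Generic;

// Most-recently-updated cache fed by post-insert notifications.
// Wire it up with something like: store.AfterInsert += cache.OnAfterInsert;
// (event name and argument members are approximate).
public class RecentCache
{
    private readonly Dictionary<string, object> _latest =
        new Dictionary<string, object>();

    public void OnAfterInsert(string entityName, object item)
    {
        // Keep the most recently written instance per entity type.
        _latest[entityName] = item;
    }

    public object GetLatest(string entityName)
    {
        object item;
        return _latest.TryGetValue(entityName, out item) ? item : null;
    }
}
```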

OpenNETCF.ORM Now supports Xamarin Mono for Android

We’ve started doing work creating client applications for a few of our products to run on Android and iOS devices.  Since I’m a huge fan of code re-use, I took the time this week to port the OpenNETCF.ORM over to Xamarin’s Mono for Android using a SQLite backing store implementation.  The biggest challenge was that SQLite doesn’t support TableDirect or ResultSets, so it took a bit of code to get running.  Still, it took only a day and a half to get what I feel is pretty good support up and running.  I’ve not yet tested it through all of the possible permutations of queries, etc., but basic, single-table CRUD operations all test out fine.

So now a single code base can work on the Windows Desktop, Windows CE and Android (probably iOS and Windows Phone as well with very little work). If you’re doing MonoDroid work, give it a try.