If you’re doing M2M work, it’s a pretty good bet that at some point you’ll need to send data off of a device for storage somewhere else (it better not be all of the data you have, though!). Maybe it’s off to a MySQL server inside your network. Maybe it’s off to The Cloud. Regardless, you should expect that the storage location requirement could change, and that you might even need to send data to multiple locations. What you should not do is code in a hard dependency on any particular storage form. From your app’s perspective storage should be a transparent service. Your app should say “Hey storage service, here’s some aggregated data. Save it for me,” and that should be it. The app shouldn’t even tell it “save it to The Cloud” or “save it to a local server.” It should be up to the service to determine where the data should go, and that should be easily configurable and changeable.
This is pretty easy to do with an ORM, and naturally I think that the OpenNETCF ORM is really well suited for the job (I may be biased, but I doubt it). It supports a boatload of storage mechanisms, from local SQLite to enterprise SQL Server to the newest Dream Factory DSP cloud. And the code to actually store the data doesn’t change at all from the client perspective.
For example, let’s say I have a class called Temperatures that holds temperature readings for several apartments that I’m monitoring. Using the ORM, this is what the code to store those temperatures from Windows CE to a local SQL Compact database would look like:
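Something like this, for instance — a sketch using the OpenNETCF ORM’s SQL Compact provider (`SqlCeDataStore`). The Temperatures entity, its field names, and the StorageService wrapper are all illustrative:

```csharp
using System;
using OpenNETCF.ORM;

// Illustrative entity - field names are my own invention.
[Entity(KeyScheme.Identity)]
public class Temperatures
{
    [Field(IsPrimaryKey = true)]
    public int ID { get; set; }
    [Field]
    public int ApartmentID { get; set; }
    [Field]
    public double ReadingCelsius { get; set; }
    [Field]
    public DateTime TakenAt { get; set; }
}

public class StorageService
{
    private readonly IDataStore m_store;

    public StorageService()
    {
        // The only storage-specific code: pick the provider here.
        var store = new SqlCeDataStore("temps.sdf");
        store.AddType<Temperatures>();
        if (!store.StoreExists) store.CreateStore();
        m_store = store;
    }

    public void Save(Temperatures reading)
    {
        // The app just hands data to the service; it neither knows
        // nor cares what storage sits underneath.
        m_store.Insert(reading);
    }
}
```

From the app’s perspective, calling `Save()` is the whole story; everything storage-specific lives inside the service’s constructor.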
This is what the code to store those temperatures from Wind River Linux running Mono to an Oracle database would look like:
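Roughly this — same entity, same insert call, and only the provider construction changes. I’m assuming an Oracle provider class here, and the connection details are placeholders:

```csharp
// Provider class name and connection string are illustrative.
var store = new OracleDataStore("server=orahost;user id=m2m;password=secret");
store.AddType<Temperatures>();
if (!store.StoreExists) store.CreateStore();

// Identical to the SQL Compact case from here on.
store.Insert(new Temperatures
{
    ApartmentID = 12,
    ReadingCelsius = 21.5,
    TakenAt = DateTime.Now
});
```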
And this is what the code to store those temperatures from Windows Embedded Standard to the Dream Factory DSP cloud would look like:
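Again, only the provider line is different. The Dream Factory constructor arguments shown here (DSP URL, application name, credentials) are placeholders, not a real endpoint:

```csharp
// Constructor arguments are placeholders for illustration.
var store = new DreamFactoryDataStore(
    "https://my-dsp.example.com",  // DSP address
    "MyM2MApp",                    // application name
    "user@example.com",            // credentials
    "password");
store.AddType<Temperatures>();
if (!store.StoreExists) store.CreateStore();

// And once more, the insert itself is unchanged.
store.Insert(new Temperatures
{
    ApartmentID = 12,
    ReadingCelsius = 21.5,
    TakenAt = DateTime.Now
});
```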
Notice any pattern? The key here is to decouple your code. Make storage a ubiquitous service that you configure once, and you can spend your time writing code that’s interesting and actually solves your business problem.