Raw Flash Access from a CF app?

A new customer came to me late last week with an interesting problem.  They have hundreds of Motorola/Symbol MC70 barcode scanners in the field and occasionally the flash memory on the devices gets corrupted.

The current “solution” these guys are using involves removing the flash chip from the board, reprogramming it in a separate device, re-balling the BGA pins, and re-soldering it back onto the board.  That explains why they desperately want an application that can do the repair in software.

They know the range where the corruption occurs and wanted an application that would allow a user to read, update and rewrite the corrupted bytes in flash.  They had talked with five separate developers before they found me, and all five had agreed that it was impossible, so naturally I took that as a challenge.

First, there are lots of things to know about how flash access works.  Most importantly, it’s not like RAM.  You can’t just map it and then go about your merry way doing 32-bit reads and writes.  You can read it that way, sure, but writing is a whole new game.  Flash is broken up into “blocks”, which aren’t even always the same size (in the case of the MC70, the first 4 blocks are 64k long and the rest are 256k long), and writes must follow this general procedure:

  1. Read the *entire* block that contains the byte you want to change into RAM
  2. Change the flash to Lock mode (a flash register write)
  3. Unlock the block of flash (another register write)
  4. Change the flash to erase mode (register write)
  5. Erase the *entire* block of flash (which writes all FF’s to it)
  6. Change the flash to write mode (register write)
  7. Update the RAM buffer with your changes
  8. Write the *entire* block back to flash
  9. Tell the flash to commit (register write)
  10. Wait for the flash to finish (register read)
  11. Put the flash back into read mode (register write)

Oh, and if you get any of this wrong, you’ve made yourself an expensive brick.  The only solution at that point is the de-soldering and reprogramming route, and I don’t have that kind of hardware in my office.
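To make that concrete, here’s a rough C# sketch of the sequence.  To be clear: IFlashDevice and every command code below are hypothetical stand-ins (the real access is memory-mapped and the command values are part-specific), so treat this as the shape of the procedure, not the MC70’s actual interface.

// hypothetical device abstraction -- the real app talks to mapped flash
// registers directly; these members just stand in for that access
interface IFlashDevice
{
    uint GetBlockStart(uint address);               // start of the containing block
    uint GetBlockLength(uint address);              // 64k or 256k on the MC70
    byte[] ReadBlock(uint blockStart, uint length);
    void WriteBlock(uint blockStart, byte[] data);
    void WriteRegister(uint blockStart, ushort command);
    ushort ReadRegister(uint blockStart);
}

class FlashBlockWriter
{
    // illustrative command codes only -- the real values depend on the part
    const ushort LockSetup    = 0x60;
    const ushort Unlock       = 0xD0;
    const ushort EraseSetup   = 0x20;
    const ushort EraseConfirm = 0xD0;
    const ushort Program      = 0x40;
    const ushort ReadStatus   = 0x70;
    const ushort ReadArray    = 0xFF;
    const ushort StatusReady  = 0x80;

    public void WriteByte(IFlashDevice flash, uint address, byte newValue)
    {
        var blockStart = flash.GetBlockStart(address);
        var blockLength = flash.GetBlockLength(address);

        // 1: read the *entire* containing block into RAM
        var buffer = flash.ReadBlock(blockStart, blockLength);

        // 2-3: switch to lock mode and unlock the block
        flash.WriteRegister(blockStart, LockSetup);
        flash.WriteRegister(blockStart, Unlock);

        // 4-5: switch to erase mode and erase the *entire* block (all 0xFF)
        flash.WriteRegister(blockStart, EraseSetup);
        flash.WriteRegister(blockStart, EraseConfirm);

        // 6-8: switch to write mode, update the RAM copy, write the block back
        flash.WriteRegister(blockStart, Program);
        buffer[address - blockStart] = newValue;
        flash.WriteBlock(blockStart, buffer);

        // 9-10: commit, then poll the status register until the part is ready
        flash.WriteRegister(blockStart, ReadStatus);
        while ((flash.ReadRegister(blockStart) & StatusReady) == 0) { }

        // 11: put the flash back into read (array) mode
        flash.WriteRegister(blockStart, ReadArray);
    }
}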

So I started writing the app Monday morning, using C# since I had to create a UI for the editor, and on Wednesday morning this is what I delivered:

[Screenshot: FlashEdit]

So, in just two days I did what was “impossible”.  I not only wrote all of the flash access code, I also wrote a hex editor control and an app UI to make use of the flash library.

Sending large files to a client in Padarn

A customer recently contacted me with a Padarn Web Server problem.  They wanted users to be able to download data files from their device to a client browser, but they were running out of device memory when the file got large (38MB in their case, though the threshold is completely hardware and environment dependent).

What you have to understand is that Padarn caches your Page’s response in memory until you either call Flush explicitly, or until the Page has fully rendered (in Padarn’s view), at which point it calls Flush for you.

For a large file, holding the entire thing in memory before pushing it across the wire is a recipe for failure, so the strategy is to read the local file in packets, then send those packets to the client using Response.BinaryWrite.

Now BinaryWrite simply puts the data into the memory cache, so you have to tell Padarn to actually send it across the wire by calling Flush.  The catch is that on the first Flush, Padarn has to assemble and send the payload header to the client, and part of that header is the total length of the content.  If you haven’t explicitly set the content length, Padarn has no idea how big it might be, so it defaults to the length of what it has in the buffer and sends the header.  The browser then decodes that header and expects only the length of that first packet.

The solution is to set the Response.ContentLength property before you send any data.  Padarn will use your override value when it sends it to the client, and all will be well.

Here’s a quick example of a page that I used to send a 1.1GB media file from a page request with no problem.  You might want to tune the packet size based on device memory and speed, but 64k worked pretty well for my test.

using System.Diagnostics;
using System.IO;
// Page and HtmlTextWriter come from the Padarn assembly's namespace

class LargeFileDownload : Page
{
    protected override void Render(HtmlTextWriter writer)
    {
        var largeFilePath = @"D:\Media\Movies\short_circuit.mp4";

        var packetSize = 0x10000; // 64k packets

        using(var stream = new FileStream(
            largeFilePath,
            FileMode.Open,
            FileAccess.Read))
        {
            var toSend = stream.Length;
            var packet = new byte[packetSize];

            // set the response content length
            Response.ContentLength = toSend;
            // set the content type as a binary stream
            Response.ContentType = "application/octet-stream";
            // set the filename so the browser doesn't try to render it
            Response.AppendHeader("Content-Disposition",
                string.Format("attachment; filename={0}",
                Path.GetFileName(largeFilePath)));

            // send the content in packets
            while (toSend > 0)
            {
                var actual = stream.Read(packet, 0, packetSize);

                if (!Response.IsClientConnected)
                {
                    Debug.WriteLine(
                        "Client disconnected.  Aborting file send.");
                    return;
                }
                else
                {
                    Debug.WriteLine(
                        string.Format("Sending {0} bytes. {1} to go.",
                        actual, toSend));
                }

                if (actual == packetSize)
                {
                    Response.BinaryWrite(packet);
                }
                else
                {
                    // partial packet, so crop before sending
                    var last = new byte[actual];
                    Array.Copy(packet, last, actual);
                    Response.BinaryWrite(last);
                }

                // send the packet across the wire
                Response.Flush();

                toSend -= actual;
            }
        }
    }
}

OpenNETCF ORM: Dynamic Entities

There’s no doubt in my mind that code libraries and frameworks are fantastic for saving time and work; after all, I want to spend my time solving my business problem, not writing infrastructure.  Nowhere is this more true than with Object-Relational Mapping, or ORM, libraries.

Broadly speaking, if you’re still writing application code that requires that you also write SQL, you’re wasting your time.  Wasting time thinking about SQL syntax.  Wasting time writing it.  Wasting time testing, debugging and maintaining it.

I believe this so strongly, and was so dissatisfied with the existing ORM offerings, that I wrote my own ORM.  It wasn’t a trivial task, but I have something that does exactly what I need, on the platforms I need, and does it at a speed that I consider more than acceptable.  Occasionally, though, I hit a data storage requirement that ORMs, even my own, aren’t so good at.

ORM usage is usually viewed as one of two approaches: code first or data first.  With the code-first approach, a developer defines the storage entity classes and the ORM generates a backing database from them.  With data first, the developer feeds an existing database into the ORM or an ORM tool, and it generates the entity classes.
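For reference, a code-first entity for the OpenNETCF ORM looks something like the sketch below.  The Entity and Field attribute names and the AddType registration call are from memory, so treat them as illustrative; the FieldAttribute properties match the ones used later in this post.

// a code-first sketch: attributes on a plain class drive table generation
[Entity(KeyScheme.Identity)]
public class Person
{
    [Field(IsPrimaryKey = true)]
    public int ID { get; set; }

    [Field]
    public string FirstName { get; set; }

    [Field(AllowsNulls = false)]
    public string LastName { get; set; }
}

// registering the type tells the store to create the backing table
// store.AddType<Person>();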

But this doesn’t cover all scenarios – which is something no ORM that I’m aware of seems to acknowledge.  Consider the following use case (and this is a real-world use case that I had to design for, not some mental exercise).

I have an application that allows users to define storage for data at run time in a completely ad-hoc manner.  They get to choose which data items they want to save, but even the set of available data items is dynamic, known only at run time.

So we need to store what is effectively a flat table of data with an unknown set of columns.  The column names and data types are unknown until the application is running on the user’s machine.

So the entities are neither data first nor code first.  I’ve not thought of a catchy term for this type of scenario, so for now I’ll just call it “user first”, since the user has the idea of what they want to store and we have to accommodate that.  This is why I created support in the OpenNETCF ORM for the DynamicEntity.

Let’s assume that the user decides they want to store a FirstName and LastName for a person.  For convenience, we also want to store a generated ID for the Person entities that get stored.

At run time, we generate some FieldAttributes that define the Person:

var fieldList = new List<FieldAttribute>();
fieldList.Add(new FieldAttribute()
{
    FieldName = "ID",
    IsPrimaryKey = true,
    DataType = System.Data.DbType.Int32
});

fieldList.Add(new FieldAttribute()
{
    FieldName = "FirstName",
    DataType = System.Data.DbType.String
});

fieldList.Add(new FieldAttribute()
{
    FieldName = "LastName",
    DataType = System.Data.DbType.String,
    AllowsNulls = false
});

And then we create and register a DynamicEntityDefinition with the DataStore:

var definition = new DynamicEntityDefinition(
                              "Person", 
                              fieldList, 
                              KeyScheme.Identity);
 
store.RegisterDynamicEntity(definition);

Now, any time we want to store an entity instance, we simply create a DynamicEntity and pass that to the Insert method, just like any other Entity instance, and the ORM handles storage for us.

var entity = new DynamicEntity("Person");
entity.Fields["FirstName"] = "John";
entity.Fields["LastName"] = "Doe";
store.Insert(entity);
 
entity = new DynamicEntity("Person");
entity.Fields["FirstName"] = "Jim";
entity.Fields["LastName"] = "Smith";
store.Insert(entity);

The rest of the CRUD operations are similar; we simply have to name the definition type where appropriate.  For example, retrieving looks like this:

var people = store.Select("Person");

Updating like this:

var person = people.First();
person.Fields["FirstName"] = "Joe";
person.Fields["LastName"] = "Satriani";
store.Update(person);

And deleting like this:

store.Delete("Person", people.First().Fields["ID"]);

We’re no longer bound by the either-or box of traditional ORM thinking, and that lets us offer users some really interesting and powerful capabilities that used to be relegated to those willing to abandon an ORM and hand-roll the logic.

On Software Development: Moving from statics or instances to a DI container

I’ve recently started refactoring a customer’s code base for a working application.  They recognize the need to make their code more extensible and maintainable so I’m helping to massage the existing code into something that they will be able to continue shipping and upgrading for years to come without ending up backed into a corner.

One of my first suggestions was to start eliminating the abundance of static variables in the code base.  In this case, static classes and methods abound, and it looks like they were used as a quick-and-dirty mechanism to provide singleton behavior.  Now I’m not going to go into depth on why an actual singleton might have been better, or the pitfalls of all of these statics.  Write-ups on that kind of thing abound in books and online.

Instead, let’s look at what it means to migrate from a static, an instance or a singleton over to using a DI container, specifically OpenNETCF’s IoC framework.

First, let’s look at a “service” class that exposes a single integer and how we might consume it.

class MyStaticService
{
    public static int MyValue = 1;
}

And how we’d get a value from it:

var staticValue = MyStaticService.MyValue;

Simple enough.  Some of the downsides here are:

  • There’s no way to protect the field value from unwanted changes
  • To use the value, I have to have a reference to the assembly containing the class
  • It’s really hard to mock and cannot be moved into an interface

Now let’s move that from a static to an instance field in a constructed class:

class MyInstanceService
{
    public MyInstanceService()
    {
        MyValue = 1;
    }

    public int MyValue { get; set; }
}

Now we have to create the class instance and later retrieve the value.

var service = new MyInstanceService();

// and at a later point....
var instanceValue = service.MyValue;

We’ve got some benefit from doing this.  We can now control access to the underlying value, making the setter protected or private, and we’re able to do bounds checking, etc.  All good things.  Still, there are downsides:

  • I have to keep track of the instance I created, passing it between consumers or maintaining a reachable reference
  • I have no protection from multiple copies being created
  • The consumer must have a reference to the assembly containing the class (making run-time plug-ins very hard)

Well let’s see what a Singleton pattern buys us:

class MySingletonService
{
    private static MySingletonService m_instance;

    private MySingletonService()
    {
        MyValue = 1;
    }

    public static MySingletonService Instance
    {
        get
        {
            if (m_instance == null)
            {
                m_instance = new MySingletonService();
            }
            return m_instance;
        }
    }

    public int MyValue { get; set; }
}

And now the consumer code:

var singletonValue = MySingletonService.Instance.MyValue;

That looks nice from a consumer perspective.  Very clean.  I’m not overly thrilled about having the Instance accessor property, but it’s not all that painful.  Still, there are drawbacks:

  • The consumer must have a reference to the assembly containing the class (making run-time plug-ins very hard)
  • If I want to mock this or swap implementations, I’ll have to go all over my code base replacing the calls (or implement a factory).

How would all of this look with a DI container?

interface IService
{
    int MyValue { get; }
}

class MyDIService : IService
{
    public MyDIService()
    {
        MyValue = 1;
    }

    public int MyValue { get; set; }
}

Note that the class is interface-based and we register the instance with the DI container by *interface* type.  This allows us to pull it back out of the container later by that interface type.  The consumer doesn’t need to know anything about the actual implementation.

// Note that the Services collection holds (conceptually) singletons.  Only one
// instance per registered type is allowed.  If you need multiple instances, use
// the Items collection, which requires a unique string identifier for each instance.
RootWorkItem.Services.AddNew<MyDIService, IService>();

// and at a later point....
var diValue = RootWorkItem.Services.Get<IService>().MyValue;

Mocks, implementation changes based on environment (like different hardware), and testing become very easy.  Plug-ins and run-time feature additions based on configuration or license level are also simplified.
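For example, swapping in a test double is just a different registration – consumers that resolve IService never know the difference.  (MockService here is a hypothetical stand-in, not part of the IoC framework.)

// a hypothetical test double implementing the same service interface
class MockService : IService
{
    public int MyValue
    {
        get { return 42; }    // canned value for testing
    }
}

// the only line that changes: register the mock instead of the real service
RootWorkItem.Services.AddNew<MockService, IService>();

// consuming code is untouched; it still resolves by interface
var mockedValue = RootWorkItem.Services.Get<IService>().MyValue;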

ORM: Transactions are now supported

I’ve just checked in new code changes and rolled a full release of the OpenNETCF ORM.  The latest changes add transaction support.  This release also adds a load of features since the last one (way back in February), most notably full SQLite support on all of the following platforms: Windows Desktop, Windows CE, Windows Phone and Mono for Android.

ORM Update: SQLite for Compact Framework now supported

The OpenNETCF ORM has supported SQLite for a while now – ever since I needed an ORM for Android – but somehow I’d overlooked adding SQLite support for the Compact Framework.  That oversight has been addressed, and support is now in the latest change set (99274).  I’ve not yet rolled it into the Release since it doesn’t support Dynamic Entities yet, but that’s on the way.