OpenNETCF.Barcode library published

I’ve published the code for the OpenNETCF.Barcode library I’ve been working on. It is certainly not done – I wouldn’t use it in a shipping product at this point because the success rate is still pretty low, so you’ll end up with nothing but customer support requests.


I have some other work for actual customers to do, so I’m not going to be able to dedicate a lot of time to the project over the next couple of weeks. I decided to at least publish what I have now for those who may be interested, and to open it up for development by anyone else who may wish to contribute.

Recognizing Barcodes: Take 4

Earlier blogs on the barcode library can be found here.


The next task I set for myself for the OpenNETCF.Barcode library was detecting the bounds of a barcode. My initial thought on how to achieve this was to use a Fast Fourier Transform, or FFT, on the horizontal luminance lines I was already extracting. The general idea would be that I’d take an initial line across the middle of the image and make the assumption that this line crosses the barcode. I’d then run an FFT on this data and look at the power spectrum (basically looking at which frequencies are strongest).


The actual frequencies returned are not relevant. My idea was that I could then “move” up and down from that line, say at 10-pixel spacings, and do an FFT on each of these lines. If those FFTs yield similar spectra, then we’re still on the barcode. If the spectrum changes dramatically, we’ve left the barcode.
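To make the idea concrete, here’s a rough sketch of that spectrum-comparison test – in Python rather than the library’s C#, with a naive DFT standing in for the SDF’s FFT class, and made-up names (`power_spectrum`, `spectra_similar`) that don’t exist in the library:

```python
import cmath

def power_spectrum(samples):
    """Naive DFT power spectrum (a stand-in for the SDF's FFT class).
    The DC term (k=0) is skipped so overall brightness doesn't dominate."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) ** 2
            for k in range(1, n // 2)]

def spectra_similar(a, b, cutoff=0.5):
    """Crude similarity test: cosine of the angle between two spectra."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    if na < 1e-6 or nb < 1e-6:   # a flat line has no spectrum to compare
        return False
    return dot / (na * nb) > cutoff

# A fake "barcode" row (a square wave) and a "background" row (flat paper).
barcode_row = [255 if (i // 4) % 2 else 0 for i in range(64)]
background_row = [200] * 64

on_barcode = spectra_similar(power_spectrum(barcode_row),
                             power_spectrum(barcode_row))
off_barcode = spectra_similar(power_spectrum(barcode_row),
                              power_spectrum(background_row))
print(on_barcode, off_barcode)   # similar spectrum on the code, not off it
```

The square-wave row produces strong peaks at the bar frequency and its harmonics, while the flat row produces essentially nothing, so the similarity test cleanly separates the two.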


Fortunately I’d done an FFT algorithm port for the SDF ages ago, though this is the first real-world use of the class. I had to add a bit of code to generate a power spectrum, but once that was done I was a bit surprised to see how well the code worked. I updated the test UI to display both the current Y position used for decoding (in green) and a red box for the “detected” bounds of the barcode, then ran it against several new images I’d taken.





As you can see, the bounds detection is pretty good.  Next up will be figuring out how to actually use this information.

Recognizing Barcodes: Take 3

Last week I spent some time working on code to recognize an EAN13 barcode from a photograph.  You can read the earlier blogs on my work here and here.


After improving the algorithms for determining whether a particular pixel was part of a bar or a space, I was still having trouble with decoding one of my barcode pictures.  Considering I had only tried two barcodes, I’d say that a 50% failure rate isn’t very good.  So the question now was “how do I improve recognition?” 


Stepping through the code, I saw where the failure was occurring.  My code was determining a “bit width” based on the EAN13’s start delimiter of bar-space-bar, then stepping across the image at that interval.  I had originally tried to add some “intelligence” to the code to have the sample point shift left or right on the pixel line if it was close to a state transition.  While it was an interesting idea and improved decoding of the first barcode, it still wasn’t doing much on this second barcode. 


It then occurred to me that I was being a bit of an idiot. EAN13 is a fixed-width code, so it always contains exactly the same number of bits. If I just found the start and the end, I could divide the total width by the number of bits in the code to get a better bit width. Even better, rather than sampling one pixel per bit, why not take the average luminance across the entire bit width and check that *average* against the threshold? I updated the code and sure enough, I was able to decode the second barcode. Huzzah, success!
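That fixed-width approach can be sketched like this – illustrative Python, not the library’s C# code; `extract_bits` is a made-up name, though 95 is the genuine EAN13 module count:

```python
EAN13_MODULES = 95  # an EAN13 symbol is always exactly 95 modules (bits) wide

def extract_bits(luminance, start, end, threshold=128):
    """Slice the span [start, end) into 95 equal cells and average each cell's
    luminance, instead of sampling a single pixel per bit."""
    width = (end - start) / EAN13_MODULES
    bits = []
    for i in range(EAN13_MODULES):
        lo = start + int(i * width)
        hi = max(lo + 1, start + int((i + 1) * width))
        cell = luminance[lo:hi]
        avg = sum(cell) / len(cell)
        bits.append(1 if avg < threshold else 0)  # dark (low luminance) -> bar
    return bits

# Synthetic row: 95 modules at 4 pixels each, alternating space/bar pattern.
pattern = [i % 2 for i in range(95)]
row = []
for bit in pattern:
    row.extend([0 if bit else 255] * 4)          # bars dark, spaces bright
decoded = extract_bits(row, 0, len(row))
print(decoded == pattern)
```

Averaging the whole cell makes a noisy pixel near a bar/space transition far less likely to flip the bit than single-pixel sampling does.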



So I’d say that now I have at least a rudimentary library for recognizing and decoding an EAN13 barcode, but it still lacks one critical piece (and several things that would be nice-to-haves).  When you want to recognize a barcode, you have to provide a Y position across which the luminance line is extracted and you have to provide a “threshold” value which the library uses to determine the value of a bit.  A real consumer of this library isn’t going to want to have to provide this information.


Sure, for the Y position we can make an assumption that a horizontal line across the center of the image will be on the barcode (and that’s what I’m doing now) but are there potentially better places to sample?  Would we get better results if we did some sort of vertical averaging?  And what about the threshold?   The two barcodes I was working with had fairly different threshold values that successfully returned a result. 


One of the next steps would be to add code to find the barcode bounds. I’m already finding the left and right, which is simple, but finding the top and bottom would be a bit more of a challenge.  Also I’d need a way to efficiently determine a “best” threshold.  Those will be the tasks for this week (I’ve already done some promising work on bounds finding, which I’ll report in the next blog entry).

Recognizing Barcodes: Take 2

Earlier in the week I was working on a barcode library that successfully recognized a barcode from a picture.  The code was pretty rough, but the first cut worked for the first barcode picture I took so I was pretty satisfied.


I then took another picture from a different book, and of course it failed to decode.  So that meant revisiting the code and the algorithms I was using to try to improve the recognition rate.


Here’s the new barcode:



When I ran it through the luminance algorithm, this is what I got back:



So how do we improve the “recognizability” of this thing? Well, the first thing I see is that the useful range – that is, the range of threshold values that might yield something meaningful – is pretty small. The threshold is essentially the Y value on the above chart. If you draw horizontal lines for each Y value across the chart, you see that almost the entire bottom half is going to be “black”, meaning low luminance and essentially worthless data. So if our overall range is 255 (each point is a byte value), then instead of using 0-255 for our useful data, just eyeballing it, we’re probably only using about 100-255.


My next step was to “scale” all of the values: basically subtract off the minimum, then multiply each value by a factor that spreads the actual range (the difference between the minimum and maximum) across the full 0-255 set. Graphically the newly ranged data looks like this:
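In code, that rescaling amounts to a linear contrast stretch. Here’s a quick illustrative sketch in Python (not the library’s actual code; `stretch` is a made-up name):

```python
def stretch(values, out_min=0, out_max=255):
    """Linearly remap values so the observed min/max span out_min..out_max."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [out_min] * len(values)   # flat input: nothing to stretch
    scale = (out_max - out_min) / (hi - lo)
    return [out_min + int(round((v - lo) * scale)) for v in values]

# A row whose useful data only occupies roughly 100-255 of the byte range:
row = [100, 120, 180, 255, 130]
print(stretch(row))   # -> [0, 33, 132, 255, 49]
```

The shape of the data is unchanged; the peaks and valleys are simply spread across the whole 0-255 range, which gives the threshold far more room to work with.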



This is much better, but it’s still got a whole lot of ugly “noise” at the transitions between bars and spaces.  These are the long but thin peaks and valleys you see at the edges of the apparent data bits.  I decided that I’d decrease that effect by doing a “nearest neighbor” average across the data.  What this means is that instead of using the raw luminance value of each pixel, I’d use the average of each pixel along with the ones just to the left and right of it.  Running that algorithm yields a luminance graph that looks like this:



It looks much, much better – at least visually.  I ran it through the existing recognizer algorithm, and while it got further into it – actually pulling a few digits out – it still failed to recognize.  It seems that I have a better filtering algorithm, but my decoding algorithm is still a bit lacking.  We’ll look at what I did to address that in the next article.
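The “nearest neighbor” averaging described above amounts to a three-point moving average. A quick sketch (illustrative Python, not the library’s code; `smooth` is a made-up name):

```python
def smooth(values):
    """Replace each sample with the mean of itself and its two neighbors."""
    out = []
    for i, v in enumerate(values):
        left = values[i - 1] if i > 0 else v              # clamp at the edges
        right = values[i + 1] if i < len(values) - 1 else v
        out.append((left + v + right) // 3)
    return out

# A one-pixel spike of noise between two real bars gets knocked down hard:
noisy = [0, 0, 255, 0, 0, 255, 255, 255]
print(smooth(noisy))   # -> [0, 85, 85, 85, 85, 170, 255, 255]
```

The isolated one-pixel spike drops from 255 to 85, while the genuine three-pixel-wide run of 255s keeps its full value in the middle – exactly the kind of selective damping the transitions needed.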

Recognizing Barcodes

Earlier this week Nick Randolph asked me if I knew of any .NET barcode recognition software.  Not an SDK for an existing laser scanner, but something that might take a picture of a barcode and decode it. 


Early on in my development career I dealt with barcode scanners, and I’ve always found the technology to be fairly interesting. I remembered some work Casey Chesnut had done quite some time ago on barcode recognition that I had always wanted to spend a little time building off of. Of course Casey didn’t provide any code, but that’s fine – I’d rather attack this as a pure mental exercise for myself.


I actually decided to attack the problem a little differently than Casey did. I knew that decoding a “pure”, clean barcode would be easy and not a realistic scenario anyway, so I didn’t even bother working on code to do it. I knew I was going to be able to do the decoding algorithm part – that’s simply turning bits into text, and anyone can do that given the algorithm. The challenge, and the fun, is in extracting the binary data bits from an “analog” picture. I’m using “digital” and “analog” here in the sense that an ideal barcode bit is either black or white, on or off, while a picture (even a digital picture) is not going to be so clear.


So step one was to take a picture of a barcode.  I grabbed a book off the shelf and used my phone to snap a picture (bonus points if you know what book it is):



As you can see, it’s not an “optimal” image – it’s dark and there are a lot of variations in color. This seemed way more realistic.


Next I needed to get rid of the “color” and I decided the best path would be to simply “draw” a logical line horizontally through the center of the image assuming that it would cross the barcode. 



I extracted every pixel across this line and determined the luminance of each, essentially turning the line of pixels into greyscale.
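For the luminance conversion itself, one common approach (and an assumption on my part – I’m not claiming the library uses these exact weights) is the standard Rec. 601 weighting of the color channels:

```python
def luminance(r, g, b):
    """Rec. 601 luma weights: green contributes most, blue least.
    (An assumed formula; the library may weight channels differently.)"""
    return int(round(0.299 * r + 0.587 * g + 0.114 * b))

# The scan line becomes a list of 0-255 luminance values, one per pixel.
line = [(255, 255, 255), (0, 0, 0), (200, 30, 60)]
print([luminance(r, g, b) for r, g, b in line])   # -> [255, 0, 84]
```

The weights reflect how bright each channel appears to the eye, which matters here because a red bar on a white label should still read as “dark”.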



Next I needed to turn this analog grey data into binary, which introduced a variable I’ll call the “threshold.” Luminance values above the threshold (brighter) would be a zero, below the threshold would be a one.  This means that you can alter what the software “sees” for bars by simply adjusting that threshold.
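The thresholding step itself is a one-liner; the interesting part is how much the result changes as the threshold moves. A sketch in Python (`to_binary` is a made-up name, not a library API):

```python
def to_binary(luminance_line, threshold):
    """Above the threshold (bright) -> 0 (space); at/below -> 1 (bar)."""
    return [0 if v > threshold else 1 for v in luminance_line]

# A murky pixel at 140 flips depending on where the threshold sits:
line = [250, 240, 140, 35, 245, 30]
print(to_binary(line, 128))   # the 140 pixel reads as a space
print(to_binary(line, 200))   # the 140 pixel now reads as a bar
```

That middle pixel is exactly why a single fixed threshold worked on one photo and failed on the next.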


I put together a library and a sample application that showed all of this process at once – the barcode, a chart of the luminance and a chart of the “binary” representation of that luminance based on a given threshold.


The next step was to try to turn this “binary” data into actual bits that I could decode.  My first attempt followed this reasoning: 
1. An EAN13 barcode (which is what I’m working with) starts with three “guard” or delimiter bits: 1-0-1.
2. I start at the left edge of the image and traverse until I hit the first “on” pixel, recording that as the start position.
3. I traverse until the next “off” pixel. The distance between the on and the off is the width of one bit.
4. I “back up” 1/2 of a bit width (to give me the best odds of hitting the actual bit value) and then step forward by bit widths, checking the pixel value at each step.
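The steps above might look something like this in Python – an illustrative sketch with made-up names (`find_bit_width`, `sample_bits`), not the actual library code:

```python
def find_bit_width(bits):
    """Steps 2-3: find the first 'on' pixel, then the next 'off' pixel.
    The gap is the width (in pixels) of the first guard bar, i.e. one bit."""
    start = bits.index(1)                 # first "on" pixel
    off = start
    while off < len(bits) and bits[off] == 1:
        off += 1
    return start, off - start

def sample_bits(bits, count):
    """Step 4: back up half a bit width from the first transition so each
    sample lands mid-bit, then step forward one bit width at a time."""
    start, width = find_bit_width(bits)
    samples = []
    pos = start + width // 2              # middle of the first guard bit
    for _ in range(count):
        samples.append(bits[pos])
        pos += width
    return samples

# Synthetic pixel line: quiet zone, then a 1-0-1 guard plus data, 3 px per bit.
pixel_bits = [0] * 9
for bit in [1, 0, 1, 1, 1, 0, 0, 1]:
    pixel_bits += [bit] * 3
print(sample_bits(pixel_bits, 8))   # -> [1, 0, 1, 1, 1, 0, 0, 1]
```

On this clean synthetic line the samples land dead-center in every bit; on a real photo the bit width drifts, which is exactly where this approach started to fall apart.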


It took a little tweaking of the logic, in which I’d move the current and subsequent sample points away from any nearby “edge” to help get me in the middle of a data point.  With only this minor logic improvement I was able to decode the picture I started with.  I had a sample app that showed each of these steps:
– The loaded barcode image
– The luminance across the barcode at the mid point as a graph
– A graph of the binary representation of that luminance based on a threshold
– The decoding of that binary set



All that in just about 1 day of work.


Of course the second barcode picture I took and tried failed. It worked through all of the steps until the decode, where it always fell apart, unable to correctly identify any digit but the first. The failure, I think, is due to the fact that the book cover is glossy, so I have a lot more “noise” in the luminance graph. That simply means I need to improve my image recognition algorithm, which will be the focus of the next blog entry on this library.


Now you might be asking, “Well, where’s the code for this?” Patience. It’s in CodePlex right now, it’s just not yet published. I want to get it to a state that’s a little less ugly before I turn it loose. When can you expect it? Well, I can’t say for certain, but CodePlex requires publication within 30 days, so that gives you a “latest possible release” date – though I hope to publish earlier.