Refresh of MTConnect Client SDK

Last week I had a call with a couple of guys looking to implement an MTConnect Client in their system. I decided to take that as an opportunity to revisit some MTConnect SDK code I wrote a few years back and apply at least a little modernity to the code base.

So what I did was:

The source code on Github contains a sample Client application that lets you quickly view, graph and store MTConnect data.  This is what it looks like when graphing X and Y axis positions from the public, sample MTConnect Agent.


Responsible M2M

About a year ago, maybe two years now, we had a large manufacturing customer that we were working with to implement MTConnect on their production floor. Basically they had 20 five-axis machine tools creating aircraft parts, and they wanted to be able to get data off of those machines and “put it in The Cloud.” Well, first off, I’ve talked about how much I dislike the term “The Cloud,” so we had to clarify that. Turns out they meant “in a SQL Server database on a local server.”

MTConnect is a machine tool (hence the “MT” part) standard that we leveraged and heavily extended for use in our Solution Family products. Painting it with a broad brush, what it means is that all data from each machine tool – axis positions, part program information, door switches, coolant temperature, run hours, basically the kitchen sink – can be made available through a REST service running either on the machine tool or on a device connected to it.
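To make that concrete, here's a minimal sketch of what consuming that REST service looks like (Python purely for illustration; the SDK itself is .NET). In a live setup you'd issue a GET against the agent's `current` endpoint; here we just parse a simplified, non-namespaced sample document — real responses carry the MTConnectStreams XML namespace and are far larger.

```python
# Sketch: pull values out of an MTConnect "current" response.
# The SAMPLE document below is a simplified stand-in for a real
# (namespaced) MTConnectStreams response.
import xml.etree.ElementTree as ET

SAMPLE = """<MTConnectStreams>
  <Streams>
    <DeviceStream name="Mill-1">
      <ComponentStream component="Linear" name="X">
        <Samples>
          <Position dataItemId="x_pos" timestamp="2011-11-01T12:00:00Z">100.25</Position>
        </Samples>
      </ComponentStream>
    </DeviceStream>
  </Streams>
</MTConnectStreams>"""

def current_values(xml_text):
    """Return {dataItemId: value} for every reported data item."""
    root = ET.fromstring(xml_text)
    return {el.get("dataItemId"): el.text
            for el in root.iter() if el.get("dataItemId")}

print(current_values(SAMPLE))  # {'x_pos': '100.25'}
```

The same shape applies whether the data item is an axis position, a door switch, or a coolant temperature: everything comes back as timestamped data items keyed by ID.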

They wanted to take that data and put it into SQL Server so their engineering group could run analytics on the data. Maybe they wanted to look at part times, energy consumption, tool path length, whatever. They actually weren’t fully sure what they wanted to do with the data, they just knew that “in The Cloud” is where everyone said it should be, so the commandment came down that that’s where the data would go.

Ugh. The conversation went something like this.

“So you want all of the data from each machine tool to go to the server?”

“Yes. Absolutely.”

“You know that there are 6 continually moving axes on those machines, right? And a constantly changing part program.”

“Of course. That’s the data we want.”

“You are aware that that’s *a lot* of data, right?”

“Yes. We want it.”

“You’re sure about this?”

“Yes, we’re sure. Send the data to The Cloud.”

So we set up a mesh of Solution Engines to publish *all* of the data from *all* of the machines to their local server. We turned on the shop floor. And roughly 20 seconds later the network crashed. This was a large, well built, very fast, hard-wired network. There was a lot of available bandwidth. But we were generating more than a lot of data, and the thing puked, and puked fast.

So what’s the lesson here? That you can always generate more data out at the edge of your system than the infrastructure is capable of carrying. If you’re implementing the system for yourself, trying to transfer all of the data is a problem, but if you’re implementing it for a customer, trying to transfer all of it is irresponsible. We did it in a closed system that was just for test, knowing what the result would be and that it would be non-critical (they simply turned off data broadcasting and everything went back to normal), but we had to show the customer the problem. They simply wouldn’t be told.

We need to do this thing, this M2M, IoT, Intelligent Device Systems or whatever you want to call it, responsibly. Responsible M2M means understanding the system. It means using Edge Analytics, or rules running out at the data collection nodes, to do data collection, aggregation and filtering. You cannot push all of the data into remote storage, no matter how badly you or your customer might think that’s what needs to happen.

But that’s fine. Most of the time you don’t need all of the data anyway, and if, somehow, you do there are still ways you can have your cake and eat it too.

Let’s look at a real-world example. Let’s say we have a fleet of municipal buses. These buses drive around all day long on fixed routes, picking up and dropping off people. These buses are nodes that can collect a lot of data. They have engine controller data on CAN or J1708. They have on-board peripherals like fare boxes, head signs and passenger counters. They have constantly changing positional data coming from GPS and/or dead-reckoning systems. They’re also moving, so they can’t be wired into a network.

Well, we could send all of that data to “The Cloud”, or at least try to, but not only would it likely cause network problems, think of the cost. Yes, if you’re AT&T, Verizon or one of the other mobile carriers, you’ve just hit pay dirt, but if you’re the municipality, the cost would be astronomical. Hello, $20 bus fares.

What’s the solution here? Well, first of all, there’s a load of data that’s near useless. The engine temperature, RPMs or oil pressure (or any of the thousands of other data points available from the engine controller) might fluctuate, but generally we don’t care about that data. We care about it only when it’s outside of a “normal” range. So we need Edge Analytics to be able to watch the local data, measure it, and react when certain conditions are met. This means we can’t just use a “dumb” device that grabs data from the controller and forwards it on. Instead we need an Intelligent Device – maybe an Intelligent Gateway (a device with a modem) – that is capable of running logic.
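As a sketch of what such an edge rule might look like (Python for illustration; the names `EdgeRule` and `on_breach` are hypothetical, not part of any product API), the key idea is that the device itself decides when a reading is interesting and only then does anything with it:

```python
# Sketch of an edge rule: watch a local reading and fire only when it
# leaves a configured "normal" band, not on every sample.
class EdgeRule:
    def __init__(self, low, high, on_breach):
        self.low, self.high = low, high
        self.on_breach = on_breach
        self.in_breach = False  # remember state so we don't re-alert every sample

    def sample(self, value):
        breach = not (self.low <= value <= self.high)
        if breach and not self.in_breach:
            self.on_breach(value)  # e.g. queue a notification for the uplink
        self.in_breach = breach

alerts = []
rule = EdgeRule(low=180, high=220, on_breach=alerts.append)
for temp in [200, 210, 235, 240, 205]:  # simulated engine temperature samples
    rule.sample(temp)
print(alerts)  # only the first out-of-band reading fires: [235]
```

Five samples come in, one event goes out. That is the whole bandwidth argument in miniature.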

Now when we’re out of the “normal” range, what do we do? Maybe we want to just store that data locally on the vehicle in a database and download it at the end of the shift when the vehicle returns to the barn. Maybe we want to send just a notification back to the maintenance team to let them know there’s a problem. Maybe we want to immediately send a capture of a specific set of data off to some enterprise storage system for further analysis so the maintenance team can order a repair part or send out a replacement vehicle. It depends on the scenario, and that scenario may need to change dynamically based on conditions or the maintenance team’s desires.

Positional data is also ever-changing, but do we need *all* of it? Maybe we can send it periodically and it can provide enough information to meet the data consumer’s needs. Maybe once a minute to update a web service allowing passengers to see where the bus is and how long it will be until it arrives at a particular spot. Or the device could match positional data against a known path and only send data when it’s off-route.
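A sketch of that kind of positional filtering, under the simplifying (and admittedly naive) assumptions that the route is just a handful of waypoints and distance is plain Euclidean rather than geodesic:

```python
# Sketch: forward a GPS fix only when it's time for a periodic update
# or the bus has drifted off its known route. Waypoints, period and
# tolerance are all illustrative values.
import math

ROUTE = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0)]  # illustrative route waypoints
PERIOD = 60.0      # seconds between routine position updates
TOLERANCE = 0.1    # max allowed distance from any waypoint

def off_route(pos):
    return all(math.dist(pos, wp) > TOLERANCE for wp in ROUTE)

def should_send(pos, now, last_sent):
    return (now - last_sent) >= PERIOD or off_route(pos)

print(should_send((0.0, 0.05), now=30, last_sent=0))  # on route, too soon: False
print(should_send((0.0, 0.05), now=90, last_sent=0))  # periodic update due: True
print(should_send((5.0, 5.0),  now=30, last_sent=0))  # off route: True
```

Either trigger, periodic or off-route, is cheap to evaluate on the device, and the consumer still gets every fix that actually matters.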

And remember, you’re in a moving vehicle with a network that may or may not be available at any given time. So the device has to be able to handle transient connectivity.
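A minimal store-and-forward sketch showing that behavior (the `send` callable is a stand-in for whatever uplink the device actually has, assumed here to raise `ConnectionError` when the link is down):

```python
# Sketch of store-and-forward for a transient network: readings queue
# locally and drain in order whenever the link comes back.
from collections import deque

class StoreAndForward:
    def __init__(self, send):
        self.send = send       # uplink callable; assumed to raise on failure
        self.backlog = deque()

    def publish(self, reading):
        self.backlog.append(reading)
        self.flush()

    def flush(self):
        while self.backlog:
            try:
                self.send(self.backlog[0])  # peek first; only drop once delivered
            except ConnectionError:
                return          # link is down; keep the backlog for later
            self.backlog.popleft()

delivered, link_up = [], False
def send(reading):
    if not link_up:
        raise ConnectionError
    delivered.append(reading)

uplink = StoreAndForward(send)
uplink.publish("fix-1"); uplink.publish("fix-2")  # network down: both queue locally
link_up = True
uplink.publish("fix-3")                           # link back: backlog drains in order
print(delivered)  # ['fix-1', 'fix-2', 'fix-3']
```

Nothing is lost while the vehicle is in a dead zone, and the consumer still sees readings in the order they were taken.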

The device also needs to be able to effect change itself. For a vehicle, maybe it puts the system into “limp mode” so the vehicle can get back to the barn instead of being towed. For a building, maybe it needs to be able to turn on a boiler.

The point here is that when you’re developing your Intelligent Systems you have to do it with thought. I’d say that it’s rare that you can get away with a simple data-forwarding device. You need a device that can:

– Run local Edge Analytics
– Store data locally
– Filter and aggregate data
– Run rules based on the data
– Function with a transient or missing network
– Effect change locally

Intelligent Systems are great, but they still need to be cost-effective and stable. They also should be extensible and maintainable. You owe it to yourself and your customer to do M2M responsibly.

Of course if you want help building a robust Intelligent System, we have both products and services to help you get there and would be happy to help. Just contact us.

MTConnect Updates

I’ve published new updates to our MTConnect projects on Codeplex. Both the MTConnect SDK and the VirtualAgent have changes; for detailed info, take a look at the change logs in the Source tabs on the projects. Here’s a high-level list of what I think the important additions are:

  • I’ve added a fully working reference implementation of an MTConnect Adapter for Okuma THINC controllers (full source is included).  If you have an Okuma machine with a THINC-supported controller on it, you now have a simple agent and adapter you can drop onto the machine to start publishing data immediately. I hope to find time to put together a reference implementation for Fanuc FOCAS controllers.  If you’re interested or in need of that support, let me know and we can discuss prioritizing it and the features you need.

  • I’ve added support for SHDR adapters.  At the [MC]2 conference in Cincinnati there was concern that once you selected an Agent technology (either ours or the reference MTConnect C++ Agent), you were locked into creating Adapters only for that Agent.  That is no longer the case.  Our Virtual Agent can now easily consume data from your existing C/C++ Adapters written against the reference Agent.  No changes are required in your existing Adapters at all.
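For readers who haven't seen it, SHDR is the simple pipe-delimited text format those C/C++ Adapters emit: a timestamp followed by one or more data item key/value pairs on each line. A toy parser (Python for illustration only; the Virtual Agent's actual SHDR support handles much more of the protocol, such as conditions and multi-field items) shows the basic shape:

```python
# Sketch of consuming SHDR adapter output. SHDR lines look like:
#   2011-11-01T12:00:00.0Z|Xact|100.25|Yact|50.0
# i.e. a timestamp, then alternating dataItem/value fields.
def parse_shdr(line):
    fields = line.strip().split("|")
    timestamp, rest = fields[0], fields[1:]
    # pair up the remaining fields as (dataItem, value)
    return timestamp, dict(zip(rest[::2], rest[1::2]))

ts, values = parse_shdr("2011-11-01T12:00:00.0Z|Xact|100.25|Yact|50.0")
print(ts)      # 2011-11-01T12:00:00.0Z
print(values)  # {'Xact': '100.25', 'Yact': '50.0'}
```

Because the format is this simple, the Agent side can accept data from any Adapter that speaks it, which is exactly why the lock-in concern goes away.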

If you have any questions on implementation, etc, feel free to contact me.  For more info on our entire line of MTConnect products, take a look at our web site.

MTConnect Library Updates

With the upcoming MTConnect conference next week, we’ve been extremely busy getting things ready for our talks as well as our booth on the show floor.  I think we got feature complete (at least as far as we want for the show) today, so I’ve merged all of the latest changes into the public trees for both the MTConnect Managed SDK and the OpenNETCF Virtual Agent.  I have an updated Machine Simulator that we’ll be using for the show, but it’s not quite ready for public consumption, so the release of that will have to wait until after we get back from Cincinnati.

MTConnect [MC]^2 Conference

[MC]2 MTConnect: Connecting Manufacturing Conference

Since its introduction to the manufacturing industry, the MTConnect standard has been revolutionizing the way manufacturing equipment and devices “talk” to each other on the shop floor. Anyone in the manufacturing industry can benefit from learning about this important standard. That’s why you must attend [MC]2 MTConnect: Connecting Manufacturing Conference, November 8-10, 2011, in Cincinnati, Ohio.

This conference will have something for everyone, from distributors to end users, to manufacturing technology builders, to software developers, to C-level executives, to professors, to students, and to anyone who just wants to really understand MTConnect. [MC]2 offers both business and technical tracks, hands-on technical workshops, panel discussions on the use and benefits of the standards, as well as a showcase of commercially available products utilizing the MTConnect standard.

Register for [MC]2 Today

Conference attendees will have the opportunity to learn from the experts to really understand how MTConnect, as Modern Machine Shop said, is enabling tremendous productivity gains in manufacturing.  Those attending will return with new knowledge and skills so they can engage in a deeper dialogue on manufacturing productivity, as well as a much better understanding on what it takes to compete in 21st century manufacturing. Register now! This is an event like no other.


[MC]2 MTConnect: Connecting Manufacturing Conference
November 8-10, 2011   |   Hyatt Regency – Cincinnati, Ohio
7901 Westpark Dr., McLean, VA 22102   |   703-893-2900


MTConnect Client SDK Refresh


I’ve again refreshed OpenNETCF’s MTConnect Managed SDK with a few changes.  Most of the changes are the result of us dogfooding the SDK and making things more thread-safe and solid. 

In preparation for the upcoming MTConnect: Connecting Manufacturing Conference where we’ll be both speaking and have a booth, I’m also creating some hands-on labs and tools.  One of the tools is a Sample Client application that consumes the Client side of the SDK.  The full source for the sample application is also available with the SDK download over on Codeplex.


OpenNETCF VirtualAgent Refresh released

I’ve published a refresh of the OpenNETCF MTConnect VirtualAgent code up on Codeplex.  This is a major refactor from the last publication, which extracted the VirtualAgent engine capabilities into a separate IoC module.  This change greatly simplifies integrating VirtualAgent capabilities into an existing application and allows an app to publish other Padarn REST services and ASP.NET pages without having to modify the VirtualAgent engine code base.

MTConnect SDK Refresh Released


I’ve again refreshed OpenNETCF’s MTConnect Managed SDK with a few changes, including:

  • Added support for some filtering on current and sample requests (not all filtering is supported, but I’ve added device name and data item ID filtering)
  • Added AgentInformation to the EntityClient so you get information about the agent returning a data set for a probe
  • Miscellaneous bug fixes and refactoring

As always, if you find a bug or would like me to work on implementing a specific feature from the specification, add it to the lists over on the Codeplex site.

Release of OpenNETCF MTConnect VirtualAgent

I’ve been extremely busy for the past few months putting together a cross-platform MTConnect Agent.  The result, along with our other MTConnect offerings, is the OpenNETCF MTConnect VirtualAgent, which is shared source (MIT license) and runs under either the Compact Framework (3.5) or the Full Framework.  The default implementation uses Padarn as the web server, but it’s designed to deliver content through an interface so that it can, in theory, be swapped out pretty easily (I say “in theory” because I’ve not tested it with another server at this point).

The VirtualAgent offers a boatload of interesting capabilities that a general Agent doesn’t, most notably the ability to load custom Adapters that you can use to encapsulate a process model or even drive control logic.  We’ve got a simple example of a Hosted Adapter in the code base now and will be adding more complete and robust samples as time progresses.

At any rate, if you need to get a machine tool (or really any sort of device) publishing MTConnect data on a plant floor, we’ve got a solution that can get you there in well under a day in most cases and at a remarkably low cost.

MTConnect Managed SDK now supports Windows Phone 7


OpenNETCF’s MTConnect Managed SDK now has Common and Client projects supporting Windows Phone 7.  This means you can now consume MTConnect-published data in your Phone 7 projects.  I’ll also be publishing and open-sourcing an MTConnect viewer for Phone 7 in the future, but if you want to get started on your own, you now have the tools.