See how sensors, the cloud, and service providers are rallying around the Industrial Internet of Things and what the current trends for IIoT development are.
Modern machines tend to be smart. You’ve got a piece of industrial equipment, possibly with an OS, a sensor bus, and drivers talking to I/O systems enabling local monitoring and operation. Now you want to monitor, control, and update that equipment from the cloud. This can be accomplished, but only after you’ve deployed data management components to what many people interestingly call “the edge.” In reality, this is the center of your universe and your customer’s operations – your machines, PLCs, and gateways. Before selecting a specific approach or vendor technology, you’ll want to set clear criteria for your goals at the edge and have a solid understanding of your constraints. Many enterprises aren’t aware of all the possibilities and pitfalls at the IoT edge, so let’s start with a quick list and next steps for collecting data and generating new value from your industrial equipment both with and without connectivity to the cloud.
To bring “intelligence to the IoT edge” is to enable more autonomous equipment operation. When everything isn’t managed by the cloud, latency and bandwidth constraints (and their costs) stop being limiting factors. Sadly, many enterprises that succeed in freeing their operations from a constant cloud connection then shackle themselves to a hardware provider or contract manufacturer at the edge. This is your business, and no HMI vendor, gateway vendor, or contract manufacturer should be able to dictate what software or embedded operating system you use to serve your customers, nor lock you into their offerings whether or not those offerings continue to meet your needs.
For new equipment, you may have your pick of industrial protocols. But you’ve likely also got machines in the field that you want to connect, and if they’re already running Modbus, EtherNet/IP, OPC UA, or other protocols, that’s what you’ll need to translate into a normalized data format, such as JSON, before doing anything with the data being produced. How are you going to do this? Similarly, you’ll need to build the pipeline between your equipment at the IoT edge and the public cloud you choose for ingest, storage, analytics, and other integrations. This not only enables secure and seamless processing of your data, but also makes possible the device management needed to maintain and update your connected machinery. There’s a lot going on in the “real world” that must be handled before you can make use of services in the cloud.
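As a concrete illustration, a thin translation layer on your gateway can map raw register values into a vendor-neutral JSON document before anything else touches the data. The sketch below assumes the registers have already been read by your existing Modbus or PLC driver; the register map, field names, and machine ID are hypothetical stand-ins for what your own equipment documentation specifies.

```python
import json
import time

# Hypothetical register map for one machine: register index -> (field name, scale).
# In practice this mapping comes straight from the equipment's Modbus documentation.
REGISTER_MAP = {
    0: ("spindle_speed_rpm", 1.0),
    1: ("coolant_temp_c", 0.1),
    2: ("vibration_mm_s", 0.01),
}

def normalize(raw_registers, machine_id):
    """Translate raw 16-bit register values into a normalized JSON document."""
    payload = {
        "machine_id": machine_id,
        "timestamp": int(time.time()),
        "metrics": {
            name: raw_registers[index] * scale
            for index, (name, scale) in REGISTER_MAP.items()
        },
    }
    return json.dumps(payload)

# Registers read elsewhere by your existing driver (values are illustrative).
print(normalize([1200, 452, 37], machine_id="press-07"))
```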
Many teams are eager to jump straight to adding intelligence at the edge and managing machine data. The ability to execute rules, enforce security policies, and run logic for local operations, along with the capacity to normalize, clean, buffer, and filter data, is critical to successful IoT edge deployments, but there are system-level prerequisites that must be addressed before starting this work.
One of the issues you’ll quickly find is that your current equipment software and drivers aren’t compatible with the latest cloud connectivity clients. You’ve got library incompatibilities between the two, and your drivers can’t talk to their cloud clients. There’s a wall at the edge between your machines and services from Microsoft Azure, AWS, and Google Cloud Platform. Fortunately, containers like Docker provide a way to sidestep the incompatibilities and enable end-to-end remote monitoring and control, machine learning, and cloud enterprise integration. Proper use of containers and other best practices can help your team bring intelligence to the edge, speed up delivery, and increase the value of your products in the market, while leveraging your investment in your existing device drivers and control logic.
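As a rough sketch of what this looks like on a gateway, a small supervisor script using the Docker SDK for Python can launch the device driver and the cloud client as separate containers, each shipping its own libraries, so the two never have to agree on dependency versions. The image names and network here are hypothetical placeholders, and this is a minimal illustration rather than a production orchestration setup.

```python
import docker  # Docker SDK for Python ("pip install docker"); assumes a local Docker daemon

client = docker.from_env()

# Hypothetical image names: each component carries its own libraries, so the
# driver container and the cloud client never have to share dependencies.
# "edge-net" is a user-defined bridge network assumed to exist already.
for image in ("acme/machine-io:2.4", "acme/cloud-client:1.1"):
    client.containers.run(
        image,
        detach=True,
        restart_policy={"Name": "always"},  # keep edge components running across reboots
        network="edge-net",
    )
```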
While containers provide a way forward for resolving incompatibilities and enabling complex solutions spanning multiple systems, they’re just that: a way forward, one that still requires orchestration and execution. How should the containers be organized? What should be included in each? How will you balance the resource needs of each component inside each container when running on resource-constrained edge devices and gateways?
As a best practice, we recommend isolating your I/O and any control loops inside an individual container. Use a separate container for normalizing data and providing access to history for local usage. Your cloud client, which communicates with AWS IoT, Azure IoT Hub, Google Cloud IoT Core, or other cloud IoT services, should be in a separate container as well. If you need a local HMI, this too should be in its own container. Why all the isolation? By separating IoT edge components by major functions, your team gains agility and control that is reflected in the speed of delivery and flexibility of the software produced.
Each container can be owned by a different team, decoupling dependencies at both the human and technology levels. Versioning and driver incompatibilities are removed from the equation, and project managers can focus on challenges specific to their domain without impacting, or being impacted by, other teams with each change. Much like microservices in the cloud, the containers interact through well-defined interfaces, each with its own contract. These interfaces enable independent evolution and maintenance.
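One common way to realize those interfaces is a lightweight local message bus: the normalization container publishes documents to agreed-upon topics, and the cloud client and HMI containers subscribe. The sketch below assumes an MQTT broker (a Mosquitto container, for example) reachable as “broker” on the shared edge network; the topic name and payload fields are hypothetical and would be part of your own interface contract.

```python
import json
import paho.mqtt.publish as publish  # pip install paho-mqtt

# Hypothetical topic and fields; the topic name and payload schema are the
# contract between the normalization container and its consumers.
TOPIC = "edge/press-07/metrics"
payload = {"spindle_speed_rpm": 1200, "coolant_temp_c": 45.2}

# "broker" is assumed to be a local MQTT broker container (e.g. Mosquitto)
# reachable on the shared edge network.
publish.single(TOPIC, json.dumps(payload), qos=1, hostname="broker")
```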
Depending on your BOM constraints and available computing power, your system may only support a limited number of containers. Resource management across Docker containers can be difficult. How will you handle sensor history rotation and memory management? What trade-offs should you make when you can only use two containers? If you don’t have a lot of experience in this area, find someone who does early in the project design phase to avoid costly architecture changes down the road.
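As one illustration of the history-rotation question, an in-memory ring buffer keeps only the most recent samples and evicts older ones automatically. The buffer size below is a hypothetical figure you would tune against your gateway’s RAM budget, and anything that must survive a reboot would need disk-backed rotation instead.

```python
from collections import deque

# Keep only the most recent samples in memory; a hypothetical size to be tuned
# against the gateway's RAM budget.
MAX_SAMPLES = 10_000
history = deque(maxlen=MAX_SAMPLES)  # oldest samples are evicted automatically

def record(sample):
    """Append a reading; once the buffer is full, each new sample drops the oldest."""
    history.append(sample)

for i in range(12_000):
    record({"t": i, "coolant_temp_c": 40 + i * 0.01})

print(len(history))  # 10000: rotation happened in place, no manual cleanup needed
```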
Containerization also enables each component to move forward over the life of your products at different rates. Your I/O container will iterate at the speed of your internal engineering team, whereas your HTML5 user interfaces can improve at the speed of the web. Meanwhile, your cloud client components can zip ahead at the speed of the cloud providers themselves, enabling your system to take advantage of their latest offerings regardless of how much effort your team puts into the other areas.
As discussed previously, one of the worst things you can do when building your IoT edge solution is to adopt a vendor’s software products as a condition of using their hardware. An established best practice is to buy your hardware from hardware vendors, your manufacturing services from manufacturing vendors, and your software from software vendors. Most hardware vendors provide software tools along with their components, often at low or no cost, with the simple requirement that you continue to purchase their equipment, now and forever. Some contract manufacturers will provide “free” engineering services to get your product working in order to land the manufacturing contract for your device for a period of time.
The problem with these “free” offerings is that the incentives are misaligned over the lifespan of your product. If the vendor’s hardware products don’t continue to meet your needs, or another vendor releases a superior or lower-cost offering, you don’t want to be stuck with an inferior or overpriced component because it’s the only brand your software tools are compatible with. Similarly, even if their hardware line is a competitive market leader, the software tools that come in the package tend to be like the “free” earbuds that come with new music players.