(This is a two part article. Part 1 of this article can be found here: https://community.exchange.se.com/t5/Industrial-Edge-Computing-Forum/How-Edge-Computing-is-Deliverin...)
Future Proofing the Edge
In part one of this post, I made the claim that the transformational change brought about by edge computing would be an order of magnitude greater than that of cloud computing. Here’s why: not only do edge computing deployments provide the supporting infrastructure to make IIoT truly useful, they can also save millions of dollars in hard costs and do so immediately.
Let’s go back to the example of Harrison Steel. Collecting and leveraging this machine data in real-time wasn’t just a matter of improving their operational efficiency, it also provided them with a material competitive advantage. Were their competitors able to collect and leverage this data to enhance precision and reduce failure rates, they would be at a serious disadvantage in the market. These competitive forces dictated that they could not afford to delay. And so, they had planned to deploy an entirely new network, at each location, dedicated solely to the purpose of data collection and analysis. Such a complex project would require the design and deployment of a massive amount of networking infrastructure that would need to be run all over the factory floors and across their production campuses. It would also require the construction and deployment of new server and storage infrastructure running on-site to quickly collect and process all of this data.
An undertaking of this scale would undoubtedly be disruptive to plant operations, and scoping such a project was plagued with an uncomfortable amount of uncertainty: Exactly how many new sensors would they need to deploy, and what new capabilities might these sensors have five years from now? How should a new network be provisioned to manage an unknown volume of data traffic? Should the new network be built to handle today’s volume of data traffic, or should it be overprovisioned to accommodate future growth? How would they even know?
This was a disruptive, multi-million-dollar project with diverse technology and operational requirements. And even after deployment, the longevity of such a solution was in question. But their leadership team understood that, from a competitive standpoint, something had to be done now, and a multi-million-dollar budget was eventually approved.
Fortunately for Harrison, before breaking ground, they learned about edge computing and realized that it could meet both their current and future needs at a fraction of the cost (which for Harrison was 1/20th of their planned budget). Most critically, it could be deployed in a matter of weeks rather than over the course of years.
By deploying these mini compute clusters, Harrison was able to isolate the data collection, move the compute resources into immediate proximity of the sensors, and leverage the existing primary networking resources that were already in place. This provided them with the requisite agility to adapt to future and unknown challenges, as they could easily add more edge computing nodes as new sensors and data collectors were added to the network. They could also upgrade these mini infrastructures in whole or in part as their operating environment evolved.
This is why edge computing is so powerful. Like the cloud before it, edge computing can serve as a springboard for new value propositions and innovations, and do so without massive capital investments. This combination of new value propositions and massive cost reductions is why the edge is about to explode as the new frontier in IT, and will do so at a rate that dwarfs cloud computing.
Scaling the Autonomic Edge
However, this new frontier is not without its own challenges. Namely, as the number of infrastructure deployments increases, so do potential issues for IT administration. Managing one centralized infrastructure is difficult enough; managing dozens of remote infrastructures is exponentially more challenging.
Imagine we warp ahead a few years in the Harrison Steel example. Perhaps now there are dozens of deployments inside each plant, with between three and four server-type devices for each deployment. So we are talking about perhaps 200 server devices at each facility, multiplied by five facilities, for a total of 1000 devices in their future edge computing infrastructure.
Now, let’s imagine there is a critical BIOS update that needs to be applied to each of these servers, and applying that update takes about 12 minutes of administrative time, plus downtime for that server. Here, a fairly ordinary type of IT event is now slated to take 200 hours of administrative time (ignoring any need to physically touch the servers, or the risk of introducing unintentional human-error at each step), plus potential downtime of data collection and perhaps even the machinery tied to the infrastructure. A simple BIOS update like this could be a $50,000 or more endeavor, even if it all goes smoothly.
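The arithmetic behind that estimate is straightforward. A minimal sketch of the calculation, using the figures from the article (the hourly labor rate is an assumption chosen to land on the quoted total):

```python
# Back-of-the-envelope cost of a fleet-wide BIOS update.
# Device count and minutes-per-update come from the article;
# the hourly labor rate is an assumed figure for illustration.

DEVICES = 5 * 200          # 5 facilities x ~200 servers each = 1,000
MINUTES_PER_UPDATE = 12    # hands-on admin time per server
HOURLY_RATE = 250          # assumed fully loaded admin cost ($/hr)

total_hours = DEVICES * MINUTES_PER_UPDATE / 60
total_cost = total_hours * HOURLY_RATE

print(f"{total_hours:.0f} admin hours, ~${total_cost:,.0f} in labor")
# 200 admin hours, ~$50,000 in labor
```

And this counts only sequential hands-on time; it ignores travel, coordination across sites, and the downtime of whatever machinery depends on each server.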
Fast forward another few years. What will be the rate of new data collection capabilities and needs? 50% growth per year? 200% growth per year? What happens if we aren’t talking about 1,000 devices, but 10,000 or even more?
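To put numbers on those growth rates, a simple compound-growth projection (the three-year horizon is an assumption for illustration):

```python
# How quickly does a 1,000-device fleet grow under the growth
# rates floated above? Simple compound-growth projection.

def fleet_size(start, annual_growth, years):
    """Devices after `years` of compound growth (0.5 = 50%/yr)."""
    return start * (1 + annual_growth) ** years

for rate in (0.5, 2.0):
    print(f"{rate:.0%}/yr for 3 years:", round(fleet_size(1000, rate, 3)))
# 50%/yr for 3 years: 3375
# 200%/yr for 3 years: 27000
```

At 200% annual growth, the fleet blows past 10,000 devices well before the third year is out.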
It is obvious that traditional IT infrastructure management breaks down in even small-scale edge computing deployments. Top-down, administrator-led management will simply not work at edge scale, and thus something must change.
To further underscore this challenge, consider broader or more remote types of edge computing deployments than the Harrison Steel example: A fleet of ships at sea, or infrastructure inside a large fast-food restaurant chain. In these scenarios, there is no administrator physically near these locations, the scale of the deployments can be massive, and these locations may often not be connected to the internet at all.
How can the “cloud-like” experience promised by the edge be attained in any of these instances?
The solution is for the edge to manage itself. The ability to detect problems, proactively mitigate issues as they arise, and take the actions necessary to keep applications running must exist autonomously in the edge deployments themselves, calling out to the administrative team only when human intervention is absolutely required. Horizontal scalability is achieved not through centralized management, but by pushing the intelligence to keep applications running out to the edge itself.
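The detect/mitigate/escalate pattern described above can be sketched as a simple remediation loop. This is an illustrative toy, not any vendor's actual implementation; all names (`check_health`, `KNOWN_FIXES`, `alert_admin`) are hypothetical:

```python
# Minimal sketch of an autonomous remediation loop: known issues are
# fixed locally without human involvement; only unknown issues are
# escalated to an administrator. Names here are illustrative only.

KNOWN_FIXES = {
    "disk_degraded": lambda node: node.update({"disk": "rebuilt"}),
    "vm_unresponsive": lambda node: node.update({"vm": "restarted"}),
}

def check_health(node):
    """Return detected issue codes (stubbed out for this sketch)."""
    return node.get("issues", [])

def alert_admin(node, issue):
    print(f"ESCALATE: node {node['id']} needs a human for {issue!r}")

def remediation_pass(nodes):
    for node in nodes:
        for issue in check_health(node):
            fix = KNOWN_FIXES.get(issue)
            if fix:
                fix(node)                 # mitigate locally, no human
            else:
                alert_admin(node, issue)  # only unknowns reach the admin
        node["issues"] = []

nodes = [
    {"id": 1, "issues": ["disk_degraded"]},
    {"id": 2, "issues": ["power_supply_failed"]},
]
remediation_pass(nodes)
```

The point of the pattern is that the loop runs on each edge deployment itself, so adding nodes adds no administrative load for the known-issue cases.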
This is the secret sauce that makes Scale Computing who we are. Since our inception, our core technology has been built to provide for autonomous infrastructure management. Customers will often cite this as “ease of use” combined with “high availability” and “self-healing.” These are true benefits and Scale Computing has been delivering this to customers for many years. The magic of our HyperCore OS is that it detects and fixes problems on its own, and over those many years the system has become smarter and better, and addresses more problems autonomously than ever before.
For a typical datacenter deployment, such capabilities are useful. At the scale of the edge, these capabilities represent the difference between capturing the true value promised by edge computing, or being crushed under the administrative weight of deploying and managing such an infrastructure.
Today, Scale Computing succeeds at the edge because we have these capabilities: autonomous infrastructure with horizontal scalability, combined with the flexibility to drop seamlessly into the on-the-ground reality of existing environments.