(Hi All - I'm the CEO of Scale Computing, a long-time Schneider Electric partner on the IT side. I was just invited to join the forum and share this article after it was noted on my LinkedIn profile. Happy to be here, and I can be reached through the forums here of course, or at email@example.com for any questions!)
The Edge of Tomorrow: How Edge Computing is Delivering the Next Wave of Industrial Innovation Today (Part 1 of 2)
Hockey great Wayne Gretzky once famously said that “you need to skate to where the puck is going, not where it’s been.” To carry that analogy into the world of IT infrastructure: the puck has been gliding lazily across the cloud for the past two decades, but now it’s sliding into a new realm: edge computing. And as much as cloud computing has transformed the datacenter, I believe the impact of edge computing will be an order of magnitude greater.
First, consider where we’ve been with cloud computing. In hindsight, the emergence of the cloud was inevitable: economies of scale drove its adoption. The sheer scale and collective ambitions of Amazon, Google, and Microsoft were predicated on the massive investments they made in enormous data centers, and subsidizing those investments by offering the resources to third parties at an affordable price point further propelled this explosive growth. After all, the size and scope of these giant data centers dwarfed even the largest traditional regional data centers, so the economies of operational scale made sound business sense as well.
Likewise, the massive scale of these data centers lent itself to novel value propositions: huge compute and storage capabilities coupled with virtualization meant these resources could be delivered on-demand, applications could be centrally administered and deployed, and the management of the IT infrastructure itself -- the hardware, the servers, the storage, etc. -- was vastly simplified because it became “somebody else’s problem.” As I like to point out, “the cloud” is really just a marketing term meaning “somebody else’s datacenter.”
Rolling the clock back a bit, what we have seen over these last 20 years is the wholesale shift of application infrastructure from on-premises (server rooms, client-server architectures, and the like) to the regional data center, and ultimately to the cloud.
But now, the IT market is undergoing another seismic shift in which the pendulum is beginning to swing back to on-premises computing -- most notably at the edge, where data is increasingly being generated by an assortment of people, machines, and devices.
Solving the Latency Problem
These early stages of edge computing are not all that different from the early phases of cloud computing. Applications that had already been centralized in regional data centers were the first to shift to the cloud. Likewise, applications that today still run on-site are moving to more reliable platforms that provide high availability and ease of management -- effectively a cloud-like experience from an administration perspective, but still running locally.
However, just as cloud computing started as “a bigger datacenter” and then new value propositions emerged, the same is true of edge computing. The difference, however, is the magnitude of these value propositions.
For many years now, the promise of IoT technology has been limited by the constraints of the underlying IT infrastructure. Yes, massive pools of data can be analyzed, new patterns found, and new decisions made based on those findings. Yes, massive cloud-scale compute resources enable that kind of analysis. But the reality of today’s conventional IT model keeps the most exciting innovations just out of reach. Having sensors that can collect data does not solve the problem of actually storing that data, moving it back and forth across the network, and getting it into the cloud in the first place. Even if you could solve those problems, limitations such as network latency may make the compute power of the cloud impractical as part of a solution. If a decision must be made in real time, the latency to the cloud might simply be too high for a variety of industrial and other real-world use cases.
When microseconds matter, the cloud is only 80 milliseconds away.
And the problem here is one of physics: even at the speed of light over fiber optics, the cloud remains “far away” in terms of microsecond decisions. No one wants their self-driving car to see an object, collect imaging and LIDAR data, and then send that data to the cloud to determine whether the object entering the road is a deer or a paper bag before deciding whether to engage the brakes. Such a decision needs to be made as close to the source as possible. Even with an optimal round trip to the cloud of 80ms, a car traveling at 75 MPH has already covered nearly nine feet -- not an ideal outcome for the car or the deer.
On the other hand, if such a decision can be analyzed and made locally, perhaps in 10ms, the car has traveled only about a foot -- rendering a faster response, but also ensuring that the decision is made within the context of the current situation. Edge computing enables such “one foot” decisions, and it does so in a way that can be applied to the down-and-dirty reality of today’s IT infrastructure.
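The arithmetic here is easy to sanity-check. A quick sketch (using the example figures above, which are illustrative rather than measured values) converts round-trip latency into distance traveled:

```python
# Back-of-envelope check: how far a car travels while waiting on a decision.
# The speeds and latencies are the illustrative figures from this post, not
# measurements of any real system.

def distance_traveled_ft(speed_mph: float, latency_ms: float) -> float:
    """Feet traveled during `latency_ms` at `speed_mph` (1 mph = 5280/3600 ft/s)."""
    feet_per_second = speed_mph * 5280 / 3600
    return feet_per_second * latency_ms / 1000

# At 75 MPH: an 80 ms cloud round trip costs ~8.8 feet of travel,
# while a ~10 ms local decision costs only about a foot.
print(distance_traveled_ft(75, 80))
print(distance_traveled_ft(75, 10))
```

The gap only widens at higher speeds or on congested networks, which is why the decision loop has to live at the edge.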
To understand how this is being applied in the world of industry today, I recently spoke with a customer of ours, Shane Rogers from Harrison Steel, an Indiana-based manufacturer of precision-engineered steel casting solutions. Precision is the key word here, as the parts that Harrison fabricates are produced by robotic production cells that require very precise tolerances. Operating to these exacting specifications means that even minuscule, short-term variances during the production process can quickly cascade into part failure.
Variables such as temperature, vibration, flow rates, and fill time matter enormously in this type of process. For years, such data was collected and logged manually. However, the emergence of Industrial IoT (IIoT) in a manufacturing setting means that this data collection is no longer a human function, but the job of sensors that collect data in real time. Data that was once logged every few minutes is now collected hundreds or even thousands of times per second. This creates both new opportunities and new challenges: we now have a far higher volume of data available to analyze in order to keep these machines finely tuned. However, the existing network infrastructure may not be able to handle it, new compute and storage resources are required (perhaps including GPUs or specialized AI processors), and the sheer number of devices (sensors, gateways, etc.) being deployed creates a massive management burden for IT.
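To get a feel for the jump in scale, consider a single sensor. The figures below are assumptions for illustration (a 1 kHz sampling rate and 16 bytes per sample), not Harrison Steel's actual numbers:

```python
# Rough sizing sketch with assumed figures (not from the article): compare a
# manual reading every five minutes against one IIoT sensor sampling at 1 kHz.

SAMPLE_BYTES = 16  # assumed: timestamp + reading per sample

def daily_samples(rate_hz: float) -> int:
    """Samples produced per day at a given sampling rate."""
    return round(rate_hz * 60 * 60 * 24)

manual = daily_samples(1 / 300)  # one manual reading every five minutes
iiot = daily_samples(1000)       # one sensor sampling at 1 kHz

print(manual, iiot)                    # 288 vs 86,400,000 samples per day
print(iiot * SAMPLE_BYTES / 1e9)       # roughly 1.4 GB of raw readings/day
```

Multiply that by dozens of sensors per production cell and the case for processing the data where it is generated, rather than hauling it all across the plant network, makes itself.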
Harrison Steel solved this problem with edge computing. Employing small form-factor hyperconverged infrastructure servers such as our HE150 devices means that compute resources can be deployed right alongside the equipment generating the data -- not in a distant server room, but just a few feet away. In this way, the data is collected on an isolated mini-network, and the applications that ingest and process that data are local to both the data pool and the sensors themselves. The resulting decisions are then sent to the robotic cell operating in the immediate proximity; the performance of the production network, meanwhile, is unaffected, as only metadata is routed back to the centralized servers.
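In pseudocode terms, the pattern looks something like the following. This is a minimal sketch of the ingest-locally, summarize-upstream idea -- the function names, threshold, and readings are hypothetical, not Harrison Steel's actual software:

```python
# Hypothetical sketch of edge-local processing: act on raw sensor readings at
# the edge, and return only a compact metadata summary for the central site.
from statistics import mean

def process_window(readings, vibration_limit=0.55):
    """Make a local decision on a window of readings; emit only metadata."""
    avg = mean(readings)
    alert = avg > vibration_limit  # low-latency decision, made at the edge
    # ...in a real cell, the decision would drive the nearby robot here...
    return {"samples": len(readings), "avg": round(avg, 3), "alert": alert}

# Thousands of raw samples stay on the isolated mini-network; only this
# small summary dictionary ever crosses the production network.
meta = process_window([0.4, 0.5, 0.6, 0.5])
print(meta)
```

The design choice to forward summaries instead of raw streams is what keeps the production network out of the data path entirely.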
Harrison will repeat this setup dozens of times, deploying mini edge infrastructures and networks near the data generation points across multiple manufacturing facilities.
In part two of this post, which I'll publish later this week, I’ll examine the cost savings, agility, and scale that edge computing can deliver in the real world, and how it’s being applied today.
(Part 2 of this article is available here: https://community.exchange.se.com/t5/Industrial-Edge-Computing-Forum/How-Edge-Computing-is-Deliverin...)
Thanks for sharing with our community, Jeff.
This is why Schneider Electric has partnered with Scale to develop a computing solution at the Edge.