Hi Beth,

Thank you for sending over the polished version, it looks very good! And yes, I will work with my colleagues to get it posted. If anyone has further comments on the content, please send them over ASAP.

Thanks and Best Regards,
Ildikó

———
Ildikó Váncsa
Senior Manager, Community & Ecosystem
Open Infrastructure Foundation
On May 24, 2021, at 18:35, Cohen, Beth F <beth.cohen@verizon.com> wrote:
Folks - Here is the cleaned-up text for our first blog. Plenty of material and follow-up here.
Ildiko: I assume you will be taking it to get it posted on the Super User site.
Tangled Up in Edge
A blog by the OpenInfra Edge Computing Group
Edge computing has been part of the emerging technology landscape for coming up on five years now, yet somehow it remains somewhat of a mystery (or possibly just a Byzantine maze). The OpenInfra Edge Computing Group has been working towards understanding what edge computing is and what it is not. To illustrate the crux of the issue: according to the telecoms and their analysts, edge is a way of delivering low latency (something telecoms will tell you is REALLY important) to applications such as industrial IoT (autonomous vehicles, UAVs), VR/AR, gaming, and telematics; these use cases clearly depend on low latency for their performance. On the other hand, another way of looking at Edge is through use cases such as video processing, AI/ML, and IoT monitoring for maintenance and diagnostics, where the objective is to use limited bandwidth capacity more effectively by shifting information processing closer to where the data is generated (the Edge).
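To make the bandwidth argument concrete, here is a minimal back-of-the-envelope sketch in Python. The camera count, stream bitrate and detection payload size are assumptions invented for the example, not figures from any particular deployment.

# Back-of-the-envelope comparison of backhaul bandwidth with and without
# edge processing. All numbers below are illustrative assumptions.

CAMERAS = 200                 # hypothetical number of video cameras at a site
RAW_STREAM_MBPS = 4.0         # assumed bitrate of one 1080p camera stream (Mbit/s)
DETECTIONS_PER_SEC = 5        # assumed detections emitted per camera per second
DETECTION_BYTES = 500         # assumed size of one detection record (bytes)

# Option 1: ship every raw stream to a central cloud for processing.
raw_backhaul_mbps = CAMERAS * RAW_STREAM_MBPS

# Option 2: run inference at the edge and ship only detection metadata.
metadata_backhaul_mbps = CAMERAS * DETECTIONS_PER_SEC * DETECTION_BYTES * 8 / 1_000_000

print(f"Raw video backhaul:      {raw_backhaul_mbps:,.1f} Mbit/s")
print(f"Edge-processed backhaul: {metadata_backhaul_mbps:,.3f} Mbit/s")
print(f"Reduction factor:        {raw_backhaul_mbps / metadata_backhaul_mbps:,.0f}x")

With these assumed numbers the backhaul drops from roughly 800 Mbit/s to about 4 Mbit/s; the exact figures are beside the point, it is the order-of-magnitude gap that drives workloads toward the edge.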
Although defining exactly what edge is borders on the futile, that does not mean there is not plenty to talk about. The group stepped away from creating a definition, as in our experience pinpointing exactly what the industry and analysts mean when they talk about edge computing is rather elusive. Far more interesting is what is happening to Edge adoption as the technology, use cases and tooling mature. The working group will be sharing our findings and thoughts on some of the knottier Edge issues that we have encountered, and with that, a blog is born.
The OpenInfra Edge Computing Group will be publishing a series of blog posts to address some of the thornier aspects of edge computing as it enters the mainstream adoption cycle. We will focus on the far more interesting discussion of the technology and architectures needed to support edge workloads – in whatever form the given use case requires. There is no question that there are many edge use cases, ranging from content delivery to tracking data from IoT devices. However, the deeper into the use cases you get, the more apparent it becomes that there are multiple edge-related efforts, each with its own definitions, architectures and requirements. That does not mean there are no common criteria and features; it just shows that they are slippery and constantly evolving, depending on the specific industry use case and the perspective of the related user community and ecosystem.
Some common Edge considerations that will be explored in more depth include:
• Workload latency (to end users) – This metric can vary from under 2 ms to orders of magnitude higher. It is the criterion most touted by telecom operators as the most important factor to consider, but that is more a function of their strong network bias than a requirement of any given Edge use case. Yes, there are use cases that are heavily dependent on low latency, and they are worthy of more exploration.
• Network transport considerations – How does the underlying transport affect use cases and the thinking about Edge? This is a rich area to dig into, as it has been somewhat overlooked; in short, transport is much harder to manage in real life than it might appear.
· Edge should be transport agnostic, so why is it not?
· MEC, Edge and 5G are not the same thing at all, so how did they get so confused?
• Where are my Edge workloads? – Where do the workloads run, and how are they moved around on the Edge and between the Edge and the Center? Some workloads will need to be located at the edge to reduce backhaul and latency to varying degrees, while others can be moved between Edge and Center to offload one or the other. Determining the best option for a given use case can be tricky (a rough placement sketch follows this list).
• How do constraints affect Edge architectures? – One major criterion, which is often overlooked, is that Edge, by its very nature, is defined by the constraints set by its environment and infrastructure.
· Edge infrastructure and platform issues – There is wide variability in infrastructure specs, hardware and tooling. Some factors that affect deployments include:
· Node size: from a single CPU to multiple racks (e.g., hyperscalers)
· Does Edge require special hardware, such as accelerators?
• Does Edge work when multiple clouds are required? – Can the Edge support multiple platforms, or does that quickly become a nightmare of mismatched tools and bad integration?
• Day 2 operations of Edge deployments – When a deployment can range from tens of sites to thousands, or even millions if you count IoT devices as Edge, how does deployment scale affect architecture, management and application decisions?
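To make the workload-placement trade-off above a little more concrete, here is a purely illustrative Python sketch. The round-trip times, backhaul capacity and workload attributes are assumptions made up for the example, not measurements or recommendations from any real deployment.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    latency_budget_ms: float   # maximum acceptable round-trip latency to users/devices
    backhaul_mbps: float       # bandwidth needed if raw data is shipped to the center

# Illustrative characteristics of the two placement options; real numbers
# depend entirely on the deployment and the underlying transport.
EDGE_RTT_MS, CENTER_RTT_MS = 5.0, 60.0
BACKHAUL_CAPACITY_MBPS = 100.0

def place(w: Workload) -> str:
    """Very rough placement heuristic for the sake of illustration."""
    if w.latency_budget_ms < EDGE_RTT_MS:
        return "needs local/on-device processing"  # even the edge site is too far away
    if w.latency_budget_ms < CENTER_RTT_MS:
        return "edge"    # the central site cannot meet the latency budget
    if w.backhaul_mbps > BACKHAUL_CAPACITY_MBPS:
        return "edge"    # shipping raw data back would saturate the backhaul
    return "center"

for w in [Workload("AR rendering", 20, 25),
          Workload("video analytics", 500, 800),
          Workload("nightly batch reports", 5000, 2)]:
    print(f"{w.name:22s} -> {place(w)}")

Real placement decisions of course weigh many more factors (data gravity, regulation, cost, resiliency), but even this toy heuristic shows why latency and backhaul alone can push the same application to very different locations.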
--
Beth Cohen
DMTS - NFV/SDN Network Product Strategy
Verizon Business Group
M 781.434.8553
<Edge_Blog_Tangled_up_in_Edge.docx>

_______________________________________________
Edge-computing mailing list
Edge-computing@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing