Edge computing

What is Edge computing? 

Edge computing is a distributed computing paradigm that brings computation and data storage closer to the sources of data and the users who request it, in order to reduce latency. In an IEEE DAC 2014 keynote, and subsequently in an invited talk at MIT's MTL Seminar in 2015, Karim Arabi defined edge computing broadly as all computing outside the cloud, happening at the edge of the network, and more specifically in applications where real-time processing of data is required. In his definition, cloud computing operates on big data while edge computing operates on "instant data", the real-time data generated by sensors or users. The term is often used synonymously with fog computing.

The world's data is forecast to grow 61% to 175 zettabytes by 2025. According to the research firm Gartner, around 10% of enterprise-generated data is created and processed outside a traditional centralized data center or cloud; by 2025, the firm predicts that this figure will reach 75%. The growing number of IoT devices at the edge of the network is producing a massive amount of data, and storing and processing all of that data in cloud data centers pushes network bandwidth requirements to the limit. Despite improvements in network technology, data centers cannot guarantee acceptable transfer rates and response times, which are a critical requirement for many applications. Furthermore, devices at the edge constantly consume data coming from the cloud, forcing companies to decentralize data storage and service provisioning and to leverage physical proximity to the end-user.

Is Edge computing secure?

In edge computing, data may travel between different distributed nodes connected through the Internet, and thus requires special encryption mechanisms independent of the cloud. Edge nodes may also be resource-constrained devices, limiting the choice of security methods. Moreover, a shift from a centralized top-down infrastructure to a decentralized trust model is required.[19] On the other hand, keeping and processing data at the edge makes it possible to increase privacy by minimizing the transmission of sensitive information to the cloud. Furthermore, the ownership of collected data shifts from service providers to end-users.
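For instance, a minimal sketch of edge-side encryption might look like the following. It assumes the third-party Python "cryptography" package is available on the node, and the sensor payload and ad-hoc key generation are illustrative only; a real deployment would provision keys over a secure channel.

```python
# Minimal sketch: encrypt a sensor reading on the edge node before it is
# forwarded, so the plaintext never leaves the device. Assumes the
# third-party "cryptography" package; payload fields are hypothetical.
from cryptography.fernet import Fernet

# For illustration only: a real edge node would receive a provisioned key
# rather than generate one ad hoc.
key = Fernet.generate_key()
cipher = Fernet(key)

sensor_reading = b'{"sensor_id": "edge-42", "temperature": 21.7}'

# Symmetric authenticated encryption of the local reading.
token = cipher.encrypt(sensor_reading)

# Only the ciphertext would be sent onward; decryption happens wherever
# the key is shared (another edge node or a trusted backend).
assert cipher.decrypt(token) == sensor_reading
print("ciphertext bytes:", len(token))
```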

Management of failovers is crucial in order to keep a service alive. If a single node goes down and becomes unreachable, users should still be able to access the service without interruption. Moreover, edge computing systems must provide actions to recover from a failure and alert the user about the incident. To this end, each device should maintain the network topology of the entire distributed system, so that errors can be detected and recovery remains straightforward. Other factors that influence this aspect are the connection technologies in use, which may provide different levels of reliability, and the accuracy of the data produced at the edge, which could be unreliable due to particular environmental conditions. As an example, an edge computing device such as a voice assistant may continue to provide service to local users even during a cloud service or internet outage.
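As a rough illustration, the following Python sketch shows one way a device could fall back to local handling when the cloud is unreachable; the endpoint URL and the local handler are hypothetical placeholders, not any particular assistant's API.

```python
# Failover sketch: try the cloud service first, fall back to on-device
# handling if it cannot be reached. Endpoint and handlers are placeholders.
import urllib.request

CLOUD_ENDPOINT = "https://cloud.example.com/assistant"  # placeholder URL

def handle_locally(request_text: str) -> str:
    # Simplified on-device processing used while the cloud is down.
    return f"(handled locally) {request_text}"

def handle_request(request_text: str) -> str:
    try:
        # Short timeout so users are not left waiting during an outage.
        with urllib.request.urlopen(CLOUD_ENDPOINT, timeout=2) as resp:
            return resp.read().decode()
    except OSError:
        # Covers connection errors and timeouts: keep the service alive
        # from the edge device itself and (in a real system) log/alert.
        return handle_locally(request_text)

print(handle_request("turn on the lights"))
```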

Speed and efficiency of Edge computing

Edge computing brings analytical computational resources close to the end-users and can therefore increase the responsiveness and throughput of applications. A well-designed edge platform can significantly outperform a traditional cloud-based system. Some applications rely on short response times, making edge computing a far more feasible option than cloud computing. Examples range from IoT to autonomous driving, to anything health- or public-safety-relevant, to applications involving human perception such as facial recognition, which typically takes a human between 370 and 620 ms to perform. Edge computing is more likely to match this perception speed, which is useful in applications such as augmented reality, where the headset should preferably recognize who a person is at the same moment as the wearer does.
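To make the latency argument concrete, here is a small Python sketch that estimates the round-trip time to a nearby edge node versus a remote cloud region and routes requests to whichever responds faster. The hostnames are placeholders and the TCP-connection timing is only a crude proxy; a real deployment would use proper health probes.

```python
# Latency sketch: estimate round-trip time (RTT) via TCP connection setup
# and prefer the lower-latency target. Hostnames are placeholders.
import socket
import time

def rtt_ms(host: str, port: int = 443, timeout: float = 1.0) -> float:
    """Very rough RTT estimate: time to open a TCP connection."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass
    except OSError:
        return float("inf")  # unreachable hosts count as infinitely slow
    return (time.perf_counter() - start) * 1000.0

edge_host = "edge.local.example"   # hypothetical nearby edge node
cloud_host = "cloud.far.example"   # hypothetical distant cloud region

target = edge_host if rtt_ms(edge_host) <= rtt_ms(cloud_host) else cloud_host
print("routing requests to:", target)
```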

Because the analytical resources sit close to the end-users, sophisticated analytical and artificial intelligence tools can run at the edge of the system. This placement at the edge helps to increase operational efficiency and brings many advantages to the system.
Additionally, using edge computing as an intermediate stage between client devices and the wider internet yields efficiency savings, as the following example demonstrates: a client device requires computationally intensive processing of video files to be performed on external servers. By using servers located on a local edge network to perform those computations, the video files only need to be transmitted within the local network. Avoiding transmission over the internet results in significant bandwidth savings and therefore increases efficiency. Another example is voice recognition: if the recognition is performed locally, only the recognized text needs to be sent to the cloud rather than audio recordings, significantly reducing the amount of required bandwidth.
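As a back-of-the-envelope illustration of the voice-recognition example, the following Python snippet compares uploading a short audio clip with uploading only its transcript. The bit rate, utterance length, and transcript size are assumptions chosen for illustration, not measurements.

```python
# Bandwidth sketch: local speech recognition lets the device send text
# instead of audio. All figures below are illustrative assumptions.
AUDIO_KBPS = 256          # assumed 16 kHz, 16-bit mono PCM audio
UTTERANCE_SECONDS = 5     # assumed length of a spoken command
TRANSCRIPT_CHARS = 60     # assumed length of the recognized text
BYTES_PER_CHAR = 1        # plain ASCII text

audio_bytes = AUDIO_KBPS * 1000 / 8 * UTTERANCE_SECONDS
text_bytes = TRANSCRIPT_CHARS * BYTES_PER_CHAR

print(f"audio upload : {audio_bytes / 1024:.1f} KiB")
print(f"text upload  : {text_bytes} bytes")
print(f"reduction    : ~{audio_bytes / text_bytes:.0f}x less data sent")
```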

" That's all, We hope that this blog has added some value to your knowledge, Thank You"
