Edge and Fog Computing: differences and similarities
15 March 2021
Fog computing and edge computing are often confused, probably because they offer similar functionality: both process data and send it to analytical platforms located close to the data source (e.g. sensors on machinery, wearables, and various types of devices installed on assets to be monitored and controlled).
Both technologies can help organizations reduce their reliance on cloud-based platforms for data analysis. Cloud-only architectures often suffer from latency, with particularly negative impacts in use cases that expect closed-loop control, typical of Smart Building, Health and Industry scenarios, and this slows the move toward a data-driven approach that enables faster decisions. As the computational load on cloud systems within the networks increases, we will therefore see selected processing decentralized onto, or near (at the edge of), the IoT field sensors that collect the data.
Fog and edge computing architectures therefore reduce the amount of data that must be exchanged with remote cloud servers and make it possible to act on data immediately, since processing often has to happen within milliseconds, or tens of milliseconds, of collection. The main difference between fog and edge computing is where that processing occurs.
In Edge Computing, data is processed on the devices themselves (if they have enough computational capability) or on "IoT gateways", i.e. edge computers connected to the devices on the local network.
In Fog Computing solutions, instead, data processing takes place on a network of computing elements (hubs, nodes, routers, edge computers, smart devices or gateways): the microprocessors on IoT gateways, edge computers or personal computers connected to the LAN constitute its fog nodes, and these may be physically far from the sensors and actuators.
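As a minimal sketch of the edge pattern described above (sensor values, window size and summary fields are illustrative assumptions, not a specific product's API), a gateway might aggregate raw readings locally and forward only compact summaries upstream:

```python
from statistics import mean

def process_on_gateway(readings, window=10):
    """Aggregate raw sensor readings locally on an edge gateway.

    Instead of streaming every sample to the cloud, the gateway
    collapses each window of readings into one summary record.
    """
    summaries = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        summaries.append({
            "min": min(chunk),
            "max": max(chunk),
            "avg": round(mean(chunk), 2),
            "samples": len(chunk),
        })
    return summaries

# 100 raw temperature samples collapse into 10 summary records,
# drastically reducing the data sent to the cloud.
raw = [20.0 + (i % 7) * 0.1 for i in range(100)]
print(len(process_on_gateway(raw)))  # prints 10
```

The same logic could run on the device itself when it has enough computational capability, which is exactly the placement distinction between the two paradigms.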
In addition, the "fog" paradigm usually favors "open" technologies, while the more flexible "edge" paradigm uses both "open" and "proprietary/licensed" technologies. From a risk-analysis point of view, edge computing may be preferable to fog computing, as it potentially contains fewer elements in which a failure could occur. In a "fog" architecture, in fact, each sensor or device within the network must be programmed to determine which information should be stored and processed locally and which should be sent to the cloud for further processing and/or storage.
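The per-node programming just described can be sketched as a simple routing rule (the field names and the anomaly threshold are assumptions for illustration): each fog node decides whether a reading is handled locally or escalated to the cloud.

```python
def route_reading(reading, local_threshold=75.0):
    """Decide on a fog node where a reading should be handled.

    Normal readings are stored and processed locally; values above
    the threshold are sent to the cloud for deeper analysis and
    long-term storage.
    """
    if reading["value"] > local_threshold:
        return {"destination": "cloud", "reason": "anomaly", **reading}
    return {"destination": "local", "reason": "normal", **reading}

readings = [
    {"sensor": "vib-01", "value": 42.0},
    {"sensor": "vib-01", "value": 91.5},  # above threshold -> cloud
]
routed = [route_reading(r) for r in readings]
print([r["destination"] for r in routed])  # prints ['local', 'cloud']
```

Every node carrying such logic is one more element to configure and one more place a fault can occur, which is the risk-analysis trade-off noted above.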
However, the difference is not so pronounced: both Edge and Fog Computing allow lower latency than Cloud computing and are therefore better suited to the rapid analysis required by real-time or near-real-time applications. Cloud computing, by contrast, offers superior processing power and storage capacity and is therefore the most suitable technology for complex, in-depth, long-term data analysis. Another important difference concerns Internet use and security: cloud computing requires 24/7 Internet access, while the other two can work offline. Fog and edge computing are therefore best suited to use cases where IoT sensors may not have seamless connectivity to the Internet. As for security, since data is distributed among the nodes (Fog computing) or kept on the device itself (Edge computing), it is harder to manipulate than in a centralized cloud system; on the other hand, cloud computing is preferable when security policies must be enforced on a single data repository rather than on multiple repositories distributed across the network.
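The offline capability mentioned above is commonly implemented as a store-and-forward buffer: the node queues readings while the Internet link is down and flushes the backlog when connectivity returns. A minimal sketch, assuming the connectivity flag and upload callback are supplied by the surrounding application:

```python
from collections import deque

class StoreAndForward:
    """Buffer readings locally while offline; flush when back online."""

    def __init__(self, maxlen=1000):
        # Bounded queue: the oldest readings are dropped if the link
        # stays down long enough to fill the buffer.
        self.buffer = deque(maxlen=maxlen)

    def submit(self, reading, online, upload):
        """Upload immediately if online, otherwise buffer locally."""
        if online:
            self.flush(upload)      # send the backlog first, in order
            upload(reading)
        else:
            self.buffer.append(reading)

    def flush(self, upload):
        """Send any readings accumulated while offline."""
        while self.buffer:
            upload(self.buffer.popleft())

sent = []
sf = StoreAndForward()
sf.submit({"t": 1, "v": 20.1}, online=False, upload=sent.append)
sf.submit({"t": 2, "v": 20.3}, online=False, upload=sent.append)
sf.submit({"t": 3, "v": 20.2}, online=True, upload=sent.append)
print(len(sent))  # prints 3: the two buffered readings plus the live one
```

A bounded buffer is a deliberate design choice here: on constrained edge hardware, unbounded queuing during a long outage would exhaust memory.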
Thanks to the use of many distributed data centers and small devices, the probability that a single breach brings down the entire network is also reduced.
Studies reveal that more than 50% of U.S. companies consider rapid and accurate data collection and analysis to be an essential aspect of their strategy, so it seems unlikely that this is just a passing fad.
Statistics also show that the market for edge computing services is set to grow by 50% in the next year, especially with the rise of AI technologies that require fast machine learning. Currently, about 10% of data is collected and processed outside of conventional cloud computing methods; a Gartner study, however, predicts that by 2025 about 75% of data will be processed outside the cloud.
The global edge computing market covers five major geographic regions: North America, Asia Pacific (APAC), Europe, Middle East and Africa (MEA), and Latin America. Companies that want to remain highly competitive will necessarily have to reinvent the way they do business: the versatility and scalability of edge computing suit start-ups and established companies alike. It is well known that collecting, processing and analyzing data is a crucial success factor for companies of any size, small, medium or large.