What is edge computing?

Jonathan Mathews

Cloud computing has dominated IT discussions for the last two decades, particularly since Amazon popularized the term in 2006 with the release of its Elastic Compute Cloud. In its simplest form, cloud computing is the centralization of computing services in shared data center infrastructure, taking advantage of economies of scale to reduce costs. However, latency, which is influenced by the number of router hops, packet delays introduced by virtualization, and server placement within a data center, has always been a key concern in cloud migration.

This is where edge computing comes in. Edge computing is essentially the process of decentralizing computing services and moving them closer to the source of the data. This can have a significant impact on latency, as it drastically reduces both the volume of data that has to move and the distance it travels.
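To make that idea concrete, here is a minimal sketch in Python, not taken from any particular edge platform, of an edge node that aggregates raw sensor readings locally and forwards only a small summary upstream. The sensor source, the aggregation logic, and the cloud endpoint are all illustrative assumptions.

```python
# Hypothetical edge-side aggregation: many raw readings in, one small record out.
import json
import random
import statistics

CLOUD_ENDPOINT = "https://cloud.example.com/ingest"  # placeholder backend URL

def read_sensor_batch(n: int = 1000) -> list[float]:
    """Stand-in for reading n raw samples from a local sensor."""
    return [20.0 + random.random() for _ in range(n)]

def summarize(samples: list[float]) -> dict:
    """Aggregate at the edge: a thousand readings become four numbers."""
    return {
        "count": len(samples),
        "mean": round(statistics.mean(samples), 3),
        "min": min(samples),
        "max": max(samples),
    }

if __name__ == "__main__":
    raw = read_sensor_batch()
    payload = json.dumps(summarize(raw))
    # In a real deployment this payload would be sent to CLOUD_ENDPOINT;
    # here we only show how little data actually needs to leave the edge.
    print(f"raw samples: {len(raw)}, bytes sent upstream: {len(payload)}")
```

The point of the sketch is the ratio: the raw batch never crosses the network, and only a compact summary travels to the central data center.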

The term “edge computing” covers a wide range of technologies, including peer-to-peer, grid and mesh computing, fog computing, blockchain, and content delivery networks. It has been popular within the mobile sector and is now branching out into almost every industry.

The relationship between edge and cloud

There is much speculation about edge replacing cloud, and in some cases it may. In many situations, however, the two have a symbiotic relationship. Services such as web hosting and IoT, for instance, benefit greatly from edge computing for performance and for initial processing of data, yet they still require a robust cloud backend for centralized storage and data analysis. Edge computing has also been a driver of innovation within OpenStack, the open source cloud computing project.
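As a rough illustration of that division of labor, the sketch below models an edge node that answers requests from a local cache and falls back to a central cloud backend that remains the system of record. This is a sketch built on assumptions, not a description of any specific product; the class names and simulated latency are hypothetical.

```python
# Hypothetical edge/cloud split: the edge serves nearby users, the cloud stays
# authoritative. Latency figures are simulated for illustration only.
import time

class CloudBackend:
    """Central store: authoritative, but comparatively far away."""
    def __init__(self) -> None:
        self._store = {"/index.html": "<html>hello</html>"}

    def fetch(self, key: str) -> str:
        time.sleep(0.08)  # simulate ~80 ms round trip to a distant data center
        return self._store[key]

class EdgeNode:
    """Edge node: serves cached content close to the user."""
    def __init__(self, origin: CloudBackend) -> None:
        self._origin = origin
        self._cache: dict[str, str] = {}

    def handle(self, key: str) -> str:
        if key in self._cache:           # hit: answered locally, minimal latency
            return self._cache[key]
        value = self._origin.fetch(key)  # miss: go back to the cloud origin
        self._cache[key] = value
        return value

if __name__ == "__main__":
    edge = EdgeNode(CloudBackend())
    for _ in range(3):
        start = time.perf_counter()
        edge.handle("/index.html")
        print(f"{(time.perf_counter() - start) * 1000:.1f} ms")
    # The first request pays the trip to the cloud; repeats are served at the edge.
```

Only the first request pays the full round trip; the edge absorbs the rest, while the cloud backend keeps the authoritative copy and can aggregate data from many such nodes.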

Edge computing: a brief history

Edge computing can be traced back to the 1990s, when Akamai launched its content delivery network (CDN), which introduced nodes at locations geographically closer to the end user. These nodes store cached static content such as images and videos. Edge computing takes this concept further by allowing nodes to perform basic computational tasks. In 1997, computer scientist Brian Noble demonstrated how mobile technology could use edge computing for speech recognition. Two years later, the same approach was used to extend the battery life of mobile phones. This process was termed “cyber foraging” at the time, and it is essentially how both Apple’s Siri and Google’s speech recognition services work today.
