Because devices on massive networks need services, says Linux Foundation orchestration guru Arpit Joshipura
Over the next couple of quarters, a team of developers will start fleshing out the roadmap of a Linux Foundation effort AT&T ignited in February, when the telco giant donated code to the Akraino project.
The aim is to create a “network edge stack” that plugs a gap in the open source world, between the OpenStack/OPNFV-built clouds home to compute and telco infrastructure, and the proliferating varieties of end user devices.
The Register’s networking desk spoke to the Linux Foundation’s GM of Networking and Orchestration, Arpit Joshipura, to understand why an “open edge stack” is needed.
The simple answer, Joshipura told us, is that “the edge is really hot at the moment” – because telcos expect that part of the network to get really messy.
Today’s mobile carrier, for example, knows what it’s dealing with. From the carrier core, there’s a network of base stations (in the world of LTE, most often an Ethernet Layer 2 and fibre or microwave physical connections), and the base stations connect a pretty consistent set of user devices.
In the future the same network will be asked to support a proliferation of different device types with different bandwidth and network latency requirements.
The industry, Joshipura told El Reg, already had a good example of what can be achieved with a more intelligent edge – content delivery networks (CDNs).
“The CDN is a use-case of the distributed cloud architecture,” he said, “it’s latency-sensitive, and bandwidth sensitive over long distances.”
A mobile phone just connects to the network – as Joshipura noted, “That end device interface is well-standardised for the cell phone.”
“If you are an augmented reality company, or a drone company, or there’s a massive enterprise sensor network that needs to be brought into a framework – that framework doesn’t exist today.”
As well as network performance matched to its requirements, the device also needs background services like logging, billing, and upgrades.
But latency remains the big pain point.
“You have to decide where you put it [the intelligence – El Reg]”, Joshipura said. “Is the application multi-site? Is the user moving? What latency does it need?”
An application that needs mobility and low latency needs to be hosted in different places in the network, he explained – for example, on a compute node “in the basement” (to respond quickly) and in the cloud (so the user doesn’t lose access moving between base stations).
A healthcare application might exist entirely in the nearest infrastructure; but an augmented reality application is more likely to be hosted at the telco edge.
If that’s not open and standardised, telcos have to build the software stack themselves, which is why Akraino has big-name backers like AT&T, Intel, Wind River (whose participation survived its divestment by Chipzilla), Altiostar, Docker, Huawei, China Telecom, China Unicom, ZTE and others.
As it now stands, Joshipura said, Akraino focuses on three interfaces: a northbound API; a southbound interface to lower layers of the stack; and, between those, device APIs.
The northbound API’s role is to connect to device applications (through connectors to virtual network functions, VNFs), the cloud layer (OpenStack, Ceph and the like), the new EdgeX Foundry project for Internet of Things support, and the Acumos AI project.
The “middle” part, which provides the APIs to devices, is where “placement” decisions get made, Joshipura explained, because that’s the layer that understands the device, the application, and the infrastructure.
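Akraino’s actual placement interfaces are still being defined, but the kind of decision that middle layer makes – weighing an application’s latency tolerance and user mobility against the available hosting tiers – can be sketched roughly as follows. Every name, tier, and threshold here is invented purely for illustration; none of this is Akraino code:

```python
# Hypothetical sketch of an edge "placement" decision -- not real Akraino code.
# Tier names and latency thresholds are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    max_latency_ms: float   # latency the application can tolerate
    mobile: bool            # does the user move between base stations?

def place(workload: Workload) -> list:
    """Pick hosting tiers for a workload.

    Very latency-sensitive work goes to on-premises compute ("in the
    basement"); mobile users also get a telco-edge replica so the session
    survives hand-off between base stations; everything else can live in
    the central cloud.
    """
    tiers = []
    if workload.max_latency_ms <= 10:
        tiers.append("on-premises")        # respond quickly
    elif workload.max_latency_ms <= 50:
        tiers.append("telco-edge")
    else:
        tiers.append("central-cloud")
    if workload.mobile and "telco-edge" not in tiers:
        tiers.append("telco-edge")         # keep access during hand-off
    return tiers

print(place(Workload("ar-headset", max_latency_ms=8, mobile=True)))
# -> ['on-premises', 'telco-edge']
```

The point of the sketch is only that the decision needs inputs from all three of the things Joshipura names – device, application, and infrastructure – which is why it belongs in the layer that can see all of them.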
The other point of Akraino is that, in keeping with the overarching requirement of a world full of vast infrastructure, that placement decision has to be zero-touch. ®