
Playing nice with a host of tech-pushers pushed OpenStack close to the edge


A cross-vendor framework for edge functionality is no small task
If one thing stood out at OpenStack’s Vancouver summit in May, it’s that the open-source project isn’t just about data centre-based cloud computing any more.
When Rackspace and NASA founded OpenStack eight years back, they wanted it to drive more efficient computing in the data centre by delivering cloud computing resources on standard hardware.
Since then, OpenStack has become commonplace for homegrown, on-premises cloud infrastructure. 72 per cent of the respondents to the OpenStack Foundation’s October 2017 survey used it that way, and that’s up from 62 per cent in 2015.
Today, the OpenStack Foundation sees hardware architectures diversifying beyond commodity x86 platforms into GPUs, FPGAs and Arm-based systems. It also sees approaches to software becoming more complex as containers, microservices and serverless computing take hold, and it sees computing happening increasingly at the edge, outside the data centre.
Alan Clark, chair of the OpenStack Foundation’s board and CTO for SUSE, tells us that it will need help to do that.
“OpenStack is key to that open infrastructure, but we recognised two years ago that not all technologies are going to be developed within this community, and we shouldn’t try to push for that,” he says. Instead, it must play well with others and tap into complementary projects from other industry associations and open-source groups. He calls those “adjacent communities”.
“There are good examples around storage and networking,” he says, highlighting OPNFV as one of the first groups that it worked with.
But these working relationships aren’t always smooth. “Every community has a personality. Every community works differently, and has a different terminology.” The early days of collaboration with OPNFV were rocky. “They were frustrated, because their blueprints – their requests for features – were getting a high rate of rejections and they didn’t understand why. It was mostly down to the differences in how communities work.”
OpenStack and OPNFV had to learn how to communicate, and it took time to get rejection rates down and align the two groups.
As OpenStack’s community tackles more technologies, it’ll have to build and navigate more relationships. One of its destinations involves perhaps more cross-group collaboration than any other: the move to edge computing. OpenStack’s advocates want the project to power a devolution of computing power to the edge – peripheral data centres and devices, away from the central hub of the mega data centre. The challenge there is defining just what the edge is.
Beth Cohen, Verizon cloud technology strategist, opened this year’s edge discussions by pitching a case for her company’s virtual network services product – effectively OpenStack in a tiny box. This was the product of much discussion. “We spent two days arguing about what is edge computing,” Cohen says. An edge computing committee spent months writing about it and eventually came up with a whitepaper definition.
In summary, OpenStack’s concept of the edge involves distributed nodes, often with intermittent connectivity and latency concerns. But Cohen thinks that the small, low-powered, sensor-type devices that we often think about as part of the IoT might be too small to be included. “We need computing capability,” she says.
Part of the complexity comes in the broad set of applications for edge computing. OpenStack’s edge computing committee sees a range of use cases spanning retail to manufacturing.
“There are a lot of open-source groups focused on edge, because there are a whole bunch of use cases,” says Clark. “That’s where edge is struggling a bit, because there are so many use cases and you need to get focused and figure out which ones you’re trying to target.”
Telcos like Verizon and AT&T have been the ones primarily driving these edge discussions, so it’s no surprise that they’re focusing on mobile networking and 5G rollouts. 5G base stations will operate at very high frequencies over very short ranges, meaning there will be many more of them. Efficient equipment using network function virtualisation will be a key tool in that rollout, and it will be important to move functionality close to cellular users because low-latency, high-bandwidth applications like augmented reality are likely to feature early on.
The telcos don’t want to reinvent edge technologies for different use cases, so they’re using a building block approach for delivering edge-based systems. According to Cohen: “A composable structure of modules was top of mind for us.”
No matter what sits at the edge, it’s probably going to be a long way from your engineers, so automation for remote provisioning and configuration becomes important. That automation must span IoT, networking, cloud infrastructure and application software provisioning.
A critical part of OpenStack’s edge story is Akraino, the edge stack project launched by AT&T, Intel and Wind River under the auspices of the Linux Foundation in February.
Akraino seeks to pull together technologies from different open-source initiatives to build a common stack. That stack includes tooling for continuous integration and deployment and SDKs for edge application development, along with middleware and APIs for integration with third-party edge providers.
Declarative provisioning and management of these systems will be a big part of the automation process. That means stating exactly how these systems will spin up, and which resources they will access, in a pre-written file. With thousands of devices ranging from base stations to drones all running various bits of these edge stacks and often disconnected for periods of time, having an admin do it from a centralised console won’t always be an option.
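To make that concrete, here is a minimal sketch of the declarative idea, independent of any particular stack: the desired state of each node lives in a pre-written document, and a small reconciliation loop drives reachable nodes towards it while simply skipping the ones that are offline. The node names, services and resource fields are hypothetical, and real systems express this in version-controlled YAML rather than a Python dict.

```python
# Minimal, hypothetical sketch of declarative provisioning for edge nodes.
from dataclasses import dataclass, field

# The "pre-written file": what each node should run and the resources it may
# use. A real deployment would keep this in version-controlled YAML.
DESIRED_STATE = {
    "basestation-042": {"services": ["vran", "telemetry"], "cpu_cores": 4},
    "retail-pop-17": {"services": ["pos-cache"], "cpu_cores": 2},
}

@dataclass
class Node:
    name: str
    reachable: bool = True
    services: list = field(default_factory=list)
    cpu_cores: int = 0

def reconcile(node: Node, desired: dict) -> None:
    """Drive one node towards its declared state; skip it while it is offline."""
    if not node.reachable:                      # edge nodes drop off the network
        print(f"{node.name}: unreachable, will retry later")
        return
    for svc in desired["services"]:
        if svc not in node.services:            # start anything that is missing
            node.services.append(svc)
            print(f"{node.name}: started {svc}")
    node.cpu_cores = desired["cpu_cores"]       # apply the declared resource budget

if __name__ == "__main__":
    fleet = [Node("basestation-042", reachable=False), Node("retail-pop-17")]
    for node in fleet:
        reconcile(node, DESIRED_STATE[node.name])
```

The operator edits the document rather than the device: when a disconnected base station comes back online, the loop runs again and converges it, which is what makes the approach workable across thousands of nodes.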
Declarative provisioning happens to be a key feature in another project, also from AT&T, called Airship, which was announced at the conference. Airship is used to automate the creation of clouds on bare-metal systems out of the box using Kubernetes-based containers. The idea is to spin up a vanilla container-based machine from nothing, using pre-baked instructions. The promise of Airship is that it will also offer a single workflow for managing the lifecycle of the cloud infrastructure and its applications.
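The same pre-baked, declarative approach shows up one layer down once Kubernetes is in place. The sketch below is not Airship’s own interface (Airship consumes its own YAML documents); it is a generic illustration using the official Kubernetes Python client, in which a workload description built ahead of time – here a placeholder nginx deployment – is handed to the cluster, and the control plane works out what to start or reschedule so that reality matches the declaration.

```python
# Generic illustration of handing a pre-baked workload declaration to
# Kubernetes; this is not Airship's own tooling or document format.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig for the freshly built cluster

# The "pre-baked instructions": a Deployment describing the desired workload.
# The name, image and replica count are placeholder choices for illustration.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="edge-demo"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "edge-demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "edge-demo"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]
            ),
        ),
    ),
)

# The cluster reconciles towards the declaration from here on.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```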
Airship will form part of the Akraino stack, managing software provisioning as just one part of the automation process. It draws on OpenStack-Helm, another project for deploying OpenStack and its services on Kubernetes using Helm – the package manager that has just unhooked itself from Kubernetes’ apron strings and been accepted as a project by the Cloud Native Computing Foundation.
Intel and Wind River have also submitted an edge-related project upstream to OpenStack. Called StarlingX, it is a hardened cloud infrastructure software stack for managing low-latency edge applications, with a focus on high availability. It also plugs into Akraino.
If nothing else, these developments show that vendors and operators are serious about building a cross-vendor framework for edge functionality within OpenStack. Clark reckons Verizon, AT&T, Intel and Wind River are driving these developments, and thus open source, based on commercial need.
“There is an organic quality to open-source software and it has a lot to do with projects living and dying based on interest,” he says.
