There’s a lot of hype around both Kubernetes and edge computing, so it shouldn’t be a surprise that vendors and cloud providers are offering products and services that combine the two. But what is edge computing? And can you run Kubernetes at the edge?
Finding Your Edge
Depending on who you ask (and whether they have something to sell you), you’ll get different definitions of “the edge.” To my mind, a simple definition of the edge is:
- Locations that are close to end users to minimize latency
- Locations not staffed by engineers
Doesn’t that sound a lot like what we’ve had for years? For example, consider an Active Directory server or a print server that’s located at a branch or remote office. The whole idea of running these services locally is to minimize latency when users log in, get their file shares, and so on. Or consider a data center where you rent a rack/cage. Chances are you aren’t there every day supporting those servers.
Another edge example is a Content Delivery Network (CDN). A CDN distributes content to locations close to users so it can be served locally with minimal latency. Online shopping is a typical use case. Say a business hosts its website in the US but has customers in both the US and the UK. When UK customers access the US-hosted site, they may experience noticeable latency. With a CDN, the business can serve static content from a location in the UK to its local customers. This is of course a simplified example; to use a CDN most effectively, content would likely be distributed across multiple data centers in each country.
When you think about it, the edge as a category has been around for a long time. Now let’s consider Kubernetes at the edge.
Kubernetes At The Extreme Edge
On an upcoming episode of the Kubernetes Unpacked podcast, I’ll talk with guest Alan Hohn about how he’s running an edge deployment of Kubernetes. Alan works for a defense contractor that’s using Kubernetes in specific physical locations to support applications that interact with fighter jets. The fighter jets are “fed” application information from Kubernetes clusters.
In this case, the Kubernetes clusters connect only to the fighter jets. The only time the clusters connect to the public internet is for scheduled updates. There’s a comparable use case that has been documented publicly; I don’t know whether it’s tied to what Alan is working on, but it’s a similar implementation.
Managed Kubernetes Workloads
Cloud providers are also getting in on the edge. Take Azure Stack Edge as an example: it’s essentially Hardware-as-a-Service that supports these and other technologies:
- IoT
- Azure Storage Accounts that have a local cache
- Containers
- Kubernetes
From the looks of it, it’s just servers: servers that can (and do) run Azure Stack HCI, which is a hybrid cloud platform. The question then becomes: is hybrid, in this case, edge? Given the edge examples above, it could make sense from that perspective.
The Kubernetes ecosystem is also developing software platforms for edge deployments, designed to run on individual servers or small clusters. For example, if you search “Kubernetes edge”, you’ll see mentions of K3s and MicroK8s, which are small-footprint, production-ready Kubernetes distributions. They’re lightweight enough to run on-premises, in the cloud, or on modest server hardware, and both frequently get categorized as “edge”.
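To make the “lightweight” claim concrete, here’s a minimal sketch of bootstrapping a single-node K3s cluster on an edge box, with MicroK8s shown as an alternative. The commands follow the projects’ published quick-start flows; verify them against the current K3s and MicroK8s docs before running them on real hardware.

```shell
# K3s: single binary that runs the control plane and kubelet together,
# installed as a systemd service by the project's install script.
curl -sfL https://get.k3s.io | sh -

# K3s bundles kubectl, so you can check the node came up without
# installing anything else:
sudo k3s kubectl get nodes

# MicroK8s alternative (Ubuntu/snap-based systems):
# sudo snap install microk8s --classic
# sudo microk8s kubectl get nodes
```

Either way you end up with a conformant Kubernetes API on a single small machine, which is exactly the deployment shape most “edge” pitches assume.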
The Edge Of Reason
At present, “edge” for Kubernetes is an imprecise term that’s still being defined. As you consider your business requirements and deployment strategies, keep an open mind about what edge actually means.
I believe edge will be one of the biggest buzzwords of 2023. Any time I ask someone to define it, the answer comes down to technologies, methods, and practices that we’ve seen for years. There’s simply no single right definition.