Large-Scale Kubernetes Vulnerability Highlights Need for Compensating Security Controls

A critical vulnerability in Kubernetes, the open-source container orchestration software, was reported last week. With a CVSS score of 9.8, the vulnerability should be of great concern to anyone using Kubernetes (k8s), which by most accounts is a lot of people. 451 Research estimates the current container market at around $1.5 billion USD, more than double its size just two years ago. As with any technology, it’s not surprising that bugs and vulnerabilities are discovered and disclosed, and this announcement serves as a reminder that organizations must have compensating controls in place to tackle problems as they arise, whether due to a software flaw or any other vulnerability that can lead to system compromise.

In the case of k8s, an attacker could exploit the vulnerable API server to gain unrestricted remote access to a backend server, escalate privileges, and then send illicit requests to hosts, applications, or services in the same infrastructure. This could result in the installation of malware or botnets, exfiltration of data, or tampering with production workloads. The potential damage is essentially unlimited; once a backend connection is established, all further requests are automatically authenticated with the k8s API server’s TLS credentials. Installed “as is,” there is no control in place to check the authenticity of new requests, which is why k8s (and any other container platform) should be implemented with supplemental security controls.
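The core of the problem described above is a familiar anti-pattern: authenticate once at connection setup, then trust the pipe forever. The sketch below is a hypothetical illustration of that pattern (it is not Kubernetes code, and all names are invented): a proxy leaves a tunnel open even when the upgrade request was rejected, and every later request is forwarded under the proxy's own privileged identity without re-checking.

```python
# Hypothetical sketch of "authenticate once, then trust the pipe."
# Not Kubernetes source code; names are illustrative only.

class NaiveUpgradeProxy:
    def __init__(self, backend_identity: str = "apiserver-tls-client"):
        self.backend_identity = backend_identity  # privileged credential
        self.tunnel_open = False

    def handle_upgrade(self, request_is_authorized: bool) -> None:
        # Flawed pattern: the tunnel is left open even when the
        # upgrade request itself fails authorization.
        self.tunnel_open = True  # should only happen when authorized

    def forward(self, request: str) -> str:
        # Once the tunnel exists, requests are never re-authenticated;
        # they ride the proxy's privileged backend identity.
        if not self.tunnel_open:
            raise PermissionError("no tunnel established")
        return f"{request} [as {self.backend_identity}]"

proxy = NaiveUpgradeProxy()
proxy.handle_upgrade(request_is_authorized=False)  # rejected, yet tunnel stays open
print(proxy.forward("exec into any pod"))
# → exec into any pod [as apiserver-tls-client]
```

The fix, conceptually, is either to tear down the tunnel on a failed upgrade or to re-authorize each request on its own merits rather than inheriting the proxy's credentials.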

Container environments not suited for trust

Not coincidentally, trust models and overly permissive controls have been the problem with network security for years. The original “castle-and-moat” design was fine for closed, on-premises networks with distinct “insiders” and “outsiders,” where it was easy to identify who and what those entities were. Today, and especially with dynamic environments like containers, a perimeter-based security solution doesn’t work. Too many applications never cross an external perimeter. A once-authenticated and trusted user, host, or system request can be exploited after initial verification. And continuous environmental change means that relying on the network as the source of control is no longer viable.

This is why the Kubernetes flaw is so concerning. 

A new type of security control

Edgewise’s architects and engineers have thought long and hard about this problem: What happens after an initial compromise? How can we stop attacks from progressing once a threat actor has access to and control of system resources?

Daniel Einspanjer, Chief Systems Architect at Edgewise, says that companies’ container usage can land them in hot water if left unchecked. Even without a known software vulnerability, container misconfigurations are commonplace, and the fundamental benefits of containers—speed, scale, agility—become the very things that can grease the wheels of a spreading attack. As a result, Einspanjer and team have built technology well-suited to protect against vulnerabilities like CVE-2018-1002105, the Kubernetes flaw.

Some microservices have to be able to receive connections from the outside world, Einspanjer explains. In most cases, after the connection is authenticated it remains “trusted” to send and receive communications to other hosts, applications, etc. That is risky, though. Companies can’t stop allowing inbound traffic (or the system breaks), but they can use Edgewise to configure services so that they’re not allowed to establish outbound connections to any unapproved hosts. In this way, says Einspanjer, administrators can stop any attacks-in-progress from spreading.
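The policy Einspanjer describes is a default-deny outbound allowlist: inbound traffic still flows, but a service may only open outbound connections to destinations explicitly approved for it. A minimal Python sketch of that decision logic follows; the service and host names are invented for illustration and do not reflect Edgewise's actual configuration format.

```python
# Sketch of a default-deny outbound policy (names are hypothetical).
# A service keeps accepting inbound connections, but any outbound
# connection to an unlisted destination is refused.

APPROVED_OUTBOUND: dict[str, set[str]] = {
    "orders-service": {"payments-service", "inventory-service"},
}

def allow_outbound(source_app: str, dest_host: str) -> bool:
    # Default deny: unknown apps and unlisted destinations are blocked.
    return dest_host in APPROVED_OUTBOUND.get(source_app, set())

assert allow_outbound("orders-service", "payments-service")     # approved path
assert not allow_outbound("orders-service", "evil-c2.example")  # blocked
assert not allow_outbound("unknown-app", "payments-service")    # blocked
```

The key design choice is that the empty set is the starting point: a compromised service gains no outbound reach unless someone deliberately granted it beforehand.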

In a typical attack, like the ones predicted for the k8s vulnerability, the most straightforward method of compromise is to establish an outbound connection to a command-and-control host, download a malicious payload, then infect the local host. Once the payload is installed, it runs as a new process on the system that establishes its own outbound connections to the command-and-control server; it can also propagate via lateral movement to other hosts in the same infrastructure. Because trusted applications in the infrastructure usually need several outbound communication paths to operate properly, such as package repositories and antivirus definition updates, it is common for this type of communication to be left unchecked in a traditional network design, allowing an attack to continue.

Using Edgewise, however, administrators can create a perimeter which includes the hosts (“Collections”) managed by an Edgewise agent. Collections include logical groupings of applications that need protection from unauthorized access and communication. “What you’re doing with Edgewise,” says Einspanjer, “is putting an outbound shield on all hosts inside this perimeter to prevent outbound connections by default, then creating an inbound collection consisting of only approved applications on managed hosts. In effect, what you’re saying is, ‘the Kubernetes API service is only allowed to communicate with other Kubernetes API services as an API connection.’ Kubernetes will not be able to make any outbound connection to any unmanaged hosts. Further, nothing besides approved applications is allowed to make outbound connections anywhere.” As a result, malicious communications will be stopped in their tracks. Malware may still exist in the system, but it can’t do any more damage because it is not permitted to access an unmanaged host. An attacker may still gain access to the container environment through the API, but a check on the ability to send requests is made through the Edgewise agent. When the attacker’s fingerprint fails to match system requirements, malicious communication can’t propagate.
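Einspanjer's example can be restated as a three-part rule check: a request passes only if the source application, destination application, and connection type all match an approved rule between managed hosts, with everything else denied. The sketch below illustrates that model; the rule format, host names, and function names are invented for this post and are not Edgewise's actual API.

```python
# Illustrative model of the collection-based check described above.
# Names and rule format are hypothetical, not Edgewise's product API.

MANAGED_HOSTS = {"node-a", "node-b"}

# (source app, destination app, connection type) tuples that are approved.
RULES = {("kube-apiserver", "kube-apiserver", "api")}

def allow(src_host: str, src_app: str,
          dst_host: str, dst_app: str, conn_type: str) -> bool:
    # Unmanaged hosts sit outside the outbound shield entirely.
    if src_host not in MANAGED_HOSTS or dst_host not in MANAGED_HOSTS:
        return False
    # Default deny: only explicitly approved app-to-app paths pass.
    return (src_app, dst_app, conn_type) in RULES

assert allow("node-a", "kube-apiserver", "node-b", "kube-apiserver", "api")
assert not allow("node-a", "kube-apiserver", "c2-host", "dropper", "tcp")   # unmanaged dest
assert not allow("node-a", "malware", "node-b", "kube-apiserver", "api")    # unapproved app
```

Note that the second and third checks capture the two failure modes in the quote: malware cannot reach an unmanaged command-and-control host, and an unapproved process cannot impersonate an allowed path even between managed hosts.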

Re-architecting the network is not necessary

Other industry advice says that networking and development teams need to make API security a priority. While Edgewise does not disagree with any recommendation for improving security, trying to ensure every API is written and configured 100% properly 100% of the time is resource-intensive and (quite frankly) not possible. Architectural changes take time and money. Edgewise doesn’t require any changes to companies’ networks. Once the agent is installed (which can be done in a matter of minutes), a network topology map is built and users can begin to build Collections which protect sensitive applications, databases, hosts, and services.

To learn more about how to secure your organization's container instances with Edgewise, contact us today for a customized demo.

Written by Katherine Teitler, Director of Content

Katherine Teitler leads content strategy and development for Edgewise Networks. In her role as Director of Content she is a storyteller, a translator, and a liaison between sales, marketing, and the customer. Prior to Edgewise, Katherine was the Director of Content for MISTI, a global training and events company, where she was in charge of digital content strategy and programming for the company's cybersecurity events, and the Director of Content at IANS, where she built, managed, and contributed to the company's research portal.