Software and application development are integral to many companies’ existence. Software companies aside, every retailer, bank, hospitality chain … you name it … is developing applications that make customer, partner, and employee experiences better, faster, and more efficient. Development teams, for their part, have largely migrated to container environments for the productivity benefits that help them meet these demands. One of the benefits container companies tout to their customers is security: developing in an off-premises environment decreases organizational friction.
Except when it doesn’t.
Earlier this week, cloud security vendor Lacework published a report showing that the administration consoles of more than 22,000 container orchestration and application programming interface (API) management systems were exposed to the internet. Vulnerable systems include Docker, Kubernetes, OpenShift, and a host of other well-known names. While most of the container orchestration tools were found to require credentials for access, the consoles themselves expose organization- and function-specific information that could be useful to an attacker. Further, the Lacework report said that slightly more than 300 companies hadn’t configured their administration tools to require login credentials at all. Compounding the problem, additional compensating security controls, such as firewalls or VPN access, were absent. More than 95% of the discovered instances were hosted on Amazon Web Services (AWS) infrastructure. Perhaps the companies running these containers assumed the security provided by the cloud vendor was enough.
Keeping bad guys out isn’t enough
Unfortunately, this is a common theme with the deployment of cloud-based tools and services. Though many cloud providers do an excellent job of offering security controls, a significant portion of those controls are based on IP addresses, ports, and protocols, all of which can be circumvented by wily attackers. (In the case of the container exposures, anyone with decent Shodan search skills can find the container administration dashboards, making adversaries’ lives easier.) This research highlights the need to protect the workloads communicating in the cloud rather than just the environment itself. Strong authentication should be a no-brainer, but even when credentials are set up, brute-force and dictionary attacks remain incredibly common without multi-factor authentication (which is still a stretch for many organizations). Once an adversary has stolen or correctly guessed valid credentials, the potential for abuse is practically unlimited: inside a container environment, an attacker can modify workloads, inject malware, delete data, uncover intellectual property, and more. It’s like handing cyber criminals the keys to the company’s kingdom.
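To make the exposure modes concrete, here is a minimal Python sketch, my own illustration rather than Lacework’s methodology or any vendor’s tooling, of the distinction the report draws: consoles that answer an unauthenticated request with a credential challenge versus the roughly 300 that serve the dashboard outright. The endpoint path is a hypothetical example.

```python
# Illustrative sketch only -- not Lacework's scanner. Classifies what an
# unauthenticated HTTP probe of an orchestration console returns.
from http.client import HTTPSConnection


def classify_console(status_code: int) -> str:
    """Map an unauthenticated probe's HTTP status to an exposure level."""
    if status_code in (401, 403):
        return "auth-required"  # reachable, but credentials are enforced
    if 300 <= status_code < 400:
        return "redirect"       # often a login page; still reveals the console
    if status_code == 200:
        return "open"           # dashboard served with no credentials at all
    return "unknown"


def probe(host: str, path: str = "/api/") -> str:
    """Issue one unauthenticated GET and classify the result.

    The path is a placeholder; real consoles expose product-specific routes.
    """
    conn = HTTPSConnection(host, timeout=5)
    conn.request("GET", path)
    status = conn.getresponse().status
    conn.close()
    return classify_console(status)
```

Even the “auth-required” outcome is not benign: as noted above, the console’s mere presence leaks organization- and function-specific information an attacker can use.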
Imagine that an attacker is able to drop malicious code into an application your customers use, and that malware steals customers’ login credentials, financial information, or personally identifiable information. The potential for widespread damage (not to mention legal and compliance ramifications) is huge. Your cyber risk has shot through the roof.
Moving protection directly to workloads is the best way to ensure attackers can’t tamper with them. Fingerprinting the identity of applications and services in the cloud, then disallowing any altered state from communicating over the network, means that even if an attacker gets in, they can’t inflict harm on your customers or your company. Your organizational risk automatically decreases.
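The idea can be sketched in a few lines of Python. This is a conceptual illustration of workload fingerprinting, not Edgewise’s actual implementation: here the “fingerprint” is simply a SHA-256 hash of the workload binary, and any altered build fails the check before it is allowed to communicate.

```python
# Conceptual sketch of workload fingerprinting -- not a vendor implementation.
# The identity attribute is derived from the workload itself, not its IP/port.
import hashlib


def fingerprint(binary: bytes) -> str:
    """Derive a stable identity for a workload from its contents."""
    return hashlib.sha256(binary).hexdigest()


def may_communicate(binary: bytes, approved: set) -> bool:
    """Allow network traffic only for workloads whose fingerprint is approved."""
    return fingerprint(binary) in approved


# Approve the known-good build at deployment time.
approved = {fingerprint(b"original application build")}

# The unaltered workload communicates; a tampered one is blocked.
assert may_communicate(b"original application build", approved)
assert not may_communicate(b"original build + injected malware", approved)
```

In practice an identity would combine more attributes than a single hash, but the principle is the same: even an attacker holding valid console credentials cannot make a modified workload talk on the network.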
What about my developers?
Security teams can’t get in the way of development. It’s a fight security won’t win (see: security’s attempts to block cloud usage circa 2008). With Edgewise, protection doesn’t interrupt the development process, and it doesn’t attempt to jury-rig network controls into an application environment. Security policies are based on identity attributes that travel with the workload and automatically adapt to DevOps workflows. No interference, no breaking applications after they’re deployed, and less friction between security and development teams. And, of course, insurance against malicious adversaries who find cloud-based vulnerabilities on the internet.