Extending Zero Trust into IaaS

Any organization making use of cloud resources knows that shared responsibility is the name of the game when it comes to security. Both Amazon’s Shared Responsibility Model and Microsoft’s “Shared Responsibilities for Cloud Computing” are fairly explicit that the service provider and the customer each have a role to play in ensuring assets are protected. The cloud provider is responsible for the security of the cloud environment itself (i.e., the services and components being provided to the customer), while the customer is responsible for the security of the assets (in an Infrastructure-as-a-Service, or IaaS, context, data and workloads) deployed into that service.

Normally, this shared responsibility approach is relatively straightforward: the provider deploys a common set of generic security mechanisms that create a secure “playing field” for customer workloads, and the customer in turn implements security measures that make sense for their usage. For example, a cloud provider might deploy a common detection capability and a common log repository, while the customer security team ensures a robust architecture for virtual hosts in IaaS, robust application design, and any additional security processes or procedures that make sense.

As organizations adopt “zero trust” though, “shared responsibility” can lead to confusion. Should the customer trust the cloud provider? Or should they consider the provider a potentially hostile actor? Will the technical enforcement strategies they use in-house operate in the cloud? If not, what can they do instead? The truth is that, as with most things, the specifics are complicated, but the same zero trust mindset that serves the organization so well internally can extend directly into IaaS and provide value.

Zero trust and the cloud

Zero trust begins with the presupposition that everything in the environment is potentially hostile or already compromised. In traditional, perimeter-based strategies, technology ecosystems are built around the assumption that there’s a clearly delineated boundary: peaceful, safe assets on one side and pandemonium on the other.

As environments have increased in complexity, this model breaks down. Consider, for example, an environment with hundreds of hypervisors that uses container engines to modularize applications into hundreds or thousands of little pieces; that environment becomes very complicated very quickly. Add to this the fact that attacker activity is becoming more sophisticated, with today’s hackers often favoring “low and slow” efforts that allow them to dwell unseen inside the network for months or years, and you start to see why the idea of a “trusted” perimeter becomes increasingly less useful.

Because of this, zero trust begins with the assumption that everything, regardless of source, is untrusted. For example, if you have a rack in the data center, the device in the topmost slot would require the same level of “proof” for requests originating from the devices below it as it would for requests originating overseas. Since it can’t assume that the network itself is trusted, it might employ encrypted communications even with devices that are serving up different components of the same application.
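
To make this concrete, here is a minimal sketch (in Python, using the standard library’s ssl module) of an application component that demands the same cryptographic proof of identity from every peer, whether the connection comes from the next slot in the rack or from across the world. The certificate and CA file names are placeholders for whatever your internal PKI issues; this is an illustration of the idea, not a hardened implementation.

```python
import socket
import ssl

# Hypothetical file names; in practice these would come from your internal PKI.
SERVER_CERT = "app-server.crt"
SERVER_KEY = "app-server.key"
INTERNAL_CA = "internal-ca.pem"

# TLS context for an "internal" application component.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(SERVER_CERT, SERVER_KEY)
ctx.load_verify_locations(INTERNAL_CA)

# The zero trust part: every client must present a valid certificate,
# no matter where its traffic originates. No "trusted" subnets.
ctx.verify_mode = ssl.CERT_REQUIRED

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with ctx.wrap_socket(listener, server_side=True) as tls_listener:
        # The handshake fails for any peer without a valid client certificate.
        conn, addr = tls_listener.accept()
        print(f"Authenticated peer {addr}: {conn.getpeercert()['subject']}")
        conn.close()
```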

This model is desirable because it helps restrict an attacker’s lateral movement. In a zero trust world, just getting access to the internal network doesn’t give an attacker an advantage or an on-ramp to access other nodes within the same network. Moreover, because no assumptions are made about the environment in which a node will live, workloads become more safely portable. Why? Because the same assumption of “zero trust” extends to any location where that workload might be moved. You can’t run into trouble, for example, because a workload isn’t sufficiently hardened to live in a location with fewer or different security controls than the one it was built for.


But what happens when you introduce the cloud? Here is where zero trust strategies can get a little complicated. Why? Because of the shared responsibility element: since we’re only responsible for part of the security mechanisms in a cloud context, elements of the security model are out of our hands. It’s also tempting to think that cloud environments are already “zero trust” by virtue of the provider’s need to support multi-tenancy. Because cloud providers need to design in protections so that one tenant can’t attack another, their design assumption has to be that any given entity is potentially malicious. In other words, a cloud provider needs to design around the assumption that everything is hostile, much as we do in a zero trust paradigm.

While these things are true, keep in mind that segmentation, while an important element of zero trust, is not the only element. In fact, one of the most liberating aspects of zero trust is freeing yourself from (potentially untrue) assumptions about the substrate on which applications and network services run. But the shared responsibility “covenant” between cloud customer and cloud provider implies exactly that: that there is a context of security controls within which your workloads will operate. What happens if those assumptions change? For example, what if you move the workload from one cloud provider to another? Unless you’ve designed in resilience around this, those assumptions could surface and bite you just the same way that assumptions about the “trusted” status of internal networks did in the past.

Extending zero trust into the cloud

With this in mind, it can be beneficial to extend the core premise of zero trust into cloud relationships. In addition to assuming that everything on the cloud provider’s network is by definition untrusted (a useful assumption whether you’re a cloud tenant or the provider itself), this means extending a level of skepticism to the actions of the provider and to any security mechanisms that are outside our direct control.

Doing this has a few benefits. First, we all know that the segmentation models employed by container and virtualization technologies are not foolproof. Over the past few years we’ve seen side-channel information-gathering attacks like Spectre and Meltdown that can potentially undermine the segmentation between tenants, and issues like Rowhammer that can likewise erode that segmentation boundary. So building in some level of protection for the case where the segmentation model becomes compromised can be a useful strategy in and of itself.

But beyond this, designing in this way helps foster portability. Because we’re designing in resilience at a higher level of the application stack, and not solely relying on the security measures of the cloud provider (within the area of the shared responsibility matrix that is theirs), we are somewhat insulated against potential changes should they occur. For example, should we decide to change providers, spin up duplicates in other providers (or on premises), or otherwise change the assumptions within which the workload will live, our security posture stays robust. 

Putting this into practice

So how do we do this? A useful place to start is with application threat modeling. For those who are not familiar with it, application threat modeling is a technique that deconstructs an application into its various components and looks at each of those components from an attacker’s point of view. It can be particularly useful in a zero trust architecture because it lets us examine the interaction points between components and services to find the ways the application might be subverted, and helps us identify where we can introduce countermeasures to thwart those attacks. In a cloud context, it also helps us differentiate between areas where we rely on a security mechanism supplied by the provider and areas where the mechanism sits at a higher level of the stack. The point? Threat modeling is a good idea anyway, but it can bring tremendous value when you have applications that span multiple IaaS and/or PaaS relationships.
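
As a loose illustration of that last point, the hypothetical Python fragment below (not any particular threat modeling tool; all of the component names are made up) walks the interaction points of an imaginary application and flags every control that is supplied by the provider rather than controlled by us. Those are the assumptions worth revisiting if the workload ever moves.

```python
from dataclasses import dataclass

# A deliberately simple model: each interaction point between components
# records the control that protects it and who supplies that control.
@dataclass
class InteractionPoint:
    source: str
    destination: str
    control: str
    supplied_by: str  # "provider" or "customer"

# Hypothetical application spanning an IaaS provider and our own code.
model = [
    InteractionPoint("web tier", "app tier", "security group rules", "provider"),
    InteractionPoint("app tier", "database", "mutual TLS", "customer"),
    InteractionPoint("app tier", "object storage", "provider-managed encryption", "provider"),
    InteractionPoint("batch jobs", "database", "application-level encryption", "customer"),
]

# Flag every place where we are leaning on a provider-supplied mechanism;
# these are the assumptions to revisit if the workload ever moves.
for point in model:
    if point.supplied_by == "provider":
        print(f"Review: {point.source} -> {point.destination} relies on {point.control}")
```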

Beyond this, it’s also useful to examine technologies that can enhance separation. Generally speaking, technologies that let us securely establish trust in the connecting party (by authenticating and authorizing it) and then protect the traffic exchanged are particularly valuable. For example, software-defined perimeter (SDP) can help create secure virtual enclaves (a “black network”) within an IaaS context and ensure that only devices that have authenticated can participate in that connectivity. SDP is a Cloud Security Alliance specification designed specifically for this purpose; there’s even an open source implementation for those wishing to try it out.
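
SDP implementations vary, and the sketch below is not one of them; it simply illustrates, from the client side and in Python, the authenticate-before-connect pattern described above: a device that cannot present a certificate issued by the enclave’s certificate authority never gets application-level connectivity at all. The gateway host name and file names are placeholders.

```python
import socket
import ssl

# Placeholder names; a real deployment would use your own PKI and gateway.
GATEWAY_HOST = "enclave-gateway.example.internal"
GATEWAY_PORT = 8443
CLIENT_CERT = "device.crt"
CLIENT_KEY = "device.key"
ENCLAVE_CA = "enclave-ca.pem"

# The connecting device proves its identity with a client certificate and
# verifies the gateway against the enclave CA before any traffic flows.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_verify_locations(ENCLAVE_CA)
ctx.load_cert_chain(CLIENT_CERT, CLIENT_KEY)

with socket.create_connection((GATEWAY_HOST, GATEWAY_PORT)) as sock:
    with ctx.wrap_socket(sock, server_hostname=GATEWAY_HOST) as tls:
        # Only after mutual authentication does application traffic start.
        tls.sendall(b"hello from an authenticated device\n")
```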

Even encryption of data at rest can be valuable in helping us to achieve these goals. For example, by encrypting data that is stored in the cloud (either in application code for PaaS or at lower levels of the stack for IaaS), we can help to ensure that the data in our cloud relationships remains secured even if assumptions change. 
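
As a small sketch of what that can look like in practice, the fragment below uses the third-party Python cryptography package (an assumption on my part; any client-side encryption library would do) to encrypt data under a key we hold before it ever reaches the provider’s storage, so the protection travels with the data no matter where the object later lives.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# The key is generated and held on our side; the cloud provider only ever
# sees ciphertext, so the protection survives a change of provider.
key = Fernet.generate_key()
f = Fernet(key)

record = b"customer data destined for cloud object storage"
ciphertext = f.encrypt(record)

# upload_to_object_storage(ciphertext)  # hypothetical upload step
# ...later, after downloading the object from whichever provider holds it:
assert f.decrypt(ciphertext) == record
```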

The specifics here obviously vary depending on your needs and usage, and we’ve only covered a few of the potential options available to you. That said, extending the same techniques and approaches you employ for zero trust in internal environments can and does make sense in the cloud. It’s not as simple as just “turning it on” (just as is true for internal environments): while cloud relationships are designed around a strong and robust segmentation boundary, there can be advantages to extending the lack of trust to some of the service provider’s security mechanisms. This is not to imply that those mechanisms are not adequate or robust, just that doing so where possible helps ensure the workload stays protected even if you migrate it within the same provider, move it to another provider, or spin up a new copy of it somewhere else entirely.

Written by Ed Moyle, General Manager and Chief Content Officer

Ed Moyle is currently General Manager and Chief Content Officer at Prelude Institute. Prior to joining Prelude, Ed was Director of Thought Leadership and Research for ISACA and a founding partner of the analyst firm Security Curve. In his 20+ years in information security, Ed has held numerous positions including: Senior Security Strategist for Savvis (now CenturyLink), Senior Manager with CTG's global security practice, Vice President and Information Security Officer for Merrill Lynch Investment Managers, and Senior Security Analyst with Trintech. Ed is co-author of "Cryptographic Libraries for Developers" and a frequent contributor to the Information Security industry as author, public speaker, and analyst.