But what happens when you introduce cloud? Here is where zero trust strategies can get complicated, because of the shared responsibility element: since we’re responsible for only part of the security mechanisms in a cloud context, elements of the security model are out of our hands. Put another way, it’s tempting to think that cloud environments are already “zero trust” by virtue of the provider’s need to support multi-tenancy. Since cloud providers must design in protections so that one tenant can’t attack another, their design assumption has to be that any given entity is potentially nefarious. In other words, a cloud provider needs to design around the assumption that everything is hostile – much like we do in a zero trust paradigm.
While that’s true, keep in mind that segmentation, though an important element of zero trust, is not the only element. In fact, one of the most liberating aspects of zero trust is freeing yourself from (potentially untrue) assumptions about the substrate on which applications and network services run. But the shared responsibility “covenant” between cloud customer and cloud provider implies exactly such an assumption: that there is a context of security controls within which your workloads will operate. What happens if those assumptions change? For example, what if you move the workload from one cloud provider to another? Unless you’ve designed in resilience, those assumptions could surface and bite you just the way assumptions about the “trusted” status of internal networks did in the past.
Extending zero trust into the cloud
With this in mind, it can be beneficial to extend the core premise of zero trust into cloud relationships. In addition to assuming that everything on the cloud provider’s network is by definition untrusted (a useful assumption whether you’re a cloud tenant or the provider itself), this means extending a level of skepticism to the actions of the provider and to any security mechanisms that are outside our direct control.
Doing this has a few benefits. First, we all know that the segmentation model employed by container and virtualization technologies is not foolproof. Over the past few years we’ve seen side-channel information-gathering attacks like Spectre and Meltdown that can potentially undermine the segmentation between tenants, and issues like Rowhammer that can likewise erode that segmentation boundary. So building in some level of protection for the case where the segmentation model becomes compromised is a useful strategy in and of itself.
Beyond this, designing this way also fosters portability. Because we’re building resilience in at a higher level of the application stack, rather than relying solely on the security measures the cloud provider supplies under its portion of the shared responsibility matrix, we are somewhat insulated against change. Should we decide to switch providers, spin up duplicates in other providers (or on premises), or otherwise alter the assumptions within which the workload lives, our security posture stays robust.
Putting this into practice
So how do we do this? A useful place to start is with application threat modeling. For those who are not familiar with it, application threat modeling is a technique that deconstructs an application into its components and examines each of them from an attacker’s point of view. It is particularly useful in a zero trust architecture because it lets us look at the interaction points between components and services to find ways the application might be subverted, and helps us identify where to introduce countermeasures to thwart those attacks. In a cloud context, it also helps us differentiate between areas where we rely on a security mechanism supplied by the provider and those handled higher in the stack. Threat modeling is a good idea in any case, but it brings tremendous value when you have applications that span multiple IaaS and/or PaaS relationships.
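To make that concrete, here is a minimal sketch of one way to capture a threat model in code. The component names and the three-tier layout are hypothetical; the one idea it demonstrates is flagging data flows that depend on a provider-managed control, i.e., something on the provider’s side of the shared responsibility line.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    provider_managed: bool  # True if this control sits on the provider's side
                            # of the shared responsibility line

@dataclass(frozen=True)
class Flow:
    source: Component
    dest: Component
    data: str

def provider_reliant_flows(flows):
    """Return the flows whose security depends on a provider-supplied control."""
    return [f for f in flows if f.source.provider_managed or f.dest.provider_managed]

# Hypothetical three-tier application
web = Component("web-frontend", provider_managed=False)
lb = Component("provider-load-balancer", provider_managed=True)
db = Component("managed-database", provider_managed=True)

flows = [
    Flow(lb, web, "HTTP requests"),
    Flow(web, db, "SQL over provider network"),
]

for f in provider_reliant_flows(flows):
    print(f"{f.source.name} -> {f.dest.name}: relies on a provider-managed control")
```

Each flagged flow is a place to ask: if this provider-supplied control changed or disappeared (say, after a migration), what compensating control at a higher layer keeps the flow protected?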
Beyond this, it’s also useful to examine technologies that enhance separation. Generally speaking, technologies that let us securely establish trust in the connecting party (by authenticating and authorizing it) and then protect the exchanged traffic are particularly valuable. For example, software defined perimeter (SDP) can help create secure virtual enclaves (a “black network”) within an IaaS context and ensure that only authenticated devices can participate in that connectivity. SDP is a Cloud Security Alliance standard designed specifically for this purpose; there’s even an open source implementation for those wishing to try it out.
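SDP deployments typically authenticate a device before any connectivity is granted, often via a single-packet-authorization-style “knock.” The sketch below is an illustration of that authenticate-first idea using a stdlib HMAC, not the CSA specification itself; the packet format and the 30-second freshness window are assumptions for the example.

```python
import hashlib
import hmac
import secrets
import time

def make_knock(client_id, key, now=None):
    """Build an authorization 'knock': client id, timestamp, HMAC over both."""
    ts = str(int(now if now is not None else time.time()))
    msg = f"{client_id}|{ts}".encode()
    tag = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return msg + b"|" + tag.encode()

def verify_knock(packet, key, max_age=30, now=None):
    """Accept the knock only if the HMAC is valid and the timestamp is fresh.
    Anything else is silently dropped -- the service stays 'dark'."""
    try:
        client_id, ts, tag = packet.decode().rsplit("|", 2)
        msg = f"{client_id}|{ts}".encode()
        expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
        fresh = abs((now if now is not None else time.time()) - int(ts)) <= max_age
        return fresh and hmac.compare_digest(expected, tag)
    except (ValueError, UnicodeDecodeError):
        return False

key = secrets.token_bytes(32)          # shared between controller and gateway
packet = make_knock("device-42", key)  # sent before any TCP session is allowed
assert verify_knock(packet, key)
```

The point of the pattern is that an unauthenticated party never even gets a connection to probe; the gateway only answers traffic that proves, cryptographically, who it is from.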
Even encryption of data at rest can be valuable in helping us to achieve these goals. For example, by encrypting data that is stored in the cloud (either in application code for PaaS or at lower levels of the stack for IaaS), we can help to ensure that the data in our cloud relationships remains secured even if assumptions change.
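The pattern here is encrypting on the client side, so the object the provider stores is already ciphertext regardless of what provider-side controls exist. The sketch below illustrates that pattern with a toy SHA-256 counter-mode keystream plus an HMAC integrity tag; the cipher construction is for illustration only (real systems should use a vetted AEAD such as AES-GCM via a maintained crypto library or a cloud KMS, and separate encryption and MAC keys).

```python
import hashlib
import hmac
import secrets

def _keystream(key, nonce, length):
    """Derive a keystream from SHA-256 in counter mode. Illustration only --
    use a vetted AEAD (e.g. AES-GCM) in production."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_before_upload(key, plaintext):
    """Encrypt client-side so the object handed to the provider is ciphertext."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # integrity check
    return nonce + ct + tag

def decrypt_after_download(key, blob):
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(hmac.new(key, nonce + ct, hashlib.sha256).digest(), tag):
        raise ValueError("object was tampered with, or wrong key")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))

key = secrets.token_bytes(32)                          # key stays with you, not the provider
blob = encrypt_before_upload(key, b"customer record")  # this is all the provider ever stores
assert decrypt_after_download(key, blob) == b"customer record"
```

Because the key never leaves your control, the data stays protected even if you copy the object to a different provider, or if assumptions about the provider’s own at-rest encryption change underneath you.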
The specifics will obviously vary with your needs and usage, and we’ve covered only a few of the options available to you. That said, extending the same techniques and approaches you employ for zero trust in internal environments makes sense in the cloud as well. It’s not as simple as “turning it on” (just as is true internally): while cloud relationships are designed around a strong and robust segmentation boundary, there can be advantages to extending the lack of trust to some of the service provider’s security mechanisms. This is not to imply that those mechanisms are inadequate, only that doing so where possible helps ensure the workload stays protected even if you migrate it within the same provider, move it to another provider, or spin up a new copy of it somewhere else entirely.