Why Your Environment Needs Zero Trust

By Ed Moyle — Nov 29, 2018

It’s a truism that organizations are becoming more and more “externalized” as time goes by, and it’s happening quickly. Cloud services of all types and varieties have proliferated and expanded technology footprints. Mobile app use has grown, allowing users to connect to vital services at any time from anywhere. And supply chain interdependencies have led to new connections with business partners, service providers, and everything in between.

This is happening in parallel to increases in architectural complexity. For example, not only are there new physical devices on the network as a result of the “Internet of Things,” but there are also new logical entities as well. Operating system (OS) virtualization has precipitated an increase in density — both within the data center and outside of it. Likewise, application container engines (e.g., Docker and rkt) mean higher density of application services as well as network entities. In addition to environments just being denser, keep in mind that many of these services are also highly portable. A virtual image, for example, can be moved from hypervisor to hypervisor (or even from environment to environment) in a matter of minutes, while application containers are even more portable than that.

The point is, what was already complex is growing even more so with each passing day. It’s become complex enough, in fact, that making sure there are well-understood, manageable, and resilient rules about how entities on the network interact is difficult — difficult enough that there’s a solid argument for just giving up.

Now I know what you’re thinking. “Giving up” sounds like I’m suggesting you “take the ball and go home” on ensuring robust security. I’m not. Instead, I’m suggesting that you give up the idea that it’s advantageous — or even possible — to maintain a defined perimeter. In other words, I’m suggesting engineering around the assumption that everything on the network, regardless of source, is untrusted by default and that there is no “Maginot line” separating the hostile, external world from the internal, trusted one.

What is zero trust?

By assuming that the network is a stew of malicious traffic, malware and hacker activity, eavesdroppers, and other undesirable things, you force yourself instead to focus on hardening individual network nodes regardless of attack source. Meaning, you’re protecting applications, workstations, and servers from devices down the hall just as much as you do from attackers overseas. This, in a nutshell, is what zero trust is all about.

Now, there’s a lot of marketing out there suggesting that zero trust is a product or a service that you buy, but really it’s more of a philosophy than anything else. Specifically, it’s the philosophy that you have zero trust (hence the name) in anything outside a given system. Note that this applies even when the traffic originates internally or comes from an employee or other staff member.

There are a few ramifications that result from adopting this mindset. First, it means that you authenticate every transaction. In a RESTful application context, for example, you validate that each request is authenticated and authorized, even from one request to the next. Second, it means that you harden communication pathways against snooping and eavesdropping even when both ends of the conversation are entities you maintain. For example, you might employ encrypted connections for internal applications to the same degree that you would for sources and destinations on the open internet. Third, it means that you design around least privilege: for example, locking down services to the bare minimum of what’s required for people to do their jobs and get work done. And lastly, it means that you assume that compromise has already happened, so you stay on high alert for indicators of compromise already present on your network.
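
To make the first and third of these concrete, here’s a minimal sketch in Python. It treats every request as unauthenticated until it carries a valid signature and an explicit scope for the operation being performed; the header names, signing scheme, and scope model are illustrative assumptions, not any particular framework’s API.

```python
# A minimal sketch, not production code: header names, the signing scheme, and
# the scope model are illustrative assumptions.
import hashlib
import hmac

SIGNING_KEY = b"rotate-me-and-keep-me-in-a-secrets-manager"

def sign(payload: bytes) -> str:
    """Compute an HMAC-SHA256 signature over the request payload."""
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def authorize(headers: dict, payload: bytes, required_scope: str) -> bool:
    """Authenticate and authorize a single request; the default answer is deny."""
    presented = headers.get("X-Request-Signature", "")
    # Authenticate the request itself, never the caller's network location.
    if not hmac.compare_digest(presented, sign(payload)):
        return False
    # Least privilege: the operation must be explicitly granted to this caller.
    granted_scopes = headers.get("X-Token-Scopes", "").split()
    return required_scope in granted_scopes

# An internal reporting job calling an internal API gets the same checks an
# internet-facing client would.
body = b'{"report": "quarterly"}'
request_headers = {"X-Request-Signature": sign(body), "X-Token-Scopes": "reports:read"}
print(authorize(request_headers, body, "reports:read"))   # True
print(authorize(request_headers, body, "reports:write"))  # False
```

The point isn’t this particular mechanism; it’s that no request gets waved through because of where it came from.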

Benefits of a zero trust approach

There are a few reasons why this approach is advantageous. First, keep in mind that in many cases entities on the network are designed to be transient or even ephemeral. A virtual machine running in your data center today could very easily be uploaded and run from a cloud IaaS provider tomorrow. A machine that’s been offline for six months can be back up requesting application data next week. By assuming a default posture that everything is untrusted, you can ensure that security stays robust even when traffic you don’t expect originates from sources you don’t expect.
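
As a rough illustration of anchoring trust in identity rather than network location, here’s a minimal Python sketch using the standard library’s ssl module and an assumed internal certificate authority; the file paths and the use of the certificate’s common name as the workload identity are assumptions for the example, not a prescribed design.

```python
# A minimal sketch, assuming an internal PKI: trust is tied to the identity a
# workload proves with its certificate, not to where its traffic comes from, so
# the same check applies whether the peer sits in your data center or at an
# IaaS provider.
import ssl

def build_server_context(cert_file: str, key_file: str, ca_file: str) -> ssl.SSLContext:
    """Build a TLS context that refuses any client without a valid certificate."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    ctx.load_verify_locations(cafile=ca_file)
    ctx.verify_mode = ssl.CERT_REQUIRED  # mutual TLS: no anonymous connections
    return ctx

def peer_identity(tls_sock: ssl.SSLSocket) -> str:
    """Read the verified peer's identity from its certificate, not its IP address."""
    cert = tls_sock.getpeercert() or {}
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return "unknown"
```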


Second, this approach limits the lateral movement of attackers should they manage to establish a foothold. By treating everything as untrusted by default, you make it that much harder for an attacker to move laterally within your environment. Compromising one node doesn’t necessarily make it any easier for the attacker to move on to the next.

Over the past few years, organizations have increasingly woken up to the fact that they may have had attackers dwelling in their networks for long periods of time. Making lateral movement more difficult does two things at once: it increases the amount of time required for an attack campaign, and it increases the amount of effort attackers must expend to move between devices. Two outcomes derive from this. First, since an attacker’s campaign is time-bound (i.e., it’s a race for them to complete their objective before they’re discovered and locked out), it’s more likely that you will catch them before they achieve that objective. Second, because you’re increasing the effort attackers need to expend to expand their foothold, they have to be “louder” (i.e., less subtle about how they move through your network), which in turn increases the chances that you will detect indicators of compromise as they do so.

Putting this into action

If this sounds appealing to you, you’re not alone. More and more organizations are adopting a zero trust mindset for securing assets. But if your organization isn’t already employing this approach, the question becomes how to adopt it within your shop. You might, for example, see the value of a zero trust architectural approach but be unclear on how to get there. Perhaps you have a legacy architecture that is anything but this: one designed around a tightly controlled external perimeter, with internal resources trusted simply because of where they sit (inside the perimeter) and communicating over plaintext, unauthenticated channels.

The most important thing to start with when moving to a zero trust strategy is to recognize that you won’t get all the way there all at once. You won’t wake up one day and magically be a 100% “zero trust” environment just by willing it so. Instead, zero trust is an architectural strategy that requires both a mental shift and then investment to make happen. And, let’s be honest, conversations to change strategies mid-stream can be challenging because there is often little direct business-visible value associated with doing so. Trying to explain to a business stakeholder, for example, that they’ll wind up with the same functionality tomorrow after making a sizeable investment to redesign around different architectural principles is a tough sell. This is OK though.

Instead of trying to “boil the ocean” and shift to a zero trust architecture in one swoop, migrate slowly, as time allows and as new services are rolled out. You can shift to a zero trust mentality in how you view your network, infrastructure, and technology footprint today, but recognize that it’ll take some time for the environment to come into alignment with how you view it.

To do this, apply the principles of zero trust where and when you can as you release new systems or update existing ones. Over time, adapt the environment in piecemeal fashion to shape it around your new viewpoint. After some time (potentially years, in some cases), you’ll reach a state where the “old guard” of perimeter-bound architecture has been phased out and systems that enforce and embrace zero trust have been phased in. The important part here is adopting the mindset; the actual execution will get there over time.

Second, keep in mind that you can also strategically select and incorporate tools that advance a zero trust way of looking at the world. Tools like Edgewise, for example, are designed to work within a zero trust environment by visualizing traffic between entities, by helping you microsegment network communications, and by providing reporting that can help you identify indicators of compromise, zero in on potential attacker activity, and shut it down. Even if you can’t get to a fully zero trust environment in how you architect your systems and applications, you can get a head start by building capabilities that foster the approach.
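
For a rough sense of what microsegmentation looks like in practice, here’s a minimal, generic sketch of a default-deny, service-to-service allow list; the service names and policy format are illustrative assumptions and don’t reflect Edgewise’s actual product or configuration.

```python
# A minimal sketch of the microsegmentation idea only; this is not Edgewise's
# API or policy format. Communication between workloads is denied unless an
# explicit policy entry allows that service-to-service pair and port.
from typing import NamedTuple

class Flow(NamedTuple):
    source_service: str       # identity of the caller, e.g. "web-frontend"
    destination_service: str  # identity of the callee, e.g. "orders-db"
    port: int

# The allow list *is* the policy; anything not listed is dropped and logged.
ALLOWED_FLOWS = {
    ("web-frontend", "orders-api", 443),
    ("orders-api", "orders-db", 5432),
}

def evaluate(flow: Flow) -> bool:
    """Default-deny check for a single observed flow."""
    permitted = (flow.source_service, flow.destination_service, flow.port) in ALLOWED_FLOWS
    if not permitted:
        # Unexpected east-west traffic is exactly the kind of signal that can
        # surface lateral movement or other indicators of compromise.
        print(f"DENY {flow.source_service} -> {flow.destination_service}:{flow.port}")
    return permitted

print(evaluate(Flow("web-frontend", "orders-api", 443)))  # True, allowed
print(evaluate(Flow("web-frontend", "orders-db", 5432)))  # False, denied and logged
```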

Either way, zero trust can be a powerful concept when incorporated into your overall approach. Understanding what it is and why it’s valuable is a good first step; a more powerful “step two” is embracing the concept and using it as a fundamental principle in how your technology landscape is architected.

Written by Ed Moyle

Ed Moyle is currently General Manager and Chief Content Officer at Prelude Institute. Prior to joining Prelude, Ed was Director of Thought Leadership and Research for ISACA and a founding partner of the analyst firm Security Curve. In his 20+ years in information security, Ed has held numerous positions including: Senior Security Strategist for Savvis (now CenturyLink), Senior Manager with CTG's global security practice, Vice President and Information Security Officer for Merrill Lynch Investment Managers, and Senior Security Analyst with Trintech. Ed is co-author of "Cryptographic Libraries for Developers" and a frequent contributor to the Information Security industry as author, public speaker, and analyst.