A webinar recap
The digital nature of today’s businesses puts significant pressure on cybersecurity practitioners to be everywhere, all the time. As a result, it’s easy to forget the fundamental reasons for managing and operating a security program. During a recent Edgewise webinar, guest speaker Dr. Chase Cunningham, Principal Analyst at Forrester, reminded listeners that the reason for practicing good network security, namely microsegmentation, isn’t to achieve more secure workloads, devices, or people on the network, but to protect data. Data should be at the heart of every network security program, and every framework, strategy, tool, and process used to secure the network should therefore be focused on the data itself. Case in point, argued Cunningham: “No one breaks into a bank to say they broke into a building,” yet security teams expend a tremendous amount of effort “securing the network” to try to keep the bad guys out. Doing so, however, spreads security teams thin and fails to provide the context required to defend against modern adversaries.
Effective implementations of microsegmentation keep data at the core of the strategy. While there are many different ways to accomplish microsegmentation, the goals of any initiative should be to:
- Improve network visibility and breach detection
- Localize security controls around critical assets
- Reduce capital and operational expenses
- Reduce compliance costs
- Increase data awareness and insight
- Eliminate internal finger pointing and blame
- Enable digital business transformation
But Cunningham didn’t stop there. Combining microsegmentation with a zero trust networking strategy, he said, is “the larger virtual initiative.” Not surprisingly, zero trust and microsegmentation have similar benefits; when merged into one security strategy, security teams have a hardened method to isolate data and systems, stop the propagation of malware, and truly understand what’s going on inside their networks.
Protecting the data
While zero trust microsegmentation allows security teams to better protect workloads, users, and devices, the core of what a security program should focus on, said Cunningham, is protecting the data. Microsegmentation allows security teams to put the right segments, controls, technologies, and capabilities in place, and zero trust requires that everything trying to communicate across segments, both inside and between data centers and cloud environments, is continually assessed for proper authorization and authentication. Enforcement of controls in a zero trust infrastructure happens with every communication request, which means that data assets are always protected from lateral movement and the propagation of malware, even if an attacker has already exploited an endpoint.
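The per-request enforcement described above can be illustrated with a minimal sketch. All identities, asset names, and the policy table here are hypothetical, and real identity verification would use mechanisms like mutual TLS or signed tokens rather than a lookup; the point is only the shape of the logic: authenticate and authorize on every request, and deny by default.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    source: str       # authenticated workload identity, not an IP address
    destination: str  # the protected data asset or service
    operation: str    # e.g. "read", "write"

# Allow-list policy: any (source, destination) pair not listed is denied.
POLICY = {
    ("web-frontend", "orders-db"): {"read"},
    ("billing-svc", "orders-db"): {"read", "write"},
}

def authenticate(req: Request) -> bool:
    """Stand-in for real identity verification (mTLS, tokens, etc.)."""
    return req.source in {"web-frontend", "billing-svc", "batch-job"}

def authorize(req: Request) -> bool:
    """Check the allow-list; unlisted pairs yield an empty set (deny)."""
    return req.operation in POLICY.get((req.source, req.destination), set())

def allow(req: Request) -> bool:
    # Both checks run for every communication attempt; trust is never cached.
    return authenticate(req) and authorize(req)

print(allow(Request("web-frontend", "orders-db", "read")))  # → True
print(allow(Request("batch-job", "orders-db", "read")))     # → False
```

Note that the second request is denied even though `batch-job` authenticates successfully: an endpoint that an attacker has already compromised still cannot reach the data asset, because no policy grants it access.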
“If you're doing microsegmentation right and you're following down the sort of zero trust methodology and practices,” Cunningham explained, “it’s OK if you have a threat actor inside your environment,” because zero trust microsegmentation means that infected systems can be segmented away from other systems, that granular controls are in place to ensure the threat actor can’t piggyback on approved policies to access desired systems or data, and that the core of the network—the data—always remains isolated. It’s taking the “monster” in the network and putting him in a virtual box of sorts, and making sure he doesn’t interrupt services, destroy systems, steal data, or otherwise find one, tiny vulnerability in an immense ecosystem that will cost the company dearly.
Avoiding the “iceberg, right ahead”
To illustrate his point, Cunningham referenced the movie Titanic (and, by extension, the real-life ship). In the movie (and, again, loosely based on the actual ship that sank in 1912), the ship’s architects had designed an “unsinkable” vessel, complete with watertight integrity: sealed doors, compartments, and operational areas for the ship’s controls. The premise of the design was that if anything were to hit the ship, it would occur below the waterline. Therefore, all spaces below the waterline were secured by a form of microsegmentation. If ice tore a hole in the hull, the crew could simply segment off the damaged area (manually, of course, in 1912) and the ship could continue on its path, slightly battered but capable of remaining afloat.
What the architects didn’t account for in their Titanic segmentation strategy was an “exploit” happening in an unanticipated place, i.e., above the waterline. Today, most cybersecurity practitioners have come to realize that threats and exploits can originate just about anywhere, inside or outside the network, yet networks continue to be designed like the Titanic: to defend only the most obvious intrusion points, the endpoints. The problem is, said Cunningham, just as water seeps throughout a ship wherever watertight segmentation is not implemented (e.g., across decks, passenger cabins, the main dining area), once an attacker moves past endpoint protection, without segmentation there’s no way to stop that monster.
This is exactly what happens in the infrastructure of a flat computer network. When segmentation (or better yet, microsegmentation) isn’t present, and when security controls effectively say, “OK, I’ve seen you walking around this ship before. Go ahead and move from room to room,” things go from bad to worse really quickly, just like water seeping through the hole in the side of the Titanic. Zero trust microsegmentation clamps down on overly permissive networks because everything is designed in a “watertight” fashion. Each application, host, and service is given its own segment, protected by fine-grained controls based on the criticality and sensitivity of what’s inside that segment (i.e., the data), not on what may be traveling around outside of it (e.g., IP addresses, ports, and protocols; unmonitored communication pathways).
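The “watertight” design above can be sketched as a segment policy keyed on workload identity and the sensitivity of the data inside each segment, rather than on IP addresses or ports. Segment names, workload names, and sensitivity tiers below are invented for illustration; the essential behavior is that unknown segments and unlisted workloads are denied by default, so a compromised workload cannot pivot sideways.

```python
# Hypothetical segments, described by what they contain, not where they sit
# on the network. Each lists the only workload identities allowed to connect.
SEGMENTS = {
    "customer-pii":  {"sensitivity": "high",
                      "allowed_peers": {"crm-app"}},
    "public-assets": {"sensitivity": "low",
                      "allowed_peers": {"web-frontend", "cdn-sync"}},
}

def may_connect(workload: str, segment: str) -> bool:
    """Deny by default: unknown segments and unlisted workloads get no access."""
    seg = SEGMENTS.get(segment)
    return seg is not None and workload in seg["allowed_peers"]

# Even if "web-frontend" is compromised, it stays in its compartment:
print(may_connect("web-frontend", "public-assets"))  # → True
print(may_connect("web-frontend", "customer-pii"))   # → False
```

Because the rules name workloads and data segments rather than addresses, they keep working when a workload moves between hosts, data centers, or clouds, which is exactly where traditional IP- and port-based rules break down.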
A new form of segmentation
Microsegmentation has a bad reputation, though. When security practitioners hear the term, many automatically associate it with past unsuccessful projects. Because most companies make extensive use of cloud computing and software-as-a-service, data stores are not always easy to find, and the data in them is even harder to classify. This is why zero trust is so critical to microsegmentation, said Cunningham. Zero trust helps companies “actually fix the problem” and “put the monster in a box.” Zero trust places security controls directly around the data assets adversaries are targeting, uses least-privilege access controls, and allows access to or between data assets only after verification succeeds, every time a communication on the network is requested. Localizing and isolating data assets becomes much easier, and security teams can stop trying to protect hundreds of thousands of endpoints and instead look at the data itself: what’s communicating on the network, and how.
Zero trust microsegmentation allows security teams to place the strongest security controls around what’s important (your data), where it is, and how it’s being accessed. The bulk of the security effort, said Cunningham in closing, should be built around the core of the business (the data), with security teams expanding outward from there. “Focus[ing] on the micro to protect the macro,” he said, gives security teams the ability to remove the threat of the unknown and accomplish the goals of achieving better network visibility and breach detection, all the way through to enabling digital transformation.