Attack progression is seemingly unstoppable
Protecting workloads in the cloud and data center from lateral movement of threats continues to be a pressing, unsolved problem. Numerous factors contribute to attack progression, including permissive east-west network controls, vulnerable targets, highly dynamic environments, and, of course, long dwell times. Attackers rarely, if ever, land directly on their ultimate targets. They're opportunistic. They make initial compromise through the most vulnerable systems in the data center. Once that foothold is established, attackers have plenty of time to move laterally and plot a path toward high-value targets, using learned credentials or exploiting vulnerabilities to compromise the ultimate target and exfiltrate content.
Vulnerabilities contributing to an expansive attack surface
The continued prevalence of vulnerabilities makes for a large number of exploitable targets; in other words, a large and exposed attack surface. On average, 19 vulnerabilities are published every day, four of which are considered critical. It typically takes seven days for exploit code to be published after a vulnerability is disclosed, an additional 10 days for a patch to be released, and on average 100 days for that patch to be deployed. Overly permissive network controls, the prevalence of vulnerable targets, and a 200-day average dwell time before an attacker is discovered combine to make gaining access to an environment and moving laterally to compromise the ultimate target relatively easy.
The cloud does not make it any easier for the defender
Of course, the complexities of the cloud make this even more difficult. Multicloud environments are distributed and highly dynamic, and they have adopted technologies that require entirely new mechanisms for security and control, mechanisms that legacy security technologies do not support. What we need is a new paradigm for control; but before we get there, let's review the evolution of these networked environments and their accompanying security controls.
Evolving network controls — building a maze
Today, most organizations use some amount of perimeter control, whether that's a WAF or next-gen firewalls; but inside the network perimeter, access between addresses is largely open. The problem is that the connectivity the business requires isn't the only thing that's allowed: many more pathways are left exposed to exploit and compromise. What we've realized over the years is that we need to limit the blast radius, or attack paths, inside the perimeter so that when one of these devices is compromised, the attacker's ability to move laterally within the environment is constrained.
The industry has taken a number of different approaches. One is the use of internal perimeters. These are typically coarse-grained zoning by deployment environment, for example, production DMZ, staging, development, or alternatively zoning by a business unit or business function. These are really just broad sets of systems, addresses, and services grouped in a coarse-grained manner. These controls have evolved to be one step closer to the applications—sometimes called microperimeterization or microsegmentation—where the goal is to contain applications. This can get down to very granular control such as securing discrete communication paths using very fine-grained policies, which necessarily increases complexity. These controls can be likened to a maze that is built around vulnerable targets.
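To make the maze concrete, here is a minimal, illustrative sketch of an address-based microsegmentation policy. The subnets, ports, and rules are invented for this example; the point is that every discrete allowed path needs its own fine-grained rule, which is where the complexity comes from.

```python
from ipaddress import ip_address, ip_network

# Illustrative policy: each rule permits one discrete communication path
# (source subnet -> destination subnet : port). Real deployments can
# require thousands of such rules, one per legitimate application path.
ALLOW_RULES = [
    ("10.1.0.0/24", "10.2.0.0/24", 443),   # web tier -> app tier (HTTPS)
    ("10.2.0.0/24", "10.3.0.0/24", 5432),  # app tier -> database (Postgres)
]

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Return True if the (src, dst, port) path matches an allow rule."""
    for src_net, dst_net, allowed_port in ALLOW_RULES:
        if (ip_address(src) in ip_network(src_net)
                and ip_address(dst) in ip_network(dst_net)
                and port == allowed_port):
            return True
    return False  # default-deny inside the segment

print(is_allowed("10.1.0.5", "10.2.0.9", 443))   # legitimate path: True
print(is_allowed("10.1.0.5", "10.3.0.7", 5432))  # web -> database directly: False
```

Note what the policy evaluates: only addresses and ports. Any software that happens to run at an allowed source address can traverse an allowed path, which is exactly the limitation discussed below.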
While the technique of microsegmentation is fairly well regarded, the question remains which technical capabilities are needed to enable it. Despite having these technologies, attack progression largely remains an unsolved problem. Why? In my view, the solutions available today are not threat-centric network controls; they continue to rely on traditional packet-based technologies. We need a fundamental shift in how we design and enforce controls so they can actually stop threats from moving through these segmented, maze-like environments.
Lastly, the complexity of deploying these solutions results in multi-year deployment processes that dangerously constrain how broadly these technologies are rolled out. In many cases they also result in overly permissive controls, simply because when the applications themselves aren't adequately understood, the appropriate fine-grained controls can't be put in place. In the maze that microsegmentation technology constructs, attackers always have an entrance and a pathway to the target, while the beleaguered defenders are confronted by complexity and confusion.
Reaching the limit of address-based controls
These technologies are all based on the same fundamental packet inspection techniques as yesterday's firewalls. When we look at a packet, we can only attribute it to the address that sent it and the address and port that received it. These technologies cannot see past addresses and ports to the identity of the actual software that is communicating. This is a crucial, fundamental point. Addresses don't really make connections; they simply emit or receive packets. It's software that communicates: software asks the operating system for a connection, and the software is what is actually doing the talking.
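The point above can be sketched with Python's standard socket API, using a loopback listener so the example is self-contained. The program asks the operating system for a connection; all the wire ever carries is the resulting address/port tuple, with no trace of which program asked:

```python
import socket

# A listener the OS binds to a free loopback port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick
server.listen(1)
port = server.getsockname()[1]

# It is software, not an address, that makes a connection: this process
# asks the operating system to connect on its behalf, and the OS emits
# the packets.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))

# From the network's point of view, only these tuples exist; the name of
# the program that requested the connection never appears on the wire.
local = client.getsockname()
peer = client.getpeername()
client.close()
server.close()
print(local, peer)
```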
Another critical insight missed by packet inspection technologies is the identity of the user creating the packet and the host controlling the address. No matter how deeply a packet is inspected, it will never tell you which software is communicating, which user controls that software, or which host it's running on. This critical gap in visibility and control enables malicious actors using malicious software to masquerade as legitimate communications and essentially piggyback over these address- and packet-inspection-based controls. We need a new, more threat-focused approach that prevents malicious software from circumventing these traditional controls and finding a pathway through the maze.
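To illustrate the limit, here is a sketch that parses a hand-built IPv4/TCP header (the bytes and addresses are fabricated for the example) and extracts everything an address-based control can attribute: source and destination addresses and ports. There is simply no field in the header that names the software, the user, or the workload behind those addresses.

```python
import socket
import struct

def parse_ipv4_tcp(packet: bytes):
    """Extract the only identity a packet carries: the 4-tuple of
    addresses and ports. Nothing here names a process or a user."""
    ihl = (packet[0] & 0x0F) * 4              # IPv4 header length in bytes
    src_ip = socket.inet_ntoa(packet[12:16])  # source address field
    dst_ip = socket.inet_ntoa(packet[16:20])  # destination address field
    # TCP header follows the IP header; ports are its first 4 bytes.
    src_port, dst_port = struct.unpack("!HH", packet[ihl:ihl + 4])
    return src_ip, src_port, dst_ip, dst_port

# A fabricated 20-byte IPv4 header (no options) plus the TCP port fields.
hdr = (
    bytes([0x45, 0, 0, 40, 0, 0, 0, 0, 64, 6, 0, 0])  # version/IHL .. checksum
    + socket.inet_aton("10.1.0.5")                    # source address
    + socket.inet_aton("10.2.0.9")                    # destination address
    + struct.pack("!HH", 49152, 443)                  # source/destination ports
)
print(parse_ipv4_tcp(hdr))  # ('10.1.0.5', 49152, '10.2.0.9', 443)
```

Deep packet inspection can go further into the payload, but the attribution problem stands: the header identifies addresses, and any software controlling that address inherits its privileges.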
Part two of this blog explains how the Zero Trust Networking model is better designed to stop lateral movement while making security easier to manage. Check out part two now.