Not all lateral movement is created equal

The cyber attack lifecycle generally follows a series of well-defined steps: the attacker gains a foothold in the network, often by successfully phishing an authorized insider and then using those legitimate credentials to move stealthily inside it, and then performs reconnaissance to identify additional attack vectors. Once additional vulnerabilities or privileges are found, the attacker can move laterally throughout the network, escalating privilege to access desirable information or infrastructure.

As a network defender, one might look at the various stages of the attack lifecycle and try to identify where to concentrate efforts to minimize harm. Since we know that phishing is, unfortunately, an attack type that will always be successful at some level, the next logical place for defensive measures is blocking lateral movement: prevent the attacker (who has already gained a foothold) from accessing additional, and potentially more sensitive, parts of the network. Any experienced network security professional, however, knows that an attempt to identify and block “all lateral movement” puts the organization at risk of casting too wide a net, one that will interfere with normal business operations.

The reality of today’s networks

Most modern company networks are broken into “trust zones,” which are segmented “areas” surrounded by access controls. Network administrators can adjust the level of access controls protecting each zone, depending on the sensitivity of systems and associated data within that trust zone. Firewalls are placed between zones as an extra layer of security, restricting traffic to only the hosts that need to communicate across zones and only the protocols they require.
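As a rough illustration, inter-zone filtering can be thought of as a lookup over (source zone, destination zone, protocol) tuples. The zone names, addresses, and rules below are invented for this sketch, not taken from any real deployment:

```python
# Toy model of inter-zone firewall filtering. All zones, addresses,
# and rules here are hypothetical.
ZONE_OF = {
    "10.0.1.5": "web",   # web tier host
    "10.0.2.9": "app",   # application tier host
    "10.0.3.7": "db",    # database tier host
}

# Allowed (src_zone, dst_zone, protocol) tuples; everything else is denied.
ALLOWED = {
    ("web", "app", "https"),
    ("app", "db", "postgres"),
}

def permitted(src_ip, dst_ip, proto):
    """Return True if the traffic may cross the zone boundary."""
    src, dst = ZONE_OF.get(src_ip), ZONE_OF.get(dst_ip)
    if src == dst:
        return True  # intra-zone traffic is not filtered at the perimeter
    return (src, dst, proto) in ALLOWED

print(permitted("10.0.1.5", "10.0.2.9", "https"))     # True: web -> app allowed
print(permitted("10.0.1.5", "10.0.3.7", "postgres"))  # False: web -> db denied
```

Note the `src == dst` branch: it reflects the point made below that traffic *within* a trust zone typically isn't inspected at all, which is exactly the gap an attacker with a foothold exploits.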

In theory, these “perimeter” controls keep the attacker from getting too far. The reality is that even within a trust zone, attackers can locate enough information to move laterally inside the zone, enabling later attacks or movement deeper into the network.

What’s more concerning, however, is that the configuration of today’s networks allows an attacker to piggyback on authorized network access policies and move across trust zones, closer to the final target. This is the type of lateral movement that is most essential to prevent. Once attackers determine how to move across trust zones, they may gain unfettered access to systems containing personally identifiable information, financial data, acquisition plans, or intellectual property; disrupt service; hold data hostage; or initiate myriad other attacks that could harm the organization.

Take a look at our recorded webinar on zero trust networking and how it improves significantly on traditional network security.

An ounce of prevention…

The simplest solution would seem to be (on the surface, anyway) to create highly restrictive policies that don’t allow lateral movement across zones. The problem with this approach, however, is the complexity of today’s networks: they’re not the flat, open spaces firewalls were initially created to protect. In modern networks there are varied interactions, many of them using the same network protocols.

Modern applications, too, are built differently from those of only a few years ago; they’re elastic, take advantage of auto-scaling and microservices, and undergo rapid development changes. Because of this, it is almost impossible to define adequate policies that use only IP addresses, ports, and basic protocol characteristics to restrict traffic. In essence, today’s network complexity forces organizations to “dumb down” policies in order for traffic to communicate. The result is overly permissive firewall rulesets. It is precisely these rulesets that attackers exploit to move laterally across trust zones.
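A hypothetical example of this “dumbing down”: a rule written for two fixed application-server IPs must be widened to an entire subnet once the service auto-scales, and the widened rule then matches hosts that were never meant to reach the database. All addresses and ports below are invented for illustration:

```python
import ipaddress

# Toy illustration: the precise rule names two fixed app-server IPs; the
# widened rule covers the whole subnet so autoscaled instances keep working.
# All addresses and ports are hypothetical.
precise_rule = {"10.0.2.10", "10.0.2.11"}           # original fixed hosts
widened_rule = ipaddress.ip_network("10.0.2.0/24")  # post-autoscaling rule

def allowed_precise(src_ip, dst_port):
    return src_ip in precise_rule and dst_port == 5432

def allowed_widened(src_ip, dst_port):
    return ipaddress.ip_address(src_ip) in widened_rule and dst_port == 5432

attacker_foothold = "10.0.2.99"  # compromised host in the same subnet
print(allowed_precise(attacker_foothold, 5432))  # False: precise rule blocks it
print(allowed_widened(attacker_foothold, 5432))  # True: widened rule lets it through
```

The widened rule is operationally necessary (new app instances get arbitrary addresses in the subnet) yet it hands any compromised host in that subnet a path to the database port.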

Breaking the rules

While the problems with current network configurations are well known, solutions are not. “Zero trust” is an approach that has been discussed at length, but implementing a true zero-trust network comes with its own set of challenges. The key with zero trust is to disallow any traffic from piggybacking on existing policies or network access, and to narrow policies and rulesets to only those that are most effective. No address is assumed to be free of malicious intent, hence “zero trust.”

At Edgewise, we use application-level language—rather than underlying infrastructure elements such as IP addresses and protocols—to define and enforce policies. The characteristics of each application are used to determine “allow” or “deny.” To continually improve accuracy, Edgewise’s machine learning engine selects a set of attributes that uniquely identify software in your environment. The engine adapts and updates as software evolves: the more it learns, the better Edgewise is able to recognize approved applications and enforce controls.
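To make the idea concrete, here is a toy sketch of attribute-based application policy: decisions key off software attributes rather than IPs and ports. The attribute names, hash values, and matching logic are invented for illustration and are not Edgewise’s actual engine, which the post describes as using machine learning to select identifying attributes:

```python
import hashlib

# Toy sketch of application-identity policy. Attribute names, hash values,
# and the matching logic are hypothetical illustrations only.

def fingerprint(app):
    """Derive a stable identity from a few application attributes."""
    blob = "|".join([app["binary_sha256"], app["signer"], app["path"]])
    return hashlib.sha256(blob.encode()).hexdigest()

APPROVED = set()

def approve(app):
    APPROVED.add(fingerprint(app))

def decide(app):
    return "allow" if fingerprint(app) in APPROVED else "deny"

payroll = {"binary_sha256": "ab12ef",            # made-up hash
           "signer": "ExampleCorp CA",
           "path": "/opt/payroll/server"}
approve(payroll)

# Same path and signer, but the binary itself has changed:
impostor = dict(payroll, binary_sha256="ff0099")

print(decide(payroll))   # allow
print(decide(impostor))  # deny: the identity changed with the binary
```

The point of the sketch: a swapped binary at the same network address, path, and port is still denied, because the policy is anchored to what the software *is*, not where it lives on the network.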

Additionally, you can define company-specific access controls that enforce the principle of least privilege (always a recommended policy) without laboriously defining and maintaining thousands of policies. We do this through policy compression, again using machine learning, to automate definition of an optimal policy set using orders of magnitude fewer rules.
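A simplified sketch of what such compression can look like: many host-to-host rules collapse into a handful of application-level rules. The application labels, addresses, and the grouping heuristic below are hypothetical; real compression (which, per the above, uses machine learning) is far more involved:

```python
# Toy sketch of policy compression: host-pair rules grouped by the
# applications the hosts run. All labels and addresses are hypothetical.

APP_OF = {
    "10.0.1.1": "web", "10.0.1.2": "web", "10.0.1.3": "web",
    "10.0.2.1": "api", "10.0.2.2": "api",
    "10.0.3.1": "db",
}

# One rule per communicating host pair: 3 web hosts x 2 api hosts,
# plus 2 api hosts each talking to the db host -- 8 rules total.
host_rules = [(w, a) for w in ("10.0.1.1", "10.0.1.2", "10.0.1.3")
                     for a in ("10.0.2.1", "10.0.2.2")]
host_rules += [(a, "10.0.3.1") for a in ("10.0.2.1", "10.0.2.2")]

def compress(rules):
    """Collapse host-pair rules into application-pair rules."""
    return {(APP_OF[src], APP_OF[dst]) for src, dst in rules}

app_rules = compress(host_rules)
print(len(host_rules), "host rules ->", len(app_rules), "app rules")
# 8 host rules -> 2 app rules: (web, api) and (api, db)
```

Even in this tiny example the ruleset shrinks fourfold; across thousands of hosts, describing policy in terms of applications rather than address pairs is where the orders-of-magnitude reduction comes from.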

Today’s typical network controls don’t allow network admins to effectively protect “leap points” between trust zones and prevent the lateral movement that gives attackers access to organizations’ most critical assets. This is why preventative strategies must evolve to focus on the way attackers operate and how applications communicate. To learn more about Edgewise’s Trusted Application Networking, contact us today for a demo.

Written by Harry Sverdlove, Founder and CTO

Harry Sverdlove, Edgewise’s Chief Technology Officer, was previously CTO of Carbon Black, where he was the key driving force behind their industry-leading endpoint security platform. Earlier in his career, Harry was principal research scientist for McAfee, Inc., where he supervised the architecture of crawlers, spam detectors and link analyzers. Prior to that, Harry was director of engineering at Compuware Corporation (formerly NuMega), and principal architect for Rational Software, where he designed the core automation engine for Rational Robot.