In the past, corporate networks were flat, open places with unfettered access: anything connected to the corporate LAN was assumed to be trustworthy. Sure, there were firewalls between the “inside” and the “outside” of the network (the “perimeter”), but generally, once an attacker compromised a single host on the network, they could move laterally through the rest of it without much interference. As networks grew larger and more complex, architects began to realize that a single perimeter boundary was not enough to secure traffic and systems.
Over the years, network architects have done a better job of “limiting the blast radius” of an attack by using segmentation. With traditional segmentation, barriers (normally firewalls) are created to minimize an attacker’s ability to jump from one area of the network to another. Policies are then put in place to block unnecessary traffic between subnets.
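In spirit, a traditional segmentation policy looks something like the following sketch. The subnets, tiers, and allowed flows here are hypothetical, and real firewalls match on far richer criteria; the point is only that rules are coarse, keyed to network addresses rather than to applications:

```python
from ipaddress import ip_address, ip_network

# Hypothetical tiers for illustration: a web subnet and a database subnet.
WEB_SUBNET = ip_network("10.0.1.0/24")
DB_SUBNET = ip_network("10.0.2.0/24")

# Coarse cross-segment rules: (source subnet, destination subnet, destination port).
ALLOWED_FLOWS = [
    (WEB_SUBNET, DB_SUBNET, 5432),  # the web tier may reach the database port
]

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Traffic inside one segment flows freely; cross-segment traffic
    passes only if an explicit rule matches subnets and port."""
    src_ip, dst_ip = ip_address(src), ip_address(dst)
    for net in (WEB_SUBNET, DB_SUBNET):
        if src_ip in net and dst_ip in net:
            return True  # same segment: no barrier at all
    return any(
        src_ip in s and dst_ip in d and port == p
        for s, d, p in ALLOWED_FLOWS
    )
```

Note the blind spot this models: two hosts inside the same subnet can talk to each other on any port, which is exactly the lateral movement microsegmentation tries to close off.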
Microsegmentation, which has become quite the network security buzzword, takes this practice one step further, isolating each workload on the network. The process applies fine-grained access controls in an attempt to allow only the network traffic required for the workload to function.
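Conceptually, microsegmentation replaces subnet-level rules with a default-deny allow-list keyed to workload identity. A minimal sketch, with made-up workload names (a real deployment would derive identity from labels, certificates, or an orchestrator):

```python
# Per-workload allow-list: (source workload, destination workload, port).
# Every tuple must be enumerated explicitly; everything else is denied.
ALLOWED = {
    ("checkout", "payments", 443),
    ("payments", "orders-db", 5432),
}

def flow_allowed(src_workload: str, dst_workload: str, port: int) -> bool:
    """Default-deny: only flows the application actually needs are listed."""
    return (src_workload, dst_workload, port) in ALLOWED
```

The hard part, as the rest of this post argues, is not evaluating such a table but populating it: every tuple encodes knowledge of how the application really communicates.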
Few organizations are achieving the desired success with microsegmentation, though. Why? While great in theory, microsegmentation runs into a number of practical problems:
- Microsegmentation means translating from application speak to network speak. Networks are complex places that have grown up over years, with bits bolted on here and there. Consequently, it’s difficult to know exactly what traffic can be blocked without impacting the business. Creating policies that allow only the traffic needed requires knowing a great deal about how the application works and how that translates into network policies. This involves huge amounts of back and forth between application developers and IT folks. Unless you have a really well-oiled, truly integrated DevOps organization, this can be really tough, if not impossible.
- Microsegmentation can be a policy management nightmare. Application interactions can be highly complex, with many interdependencies. Some microsegmentation solutions add a layer of abstraction, describing policies in terms of the application and then providing the translation to the network layer. The net result, though, is still thousands of policies; validating that the resulting controls are correct is beyond the capacity of all but the largest organizations.
- Prioritizing how to migrate applications to microsegmentation is a major challenge. Networking teams can’t just decide one day to implement microsegmentation in a “big bang” way. Those applications that are the most vulnerable and could have the biggest impact on the business if they were to be compromised must be prioritized. The problem is that practitioners have limited data to understand the current state of risk, and are therefore unable to prioritize deployments based on concrete risk assessments. Without a risk-aligned plan, most organizations will opt for the status quo, and the project stalls.
- Understanding the risks and benefits of microsegmentation is tough. Before microsegmentation is deployed, organizations need to be able to convince peers that risk will, indeed, be reduced. This means weighing the reduction in security risk against the risk of breaking the application. Again, most practitioners struggle to accurately measure the operational risk of deploying these complex policies. As before, without the ability to articulate the risks and benefits, the status quo wins.
- Software development organizations are leery of microsegmentation. By and large, software developers view anything that smacks of strict policy controls as an impediment to their ability to introduce new features. And anything that might impact velocity (and, in turn, business agility) is a non-starter for most CIOs.
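The “thousands of policies” problem above is easy to see with back-of-envelope arithmetic: in the worst case, an allow-list for n workloads has to consider every directed pair, or n × (n − 1) potential rules (ignoring ports, which multiply the count further). A quick sketch:

```python
def max_pairwise_policies(workloads: int) -> int:
    """Worst-case number of directed workload-to-workload rules to consider:
    each workload paired with every other, in each direction."""
    return workloads * (workloads - 1)

# Even modest environments explode quickly:
for n in (10, 50, 200):
    print(f"{n} workloads -> up to {max_pairwise_policies(n)} pairwise rules")
```

For 200 workloads that is nearly 40,000 pairs to reason about, which is why validating the resulting controls overwhelms most teams.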
Microsegmentation evolved out of a need to stop the progression of network-borne attacks inside the cloud and data center, but it still falls short of letting IT operations and security teams address today’s networking problems. Policy complexity and the inability to prioritize protection based on exposure risk introduce management headaches. Even for organizations with sufficient resources to reduce that friction, the benefits of deploying microsegmentation are difficult to justify against the age-old argument for business agility and velocity. Risk reduction, especially in an organization that hasn’t yet experienced a major security event, is hard to quantify. Needless to say, an alternative is necessary.
Later this week, we’ll explain how Trusted Application Networking can eliminate the challenges of microsegmentation while improving network security.