In my previous post, I wrote about how the growing complexity of data center environments has made it impossible for any one security person to grasp the entire application stack and formally describe every workload as network security policy. In this post, I will lay out an alternative approach that allows organizations to conquer that complexity by bringing together the teams who know the workloads and the teams who know security.
Clearly, neither democratized policy creation nor network insight is sufficient on its own. Put them together, though, and we see the glimmer of a strategy.
Engineering/DevOps staff may not know enough in isolation, but these experts are in the best position to suggest and implement strong policies when presented with deep insight about the network, applications, and workflows.
In our hectic world, engineering/DevOps teams can’t gather that deep insight themselves. What they need are reliable tools that empower them to go from the behavioral insights we discussed back in Part I to enacting policy that reflects the true inner workings of the environment.
To reach that level of capability, the tools must:
- Help analysts, engineers, and users quickly explore how workloads are intercommunicating.
- Present the data in a way that allows them to assess current reality (e.g., “are these communication paths legit,” or “are we already compromised?”).
- Use machine learning to suggest policies (because, as my colleague John O’Neil points out, it’s far easier to alter a suggested policy than to write one from scratch).
- Vest analysts with the power to move from insight to policy action with a few clicks of the mouse (or, insight to suggested policy code with just a few clicks, for you monks of the order of infrastructure-as-code).
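To make the third point concrete, here is a minimal sketch of how a tool might collapse observed flows into suggested allow-list policies for an analyst to review. The flow records, field names, and policy shape are all invented for illustration; a real product would derive them from network telemetry and machine learning rather than a hand-built list.

```python
# Hypothetical sketch: turning observed workload flows into suggested
# allow-list policies. All record shapes here are illustrative.
from collections import defaultdict

def suggest_policies(flows):
    """Collapse raw flow observations into per-application allow rules."""
    seen = defaultdict(set)
    for f in flows:
        # Key on application identity, not addresses and ports.
        seen[(f["src_app"], f["dst_app"])].add(f["dst_service"])
    return [
        {"allow": src, "to": dst, "services": sorted(svcs)}
        for (src, dst), svcs in sorted(seen.items())
    ]

observed = [
    {"src_app": "checkout", "dst_app": "payments", "dst_service": "https"},
    {"src_app": "checkout", "dst_app": "payments", "dst_service": "https"},
    {"src_app": "checkout", "dst_app": "inventory", "dst_service": "grpc"},
]
print(suggest_policies(observed))
```

The point of the sketch is the workflow: the analyst is handed a short, deduplicated list of candidate rules to approve or edit, which is far less work than writing each one from scratch.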
With this toolchain, engineers and security staff can craft precise policy more efficiently, because the drudgery of collecting and analyzing all the data is offloaded to machine learning. These empowered engineers can implement better policy in less time, so the policies deployed will reduce attack surface. The same tooling can also present every potential lateral movement path in the environment, making the entire network auditable in a sense. Finally, the tools can go beyond creation to address the entire policy lifecycle: as machine learning detects network changes, policy modifications can be recommended. When organizations are willing to embrace shared responsibility, the toolchain I outlined will form the foundation of a revolutionary security workflow.
But again, the key is getting the right eyes on the data. When the tooling can report applications and users, in addition to addresses and ports, the engineers who work with the applications every day are in the best position to consume the output (because they can best answer the question, “do the recommended policies make sense?”).
Once we’ve gathered this deep well of information, why write old-school firewall rules? Nobody cares that 10.0.5.5 can talk to port 80 on 172.17.8.8. Besides, a renumbering imposed by some outside, unconnected networking team could wreak havoc on our rules during the next change window anyway. All that matters is that app X on host A can talk to the web server on host B. We have the data. We’ve involved the application experts who know how to write specific rules. Why compromise by using an identifier (the IP address) that changes all the time and has no bearing on the applications we’re trying to protect?
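A tiny sketch shows why identity-based policy shrugs off renumbering. The flow and policy shapes are hypothetical, but the logic is the whole argument: the decision never reads an address, so a change window that renumbers every host changes nothing.

```python
# Hypothetical sketch: identity-based policy enforcement.
# Field names and policy shape are invented for illustration.

def flow_allowed(flow, policies):
    """Match on application identity; the IPs carried by the flow
    never enter the decision."""
    return any(
        p["src_app"] == flow["src_app"] and p["dst_app"] == flow["dst_app"]
        for p in policies
    )

policies = [{"src_app": "app-x", "dst_app": "web-server"}]

flow = {"src_ip": "10.0.5.5", "dst_ip": "172.17.8.8",
        "src_app": "app-x", "dst_app": "web-server"}
# Same flow after both endpoints are renumbered:
renumbered = dict(flow, src_ip="10.9.9.9", dst_ip="192.168.1.1")

print(flow_allowed(flow, policies))        # True
print(flow_allowed(renumbered, policies))  # True: the verdict is unchanged
```

An IP-and-port rule would have needed a rewrite the moment 172.17.8.8 moved; the identity rule describes the relationship we actually care about.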
This is why we created Edgewise: because you’re overworked, and the universe will go through its heat death before you can manually write all the policies needed to implement that microsegmentation strategy in the IT roadmap. The world needs a better solution.
Let’s make security a true team sport by empowering engineers to use their expertise to make the environment safer. Engineering/DevOps staff will be thrilled when they realize they’re freed from the drudgery of maintaining lists of IP addresses and ports for use in firewall rules. They might even buy security a case of beer when they realize that the cool new security tool helped them gain the deeper understanding they needed to debug that weird split-brain problem with the Cassandra cluster (or catch the misconfigured test system that’s been hitting the production database; you did remember to use a different password for the production postgres user, right?).