(This blog post is cross-posted from an article recently published in HelpNet Security.)
In 1987, Bernd Fix developed a method to neutralize the Vienna virus, becoming the first known antivirus software developer. In 2017, as we pass the 30-year anniversary, a lot has changed in endpoint security. I have been fortunate enough to have a front row seat to this evolution, and the parallels between how endpoint security has transformed and what must occur within network security are striking.
When the first antivirus products were developed, there were only a handful of known viruses. Antivirus technology, or endpoint security as it later became known (because malware took on lots of forms like viruses, worms, trojans, and rootkits), involved maintaining a list of static signatures of “known bad” applications and executables. Throughout the 1990s, the technology remained relatively unchanged. It was all about who had the best blacklist.
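The core of that era's approach can be sketched as a lookup against a static list of "known bad" signatures. This is a toy illustration; the helper name and use of SHA-256 hashes are stand-ins, not any vendor's actual signature format:

```python
import hashlib

def is_blacklisted(file_bytes: bytes, known_bad_hashes: set) -> bool:
    """Blacklist model: flag a file only if its digest matches a
    "known bad" signature shipped by the vendor."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in known_bad_hashes
```

Anything not on the list is implicitly allowed, which is exactly why the model depended on whose blacklist was most complete.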
By the turn of the millennium, the volume of known malware had grown exponentially. Instead of dozens or even thousands of signatures, there were tens of millions. With polymorphic malware in the mix, the permutations were exploding faster than traditional blacklists could keep up. I was working at McAfee at the time, and endpoint security was evolving to “micro-signatures,” or partial binary patterns, that could be updated faster and catch more permutations. It was still a losing battle, as security companies required an army of humans to keep the database of signatures (or micro-signatures) properly up to date.
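Micro-signature matching can be sketched as scanning a file for short byte patterns rather than hashing the whole thing, so one entry can catch many permutations of the same family. The patterns and detection names below are invented for illustration:

```python
# Toy micro-signatures: short byte patterns, not whole-file hashes.
# Both the patterns and the family names are made up for illustration.
MICRO_SIGNATURES = {
    b"\xde\xad\xbe\xef\x90\x90": "Demo.Dropper.A",
    b"eval(base64_decode(": "Demo.Webshell.B",
}

def scan(file_bytes: bytes) -> list:
    """Return the names of every micro-signature found anywhere in the file."""
    return [name for pattern, name in MICRO_SIGNATURES.items()
            if pattern in file_bytes]
```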
Around a decade ago, with the realization that there was more bad software in the wild than good, a transformation began. As CTO of Bit9 (now Carbon Black), I helped pioneer technology that used whitelists, or signatures of “known good” instead of blacklists. Since the set of trusted software was much smaller and less volatile, it was possible to maintain such lists. The paradigm shift, which continues to this day, is that trust-based approaches to cybersecurity are needed. Adapting to modern security threats means you need to focus on “is this trustworthy?” and not simply “is this bad?”
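The paradigm shift is easiest to see as an inversion of the default decision. In this toy sketch (the hash values and helper names are hypothetical), the two models differ only in what happens when nothing matches:

```python
def blacklist_decision(file_hash: str, known_bad: set) -> str:
    """Default-allow: run everything unless it matches a bad signature."""
    return "block" if file_hash in known_bad else "allow"

def whitelist_decision(file_hash: str, known_good: set) -> str:
    """Default-deny: run nothing unless it matches a trusted signature."""
    return "allow" if file_hash in known_good else "block"
```

Because the set of trusted software is smaller and changes less often, the default-deny list is the one that stays maintainable.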
However, in complex computing environments, maintaining even a trust list can be cumbersome. In a world of continuous change, the fact is that static signatures, even partial ones, don’t scale. The other challenge to security is dual-use software – programs like PowerShell which can be used for both business-critical operations and malicious purposes. Trying to classify software as either good or bad doesn’t always work; something more is required.
With these challenges in mind, endpoint security took an additional step forward in recent years. Modern endpoint security solutions, commonly referred to as NGAV or next-gen antivirus, don’t use signature lists and don’t rely solely on software classification. Instead, they analyze behaviors and attributes in real-time to assess both trustworthiness and malicious intent. This is only possible because computing power has advanced to the point where machine learning can analyze millions of data points in real-time so determinations can be made without user intervention. With NGAV, a program is allowed if it is doing good, and prevented or alerted if it is doing bad.
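A behavior-based verdict can be sketched as scoring observed behaviors against learned weights. The feature names, weights, and threshold below are invented toy values – real NGAV engines learn their models from millions of labeled samples – but the shape of the decision is the same:

```python
# Toy behavioral model: weights are invented for illustration; a real
# product would learn these from large volumes of labeled telemetry.
FEATURE_WEIGHTS = {
    "spawned_shell": 0.4,
    "modified_registry_run_key": 0.3,
    "encrypted_many_files": 0.6,
    "signed_by_trusted_vendor": -0.5,
}

def risk_score(observed_behaviors: list) -> float:
    """Sum the weights of every behavior observed for a running program."""
    return sum(FEATURE_WEIGHTS.get(b, 0.0) for b in observed_behaviors)

def verdict(observed_behaviors: list, threshold: float = 0.5) -> str:
    """Allow a program doing good; block one whose behavior looks malicious."""
    return "block" if risk_score(observed_behaviors) >= threshold else "allow"
```

Note that the same dual-use program (PowerShell, say) can be allowed in one context and blocked in another, depending on what it is actually doing.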
In 30 years, endpoint security evolved from static signature-based blacklists, to micro-signatures, to whitelists, and then to trust-based real-time analytics. The network side of cybersecurity is following a similar trajectory, albeit a few years behind.
Repeating the cycle?
While there is some debate about who developed the first firewall product, the basic concept of packet filtering was designed in the late 1980s (though the term “firewall” didn’t gain popularity until the 1990s). Like its antivirus counterpart, for decades firewalls were (and still are) based on static pieces of information – instead of binary signatures, they use IP addresses, ports, and protocols. Since the number of known bad addresses was relatively small back then, this worked for a while.
Unlike endpoint security, firewalls adopted whitelisting concepts earlier, and through the 1990s and turn of the millennium, firewalls could be deployed with both allow and block rules. These rules were still based on static concepts like IP addresses, and maintenance of those rules was still the responsibility of the consumer or security consultant. As networks became more advanced and interconnected, these lists quickly became unmanageable.
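A classic packet-filter rule table can be sketched as a top-down, first-match evaluation over static addresses, ports, and protocols, with an implicit default deny at the bottom. The rules below are hypothetical:

```python
import ipaddress

# Hypothetical rule table, evaluated top-down; first match wins, and
# anything unmatched falls through to a default deny.
RULES = [
    ("allow", "10.0.0.0/8",     443, "tcp"),
    ("block", "0.0.0.0/0",       23, "tcp"),  # no telnet from anywhere
    ("allow", "192.168.1.0/24",  53, "udp"),
]

def evaluate(src_ip: str, dst_port: int, protocol: str) -> str:
    """Return the action of the first rule matching this connection."""
    addr = ipaddress.ip_address(src_ip)
    for action, network, port, proto in RULES:
        if addr in ipaddress.ip_network(network) and dst_port == port and protocol == proto:
            return action
    return "block"  # default deny
```

Every rule is keyed on static facts about the packet, which is precisely why these tables balloon as networks grow.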
Network security co-opted the term “next-gen” before its endpoint counterpart, but the technologies are hardly similar. About ten years ago, next-gen firewalls or NGFW appeared, utilizing deep-packet inspection to support rules based on traffic content as well as addresses and protocols. NGFWs are more akin to the “micro-signatures” of antivirus, where partial matching of static information can be applied.
In recent years, incremental advancements have been made with firewall management solutions and microsegmentation technologies that help in the management of the rules. But the fundamental limitation of relying on attributes like IP addresses and content tags remains, and the rule sets have exploded. It is not uncommon for an organization to have thousands (even tens of thousands) of firewall rules, making rule management a nightmare.
And just as endpoint security learned that classifying trust based solely on file binaries is insufficient, network security is learning that making decisions based solely on packet information (addresses, ports, protocols, content tags) is a failing proposition. The problem is even more pronounced for networks because in a world of virtualization, cloud computing, containers, and remote computing, IP addresses are becoming increasingly transient and meaningless.
Where to from here?
Learning from the endpoint, the next step in the evolution for network security should be obvious. Decisions about whether to allow or block communication must be made based on information that more closely aligns with intent, such as the user and application making the request, the user and the application receiving the request, and the state of the environment. As with signature-less NGAV, machine learning can be applied to perform this analysis in real-time.
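Such an intent-aligned policy can be sketched as a lookup keyed on identity and application rather than on addresses. Every name below is hypothetical; a real system would also weigh environmental state and learned behavior rather than a fixed table:

```python
# Identity-aware policy: tuples of (source app, source user role,
# destination app) that are permitted to communicate. All names are
# hypothetical, chosen purely for illustration.
ALLOWED_FLOWS = {
    ("payroll-web", "hr",    "payroll-db"),
    ("ci-runner",   "build", "artifact-store"),
}

def allow_connection(src_app: str, src_role: str, dst_app: str) -> bool:
    """Default-deny: permit only flows whose identities match the policy."""
    return (src_app, src_role, dst_app) in ALLOWED_FLOWS
```

Notice that no IP address appears anywhere in the decision, so the policy survives workloads moving between hosts, clouds, and containers.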
In 2017, we saw the NGAV vendors take on the traditional antivirus vendors, positioning the latest evolution of endpoint security as a replacement for legacy solutions. Network security is poised for the same transformation, where new technologies challenge the static network-address models of traditional firewalls in favor of trust-based analytics that are more suited to modern threats and computing environments.