According to a 2018 Fortinet survey, “72% of network traffic is encrypted,” a nearly 20% increase over the previous year. That is good news for limiting the damage an intruder can do after a network breach, but it also means that technologies which rely on inspecting packets to make access control decisions cannot do so unless they have a way to decrypt, examine, and re-encrypt the traffic before it continues along its communication path. Many next-generation firewall and IDS/IPS vendors have adapted their capabilities accordingly, which allows them to stop a number of malware-based attacks from propagating.

However, decrypting and re-encrypting traffic exacts a performance toll. Given the volume of data traversing organizations’ networks, the added latency is far from optimal. And although decrypting traffic to inspect its contents helps keep malware from probing deeper into the network, the brief window between decryption and re-encryption leaves the plaintext exposed, giving attackers an opportunity of their own. Some security experts have therefore advised that the exposure is not worth the effort: you may remove one vulnerability by inspecting packet content, but you introduce another in the process.
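The decrypt–inspect–re-encrypt step described above can be sketched in a few lines. This is a toy illustration, not a real middlebox: a stand-in XOR cipher plays the role of TLS termination, and the `signatures` tuple stands in for a real detection engine. Note the exposure window the text mentions: between the two cipher calls, the payload exists in plaintext.

```python
# Toy sketch of an inspecting middlebox. XOR is a stand-in cipher;
# real devices terminate and re-establish TLS sessions.
KEY = 0x5A

def xor_cipher(data: bytes) -> bytes:
    # XOR is symmetric, so the same function "decrypts" and "re-encrypts".
    return bytes(b ^ KEY for b in data)

def forward_with_inspection(packet: bytes, signatures=(b"malware",)) -> bytes:
    plaintext = xor_cipher(packet)       # decrypt: exposure window opens
    for sig in signatures:               # inspect the cleartext payload
        if sig in plaintext:
            raise ValueError("blocked: signature match")
    return xor_cipher(plaintext)         # re-encrypt: window closes

benign = xor_cipher(b"GET /index.html HTTP/1.1")
assert forward_with_inspection(benign) == benign  # unchanged, at a latency cost
```

Each packet pays two cipher passes plus a scan; a passthrough path would forward the ciphertext untouched, which is where the performance gap comes from.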
Like many other technologies and processes before it, DPI can no longer be viewed as a failsafe for protecting data and services on the network. As organizations increase their use of encryption to guard against data breaches (and if the Marriott, Sony, and similar breaches taught any lesson, it’s that encryption is key), they will need alternative methods to keep applications and services secure. This is not to say that encryption can never be cracked by a determined adversary. It is, however, one more speed bump that buys organizations time, and additional points at which to intervene, once a compromise has already occurred.
A new control plane based on zero trust
Without the ability to look at packet content, network teams must find other means of identifying anomalous and potentially malicious traffic. Traditional network and security practice advises looking for behavioral anomalies. For example, a server requesting access to a database it has never connected to before might be cause for concern; it might also simply be a new, legitimate request. Behavior alone is a tricky control plane for security and privacy. Instead, organizations should architect their networks (on-premises, cloud, container, and virtual) based on zero trust. The implications of a zero trust network are:
- All networks and network traffic are considered inherently hostile.
- Least privilege access is enforced across all devices, systems, hosts, users, and applications.
- Access decisions are based on the cryptographic identity of the requesting resource.
- Access controls use machine learning to adapt to the environment (i.e., policies are not based on network information).
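As a concrete illustration of the third point, an access decision keyed to cryptographic identity might look like the sketch below. The policy table, service names, and certificate bytes are all hypothetical; in practice the identity would be derived from a validated client certificate (for example, its SHA-256 fingerprint), not a raw byte string.

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    # Derive a stable identity from the certificate bytes themselves.
    return "sha256:" + hashlib.sha256(cert_der).hexdigest()

# Hypothetical policy: which workload identities may reach which services.
# Note there is no IP address, subnet, or VLAN anywhere in the table.
POLICY = {
    fingerprint(b"billing-service-cert"): {"payments-db"},
}

def authorize(cert_der: bytes, target_service: str) -> bool:
    # The decision keys off the caller's cryptographic identity,
    # not the network constructs around it.
    allowed = POLICY.get(fingerprint(cert_der), set())
    return target_service in allowed

assert authorize(b"billing-service-cert", "payments-db")
assert not authorize(b"unknown-cert", "payments-db")
```

Because the policy names workloads rather than network locations, it survives the workload moving between hosts, clouds, or containers, which is exactly the dynamism that defeats address-based rules.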
Like many legacy security controls before it, deep packet inspection still has a place in the arsenal of tools that improve network security. However, with encryption on the rise (as it should be), and with networks becoming more dynamic, DPI can no longer be counted on as a primary mechanism for keeping malware off organizations’ systems. In its place, organizations that truly want to protect their networks from malware, unauthorized lateral movement, and data compromise should move to a zero trust model, one that relies only on the identities of the software and services communicating, not on the network constructs around them or on a technology’s ability to deconstruct data.