Deep Packet Inspection: Not an Option

For any computing network, it’s necessary to understand the traffic trying to communicate in the environment. From both an operational and a security standpoint, you can’t grant network access to just anything that requests it. Deep packet inspection (DPI) gives organizations a way to examine the content of application traffic (header plus payload) and determine whether packets should be permitted or denied further communication per the policies the organization has implemented. In other words, DPI looks at data packets and decides whether they are what they claim to be, or close enough that they should be allowed on the network.
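To make that concrete, here is a minimal, hypothetical sketch in Python of the kind of decision a DPI engine makes: it reads both header fields and the payload, then permits or denies the packet against policy. The Packet structure, the port checks, and the BLOCKED_SIGNATURES patterns are illustrative assumptions, not any vendor's actual implementation.

    # Illustrative sketch only: real DPI engines reassemble streams and parse
    # full application protocols; this just shows the shape of the permit/deny decision.
    from dataclasses import dataclass

    BLOCKED_SIGNATURES = [b"cmd.exe /c", b"<script>alert("]  # hypothetical payload patterns

    @dataclass
    class Packet:
        src_ip: str
        dst_ip: str
        dst_port: int
        protocol: str
        payload: bytes

    def permit(pkt: Packet) -> bool:
        """Permit the packet only if the header matches policy and the
        payload contains no blocked signature."""
        # Header check: only TCP traffic to the expected web ports is considered here.
        if pkt.protocol != "tcp" or pkt.dst_port not in (80, 8080):
            return False
        # Payload check: the part that makes this "deep" packet inspection.
        return not any(sig in pkt.payload for sig in BLOCKED_SIGNATURES)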

DPI can be a useful way of filtering malware or viruses out of your network, but like the firewalls that often incorporate it, DPI relies on IP addresses, ports, and protocols (network construct information) to work properly. Address-based information is an adequate foundation in an on-premises, bare metal data center, but the same cannot be said for cloud, virtual, or container environments, where the architecture changes constantly. As network complexity increases alongside the number of platforms organizations use, policy decisions made at the network layer become less reliable and less effective at handling throughput and security.
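The brittleness comes from the rule format itself. The hedged Python sketch below shows a typical address-based allow rule: it is anchored to specific IP ranges and ports, and in a cloud or container environment those values can change every time a workload is rescheduled. The addresses and rule names are made up for illustration.

    import ipaddress

    # Hypothetical address-based allow list of the kind firewall/DPI policies use.
    # The hard-coded network constructs are the weak point: redeploy the database
    # onto a new subnet and this rule silently stops matching legitimate traffic.
    ALLOW_RULES = [
        {"src": "10.0.1.0/24", "dst": "10.0.2.15/32", "port": 5432, "proto": "tcp"},
    ]

    def header_permitted(src_ip: str, dst_ip: str, dst_port: int, proto: str) -> bool:
        """Return True if any rule matches the packet's network constructs."""
        for rule in ALLOW_RULES:
            if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
                    and ipaddress.ip_address(dst_ip) in ipaddress.ip_network(rule["dst"])
                    and dst_port == rule["port"]
                    and proto == rule["proto"]):
                return True
        return False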

Adding insult to injury, DPI only works if the content of the traffic can be inspected (as is implied by its name). Encryption, a best practice for data privacy and security, renders data unavailable for deep packet inspection.


Handling encryption

According to a 2018 Fortinet survey, “72% of network traffic is encrypted,” a nearly 20% increase over the previous year. This is a very positive step for curtailing damage after a network intrusion, but it also means that technologies that rely on packet contents to make access control decisions are incapable of doing so unless they have a way to decrypt, examine, then re-encrypt the traffic before it continues along its communication path. Many next-generation firewall and IDS/IPS vendors have adapted their capabilities to compensate, which allows them to prevent a number of malware-based attacks from propagating. However, decrypting and then re-encrypting takes a performance toll on the network, and given the amount of data traversing organizations’ networks, the added latency is far from optimal.

In addition, though decrypting traffic to inspect its contents helps stop malware from probing deeper into the network, the few moments of exposure between decryption and re-encryption offer a window of opportunity to attackers. Some security experts have therefore advised that the risk of exposure is not worth the effort: you may be removing one vulnerability by inspecting packet content, but you introduce another in the process.
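The trade-off is easier to see in code. Below is a deliberately simplified Python sketch of the decrypt-inspect-re-encrypt step an inspecting middlebox performs; it uses symmetric Fernet keys purely to make the plaintext exposure window visible, whereas real next-generation firewalls terminate and re-originate TLS sessions. The key handling, function name, and signatures are assumptions for illustration, not any product's interface.

    from typing import Optional
    from cryptography.fernet import Fernet

    BLOCKED_SIGNATURES = [b"cmd.exe /c"]  # hypothetical malware indicator

    def inspect_and_forward(ciphertext: bytes,
                            inbound_key: bytes,
                            outbound_key: bytes) -> Optional[bytes]:
        """Decrypt, inspect, and re-encrypt one message; return None to drop it."""
        plaintext = Fernet(inbound_key).decrypt(ciphertext)        # 1. decrypt
        # --- exposure window: plaintext now sits in middlebox memory ---
        if any(sig in plaintext for sig in BLOCKED_SIGNATURES):    # 2. inspect
            return None                                            # drop malicious traffic
        return Fernet(outbound_key).encrypt(plaintext)             # 3. re-encrypt and forward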

Like many other technologies and processes before it, DPI can no longer be viewed as a failsafe for protecting data and services on the network. As organizations increase their use of encryption to guard against data breaches (and if any lesson was learned from the Marriott, Sony, and similar breaches, it’s that encryption is key), they will need alternative methods to keep applications and services secure. This is not to say that encryption can never be cracked by a determined adversary. It is, however, another speed bump that gives organizations more time, and more points at which, to stop a breach once a compromise has already occurred.

A new control plane based on zero trust

Without the ability to look at packet content, network teams must implement other means of identifying anomalous and potentially malicious traffic. Traditional network and security practices advise looking for behavioral anomalies. For example, if you notice a server requesting access to a database it has never connected to before, that might be cause for concern. It might also simply be a new, legitimate request. Behavior alone is a tricky control plane for security and privacy. Instead, organizations should architect their networks, whether on-premises, cloud, container, or virtual, based on zero trust. The implications of a zero trust network are:

  • All networks and network traffic are considered inherently hostile.
  • Least privilege access is enforced across all devices, systems, hosts, users, and applications.
  • Access decisions are based on the cryptographic identity of the requesting resource (see the sketch after this list).
  • Access controls use machine learning to adapt to the environment (i.e., policies are not based on network information).
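As a contrast to the address-based rule sketched earlier, the following hypothetical Python snippet shows what an identity-based decision can look like: policy keys off the cryptographic identity presented by the requesting workload (here a SPIFFE-style URI that would be extracted from its certificate), so the rule survives IP churn. The identities and the authorize helper are illustrative assumptions, not a description of any specific vendor's policy engine.

    # Least privilege expressed over workload identities rather than addresses.
    ALLOWED_PEERS = {
        "spiffe://example.org/payments-api": {"spiffe://example.org/orders-db"},
    }

    def authorize(client_identity: str, target_identity: str) -> bool:
        """Permit only explicitly allowed (client, target) identity pairs."""
        return target_identity in ALLOWED_PEERS.get(client_identity, set())

    # The payments service may reach the orders database; any other identity may not,
    # regardless of what IP address either workload currently holds.
    assert authorize("spiffe://example.org/payments-api", "spiffe://example.org/orders-db")
    assert not authorize("spiffe://example.org/web-frontend", "spiffe://example.org/orders-db")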

Like many legacy security controls before it, deep packet inspection still has a place in the arsenal of tools that help improve network security. However, with encryption on the rise (as it should be), and as networks become more dynamic, DPI can no longer be counted on as a primary mechanism for keeping malware off organizations’ systems. In its place, organizations that truly want to protect their networks from malware, unauthorized lateral movement, and data compromise should move to a zero trust model that relies on the software and services communicating, not the network constructs around them or on a technology’s ability to deconstruct data.

 

Written by Katherine Teitler, Director of Content

Katherine Teitler leads content strategy and development for Edgewise Networks. In her role as Director of Content she is a storyteller, a translator, and a liaison between sales, marketing, and the customer. Prior to Edgewise, Katherine was the Director of Content for MISTI, a global training and events company, where she was in charge of digital content strategy and programming for the company's cybersecurity events, and the Director of Content at IANS, where she built, managed, and contributed to the company's research portal.