Gaining Network Visibility Requires Going Beyond Ports and Protocols

Today’s highly dynamic and widely distributed corporate networks are causing a visibility crisis. Organizations are increasingly struggling to form an exhaustive understanding of what software and services are present and communicating across their on-premises data centers, multiple cloud providers, and container instances. As if keeping abreast of multiple, differently configured network environments weren’t enough, the rapid deployment of applications and services required to keep corporations competitive in the current on-demand market is only adding to the headache. With the “what” and “where” changing constantly, enterprise security teams can’t rely on static tools or network constructs to gain the level of visibility they need to effectively manage the security of software, services, systems, and users.

Some desperate network and security admins employ vast numbers of tools—some of which work well in one environment, others that work better elsewhere, and all of which have individual reporting mechanisms—but doing so only results in disparate sets of data that require further correlation. Therefore, while collecting the data may not be hard, forming a complete picture of all assets under management, network communication patterns and trends of those assets, deviations from baseline activities, non-compliance with policies, and even attacks in progress becomes almost impossible. However, all of this information is necessary to operate an effective cybersecurity program.

Visibility into network communications gives network and security teams the ability to assess how security is performing against business goals, and to ensure that business-critical systems are consistently available with the right information. The unfortunate truth, though, is that most companies don’t maintain an up-to-date map of their core assets (software, systems, hosts, users) or current baselines of software communications. Without full visibility into what software is communicating and how, security teams lose the ability to assess risk and adapt to changes in the environment (or to changes in the external environment that affect what connects into the network). Without consistent, gap-free visibility across all networks, security teams are blind to where core assets are in the network, what “talks” to them, and which other assets depend on them to work properly.

What you have helps determine what you need to protect

Visibility is always the first step in a cybersecurity strategy, because without it security and network operations teams can’t reasonably know what they need to protect (and frankly, there are too many threats in today’s cybersecurity landscape to focus on everything). Once they know what needs protection, the only way to determine appropriate controls and policies is to look at how software and services are communicating within the network perimeter. Gaining access to today’s networks has become too easy as a result of phishing, unpatched software, and other vulnerabilities, so it’s not enough for security teams to merely know which core assets are present in the network. Visibility into how assets communicate means the security team can quickly and easily identify when software is acting strangely, for instance, trying to connect to an unusual host or through a connection proxy. It’s not just about who or what is getting in, but about how software and users behave once on the network.
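To make that behavioral point concrete, here is a minimal sketch of baseline-based anomaly flagging, written in Python. It is not a description of any particular product, and the software and host names are hypothetical; it simply records which destinations each piece of software normally talks to during a learning period, then flags connections that fall outside that baseline.

from collections import defaultdict

class CommunicationBaseline:
    """Learns which destinations each piece of software normally talks to,
    then flags connections that fall outside that learned baseline."""

    def __init__(self):
        self._known = defaultdict(set)  # software name -> usual destinations

    def observe(self, software: str, destination: str) -> None:
        # Called during a learning period to record normal communications.
        self._known[software].add(destination)

    def is_anomalous(self, software: str, destination: str) -> bool:
        # Anything not seen during the learning period deserves a closer look.
        return destination not in self._known[software]

# Hypothetical usage: learn normal traffic, then check two new connections.
baseline = CommunicationBaseline()
baseline.observe("orders-service", "db01")
baseline.observe("orders-service", "payments-api")

print(baseline.is_anomalous("orders-service", "db01"))           # False: usual destination
print(baseline.is_anomalous("orders-service", "198.51.100.23"))  # True: unusual host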


How software and services behave determines security controls

Attackers have grown savvy to the fact that east-west (lateral) movement is hard for most companies to monitor reliably. Adversaries know they can, for example, slip past traditional address-based security technologies by blending in with “normal” network traffic, chain together a “multi-hop proxy,” or send command and control communications over a non-standard port to bypass misconfigured firewalls altogether. In essence, the most common tools implemented in organizations’ data centers and clouds to protect the network are blind to malicious and unauthorized traffic, because address-based controls can’t see into the identity of the software communicating. If this seems like a challenge in an on-premises environment, multiply the difficulty of gaining visibility into the behavior of all of the organization’s software by the number of clouds and containers in use, each with its own built-in security capabilities.

For these reasons, address-based controls are no longer sufficient for network security. While firewalls, DLP, IDS/IPS, etc. are adequate for identifying some types of breach activity, they cannot see into the identity of software and services communicating in and across hybrid networks, and therefore can’t empower security teams to determine appropriate levels of risk and control. If the organization can’t see when malware has been added to a business-critical application, for instance, how can it stop that application from communicating through the network and infecting other systems and users? If the tools that are supposed to highlight suspicious activity rely on network constructs that can be hijacked by adversaries, how can a security team ensure that it truly knows what’s happening on the network?

Gaining network visibility requires not just knowing what needs protection (although that’s an excellent first step), but going beyond that to learn typical behavior (how software normally communicates, with what, when, etc.). To fully understand typical behavior (communication patterns, trends, and so on), security teams must recognize the shortcomings of address-based tools, lest they be blind to attackers’ tactics and techniques. Identity-based software policies, by contrast, mean that security controls are based on and travel with the what—the core assets that need protecting—and can accurately spot when the how is out of whack, even if (when) the network itself changes. Which it will.
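To illustrate that contrast, the sketch below compares an address-based rule with an identity-based policy in Python. It assumes nothing about any vendor’s implementation; the connection fields, fingerprints, and service names are made up for illustration only.

from dataclasses import dataclass

@dataclass(frozen=True)
class Connection:
    src_ip: str
    dst_ip: str
    dst_port: int
    process_name: str    # identity attributes of the communicating software
    binary_sha256: str   # (hypothetical fields, for illustration)

def address_based_allow(conn: Connection) -> bool:
    # Address-based rule: anything reaching the right IP and port is allowed,
    # regardless of which software actually opened the connection.
    return conn.dst_ip == "10.0.0.5" and conn.dst_port == 5432

# Illustrative allowlist keyed on software identity rather than address.
TRUSTED_CLIENTS = {
    ("orders-service", "9f2a...placeholder"),
}

def identity_based_allow(conn: Connection) -> bool:
    # Identity-based policy: the decision keys on the identity of the software
    # itself, so it still holds when IP addresses or ports change.
    return (conn.process_name, conn.binary_sha256) in TRUSTED_CLIENTS

legit = Connection("10.0.1.7", "10.0.0.5", 5432, "orders-service", "9f2a...placeholder")
malware = Connection("10.0.1.7", "10.0.0.5", 5432, "dropper.bin", "ab41...placeholder")

print(address_based_allow(legit), address_based_allow(malware))    # True True: blind to identity
print(identity_based_allow(legit), identity_based_allow(malware))  # True False: the malware is denied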



Written by Katherine Teitler, Director of Content

Katherine Teitler leads content strategy and development for Edgewise Networks. In her role as Director of Content she is a storyteller, a translator, and a liaison between sales, marketing, and the customer. Prior to Edgewise, Katherine was the Director of Content for MISTI, a global training and events company, where she was in charge of digital content strategy and programming for the company's cybersecurity events, and the Director of Content at IANS, where she built, managed, and contributed to the company's research portal.