Protecting Key Assets with Zero Trust

The 2019 Verizon Data Breach Investigations Report (DBIR) was published earlier this month. Every year it is interesting to see the broad range of cyber activity reported and to track how that activity changes (or not) from year to year. The authors are careful to explain that the trends detailed in the report are only as good as the data supplied to them by the organizations participating in any given year, yet the DBIR is the most extensive report of its kind and certainly provides insights defenders can use to inform their cybersecurity strategies. Though every organization is different—with different data to protect, different technology in use, different risk tolerance, etc.—and therefore must construct a security program that supports its unique business, the information presented in the DBIR shines a light on the cyber nooks and crannies which, if left unattended, are most likely to result in an incident or breach.

Not terribly surprisingly, the most targeted asset category in data breaches was servers: per the DBIR, 63 percent of breaches reported for 2018 involved the compromise of a server, whether a web server, mail server, or database server. This number is relatively unchanged from 2013, which makes sense.

[Figure. Source: 2019 Verizon Data Breach Investigations Report]

While categories like human hacking have become more prominent over the years, the most surefooted way to access a company’s juicy data, be it PII or trade secrets, is through exploiting servers. Data has to be stored and transmitted somewhere, and that “somewhere” always includes a server. Databases, for instance, hold companies’ most valuable data at rest, and if a database is compromised, resulting in data loss or availability issues, the company could face serious productivity losses and, in turn, financial repercussions. An attack on a database could also lead to compliance issues. No organization wants the headache of dealing with regulators and lawmakers on top of managing an incident.


Secure servers make for secure data

Protecting servers, therefore, should be a key element of any organization’s security plan, and perimeter defense isn’t enough. While many security and networking teams understand the urgency of protecting anything connected directly to the internet (as some servers are), the reality is that organizations’ internal networks often have weaker security than anything that connects outside the firewall. In today’s threat landscape, though, internal networks must be treated as though they’re as potentially hostile as the internet, meaning trust relationships break down even when traffic is internal-only. It’s too easy for attackers to stealthily enter the network and pivot to applications or services that aren’t directly exposed. If the organization does not have proper security controls specifically designed to harden servers, myriad other vulnerabilities—misconfigurations, overly permissive admin access, weak passwords, and unpatched systems—will lead to server compromise.

Adversaries understand this, which is why, for years running, servers have been an asset highly coveted by criminals and would-be thieves.

Securing servers may not be the most obvious starting point. After all, the easiest and most direct way for cyber criminals to enter networks is through social engineering. This, too, has been a key point in the DBIR over the years; phishing and the use of stolen credentials have steadily risen as the “top threat action varieties in breaches” since 2013. But keeping in mind that a compromised perimeter is only the first stage of an attack, better methods of protecting the soft, chewy insides of the network and (more importantly) the data and services that communicate over those networks are critical. Ultimately, it’s the data criminals are after, not the network itself.

Simplifying security with zero trust

Locking down servers isn’t simple. As mentioned above, they have to be configured properly, they must be patched regularly (which isn’t always as easy as it looks on paper), access controls should be least-privilege, passwords should be complex and protected with multi-factor authentication... Taking these processes as standalone action items is overwhelming. However, in aggregate it all adds up to zero trust. A network built on the concept of zero trust means that any access request failing to meet zero trust authentication requirements is denied by default. For example, an attacker could use stolen (yet valid) credentials to enter a network undetected and then move laterally to access a database server. This is clearly a compromise. Yet if the attacker then tries to plant malware or exfiltrate large amounts of data from the database, those actions would be blocked because they’re unrecognized in a zero trust environment, and a full-blown breach could be avoided.
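
To make the deny-by-default idea concrete, here is a minimal sketch in Python. It is illustrative only, not any product’s API; the workload, asset, and policy names are invented. Every request is checked against an explicit allow list, and anything unrecognized, including an attacker’s tooling operating behind stolen credentials, is denied.

# Minimal sketch of deny-by-default (zero trust) access decisions.
# All names here are hypothetical and for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    source_workload: str   # verified identity of the requesting software
    destination: str       # the asset being accessed, e.g., a database server
    operation: str         # what the requester is trying to do

# The policy set enumerates the only communications that are permitted.
# Anything not listed is denied by default; there is no implicit trust
# just because a request originates inside the network.
ALLOWED_POLICIES = {
    ("billing-api", "customer-db", "read"),
    ("billing-api", "customer-db", "write"),
    ("reporting-job", "customer-db", "read"),
}

def is_allowed(request: AccessRequest) -> bool:
    """Allow only requests that match an explicit policy; deny everything else."""
    key = (request.source_workload, request.destination, request.operation)
    return key in ALLOWED_POLICIES

# An attacker who lands on an internal host with stolen (but valid) credentials
# still can't exfiltrate: the unrecognized workload/operation matches no policy.
print(is_allowed(AccessRequest("billing-api", "customer-db", "read")))          # True
print(is_allowed(AccessRequest("unknown-tool", "customer-db", "bulk-export")))  # False

In a real deployment the identity check would rest on something stronger than names, such as cryptographic workload attestation, but the decision logic is the same: no explicit match, no communication.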

All communication in a zero trust network must be verified before it’s allowed to be sent or received—for every access control decision, on a continuous basis. One allowed connection does not determine the next, or the one after that, and so on. Therefore, even if an attacker makes their way onto the network (which could be on-premises, cloud, container, virtual, etc.), an actual breach won’t necessarily follow, because unrecognized software (e.g., malware) won’t be permitted to communicate, anomalous behaviors (e.g., exfiltrating terabytes of data) will be blocked, and new service requests (e.g., use of C2 to send illicit commands) will be denied.
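
To illustrate the per-decision model, the hypothetical Python sketch below (the software names and the byte threshold are invented) verifies every connection independently. There is no cached “trusted” state: an earlier allowed connection grants nothing to the next one, unrecognized software never gets to communicate, and anomalous transfer volumes are rejected.

# Hypothetical sketch of continuous, per-connection verification. Each
# attempted communication is evaluated on its own; no trust carries over
# from earlier decisions. Names and thresholds are invented.
from typing import NamedTuple

class Connection(NamedTuple):
    software_id: str       # verified identity of the sending software
    destination: str       # the service or asset being contacted
    bytes_requested: int   # size of the transfer being attempted

KNOWN_SOFTWARE = {"billing-api", "reporting-job"}
MAX_BYTES_PER_CONNECTION = 50_000_000  # illustrative anomaly threshold

def verify(conn: Connection) -> bool:
    """Re-verify every connection; one allowed connection grants nothing to the next."""
    if conn.software_id not in KNOWN_SOFTWARE:
        return False  # unrecognized software (e.g., malware or a C2 implant) never talks
    if conn.bytes_requested > MAX_BYTES_PER_CONNECTION:
        return False  # anomalous behavior, such as mass exfiltration, is blocked
    return True

# A previously allowed sender gets no free pass on its next request:
print(verify(Connection("reporting-job", "customer-db", 10_000)))         # True
print(verify(Connection("reporting-job", "customer-db", 5_000_000_000)))  # False: anomalous volume
print(verify(Connection("c2-implant", "customer-db", 10_000)))            # False: unrecognized software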

As an architectural approach or methodology, zero trust needs to be applied through dedicated controls or policies. Those controls/policies can be tied directly to the assets you need to protect, the network on which the assets sit, or the endpoints that lead to interaction with the assets. There is no “wrong” way to implement zero trust, but logically, all roads lead to the asset. We know from the DBIR that servers are the most breached asset category, and therefore starting your protection strategy at the server level, using a zero trust approach, is the most reliable and effective way to keep your data safe from would-be intruders.
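
One way to picture “all roads lead to the asset” is a policy attached directly to the server you most need to protect. The sketch below, again hypothetical Python with invented names, scopes the allow list to a single database asset rather than to a network segment or an endpoint.

# Hypothetical asset-centric policy: protection starts at the server itself.
# The policy travels with the asset (here, a database server), not with a
# network segment or endpoint. All names are invented.
ASSET_POLICIES = {
    "customer-db": {
        "allowed_clients": {"billing-api", "reporting-job"},
        "allowed_operations": {"read", "write"},
    },
}

def check(asset: str, client: str, operation: str) -> str:
    policy = ASSET_POLICIES.get(asset)
    if policy is None:
        return "deny"  # no policy means no implicit trust
    if client in policy["allowed_clients"] and operation in policy["allowed_operations"]:
        return "allow"
    return "deny"      # anything not explicitly permitted is refused

print(check("customer-db", "billing-api", "read"))   # allow
print(check("customer-db", "laptop-1234", "read"))   # deny: client isn't in the asset's policy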

 

Written by Katherine Teitler, Director of Content

Katherine Teitler leads content strategy and development for Edgewise Networks. In her role as Director of Content she is a storyteller; a translator; and liaison between sales, marketing, and the customer. Prior to Edgewise, Katherine was the Director of Content for MISTI, a global training and events company, where she was in charge of digital content strategy and programming for the company's cybersecurity events, and the Director of Content at IANS, where she built, managed, and contributed to the company's research portal.