Least privilege access
Overly permissive access controls are a key facilitator of insider threat. When admins provision broader access than a job requires, employees can interact with data and systems without resistance. People expect quick, easy access without jumping through hoops: they assume they can connect to any system they need to perform their job, anytime they want. In an effort to meet business demands, technology teams frequently leave overly permissive controls in place, but liberal access controls add unnecessary risk.
The principle of least privilege limits the access that users, systems, and processes have to networked resources to what their roles and responsibilities require. Eliminating unnecessary or overly permissive access shrinks the network attack surface and reduces the likelihood that an attacker can escalate privileges and accomplish a breach.
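The deny-by-default logic behind least privilege can be sketched in a few lines. This is an illustrative sketch only; the role names and permission strings are hypothetical, not drawn from any specific product.

```python
# Minimal role-based least-privilege check: a role carries only the
# permissions explicitly granted to it, and everything else is denied.

ROLE_PERMISSIONS = {
    "hr-analyst": {"read:hr-records"},
    "db-admin": {"read:customer-db", "write:customer-db"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: only permissions explicitly granted to the role pass."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("hr-analyst", "read:hr-records"))   # True
print(is_allowed("hr-analyst", "write:customer-db")) # False
```

The key design choice is the default: an unknown role or an unlisted permission fails closed, which is what removes the "overly permissive" risk described above.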
Continuous authentication and authorization
Historically, network access worked much like a lock on a front door; anyone with the right key was able to get inside. Once inside the building, that person was free to go wherever they pleased. Over time, network and security admins realized that a one-time security authentication at the “front door” (i.e., perimeter) wasn’t enough and set up additional perimeters through which users and processes had to authenticate before being allowed to access critical data, applications, or services.
However, the idea of trust remained: credentials checked once at each juncture permitted the user or process easy access to anything inside.
Zero trust abandons the idea of a trusted user or process and requires a check on authorization and authentication every time access is requested; previous access doesn’t determine future access (because attackers can intercept communications). To prevent an attacker from piggybacking on authorized users or processes, a zero trust network creates access permissions that are dynamic and based on a wide collection of attributes — as opposed to a username+password combination, location-based protocols, or other static information. This combination of attributes forms an identity for every user and process, and if the identity is altered, access is denied. If the user/process is acting in an unexpected way (e.g., sending excessive amounts of data), the action is blocked. Continuous authentication and authorization prevents “bad” from happening because every action is checked.
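The per-request check described above can be sketched as follows. This is a simplified illustration, not a reference implementation; the attribute names and the transfer threshold are assumptions made for the example.

```python
# Sketch of continuous authorization: every request is evaluated against the
# caller's current attributes and recent behavior. Nothing is granted on the
# basis of a previous successful check.

from dataclasses import dataclass

@dataclass(frozen=True)
class Identity:
    user: str
    device_trusted: bool
    mfa_verified: bool

def authorize(identity: Identity, bytes_sent_last_minute: int) -> bool:
    # Attributes are re-evaluated on each request; prior access carries no weight.
    if not (identity.device_trusted and identity.mfa_verified):
        return False
    # Behavioral check: block anomalous transfer volumes (threshold is illustrative).
    if bytes_sent_last_minute > 50_000_000:
        return False
    return True

alice = Identity("alice", device_trusted=True, mfa_verified=True)
print(authorize(alice, 1_000))       # True
print(authorize(alice, 90_000_000))  # False: excessive data transfer is blocked
```

Because `authorize` is called on every request, a user whose device loses its trusted status, or who starts exfiltrating data mid-session, is cut off immediately rather than at the next login.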
Most security practitioners think of multi-factor authentication as a user control that reduces the likelihood of an adversary using a legitimate set of credentials to gain unauthorized access to applications, data, or systems. However, databases, applications, hosts, servers, and processes also require access permissions to function normally.
In a zero trust network, assets/resources are assigned an identity (just like users), but each identity is based on a collection of attributes taken from source data (rather than network-based information or a username+password combo). This provides the contextual information that allows the system to determine the true authenticity of network communication requests, without requiring a user or administrator to take further action. Multi-factor authentication in userland is often resisted because it means the user must supply additional information before receiving permission to access the requested resource. But multi-factor authentication in a zero trust network can happen automatically and seamlessly because identities are collections of multiple factors which, in aggregate, cannot be changed by an attacker, be that person an employee or an external threat actor.
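One way to picture an identity built from multiple attributes is as a fingerprint over those attributes: change any one factor and the fingerprint no longer matches. The sketch below makes that concrete; the attribute names and values are hypothetical, and real systems derive identity from far richer source data.

```python
# Sketch: a workload identity derived from multiple source attributes.
# Altering any single attribute changes the fingerprint, so a tampered or
# spoofed process no longer matches its enrolled identity.

import hashlib

def fingerprint(attrs: dict) -> str:
    # Canonicalize attributes (sorted keys) so the hash is deterministic.
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()

enrolled = fingerprint({
    "binary_hash": "ab12...",
    "host": "db-host-01",
    "owner": "payments-svc",
})

# An attacker swaps in a tampered binary: one attribute differs,
# so the observed identity fails to match and access is denied.
observed = fingerprint({
    "binary_hash": "ee99...",
    "host": "db-host-01",
    "owner": "payments-svc",
})

print(observed == enrolled)  # False: access denied
```

This is why the aggregate is hard to forge: an attacker would have to reproduce every contributing attribute at once, not just steal a single credential.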
Zero trust segmentation
By now, most organizations understand that flat networks are easily breached. As with the open building example used earlier, if a user/attacker is given free rein to move laterally on the network and access any asset, data will likely be stolen, destroyed, or accessed by people who shouldn't have access. Companies, therefore, have implemented varying segmentation methods over the years to keep sensitive data segregated from other parts of the network. Firewalls are the most common tool in the security practitioner's toolbox, but firewalls only work as well as the network constructs and location data they rely on are accurate.
A modern, zero trust segmentation strategy shifts focus away from the network to what is communicating on the network — hosts, servers, applications, etc. Rather than creating perimeters around the whole network or micro-perimeters around sections of the network, zero trust segmentation a) uses identity as the basis for perimeterization, b) continuously authorizes and authenticates communicating assets, and c) enforces control based on communicating assets. This last point is especially critical when trying to prevent attacks. If the security/networking team can segment the network based on assets rather than network constructs, the hassle of traditional segmentation is eliminated and security policies remain in place even if the network changes.
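The difference between identity-based and network-based segmentation can be shown in a short sketch. The asset names and allowed flows below are hypothetical, chosen only to illustrate the idea of keying policy to identity rather than to IP addresses or subnets.

```python
# Sketch of identity-based segmentation: the policy is expressed in terms of
# asset identities, not network constructs, so it survives IP or subnet
# changes without being rewritten.

ALLOWED_FLOWS = {
    ("web-frontend", "payments-api"),
    ("payments-api", "customer-db"),
}

def flow_allowed(src_identity: str, dst_identity: str) -> bool:
    """Deny by default: only explicitly allowed identity pairs may communicate."""
    return (src_identity, dst_identity) in ALLOWED_FLOWS

# The web tier may reach the API but never the database directly,
# regardless of which IP addresses the workloads currently hold.
print(flow_allowed("web-frontend", "payments-api"))  # True
print(flow_allowed("web-frontend", "customer-db"))   # False
```

Contrast this with a firewall rule keyed to `10.0.2.0/24`: move the database to a new subnet and the rule silently stops protecting it, whereas the identity-keyed policy above is unaffected.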
The vast majority of insider attacks happen because an employee wants to steal or destroy company-proprietary data. Therefore, when looking at security strategies to prevent insider threat, it makes sense to put security control as close as possible to the thing attackers want: the data. Traditional security tooling focuses on protecting the network in which the data lives, or on detecting abnormal or unauthorized activity by a user. A zero trust network instead places the strongest protection around the most sought-after asset rather than the environment in which that asset is communicating. Today's networks change constantly; what matters most is ensuring that no unauthorized party, whether they were assigned legitimate credentials or stole them, can cause a data breach.