Issue #2: Secrets management
The second issue, and one that is particularly devastating to organizations, is inappropriate storage of secrets: things like encryption keys, API keys, passwords, administrative credentials, and any other piece of information that must be protected to keep the application secure. We've seen instances where this information was accessible to attackers (e.g., in misconfigured buckets, committed to GitHub, on servers that were subsequently compromised, in HTML source, etc.), leading directly to attacks or breaches.
Fixing this isn't rocket science. Just as you would maintain an inventory of physical keys (for example, to the front door of your office), it's important to maintain an inventory of the secrets you use in the cloud. Beyond that, regularly review how those secrets are stored, accessed, and used to confirm they are, in fact, protected.
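To make that concrete, here's a minimal sketch of the pattern in Python using boto3 against AWS Secrets Manager: fetch secrets at runtime from a managed store instead of hardcoding them, and enumerate what's in the store as the raw material for an inventory. The region and the secret name in the usage note are assumptions for illustration, not anything prescribed here.

```python
import boto3

REGION = "us-east-1"  # assumption: adjust to your region

def get_api_key(secret_name: str) -> str:
    """Fetch a secret at runtime instead of baking it into source control."""
    client = boto3.client("secretsmanager", region_name=REGION)
    # SecretString holds text secrets; binary secrets arrive as SecretBinary
    return client.get_secret_value(SecretId=secret_name)["SecretString"]

def inventory_secrets() -> None:
    """Enumerate stored secrets -- a starting point for a secrets inventory."""
    client = boto3.client("secretsmanager", region_name=REGION)
    for page in client.get_paginator("list_secrets").paginate():
        for secret in page["SecretList"]:
            # LastAccessedDate is absent for secrets that have never been read
            print(secret["Name"], secret.get("LastAccessedDate", "never accessed"))

# Usage (the secret name is hypothetical):
# api_key = get_api_key("prod/payments/api-key")
```

The point of the pattern is that the application holds a reference to the secret, not the secret itself, so rotating or revoking it never requires touching source code.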
Issue #3: Disabled logging/monitoring
All cloud services provide the ability to collect logs and telemetry, and some of the options (services like CloudWatch, CloudTrail, Azure Sentinel, etc.) are very sophisticated. However, none of it amounts to a hill of beans if logging is disabled, misconfigured, or never reviewed. Make sure you are capturing security-relevant events, and that someone is actually looking at them. This applies to SaaS services too, by the way: if a SaaS provider issues a maintenance or update bulletin, particularly one that touches security-relevant features, make sure someone reads it and evaluates it from a security perspective. Just as you would ensure that logging is enabled and monitored in an environment you control (for example, a server in your data center), ensure the same in the cloud.
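As one starting point, a short check like the sketch below (Python with boto3; it assumes AWS credentials with CloudTrail read permissions) can verify that CloudTrail actually exists and is actively logging, rather than assuming it is.

```python
import boto3

def audit_cloudtrail(region: str = "us-east-1") -> None:
    """Flag accounts where CloudTrail is missing or has silently stopped."""
    client = boto3.client("cloudtrail", region_name=region)
    trails = client.describe_trails()["trailList"]
    if not trails:
        print("WARNING: no CloudTrail trails are configured at all")
        return
    for trail in trails:
        # A trail can exist yet not be recording; check its live status
        status = client.get_trail_status(Name=trail["TrailARN"])
        if status["IsLogging"]:
            print(f"OK: trail {trail['Name']} is logging")
        else:
            print(f"WARNING: trail {trail['Name']} exists but is NOT logging")
```

A check like this only confirms that events are being captured; the harder (and human) half of the control is making sure someone reviews what's captured.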
Issue #4: Overly permissive hosts/containers/virtual machines
Nowadays, no security practitioner in their right mind would connect a physical server (or even a virtual one) directly to the open internet without putting it behind some kind of firewall or filter, right? And yet we see people doing more or less the equivalent of this in the cloud all the time. For example, we see etcd (port 2379) exposed to the open internet on Kubernetes clusters, we see legacy protocols (e.g., FTP) enabled on hosts in EC2 or Azure Virtual Machines, and we see (though more rarely) truly archaic protocols (rsh, rexec, telnet) still in use on legacy boxes that have been virtualized and moved to the cloud. Just as you'd lock down a host on-premises, both by safeguarding important ports and by restricting (or, ideally, disabling entirely) legacy and insecure protocols, the same rules apply in the cloud.
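One way to catch this class of problem is to sweep your firewall rules for risky ports exposed to the world. The sketch below (Python with boto3, scanning EC2 security groups; the port list is illustrative, not exhaustive, and it only inspects IPv4 ranges) flags rules open to 0.0.0.0/0.

```python
import boto3

# Ports we never want reachable from the open internet (illustrative):
# 21 FTP, 23 telnet, 512/513/514 rexec/rlogin/rsh, 2379 etcd
RISKY_PORTS = {21, 23, 512, 513, 514, 2379}

def audit_security_groups(region: str = "us-east-1") -> None:
    """Flag security group rules that expose risky ports to 0.0.0.0/0."""
    ec2 = boto3.client("ec2", region_name=region)
    for page in ec2.get_paginator("describe_security_groups").paginate():
        for sg in page["SecurityGroups"]:
            for rule in sg["IpPermissions"]:
                open_to_world = any(
                    r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
                )
                if not open_to_world:
                    continue
                lo, hi = rule.get("FromPort"), rule.get("ToPort")
                # FromPort/ToPort are absent when a rule covers all traffic
                if lo is None or any(lo <= p <= hi for p in RISKY_PORTS):
                    print(f"FLAG: {sg['GroupId']} ({sg['GroupName']}) "
                          f"opens {lo}-{hi} to the internet")
```

The equivalent sweep applies to Azure network security groups or GCP firewall rules; only the API calls change, not the principle.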
Issue #5: Lack of validation
This fifth issue isn't exactly a direct misconfiguration so much as a general failure to implement mechanisms that find and catch misconfigurations when they happen. You've heard the expression "trust but verify"? Well, in practice many organizations (intentionally or otherwise) take the position of "trust, and then trust some more." The way to ensure that misconfigurations are addressed quickly is to have someone looking for them and remediating them when they appear: validating permissions, reviewing how buckets and compute instances are configured, examining network filtering rules, and so on. In other words, have someone knowledgeable (be they an internal security resource, a cloud-savvy technology auditor, or a trusted third party) actually go out and check what permissions are applied and how services are configured. If you've done that once already, great; do it again. Do this periodically, on a set schedule, when new applications are deployed, or both. Periodic review ensures that, should the environment change, you will find and flag any potential issues before too much time elapses.
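For instance, one slice of a periodic bucket review might look something like this minimal sketch (Python with boto3; it checks only the per-bucket S3 Block Public Access settings, which is one piece of a real review, not the whole thing).

```python
import boto3
from botocore.exceptions import ClientError

def audit_bucket_public_access() -> None:
    """Flag buckets missing any of the four S3 public-access-block settings."""
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)[
                "PublicAccessBlockConfiguration"
            ]
            # All four flags should be True; anything else merits a look
            if not all(config.values()):
                print(f"FLAG: {name} has partial public-access blocking: {config}")
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                print(f"FLAG: {name} has no public-access block configured at all")
            else:
                raise
```

Run on a schedule (or wired into deployment pipelines), even a handful of checks like this turns "trust and then trust some more" back into "trust but verify."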