
5 Cloud Misconfigurations That Can Hurt You

We all know that the cloud can be a bit of a “mixed blessing” when it comes to security. On the one hand, economies of scale arising from the centralization of operational security can improve some elements of security: a cloud provider can maintain staffing levels and purchase tools that would be out of reach for most individual organizations on their own. On the other hand, there are areas where the cloud makes security a little more challenging. One of those areas is ensuring robust, hardened configuration of the services you use.

Under the shared responsibility model, it is the provider’s responsibility to ensure that customers have a solid foundation upon which to operate; it’s the customer’s responsibility to make use of the provider’s features and configuration options to keep data protected. This, as it turns out, has proven challenging in practice. For example, we’ve seen S3 bucket configuration issues expose data about 198 million US voters, millions of cable customers, and classified defense information (including encryption keys), along with numerous other gaffes brought about as a direct result of misconfigurations.

With this in mind, let’s look at some of the more common cloud misconfiguration mistakes with an eye to preventing them. This isn’t, of course, intended to be an exhaustive list of every possible thing that you could misconfigure in a cloud context (such a list would be thousands of pages long). Instead, these are the common issues that you might encounter—they’re common enough that keeping an eye open specifically for them is a good idea, and generic enough that they apply to pretty much any organization regardless of the particular set of service providers they use.

Issue #1: Storage access

Arguably the biggest issue right now (at least judged by how many breaches it has caused) is misconfiguration of storage objects, specifically through overly permissive access control settings. Excessive permissions occur when storage buckets are configured to allow access beyond what is intended (for example, to the public at large).

There are understandable reasons why this happens. One of the most common is a misunderstanding of the “Authenticated Users” group in AWS. Many developers working with buckets mistakenly assume that “Authenticated Users” refers to users authenticated within the context of their own organization or application when, in fact, it means anyone authenticated to AWS—in other words, effectively anyone in the world.

How do you find and fix storage access issues? A few ways: free automated scanning tools can help; in some cases the built-in security features of your platform of choice can flag misconfigured buckets; and administrators can periodically review permissions to make sure they remain appropriate.
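To make that periodic review concrete, here is a minimal sketch of the kind of check an administrator or script could run. The `public_grants` helper and the hard-coded grant list are illustrative assumptions; the grant dicts mirror the shape of boto3’s `get_bucket_acl()` response (in a real audit you would fetch them from the API), and the two group URIs are the actual AWS identifiers for “Everyone” and the misleadingly named “Authenticated Users”.

```python
# Grantee URIs that mean "the world at large". Note that the
# AuthenticatedUsers group is anyone with an AWS account, not just
# users of *your* application.
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(grants):
    """Return the ACL grants that expose a bucket publicly."""
    return [
        g for g in grants
        if g.get("Grantee", {}).get("Type") == "Group"
        and g.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES
    ]

# Illustrative data in the shape of a get_bucket_acl() "Grants" list:
# one safe owner grant and one risky world-readable grant.
grants = [
    {"Grantee": {"Type": "CanonicalUser", "ID": "owner-id"},
     "Permission": "FULL_CONTROL"},
    {"Grantee": {"Type": "Group",
                 "URI": "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"},
     "Permission": "READ"},
]
risky = public_grants(grants)
print(risky)  # only the AuthenticatedUsers READ grant is flagged
```

A real review would loop this over every bucket in every account, but the core question—“who, exactly, is in this grantee group?”—is the same.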


Issue #2: Secrets management

The second issue, one that can be particularly devastating to organizations, is inappropriate storage of secrets: encryption keys, API keys, passwords, administrative credentials, and any other piece of information that must be protected to keep an application secure. We’ve seen instances where this information was accessible to attackers (e.g., left in misconfigured buckets, committed to GitHub, stored on servers that were subsequently compromised, or embedded in HTML source), leading directly to an attack or a breach.

Fixing this isn’t rocket science. Just as you would maintain an inventory of physical keys (for example, to the front door of your office), it’s important to maintain an inventory of the secrets you use in the cloud. Beyond this, regularly review how those secrets are used and how they’re protected to confirm that they are, in fact, protected.
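Part of that review can be automated. Here is a deliberately small sketch of a scan for likely hard-coded secrets in source text; the pattern set is an illustrative assumption (real scanners and dedicated secrets managers go much further), though the `AKIA` prefix is the genuine format of an AWS access key ID.

```python
import re

# A few illustrative "does this look like a secret?" patterns.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hardcoded_password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]",
                                     re.IGNORECASE),
}

def find_secrets(text):
    """Return (pattern_name, matched_text) pairs for suspected secrets."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

# Example input: a config snippet that should never reach a repo.
sample = 'db_password = "hunter2"\naws_key = "AKIAIOSFODNN7EXAMPLE"\n'
for name, value in find_secrets(sample):
    print(name, "->", value)
```

Running something like this in a pre-commit hook or CI pipeline catches the “oops, that went to GitHub” class of leak before it leaves the building.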

Issue #3: Disabled logging/monitoring

All cloud services provide the ability to collect logs and telemetry, and some of those options (services like CloudWatch, CloudTrail, Azure Sentinel, etc.) are very sophisticated. However, they don’t amount to a hill of beans if they’re not enabled, not configured, or never reviewed. Make sure that you are capturing security-relevant events, and that someone is reviewing them. This applies to SaaS services too, by the way: if a SaaS provider issues a maintenance or update bulletin—particularly when features germane to security are modified—make sure someone is reading that report and evaluating it from a security perspective. Just as you would ensure that logging is enabled and monitored in an environment you control (for example, a server in your data center), take the same monitoring steps in the cloud environment.
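The “is it actually on?” question lends itself to a simple automated check. The sketch below is a hypothetical helper whose trail dicts loosely mirror fields from boto3’s CloudTrail `describe_trails()` and `get_trail_status()` responses; in a real audit you would pull these from the API rather than hard-coding them.

```python
def logging_gaps(trails):
    """Return (trail_name, reason) pairs for trails with coverage gaps."""
    gaps = []
    for trail in trails:
        if not trail.get("IsLogging"):
            gaps.append((trail["Name"], "logging disabled"))
        elif not trail.get("IsMultiRegionTrail"):
            # A single-region trail silently misses activity elsewhere.
            gaps.append((trail["Name"], "single-region only"))
    return gaps

# Illustrative data: one healthy trail, one that was quietly switched off.
trails = [
    {"Name": "org-audit", "IsLogging": True, "IsMultiRegionTrail": True},
    {"Name": "dev-trail", "IsLogging": False, "IsMultiRegionTrail": True},
]
print(logging_gaps(trails))  # only dev-trail is flagged
```

The point isn’t this particular function; it’s that “logging enabled and multi-region” is a yes/no fact you can verify on a schedule instead of assuming.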

Issue #4: Overly permissive hosts/containers/virtual machines

Nowadays, no security practitioner in their right mind would connect a physical server (or even a virtual one) directly to the open internet without some kind of firewall or filter in front of it, right? And yet we see people doing more or less the equivalent of this in the cloud all the time. For example, we see etcd (port 2379) exposed to the open internet on Kubernetes clusters, we see legacy ports/protocols (e.g., FTP) enabled on hosts in EC2 or Azure Virtual Machines, and we see (though more rarely) insecure legacy protocols (rsh, rexec, telnet) still in use on boxes that have been virtualized and moved to the cloud. Just as you’d lock down a host on-premises—both by safeguarding important ports and by restricting (or, ideally, disabling entirely) legacy and insecure protocols—the same rules apply in the cloud.
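These exposures are easy to look for programmatically. The sketch below is a hypothetical scan over ingress rules whose dicts loosely follow the `IpPermissions` entries from boto3’s EC2 `describe_security_groups()`; the port list simply encodes the examples named above and is not exhaustive.

```python
# Ports called out in the text: legacy protocols plus etcd.
RISKY_PORTS = {21: "ftp", 23: "telnet", 512: "rexec", 513: "rlogin",
               514: "rsh", 2379: "etcd"}

def exposed_risky_rules(rules):
    """Return (port, service) pairs that are open to 0.0.0.0/0."""
    exposed = []
    for rule in rules:
        open_to_world = any(r.get("CidrIp") == "0.0.0.0/0"
                            for r in rule.get("IpRanges", []))
        if not open_to_world:
            continue
        for port, service in RISKY_PORTS.items():
            if rule.get("FromPort", -1) <= port <= rule.get("ToPort", -1):
                exposed.append((port, service))
    return exposed

# Illustrative rules: HTTPS to the world (fine), etcd to the world
# (bad), and telnet restricted to an internal range (less bad).
rules = [
    {"FromPort": 443, "ToPort": 443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    {"FromPort": 2379, "ToPort": 2380, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    {"FromPort": 23, "ToPort": 23, "IpRanges": [{"CidrIp": "10.0.0.0/8"}]},
]
print(exposed_risky_rules(rules))  # the internet-facing etcd rule is flagged
```

In other words, the on-premises instinct (“why is telnet open on that box?”) translates directly into a query you can run against your cloud provider’s rule data.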

Issue #5: Lack of validation

This fifth issue isn’t a direct misconfiguration so much as a general failure to implement mechanisms to find and catch misconfigurations when they happen. You’ve heard the expression “trust but verify”? Well, in practice many organizations (intentionally or otherwise) take a position of “trust and then trust some more.” The way to ensure that misconfigurations are addressed quickly is to have someone looking for them and remediating them when they appear. That means having someone knowledgeable (be they an internal security resource, a cloud-savvy technology auditor, or a trusted third party) actually go out and validate permissions, review how buckets and compute instances are configured, check network filtering rules, and so on. If you’ve done that once already, great—do it again. Do this periodically, on a set schedule, when new applications are deployed, or both. Periodic review ensures that, should the environment change, you will find and flag potential issues before too much time elapses.
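The human review is the point, but the mechanics can be wrapped in a trivially simple harness. This sketch is purely illustrative: each check is a named callable returning pass/fail, and the lambdas stand in for real checks (bucket permissions, open ports, logging status, and so on) that a real script would implement.

```python
from datetime import datetime, timezone

def run_validation(checks):
    """Run every (name, check) pair; return the names of failures."""
    return [name for name, check in checks if not check()]

# Placeholder checks; in practice each would query your environment.
checks = [
    ("buckets_not_public", lambda: True),     # placeholder: passes
    ("no_risky_open_ports", lambda: False),   # placeholder: fails
    ("audit_logging_enabled", lambda: True),  # placeholder: passes
]

failed = run_validation(checks)
stamp = datetime.now(timezone.utc).isoformat()
print(f"{stamp}: {len(failed)} finding(s): {failed}")
```

Run on a schedule (cron, a CI job, whatever fits), this turns “someone should look at that occasionally” into a dated record of what was checked and what failed.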


Written by Ed Moyle, General Manager and Chief Content Officer

Ed Moyle is currently General Manager and Chief Content Officer at Prelude Institute. Prior to joining Prelude, Ed was Director of Thought Leadership and Research for ISACA and a founding partner of the analyst firm Security Curve. In his 20+ years in information security, Ed has held numerous positions including: Senior Security Strategist for Savvis (now CenturyLink), Senior Manager with CTG's global security practice, Vice President and Information Security Officer for Merrill Lynch Investment Managers, and Senior Security Analyst with Trintech. Ed is co-author of "Cryptographic Libraries for Developers" and a frequent contributor to the Information Security industry as author, public speaker, and analyst.