
How to keep on top of cloud security best practices

Paid Feature In an era beset by hackers at every turn, it’s no small irony that the fastest-growing security threat to business data might now be the self-inflicted wound of cloud service misconfiguration.

In the cloud, a misconfiguration is any mistake in the way a service has been set up that could leave the environment exposed to a breach. These types of errors have always been part of corporate IT, but inside a corporate data center they are usually no sweat. In the cloud, life is very different. Here, the effect of any mistake is amplified, partly because it can be detected from the public internet but also because of the way security controls work in the cloud.

Focusing on accidental misconfiguration runs counter to the traditional assumption that the biggest threats are always external or malicious. Undoubtedly, when a misconfiguration is exploited, it is because the error has been noticed by a professional hacker or possibly a malicious insider. But the deeper question should not simply be who exploited an error but who caused it. In most cases, misconfigurations happen because of an innocent mistake rather than bad intentions.

The question, then, is why misconfigurations happen and how they can be prevented. Every organization migrating to cloud services needs to set out with the assumption that misconfigurations are a possibility and resolve to put in place the correct processes to make them less likely.

The theme jumps out of a recent Trend Micro analysis of high-profile cloud breaches, which came to the surprising conclusion that in the majority of cases the exposure stemmed from a single misconfiguration. Only one incident, the widely dissected Capital One breach of 2019, involved two: a misconfigured Web Application Firewall (WAF) combined with a Server-Side Request Forgery (SSRF) vulnerability.

As Trend Micro points out, Capital One is considered a mature cloud user. “This is a team that knows what they are doing. And yet, they still made a mistake.” The platform itself, AWS, was a blameless host.
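
To make the chain concrete: in an SSRF attack of this kind, the attacker tricks a server into fetching an internal URL on its behalf. The sketch below, in Python, is purely illustrative of what the relayed requests look like against AWS's original (IMDSv1) instance metadata service, which answers plain HTTP requests from the instance with temporary IAM credentials; the newer IMDSv2 requires a session token precisely to blunt this class of attack.

```python
# Illustrative only: the requests an SSRF flaw coerces a vulnerable
# server into making. Run from outside EC2, these calls simply fail.
import requests

# Link-local address of the EC2 instance metadata service: reachable
# from the instance itself, never from the public internet.
METADATA = "http://169.254.169.254/latest/meta-data"

# The first request lists the IAM role attached to the instance...
role = requests.get(f"{METADATA}/iam/security-credentials/", timeout=2).text.strip()

# ...and the second returns that role's temporary credentials
# (AccessKeyId, SecretAccessKey, Token) as JSON.
creds = requests.get(f"{METADATA}/iam/security-credentials/{role}", timeout=2).json()
print(creds["AccessKeyId"])
```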

When things go wrong

Other examples of what misconfigurations can lead to are not hard to find. In 2017, an UpGuard researcher discovered a large cache of US Central Command (Centcom) data left exposed to anyone with a free AWS account. Amazon S3 buckets are private by default, which means someone had changed this setting in error, exposing three archives.

In 2019, Israeli researchers stumbled on a 179GB AWS-hosted Elasticsearch database of travel and hotel reservation data, including some relating to US military personnel. The list could go on for a paragraph, but this one is worth adding simply to underline how popular tools such as Elasticsearch can be botched by the unwary, with disastrous consequences.

Most misconfigurations go unreported, and in an unknown but probably substantial number of cases are never detected at all. Misconfiguration ranked second only to hacking as the underlying cause of data breaches in the 2021 Verizon Data Breach Investigations Report (DBIR).

Getting bigger

The growing awareness of cloud misconfigurations comes at a time of huge growth in cloud platforms. The drivers for this look unstoppable and include remote working, the boom in AI automation, the ongoing digitalization of ecommerce, and the need to use cloud resources for resilience, backup, and application development. At a lower level, there is the emergence of hybrid cloud technologies that make it possible to mix and match different public and private clouds under one codebase and a single set of development and management tools.

This huge expansion in cloud use increases the risk and likelihood of breaches in ways that not all organizations are ready to acknowledge. When Trend Micro surveyed 2,500 global decision makers in late 2020, they uncovered mixed evidence. On one hand, 87 percent said they were fully or mostly in control of their remote working environment, with 51 percent crediting the acceleration of cloud migration as an influence that had improved their security best practice.

But in other ways, the cloud was associated with a variety of security anxieties, with 45 percent citing security as a significant barrier to cloud adoption. The top operational worries were the ability to set consistent security policies (35 percent), securing traffic flows to and from the cloud (33 percent), and patching (33 percent). Migration to a new set of security tools was another theme, with data privacy (43 percent), staff training (37 percent), and compliance worries (36 percent) mentioned as other hurdles.

Why do misconfigurations happen?

Misconfigurations can happen at different levels, including in the cloud service itself, in application configuration, in virtualization layers or operating systems, and in the underlying hardware management. Sometimes these mistakes are simple oversights, while in other cases there is something wrong with the security process around a control. A compounding factor is the need to impose the same controls across different and sometimes divergent services, for example AWS and Azure, something which is becoming a more pressing problem as hybrid clouds grow in popularity.

Trend Micro lists the top five mistakes that explain many misconfigurations.

Storage access

In AWS, an ‘authenticated user’ granted ‘full control’ is anyone who has an AWS account, not only authorized users within the organization, as is commonly assumed. As the incidents mentioned previously demonstrate, this is one of the commonest mistakes to make. Others include not turning on logging and failing to configure encryption properly. Anyone getting their hands into AWS should understand how to use the platform’s Identity and Access Management (IAM) settings, define bucket policies, and implement access control.
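
As an illustration, here is a minimal audit sketch, assuming Python and the boto3 AWS SDK, that flags buckets whose ACLs grant permissions to the global ‘AllUsers’ or ‘AuthenticatedUsers’ groups:

```python
# A minimal sketch: list S3 buckets whose ACLs grant access to the
# public 'AllUsers' group or to any authenticated AWS account holder.
import boto3

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        grantee = grant["Grantee"]
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
            print(f"{bucket['Name']}: {grant['Permission']} granted to {grantee['URI']}")
```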

Credential mismanagement

Secrets such as passwords, tokens, and API keys can be accidentally exposed in numerous ways: left in public GitHub repositories, stolen, over-shared, or granted access to too many resources.
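
One mitigation is to scan code for secrets before it is pushed. The sketch below is a bare-bones illustration of the idea, not a substitute for dedicated scanners; the regex patterns are assumptions covering only a couple of common formats, such as AWS access key IDs.

```python
# A rough sketch: walk a directory tree and flag strings that look
# like accidentally committed credentials.
import re
import sys
from pathlib import Path

PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Possible secret/API key": re.compile(
        r"(?i)(secret|api[_-]?key|token)\s*[=:]\s*['\"][A-Za-z0-9/+=]{20,}['\"]"
    ),
}

for path in Path(sys.argv[1]).rglob("*"):
    if not path.is_file():
        continue
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        continue
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            # Print only a prefix so the scan itself doesn't leak secrets
            print(f"{path}: {label}: {match.group(0)[:12]}...")
```

Run as, for example, `python scan.py ./my-repo` before committing.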

Disabled logging

Logging tracks all changes to a cloud environment and is also where platform service announcements appear. Disabling logging, or failing to turn it on, makes it impossible to spot evidence of unauthorized access or to track urgent security and update announcements.
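
On AWS, for example, the audit trail lives in CloudTrail, and it is straightforward to check that configured trails are actually logging. A minimal sketch with boto3:

```python
# A minimal sketch: flag CloudTrail trails that exist but are not
# currently recording events.
import boto3

cloudtrail = boto3.client("cloudtrail")
trails = cloudtrail.describe_trails()["trailList"]

if not trails:
    print("No CloudTrail trails configured at all")

for trail in trails:
    status = cloudtrail.get_trail_status(Name=trail["TrailARN"])
    if not status["IsLogging"]:
        print(f"Trail {trail['Name']} exists but logging is disabled")
```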

Legacy admin behavior

Another common mistake is to allow access to containers and virtual machines over legacy protocols such as FTP and telnet, or to expose Kubernetes etcd endpoints where they can be indexed by search engines such as Shodan.
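
A related check is to scan security groups for legacy admin ports left open to the whole internet. A sketch with boto3 follows; the port list, including etcd’s default 2379, is an illustrative assumption:

```python
# A minimal sketch: find EC2 security group rules that open legacy
# admin ports to 0.0.0.0/0 (the whole internet).
import boto3

LEGACY_PORTS = {21: "FTP", 23: "telnet", 2379: "etcd"}  # assumed list

ec2 = boto3.client("ec2")
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg["IpPermissions"]:
        from_port = rule.get("FromPort")  # absent for all-traffic rules
        if from_port not in LEGACY_PORTS:
            continue
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print(f"{sg['GroupId']}: {LEGACY_PORTS[from_port]} "
                      f"(port {from_port}) open to the world")
```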

Lack of validation

Making the above worse, many organizations lack a proper system for detecting when a misconfiguration has occurred. Ideally, this should be part of someone’s job description and run as a routine check every time a change is made. The checking process should also be integrated into the auditing schedule.
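
In skeleton form, such a routine might look like the sketch below, run from CI on every infrastructure change. The individual check functions here are hypothetical placeholders standing in for checks like the ones sketched in the sections above.

```python
# A sketch of a validation routine: run every check, print findings,
# and exit non-zero so a CI pipeline fails when anything is flagged.
import sys
from typing import Callable, List

# Hypothetical placeholders for real checks (bucket ACLs, CloudTrail
# status, open legacy ports, and so on).
def check_buckets() -> List[str]:
    return []

def check_trails() -> List[str]:
    return []

CHECKS: List[Callable[[], List[str]]] = [check_buckets, check_trails]

def run_validation() -> int:
    findings = [finding for check in CHECKS for finding in check()]
    for finding in findings:
        print(finding)
    return 1 if findings else 0

if __name__ == "__main__":
    sys.exit(run_validation())
```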

The ‘only testing’ fallacy

A common theme is the frequency with which developers post test data to a cloud database and forget to secure it, on the assumption that the data is less sensitive or that the test is short term. It’s as if the normal rules don’t apply to a test system, or would take too long to implement, or the developer is simply inexperienced in applying them.

Shared responsibility model

A secure cloud is always a mixture of the cloud service provider’s platform and the management of that service by the customer. The evidence of frequent breaches suggests there is a tendency for customers to make assumptions about the responsibility of the service provider, or to assume the risk lies with the platform and not the way it is used.

In response to this, Amazon, Microsoft, and Google have published their own versions of the shared responsibility model (SRM) which broadly states that the provider is responsible for the platform and the customer for the data, with the precise demarcation depending on whether the cloud service is IaaS, PaaS, or SaaS.

Fixing the hole

Cloud security can be upgraded in many ways, including implementing least-privilege access, enabling audit logs, firewalling unauthorized traffic, tightening identity and access management, rotating keys more frequently, and ensuring multi-factor authentication is turned on for all admin access. Likewise, responsibilities under the SRM need to be analyzed carefully and not taken as read.
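
Two of those items, key rotation and MFA, lend themselves to simple automated checks. A minimal sketch using boto3, where the 90-day rotation threshold is an assumed policy rather than an AWS default:

```python
# A minimal sketch: flag stale IAM access keys and users without MFA.
from datetime import datetime, timezone
import boto3

MAX_KEY_AGE_DAYS = 90  # assumed rotation policy

iam = boto3.client("iam")
now = datetime.now(timezone.utc)

for user in iam.list_users()["Users"]:
    name = user["UserName"]
    # Key rotation: flag access keys older than the policy allows
    for key in iam.list_access_keys(UserName=name)["AccessKeyMetadata"]:
        age = (now - key["CreateDate"]).days
        if age > MAX_KEY_AGE_DAYS:
            print(f"{name}: access key {key['AccessKeyId']} is {age} days old")
    # MFA: flag users with no MFA device enrolled
    if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
        print(f"{name}: no MFA device enrolled")
```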

But the most important remediation of all is to put in place a system that allows misconfigurations to be detected and quickly rectified. Trend Micro’s approach to this issue is Trend Micro Cloud One – Conformity, which provides central visibility and real-time monitoring of cloud infrastructure. Conformity can perform almost 1,000 different configuration best-practice checks, with auto-remediation, for AWS, Azure, and Google Cloud, including in hybrid cloud setups.

The benefit of a misconfiguration checking system is that it reduces the burden of what might otherwise turn into a complex undertaking, lowering the risk of exposure in measurable ways while integrating this into the auditing process. That this is even necessary is another cautionary lesson in how not to adopt new technology. The cloud has changed enterprise computing for good, but it shouldn’t take a lengthening list of embarrassed breach victims to make it secure.

Sponsored by Trend Micro.
