A set of trends is emerging around the security of cloud infrastructure.
Misconfigurations now cover a wide spectrum of issues and stem from increasingly complex environments.
Identifying, responding to, and eventually fixing misconfigurations still takes too much time.
Accurics has published its "Cloud Cyber Resilience Report," with findings that reflect the current state of this rapidly growing space, the challenges organizations face, and the persistent issues that still plague a large percentage of deployments. COVID-19 has accelerated the “migration of everything” to the cloud, but it has also carried some bad practices onto the new platforms intact.
The firm analyzed hundreds of cloud-native infrastructure deployments across its customers and community users. The key findings regarding emerging trends are:
As the adoption of managed infrastructure offerings rises, watering hole attacks are becoming more prevalent.
In 22.5% of the violations found, the main problem was poor configuration of managed services, with settings left at their defaults.
Messaging services and FaaS (function as a service) are becoming the next “storage bucket” trend.
35.3% of IAM (Identity and Access Management) drifts originate in IaC (Infrastructure as Code).
The report dives deep into Kubernetes deployment risks. The main problems there are:
47.9% of the identified problems were the result of insecure default settings, with improper use of the default namespace the most common issue.
26% of the identified violations concerned insecure secrets management, such as passing secrets into containers via plain-text environment variables (see the manifest sketch after this list).
17.8% of Helm repo misconfigurations were related to a lack of resource management, such as failing to specify resource limits.
8.2% of the misconfigurations concerned container security violations, such as running containers in the host’s process ID namespace.
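To make these findings concrete, here is a minimal Kubernetes Deployment sketch (resource names and values are illustrative, not taken from the report) that avoids the four issues above: it targets an explicit namespace instead of the default one, injects a secret via a Secret reference rather than a hard-coded environment value, declares resource requests and limits, and keeps the container out of the host’s process ID namespace.

```yaml
# Illustrative manifest only; names, image, and values are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
  namespace: payments          # explicit namespace instead of "default"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      hostPID: false           # do not share the host's process ID namespace
      containers:
        - name: payments-api
          image: registry.example.com/payments-api:1.4.2
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:      # pull the value from a Secret object,
                  name: payments-db   # not a plain-text env value
                  key: password
          resources:           # explicit requests and limits
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
```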
Further findings indicate that the risks discussed so often in recent years, storage bucket exposure among them, aren’t going anywhere. These include:
Hardcoded secrets in container configurations (10% of violations)
Storage bucket misconfigurations (15.3% of violations)
Not enabling the available advanced security capabilities (10.3% of the organizations tested)
Role definition failures in Kubernetes RBAC (35% of the organizations tested; see the RBAC sketch after this list)
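The RBAC finding in particular is straightforward to address in a manifest. Below is a minimal sketch (role, namespace, and subject names are hypothetical) of an explicitly scoped Role and RoleBinding, instead of relying on missing or overly broad role definitions.

```yaml
# Illustrative RBAC definition; names are hypothetical.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: payments-readonly
  namespace: payments
rules:
  - apiGroups: [""]
    resources: ["pods", "configmaps"]
    verbs: ["get", "list", "watch"]   # read-only, no create/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payments-readonly-binding
  namespace: payments
subjects:
  - kind: ServiceAccount
    name: payments-ci
    namespace: payments
roleRef:
  kind: Role
  name: payments-readonly
  apiGroup: rbac.authorization.k8s.io
```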
And finally, there’s the issue of the time needed to fix these misconfigurations. Accurics reports that cloud infrastructure misconfigurations take about 25 days to fix on average. Misconfigurations in load balancer services, though, take a whopping 149 days, which is almost five months.
Production environments fix errors in 21.8 days, whereas pre-production takes around 31.2 days. On average, organizations take 7.7 days to reconcile runtime configuration changes with the IaC baseline, and fixing drifts takes 21 days on average.