9 AWS Security Mistakes for DevOps Teams

AWS introduces new complexity to your infrastructure. With that complexity comes security risk to the organization. Amazon’s shared responsibility model places responsibility for security in the cloud squarely in the hands of the DevOps team.

Here are 9 cringe-inducing ways that DevOps teams expose their AWS environments to unnecessary risk.

Access control blunders

Many AWS security problems center on identity and access management (IAM). Users (both humans and software) need to access your AWS environment to perform their required tasks. But users are your greatest vulnerability: 85% of cyberattacks involve user accounts. A big step towards preventing such attacks is avoiding these IAM missteps.

1 Using the AWS Management Console to make environment changes

It might sound odd: is it really unwise to use the AWS Management Console to manage AWS? It is. Most of your DevOps team shouldn’t even have access to the console. Why? Because you should make all changes through APIs instead.

Sure, many teams choose to manage AWS with code that calls AWS APIs because it reduces labor and scales well. But it’s also more secure than using the console.

One reason is the quantity of AWS Management Console user accounts. Each user account is a potential gateway for an attacker, so the fewer accounts you have, the less risk you expose the organization to. If you perform AWS management tasks programmatically, only the automation’s credentials need access to the environment, not every member of the team.

The other reason is the system of checks and balances that are a normal part of every smart team’s source code management. Nearly all developers use a source control tool (such as GitHub, GitLab, Subversion, or Mercurial) that provides change management.

One security benefit of change management is that developers can’t independently push their own code to production. Other team members must review and approve their changes. This is a typical workflow:

  1. Write. The developer writes the changes in a branch and then issues a pull request.
  2. Review. One or more people review the code and approve merging it into the mainline source.
  3. Build. Scripts and configuration directives run a build, which creates an executable and places it in an artifact repository.
  4. Test. A tester—a different person and/or a suite of automated tests—confirms that the new code works as intended.
  5. Deploy. The tester triggers a deployment to the next environment. If the tester is a person, then they manually approve deployment. Automated tests, on the other hand, can programmatically approve deployment. (In that case it’s important that the automated tests themselves undergo peer review and approval.)

In addition to requiring peer review and testing, source control tools track activity. Nearly all of them record these details:

  • What changed
  • Who changed it
  • When they changed it
  • Who approved it
  • Who merged it into the code base

This provides both an additional layer of accountability and an audit trail.
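
If your repositories live on GitHub, you can even codify this control. Below is a minimal sketch, assuming the Terraform GitHub provider and a hypothetical repository named infrastructure (attribute names vary slightly between provider versions):

```hcl
# Hypothetical sketch: require reviewed pull requests before changes
# reach the main branch of the "infrastructure" repository.
resource "github_branch_protection" "main" {
  repository_id  = "infrastructure"
  pattern        = "main"
  enforce_admins = true   # no direct pushes, even for admins

  required_pull_request_reviews {
    required_approving_review_count = 2
    dismiss_stale_reviews           = true
  }
}
```

Because the protection rule itself is code, it goes through the same review and audit trail as everything else.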

2 Granting overly permissive access

When managing any permissions, always give users the least access required to do their job. Restricting access to the bare minimum is called the principle of least privilege.

Remember that every user is another account that attackers can target to penetrate your environment. If you grant greater access than a user needs, you're unnecessarily increasing the potential attacker’s access.

Apply the principle of least privilege to non-human accounts as well. When configuring your pipeline, grant only the minimum access that applications, systems, and tools need to perform a required task.
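
For example, if a pipeline only needs to read build artifacts from a single S3 bucket, grant exactly that and nothing more. A minimal sketch in Terraform, with a hypothetical bucket and policy name:

```hcl
# Hypothetical sketch: read-only access to one artifact bucket,
# instead of a broad grant such as s3:* on all resources.
data "aws_iam_policy_document" "artifact_read_only" {
  statement {
    effect    = "Allow"
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::example-artifact-bucket/*"]
  }
}

resource "aws_iam_policy" "artifact_read_only" {
  name   = "artifact-read-only"
  policy = data.aws_iam_policy_document.artifact_read_only.json
}
```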

3 Not establishing separation of duties

The fast and dynamic nature of DevOps can cause teams to neglect some of the simplest checks and balances. One of these is separation of duties. Also known as segregation of duties or SoD, separation of duties requires more than one team member to complete a critical task. The most common example is that a person who writes code can’t approve or deploy their own code. This measure helps ensure that no one maliciously or accidentally releases unauthorized changes to production.

Many DevOps teams deliberately neglect separation of duties because the extra handoffs would slow them down. Incident response time is a particular concern—what if the responder can’t find someone to approve an emergency fix? Some even insist that separation of duties goes against core DevOps principles.

But, in fact, separation of duties is an essential part of preventing fraud and error, and can function smoothly in a DevOps environment. DevOps teams can implement separation of duties with few or no added manual steps.

4 Letting console users authenticate with just a password

The few user accounts that require access to the AWS Management Console must be highly secure. However, passwords alone don’t adequately secure user accounts. According to Verizon’s Data Breach Investigations Report, 81% of hacking-related breaches involved stolen and/or weak passwords. To prevent account fraud, you must require that users prove their identity with more than one authentication factor.

For example, users provide their user name and password and then receive a phone text message with a one-time code to enter. The one-time code is a second authentication factor. Because access to that code requires the user’s phone, it increases the likelihood that the user is who they claim to be.

Multi-factor authentication (also known as MFA, two-factor authentication, or 2FA) blocks 99.9% of fraudulent sign-in attacks (Microsoft). Yet adoption remains low compared to the benefit.

MFA is also critical to securing your root account. Root accounts protected by only a password are open to several attack methods, including abuse of the forgot-password flow and social engineering. Consider this nightmare scenario for a root account not protected by MFA: if an attacker compromises the account and enables MFA, AWS Support won’t just reset your password. They’re required to perform a thorough internal review that can take weeks, during which you remain locked out of your account.

One objection to MFA is the extra step it adds to the login process. When compared to the cost of credentials theft, this extra step is trivial.

Some organizations don’t adopt MFA because of the misconception that it requires additional hardware. However, many authentication factors rely solely on hardware that users already have. Don’t let misconceptions prevent you from securing the most common attack point.
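
One low-friction way to enforce MFA is an IAM policy that denies everything until the user has signed in with a second factor. A minimal sketch in Terraform, using the aws:MultiFactorAuthPresent condition key (a production policy usually also exempts the handful of actions a user needs to enroll an MFA device):

```hcl
# Hypothetical sketch: deny all actions for sessions without MFA.
data "aws_iam_policy_document" "require_mfa" {
  statement {
    sid       = "DenyAllWithoutMFA"
    effect    = "Deny"
    actions   = ["*"]
    resources = ["*"]

    condition {
      test     = "BoolIfExists"
      variable = "aws:MultiFactorAuthPresent"
      values   = ["false"]
    }
  }
}

resource "aws_iam_policy" "require_mfa" {
  name   = "require-mfa"
  policy = data.aws_iam_policy_document.require_mfa.json
}
```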

5 Failing to audit IAM

A loosely managed IAM solution is a common source of insider leaks and data breaches. AWS makes it easy to add new resources and tools to your environment, but not to evaluate, audit, and monitor access to these resources and tools. Without regular IAM audits, user access within your organization can quickly grow out of control. Before you begin an IAM audit, be sure you’ve set clear access guidelines and standards. They provide a baseline not only for your audit but for IAM operations.

A typical IAM audit covers these aspects of your AWS infrastructure access:

  • Review users. Review for stale accounts. Users come and go. When they go, so should their accounts.
  • Review access. Do users still need the level of access they have? Do they still need access at all? Do they have only the least access required to do their jobs?
  • Review separation of duties. Break down mission-critical tasks into smaller tasks, each performed by a different person.
  • Manage generic user accounts. Although generic accounts are useful, they introduce risk. Make sure no generic accounts have administrative privileges. Cycle passwords. Delete any generic accounts that you’re not using.

Configuration and coding missteps

Attackers enjoy taking advantage of configuration oversights and poor coding decisions. These are more common than you might expect. Make sure your own infrastructure isn’t hiding one of these mistakes.

6 Exposing resources to public internet access

AWS resources connected directly to the internet are dangerous vulnerability points. For example, an EC2 instance with a public IP address is an attack invitation. You might think you need your public network interface, but it’s likely an unnecessary convenience—and a potentially costly one, considering that it can be hacked in many different ways.

We often provision instances and then forget them. We neglect to update them as new vulnerabilities are discovered. A neglected EC2 instance with a public IP address can be just the point of attack a hacker is looking for to penetrate your entire system.

Another example is S3 buckets. By default, S3 buckets are private: only accounts with explicit permission can access them. You can configure each bucket individually. Surprisingly often, people configure S3 buckets so that anyone on the internet can access the data stored in them. This might be to circumvent access controls or it might be sheer accident. Either way, it’s a sure path to leaked data.

Deploy your AWS resources in a private virtual network, and allow access only to resources under your control, such as web or application servers that need to request data. Use firewalls to mediate access, and expose services to the public internet only through well-secured load balancers.
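
In Terraform, one hedge against accidental exposure is to block public access on every S3 bucket so that a later ACL or bucket-policy mistake can’t open it up. A minimal sketch, with a hypothetical bucket name:

```hcl
# Hypothetical sketch: a bucket that can't be made public,
# regardless of later ACL or bucket-policy changes.
resource "aws_s3_bucket" "reports" {
  bucket = "example-internal-reports"
}

resource "aws_s3_bucket_public_access_block" "reports" {
  bucket                  = aws_s3_bucket.reports.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```

The same idea applies to compute: keep map_public_ip_on_launch set to false on subnets and associate_public_ip_address set to false on instances unless you can justify the exception.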

7 Hard-coding secrets

Sure, most of us know not to hard-code secrets. After all, it’s difficult to control (and audit) who has access to a hard-coded secret. And anyone who has access to your code can see it. Also, you don’t want to be dependent on source code to change a secret. When you discover a password has been leaked, you shouldn’t have to rebuild your software in order to change the password.

Hard-coded secrets, however, are surprisingly common. When developing applications on AWS, we often use AWS IAM roles to create temporary credentials that call AWS services. If your application needs longer-term credentials (such as API keys or database passwords), you might be inclined to hard-code them.

Even if it’s just a temporary measure, don’t succumb to the temptation of hard-coding secrets. Too often, temporary measures accidentally become permanent—you edit a file to test locally and then forget to change it before you check it into a public repository for the world to see.

This example shows hard-coded credentials in a Terraform provider block (the access keys shown are AWS’s documented placeholder values, not real secrets):

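```hcl
provider "aws" {
  region = "us-west-2"

  # Never do this. These are AWS's documented example keys, shown only
  # to illustrate the mistake.
  access_key = "AKIAIOSFODNN7EXAMPLE"
  secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}
```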

Instead of hard-coding the credentials, you can leave the provider empty…

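```hcl
provider "aws" {
  region = "us-west-2"
}
```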

… and export environment variables:

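```sh
# The AWS provider reads these standard variables at plan/apply time.
# Values shown are placeholders, not real keys.
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
```

This keeps the secret out of source control entirely.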

8 Failing to configure encryption by default

With proper encryption, your data is meaningless to bad actors. Even if hackers manage to access your information, they can’t read it. AWS storage services offer strong encryption:

  • Elastic Block Store (EBS) features encryption for data at rest, in transit, and in snapshots.
  • S3 offers a number of server-side encryption options. You can also take advantage of key management solutions that simplify the implementation of encryption keys.

Few DevOps teams deliberately choose to leave their data unencrypted. However, accidental misconfigurations do occur, leaving sensitive data exposed. It’s important to perform configuration checks to catch such errors.
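
If you manage your environment with Terraform, you can make encryption the default rather than an afterthought. A minimal sketch (the bucket name is hypothetical):

```hcl
# Hypothetical sketch: encrypt new EBS volumes by default in this region,
# and apply default server-side encryption to an S3 bucket.
resource "aws_ebs_encryption_by_default" "this" {
  enabled = true
}

resource "aws_s3_bucket" "data" {
  bucket = "example-sensitive-data"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "data" {
  bucket = aws_s3_bucket.data.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}
```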

The big culture failure

Security mistakes don’t just occur in configuration details; your organization’s culture can put your infrastructure at risk.

9 Not integrating security into the SDLC

The biggest AWS security mistake you can make is failing to make security a priority in every step of your system development lifecycle.

The power and scale of the DevOps SDLC have added a new risk dimension to security. Software provisioned with infrastructure as code can go from a developer’s laptop to the customer in seconds. In this rapid process, the traditional practice of solely scanning runtime infrastructure is inadequate. By the time you catch vulnerabilities, your team may already have replicated them.

Putting off security checks until the end of the SDLC is time-consuming and costly, and can be disastrous. If you wait until production to scan your environment, attackers can find vulnerabilities before you do. DevOps teams need to integrate security policy and testing into the full cycle. Catching issues early in the development process saves your organization time and cost.

One way to identify misconfigurations and errors before they get more costly is to include your Security team in design and code reviews. This helps you make smart decisions early, and is also a big step in bridging the divide between Security and DevOps teams. Design- and code-stage security reviews can prevent you from implementing and replicating problematic infrastructure. And it’s far less painful to learn about vulnerabilities in the code stage than through a compliance audit… or a system breach!

There’s a more efficient strategy for securing your AWS infrastructure early in the SDLC: automating continuous security checks. If developers work with the Security team to codify security policies, they can scan for issues at different points in the SDLC: locally during development, in testing, and before deployment.

Implementing security policy as code may sound daunting to some members of the organization. Just as not all DevOps practitioners are security experts, not all Security team members are programmers. That’s why a DevOps-Security partnership is such an asset: one team develops and explains the policy, and the other implements and codifies it. Policies become queries that can be executed at any stage of the SDLC. This approach is policy as code.

Does policy as code sound like a time-consuming and expensive effort? It doesn’t have to be. Mondoo provides policies that are already written and codified. Using the Mondoo platform, it’s easy to integrate existing policy as code into your development process. You can also create and customize policies to fit your organization’s unique needs.

Letha Dunn

Letha has been writing about technology for more than thirty years. During the past decade, she’s focused on educating engineers about identity and access management, security, CI/CD, and project velocity. Letha lives in the Pacific Northwest, where she rescues and rehabilitates abused and neglected horses and dogs.

Ben Rockwood

Ben Rockwood is the VP of Engineering & Operations at Mondoo. He helped build the first Infrastructure as a Service cloud at Joyent in 2005 and has been an influential voice in the DevOps movement since it began in 2009. He has also helped advance operations, security, and compliance at Chef, Packet, and Equinix. He lives on Bainbridge Island near Seattle.
