Sekhar Sarukkai is the Co-founder and Chief Scientist at Skyhigh Networks.
The Deep Root Analytics leak that exposed the voter records of 198 million Americans sent shockwaves around the world: a majority of the adult US population had its voter information laid open to the public. Amid an increasingly turbulent political and cybersecurity landscape, data protection is becoming essential for the public and private sectors alike, yet these leaks seem to be happening more frequently.
The Deep Root security incident, like many before it, only further proves the necessity of proper security practices for frequently used but often-neglected IaaS systems such as AWS. Deep Root's data, stored in an AWS S3 bucket, had essentially no such protections: anyone who knew the bucket's simple six-character Amazon subdomain could access it.
Data vulnerability is nothing new to the security industry, but adopting the right best practices can keep data secure in AWS, no matter how sensitive it is. Although Amazon has made significant investments in securing its AWS platform, gaps still exist that attackers can exploit to gain access to sensitive information, take an application offline, or erase data entirely.
Amazon has developed sophisticated tools, such as AWS Shield, to absorb DDoS attacks, yet a larger, more coordinated effort could still overwhelm the system. Even with such protections, many data breaches are caused by insiders, whether through negligence or malicious intent. In fact, enterprises face nearly 11 insider threats per month on average, making both internal and external security essential to safeguarding sensitive data.
Another important AWS vulnerability is improper configuration. Under the shared responsibility model, Amazon monitors AWS infrastructure and platform security and responds to incidents of fraud and abuse. Because customers often require custom applications, they are responsible for configuring and managing the services themselves, notably EC2, VPC, and Amazon S3. That includes installing updates and security patches; services left unpatched can expose vulnerabilities.
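To make the misconfiguration risk concrete: the kind of public exposure behind the Deep Root leak can be shut off at the bucket level with S3's Block Public Access settings. The sketch below is one way to do this from the AWS CLI; the bucket name is a placeholder, and the commands assume CLI credentials with the relevant S3 permissions.

```shell
# Sketch, not a definitive fix: lock down a hypothetical bucket named
# "example-data-bucket" so that no ACL or bucket policy can make its
# contents publicly readable.
aws s3api put-public-access-block \
  --bucket example-data-bucket \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Verify that the settings took effect.
aws s3api get-public-access-block --bucket example-data-bucket
```

Auditing every bucket in the account this way, rather than only the ones known to hold sensitive data, is what catches the forgotten bucket before an outsider does.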
Best Practices for a More Secure AWS
- Activate multi-factor authentication (MFA) when signing up for AWS to add a layer of security from the start. Apply it to both the root account and all subsequent IAM users. MFA for the root account should be tied to a dedicated device rather than an employee’s mobile phone, in case the personal device is lost or the employee leaves the company.
- Enforce a strict password policy, as users tend to create easy-to-remember but easy-to-guess passwords. Strict policies make passwords more complicated, but they establish strong protection against brute-force attacks. At a minimum, passwords should be 14 characters long and include an upper-case letter, a lower-case letter, a number, and a symbol.
- Make sure CloudTrail is active across all regions, because global logging retains an audit trail of activity even in AWS services that are not region-specific, notably IAM and CloudFront.
- Turn on CloudTrail log file validation in order to track changes made to the log file after it has been delivered to the S3 bucket. Not only will this create another layer of security for the bucket, but validation will also make it easier to discover potential threats.
- Activate access logging for CloudTrail S3 buckets to track access requests, making it possible to identify unauthorized or unwarranted access and to review past requests.
- Don’t use the root user account for day-to-day work, because it has access to every service and resource in AWS. Since the root user is so highly privileged, it should be used only to create the first IAM user, after which the root credentials should be locked away.
- Terminate unused access keys, which decreases the chance of a compromised account or insider threat. A good rule of thumb is to delete access keys after 30 days of inactivity. This applies to both root and IAM user access keys, preventing unauthorized access to AWS resources.
- Avoid using expired SSL/TLS certificates because they may no longer be compatible with AWS services, leading to errors for ELB or custom applications, impacting productivity and overall security.
- Use standard naming and tagging conventions for EC2, as this reduces the risk of misconfiguration. Consistent tags make it harder to misuse or mislabel a resource, shrinking the number of potential vulnerabilities.
- Restrict access to Amazon Machine Images (AMIs): if left unrestricted, anyone with an AWS account can find them among the community AMIs and launch EC2 instances from them. Restricting access prevents enterprise-specific application data from being exposed to the public.
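Several of the identity-hygiene steps above can be scripted with the AWS CLI. The sketch below enforces the password policy described earlier and audits a user's access keys for staleness; the user name and access key ID are hypothetical, and the commands assume administrator credentials.

```shell
# Enforce the account-wide password policy described above:
# 14+ characters with upper case, lower case, a number, and a symbol.
aws iam update-account-password-policy \
  --minimum-password-length 14 \
  --require-uppercase-characters \
  --require-lowercase-characters \
  --require-numbers \
  --require-symbols

# Audit a (hypothetical) user's access keys and check when each was
# last used, so keys idle for 30+ days can be deleted.
aws iam list-access-keys --user-name alice
aws iam get-access-key-last-used --access-key-id AKIAEXAMPLEKEYID
aws iam delete-access-key --user-name alice --access-key-id AKIAEXAMPLEKEYID
```

Running the key audit on a schedule, rather than once, is what keeps the 30-day rule from quietly lapsing.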
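The three CloudTrail recommendations, global logging, log file validation, and access logging on the log bucket, can likewise be wired up from the CLI. This is a sketch with placeholder trail and bucket names; it assumes the log bucket already exists with a bucket policy that permits CloudTrail delivery.

```shell
# Create a multi-region trail with log file validation enabled,
# then start logging. Trail and bucket names are placeholders.
aws cloudtrail create-trail \
  --name org-audit-trail \
  --s3-bucket-name example-cloudtrail-logs \
  --is-multi-region-trail \
  --enable-log-file-validation
aws cloudtrail start-logging --name org-audit-trail

# Turn on server access logging for the trail's S3 bucket, so that
# requests against the audit logs themselves are also recorded
# (delivered to a separate, hypothetical "example-access-logs" bucket).
aws s3api put-bucket-logging \
  --bucket example-cloudtrail-logs \
  --bucket-logging-status \
    '{"LoggingEnabled":{"TargetBucket":"example-access-logs","TargetPrefix":"cloudtrail-bucket/"}}'
```

Sending the access logs to a different bucket than the trail itself avoids a loop and keeps an attacker who reaches the trail bucket from erasing the record of having done so.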
Applying best practices to AWS services and infrastructure is only part of the puzzle, as custom applications deployed in AWS require similar precautions. Without proper security configuration, the Deep Root leak will be just one of many breaches affecting hundreds of millions of people. By employing security best practices, however, organizations can withstand even the most sophisticated threats and shelter their most valuable data.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena.