Amazon S3

Amazon S3 overview

Amazon Simple Storage Service (Amazon S3) is an object storage service with a simple web services interface for storing and retrieving any amount of data. It gives any developer access to the same highly scalable, reliable, fast, and inexpensive data storage infrastructure that Amazon uses to run its own global network of websites.

S3 bucket

To store an object in Amazon S3, you create a bucket and then upload the object to the bucket.

A bucket is a container for objects. An object is a file and any metadata that describes that file.

When the object is in the bucket, you can open it, download it, and move it. When you no longer need an object or a bucket, you can clean up your resources.

An Amazon S3 bucket name is globally unique, and the namespace is shared by all AWS accounts. This means that after a bucket is created, the name of that bucket cannot be used by another AWS account in any AWS Region until the bucket is deleted.

Addressing model

There are two addressing models to access a bucket:

  • Virtual-hosted-style: https://<bucket-name>.s3.<region>.amazonaws.com/<key-name>

  • Path-style: https://s3.<region>.amazonaws.com/<bucket-name>/<key-name>
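The difference between the two styles can be sketched by constructing both URLs for the same object. The bucket name, region, and key below are hypothetical values chosen for illustration:

```shell
# Hypothetical values: bucket "example-bucket", region "eu-west-1",
# object key "reports/q1.txt".
bucket="example-bucket"
region="eu-west-1"
key="reports/q1.txt"

# Virtual-hosted-style: the bucket name is part of the hostname.
virtual_hosted="https://${bucket}.s3.${region}.amazonaws.com/${key}"

# Path-style: the bucket name is the first element of the URL path.
path_style="https://s3.${region}.amazonaws.com/${bucket}/${key}"

echo "$virtual_hosted"
echo "$path_style"
```

Because the bucket name becomes a DNS label in the virtual-hosted style, it is also what makes dangling-CNAME takeovers (below) possible.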


Access control list

Amazon S3 access control lists (ACLs) enable you to manage access to buckets and objects. Each bucket and object has an ACL attached to it as a subresource. It defines which AWS accounts or groups are granted access and the type of access. When a request is received against a resource, Amazon S3 checks the corresponding ACL to verify that the requester has the necessary access permissions.

Security issues

Bucket takeover

If an application uses a domain-linked S3 bucket that developers have deleted while the corresponding CNAME records in Amazon Route 53 still exist, you can claim the now-unclaimed S3 bucket name from another AWS account.

To verify whether bucket takeover may be possible, run:

$ curl -s https://<url-to-bucket> | grep -E -q '<Code>NoSuchBucket</Code>|<li>Code: NoSuchBucket</li>' && echo "Subdomain takeover may be possible" || echo "Subdomain takeover is not possible"
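The curl check boils down to a grep over the response body: a deleted bucket answers with a `NoSuchBucket` error, while an existing bucket you simply lack access to answers with `AccessDenied`. The canned bodies below stand in for real curl output:

```shell
# Returns success (exit 0) when the response body indicates a deleted bucket.
is_takeover_candidate() {
  echo "$1" | grep -E -q '<Code>NoSuchBucket</Code>|<li>Code: NoSuchBucket</li>'
}

# Canned response bodies for illustration (normally fetched with curl).
deleted_bucket_body='<Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist</Message></Error>'
live_bucket_body='<Error><Code>AccessDenied</Code><Message>Access Denied</Message></Error>'

is_takeover_candidate "$deleted_bucket_body" && echo "takeover may be possible"
is_takeover_candidate "$live_bucket_body" || echo "takeover is not possible"
```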


Improper ACL permissions

If ACL permissions are misconfigured, you may get unauthenticated access to a bucket. Depending on the grants, such permissions may allow you to both read and modify objects.

# configure the AWS CLI (or append --no-sign-request to any command below to send anonymous, unauthenticated requests)
$ aws configure
# list the contents of a bucket
$ aws s3 ls s3://<bucket-name>/
# upload file.txt to a bucket
$ aws s3 cp file.txt s3://<bucket-name>/file.txt
# remove file.txt from a bucket
$ aws s3 rm s3://<bucket-name>/file.txt

You can use the following tools to automate the process:

  • S3Scanner - Scan for open S3 buckets and dump the contents.

  • s3inspector - Tool to check AWS S3 bucket permissions.

  • lazys3 - A Ruby script to brute-force AWS S3 bucket names using different permutations.
