Usecase Universe

A collective of use cases for DevOps teams

Browse a variety of 300+ predefined templates to automate all your AWS actions


S3 Buckets (Inventory)
S3

S3 Bucket Public 'WRITE' Access
S3
Security

Sends a report of all S3 Buckets that provide Public 'WRITE' Access. Providing this access allows unauthorised users to delete, change and add objects in your S3 Buckets.
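If you'd like a feel for what a check like this boils down to, here is a hedged boto3 sketch that flags buckets whose ACL grants 'WRITE' to the public AllUsers group. It is not the template's actual implementation (the template is no-code), and swapping the permission string covers the READ, READ_ACP, WRITE_ACP and FULL_CONTROL report variants below.

```python
# Illustrative only: report buckets whose ACL grants 'WRITE' to the public
# AllUsers group. Swap the permission string for the other report variants.
import boto3

s3 = boto3.client("s3")
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

public_write_buckets = []
for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        if grant["Grantee"].get("URI") == ALL_USERS and grant["Permission"] == "WRITE":
            public_write_buckets.append(bucket["Name"])
            break

print(public_write_buckets)  # the buckets that would go into the report
```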

S3 Bucket with Public 'READ' Access
S3
Security

Sends a report of all S3 Buckets with Public 'READ' Access. Providing this access allows unauthorised users to list the objects in your S3 Buckets.

S3 Bucket Public 'WRITE_ACP' Access
S3
Security

Sends a report of all S3 Buckets with Public 'WRITE_ACP' Access. Providing this access allows unauthorised users to edit who has control over your objects, thereby allowing them to delete, edit and add objects in S3 Buckets.

Bundle And Archive - S3 Glacier Movement
S3
S3 Glacier
Remediation
Automation
Cost Saving

This S3-bundling use case simplifies an industry-standard storage best practice while adding a few extra benefits. Amazon S3 offers several storage tiers, each with its own strengths, and moving data from S3 Standard to Glacier is common practice: Glacier is the cheapest storage tier available, and it's the best archiving solution.


We built an automated, no-code workflow that pushes this entire process into one seamless flow of events, handling each of these tasks from a single place. With this workflow, compressing your data becomes the ideal way to approach your archiving, and you can potentially cut your costs along the way. You only need 1 workflow with 8 nodes to make this complex use case a reality: no coding, no configuring on the AWS Console, nothing else.


Workflow Brief


The workflow accesses the data, compresses the files, and transfers them into Glacier. Data compression is done by loading the collection of smaller S3 objects into a different bucket and through the data pipeline, which bundles the small files into one large zip. Compression quality ranges from 0 to 9; this template uses 0. Text files and log files can be compressed with a bit of custom code (since we've already created it, you can simply adopt it as a template). The pipeline in this workflow is configured to run the compression after a short wait period. The process itself is no different from normal ZIP compression; we're just enabling it on a cloud service, without any code. See the detailed workflow docs here.
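To make the brief concrete, here is a minimal sketch of what the bundling step amounts to, assuming boto3 and placeholder bucket names; the actual template does this through its no-code nodes and a data pipeline rather than a script. compresslevel=0 mirrors the template's compression quality of 0.

```python
# Minimal sketch (not the template itself): bundle small S3 objects into one
# zip and land it in the Glacier storage class. Bucket names are placeholders.
import io
import zipfile

import boto3

s3 = boto3.client("s3")

SOURCE_BUCKET = "my-source-bucket"   # placeholder
TARGET_BUCKET = "my-archive-bucket"  # placeholder

buffer = io.BytesIO()
# compresslevel=0 mirrors the template's compression quality of 0 (store only).
with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED, compresslevel=0) as bundle:
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=SOURCE_BUCKET):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=SOURCE_BUCKET, Key=obj["Key"])["Body"].read()
            bundle.writestr(obj["Key"], body)

buffer.seek(0)
s3.put_object(
    Bucket=TARGET_BUCKET,
    Key="bundles/archive.zip",
    Body=buffer.getvalue(),
    StorageClass="GLACIER",  # lands the bundle directly in the Glacier tier
)
```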


Process


When it comes to the activation of this workflow, there are 3 key elements.


1) Collection of data

A custom node collects the S3 data from your bucket and prepares it to be redirected. sourceBucket defines where the data is taken from, and targetBucket defines where the data will be moved to.
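A rough equivalent of this collection step in boto3 might look like the sketch below; sourceBucket and targetBucket are placeholders for the values you set on the node.

```python
# Illustrative sketch of the collection step: stage every object from the
# source bucket into the target bucket, ready for the data pipeline.
import boto3

s3 = boto3.client("s3")

source_bucket = "my-source-bucket"   # sourceBucket (placeholder)
target_bucket = "my-staging-bucket"  # targetBucket (placeholder)

for page in s3.get_paginator("list_objects_v2").paginate(Bucket=source_bucket):
    for obj in page.get("Contents", []):
        s3.copy_object(
            Bucket=target_bucket,
            Key=obj["Key"],
            CopySource={"Bucket": source_bucket, "Key": obj["Key"]},
        )
```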


2) Creating the Pipeline

These nodes create the data pipeline through which the data will be compressed and moved. 
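Under the hood, creating the pipeline corresponds to a single AWS Data Pipeline call. A hedged boto3 sketch, with a placeholder name and uniqueId, looks like this:

```python
# Rough equivalent of the pipeline-creation node; name and uniqueId are placeholders.
import boto3

datapipeline = boto3.client("datapipeline")

response = datapipeline.create_pipeline(
    name="s3-bundle-and-archive",           # placeholder name
    uniqueId="s3-bundle-and-archive-0001",  # idempotency token
    description="Compress S3 objects and move them to Glacier",
)
pipeline_id = response["pipelineId"]
```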


3) Pipeline Definition, Activation, and Deletion

This part of the workflow configures the compression of the S3 data moved into the pipeline and ensures its transfer to S3 Glacier. Once the transfer is complete, it deletes the pipeline.
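For orientation, here is a hedged boto3 sketch of the definition, activation and deletion calls. The pipeline objects shown are a minimal placeholder, not the template's real definition, which also wires in the compression and the Glacier transfer.

```python
# Sketch of the definition, activation and deletion steps.
import boto3

datapipeline = boto3.client("datapipeline")
pipeline_id = "df-EXAMPLE1234567890"  # returned by the create_pipeline step

datapipeline.put_pipeline_definition(
    pipelineId=pipeline_id,
    pipelineObjects=[
        {
            "id": "Default",
            "name": "Default",
            "fields": [
                {"key": "scheduleType", "stringValue": "ondemand"},
                {"key": "failureAndRerunMode", "stringValue": "CASCADE"},
            ],
        },
    ],
)

datapipeline.activate_pipeline(pipelineId=pipeline_id)

# ... once the run has finished, tear the pipeline down:
datapipeline.delete_pipeline(pipelineId=pipeline_id)
```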



DynamoDB to S3 Exporter
DynamoDB
S3
Backup
AWS Best Practices

This workflow lets you export data from DynamoDB to S3. Exporting DynamoDB data to S3 safeguards your data and doubles as an efficient AWS backup strategy. Automating this process with scheduled backups protects against data loss and keeps your storage practices cost-efficient.


Benefits


  • Cost efficiency
  • Storage efficiency
  • Auto-remediation


Workflow Brief


The workflow is set to run every day and uses AWS Data Pipeline to export data from a DynamoDB table to a file in an Amazon S3 bucket. The workflow nodes primarily consist of two action nodes: one for creating the pipeline and one for passing the data.
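As a rough mental model of what the export produces, the deliberately simplified stand-in below scans a table and writes the items to S3 as JSON. The workflow itself delegates this to AWS Data Pipeline, and the table and bucket names here are placeholders.

```python
# Simplified stand-in for the export (the real workflow uses AWS Data Pipeline):
# scan a DynamoDB table and write the items to S3 as a JSON file.
import json

import boto3

dynamodb = boto3.client("dynamodb")
s3 = boto3.client("s3")

TABLE_NAME = "my-table"       # placeholder
BACKUP_BUCKET = "my-backups"  # placeholder

items = []
for page in dynamodb.get_paginator("scan").paginate(TableName=TABLE_NAME):
    items.extend(page["Items"])

s3.put_object(
    Bucket=BACKUP_BUCKET,
    Key=f"dynamodb-exports/{TABLE_NAME}.json",
    Body=json.dumps(items).encode("utf-8"),
)
```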



Process


The trigger is a recurring schedule that runs throughout the week. The two action nodes create the data pipeline and pass the data across it. There's also a notification node to alert you of each successful backup.
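Outside the platform, the trigger and notification nodes roughly correspond to an EventBridge schedule and an SNS publish. The sketch below is one way to reproduce them with boto3; the rule name, Lambda ARN, and topic ARN are placeholders.

```python
# One way to reproduce the recurring trigger and the success notification
# outside the platform: an EventBridge schedule plus an SNS publish.
import boto3

events = boto3.client("events")
sns = boto3.client("sns")

# Recurring schedule, roughly matching the workflow's daily run.
events.put_rule(
    Name="dynamodb-to-s3-daily-export",
    ScheduleExpression="rate(1 day)",
    State="ENABLED",
)
events.put_targets(
    Rule="dynamodb-to-s3-daily-export",
    Targets=[{
        "Id": "export-job",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:export-job",  # placeholder
    }],
)

# Notification node equivalent: announce a successful backup.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:backup-alerts",  # placeholder
    Subject="DynamoDB backup succeeded",
    Message="Daily DynamoDB export to S3 completed successfully.",
)
```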

S3 Bucket with Public 'READ_ACP' Access
S3
Security

Sends a report of all S3 Buckets with Public 'READ_ACP' Access. Providing this access allows unauthorised users to view who has control over your objects, and find those with badly configured permissions.

Remove S3 Bucket Public 'WRITE_ACP' Access
S3
Security

Removes Public 'WRITE_ACP' Access from your S3 Buckets.

Remove S3 Bucket Public 'READ_ACP' Access
S3
Security

Removes Public 'READ_ACP' Access from your S3 Buckets.

Remove S3 Bucket Public 'READ' Access
S3
Security

Removes Public 'READ' Access from your S3 Buckets.

S3 Buckets with Public 'FULL_CONTROL' Access
S3
Security

Sends a report of all S3 Buckets with Public 'FULL_CONTROL' Access. Allowing this access is dangerous as unauthorised users can view, delete, edit and add objects in your S3 Buckets.

Remove S3 Bucket Public 'FULL_CONTROL' Access
S3
Security

Removes Public 'FULL_CONTROL' Access from your S3 Buckets.

Remove S3 Bucket Public 'WRITE' Access
S3
Security

Removes Public 'WRITE' Access from your S3 Buckets.
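The remediation templates above all follow the same pattern. As an illustration only (not the templates' actual no-code logic), the boto3 sketch below strips every grant to the public AllUsers group from a placeholder bucket and writes the cleaned ACL back; to target a single permission such as 'WRITE', filter on the grant's Permission field as well.

```python
# Illustrative remediation sketch: remove all public AllUsers grants from a
# bucket's ACL and write the cleaned ACL back. The bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"
BUCKET = "my-exposed-bucket"  # placeholder

acl = s3.get_bucket_acl(Bucket=BUCKET)
cleaned_grants = [
    grant for grant in acl["Grants"]
    if grant["Grantee"].get("URI") != ALL_USERS
]

s3.put_bucket_acl(
    Bucket=BUCKET,
    AccessControlPolicy={"Grants": cleaned_grants, "Owner": acl["Owner"]},
)
```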

CloudTrail S3 Bucket Logging Enabled
S3
CloudTrail
Security
CIS-AWS

Sends a report of CloudTrail Trails whose S3 bucket does not have "Bucket Logging" enabled. With the Server Access Logging feature enabled for your S3 buckets, you can track requests made to access the buckets and use the log data to protect them against unauthorized access.
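For reference, the check behind this report can be approximated with two boto3 calls: list the CloudTrail trails, then inspect each trail bucket's logging configuration.

```python
# Minimal sketch of this check: list CloudTrail trails and flag those whose
# S3 bucket has no Server Access Logging configured.
import boto3

cloudtrail = boto3.client("cloudtrail")
s3 = boto3.client("s3")

unlogged = []
for trail in cloudtrail.describe_trails()["trailList"]:
    bucket = trail.get("S3BucketName")
    if not bucket:
        continue
    logging_config = s3.get_bucket_logging(Bucket=bucket)
    # "LoggingEnabled" is only present when Server Access Logging is on.
    if "LoggingEnabled" not in logging_config:
        unlogged.append((trail["Name"], bucket))

print(unlogged)  # trails whose buckets should have logging enabled
```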

CloudTrail S3 Bucket Publicly Accessible
S3
Security
CIS-AWS

Sends a report of CloudTrail Trails present in your AWS account whose S3 bucket is publicly accessible. Using a public S3 bucket makes your log files less secure and easily accessible to others.
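As a final illustration, one way to approximate this check is via the bucket policy status API. This hedged sketch only looks at policy-based public access; ACL-based exposure would need the ACL check sketched earlier.

```python
# Sketch: flag CloudTrail buckets whose bucket policy makes them public.
import boto3
from botocore.exceptions import ClientError

cloudtrail = boto3.client("cloudtrail")
s3 = boto3.client("s3")

public_trail_buckets = []
for trail in cloudtrail.describe_trails()["trailList"]:
    bucket = trail.get("S3BucketName")
    if not bucket:
        continue
    try:
        status = s3.get_bucket_policy_status(Bucket=bucket)["PolicyStatus"]
        if status["IsPublic"]:
            public_trail_buckets.append(bucket)
    except ClientError:
        # No bucket policy at all, so the bucket cannot be public via a policy.
        pass

print(public_trail_buckets)  # CloudTrail buckets that are publicly accessible
```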