Usecase Universe

A collection of use cases for DevOps teams

Browse 300+ predefined templates to automate your AWS actions

22 MAY 2019
Bundle And Archive - S3 Glacier Movement
Tags: S3, S3 Glacier, Remediation, Automation, Cost Saving

This S3-bundling use case simplifies an industry-standard storage best practice while adding a few benefits of its own. Amazon S3 offers several storage tiers, each with its own benefits, and lets you move data between them. Moving data from S3 Standard to Glacier is common practice: for one, Glacier is the cheapest storage tier available, and for another, it's the best archiving solution.


We built an automated, no-code workflow that pushes this entire process into one seamless flow of events, handling each task from the same place. With this workflow, compressing your data becomes the ideal way to approach archiving, and you could potentially cut your costs in the process. You only need 1 workflow with 8 nodes to make this complex use case a reality: no coding, no configuring in the AWS Console, nothing else.


Workflow Brief


The workflow accesses the data, compresses the files, and transfers them into Glacier. Compression is done by loading the collection of smaller S3 objects into a different bucket and then into the data pipeline, which bundles the small files into one large zip. Compression quality ranges from 0 to 9; this template uses 0. Text files and log files can be compressed with a bit of custom code (since we've already created it, you can simply adopt it as a template). We also configure the pipeline in this workflow to enable the compression, which happens after a short wait period. The process itself is no different from normal ZIP compression; we're just enabling it on a cloud service, without any code. See the detailed workflow docs here.
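
To make the idea concrete, here is a minimal Python sketch of the bundle-and-store step using boto3. This is an illustration only, not the template's actual node code; the bucket names, archive key, and bundle_small_objects helper are all hypothetical, while compresslevel=0 mirrors the template's setting on the 0-9 quality scale.

    import io
    import zipfile

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical names standing in for the template's sourceBucket / targetBucket
    SOURCE_BUCKET = "my-source-bucket"
    TARGET_BUCKET = "my-archive-bucket"
    ARCHIVE_KEY = "bundles/archive.zip"

    def bundle_small_objects(prefix=""):
        """Bundle small S3 objects into one zip and store it in the target bucket."""
        buffer = io.BytesIO()
        # compresslevel=0 matches the template's compression quality setting
        with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED, compresslevel=0) as archive:
            paginator = s3.get_paginator("list_objects_v2")
            for page in paginator.paginate(Bucket=SOURCE_BUCKET, Prefix=prefix):
                for obj in page.get("Contents", []):
                    body = s3.get_object(Bucket=SOURCE_BUCKET, Key=obj["Key"])["Body"].read()
                    archive.writestr(obj["Key"], body)
        s3.put_object(Bucket=TARGET_BUCKET, Key=ARCHIVE_KEY, Body=buffer.getvalue())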


Process


When this workflow is activated, there are 3 key stages.


1) Collection of data

A custom node collects the S3 data from your bucket and prepares it to be redirected. The sourceBucket parameter defines where the data is taken from, and the targetBucket parameter defines where the data will be moved.
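
In code terms, this collection step roughly amounts to listing the source objects and copying them across to the target bucket. A sketch with boto3, under the assumption that a plain object copy is what "prepares it to be redirected" means here; the stage_objects helper is made up for illustration:

    import boto3

    s3 = boto3.client("s3")

    def stage_objects(source_bucket, target_bucket, prefix=""):
        """Copy every object under the prefix from sourceBucket into targetBucket."""
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=source_bucket, Prefix=prefix):
            for obj in page.get("Contents", []):
                s3.copy_object(
                    Bucket=target_bucket,
                    Key=obj["Key"],
                    CopySource={"Bucket": source_bucket, "Key": obj["Key"]},
                )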


2) Creating the Pipeline

These nodes create the AWS Data Pipeline through which the data will be compressed and moved.
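
Under the hood, this stage corresponds to AWS Data Pipeline's CreatePipeline call. A minimal boto3 sketch; the pipeline name and uniqueId below are invented for illustration:

    import boto3

    datapipeline = boto3.client("datapipeline")

    # create_pipeline registers an empty pipeline and returns its pipelineId,
    # which the definition and activation steps use later
    response = datapipeline.create_pipeline(
        name="s3-glacier-bundle-pipeline",
        uniqueId="s3-glacier-bundle-001",  # idempotency token for safe retries
    )
    pipeline_id = response["pipelineId"]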


3) Pipeline Definition, Activation, and Deletion

This part of the workflow configures the compression of the S3 data moved into the pipeline and ensures its transfer to S3 Glacier. Once the transfer is complete, it deletes the pipeline.
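
These three steps map onto the Data Pipeline definition, activation, and deletion calls. A hedged boto3 sketch, assuming you already have the pipeline_id from the previous stage and a pipeline_objects list describing the copy and compress activities (their exact shape depends on the template's nodes):

    import boto3

    datapipeline = boto3.client("datapipeline")

    def define_activate_delete(pipeline_id, pipeline_objects):
        """Push the pipeline definition, run it, and clean up afterwards."""
        # pipeline_objects describes the activities as Data Pipeline objects
        datapipeline.put_pipeline_definition(
            pipelineId=pipeline_id,
            pipelineObjects=pipeline_objects,
        )
        datapipeline.activate_pipeline(pipelineId=pipeline_id)
        # ... poll describe_pipelines(pipelineIds=[pipeline_id]) until the
        # run finishes, then remove the pipeline ...
        datapipeline.delete_pipeline(pipelineId=pipeline_id)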