This workflow lets you export data from DynamoDB to S3. Exporting DynamoDB data to S3 safeguards your data and doubles as an efficient AWS backup strategy. Automating the process with scheduled backups helps prevent data loss and keeps storage costs under control.
The workflow is set to run every day and uses AWS Data Pipeline to export data from a DynamoDB table to a file in an Amazon S3 bucket. It primarily consists of two action nodes: one to create the pipeline and one to pass the data.
The trigger is a recurring schedule that runs throughout the week. The two action nodes create the data pipeline and pass the data across it, and a notification node alerts you each time a backup succeeds.
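If you wanted to reproduce the core of this backup outside the workflow, the sketch below shows roughly what a scheduled export boils down to. It uses boto3 and DynamoDB's native ExportTableToPointInTime API as a stand-in for the Data Pipeline step; the table ARN, bucket, and prefix are placeholders, and the table needs point-in-time recovery enabled.

```python
import boto3

# Minimal sketch: a daily job calls DynamoDB's native export API
# (used here in place of AWS Data Pipeline) to land a backup in S3.
# The table ARN, bucket, and prefix are placeholders; the table must
# have point-in-time recovery (PITR) enabled for this API to work.
dynamodb = boto3.client("dynamodb", region_name="us-east-1")

response = dynamodb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:us-east-1:123456789012:table/my-table",
    S3Bucket="my-dynamodb-backups",
    S3Prefix="daily-exports/",
    ExportFormat="DYNAMODB_JSON",  # or "ION"
)

# The export runs asynchronously; poll the export status or check the console.
print(response["ExportDescription"]["ExportStatus"])  # e.g. "IN_PROGRESS"
```

Scheduling this script with a daily cron job or an EventBridge rule mirrors the recurring trigger in the template.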
This use case automates copying your AMIs across regions. Backing up AMIs from one region to another underpins your Disaster Recovery (DR) setup and lets you quickly recover instances if the EC2 service in an entire region fails (a rare but possible event). Cross-region AMIs also make it much easier to scale your applications globally, without any code.
There are two parts to this workflow. The first workflow collects the EC2 images and filters them down to the ones that need backing up. The filtered images are then sent as a payload to the second workflow, which is reached through the endpoint URL of the account in the second region.
In this template, the trigger is set to a recurring schedule that runs every day, so backups are carried out daily; this setting can be customized. The resources node collects the EC2 images and the filter node identifies the appropriate images by their tags. The HTTP node posts the filtered images to the account in the second region, which is connected through an endpoint URL. The second workflow then copies the images and sends a report via Slack or email.
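As a rough illustration of what the filter and copy steps amount to, here is a minimal boto3 sketch that selects tagged AMIs in the source region and copies them straight into a second region. The region names and the `Backup` tag convention are placeholder assumptions, and the actual template splits this logic across two workflows connected by an HTTP endpoint rather than running it in one script.

```python
import boto3

SOURCE_REGION = "us-east-1"   # region where the AMIs live (placeholder)
DEST_REGION = "eu-west-1"     # DR region to copy into (placeholder)
BACKUP_TAG_KEY = "Backup"     # hypothetical tag convention
BACKUP_TAG_VALUE = "true"

source_ec2 = boto3.client("ec2", region_name=SOURCE_REGION)
dest_ec2 = boto3.client("ec2", region_name=DEST_REGION)

# Collect the account's own images and keep only those tagged for backup,
# mirroring the resources and filter nodes in the first workflow.
images = source_ec2.describe_images(
    Owners=["self"],
    Filters=[{"Name": f"tag:{BACKUP_TAG_KEY}", "Values": [BACKUP_TAG_VALUE]}],
)["Images"]

# Copy each filtered image into the destination region, which is what the
# second workflow does after receiving the payload.
for image in images:
    copy = dest_ec2.copy_image(
        Name=f"{image.get('Name', image['ImageId'])}-dr-copy",
        SourceImageId=image["ImageId"],
        SourceRegion=SOURCE_REGION,
    )
    print(f"Copying {image['ImageId']} -> {copy['ImageId']} in {DEST_REGION}")
```

A notification step (Slack or email) would follow the copy loop in the same way the template reports on each run.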