This workflow sends an automated report of RDS instances that are running low on storage. Detecting low disk space is critical when these instances serve latency-sensitive production applications: catching the problem early lets you expand the storage in time to maintain optimal response times. This check is an important part of any monitoring setup.
The workflow retrieves all the RDS DB instances and monitors their storage state with the “AWS Monitoring” node. Instances found to be low on storage are filtered out and passed to the Report node, which notifies you of the affected instances. The workflow is fully no-code: the Monitoring node integrates directly and sends you monitoring data in a readable format, and the data can be retrieved for any resource or sub-resource.
The workflow consists of 5 nodes and is triggered by an external application (a Jira ticket, an email, etc.). The resource node collects all the instances (you can narrow which instances are retrieved using Additional Parameters), and the AWS Monitoring node, with its parameters pre-set, monitors them. The low-storage instances are then filtered out by the custom function written on the filter node, and the Report node sends an Email/Slack notification to the user.
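For reference, here is a minimal boto3 sketch of what the resource, monitoring, and filter steps amount to outside the platform. The 5 GB free-storage threshold and the print-based report are assumptions for illustration; the actual node configuration lives in the workflow UI.

```python
# Sketch only (not the TotalCloud node implementation): list RDS instances and
# flag those whose CloudWatch FreeStorageSpace metric is below a threshold.
from datetime import datetime, timedelta, timezone
import boto3

THRESHOLD_BYTES = 5 * 1024 ** 3  # hypothetical "low storage" cut-off: 5 GB

rds = boto3.client("rds")
cloudwatch = boto3.client("cloudwatch")

def low_storage_instances():
    low = []
    for db in rds.describe_db_instances()["DBInstances"]:
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/RDS",
            MetricName="FreeStorageSpace",
            Dimensions=[{"Name": "DBInstanceIdentifier",
                         "Value": db["DBInstanceIdentifier"]}],
            StartTime=datetime.now(timezone.utc) - timedelta(minutes=15),
            EndTime=datetime.now(timezone.utc),
            Period=300,
            Statistics=["Average"],
        )["Datapoints"]
        if stats:
            latest = max(stats, key=lambda p: p["Timestamp"])
            if latest["Average"] < THRESHOLD_BYTES:
                low.append(db["DBInstanceIdentifier"])
    return low

if __name__ == "__main__":
    # The Report node would send this list via email/Slack instead of printing.
    print("Low on storage:", low_storage_instances())
```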
This workflow lets you export data from DynamoDB to S3. Exporting DynamoDB data to S3 safeguards your data and doubles up as an efficient AWS backup strategy. Automating the process with scheduled backups helps prevent data loss and keeps storage practices cost-efficient.
The workflow is set to run every day, using AWS Data Pipeline to export data from a DynamoDB table to a file in an Amazon S3 bucket. The workflow primarily consists of two action nodes: one for creating the pipeline and one for passing the data.
The trigger is a recurring schedule that runs throughout the week. The two action nodes create the data pipeline and pass the data across it, and a notification node alerts you of each successful backup.
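The two action steps could look roughly like the boto3 sketch below, which follows the shape of the standard “Export DynamoDB table to S3” Data Pipeline template. The pipeline name, table name, bucket, roles, and schedule are placeholders, and the definition is trimmed to its essential objects.

```python
# Rough sketch, not the workflow's actual node configuration.
import boto3

datapipeline = boto3.client("datapipeline")

# Step 1: create the pipeline shell.
pipeline_id = datapipeline.create_pipeline(
    name="dynamodb-daily-backup",          # hypothetical name
    uniqueId="dynamodb-daily-backup-v1",   # idempotency token
)["pipelineId"]

# Step 2: attach a definition that exports the table to S3 once a day.
definition = [
    {"id": "Default", "name": "Default", "fields": [
        {"key": "scheduleType", "stringValue": "cron"},
        {"key": "schedule", "refValue": "DailySchedule"},
        {"key": "pipelineLogUri", "stringValue": "s3://my-backup-bucket/logs/"},
        {"key": "role", "stringValue": "DataPipelineDefaultRole"},
        {"key": "resourceRole", "stringValue": "DataPipelineDefaultResourceRole"},
    ]},
    {"id": "DailySchedule", "name": "DailySchedule", "fields": [
        {"key": "type", "stringValue": "Schedule"},
        {"key": "period", "stringValue": "1 day"},
        {"key": "startAt", "stringValue": "FIRST_ACTIVATION_DATE_TIME"},
    ]},
    {"id": "DDBSourceTable", "name": "DDBSourceTable", "fields": [
        {"key": "type", "stringValue": "DynamoDBDataNode"},
        {"key": "tableName", "stringValue": "my-table"},
    ]},
    {"id": "S3BackupLocation", "name": "S3BackupLocation", "fields": [
        {"key": "type", "stringValue": "S3DataNode"},
        {"key": "directoryPath",
         "stringValue": "s3://my-backup-bucket/dynamodb/#{format(@scheduledStartTime, 'YYYY-MM-dd')}"},
    ]},
    # The full template also defines an EmrCluster and an EmrActivity that copy
    # DDBSourceTable into S3BackupLocation; they are omitted here for brevity.
]

datapipeline.put_pipeline_definition(pipelineId=pipeline_id,
                                     pipelineObjects=definition)
datapipeline.activate_pipeline(pipelineId=pipeline_id)
```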
A common risk in instance management is overutilization of disk space. Several factors can push disk utilization over 90%: user-initiated heavy workloads, analytic queries, prolonged deadlocks and lock waits, multiple concurrent transactions, long-running transactions, or other resource-intensive processes.
Over-utilized instances can run into performance issues that later affect your budget. A simple, automated means of scaling the volumes of your instances when necessary takes that management overhead off your side. This use case focuses on automatically increasing disk space by a defined amount when disk space utilization (DSU) above 90% is detected.
In this particular template, you instruct the workflow to increase the disk size by 20GB when the disk space utilization crosses 90%. This event (DSU > 90%) sets off a CloudWatch Alarm, which triggers the TotalCloud workflow. Even if your CloudWatch Alarm alerts you of overutilization in the middle of the night, the workflow will have handled it before you even think of having to respond. Since it’s automated, the fix is executed immediately, eliminating any response-time delay. If you wish to approve the action before it occurs, you can enable user approval as well.
The workflow increases your EBS volume by 20GB as a default value; this value can be altered depending on your workload demands. When a CloudWatch Alarm goes off and sends an SNS alert for high disk space utilization, the workflow is automatically triggered and executes the action. As we’ve pointed out, the trigger can be anything: a CloudWatch Alarm, any other external system or platform, or a ticketing system such as JIRA. In that case, you can also instruct the workflow to create a ticket on your ticketing platform when the Alarm goes off, and then close the ticket once remediation is completed. This is helpful for logging purposes and enables end-to-end automation.
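For the trigger side, a hedged boto3 sketch of such a CloudWatch Alarm is shown below. It assumes the CloudWatch agent publishes a disk_used_percent metric in the CWAgent namespace; the instance ID, path/fstype dimensions, and SNS topic ARN are placeholders.

```python
# Sketch: an alarm that fires when disk utilization crosses 90% and publishes
# to an SNS topic that the workflow subscribes to.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="disk-space-utilization-above-90",
    Namespace="CWAgent",                 # assumes the CloudWatch agent is installed
    MetricName="disk_used_percent",
    Dimensions=[
        {"Name": "InstanceId", "Value": "i-0123456789abcdef0"},  # placeholder
        {"Name": "path", "Value": "/"},
        {"Name": "fstype", "Value": "xfs"},
    ],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=90.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:disk-alerts"],  # placeholder topic
)
```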
After the workflow matches the instances to be modified, it requests user approval. On receiving the green signal, it increases the EBS volume and sends an SSM command so the operating system recognizes and extends onto the added capacity.
The workflow achieves two primary steps with a total of 8 nodes. The first step filters out the right instance(s) using simple conditional operations. The second modifies the volume and applies the change to your instance.
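A minimal boto3 sketch of that second step follows: grow the volume by 20GB (the default mentioned above), then use SSM Run Command to extend the partition and file system. The volume ID, instance ID, device name, and file-system commands are placeholders assuming a Linux host with an xfs root volume.

```python
# Sketch of the "modify volume and apply it" step; not the platform's own code.
import boto3

ec2 = boto3.client("ec2")
ssm = boto3.client("ssm")

VOLUME_ID = "vol-0123456789abcdef0"     # placeholder
INSTANCE_ID = "i-0123456789abcdef0"     # placeholder
GROWTH_GB = 20                          # default increase from the template

# Grow the EBS volume in place by 20GB.
current_size = ec2.describe_volumes(VolumeIds=[VOLUME_ID])["Volumes"][0]["Size"]
ec2.modify_volume(VolumeId=VOLUME_ID, Size=current_size + GROWTH_GB)

# Tell the OS to use the extra capacity once the volume modification is underway.
ssm.send_command(
    InstanceIds=[INSTANCE_ID],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": [
        "sudo growpart /dev/nvme0n1 1",   # assumes an NVMe root device
        "sudo xfs_growfs -d /",           # assumes an xfs root file system
    ]},
)
```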
This workflow notifies you whether your Amazon Auto Scaling Groups (ASGs) span multiple Availability Zones (AZs) within an AWS region, which is an AWS best practice for improving the availability of your auto-scaled applications. When your ASGs run in a multi-AZ environment and one AZ becomes unhealthy or unavailable, the Auto Scaling Group launches new EC2 instances in an unaffected Availability Zone, enhancing the availability and reliability of the ASG.
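The check itself boils down to something like the boto3 sketch below, which flags any ASG whose AvailabilityZones list contains fewer than two AZs; the reporting step is assumed to handle the resulting list.

```python
# Sketch: find Auto Scaling Groups confined to a single Availability Zone.
import boto3

autoscaling = boto3.client("autoscaling")

single_az_asgs = []
paginator = autoscaling.get_paginator("describe_auto_scaling_groups")
for page in paginator.paginate():
    for asg in page["AutoScalingGroups"]:
        if len(asg["AvailabilityZones"]) < 2:
            single_az_asgs.append(asg["AutoScalingGroupName"])

# The notification step would report these names via email/Slack.
print("ASGs confined to a single AZ:", single_az_asgs)
```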
The workflow transfers the logs present in the log folder of your EC2 machines into a specified S3 bucket. This use case lets you retain the logs you want without worrying about running out of disk space on the machine.
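One possible shape of the transfer step is sketched below using SSM Run Command. The instance ID, log path, and bucket name are placeholders, and it assumes the AWS CLI and an instance profile with S3 write access are present on the machine.

```python
# Sketch: sync the instance's log directory to S3 via SSM Run Command.
import boto3

ssm = boto3.client("ssm")

ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],        # placeholder instance
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": [
        "aws s3 sync /var/log s3://my-log-archive-bucket/$(hostname)/"
    ]},
)
```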