Usecase Universe

A collection of use cases for DevOps teams

Browse 300+ predefined templates to automate all your AWS actions

Create Template
Solutions
All Categories

AWS Best Practices

EC2 Instances In Running State
AWS Best Practices
EC2

Identifies the total number of EC2 instances running in your AWS account.
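Outside the workflow, the same count can be approximated with a minimal boto3 sketch (illustrative only; it assumes credentials and a default region are already configured):

import boto3

ec2 = boto3.client("ec2")
running = 0
# Page through all reservations, keeping only instances in the "running" state.
for page in ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        running += len(reservation["Instances"])
print(f"Running EC2 instances: {running}")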

Aurora Database Instance Accessibility
AWS Best Practices
RDS

Ensures that all the database instances within your Amazon Aurora clusters have the same accessibility (either public or private).
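As a rough illustration of the underlying check (not the workflow itself), the boto3 sketch below groups Aurora DB instances by cluster and flags clusters whose members mix public and private accessibility:

import boto3
from collections import defaultdict

rds = boto3.client("rds")
clusters = defaultdict(set)
for page in rds.get_paginator("describe_db_instances").paginate():
    for db in page["DBInstances"]:
        # Only Aurora cluster members carry a DBClusterIdentifier.
        if db.get("Engine", "").startswith("aurora") and db.get("DBClusterIdentifier"):
            clusters[db["DBClusterIdentifier"]].add(db["PubliclyAccessible"])

for cluster, accessibility in clusters.items():
    if len(accessibility) > 1:  # both True and False present
        print(f"Mixed accessibility in Aurora cluster: {cluster}")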

Report RDS Free Storage Space
AWS Best Practices
RDS

This workflow sends an automated report of RDS instances that are low on storage. Detecting RDS database instances that are running low on disk space is crucial when they serve latency-sensitive production applications: it lets you act immediately and expand the storage space to maintain optimal response times. This makes it an important part of your monitoring setup.


Benefits


  • Cost optimization
  • Storage optimization
  • Status reporting & monitoring


Workflow Brief


The workflow retrieves all the RDS DB instances and monitors their storage state with the "AWS Monitoring" node. Any instances found to be low on storage are filtered out and passed to the Report node, which notifies you of the affected instances. The workflow is fully no-code: the Monitoring node integrates directly and sends you monitoring data in a readable format, and the data can be retrieved for any resource or sub-resource.


Process


The workflow consists of 5 nodes and is triggered by an external application (a Jira ticket, an email, etc.). The resource node collects all the instances (you can narrow which instances are retrieved using Additional Parameters), and the AWS Monitoring node, with its parameters pre-set, monitors them. The low-storage instances are then filtered out by the custom function written on the Filter node, and the Report node sends an email or Slack notification to the user.
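Outside the no-code setup, the same signal can be read from CloudWatch's FreeStorageSpace metric. The boto3 sketch below is a minimal approximation; the 5 GiB threshold is an assumption, and in the template the resulting list would feed the Report node rather than a print statement:

import boto3
from datetime import datetime, timedelta, timezone

rds = boto3.client("rds")
cloudwatch = boto3.client("cloudwatch")
THRESHOLD_BYTES = 5 * 1024 ** 3  # illustrative 5 GiB floor

low_storage = []
for page in rds.get_paginator("describe_db_instances").paginate():
    for db in page["DBInstances"]:
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/RDS",
            MetricName="FreeStorageSpace",
            Dimensions=[{"Name": "DBInstanceIdentifier", "Value": db["DBInstanceIdentifier"]}],
            StartTime=datetime.now(timezone.utc) - timedelta(minutes=30),
            EndTime=datetime.now(timezone.utc),
            Period=300,
            Statistics=["Average"],
        )
        points = stats["Datapoints"]
        if points and min(p["Average"] for p in points) < THRESHOLD_BYTES:
            low_storage.append(db["DBInstanceIdentifier"])

print("RDS instances low on storage:", low_storage)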

Security Group Internet Accessibility Report
AWS Best Practices
VPC

Sends a report of all the VPCs whose tunnel state is down.

DynamoDB to S3 Exporter
DynamoDB
S3
Backup
AWS Best Practices

This workflow lets you export data from DynamoDB to S3. Exporting DynamoDB data to S3 safeguards your data and doubles as an efficient AWS backup strategy. Automating this process with scheduled backups helps ensure no data is lost and keeps storage cost-efficient.


Benefits


  • Cost efficiency
  • Storage efficiency
  • Auto-remediation


Workflow Brief


The workflow is set to run every day and uses AWS Data Pipeline to export data from a DynamoDB table to a file in an Amazon S3 bucket. The workflow primarily consists of two action nodes: one for creating the pipeline and one for passing the data.



Process


The trigger is a recurring schedule that runs throughout the week. The two action nodes create the data pipeline and pass the data across it. A notification node alerts you of each successful backup.
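The template drives the export through AWS Data Pipeline; as a much-simplified stand-in for the same data movement (not the pipeline the workflow actually creates), the boto3 sketch below scans a hypothetical table and writes its items to S3 as JSON. The table and bucket names are assumptions:

import json
import boto3

dynamodb = boto3.client("dynamodb")
s3 = boto3.client("s3")
TABLE, BUCKET = "my-table", "my-backup-bucket"  # hypothetical names

# Paginate through the whole table with Scan.
items, kwargs = [], {"TableName": TABLE}
while True:
    page = dynamodb.scan(**kwargs)
    items.extend(page["Items"])
    if "LastEvaluatedKey" not in page:
        break
    kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]

# One JSON document per item, written as a single S3 object.
body = "\n".join(json.dumps(item) for item in items)
s3.put_object(Bucket=BUCKET, Key="dynamodb-export/backup.json", Body=body.encode("utf-8"))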

Increase EBS Volume Size If an Instance's Disk Utilization Exceeds 90%
EC2
Remediation
AWS Best Practices
Automation

A common risk in instance management is overutilization of disk space. Several factors can push disk utilization over 90%: user-initiated heavy workloads, analytic queries, prolonged deadlocks and lock waits, multiple concurrent transactions, long-running transactions, or other resource-intensive processes.


Over-utilized instances can run into performance issues that eventually affect your budget. A simple, automated way of scaling the volume of your instances when necessary takes the management overhead off your side. This use case focuses on automatically increasing disk space by a defined amount when disk space utilization (DSU) above 90% is detected.


Workflow Brief


In this particular template, you instruct the workflow to increase the volume size by 20 GB when disk space utilization crosses 90%. This event (DSU > 90%) sets off a CloudWatch Alarm, which triggers the TotalCloud workflow. Even if the CloudWatch Alarm alerts you of overutilization in the middle of the night, the workflow will have handled it before you even think of responding. Since it's automated, the fix is executed immediately, eliminating any response-time delays. If you wish to approve the action before it occurs, you can enable user approval as well.


Process & Integrations


The workflow increases your EBS volume by 20 GB as a default value. This value can be altered depending on your workload demands. When a CloudWatch Alarm goes off and sends an SNS alert for high disk space utilization, the workflow is automatically triggered and executes the action. As we've pointed out, the trigger can be anything: a CloudWatch Alarm, another external system or platform, or a ticketing system such as JIRA. In this case, you can also instruct the workflow to create a ticket on your ticketing platform when the Alarm goes off, and then close the ticket once remediation is completed. This is helpful for logging purposes and enables end-to-end automation.


After the workflow matches the instances to be modified, it requests user approval. On receiving the green signal, it increases the EBS volume and sends an SSM command so the new volume size is applied and the OS is informed.

The workflow achieves two primary steps with a total of 8 nodes. The first step filters out the right instance(s) using simple conditional operations. The second modifies the volume and applies the change to your instance.
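For reference, the modify-and-apply step corresponds roughly to the boto3 calls below. The instance ID, volume ID, device name, and filesystem commands are assumptions, and in the template these actions run as workflow nodes rather than as a script:

import boto3

ec2 = boto3.client("ec2")
ssm = boto3.client("ssm")
INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical
VOLUME_ID = "vol-0123456789abcdef0"  # hypothetical

# Grow the volume by 20 GB, the template's default increment.
current_size = ec2.describe_volumes(VolumeIds=[VOLUME_ID])["Volumes"][0]["Size"]
ec2.modify_volume(VolumeId=VOLUME_ID, Size=current_size + 20)

# Let the OS use the new capacity (Linux example; device and partition are assumed).
ssm.send_command(
    InstanceIds=[INSTANCE_ID],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["sudo growpart /dev/xvda 1", "sudo resize2fs /dev/xvda1"]},
)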


EC2 Instance Termination Protection Is Disabled
EC2
AWS Best Practices

EC2 Termination Protection ensures that the instances cannot be terminated accidentally from the Console, API or CLI. These instances can be terminated only after the termination protection setting is turned off.
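A minimal boto3 sketch of the underlying check might look like this (illustrative only):

import boto3

ec2 = boto3.client("ec2")
unprotected = []
for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            # Termination protection is exposed via the disableApiTermination attribute.
            attr = ec2.describe_instance_attribute(
                InstanceId=instance["InstanceId"], Attribute="disableApiTermination"
            )
            if not attr["DisableApiTermination"]["Value"]:
                unprotected.append(instance["InstanceId"])
print("Instances without termination protection:", unprotected)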

Report Lambdas whose DLQ is not set
Lambda
AWS Best Practices

This workflow helps generate a report of Lambda functions whose dead-letter queue (DLQ) is not set.
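As an illustration of the check behind the report (an approximation, not the workflow itself), the boto3 sketch below lists functions whose DeadLetterConfig has no target:

import boto3

lam = boto3.client("lambda")
missing_dlq = []
for page in lam.get_paginator("list_functions").paginate():
    for fn in page["Functions"]:
        cfg = lam.get_function_configuration(FunctionName=fn["FunctionName"])
        if not cfg.get("DeadLetterConfig", {}).get("TargetArn"):
            missing_dlq.append(fn["FunctionName"])
print("Lambda functions without a DLQ:", missing_dlq)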

Daily/Weekly reports of Lambda's Duration
Lambda
AWS Best Practices

Monitors metrics to make sure your Lambdas are running as they should, helping you find anomalies and improve Lambda function performance.
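For reference, a minimal boto3 sketch of the kind of metric pull involved is shown below; the one-day window and the chosen statistics are assumptions:

import boto3
from datetime import datetime, timedelta, timezone

lam = boto3.client("lambda")
cloudwatch = boto3.client("cloudwatch")

for page in lam.get_paginator("list_functions").paginate():
    for fn in page["Functions"]:
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/Lambda",
            MetricName="Duration",
            Dimensions=[{"Name": "FunctionName", "Value": fn["FunctionName"]}],
            StartTime=datetime.now(timezone.utc) - timedelta(days=1),
            EndTime=datetime.now(timezone.utc),
            Period=86400,
            Statistics=["Average", "Maximum"],
        )
        for point in stats["Datapoints"]:
            print(fn["FunctionName"], f"avg {point['Average']:.0f} ms, max {point['Maximum']:.0f} ms")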

Send Report of AWS ASGs That Do Not Span Multiple AZs
AWS Auto Scaling
AWS Best Practices

Notifies you whether your Amazon Auto Scaling Groups (ASGs) span multiple Availability Zones (AZs) within an AWS region. Spanning multiple AZs is an AWS best practice that improves the availability of your auto-scaled applications: when an ASG is hosted in a multi-AZ environment and one AZ becomes unhealthy or unavailable, the ASG launches new EC2 instances in an unaffected AZ, enhancing its availability and reliability.
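A rough boto3 equivalent of the check (illustrative only) could be:

import boto3

autoscaling = boto3.client("autoscaling")
single_az = []
for page in autoscaling.get_paginator("describe_auto_scaling_groups").paginate():
    for asg in page["AutoScalingGroups"]:
        if len(asg["AvailabilityZones"]) < 2:
            single_az.append(asg["AutoScalingGroupName"])
print("ASGs confined to a single AZ:", single_az)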

Everyday Instance States
EC2
AWS Best Practices

Generates a brief report of all the instances and their states.

Copying Infrastructure
EC2
AWS Best Practices

Makes an exact copy of the resources.

Copy EC2 Log Data to S3 and Delete the Log Folder
S3
EC2
Remediation
AWS Best Practices

The workflow transfers the logs present in the log folder of EC2 machines into a specified S3 bucket. This use case helps you keep the logs you want without having to increase the machine's disk space.
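As a hedged illustration of the underlying action (not the workflow's own nodes), an SSM Run Command invocation along these lines could copy and clear a log folder. The instance ID, log path, and bucket are hypothetical, and the instance is assumed to have the AWS CLI plus an instance role with S3 write access:

import boto3

ssm = boto3.client("ssm")
ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],  # hypothetical instance
    DocumentName="AWS-RunShellScript",
    Parameters={
        "commands": [
            "aws s3 cp /var/log/myapp s3://my-log-archive/ --recursive",  # copy logs out
            "rm -rf /var/log/myapp/*",                                    # then clear the folder
        ]
    },
)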

Reserved Instance Lease Expiration (7 Days)
AWS Best Practices
EC2

Checks for Amazon EC2 Reserved Instances that are scheduled to expire within the next 7 days.
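A minimal boto3 sketch of the same check (illustrative only):

import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
cutoff = datetime.now(timezone.utc) + timedelta(days=7)
expiring = [
    ri["ReservedInstancesId"]
    for ri in ec2.describe_reserved_instances(
        Filters=[{"Name": "state", "Values": ["active"]}]
    )["ReservedInstances"]
    if ri["End"] <= cutoff
]
print("Reserved Instances expiring within 7 days:", expiring)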

Hardware MFA On Root Account
AWS Best Practices
IAM

Checks the root account and warns if hardware multi-factor authentication (MFA) is not enabled.
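For reference, one common way to approximate this check with boto3 is shown below. It infers a hardware device from the absence of a root virtual MFA device (the approach used by the CIS benchmark), which is an assumption about how the check is typically done, not necessarily how this template implements it:

import boto3

iam = boto3.client("iam")
summary = iam.get_account_summary()["SummaryMap"]
mfa_enabled = summary.get("AccountMFAEnabled", 0) == 1

# If the root account's MFA serial shows up among virtual devices, it is not a hardware token.
virtual_devices = iam.list_virtual_mfa_devices(AssignmentStatus="Assigned")["VirtualMFADevices"]
root_uses_virtual = any(
    d["SerialNumber"].endswith(":mfa/root-account-mfa-device") for d in virtual_devices
)

if not mfa_enabled or root_uses_virtual:
    print("Warning: the root account does not appear to use a hardware MFA device.")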