Usecase Universe

A collection of use cases for DevOps teams

Browse 300+ predefined templates to automate all your AWS actions

Create Template
Solutions
All Categories

Remediation

24 Times Used
22 MAY 2019
Remove Entries in Security Groups Which Allow RPC (TCP Port 135) Access From Public IP
EC2
Security
Remediation

It is an AWS best practice to remove entries in security groups that allow RPC access from public IPs, reducing the possibility of a breach. Allowing unrestricted RPC access increases exposure to threats such as hacking, denial-of-service (DoS) attacks, and data loss.
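The detection step behind this remediation can be sketched as follows. This is a minimal, hypothetical helper operating on the rule shape returned by EC2's DescribeSecurityGroups API; in a live workflow the offending rules would then be removed with a RevokeSecurityGroupIngress-style call.

```python
# Sketch of the detection logic: flag ingress rules that allow TCP 135
# (RPC) from the whole internet. The rule dicts mimic the IpPermissions
# shape from EC2's DescribeSecurityGroups; the revoke step is omitted.

def is_public_rpc_rule(permission):
    """Return True if an ingress rule allows TCP 135 from a public CIDR."""
    if permission.get("IpProtocol") != "tcp":
        return False
    # Port 135 must fall inside the rule's port range.
    if not (permission.get("FromPort", -1) <= 135 <= permission.get("ToPort", -1)):
        return False
    # "Public IP" here means open to 0.0.0.0/0.
    return any(r.get("CidrIp") == "0.0.0.0/0" for r in permission.get("IpRanges", []))

# Sample rules for illustration: one offending, one harmless (HTTPS).
rules = [
    {"IpProtocol": "tcp", "FromPort": 135, "ToPort": 135,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
]
offending = [r for r in rules if is_public_rpc_rule(r)]
print(len(offending))  # -> 1
```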

Bundle And Archive - S3 Glacier Movement
S3
S3 Glacier
Remediation
Automation
Cost Saving

This S3-bundling use case simplifies an industry-standard storage best practice while providing additional benefits. Amazon S3 storage tiers let you move data between them, and each tier comes with different benefits. Moving data from S3 Standard to Glacier is common practice: Glacier is the cheapest storage tier available, and it's the best archiving solution.


We built an automated no-code workflow that pushes the whole process into one seamless flow of events, performing these different tasks from a single place. With this workflow, compressing your data becomes the ideal way to approach archiving, and you can potentially cut your costs with this neat method. You only need one workflow with 8 nodes to make this complex use case a reality. No coding, no configuring in the AWS Console, nothing else.


Workflow Brief


The workflow accesses the data, compresses the files, and transfers them into Glacier. Data compression is done by loading the collection of smaller S3 objects into a different bucket and into the data pipeline, which bundles the small files into one large zip. Compression quality ranges from 0 to 9; this template uses 0. Text files and log files can be compressed with a bit of custom code (since we've already created it, you can simply adopt it as a template). We also configure the pipeline in this workflow to enable the compression, which happens after a short wait period. The process itself is no different from normal ZIP compression; we're just enabling it on a cloud service, without any code. See the detailed workflow docs here.
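The bundling step is ordinary ZIP compression, so it can be sketched with the standard library. This is a simplified, local stand-in (byte blobs in place of S3 objects); the compresslevel argument corresponds to the 0-9 quality range mentioned above, where 0 stores files without compression.

```python
# Minimal sketch of the bundling step: zip several small "S3 objects"
# (here just in-memory byte blobs) into one archive.
import io
import zipfile

def bundle(objects, compresslevel=0):
    """Bundle a {key: bytes} mapping into a single in-memory zip archive."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED,
                         compresslevel=compresslevel) as zf:
        for key, data in objects.items():
            zf.writestr(key, data)
    return buf.getvalue()

# Two small "log files" to bundle; level 9 compresses, level 0 only stores.
logs = {"app-1.log": b"line\n" * 1000, "app-2.log": b"line\n" * 1000}
archive = bundle(logs, compresslevel=9)
```

In a real pipeline the resulting archive would then be uploaded to the target bucket and transitioned to the Glacier storage class.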


Process


When it comes to the activation of this workflow, there are 3 key elements.


1) Collection of data

A custom node collects the S3 data from your bucket and prepares it to be redirected. The sourceBucket parameter defines where the data is taken from, and the targetBucket defines where the data will be moved.


2) Creating the Pipeline

These nodes create the data pipeline through which the data will be compressed and moved. 


3) Pipeline Definition, Activation, and Deletion

This part of the workflow configures the compression of the S3 Data that is moved into the pipeline and ensures its transfer to S3 Glacier. Once it's complete, it deletes the pipeline.



Reboot Process in EC2 instances (Triggered by Jira)
EC2
Remediation

What if managing your instances were as easy as raising a Jira ticket? Almost every DevOps team uses Jira as a standard means of issue tracking and task management. We've seen many of our customers prefer an integrated approach to their cloud and workflows, so we've built a workflow for easier management of instance states through Jira triggers.


Benefits


  • Scheduled alerts
  • Auto-remediation
  • Cost-saving
  • Customization


Workflow Brief


To automate the process end-to-end, the workflow raises a Jira ticket when a CloudWatch alarm goes off, executes the corrective action, and then closes the ticket. The workflow acts as a virtual DevOps engineer. The action here is rebooting the instances when an alarm for high CPU utilization goes off. See the detailed workflow docs here.


Process


Reboot the process associated with a machine by raising a ticket. The Apache servers associated with the EC2 machines are rebooted when they cause high CPU utilization (set the threshold as per your need). The trigger is the tags specified in the Jira ticket description. The workflow creates a Jira ticket when the CloudWatch alarm alerts of high CPU utilization; after the machines reboot, the ticket is closed before the workflow ends.
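Since the trigger is the tags in the Jira ticket description, the matching step can be sketched as below. The "Key=Value" line format in the description is an assumption for illustration, not the template's actual syntax.

```python
# Hypothetical sketch of the trigger step: extract tags from a Jira
# ticket description and match EC2 instances that carry all of them.
# The Key=Value line format is an illustrative assumption.

def parse_tags(description):
    """Extract {key: value} tag pairs from lines like 'Key=Value'."""
    tags = {}
    for line in description.splitlines():
        line = line.strip()
        if "=" in line:
            key, _, value = line.partition("=")
            tags[key.strip()] = value.strip()
    return tags

def matches(instance_tags, wanted):
    """True if the instance carries every tag named in the ticket."""
    return all(instance_tags.get(k) == v for k, v in wanted.items())

ticket = "Reboot Apache servers\nRole=web\nEnv=prod"
wanted = parse_tags(ticket)
print(matches({"Role": "web", "Env": "prod", "Name": "web-1"}, wanted))  # -> True
```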

Increase EBS Volume Size if Instance's Disk Utilization Exceeds 90%
EC2
Remediation
AWS Best Practices
Automation

A common risk in instance management is over-utilization of disk space. Several factors can push disk utilization over 90%: user-initiated heavy workloads, analytic queries, prolonged deadlocks and lock waits, multiple concurrent transactions, long-running transactions, or other processes that consume system resources.


Over-utilized instances can run into performance issues that later affect your budget. A simple, automated means of scaling your instances' volumes when necessary takes the management overhead off your plate. This use case focuses on automatically increasing disk space by a defined amount when disk space utilization (DSU) above 90% is detected.


Workflow Brief


In this particular template, you instruct the workflow to increase the volume size by 20 GB when disk space utilization crosses 90%. This event (DSU > 90%) sets off a CloudWatch alarm, which triggers the TotalCloud workflow. Even if your CloudWatch alarm alerts you of over-utilization in the middle of the night, the workflow will have handled it before you even think of responding. Since it's automated, the fix is executed immediately, eliminating any response-time delays. If you wish to approve the action before it occurs, you can enable user approval as well.


Process & Integrations


The workflow increases your EBS volume by 20 GB as a default value; this can be altered depending on your workload demands. When a CloudWatch alarm goes off and sends an SNS alert for high disk space utilization, the workflow is automatically triggered and executes the action. As we've pointed out, the trigger can be anything: a CloudWatch alarm, another external system or platform, or a ticketing system such as JIRA. In that case, you can also instruct the workflow to create a ticket on your ticketing platform when the alarm goes off and close it once remediation is complete. This is helpful for logging purposes and enables end-to-end automation.


After the workflow matches the instances to be modified, it requests user approval. On receiving a green signal, it increases the EBS volume and sends an SSM command that applies the new volume size on the instance and informs the OS.

The workflow achieves two primary steps with a total of 8 nodes. The first is to filter out the right instance(s) using simple conditional operations; the second is to modify the volume and apply it to your instance.
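The two steps above can be sketched as plain logic: filter the instances whose DSU crossed the threshold, then compute the new volume size. The 90% threshold and 20 GB increment are the template's stated defaults; the instance records and field names are illustrative stand-ins for what a ModifyVolume-style call would receive.

```python
# Sketch of the two-step logic: filter instances over the disk space
# utilization (DSU) threshold, then compute the target volume size.

DSU_THRESHOLD = 90.0   # percent, the template's default trigger point
INCREMENT_GB = 20      # default increase, adjustable per workload

def needs_resize(dsu_percent):
    """Step 1: conditional filter on disk space utilization."""
    return dsu_percent > DSU_THRESHOLD

def new_volume_size(current_gb):
    """Step 2: target size for a ModifyVolume-style call (hypothetical wiring)."""
    return current_gb + INCREMENT_GB

# Illustrative instance records (not real API output).
instances = [
    {"id": "i-0aaa", "dsu": 94.2, "volume_gb": 100},
    {"id": "i-0bbb", "dsu": 55.0, "volume_gb": 200},
]
to_fix = [i for i in instances if needs_resize(i["dsu"])]
print([(i["id"], new_volume_size(i["volume_gb"])) for i in to_fix])
# -> [('i-0aaa', 120)]
```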


Security Hub remediation
Security Hub
Remediation

Remediates the findings sent by Security Hub.

Reboot process in EC2 machine if CPUUtilization goes high(Triggered by alarm)
EC2
Remediation

If the CPUUtilization of an instance with the required CloudWatch alarm set goes high, this workflow is triggered and lowers the machine's CPUUtilization by rebooting the process inside the machine.

Reboot EC2 instances if CPUUtilization Goes High (Triggered by Alarm)
EC2
Remediation

If the CPUUtilization of a machine with an alarm set goes above 90%, the alarm triggers this workflow. Instances with high CPUUtilization are rebooted to reduce their CPUUtilization.

Upgrade EC2 Machines (Triggered by JIRA)
EC2
Remediation

A Jira ticket with the required summary triggers this workflow. Instances with the tag mentioned in the Jira ticket's description are upgraded to the next instance level once CPUUtilization goes high, reducing that machine's CPUUtilization.

Backup and Terminate EC2 Instances (Triggered by JIRA)
EC2
Remediation

A Jira ticket with the required summary triggers this workflow. An AMI backup and EBS snapshots are taken of the instances with the tag mentioned in the Jira ticket's description, after which the instances are terminated.

Launch EC2 Instances from AMI (Triggered by JIRA)
EC2
Remediation

A Jira ticket with the required summary triggers this workflow. Instances are launched from AMIs with the tag mentioned in the Jira ticket's description.

Reboot EC2 Instances (Triggered by JIRA)
EC2
Remediation

This workflow reboots EC2 instances when CPUUtilization goes high; a Jira ticket with the required summary triggers it. Instances with the tag mentioned in the Jira ticket's description are rebooted, reducing that machine's CPUUtilization.

Reboot RDS DB instances (Triggered by JIRA)
RDS
Remediation

The objective of this workflow is to reboot RDS DB instances whose DBConnections metric goes high. A Jira ticket with the required summary triggers this workflow. RDS DB instances with the tag mentioned in the Jira ticket's description are rebooted, reducing the load on the DB instance.

Copy EC2 Logs Data to S3 and Delete the Log Folder
S3
EC2
Remediation
AWS Best Practices

The workflow transfers the logs in the log folder of EC2 machines into a specified S3 bucket. This use case helps you store the logs you want without having to increase the machine's disk space.

Stop RDS DB Instance - Create And Close Jira Ticket
Remediation
Automation
RDS

Creates a Jira ticket listing the DB Instance Identifiers of all instances that will be stopped, stops the DB instances, and then closes the Jira ticket.