
Usecase Universe

A collection of use cases for DevOps teams

Browse 300+ predefined templates to automate your AWS actions


Create an EKS Cluster and Node Group (Part 2)

Automation
Release and Deployment
EC2

This use case automates the deployment of an EKS cluster without any code. The deployment can occur in response to a manual trigger or a particular event, like the filing of a ticket. Generally, to run Kubernetes on AWS, you have to deploy a cluster of EKS worker nodes. You can go about this in four ways: the AWS CLI, the AWS console, eksctl commands, or Terraform scripts. Three of these methods require scripting, and all four are prone to configuration errors, such as Availability Zone capacity errors.


You then spend additional time fixing those errors and tuning the configuration to match your architecture. An alternative is a standardized template that deploys the cluster with bare-minimum configuration on your side. This was requested by one of our customers, who found the tedious steps put forward by AWS all too frustrating.


Workflow Brief


The process is split into two workflows that run sequentially: the execution of the first triggers the second. The workflows define the triggers, the resources to be deployed, the EKS cluster creation details, and any approvals, notifications, or integrations you need.


The two workflows in this process are:

  1. A workflow to set up the network and data management
  2. An AWS EKS cluster creation workflow (a minimal sketch of the underlying API calls follows below)
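
For reference, here is a minimal boto3 sketch of what the cluster-creation workflow corresponds to in the AWS API. The role ARNs, subnet IDs, names, and instance type are placeholders, not values from the template, and the no-code workflow handles all of this for you.

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Create the EKS control plane (placeholder role ARN and subnet IDs).
eks.create_cluster(
    name="demo-cluster",
    roleArn="arn:aws:iam::123456789012:role/eks-cluster-role",
    resourcesVpcConfig={"subnetIds": ["subnet-aaa111", "subnet-bbb222"]},
)
eks.get_waiter("cluster_active").wait(name="demo-cluster")

# Attach a managed node group of worker nodes to the cluster.
eks.create_nodegroup(
    clusterName="demo-cluster",
    nodegroupName="demo-nodes",
    scalingConfig={"minSize": 1, "maxSize": 3, "desiredSize": 2},
    subnets=["subnet-aaa111", "subnet-bbb222"],
    instanceTypes=["t3.medium"],
    nodeRole="arn:aws:iam::123456789012:role/eks-node-role",
)
```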


Integrations & Customizations 


Every TotalCloud workflow can accommodate multiple integrations and customized changes. This workflow is currently triggered manually, whenever you need a cluster deployed. It can also be automated based on any alarm or event, such as a ticket being created on an external ticketing system like JIRA or Zendesk. Slack and email integrations also come in handy for sending user approvals, notifications, and reports wherever you require.


The specifics of the workflow can be altered in every node as well. If you're operating at a larger scale, the number and type of resources deployed can be configured accordingly.


Set Concurrency for Tagged Lambdas

Lambda
Other
Other

Finds particular Lambda functions via tags and sets the desired concurrency.
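
A minimal boto3 sketch of the same operation, assuming an illustrative tag ("env": "prod") and concurrency limit; the template itself performs this without code.

```python
import boto3

tagging = boto3.client("resourcegroupstaggingapi")
lam = boto3.client("lambda")

# Find Lambda functions carrying the target tag (placeholder key/value).
resources = tagging.get_resources(
    ResourceTypeFilters=["lambda:function"],
    TagFilters=[{"Key": "env", "Values": ["prod"]}],
)

# Reserve the desired concurrency on each matched function.
for item in resources["ResourceTagMappingList"]:
    lam.put_function_concurrency(
        FunctionName=item["ResourceARN"],
        ReservedConcurrentExecutions=50,
    )
```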

Redshift Clusters CPUUtilization

Amazon Redshift
Other
Reporting

The template measures and provides you with actionable CPUUtilization data for Redshift clusters, so you can view under- and over-utilized resources and take action.
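
For context, this is the CloudWatch metric the template reports on. A minimal boto3 sketch of pulling it directly follows; the cluster identifier and the seven-day window are illustrative assumptions.

```python
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

# Average and peak CPUUtilization for one Redshift cluster over the last week.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Redshift",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "ClusterIdentifier", "Value": "my-cluster"}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=["Average", "Maximum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], point["Maximum"])
```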

Remove Empty Auto Scaling Groups

AWS Auto Scaling
Cost Saving
Cost Saving

It is an AWS best practice to identify empty Auto Scaling groups in your AWS account and delete them to avoid unneeded cost and to keep your AWS resources easier to manage. An Auto Scaling group is considered empty when it has no EC2 instances attached and is not associated with an Elastic Load Balancer (ELB).
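
A hedged boto3 sketch of the check the workflow performs; the emptiness criteria mirror the description above, and this is not the template's own implementation.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Walk all Auto Scaling groups and delete the ones with no instances,
# no classic load balancers, and no target groups attached.
paginator = autoscaling.get_paginator("describe_auto_scaling_groups")
for page in paginator.paginate():
    for group in page["AutoScalingGroups"]:
        is_empty = (
            not group["Instances"]
            and not group["LoadBalancerNames"]
            and not group["TargetGroupARNs"]
        )
        if is_empty:
            autoscaling.delete_auto_scaling_group(
                AutoScalingGroupName=group["AutoScalingGroupName"]
            )
```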

Copy EC2 Logs Data to S3 and Delete the Log Folder

S3
EC2
Remediation
AWS Best Practices
Remediation

The workflow transfers the logs present in the log folder of EC2 machines into a specified S3 bucket. This use case helps you keep the logs you want without worrying about having to increase the disk space on the machine.
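
A minimal sketch of the same idea using SSM Run Command; the instance ID, log path, and bucket name are placeholders, and the instance is assumed to have the SSM agent plus an instance profile with S3 access.

```python
import boto3

ssm = boto3.client("ssm")

# Copy the log folder to S3, then clear it to reclaim disk space.
ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],          # placeholder instance
    DocumentName="AWS-RunShellScript",
    Parameters={
        "commands": [
            "aws s3 cp /var/log/myapp s3://my-log-archive/i-0123456789abcdef0/ --recursive",
            "rm -rf /var/log/myapp/*",
        ]
    },
)
```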

Reboot Process in EC2 instances (Triggered by Jira)

EC2
Remediation
Remediation

What if managing your instances were as easy as raising a JIRA ticket? Almost every DevOps team uses JIRA as a standard means of issue tracking and task management, and we've seen many of our customers prefer an integrated approach to their cloud and workflows. Hence, we've built a workflow for easier management of instance states through JIRA triggers.


Benefits


  • Scheduled alerts
  • Auto-remediation
  • Cost-saving
  • Customization


Workflow Brief


To automate the process end to end, the workflow raises a JIRA ticket when a CloudWatch alarm goes off, executes the corrective action, and then closes the ticket. The workflow acts as a virtual DevOps engineer. The action here is rebooting the instances when an alarm for high CPU utilization goes off. See the detailed workflow docs here.


Process


Reboot the process associated with a machine by raising a ticket. The Apache servers associated with the EC2 machines are rebooted when they cause high CPU utilization (set the threshold as per your needs). The trigger is the tags specified in the Jira ticket description. The workflow creates a Jira ticket when the CloudWatch alarm alerts on high CPU utilization, and after the machines reboot, the ticket is closed before the workflow ends.
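
A hedged sketch of the corrective step, assuming Apache runs as the "httpd" service and that the Jira transition ID for closing the issue is known; every identifier below (instance ID, Jira URL, issue key, transition ID, credentials) is a placeholder, and the workflow performs the equivalent without code.

```python
import boto3
import requests

ssm = boto3.client("ssm")

# Restart the Apache service on the affected instance via SSM Run Command.
ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],                   # placeholder
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["sudo systemctl restart httpd"]},
)

# Close the Jira ticket that tracked the alarm (transition ID is project-specific).
requests.post(
    "https://example.atlassian.net/rest/api/2/issue/OPS-123/transitions",
    json={"transition": {"id": "31"}},                     # placeholder transition
    auth=("user@example.com", "api-token"),                # placeholder credentials
)
```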

Create a 3-Tier Application (Part 3)

Automation
Release and Deployment
EC2

TotalCloud's 3-Tier Application workflow is a quick solution and template for an otherwise long scripting process. With the use of sequentially placed workflows, you can create an architecture framework with all the necessary infrastructure layers set up and ready to go.


The currently available methods to create a 3-tier application are to code or provision it manually using the AWS CLI, or to write a CloudFormation or Terraform script. Needless to say, the manual methods are not only tedious but chaotic, time-consuming, and prone to errors. You're not just programmatically scripting the requirements and configuring multiple services independently, but also spending time testing them, fixing bugs, and then deploying everything. Some of these issues are solved by tools like Terraform and CloudFormation, which let you write code that's more readable, but the catch is that your DevOps engineers need considerable Terraform and CloudFormation knowledge and skill to write it.


Workflow Brief


The goal of these three workflows is to create individual layers that mirror the three layers of a 3-tier application: the outer network layer, followed by the traffic layer, and finally the database layer. A fixed set of AWS services is configured to get this done, and the services can be altered to suit your custom application. No coding, no navigating between different pages, and no other hassle: everything is available in the workflow setup in a drag-and-drop model, and you just need to connect the nodes and create a flow of events.


Process


The details of deployment are defined within the workflows - when to trigger, what the action is, which services to deploy, whether to get user approval, whether to send notifications at different points, whether to have validations, what customizations to have, etc. See the detailed workflow description here.


A quick overview of the 3 layers in this template: 

Network Layer

Here, all the network resources are set up to establish a connection and manage the other resources.


The first few nodes establish the VPC. Four subnets are created with designated IP ranges and connected to the internet gateway. This way, we establish a connection between the services and the internet.


Normally, with AWS, you will need to configure each of these services separately. With these action nodes, however, you can configure several different services consecutively without any coding or navigating between pages.
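
A minimal boto3 sketch of what the network-layer nodes correspond to; the CIDR blocks and Availability Zones are illustrative, not the template's values.

```python
import boto3

ec2 = boto3.client("ec2")

# Create the VPC that hosts all three layers.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Four subnets spread across two Availability Zones (placeholder ranges/AZs).
subnet_ids = []
for cidr, az in [
    ("10.0.1.0/24", "us-east-1a"),
    ("10.0.2.0/24", "us-east-1b"),
    ("10.0.3.0/24", "us-east-1a"),
    ("10.0.4.0/24", "us-east-1b"),
]:
    subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=az)
    subnet_ids.append(subnet["Subnet"]["SubnetId"])

# An internet gateway attached to the VPC gives the services a path to the internet.
igw = ec2.create_internet_gateway()
igw_id = igw["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)
```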


Traffic Layer

The traffic layer manages the traffic with the services below; it also responds and reacts to changing demands in traffic.


The second workflow is entirely focused on managing moving data. This is achieved by setting up route tables and route table associations connected to the previously established internet gateway.


A load balancer is also configured to manage the data across the EC2 instances, and in case you need to adjust the EC2 scale to match your incoming load, an auto scaling group is configured as well.
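
A minimal boto3 sketch of the routing piece of this layer; the VPC, internet gateway, and subnet IDs are placeholders standing in for the resources created by the network-layer workflow, and the load balancer and auto scaling group nodes are omitted for brevity.

```python
import boto3

ec2 = boto3.client("ec2")

vpc_id = "vpc-0123456789abcdef0"                       # placeholder
igw_id = "igw-0123456789abcdef0"                       # placeholder
public_subnets = ["subnet-aaa111", "subnet-bbb222"]    # placeholders

# Create a route table and send internet-bound traffic through the gateway.
rt = ec2.create_route_table(VpcId=vpc_id)
rt_id = rt["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)

# Associate the route table with the public subnets.
for subnet_id in public_subnets:
    ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)
```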


Database Layer

Here, the DB instances that will help store and manage the data are created.


An RDS DB instance acts as our database, and a CloudFront content distribution allows you to distribute the data to desired locations.
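
A hedged sketch of the database layer's core call; the engine, instance class, size, and credentials are illustrative placeholders, and the CloudFront distribution is omitted because its configuration is lengthy.

```python
import boto3

rds = boto3.client("rds")

# Create a small RDS instance to serve as the application's database.
rds.create_db_instance(
    DBInstanceIdentifier="app-db",                 # placeholder
    DBInstanceClass="db.t3.micro",
    Engine="mysql",
    AllocatedStorage=20,
    MasterUsername="admin",
    MasterUserPassword="change-me-immediately",    # placeholder credential
)
```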


Integrations & Customizations 


Every TotalCloud workflow can accommodate multiple integrations and customized changes. For this template, you can set up external triggers, like ticketing systems or emails, that set the workflow to execute. For instance, whenever a JIRA ticket is created with a particular summary, this workflow runs automatically and deploys the application. Similarly, for MSPs, specific emails from clients can trigger the workflow as well, so you don't have to run it manually every time. Slack and email integrations also come in handy for sending user approvals, notifications, and reports wherever you require.


The three layers, or workflows, can also be customized at every node. If you want to alter the type of resources deployed or the specific configurations, you can. Every aspect of this template can be customized.





Increase EBS Volume Size if Instance's Disk Utilization Exceeds 90%

EC2
Remediation
AWS Best Practices
Remediation
Automation

A common risk in instance management is over-utilization of disk space. Several factors can push disk utilization over 90%: user-initiated heavy workloads, analytic queries, prolonged deadlocks and lock waits, multiple concurrent transactions, long-running transactions, or other processes that consume CPU resources.


Over-utilized instances can incur several performance issues that later affect your budget. Having a simple, automated means of scaling the volume of your instances when necessary takes the management overhead off your side. This use case focuses on automatically increasing disk space by a defined amount when disk space utilization (DSU) above 90% is detected.


Workflow Brief


In this particular template, you instruct the workflow to increase the volume size by 20 GB when disk space utilization crosses 90%. This event (DSU > 90%) sets off a CloudWatch alarm, which triggers the TotalCloud workflow. Even if your CloudWatch alarm alerts you of over-utilization in the middle of the night, the workflow will have handled it before you even think of responding. Since it's automated, the fix is executed immediately, eliminating any response-time delays. If you wish to approve the action before it occurs, you can enable user approval as well.


Process & Integrations


The workflow increases your EBS volume by 20 GB as a default value; this value can be altered depending on your workload demands. When a CloudWatch alarm goes off and sends an SNS alert for high disk space utilization, the workflow is automatically triggered and executes the action. As we've pointed out, you can set the trigger to be anything: a CloudWatch alarm or any other external system, platform, or ticketing system such as JIRA. In that case, you can also instruct the workflow to create the ticket on your ticketing platform when the alarm goes off and then close it once remediation is complete. This is helpful for logging purposes and for end-to-end automation.


After the workflow matches the instances to be modified, it requests user approval. On receiving a green signal, it increases the EBS volume and sends an SSM command so the instance's operating system recognizes and uses the expanded volume.

The workflow has two primary steps, achieved with a total of eight nodes. The first step is to filter out the right instance(s) using simple conditional operations. The second is to modify the volume and apply it to your instance.
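
A hedged sketch of the remediation step: grow the volume by 20 GB, then tell the OS about it over SSM. The volume and instance IDs, device name, and filesystem commands are placeholders; adjust them to your AMI and filesystem.

```python
import boto3

ec2 = boto3.client("ec2")
ssm = boto3.client("ssm")

volume_id = "vol-0123456789abcdef0"                # placeholder
current_size = ec2.describe_volumes(VolumeIds=[volume_id])["Volumes"][0]["Size"]

# Increase the EBS volume size by 20 GB (EBS supports online expansion).
ec2.modify_volume(VolumeId=volume_id, Size=current_size + 20)

# Extend the partition and filesystem inside the instance so the OS can use it.
ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],           # placeholder
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": [
        "sudo growpart /dev/xvda 1",               # placeholder device/partition
        "sudo resize2fs /dev/xvda1",               # use xfs_growfs for XFS volumes
    ]},
)
```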


Bundle And Archive - S3 Glacier Movement

S3
S3 Glacier
Remediation
Automation
Cost Saving

This S3-bundling use case simplifies an industry-standard storage best practice while providing additional benefits. The Amazon S3 storage tiers allow you to move data between them, and the different tiers come with different benefits. Moving data from S3 Standard to Glacier is common practice: for one, Glacier is the cheapest storage tier available, and two, it's the best archiving solution.


We built an automated no-code workflow that takes the same process and pushes it into one seamless flow of events, handling these different tasks from the same place. With this workflow, compression becomes the way you approach archiving, and you could potentially cut your costs with this neat method. You only need one workflow with eight nodes to make this complex use case a reality. No coding, no configuring on the AWS Console, or anything else.


Workflow Brief


The workflow accesses the data, compresses the files, and transfers them into Glacier. Data compression is done by loading the collection of smaller S3 objects onto a different bucket and into the data pipeline, which bundles the small files into one large zip. Compression quality ranges from 0 to 9; this template uses 0. Text files and log files can be compressed with a bit of custom code (since we've already created it, you can simply adopt it as a template). We also configure the pipeline in this workflow to enable the compression, which happens after a short wait period. The process itself is no different from normal ZIP compression; we're just enabling it on a cloud service without any code. See the detailed workflow docs here.


Process


When it comes to the activation of this workflow, there are 3 key elements.


1) Collection of data

A custom node collects the S3 data from your bucket and prepares it to be redirected. The sourceBucket parameter defines where the data is taken from, and the targetBucket defines where the data will be moved.


2) Creating the Pipeline

These nodes create the data pipeline through which the data will be compressed and moved. 


3) Pipeline Definition, Activation, and Deletion

This part of the workflow configures the compression of the S3 data that is moved into the pipeline and ensures its transfer to S3 Glacier. Once the transfer is complete, it deletes the pipeline.
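
A minimal sketch of the bundling idea outside the workflow: pull the small objects, zip them locally, and upload the archive with the GLACIER storage class. The bucket names, prefix, and archive key are placeholders, and the template itself does this via a data pipeline rather than in-memory.

```python
import io
import zipfile
import boto3

s3 = boto3.client("s3")
source_bucket = "my-source-bucket"                 # placeholder
target_bucket = "my-archive-bucket"                # placeholder

# Collect the small objects and write them into one in-memory zip archive.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
    listing = s3.list_objects_v2(Bucket=source_bucket, Prefix="logs/")
    for obj in listing.get("Contents", []):
        body = s3.get_object(Bucket=source_bucket, Key=obj["Key"])["Body"].read()
        archive.writestr(obj["Key"], body)

# Store the bundle directly in the Glacier storage class.
s3.put_object(
    Bucket=target_bucket,
    Key="archive/logs-bundle.zip",
    Body=buffer.getvalue(),
    StorageClass="GLACIER",
)
```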



Modify Instance Type if CPU Utilization <10% For a Week

EC2
Cost Saving
Remediation
Cost Saving
Remediation

If CPU utilization is less than 10% for a week, you are not utilizing the instance efficiently. This workflow identifies such instances and switches them to another instance type.
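
A hedged sketch of the check-and-resize logic; the instance ID, the 10% threshold, the seven-day window, and the target instance type are placeholders mirroring the description above.

```python
from datetime import datetime, timedelta
import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

instance_id = "i-0123456789abcdef0"                # placeholder

# Daily average CPU utilization for the instance over the past week.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
    Period=86400,
    Statistics=["Average"],
)
daily_averages = [p["Average"] for p in stats["Datapoints"]]

# If every daily average is under 10%, stop, resize, and restart the instance.
if daily_averages and all(avg < 10 for avg in daily_averages):
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
    ec2.modify_instance_attribute(
        InstanceId=instance_id, InstanceType={"Value": "t3.small"}  # placeholder type
    )
    ec2.start_instances(InstanceIds=[instance_id])
```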
