Cost Allocation for AWS EBS Snapshots Made Easy, Get Deeper AWS Cost Analysis

All AWS EBS snapshots (point-in-time backups of the EBS volumes that provide persistent block storage for your AWS EC2 instances) cost money, including snapshots of untagged, underused, or unused volumes. AWS has been steadily extending custom tagging support across most of its services, such as EC2, RDS, ELB, Elastic Beanstalk, etc. Now it has introduced Cost Allocation for EBS snapshots.

This new feature allows you to use Cost Allocation Tags for your EBS snapshots so that you can assign costs to your customers, applications, teams, departments, or billing codes at the level of individual resources. It makes analyzing your EBS snapshot costs and usage much easier.

Botmetric, quickly acting on this AWS announcement, has incorporated cost allocation and cost analysis for EBS snapshots. Of course, you can use the AWS console to activate EBS snapshot tagging and get EBS cost analysis (read the detailed post by AWS to know how). However, with that approach you have to download the cost and usage report and analyze it in Excel sheets, which quickly gets tedious. With this feature now available in Botmetric, you need not juggle complex Excel sheets.

Importance of Tagging EBS Snapshots for Cost Allocation and Analysis

Tagging has been an age-old practice among AWS enthusiasts. However, not every AWS service permits customer-defined tags, and some that do can be tagged only through the API or command line. EBS snapshot storage is one of the line items that AWS accounts are charged for, so tagging EBS snapshots is pivotal for proper cost allocation.

More than that, as an AWS user, you can now see exactly how much data has changed between snapshots, giving you visibility into how much you could save by copying snapshots to Glacier instead.

This new feature will be of greatest interest to enterprise customers seeking to track the costs associated with EBS snapshots, which can easily add a few thousand dollars to their AWS bill.

Earlier, it was a huge challenge for enterprises to track snapshot costs, as they could not tag EBS snapshots for cost allocation. With this new AWS capability, complemented by Botmetric's cost analysis for EBS snapshots, they can drill down deeper into cost allocation and get a consolidated cost analysis view too.

Jeff Barr notes in his blog post that this feature will be useful not just for enterprises but for AWS customers of all shapes and sizes. He also adds that managed service providers, some of whom manage AWS footprints that encompass thousands of EBS volumes and many more EBS snapshots, will be able to map snapshot costs back to customer accounts and applications.

Analyzing and Generating Cost Reports of Tagged EBS Snapshots in Botmetric

Botmetric has long offered cost allocation and cost analysis features: helping customers define proper tagging policies, tagging resources that were not tagged correctly, allocating costs for items that AWS does not let you tag, analyzing costs of resources for which tagging was not possible earlier, and much more.

To manage your AWS cloud budget like a pro, your AWS cost allocation and chargeback must be spot on. Thanks to Botmetric Cost & Governance's Chargeback and Analyze, many AWS customers have been able to define, control, allocate, and understand their AWS costs by the different cost centers in their organization, while also being able to generate internal chargeback invoices. Now that AWS has released the capability to tag EBS snapshots, you get even better visibility into your AWS spend.

Cost Allocation for EBS Snapshots in Botmetric

Using Botmetric Cost & Governance's Chargeback, you can allocate cloud resources by their IDs, including EBS snapshots. Please refer to the image below:

Cost Allocation for EBS Snapshots in Botmetric

Cost Analysis of EBS Snapshots in Botmetric

Using Botmetric Cost & Governance's Analyze, you can analyze the total cost incurred by EBS snapshots for a particular day or month using the filter 'EC2-EBS-Snapshot.'

Cost Analysis of EBS Snapshots in Botmetric

You can even analyze the cost of each resource at a particular timestamp, so that you get complete visibility into your EBS snapshot spend.

Cost Analysis of EBS Snapshots in Botmetric

Report Scheduling and Shareability in Botmetric

With Botmetric, you can schedule alerts and share the cost reports with a set of recipients, so that other team members also have visibility into cost allocation and cost analysis.

Report Scheduling and Shareability in Botmetric

With Botmetric, you can also share the cost allocation and analysis reports directly with the intended recipients.

Report Scheduling and Shareability in Botmetric

P.S.: According to AWS, snapshots are created incrementally, so the first snapshot will look expensive. For a particular EBS volume, deleting the snapshot with the highest cost may simply move some of that cost to a more recent snapshot: when you delete a snapshot that contains blocks used by a later snapshot, the space referenced by those blocks is attributed to the later snapshot.

To Conclude

Botmetric has long had the ability to automate EBS snapshot creation based on instance tags and volume tags, at regular intervals and at any time of day/week/month. With this new feature, you can easily perform AWS EC2-EBS cost allocation and analysis as well.

To bring cloud cost accounting under control, you need to build a cost reporting strategy for your cloud deployments. That said, this can be a daunting task. If you are looking for an easier way to track your cloud spend, the best way forward is to plug your AWS account into the Botmetric Cost & Governance cloud cost management console. Read this post if you want to know how to schedule an interval job to capture EBS snapshots based on instance tags. Until our next blog, stay tuned.

Increase Operational Efficiency by 5X with New Botmetric Custom Jobs’ Cloud Ops Automation

As a Cloud Ops engineer, do you ever feel like a tiny pet hamster stuck in a wheel, doing the same stuff again and again and going nowhere? You have plans to automate everyday cloud operation tasks and a roadmap towards Cloud Ops automation, but don't know where to start. Working on mundane operational tasks day in, day out is taxing. Does this ring a bell?


The best way forward is to schedule all your routine tasks and use simple python scripts to achieve the desired automation using Botmetric’s New Custom Jobs.

Here’s Why Botmetric Built Custom Jobs

The Botmetric Ops & Automation product already offered a list of 25+ pre-defined automated jobs. Using these jobs, you could automate a lot of routine tasks across 7 major AWS services. Many Botmetric customers liked these automated jobs and requested support for additional, unique operational tasks in AWS cloud. Hence, the Botmetric team built a universal solution that lets you run custom Python scripts through the Botmetric console.

Game-changing Cloud Ops Automation Features in Botmetric Custom Jobs

In the current market, a lot of SaaS products offer automation but fall short in delivering custom jobs. Botmetric Ops & Automation, since its release, has addressed almost 80% of automation requirements.

With Botmetric Custom Jobs you can:

  • Run your own custom scripts: Through one Botmetric console you can now perform both governance and automation. Botmetric Custom Jobs lets you write the Python scripts you need and schedule their execution through the Botmetric console (a minimal sketch of such a script follows this list).
  • Increase operational efficiency: There is a list of tasks that a DevOps engineer performs every day, and these tasks differ from one infrastructure to another. Automating such tasks through scripts frees up a lot of the engineer's time to concentrate on business innovation.
  • Get visibility into executions: Unlike running a script through cron or the CLI, with Botmetric you can view the status of each run, receive an alert or email notification on success or failure, and see historic execution details.
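
For illustration, here is a minimal sketch of the kind of script you might schedule as a Custom Job: it stops running EC2 instances that carry an assumed Environment=dev tag. The tag key/value, the region, and the plain-script entry point are assumptions; adapt them to your environment and to the conventions the Botmetric console expects.

```python
# Minimal sketch of a schedulable custom job (assumed tag, value, and region).
import boto3

REGION = "us-west-2"                        # assumed region
TAG_KEY, TAG_VALUE = "Environment", "dev"   # assumed tag

def stop_tagged_instances():
    ec2 = boto3.client("ec2", region_name=REGION)
    # Find running instances that carry the tag.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": f"tag:{TAG_KEY}", "Values": [TAG_VALUE]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [
        i["InstanceId"] for r in reservations for i in r["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids

if __name__ == "__main__":
    print("Stopped:", stop_tagged_instances())
```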

How to Schedule Custom Jobs on Botmetric?

There are two ways to schedule custom jobs:

1. Create a job with new script

Write your new script in the editor provided and verify the syntax. Give the job a descriptive name for identification and provide the email address to be notified.

Create a job with new script

 

2. Utilize saved scripts to create a new job

You can also choose one of your previously created scripts and schedule a job from it.

2) Utilize saved scripts to create new job

 

Essentially, Custom Jobs empowers you to run the automation you need in your environment. With simple logic of your own, written in Python, you can schedule your routine tasks for greater operational excellence.

Here are a few use cases to give you a gist of Custom Jobs' potential:

The Case in Point for Creating VPC in a Region

Assume you're headquartered in the Bay Area and run your business on the cloud, so most of your resources live in US-west. Lately, you have expanded your business to Germany as well, yet you are still launching instances in US-west, and your team starts complaining about latency. You decide to populate resources in EU-central, since that region now offers greater benefits. With a simple Python script that creates a VPC in a region with a user-defined CIDR, scheduled as a Custom Job, you can have the VPC ready for any resources launched in that region.
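
As a rough illustration of that use case, here is a hedged boto3 sketch that creates a VPC with a user-defined CIDR in EU-central. The region, CIDR block, and Name tag are placeholder values, and a real job would go on to create subnets, route tables, and gateways.

```python
# Hedged sketch: create a VPC with a user-defined CIDR in eu-central-1.
# Region, CIDR, and tag values are assumptions for illustration.
import boto3

def create_vpc(region="eu-central-1", cidr="10.20.0.0/16", name="eu-expansion"):
    ec2 = boto3.client("ec2", region_name=region)
    vpc_id = ec2.create_vpc(CidrBlock=cidr)["Vpc"]["VpcId"]
    # Wait until the VPC is available, then tag it for identification.
    ec2.get_waiter("vpc_available").wait(VpcIds=[vpc_id])
    ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": name}])
    return vpc_id

if __name__ == "__main__":
    print("Created VPC:", create_vpc())
```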

The Case in Point for Copying EBS Snapshots Automatically Across Instance Tags

If you are looking for stronger DR policies and want to secure your snapshots, you can use Custom Jobs to write a script that copies EBS snapshots across regions for the specified instance tags, scheduling your volume snapshots and keeping them safe.
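
Here is a hedged boto3 sketch of what such a script could look like: it finds volumes attached to instances carrying an assumed Backup=true tag, picks each volume's latest completed snapshot, and copies it to a second region. The tag, the regions, and the "latest snapshot only" policy are all assumptions for illustration.

```python
# Hedged sketch: copy the latest snapshot of every volume attached to
# instances carrying a given tag from one region to another.
import boto3

SRC, DST = "us-west-2", "eu-central-1"     # assumed regions
TAG_KEY, TAG_VALUE = "Backup", "true"      # assumed instance tag

def copy_tagged_instance_snapshots():
    src_ec2 = boto3.client("ec2", region_name=SRC)
    dst_ec2 = boto3.client("ec2", region_name=DST)

    # Collect the volumes attached to instances that carry the tag.
    reservations = src_ec2.describe_instances(
        Filters=[{"Name": f"tag:{TAG_KEY}", "Values": [TAG_VALUE]}]
    )["Reservations"]
    volume_ids = {
        bdm["Ebs"]["VolumeId"]
        for r in reservations for i in r["Instances"]
        for bdm in i.get("BlockDeviceMappings", []) if "Ebs" in bdm
    }

    copied = []
    for vol_id in volume_ids:
        snaps = src_ec2.describe_snapshots(
            OwnerIds=["self"],
            Filters=[{"Name": "volume-id", "Values": [vol_id]},
                     {"Name": "status", "Values": ["completed"]}],
        )["Snapshots"]
        if not snaps:
            continue
        latest = max(snaps, key=lambda s: s["StartTime"])
        copy = dst_ec2.copy_snapshot(
            SourceRegion=SRC,
            SourceSnapshotId=latest["SnapshotId"],
            Description=f"DR copy of {latest['SnapshotId']} ({vol_id})",
        )
        copied.append(copy["SnapshotId"])
    return copied
```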

The Case in Point for Automatically Deleting Snapshots

If you are looking to derive savings by optimizing your backups, you can write a custom script that deletes old snapshots after a defined number of days. By automating this through Custom Jobs, you lower wastage and save on unnecessary backup retention.
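
A minimal sketch of such a cleanup script is below, assuming a 30-day retention window and no extra tag filters; in practice you would scope it to the snapshots your backup jobs create before letting it delete anything.

```python
# Hedged sketch: delete self-owned snapshots older than a retention window.
# Retention days and region are assumptions; add tag filters before using
# anything like this against real backups.
import boto3
from datetime import datetime, timedelta, timezone

def delete_old_snapshots(region="us-west-2", retention_days=30):
    ec2 = boto3.client("ec2", region_name=region)
    cutoff = datetime.now(timezone.utc) - timedelta(days=retention_days)
    deleted = []
    paginator = ec2.get_paginator("describe_snapshots")
    for page in paginator.paginate(OwnerIds=["self"]):
        for snap in page["Snapshots"]:
            if snap["StartTime"] < cutoff:
                ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])
                deleted.append(snap["SnapshotId"])
    return deleted
```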

Try Botmetric Custom Jobs Now

To Conclude

Each passing day, we move closer to NoOps, which essentially means machines handle known problems while humans focus on new ones. Many Botmetric customers have embraced NoOps (knowingly or unknowingly) by automating every possible routine task in their environment, so that DevOps time is spent on solving new issues, increasing operational efficiency by 5X.

What are you waiting for? Take a 14-day trial and check for yourself how Botmetric helps you automate cloud ops tasks. If you’re already a customer, and have any questions, please pop them in the comment section below. We will get back to you ASAP. If you are looking to know about all things cloud, follow us on Twitter.

Dynamically Increase AWS EBS Capacity On-the-Go Now with New Elastic Volumes

Say goodbye to scheduling downtime when modifying Elastic Block Storage (EBS) volumes. No more bottlenecks: modify EBS volumes on the go. Here's why: AWS has announced a new addition to its EBS portfolio, called Elastic Volumes, which lets you apply changes to your EBS volumes without going offline or impacting your operations. You can grow a volume, change its IOPS, or change its volume type as your requirements evolve, all without scheduling downtime. With today's 24×7 operating models, that matters more than ever.

Elastic Volumes: What is it about

Elastic Volumes lets you optimize capacity, performance, or cost by increasing volume size, adjusting performance, and changing volume type as and when the need arises, while EBS continues to offer persistent, high-performance block storage for AWS EC2.

Prior to the launch of Elastic Volumes, whenever your data volume grew you had to schedule downtime and perform several steps: create a snapshot, restore it to a new, larger volume, and attach that volume to the EC2 instance.

Now, with the launch of Elastic Volumes, AWS has drastically simplified the process of modifying EBS volumes. You can also use CloudWatch or CloudFormation, along with AWS Lambda, to automate EBS volume modifications without any downtime.

AWS, in one of its blogs, says that Elastic Volumes reduce the amount of work and planning needed when managing space for EC2 instances. Instead of a traditional provisioning cycle that can take weeks or months, you can make changes to your storage infrastructure instantaneously, with a simple API call.

Essentially with AWS Elastic Volumes, as per AWS, you can:

  1. Change workloads: At some point you may realize that Throughput Optimized volumes are a better fit and need to change the volume type. You can do so easily with this new feature, without any downtime.
  2. Better handle spiking demand: Assume you're running a relational database on a Provisioned IOPS volume sized for a moderate amount of traffic during the month, and you observe a tenfold increase in traffic during the final three days of each month due to month-end processing. In this scenario, you can use the new feature to provision correctly, handle the spike, and then dial it back down once the spike subsides.
  3. Increase storage: Suppose you provisioned a volume for 100 GiB and an alarm goes off indicating that it is now at 90% of capacity (disk-almost-full). Using this new feature, you can increase the size of the volume and expand the file system to match, with no downtime and in a fully automated fashion (a sketch of the underlying API calls follows this list). You can also use Botmetric Ops & Automation's Incidents, Actions & Triggers app to automate increasing the volume size as soon as the disk-almost-full alert is triggered. Instead of working on it manually, Botmetric will right-size the volume based on the criteria defined in the respective Actions and Triggers. To know more about Botmetric Incidents, Actions & Triggers, read here.
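
For reference, here is a hedged boto3 sketch of the underlying Elastic Volumes API calls for the storage-increase scenario: it grows an assumed volume and polls the modification state. The volume ID, target size, and region are placeholders, and the file system still has to be extended from inside the instance afterwards.

```python
# Hedged sketch: grow an EBS volume and poll the modification state.
# Volume ID, size, and region are placeholders for illustration.
import time
import boto3

def grow_volume(volume_id="vol-0123456789abcdef0",  # placeholder ID
                new_size_gib=200, region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    ec2.modify_volume(VolumeId=volume_id, Size=new_size_gib)
    # Poll until the modification leaves the "modifying" state.
    while True:
        mod = ec2.describe_volumes_modifications(
            VolumeIds=[volume_id]
        )["VolumesModifications"][0]
        if mod["ModificationState"] in ("optimizing", "completed"):
            return mod["ModificationState"]
        time.sleep(15)
```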

How to go about it:

It’s very simple to configure:

  1. Sign in to AWS Console
  2. Select Amazon EBS
  3. Right click on the Volume you wish to modify

create volume in aws

Image Source: Amazon Web Services

  4. Modify Volume

Modify Volume AWS

Image Source: Amazon Web Services

  5. Check the progress of the modification (modifying, optimizing, or completed).

Modify Volume in aws

Image Source: Amazon Web Services

Limitations:

While the new feature helps increase capacity, tune performance, and change volume types on-the-fly, without disruption, and with single-click, it comes with certain restrictions:

  • If you encounter an error message while attempting to apply a modification to an EBS volume, or if you are modifying an EBS volume attached to a previous-generation instance type, the volume needs to be detached or the instance stopped for the modification to proceed
  • The previous-generation Magnetic volume type is not supported by the volume modification methods
  • Decreasing the size of an EBS volume is not supported. However, you can create a smaller volume and then migrate your data to it using application-level tools such as robocopy
  • After modifying a volume, you need to wait at least six hours before applying further modifications to the same volume
  • M3.medium instances are treated as current generation; m3.large, m3.xlarge, and m3.2xlarge instances are treated as previous generation

Conclusion:

With the launch of Elastic Volumes, AWS EBS is now more elastic. The best part: you can change an EBS volume's size or performance characteristics while it's still attached to and in use by an EC2 instance.

Check out the AWS video to know more:

Be a DevOps Champ, Automate AWS EBS Volume Snapshot Creations With Botmetric

Editor’s Note: This special product feature blog is written by our zealous Minja, Swathi Harish.

AWS EBS volumes (AWS Elastic Block Store volumes), attached to and used by AWS EC2 instances, play a pivotal role in file storage and Disaster Recovery (DR) management. Hence, it is recommended to back up these volumes regularly to enable better DR management and business continuity, and thereby mitigate data loss in case of a calamity or accidental deletion.

Consider a hypothetical use case: a person updates his volume about four times a day and wants to take a snapshot each time he does so. That works out to 28 times a week, 112 times a month, and about 1,344 times a year. The time and effort spent on this would make the person give up on the idea of taking periodic snapshots.

Thankfully, however, one need not spend time and effort on such calculations or scripting. Instead, use our virtual cloud engineer — Botmetric, which automatically creates snapshots of the AWS EBS volume as per your scheduled settings.

Some of the most user-friendly automation features for EBS snapshot creation offered by Botmetric are:

1. Creating Backup Snapshots of the AWS EBS Volume Automatically

Ever wondered if you could take snapshots of attached volumes just by using the name of the volume or instance, or even the instance ID? Yes, it's possible with Botmetric (a sketch of the underlying API calls follows the list below). Botmetric users can:

a. Take snapshots of volumes based on Instance IDs at any time in a day/week/month.
b. Take snapshots of volumes based on Instance IDs at regular intervals of time.
c. Take snapshots of volumes attached to instances based on Instance tags at any time of the day/week/month.
d. Take snapshots based on instance tags at regular intervals of time.
e. Take snapshots based on volume tags at any time of day/week/month.
f. Take snapshots based on volume tags at regular intervals of time.
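
As a rough sketch of the API calls behind option (c), the snippet below snapshots every volume attached to instances carrying an assumed Role=db tag. Botmetric handles the scheduling, retention, and notifications for you; this only illustrates the underlying mechanics.

```python
# Hedged sketch: snapshot every volume attached to instances carrying a
# given tag. Tag key/value and region are assumptions for illustration.
import boto3

def snapshot_tagged_instances(region="us-east-1",
                              tag_key="Role", tag_value="db"):
    ec2 = boto3.client("ec2", region_name=region)
    reservations = ec2.describe_instances(
        Filters=[{"Name": f"tag:{tag_key}", "Values": [tag_value]}]
    )["Reservations"]
    snapshot_ids = []
    for r in reservations:
        for inst in r["Instances"]:
            for bdm in inst.get("BlockDeviceMappings", []):
                if "Ebs" not in bdm:
                    continue
                snap = ec2.create_snapshot(
                    VolumeId=bdm["Ebs"]["VolumeId"],
                    Description=f"Scheduled backup of {inst['InstanceId']}",
                )
                snapshot_ids.append(snap["SnapshotId"])
    return snapshot_ids
```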

2. Copying EBS Snapshots Across Accounts Automatically

Botmetric's automation feature also makes provisions to copy snapshots across different AWS accounts, easily and on a regular basis. Moreover, the delay caused by manual operations can be reduced significantly by automating these tasks. You set the time of day/week/month, and the snapshot job is executed repeatedly as desired.
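
For illustration, here is a hedged sketch of a cross-account copy done with boto3: the source account shares the snapshot, and the target account copies it so it owns an independent snapshot ID. The snapshot ID, account ID, profile names, and region are placeholders, and an unencrypted snapshot is assumed (encrypted ones also require sharing the KMS key).

```python
# Hedged sketch: share a snapshot with a second account, then copy it there.
import boto3

SNAPSHOT_ID = "snap-0123456789abcdef0"   # placeholder snapshot ID
TARGET_ACCOUNT = "111122223333"          # placeholder account ID
REGION = "us-east-1"                     # assumed region

# Step 1, run with source-account credentials (profile name is assumed):
# grant create-volume permission on the snapshot to the target account.
src = boto3.Session(profile_name="source-account").client("ec2", region_name=REGION)
src.modify_snapshot_attribute(
    SnapshotId=SNAPSHOT_ID,
    Attribute="createVolumePermission",
    OperationType="add",
    UserIds=[TARGET_ACCOUNT],
)

# Step 2, run with target-account credentials: copy the shared snapshot so
# the target account owns an independent copy of it.
dst = boto3.Session(profile_name="target-account").client("ec2", region_name=REGION)
copy = dst.copy_snapshot(
    SourceRegion=REGION,
    SourceSnapshotId=SNAPSHOT_ID,
    Description=f"Copy of shared snapshot {SNAPSHOT_ID}",
)
print("New snapshot owned by target account:", copy["SnapshotId"])
```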

3. Copying EBS Snapshots Across Regions Automatically

Copying snapshots across regions is a tricky task, as Amazon charges explicitly for this operation. After a snapshot is copied, standard AWS EBS snapshot storage charges apply in the destination region. To keep this cost in check, old snapshots stored in the destination region should be deleted on a regular basis and new snapshots copied carefully. This requires careful execution, which is much easier with automation.

a. Copying snapshots across regions by specifying the desired destination region and a time of day/week/month.
b. Copying snapshots to the destination region at regular intervals of time.

The above jobs help not only in reducing workload but also in saving cost, because all the old snapshots created by these jobs are removed and replaced by new ones. You can specify how many snapshots you want to retain, and you can delete any number of jobs at any time. Descriptive job names make it easy to understand what each job does.

These auto-management features ensure you will never be a slave to the manual work, but will be a DevOps champ! So if you want your AWS tasks to be done accurately in the shortest time possible, automate them using Botmetric.

And we at Botmetric are continually improving our product based on customer feedback. We'd love to hear from you too. Just drop us a line on Facebook, Twitter, or LinkedIn. Do sign up for a Botmetric 14-day trial, here.

 

Automating on AWS Cloud: The DevOps Way

With innovation accelerating and DevOps users demanding more, businesses are becoming more agile with each passing day. To smooth the progress of functional excellence and achieve overall business goals, organizations need to stay agile, and this shift is progressing downstream with the evolution of DevOps.

But DevOps isn't as easy as it sounds. Deploying a highly efficient Amazon Web Services (AWS) environment without expensive configuration management tools is possible, but it requires serious effort, and there is plenty of room for errors and mistakes.

AWS offers a wide range of tools and services that can help you configure and deploy your AWS resources, such as CloudFormation and Elastic Beanstalk. But these tools cannot manage your AWS environment fully: they only cover the AWS objects you create, and they can't deal with the software and configuration data present on your EC2 instances.

While the cloud has emerged as a hero for enterprises by giving them a great platform to manage their multifaceted software applications, enterprises look for more flexibility in their software creation practices. They have migrated from conventional models to agile or lean development practices, and this shift has spread to operations teams, shortening the gap between traditional development and ops teams.

By providing a flexible and highly efficient environment, Amazon Web Services (AWS) has successfully facilitated the growth and profitability of clients such as Netflix, Airbnb, and Etsy, all of whom embraced DevOps. In this post, we will try to deconstruct the elements of DevOps behind those successes and share some best practices and practical examples.

How do you make sure that your RDS/EBS data is being backed up on time? Do you keep a copy of your AWS snapshots across regions to be prepared for disaster recovery? Botmetric offers Cloud Automation jobs for all these use cases and many more.

Here are some of the cloud automation jobs that will help you save time and advance your operational agility.

Take EBS volume snapshot based on instance/volume tags

Enable regular snapshots for your AWS EBS volumes. Use Botmetric’s Cloud Automation to schedule a job for creating snapshots automatically. This can be done for the EBS volumes having specified instance or volume tags. This would also help you to be DR ready.

Take RDS snapshot based on RDS tags

Enable regular snapshots for your AWS RDS instances. Use Botmetric’s Cloud Automation to schedule a job for creating snapshots automatically for the RDS instances having specified tags.
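
A hedged boto3 sketch of the idea is below: it creates a manual snapshot for every RDS instance carrying an assumed Backup=daily tag. The tag key/value and region are placeholders for illustration.

```python
# Hedged sketch: snapshot RDS instances that carry a given tag.
import boto3
from datetime import datetime, timezone

def snapshot_tagged_rds(region="us-east-1", tag_key="Backup", tag_value="daily"):
    rds = boto3.client("rds", region_name=region)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")
    created = []
    for db in rds.describe_db_instances()["DBInstances"]:
        tags = rds.list_tags_for_resource(
            ResourceName=db["DBInstanceArn"]
        )["TagList"]
        if any(t["Key"] == tag_key and t["Value"] == tag_value for t in tags):
            snap_id = f"{db['DBInstanceIdentifier']}-{stamp}"
            rds.create_db_snapshot(
                DBSnapshotIdentifier=snap_id,
                DBInstanceIdentifier=db["DBInstanceIdentifier"],
            )
            created.append(snap_id)
    return created
```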

Stop EC2 Instances based on instance tags

Stop the instances which are not required anymore and save some cost. Botmetric's Cloud Automation can schedule a job for your infrastructure which will stop EC2 instances automatically at a specified time.

Start EC2 Instances based on instance tags

Start your stopped instances whenever they are required. Botmetric's Cloud Automation can schedule a job which will start EC2 instances automatically at a specified time.

Create AMI for EC2 Instances based on instance tags

Use automation to create AMI for EC2 Instances based on instance tags automatically.
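
As a rough sketch of what such a job does under the hood, the snippet below creates an AMI for every instance carrying an assumed AMIBackup=true tag. The tag and region are placeholders, and NoReboot=True trades file-system consistency for zero downtime.

```python
# Hedged sketch: create an AMI for every instance carrying a given tag.
import boto3
from datetime import datetime, timezone

def create_amis_by_tag(region="us-east-1", tag_key="AMIBackup", tag_value="true"):
    ec2 = boto3.client("ec2", region_name=region)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")
    reservations = ec2.describe_instances(
        Filters=[{"Name": f"tag:{tag_key}", "Values": [tag_value]}]
    )["Reservations"]
    image_ids = []
    for r in reservations:
        for inst in r["Instances"]:
            image = ec2.create_image(
                InstanceId=inst["InstanceId"],
                Name=f"{inst['InstanceId']}-{stamp}",
                NoReboot=True,  # avoids a reboot, at the cost of consistency
            )
            image_ids.append(image["ImageId"])
    return image_ids
```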

Copy EBS Volume snapshot (based on volume tags) across regions

Enable your data backups to be copied across AWS regions. Use Botmetric's Cloud Automation to schedule a job which will automatically, at specified intervals, copy EBS volume snapshots based on volume tags from a source region to a destination region.

Copy RDS snapshot (based on RDS tags) across regions

Using Botmetric's Cloud Automation, you can schedule a job which will automatically, at specified intervals, copy RDS snapshots based on RDS tags across regions.
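
For reference, here is a hedged sketch of the underlying cross-region copy call in boto3. The snapshot ARN, identifiers, and regions are placeholders; encrypted snapshots additionally need a KMS key in the destination region.

```python
# Hedged sketch: copy an RDS snapshot from one region to another.
import boto3

SOURCE_REGION = "us-east-1"   # assumed source region
DEST_REGION = "eu-west-1"     # assumed destination region
SOURCE_SNAPSHOT_ARN = (
    "arn:aws:rds:us-east-1:111122223333:snapshot:mydb-20240101"  # placeholder
)

rds = boto3.client("rds", region_name=DEST_REGION)
copy = rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier=SOURCE_SNAPSHOT_ARN,
    TargetDBSnapshotIdentifier="mydb-20240101-dr-copy",
    SourceRegion=SOURCE_REGION,
)
print("Copy started:", copy["DBSnapshot"]["DBSnapshotIdentifier"])
```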

Botmetric periodically copies your data backups across the AWS regions. With Botmetric, you can do so by scheduling a job for cross-region copy:
• Copy EBS Volume snapshot (based on volume tags) across regions
• Copy RDS snapshot (based on RDS tags) across regions

How can Automation help you further?

Auto-Healing with 24×7 DevOps Automation

Automate your most common and repetitive AWS tasks to save up to 30% of your time. Detect and fix critical issues with just the click of a button.

  • Fix issues 10x faster, within seconds, with the 'CLICK TO FIX' feature
  • Automate start/stop of EC2 instances to save more time and avoid unnecessary expenses
  • Resolve problems on demand or automatically to save cost and improve your operational agility
  • Activate logs for load balancers and AWS CloudTrail with one click
  • Use the quick 'How-To-Fix' guide to resolve audit issues

Implementing DevOps automation opens up extremely helpful prospects and improves functional excellence and time-to-market. Automation also helps trim expenses along several dimensions, including manpower costs, resource costs, value costs, complexity costs, and, most valuable in the eyes of industry leaders, time costs.

DevOps has progressed to become a key part of enterprise IT planning. The practical way of managing security in the cloud is evolving fast and shifting to an automation-first approach. The Cloud Automation jobs offered by Botmetric are helpful for all of these use cases.

Take up a 14-day free trial to learn how Botmetric can simplify the cloud automation tasks in your AWS infrastructure and make them 10x faster.

With Botmetric's AWS DevOps Automation, you can easily supervise your everyday cloud tasks with just a click. Not only that, you can minimize looming security risks while maintaining fast growth and quick time-to-market on the production side. Automation also helps you reduce CloudOps overload: it eliminates repetitive, boring tasks and lets you focus on what matters most for your business. Automating your data backups not only frees you from the fear of losing them but also enables you to run your business smoothly. And since DevOps in the cloud is a match made in heaven, implementing these best practices will let you enjoy the freedom of saving more time by automating your routine cloud tasks.

So what does the future hold for DevOps?  Tweet your thoughts to us.

AWS DevOps Automation: Gain Operational Efficiency

Public clouds such as AWS provide tremendous flexibility and a rich set of platform features. However, as you start to scale up usage on AWS, day-to-day cloud operations (Ops) can become a challenge.

Here are some reasons why Cloud DevOps automation on Amazon Web Services (AWS) is worth the investment.

  1. The sheer number of infrastructure attributes on AWS makes Ops challenging. Running Cloud Ops while staying on top of existing capabilities (and contending with AWS's steady stream of new feature releases) is easier said than done. Moreover, AWS is not just about infrastructure: users have to deal with applications, infrastructure, and the resulting workloads.
  2. AWS can be a complex ecosystem to tame, and not everyone is skilled in cloud operational automation. Effective Ops automation (where possible) needs an approach that melds skill sets with clear objectives.
  3. Developers and DevOps would love to free up their time for innovation (new features, applications, business impact) instead of solving operations issues manually. Example: try running multiple batch jobs manually on AWS at scheduled intervals.
  4. Manual DevOps approaches directly impact time to business readiness. Say you had to test a critical upgrade for your e-commerce site across 50+ instances and verify that your operations team is ready to support the rollout. An automated Ops approach to testing and validating across instances reduces your time to go live and increases overall confidence.
  5. Limited ability to track the cost impact of operational activities at scale. Example: the ability to schedule start and stop of 50+ EC2 instances for an hour every day and track the cost usage for that hour.

Fortunately, help is available. You could either invest in creating custom scripts to automate some tasks or use Cloud Automation products such as Botmetric to automate, track and resolve simple tasks. Here are five examples that you could consider automating using Botmetric:

Taking Automated Volume Snapshots for EBS and RDS:

In the event of disaster recovery, it is important to have EBS (Elastic Block Store) and RDS (Relational Database Service) snapshots in place to aid recovery and restoration. You could either write a script to automate these tasks or use Botmetric, which provides a simple one-click "Cloud Fix" capability that can automate taking snapshots based on AWS volume tags or instance tags.

Enabling Termination Protection for AWS EC2 Instances:

Termination protection on AWS prevents accidental termination of EC2 instances. Imagine if you ran a security audit and realized you have to activate termination protection on 100+ EC2 instances! To automate this task, you could either write a script that detects and activates the protection or use Botmetric to discover, visualize, and automatically enable termination protection with a single click.
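
If you were to script it yourself, the core call is a single instance-attribute change per instance, as in this hedged sketch (the instance IDs and region are placeholders):

```python
# Hedged sketch: enable termination protection on a list of instances.
import boto3

def enable_termination_protection(instance_ids, region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    for instance_id in instance_ids:
        # DisableApiTermination=True blocks TerminateInstances API calls
        # until the attribute is switched back off.
        ec2.modify_instance_attribute(
            InstanceId=instance_id,
            DisableApiTermination={"Value": True},
        )

enable_termination_protection(["i-0123456789abcdef0"])  # placeholder ID
```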

Releasing Unused Elastic IP Address:

Elastic IP addresses on AWS cost you money if unused. Botmetric can detect unused IP addresses across your regions and provide a one-click button to release unused IP addresses.
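
The detection logic is straightforward to sketch with boto3: list the account's Elastic IPs and release the ones with no association. The region is an assumption, and you would want to review the candidate list before releasing anything in a real account.

```python
# Hedged sketch: find and release unassociated Elastic IPs (VPC addresses).
import boto3

def release_unused_eips(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    released = []
    for address in ec2.describe_addresses()["Addresses"]:
        # An address with no association is costing money for nothing.
        if "AllocationId" in address and "AssociationId" not in address:
            ec2.release_address(AllocationId=address["AllocationId"])
            released.append(address["PublicIp"])
    return released
```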

Run Batch Jobs at Scheduled Intervals:

If you need to start and stop EC2 instances at scheduled times of the day, Botmetric provides a simple way to schedule such jobs and alert users once the task is completed. You can also track the cost of the workloads.

Removing Unused ELB (Elastic Load Balancers):

ELBs are not cheap, and customers often don't realize this until they have accrued a significant cost. You can use Botmetric to automatically detect and remove unused ELBs.
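
A hedged sketch of the detection logic for Classic Load Balancers is below; it treats a load balancer with zero registered instances as unused, which is an assumption that may not fit every setup, and the region is a placeholder.

```python
# Hedged sketch: delete Classic Load Balancers that have no registered instances.
import boto3

def delete_unused_classic_elbs(region="us-east-1"):
    elb = boto3.client("elb", region_name=region)
    deleted = []
    for lb in elb.describe_load_balancers()["LoadBalancerDescriptions"]:
        if not lb["Instances"]:  # nothing registered behind it
            elb.delete_load_balancer(LoadBalancerName=lb["LoadBalancerName"])
            deleted.append(lb["LoadBalancerName"])
    return deleted
```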

All of the examples above could be handled manually; however, taken in aggregate or even individually, these mundane yet repetitive tasks take a significant toll on your DevOps teams.

We strongly encourage users to automate where possible, either by writing custom scripts or by using cloud automation platforms such as Botmetric. As a reference, our own experience tells us that automating start and stop of EC2 instances reduces operational fatigue by a factor of 5X and gives significant hours back to developers and DevOps teams, which they can invest in creating better products rather than doing mundane EC2 starts and stops.

Start automating now! Sign up for Botmetric.