4 ways AIOps will bring joy to Cloud Engineers

The world of IT has evolved dramatically over the last decade, with cloud becoming the new normal for enterprise businesses. From on-premise data centers to the rise of cloud and converged architecture, IT operations has undergone a wave of process and procedural changes driven by the DevOps philosophy. Companies like Amazon, Microsoft and Google have disrupted traditional infrastructure and IT operations by removing the heavy lifting of installing data centers and managing servers, networks and storage, so that engineers can put their focus back on applications and business operations instead of IT plumbing.

Above all, the DevOps philosophy is to save time and improve performance by bridging the gap between engineers and IT operations. However, DevOps hasn't truly delivered what was expected of it: engineers still have to handle all the issues and events in their infrastructure, whether in the cloud or in an on-prem data center.
There is a new philosophy emerging around the question: "what if humans could solve new, complex problems while we let machines resolve known, repetitive, and identifiable problems in cloud infrastructure management?" This is the AIOps philosophy, and it is slowly taking root in cloud-native and cloud-first companies to reduce the dependency on engineers to resolve problems.

Many enterprises have already adopted cloud as a key component of their IT and have limited their DevOps practice to configuration management and automated application deployments. Nurturing the AIOps philosophy will further eliminate the repetitive need for engineers to manage everyday operations and free up precious engineering time to focus on business problems. While cloud has made automation easy for engineers, it is the lack of intelligence powering day-to-day operations that still causes operational fatigue, even in the cloud world.

The adoption of cloud and the emergence of AI and ML technologies allow companies to use intelligent software automation rather than vanilla scripting to make decisions on known problems, predict issues and provide diagnostic information for new problems, reducing the operational overhead for engineers. The era of pagers waking up engineers in the middle of the night for downtime and known issues could be gone within the next 18 to 24 months.

In the traditional IT world, the main focus of operations engineers was to keep the lights on. In the cloud world, there are new dimensions, such as variable costs, API-driven infrastructure provisioning, no centralised security and dynamic change, that further increase the work burden. The only way to help companies reduce their cloud costs, improve security compliance for on-demand provisioning, reduce alert fatigue for engineers and bring intelligent machine operations to handle problems caused by dynamic changes is through AIOps: put AI to work making cloud work for your business.

Managing Enterprise Cloud Costs 

According to RightScale's "State of the Cloud 2017" report, managing cloud costs is the #1 priority for companies using cloud computing. Cloud cost challenges cause massive headaches for finance, product and engineering teams due to dynamic provisioning, auto-scaling and the lack of garbage collection for unused cloud resources. When hundreds of engineers within an enterprise use cloud platforms like AWS, Azure and Google for their applications, it is impossible for one person to keep track of spend or enforce a centralised approval process. Companies like Botmetric are using machine intelligence and AI technologies to detect cost spikes, provide deep visibility into who used what, and help companies deploy intelligent automation to remove unused resources and automatically resize over-provisioned servers, storage and other resources in the cloud.

Just as IT infrastructure is an important factor in your business success, so is understanding the optimal usage limits for your organization's infrastructure needs. Compared to on-prem, cloud looks easy because of its pay-as-you-go model; however, when you grow exponentially, your cloud scales the same way, and the bill can come as a shock. Many teams put tagging policies and rigorous monitoring in place, but because controlling cost is not a natural part of the engineer's workflow, misses still happen. Continuous automation reduces those misses, and AIOps builds continuous saving into your cloud operations. For example, you can automate the purchase of Reserved Instances in AWS with a simple AWS Lambda function. Another common and high-impact use case is turning off dev instances over the weekend and automatically turning them back on at the start of the week, which can save up to 36% for many cloud users.
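The savings arithmetic behind that weekend shutdown can be sketched directly. The shutdown window below (Friday 19:00 to Monday 07:00) is a hypothetical choice for illustration, not a recommended schedule:

```python
def weekend_off_savings(stop_hour_friday: int = 19, start_hour_monday: int = 7) -> float:
    """Fraction of weekly on-demand cost saved by stopping an instance
    from Friday at stop_hour_friday until Monday at start_hour_monday."""
    hours_stopped = (24 - stop_hour_friday) + 2 * 24 + start_hour_monday
    return hours_stopped / (7 * 24)

# Stopping Friday 19:00 -> Monday 07:00 idles the instance for 60 of 168 hours,
# i.e. roughly 36% of that instance's weekly compute bill.
print(f"{weekend_off_savings():.1%}")
```

In practice the stop/start itself would be driven by a scheduled Lambda function or cron job; the function above only quantifies the saving.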

Ensuring Cloud Security Compliance 

When any engineer in the organisation can provision a cloud resource through an API call, how can businesses ensure that every cloud resource is provisioned with the security configuration needed for their business and satisfies regulatory requirements like PCI-DSS, ISO 27001 and HIPAA? This requires real-time security compliance detection, informing the user who provisioned the resource, and taking actions such as shutting down non-compliant machines to keep the business protected. The most important part of security today is continuous monitoring, which is only achievable with a mechanism that detects and reports an issue the moment it occurs. Many organizations are developing tools that not only detect security vulnerabilities but auto-resolve them. By leveraging AIOps and using real-time configuration event data from cloud providers, companies can stay compliant and reduce their business risk.
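A minimal sketch of such a compliance detector, assuming the security group data has already been fetched (the dictionary shape loosely mirrors the EC2 DescribeSecurityGroups response, and the sensitive-port list is an illustrative assumption):

```python
SENSITIVE_PORTS = {22, 3306, 5432, 6379}  # SSH and common database ports (illustrative)

def find_open_rules(security_group: dict) -> list:
    """Return (port, cidr) pairs that expose sensitive ports to the whole internet."""
    violations = []
    for rule in security_group.get("IpPermissions", []):
        from_port = rule.get("FromPort")
        for ip_range in rule.get("IpRanges", []):
            cidr = ip_range.get("CidrIp", "")
            if cidr == "0.0.0.0/0" and from_port in SENSITIVE_PORTS:
                violations.append((from_port, cidr))
    return violations

group = {"GroupId": "sg-123", "IpPermissions": [
    {"FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    {"FromPort": 443, "ToPort": 443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
]}
print(find_open_rules(group))  # port 22 open to the world is flagged; 443 is not
```

A real system would run this continuously against configuration change events and notify or remediate when violations appear.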

Reduce Alert Fatigue

The problem of too many alerts is well known in the data center world, and is popularly called ops fatigue. The traditional division of labor, in which a NOC team watched alert emails, an IT support team reviewed tickets and responded, and engineers looked into the critical problems, broke down in the cloud world, where DevOps engineers handle all of these tasks.

Also, anybody who has managed production infrastructure, business services, applications or system architecture knows that most problems are caused by known events or identifiable patterns. Noisy alerts are the common denominator in IT operations management. With a swarm of alerts flooding inboxes, it becomes very difficult to tell which ones really matter and deserve an engineer's attention. A solution powered by anomaly detection can filter out unnecessary alerts and suppress duplicates, leaving a concise alert stream in which real issues can be detected and problems predicted.

Engineers already have an idea of what to do when certain events or symptoms occur in their application or production infrastructure. Yet when events or alerts are triggered, most current tools provide only a text description of what happened rather than the context of why it is happening. As a DevOps engineer, it is therefore important to create diagnostic scripts or programs that answer questions like: why did CPU spike? Why did an application go down? Why did API latency increase? Essentially, to get to the root cause faster, powered by intelligence. Teams should deploy anomaly detection powered by machine intelligence and smart automated actions (response mechanisms) for known events, with business logic embedded, so they can sleep peacefully and never sweat again.
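A simple suppression pass along these lines collapses repeated alerts for the same host and check within a time window (the field names and the five-minute window are hypothetical, not tied to any particular alerting tool):

```python
def suppress_duplicates(alerts, window_seconds=300):
    """Keep the first alert per (host, check) key; drop repeats inside the window."""
    last_seen = {}
    kept = []
    for alert in sorted(alerts, key=lambda a: a["ts"]):
        key = (alert["host"], alert["check"])
        if key not in last_seen or alert["ts"] - last_seen[key] >= window_seconds:
            kept.append(alert)
            last_seen[key] = alert["ts"]
    return kept

alerts = [
    {"ts": 0,   "host": "web-1", "check": "cpu"},
    {"ts": 60,  "host": "web-1", "check": "cpu"},   # duplicate within 5 min -> dropped
    {"ts": 400, "host": "web-1", "check": "cpu"},   # outside the window -> kept
    {"ts": 90,  "host": "db-1",  "check": "disk"},  # different key -> kept
]
print(len(suppress_duplicates(alerts)))  # 3
```

Even this naive deduplication cuts noise substantially; anomaly detection would go further by scoring which of the remaining alerts are genuinely unusual.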

Intelligent Automation For Operations

The engineers responsible for managing production operations (from the ITOps to the DevOps era) have long been frustrated with static tooling that is mostly unintelligent. With the rise of machine intelligence and the adoption of deep learning, we will see more dynamic tooling that helps with day-to-day operations. In the cloud world, the only magic wand for solving operational problems is code and automation. Operating your cloud infrastructure without intelligent automation only increases complexity for your DevOps teams. You can create everything from automated remediation actions to alert diagnostics, so as a DevOps engineer you should focus on using code as the mechanism for resolving problems. If you are building CI/CD today, you should deploy a trigger as part of your pipeline that monitors each deployment's health metrics and invokes a rollback if it detects performance or SLA issues. Simple remedies like this can save hours after every deployment and handle failures gracefully.
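The rollback trigger can be as simple as comparing post-deploy metrics against a pre-deploy baseline. This sketch shows only the decision logic; the thresholds are illustrative assumptions, and the actual rollback call would depend on your deployment tooling:

```python
def should_rollback(baseline: dict, current: dict,
                    max_error_increase: float = 0.02,
                    max_latency_ratio: float = 1.5) -> bool:
    """Decide whether a deployment breached its health SLA.

    baseline/current: {"error_rate": float, "p99_latency_ms": float}
    """
    error_regressed = current["error_rate"] - baseline["error_rate"] > max_error_increase
    latency_regressed = current["p99_latency_ms"] > baseline["p99_latency_ms"] * max_latency_ratio
    return error_regressed or latency_regressed

baseline = {"error_rate": 0.01, "p99_latency_ms": 200}
bad_deploy = {"error_rate": 0.08, "p99_latency_ms": 210}
print(should_rollback(baseline, bad_deploy))  # True: error rate jumped past the threshold
```

Wired into a pipeline, this check would run for a few minutes after each deploy and trigger the platform's rollback command when it returns True.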

We will also see various ITSM vendors bringing AI and ML into their offerings: intelligent monitoring (dynamic alerts instead of static thresholds), intelligent deployment (cluster management and auto-healing tooling), intelligent APM (not just what is happening but why), intelligent log management (real-time streaming of log events and automatic detection of relevant anomalies based on the application stack) and intelligent incident management (suppressing noise from different alerting systems and providing diagnostics so engineers get to the root cause faster).

The state of cloud platforms and ITSM offerings is evolving at a rapid pace, and we have yet to see the newer AI- and ML-powered concepts that will disrupt cloud operations and infrastructure management, ease the pain for engineers, and let them sleep peacefully instead of worrying every time a pager goes off!


Introducing Enterprise Budgeting – Every CFO's Success Formula in Cloud

Cost budgeting in a large company is an exhaustive process. A tremendous amount of detail and input goes into this iterative procedure: each senior team member brings a cost budget from his or her team, and the finance leader integrates them and then negotiates with the senior team members to get the numbers where they need to be. Budgeting is a collective process in which each operating unit or Cost Center prepares its own budget in conformity with company goals published by top management. Because cloud is highly scalable, teams often exceed their budget or lack clear visibility into projected spend, which leads to budget mismanagement and forces IT directors to re-evaluate budgets and seek Finance department approval. IT directors also often wish they could set budgets at a granular enough level to remove this uncertainty. This is where Botmetric's Budgeting can help you create a comprehensive budget model.

So, what is Enterprise Budgeting?

Botmetric's new 'Budgeting' feature, under Cost & Governance, will empower the financial leaders in your organization to set a budget and track it with seamless workflows and processes. The two inputs imperative to the budgeting process in a large enterprise are a detailed cost model for the entire payer account and a comprehensive cost model for each individual Cost Center based on linked account(s) and tags.

Who will benefit from Enterprise Budgeting?

Enterprise Budgeting is a powerful tool for senior professionals such as CFOs, CTOs, IT directors, heads of infrastructure and engineering, senior IT managers, and more.

Which Botmetric subscription plans have access to Budgeting?

Currently, we are enabling the Budgeting feature for Professional, Premium and Enterprise plans only, on a request basis.

Botmetric Workflows Used in Budgeting:

The following workflows can be assigned to people using Budgeting:

  • User: User workflows with write permission will be allowed to only set the budget which will then be sent to a financial admin for approval.
  • Admin: Admin workflows/roles can provide the user with read and write access to budgeting. An admin can set the budget but only a financial admin can approve it.
  • Financial Admin: A Botmetric admin can also be a financial admin whose role will be to define the budget goal in Budgeting and approve the budget set by other users. By default, the owner of a Botmetric account will also be a financial admin.  


Understanding Botmetric’s New Smart Cost Center

A Cost Center can be a department or any business unit in the company whose performance is evaluated by comparing budgeted to actual costs. Previously, Botmetric allowed you to create a Cost Center using tag keys like 'owner', 'customer', 'role', 'team', etc. Now, to support more extensive budgeting requirements, Cost Centers in Botmetric can be defined in two ways: based on tag keys alone, or based on accounts and associated tag key-value pairs.

  • Based on Tag-Key

Here, you can choose the tag key that corresponds to your cost centers. Based on the chosen tag key, Botmetric will create a cost center for each of its tag values.


  • Based on account(s) or combination of multiple account(s) and tags

You can also create Cost Centers based on account(s) and customize them with multiple groupings of tag keys. For example, you can create a cost center group such as account1->team1->role1.


Let's say you have different nomenclature for the same tag key, such as user:TEAM, user:team and user:Team; you can multi-select these tags to get complete clarity on your cost center group.


Please note that you can only choose one option at a time: you cannot have some cost centers created from tags and others from account-and-tag combinations.

How to set, track and monitor the budget?

  1. Allocate & Review

  •  Budget Goal:

Botmetric Budgeting enables the financial leader to define a budget goal for the entire payer account based on their estimates for the financial year. You can either enter the budget inputs manually or use Botmetric's estimate to populate the budget inputs across months, quarters and the year. Botmetric looks at the last 12 months of data for yearly budget tracking.
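One way such an estimate can work, sketched here as an illustration of the idea rather than Botmetric's actual algorithm, is to distribute the yearly budget goal across months in proportion to the last 12 months of spend:

```python
def monthly_budget(yearly_goal: float, last_12_months_spend: list) -> list:
    """Split a yearly budget goal across 12 months, weighted by historical spend."""
    total = sum(last_12_months_spend)
    return [round(yearly_goal * month / total, 2) for month in last_12_months_spend]

# Hypothetical spend history in $k, oldest month first.
history = [80, 80, 90, 100, 100, 110, 120, 120, 130, 140, 160, 170]
budget = monthly_budget(1500, history)
print(budget[0], budget[-1])  # early months get a smaller share of the 1500 goal
```

A weighted split like this preserves seasonality in the budget instead of dividing the goal into twelve equal slices.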

Depending on your company size, it can take up to 72 hours to enable, process and crunch your data.

  • Assigning Budget to Individual Cost Centers:

Individual Cost Center owner(s) or financial admin(s) can set or edit budget goals for their respective units, either entering the budget inputs manually or using Botmetric's estimate to populate them across months, quarters and the year. If a non-financial admin or user creates the budget for their Cost Center, it is sent to a financial admin for approval. The new Budgeting roles provide a clear demarcation between users and financial admin(s): financial admin(s) retain control over budget approval while other roles have enough flexibility to manage their Cost Centers effectively.


  2. Budget Overview

Botmetric's Budgeting Overview provides a summarised view where you can see a snapshot of your financial-year performance at the payer account level. You can compare actual, allocated and projected spend for the current month, current quarter and financial year. You can also see a list of top-spending Cost Centers for the current month and current quarter. Moreover, a trend graph comparing your actual, allocated and budgeted spend at the payer account level across 12 months and 4 quarters helps you evaluate your budget at a glance.


  3. Cost Center View

Botmetric's Cost Center Overview provides a comprehensive view to track the performance of each Cost Center. Fine-grained resource and service details give a deeper, instantaneous understanding of where a Cost Center is incurring more cost. The ability to switch between monthly, quarterly and yearly views lets you understand budget variance over time. Each Cost Center is evaluated to determine whether its incurred cost is within the allocated budget or has exceeded the defined limit.

Moreover, each Cost Center has a corresponding budget trend graph showing the comparison between actual, allocated and estimated spend. If you have a long list of Cost Centers in your cloud, the search bar helps you quickly find the one you want.

Botmetric's Enterprise Budgeting empowers IT budget owners to define and track budgets at a granular level. It also streamlines budget processes in your organization and brings composure to the chaotic world of budget-goal setting. Sign up for a 14-day free trial and see how it can help your organisation save on cloud costs.

Botmetric Brings Unique Click to Fix for AWS Security Group Audit

In today’s day and age, deploying solutions and products on the cloud has become the new norm. However, managing your cloud infrastructure, implementing critical cloud security controls, and preventing vulnerabilities can become quite challenging.

Security & Compliance

Botmetric's Security & Compliance simplifies the process of discovering and rectifying threats and shortcomings in your AWS infrastructure by providing a comprehensive set of audits and recommendations, saving a lot of time and making tasks like eliminating unused Security Groups easy.

Botmetric's Security & Compliance instills a culture of continuous security and DevSecOps by automating industry best practices for cloud compliance and security. For an AWS user, this simplifies the process of discovering and rectifying threats.

Remediation of Security Threats with Botmetric

At Botmetric, we believe in simplifying cloud management for our customers. To that end, we provide the 'click to fix' feature for many of our Security & Compliance audits. It lets users implement the best practices recommended by Botmetric with the click of a button, saving time and effort while eliminating the possibility of human error. Moreover, rather than fixing each resource manually, Botmetric allows you to select multiple resources and fix them all at once.

Click to Fix Security Group Audit

To help our users easily secure their cloud, we have recently added the 'click to fix' feature to all Botmetric security group audits.

Why Botmetric Built Click to Fix for AWS Security Group Audits?

Security groups in AWS provide an efficient way to control access to resources on your network. The rules you define in security groups should be scrutinized, for the simple reason that you could end up granting wide-open access and increasing the risk of a security breach. The security group audits provided by Botmetric discover issues such as security groups with TCP/UDP ports open to the public, servers open to the public, port ranges open to the public, and so on. These are serious security loopholes that could leave your cloud open to malicious attacks.

Botmetric’s ‘click to fix’ feature for AWS security group audits deletes the vulnerable security group rule, thereby securing access to your cloud resources and protecting your cloud infrastructure.


List of AWS Security Group Audits provided by Botmetric

  • Database Ports

Protecting database ports is crucial, as you would not want access leaks or publicly exposed database ports. Botmetric scans for database ports open to the public, to specific IPs and to private subnets; securing these keeps your databases safe within a security group.

  • Server Ports

Server ports are essential to audit, as many security issues and vulnerabilities have been caused by exposed server ports. Botmetric secures server ports open to the public, to specific IPs and to private subnets.

  • TCP UDP  and ICMP Ports

Everything we do on the internet relies on these protocols; here, Botmetric secures open TCP, UDP and ICMP ports exposed to both the public and specific IPs.

A few more Security Group controls, such as All Traffic and Port Range, are also covered by the audits.

How to Enable Click to Fix for AWS Security Group Audits?

To use click to fix for security group audits, please ensure that you have added the "ec2:RevokeSecurityGroupIngress" permission to the policy of the role whose ARN is configured for Security & Compliance.
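A minimal IAM policy statement granting just that permission might look like the following (the Sid and the wildcard Resource are illustrative; you may want to scope the Resource more tightly for your environment):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowBotmetricClickToFix",
      "Effect": "Allow",
      "Action": "ec2:RevokeSecurityGroupIngress",
      "Resource": "*"
    }
  ]
}
```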

The Bottom line:

At Botmetric, we will continue to add more AWS cloud security and compliance features, and we will soon publish a detailed post on the click to fix feature for several key AWS security audits. Until then, stay tuned.

To explore this newly launched feature, take up a 14-day trial. If you have any questions about AWS security or AWS security best practices, drop a line in the comment section below or tweet to us at @BotmetricHQ.

5 Point Guide to AWS DR Automation

Disaster Recovery is the procedure for recovering technology infrastructure and systems following a disaster. There are two types of disasters:

  • Natural – natural calamities such as floods, tornadoes and earthquakes.
  • Man-Made – disasters caused by human negligence or error, such as infrastructure failure, IT bugs and cyber-terrorism.

In either case, it is not enough to have backups; backups should be copied across multiple regions and multiple accounts.

Here is a 5-point guide for AWS DR automation:

Type of Backups

There are three major levels of recovery that organizations should consider while designing their recovery solution:

  • File Level Recovery – from files stored in S3.
  • Volume Level Recovery – from EBS snapshots.
  • Database Level Recovery – from DB snapshots.
For every AWS infrastructure, there are several kinds of resources that need to be backed up for DR purposes:

  • EC2 Instance Backups (EC2 AMIs)
  • EBS Volume Backups (Snapshots)
  • RDS DB Backups (DB Snapshots)
  • Elasticache DB Cluster Backups (Elasticache Snapshots)
  • Redshift DB Cluster Backups (Redshift Snapshots)
  • Route53 Hosted Zone Backups (S3 Copy of Hosted Zone Files)
  • CloudFormation Template Backups (CloudFormation Templates)

Critical vs Less Critical vs Non-Critical

Depending on the systems and their potential impact on the business, we can classify backup strategies into three types:

  • Most Critical Systems – Frequency: 1 hour. Retention: 1 year.
  • Less Critical Systems – Frequency: 1 day. Retention: 180 days.
  • Non-Critical Systems – Frequency: 1 week. Retention: 4 weeks; back up manually if required.

Automated vs Manual backups

In a dynamic cloud environment with a wide range of services, it is extremely difficult to manage resources and deal with the continuous changes beneath them. For example, if an organization has hundreds of instances of different types playing different roles, it becomes impossible to create and monitor backups manually. With automation, you just need to tag every instance with its role, and individual backup policies can then be created per role. Let's say you have the following set of instances:
Tag                 Instance Count   Backup Policy
ENV/DEVELOPMENT     30               Once a week
ENV/MONITORING      5                Once a month
ENV/PRODUCTION      60               Every 4 hours
ENV/OTHERS          5                Not required (manual)

In the example shown above, automation is a clear winner over manual backups.
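The tag-to-policy mapping above reduces to a small lookup that a backup scheduler can apply to every instance (the table values are the same hypothetical ones used above):

```python
BACKUP_POLICIES = {
    "ENV/DEVELOPMENT": "once a week",
    "ENV/MONITORING": "once a month",
    "ENV/PRODUCTION": "every 4 hours",
    "ENV/OTHERS": None,  # not required; back up manually if needed
}

def backup_policy(instance_tags: dict):
    """Resolve the backup policy for an instance from its ENV tag."""
    key = f"ENV/{instance_tags.get('ENV', 'OTHERS')}"
    return BACKUP_POLICIES.get(key)

print(backup_policy({"ENV": "PRODUCTION"}))  # every 4 hours
```

New instances pick up the right policy the moment they are tagged, which is exactly what makes the tag-driven approach scale to hundreds of instances.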

Cost Optimized backups

Organizations should have strategies to clean up old backups that are no longer required; this can drastically reduce AWS infrastructure cost. AWS also limits the number of backups that can be created in an account; for example, the EBS snapshot limit is 10,000.

A cost-optimized DR strategy is therefore required to keep the number of backups bounded. In Botmetric backup jobs, the "Snapshots to retain" parameter caps the number of snapshots kept per volume, and "AMIs to retain" similarly caps the number of AMIs kept per instance.

Let us understand this with an example. If "Snapshots to retain" is 180 and the job runs once a day, snapshots covering the past 180 days (about 6 months) are kept. If "Snapshots to retain" is 360 and the job runs twice a day, backups still cover the past 180 days, but with two snapshots per volume per day.

Note: as a safety measure, Botmetric keeps "Snapshots to retain" + 1.
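The relationship between the retention parameter, job frequency and coverage is simple arithmetic, sketched here with the +1 safety margin described above:

```python
def retention_days(snapshots_to_retain: int, runs_per_day: int) -> float:
    """How many days of history a per-volume snapshot quota covers."""
    return snapshots_to_retain / runs_per_day

def snapshots_kept(snapshots_to_retain: int) -> int:
    """One extra snapshot is kept as a safety margin."""
    return snapshots_to_retain + 1

print(retention_days(180, 1))  # 180.0 days (~6 months) at one run per day
print(retention_days(360, 2))  # still 180.0 days, but two snapshots per day
```

Doubling the job frequency without doubling the retention quota halves the coverage window, so the two parameters should always be tuned together.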

DR Automation for Various AWS Resources

Depending on your AWS infrastructure and DR strategy, backups can be taken across regions and across accounts. In Botmetric, we have a wide variety of jobs for various services:

  • Create EC2 AMI based on EC2 instance tags
  • Copy EC2 AMI based on EC2 instance tags across regions
  • Copy EC2 AMI based on EC2 AMI tags across regions
  • Copy EC2 AMI based on EC2 instance tags across accounts
  • Create EBS snapshot based on EBS volume tags
  • Create EBS snapshot based on EC2 instance tags
  • Create EBS snapshot based on EC2 instance IDs
  • Copy EBS snapshot based on EBS volume tags across regions
  • Copy EBS snapshot based on EC2 instance tags across regions
  • Copy EBS snapshot based on EBS volume tags across accounts
  • Create RDS snapshot based on DB instance tags
  • Copy RDS snapshot based on DB instance tags across regions
  • Create Redshift snapshot based on Redshift cluster tags
  • Create Route53 hosted zone backups

In addition, for cleaning up old backups, we have "De-register old EC2 AMIs" and "Delete old EBS snapshots" jobs.


In today's ever-changing cloud environment, the drive for continuous availability, robustness, scalability and dynamism has spawned the rise of 'Backup as a Service' (BaaS). With AWS DR automation and smart strategies, you can make your business 'disaster-free'. Read about the do's and don'ts of a DR automation strategy.

Botmetric is an intelligent cloud management platform designed to make cloud easy for engineers. Sign up now to see how Botmetric can help you with your disaster recovery planning.


5 Point Guide For Today’s CFO to AWS Cloud Cost Management

We've seen a sharp increase in the use of cloud infrastructure over the last couple of years. Cloud providers like AWS, Azure and Google offer a range of useful services and various pricing structures with added options for saving costs. Because of this, enterprises have the elasticity to scale their existing IT infrastructure to match performance and workload SLA requirements. Whether it's for enterprise applications, testing and development, data analysis or building ecommerce platforms, companies have a number of choices for costing options and for the specific services that best suit their work.

However, cloud costs can quickly increase without governance processes in place: team members can spin up infrastructure at will, and with so many features and services available, avoidable and unnecessary bills pile up quickly if companies don't optimize their spending. Without an adequate understanding of enterprise cloud spending and IT usage, most companies end up with a bill significantly higher than it needs to be. Although selling more products and services allows for bigger profits, for now we'll focus on reducing the costs associated with managing and operating a cloud infrastructure.

Understanding the various usage and cost structures

Although it may seem that every cloud infrastructure company offers unique pricing options, there are similarities and generalized cost classifications, such as user licensing, resource-by-the-hour (offered by almost all IaaS models) and all-inclusive site licenses. But even resource and user licensing have a plethora of tiers, such as small vs. large virtual machines or a specific-functionality license vs. a full-access license. It's important for companies to figure out early on which tier suits them best now, and which is most likely to suit them later as the business continues to grow.

Maximizing cloud efficiency with multi-platform environments

Reducing cloud costs can also be accomplished by using just the right networks, servers and storage to handle your particular application workloads. Managed platforms like AWS Elastic Beanstalk are ideal for this type of work, as they can automatically scale workloads up or down based on your scaling parameters, whether application metrics like usage, traffic or visitors, or system metrics like CPU, memory and network. This workload-specific approach allows specific tasks to run significantly better and faster. As requests are assigned and handled automatically, you can focus on the most pressing tasks, maximize efficiency through auto-scaling and, in turn, reduce operating costs.

AWS and Reserved Instances

Since Amazon Web Services currently dominates the marketplace, most CFOs are looking for new ways to optimize their AWS usage to tip the scale on cost and profit margins. Reserved Instances are essentially discounts that companies get in exchange for an upfront commitment: they have a lower cost of usage per hour, but they only pay off if the instances are used consistently. With AWS, Reserved Instances can even be re-purposed to suit different workloads in your business without penalty.
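The "used consistently" caveat can be made concrete with a quick break-even calculation. The rates below are made-up illustrations, not actual AWS prices:

```python
def ri_breakeven_utilization(on_demand_hourly: float, ri_effective_hourly: float) -> float:
    """Minimum fraction of hours an instance must run for an RI to beat on-demand.

    The RI is billed for every hour of the term; on-demand is billed
    only for the hours actually used.
    """
    return ri_effective_hourly / on_demand_hourly

# Hypothetical rates: $0.10/hr on-demand vs. $0.062/hr effective RI rate.
breakeven = ri_breakeven_utilization(0.10, 0.062)
print(f"RI pays off above {breakeven:.0%} utilization")
```

Running this kind of check against actual utilization data is how a finance team can decide which workloads genuinely deserve an RI commitment.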

AWS and Spot Instances

The single most overlooked feature that truly differentiates AWS from many cloud infrastructure solutions is Spot Instances and the Spot Market. These represent spare capacity, usually available at rather large discounts and priced through an auction-based model. They are best used once the company has determined exactly what kind of task it needs to execute, then simply running it on a Spot Instance. Using the AWS APIs, you can automate the procurement and use of Spot Instances for enterprise batch workloads and data cleaning workloads, or even use them as part of an auto-scaling strategy for workloads that can tolerate instance failures.

Get the most out of AWS

The average AWS instance uses only around 30% of its CPU, based on thousands of instances analyzed by Botmetric. This means companies have two-thirds of their computing power sitting idle. Categorizing workloads as either memory- or CPU-intensive is one of the first steps in utilizing instances effectively. Once you know your company's utilization patterns, identifying the instance types that push utilization higher becomes easy. Don't worry about pushing the limits of AWS utilization: even if hardware fails, all you have to do is provision another instance through the AWS console or, via automation, the APIs.
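A first-pass rightsizing check can be sketched like this; the 40% threshold and the one-size-down mapping are illustrative assumptions, not AWS or Botmetric recommendations:

```python
DOWNSIZE = {"m4.2xlarge": "m4.xlarge", "m4.xlarge": "m4.large"}  # illustrative mapping

def rightsize(instance_type: str, avg_cpu_percent: float, threshold: float = 40.0) -> str:
    """Suggest one size down when average CPU sits well below the threshold."""
    if avg_cpu_percent < threshold and instance_type in DOWNSIZE:
        return DOWNSIZE[instance_type]
    return instance_type  # already right-sized, or no smaller size known

print(rightsize("m4.2xlarge", 30.0))  # an instance idling at 30% CPU drops a size
print(rightsize("m4.xlarge", 75.0))   # a busy instance keeps its current size
```

A production version would also weigh memory, network and burst behavior before recommending a change, since CPU alone can understate an instance's real needs.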

What most business leaders and IT owners fail to realize is that most cloud providers, including AWS, offer incremental discounts proportionate to increased usage; these volume discounts are available for compute, storage, network bandwidth and more. In other words, the more you use, the bigger the discount. These can be used in a myriad of ways and combined with existing discounts for an even larger savings margin. AWS also offers an Enterprise Discount Program for large customers that spend over $1 million per annum on their cloud.

Are you a CFO looking for complete cloud cost control and governance on a single platform? Then log on to Botmetric now!

AWS Comes with 61st Price Reduction, EC2 RIs & M4 Prices Slashed

AWS patrons using EC2 RIs and the M4 instance type, rejoice! AWS has come out with yet another price reduction; this time, EC2 RI and M4 prices have been slashed.

AWS has been phenomenal in offering public cloud. Today, AWS offers a plethora of services that cater to various business needs and workload types. With its revolutionary pay-as-you-go model, AWS empowered agility in businesses. Now, with further price reductions, it's icing on the cake for businesses running on EC2 RIs and M4 instances.

In the words of Jeff Barr in one of his recent blogs, “Our customers use multiple strategies to purchase and manage their Reserved Instances. Some prefer to make an upfront payment and earn a bigger discount; some prefer to pay nothing upfront and get a smaller (yet still substantial) discount. In the middle, others are happiest with a partial upfront payment and a discount that falls in between the two other options. In order to meet this wide range of preferences we are adding 3 Year No Upfront Standard Reserved Instances for most of the current generation instance types. We are also reducing prices for No Upfront Reserved Instances, Convertible Reserved Instances, and General Purpose M4 instances (both On-Demand and Reserved Instances). This is our 61st AWS Price Reduction.”
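The trade-off Jeff Barr describes can be made concrete with a quick effective-hourly-rate calculation (amortized upfront plus recurring hourly); all dollar figures below are hypothetical, not actual RI quotes:

```shell
#!/bin/sh
# Effective hourly rate of an RI = upfront spread over the term + recurring hourly.
# The prices used here are placeholders for illustration only.
effective_hourly() {
  upfront=$1; hourly=$2; term_hours=$3
  awk -v u="$upfront" -v h="$hourly" -v t="$term_hours" \
      'BEGIN { printf "%.4f\n", u / t + h }'
}

# A 1-year term is 8760 hours. Compare a partial-upfront RI to On-Demand:
effective_hourly 300 0.03 8760   # partial upfront: $300 down + $0.03/hr
effective_hourly 0 0.10 8760     # On-Demand at $0.10/hr, for comparison
```

Running the same arithmetic against real quotes for your instance type and region shows which payment option wins for your utilization level.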

All You Need To Know About Price Reductions of EC2 RIs and M4

No Upfront Option for 3 Year Standard RIs

Earlier, AWS offered a No Upfront payment option only for the 1-year term of Standard RIs. Henceforth, there will be a No Upfront payment option for the 3-year term as well, for C4, M4, R4, I3, P2, X1, and T2 Standard RIs.

~17% Price Reductions for No Upfront Reserved Instances

No Upfront 1-Year Standard and 3-Year Convertible RIs for C4, M4, R4, I3, P2, X1, and T2 instance types will now cost up to 17% less, depending on instance type, operating system, and region. Refer to the table below for the average reductions for No Upfront Reserved Instances for Linux in several representative regions:

EC2 RI prices slashed. Image source: https://aws.amazon.com/blogs/aws/category/price-reduction/

~21% Reduced Prices for Convertible Reserved Instances

AWS is also reducing prices for 3-Year Convertible Reserved Instances by up to 21% for most current-generation instances (C4, M4, R4, I3, P2, X1, and T2). Refer to the table below for the average reductions for Convertible Reserved Instances for Linux in several representative regions:

~21% reduced prices for Convertible Reserved Instances. Image source: https://aws.amazon.com/blogs/aws/category/price-reduction/

According to AWS, similar reductions will go into effect for nearly all other regions too.

~ 7% Reduced Price for M4 Instances

M4 Linux instance prices are now 7% lower. M4 has been one of the most popular of the new-generation instance types.

Visit the EC2 Reserved Instance Pricing Page and the EC2 Pricing Page, or consult the AWS Price List API for all of the new prices.

The Wrap-Up

Cost modeling, budget reduction, and cost optimization are among the topmost considerations for businesses irrespective of size. Whether for an enterprise with a 100+ instance footprint or a small start-up with fewer than 10 employees, this price reduction is great news.

Share your views with us: drop a line in the comment section below or give us a shout-out @BotmetricHQ. If you want to holistically reduce your AWS bill, try Botmetric Cost & Governance. You will get data-driven cost management to monitor AWS finances and make wise decisions to maximize your AWS cloud ROI.

Get Started Now!

The Ultimate Cheat Sheet On Deployment Automation Using AWS S3, CodeDeploy & Jenkins

A 2016 State of DevOps Report indicates that high-performing organizations deploy 200 times more frequently, with 2,555 times faster lead times, recover 24 times faster, and have three times lower change failure rates. Irrespective of whether your app is greenfield, brownfield, or legacy, high performance is possible due to lean management, Continuous Integration (CI), and Continuous Delivery (CD) practices that create the conditions for delivering value faster, sustainably.

And with AWS Auto Scaling, you can maintain application availability and scale your Amazon EC2 capacity up or down automatically according to conditions you define. Moreover, Auto Scaling allows you to run your desired number of healthy Amazon EC2 instances across multiple Availability Zones (AZs).

Additionally, Auto Scaling can also automatically increase the number of Amazon EC2 instances during demand spikes to maintain performance and decrease capacity during less busy periods to optimize costs.
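As a sketch, the scale-up/scale-down behavior described above can be driven by a target-tracking policy; the 50% CPU target below is an arbitrary example:

```json
{
  "TargetValue": 50.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ASGAverageCPUUtilization"
  }
}
```

Saved to a file, this can be attached with aws autoscaling put-scaling-policy --policy-type TargetTrackingScaling --target-tracking-configuration file://cpu50.json, after which the group adds instances when average CPU runs above the target and removes them when it runs below, within the group's min/max bounds.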

Optimize your cloud spend and performance from a single console

The Scenario

We have an application, www.xyz.com, whose web servers run on Amazon Web Services (AWS). As part of the architecture, the servers use the AWS Auto Scaling service, which scales them according to the metrics and policies we specify. Every time a new feature is developed, we have to manually run the test cases before the code gets integrated and deployed, and then pull the latest code to all the environment servers. Doing this manually poses several challenges.

The Challenges

The challenges of manually running the test cases before the code gets integrated and deployed are:

  1. Pulling and pushing code for deployment from a centralized repository
  2. Manually running test cases and pulling the latest code on all the servers
  3. Deploying code on new instances configured in AWS Auto Scaling
  4. Pulling the latest code on one server, taking an image of that server, and reconfiguring it with AWS Auto Scaling, since the servers are auto scaled
  5. Deploying builds automatically on instances in a timely manner
  6. Reverting to a previous build

The above challenges require a lot of time and human resources, so we need a technique that saves time and makes life easy by automating the entire process from CI to CD.

Here’s a complete guide on how to automate app deployment using AWS S3, CodeDeploy, Jenkins & Code Commit.

To that end, we're going to use AWS S3, CodeDeploy, Jenkins, and AWS CodeCommit.

Now, let's walk through the flow, how it works, and its advantages before we implement it all. When new code is pushed to a particular Git repo/AWS CodeCommit branch:

  1. Jenkins runs the test cases (Jenkins listens to a particular branch through Git web hooks).
  2. If the test cases fail, it notifies us and stops any further post-build actions.
  3. If the test cases pass, it proceeds to the post-build action and triggers AWS CodeDeploy.
  4. Jenkins pushes the latest code, as a zip file, to AWS S3 on the account we specify.
  5. AWS CodeDeploy pulls the zip file onto all the auto-scaled servers that have been specified.
  6. For the Auto Scaling servers, we can choose an AMI that has the AWS CodeDeploy agent installed by default. The agent helps new instances launch faster and pull the latest revision automatically.
  7. Once the latest code is copied to the application folder, it runs the test cases again.
  8. If the test cases fail, it rolls the deployment back to the previous successful revision.
  9. If they pass, it runs post-deployment build commands on the server and ensures the latest deployment does not fail.
  10. If we want to go back to a previous revision, we can also roll back easily.
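The test-then-deploy gate in steps 1-4 can be sketched as a small shell wrapper: run the test command, and hand off to CodeDeploy only on success (deploy_if_green and the stubbed deployment call are illustrative names, not part of the original setup):

```shell
#!/bin/sh
# Minimal CI gate sketch: deploy only if the test command succeeds.
# The real deployment call (aws deploy create-deployment ...) is left
# commented out because it needs credentials and a real application.
deploy_if_green() {
  if "$@"; then
    echo "tests passed: triggering CodeDeploy"
    # aws deploy create-deployment --application-name xyz.com ...
  else
    echo "tests failed: stopping the pipeline" >&2
    return 1
  fi
}

deploy_if_green true    # "true" stands in for a passing test suite
```

Jenkins implements exactly this branching for us through its build status and post-build actions, which is why a failing Job 1 never reaches Job 2.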

This kind of automation using CI and CD strategies makes application deployment smooth, fault tolerant, and fast.

Smart Deployment Automation: Using AWS S3, CodeDeploy, Jenkins & CodeCommit

The Workflow:

Here are the workflow steps of the above architecture:

  1. The application code, along with the Appspec.yml file, is pushed to AWS CodeCommit. The Appspec.yml file specifies the script paths and commands that AWS CodeDeploy needs to run the application successfully.
  2. As the application and Appspec.yml file get committed to AWS CodeCommit, Jenkins is automatically triggered by the Poll SCM function.
  3. Jenkins then pulls the code from AWS CodeCommit into its workspace (the path in Jenkins where all artifacts are placed), archives it, and pushes it to the AWS S3 bucket. This can be considered Job 1.

Here’s the Build Pipeline

A Jenkins pipeline (previously called a workflow) defines the job flow in a specific order. Building a pipeline means breaking one big job into small individual jobs: if the first job fails, Jenkins emails the admin and stops the build process at that step, without moving on to the second job.

To achieve the pipeline, install the Pipeline plugin in Jenkins.

According to the above scenario, the work will be broken into three individual jobs:

  • Job 1: When a commit lands in CodeCommit, Job 1 runs: it pulls the latest code from the CodeCommit repository, archives the artifact, and emails the status of Job 1 (build successful or failed) along with the console output. If Job 1 builds successfully, it triggers Job 2.
  • Job 2: This job runs only when Job 1 is stable and has run successfully. In Job 2, the artifacts from Job 1 are copied into its workspace and pushed to the AWS S3 bucket. Once the artifacts have been sent to the S3 bucket, an email is sent to the admin and Job 3 is triggered.
  • Job 3: This job is responsible for invoking AWS CodeDeploy, pulling the code from S3, and pushing it to either a running EC2 instance or AWS Auto Scaling instances.

The below image shows the structure of the pipeline.

Smart Deployment Automation: Using AWS S3, CodeDeploy, Jenkins & CodeCommit


  1. If Job 1 executes successfully, it triggers Job 2, which is responsible for pushing the successful build of the code to the S3 bucket and then triggering Job 3. If Job 2 fails, an email is again triggered with a job-failure message.
  2. When Job 3 is triggered, the archive (application code along with Appspec.yml) is pushed to the AWS CodeDeploy deployment service, where the CodeDeploy agent running on each instance executes the Appspec.yml file, bringing the application up and running.
  3. If a job fails at any point, the application is redeployed with the previous build.

Below are the five steps necessary for deployment automation using AWS S3, CodeDeploy, Jenkins & CodeCommit.

Step 1: Set Up AWS CodeCommit in Development Environment

Create an AWS CodeCommit repository:

1. Open the AWS CodeCommit console at https://console.aws.amazon.com/codecommit.

2. On the welcome page, choose Get Started Now. (If a Dashboard page appears instead of the welcome page, choose Create new repository.)


3. On the Create new repository page, in the Repository name box, type xyz.com

4. In the Description box, type Application repository of http://www.xyz.com


5. Choose Create repository to create an empty AWS CodeCommit repository named xyz.com

Create a Local Repo

In this step, we will set up a local repo on our local machine to connect to our repository. To do this, we will select a directory on our local machine that will represent the local repo. We will use Git to clone and initialize a copy of our empty AWS CodeCommit repository inside of that directory. Then we will specify the username and email address that will be used to annotate your commits. Here’s how you can create a Local Repo:

1. Generate SSH keys on your local machine by running ssh-keygen, without a passphrase.

Smart Deployment Automation: Using AWS S3, Codedeploy, Jenkins and Code Commit

2. Run cat ~/.ssh/id_rsa.pub and paste the output into IAM User -> Security Credentials -> Upload SSH Keys. Note down the SSH key ID.

$ cat ~/.ssh/id_rsa.pub

Copy this value. It will look similar to the following:

Smart Deployment Automation: Using AWS S3, Codedeploy, Jenkins and Code Commit


3. Click on Create Access Keys and download the credentials containing the access key and secret key.

4. Set the environment variables at the end of the bashrc file:

# vi /etc/bashrc

export AWS_ACCESS_KEY_ID=AKIAINTxxxxxxxxxxxSAQ
export AWS_SECRET_ACCESS_KEY=9oqM2L2YbxxxxxxxxxxxxzSDFVA

5. Set up the config file inside the .ssh folder:

# vi ~/.ssh/config

Host git-codecommit.us-east-1.amazonaws.com
  User APKAxxxxxxxxxxT5RDFGV
  IdentityFile ~/.ssh/id_rsa    # private key

# chmod 400 ~/.ssh/config

6. Configure the global username and email:

# git config --global user.name "username"

# git config --global user.email "emailID"

7. Copy the SSH URL to use when connecting to the repository, and clone it:

# git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/xyz.com

8. Now put the application code inside the cloned directory, write the Appspec.yml file, and you are ready to push.

Smart Deployment Automation: Using AWS S3, Codedeploy, Jenkins and Code Commit

9. install_dependencies.sh includes:


yum groupinstall -y "PHP Support"

yum install -y php-mysql  

yum install -y httpd

yum install -y php-fpm  

start_server.sh includes:


service httpd start  

service php-fpm start

stop_server.sh includes:

isExistApp=`pgrep httpd`
if [[ -n $isExistApp ]]; then
  service httpd stop
fi

isExistApp=`pgrep php-fpm`
if [[ -n $isExistApp ]]; then
  service php-fpm stop
fi


Appspec.yml includes:

version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/xyz.com
hooks:
  BeforeInstall:
    - location: .scripts/install_dependencies.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: .scripts/start_server.sh
      timeout: 300
      runas: root
  ApplicationStop:
    - location: .scripts/stop_server.sh
      timeout: 300
      runas: root

Now push the code to CodeCommit:

# git add .

# git commit -m "1st push"

# git push

10. Now we can see that the code has been pushed to CodeCommit.

Step 2: Setting Up Jenkins Server in EC2 Instance

1. Launch an EC2 instance (CentOS 7/RHEL 7) and perform the following operations:

# yum update -y

# yum install java-1.8.0-openjdk


2. Verify the Java installation and add the Jenkins repository:

# java -version

# wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo

# rpm --import http://pkg.jenkins-ci.org/redhat/jenkins-ci.org.key

3. Install Jenkins:

# yum install jenkins

4. Add Jenkins to system boot:

# chkconfig jenkins on

5. Start Jenkins:

# service jenkins start

6. By default, Jenkins starts on port 8080. This can be verified via:

# netstat -tnlp | grep 8080

7. Go to a browser and navigate to http://<jenkins-server-ip>:8080. You will see the Jenkins dashboard.

Smart Deployment Automation: Using AWS S3, Codedeploy, Jenkins and Code Commit

8. Configure the Jenkins username and password, and install the AWS- and Git-related plugins.

Here's how to set up a Jenkins pipeline job:

Under Source Code Management, click on Git.

Pass the Git SSH URL; under Credentials, click Add, and for Kind select "SSH username with private key".

Note that the username will be the same as the one in the config file of the development machine where the repo was initiated; we also have to copy the private key from the development machine and paste it here.

Smart Deployment Automation: Using AWS S3, Codedeploy, Jenkins and Code Commit

Under Build Triggers, click on Poll SCM and specify the schedule (Jenkins cron syntax; e.g., H/5 * * * * polls every five minutes) for when builds should start.

Smart Deployment Automation: Using AWS S3, Codedeploy, Jenkins and Code Commit

For the post-build actions, we archive the files and provide the name of Job 2; if Job 1 builds successfully, it should also trigger the email.

Smart Deployment Automation: Using AWS S3, Codedeploy, Jenkins and Code Commit

For the time being, we can start building the job to verify that, when code is committed, Jenkins starts building automatically and is able to pull the code into its workspace folder. Before that, however, we have to create an S3 bucket and pass credentials (access key and secret key) into Jenkins, so that when Jenkins pulls code from AWS CodeCommit it can push the archived build to the S3 bucket.

Step 3: Create S3 Bucket

Create the S3 bucket. After creating it, provide its details in Jenkins along with the AWS credentials.

Now when we run Job 1 in Jenkins, it pulls the code from AWS CodeCommit and, after archiving it, keeps it in the workspace folder of Job 1.

AWS CodeCommit Console Output


From the above console output, we can see that Jenkins pulls the code from AWS CodeCommit, archives it, and triggers the email. After that, it calls the next job, Job 2.

Console Output

The above image shows that after Job 2 builds, Job 3 is also triggered. Before triggering Job 3, however, we need to set up the AWS CodeDeploy environment.

Step 4: Launch the AWS CodeDeploy Application

Creating IAM Roles

Create an IAM instance profile and attach the AmazonEC2FullAccess policy, along with an inline policy granting the S3 read access that the CodeDeploy agent needs, for example:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}




Create a service role CodeDeployServiceRole. Select Role type AWS CodeDeploy. Attach the Policy AWSCodeDeployRole as shown in the below screenshots:

Create an auto scaling group for a scalable environment.

Here are the steps:

1. Choose an AMI and select an instance type for it. Attach the IAM instance profile that we created in the earlier step.


2. Now go to Advanced Settings and type the following commands in the "User Data" field to install the AWS CodeDeploy agent on your machine (if it's not already installed on your AMI):


#!/bin/bash
yum -y update
yum install -y ruby
yum install -y aws-cli
aws s3 cp s3://aws-codedeploy-us-east-1/latest/install . --region us-east-1
chmod +x ./install
./install auto

3. Select a security group in the next step and create the launch configuration for the Auto Scaling group. Then, using this launch configuration, create an Auto Scaling group.

4. After creating the Auto Scaling group, it's time to create the deployment group.

5. Click on AWS CodeDeploy and click on Create Application.

6. Mention the application name and deployment group name.

AWS codedeploy and click on create application

7. For the tag type, choose either EC2 instance or AWS Auto Scaling group, and mention the name of the EC2 instance or Auto Scaling group.

Smart Deployment Automation: Using AWS S3, Codedeploy, Jenkins and Code Commit

8. For Service Role ARN, select the service role we created in the "Creating IAM Roles" section of this post.

9. Go to Deployments and choose Create New Deployment.

10. Select Application and Deployment Group and select the revision type for your source code.

Smart Deployment Automation: Using AWS S3, Codedeploy, Jenkins and Code Commit

11. Note that the IAM role associated with the instance or Auto Scaling group should be the same one used by CodeDeploy, and the role's ARN must have the CodeDeploy policy associated with it.

Step 5: Fill CodeDeploy Info in Jenkins and build it

1. Now go back to Jenkins Job 3, click on "Add Post-Build Action", and select "Deploy the application using AWS CodeDeploy".

2. Fill in the AWS CodeDeploy application name, deployment group, deployment config, AWS region, S3 bucket, and Include Files (**), and click on Access/Secret to fill in the keys for authentication.

3. Click Save and build the project. After a few minutes, the application will be deployed on the Auto Scaling instances.

4. When Job 3 builds successfully, we get console output like the following:

Smart Deployment Automation: Using AWS S3, Codedeploy, Jenkins and Code Commit


5. After this build, the changes take place in the AWS CodeDeploy deployment group.

Smart Deployment Automation: Using AWS S3, Codedeploy, Jenkins and Code Commit

6. Once you hit the DNS of the instance, you will see your application up and running.

To Wrap-Up

It's proven that teams and organizations that adopt continuous integration and continuous delivery practices significantly improve their productivity. And AWS CodeDeploy with Jenkins is an awesome combo when it comes to automating app deployment and achieving CI and CD.

Are you an enterprise looking to automate app deployment using a CI/CD strategy? As a Premier AWS Consulting Partner, we at Minjar have your back! Do share your comments in the section below or give us a shout-out on Twitter, Facebook, or LinkedIn.