May Roundup @ Botmetric: Deeper AWS Cost Analysis and Continuous Security

Cost modelling, budget reduction, and cost optimization are among the topmost considerations for businesses of any size. Whether it is an enterprise with a 100+ footprint or a small start-up with fewer than 10 employees, cost reduction is always great news. This month brought two pieces of great news from AWS on the cost front: the 61st price reduction, slashing EC2 RI and M4 prices, and better cost allocation for EBS snapshots. It also saw a key Botmetric Security & Compliance product roll-out, CIS Compliance. So in May, the focus was on AWS cloud cost analysis and continuous security.

Like every month, here is our May month-in-review, covering all the key activities around Botmetric and the AWS cloud.

Product News You Must Know @ Botmetric

Botmetric continues to build more capabilities into its platform. Here are the updates for May:

CIS Compliance for Your AWS

What it is about: Auditing your infrastructure against the CIS AWS Benchmark policies to ensure complete CIS compliance of your AWS infrastructure, without your having to go through a complex process or pore over documentation.

How it will help: It helps AWS users, auditors, system integrators, partners, and consultants imbibe CIS AWS Framework best practices, ensuring CIS compliance for your AWS cloud.

Where can you find this feature on Botmetric: Under Security & Compliance’s Security Audit & Remediation console.

To know more in detail, read the blog ‘Embrace Continuous Security and Ensure CIS Compliance for Your AWS, Always.’

Cost Allocation for AWS EBS Snapshots

What it is about: AWS has been extending custom tagging support across most of its services, such as EC2, RDS, ELB, Elastic Beanstalk, etc., and has now introduced cost allocation for EBS snapshots. Botmetric, acting quickly on this announcement, incorporated cost allocation and cost analysis for EBS snapshots.

How it will help: It allows you to use cost allocation tags on your EBS snapshots so that you can assign costs to your customers, applications, teams, departments, or billing codes at the level of individual resources. With this new feature you can easily analyze your EBS snapshot costs as well as usage.
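Once cost allocation tags are activated, billing data can be rolled up per tag value. As a rough illustration of the kind of roll-up this enables, here is a pure-Python sketch that groups hypothetical snapshot costs by a made-up `team` tag (all IDs, costs, and tag names are invented):

```python
from collections import defaultdict

# Hypothetical snapshot cost records, shaped like rows from a detailed
# billing export after cost allocation tags are activated.
snapshot_costs = [
    {"snapshot_id": "snap-001", "cost_usd": 4.20, "tags": {"team": "payments"}},
    {"snapshot_id": "snap-002", "cost_usd": 1.10, "tags": {"team": "analytics"}},
    {"snapshot_id": "snap-003", "cost_usd": 2.70, "tags": {"team": "payments"}},
    {"snapshot_id": "snap-004", "cost_usd": 0.90, "tags": {}},  # untagged
]

def costs_by_tag(records, tag_key):
    """Sum snapshot costs per value of a cost allocation tag."""
    totals = defaultdict(float)
    for rec in records:
        value = rec["tags"].get(tag_key, "(untagged)")
        totals[value] += rec["cost_usd"]
    return dict(totals)

print(costs_by_tag(snapshot_costs, "team"))
```

The same grouping generalizes to any tag key, which is why consistent tagging pays off at billing time.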

Where can you find this feature on Botmetric: Under Cost & Governance’s Chargeback console.

To know more in detail, read the blog ‘Cost Allocation for AWS EBS Snapshots Made Easy, Get Deeper AWS Cost Analysis.’

Use of InfluxDB Real-Time Metrics Data Store by Botmetric

What it is about: Botmetric’s journey in choosing the InfluxDB real-time metrics data store over a KairosDB+Cassandra cluster, and the key reasons why an engineer or architect looking for a real-time data store with simple operational management should opt for InfluxDB.

How it helped Botmetric: InfluxDB sped up application development time, and its simple operational management has been a boon. Team Botmetric was able to easily query and aggregate data. Above all, InfluxDB offered auto-expiry support for certain datasets, letting Botmetric reduce the DevOps effort previously spent cleaning up old data with separate utilities.
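The auto-expiry mentioned above is what InfluxDB retention policies provide: points older than the policy's duration are dropped automatically. This toy sketch mimics that behaviour in plain Python; the timestamps and the 30-day window are illustrative:

```python
from datetime import datetime, timedelta

def apply_retention(points, now, duration):
    """Keep only (timestamp, value) points newer than `now - duration`."""
    cutoff = now - duration
    return [(ts, v) for ts, v in points if ts >= cutoff]

now = datetime(2017, 5, 31)
points = [
    (datetime(2017, 3, 1), 10.0),   # older than 30 days: expired
    (datetime(2017, 5, 10), 12.5),  # retained
    (datetime(2017, 5, 30), 11.8),  # retained
]
kept = apply_retention(points, now, timedelta(days=30))
```

With a retention policy, the database does this pruning for you on a schedule, so no clean-up utilities are needed.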

Knowledge Sharing @ Botmetric

5 Cloud Security Trends Shaping 2017 and Beyond

While the switch to cloud computing provides many advantages in cost savings and flexibility, security remains a prime consideration for many businesses. It’s vital to consider new cloud technologies in 2017 for countering such rising threats. This guest post by Josh McAllister covered the top cloud security trends shaping 2017, among them AI and automation, micro-segmentation, software governance, adoption of new security technologies, and ransomware and the IoT. If you are looking to improve your security posture, this blog post is a must-read.

The Biggest Pet Peeves of Cloud Practitioners and Why You Should Know

Despite growing adoption, there are many barriers and challenges to cloud adoption and acceleration, and cloud practitioners feel them first-hand. Botmetric throws some light on them: apprehensions about losing control and visibility over data, lesser visibility and control over operations compared to on-prem IT infrastructure, fear of bill shock, and more. As a cloud user, do you want to know the top pet peeves of cloud practitioners and turn them into possibilities or opportunities? Read about these roadblocks here.

A CFO’s Roadmap to AWS Cloud Cost Forecasting and Budgeting

Despite the exponential increase in cloud adoption, one major fear still attaches to AWS, and indeed to any cloud: how to stay on top of cloud sprawl, and how to perfect AWS cost forecasting and budgeting as an enterprise business. To add to it, for today’s CFOs, IT is at the top of the agenda. If you are a CFO trying to up your game and seeking a roadmap for AWS cloud cost modelling, spend forecasting, and cloud budgeting, and above all for reining in cloud sprawl, bookmark this blog.

What is NoOps, Is it Agile Ops?

DevOps is here to stay, but today it is being augmented with NoOps through automation. By taking a NoOps approach, businesses can focus on clean application development, shorter cycles, and, above all, increased business agility.

In other words, in the journey of DevOps, automating mundane Ops tasks leads to NoOps. Essentially, NoOps frees up developers’ time for more innovation and brings agility into Ops (which is Agile Ops). Do read Botmetric’s take on this.

Ultimate Comparison of AWS EC2 t2.large vs. m4.large for Media Industry

Two AWS EC2 instance types, t2.large and m4.large, feature almost identical configurations. With media sites required to handle a large number of concurrent visitors at any given time, both resources seem perfect, which makes choosing between them on price and performance a challenge. To eliminate this confusion, Botmetric has put together a breakdown of AWS EC2 t2.large vs. m4.large for media companies. If you are a media company on AWS, this post might interest you.

The Wrap-up

Before we wrap up this month, we have a freebie to share. Botmetric has always recommended that AWS users adopt tagging and monitoring as a stepping stone towards budgeting and cost compliance. To this end, Botmetric has come up with an expert guide that will help you save costs on the AWS cloud with smart tagging. Download it here.

Until next month, stay tuned with us.

What is NoOps, Is it Agile Ops?

Sometime in 2011, Forrester released a report, ‘Augment DevOps With NoOps’, stating, “DevOps is good, but cloud computing will usher in NoOps.” More than five years on, several statements from that report still carry a lot of weight. While many have embraced cloud and DevOps, plenty of DevOps professionals out there still think NoOps is the end of DevOps. In reality, NoOps is simply the progression of DevOps.

And with DevOps being essentially an extension of Agile to include Ops, can we call NoOps ‘Agile Ops’? In this post we will dive deep into how developers are building, testing, and deploying applications, automating operations, and making use of microservices, leading to NoOps (more so Agile Ops), where everything rolls out fast, very fast.

Role of Cloud in NoOps and the Continuous-Automation Approach to DevOps for Agility

Before the DevOps concept came into existence, the development team was responsible for estimating servers, memory, and network components and producing the final specification of resources, a tedious process. Later, ITOps took over this estimation and specification work, while also managing the resources. However, to bring in agility, DevOps was born: developers started leveraging Agile concepts and managing operations too, to roll out applications faster.

Today, several cloud and PaaS platforms help developers automate application lifecycle management activities: allocating a machine instance to run the application; loading the OS and other architectural software components like application servers and databases; setting up the networking layer; building the application from the latest source in the code repository; deploying it to the configured machine; and so on.

As developers automate operational tasks, they free up more of their time for business logic and less for operations. In most cases they perform ‘no Ops tasks at all.’ In essence, they have progressed from DevOps towards NoOps.

DevOps is there, but it is being augmented with NoOps using automation.

Mike Gualtieri of Forrester Research, who coined the term NoOps, once said in his blog post, “NoOps means that application developers will never have to speak with an operations professional again.” This means that more and more developers are now responsible for operations, and operations are getting ingrained in developers’ job descriptions. Thanks to increasing cloud adoption, today’s operational tasks are increasingly carried out by developers rather than ITOps professionals. Here’s why: the cloud has brought consistency and elasticity, which make it easy for developers to automate everything using APIs.

For instance, the leading public cloud, AWS, offers a bunch of services and tools that can automate repetitive tasks. Tools like Jenkins, together with services like AWS CodePipeline and AWS CodeDeploy, help automate the build-test-release-deploy process. This enables developers to deploy a new piece of code into production themselves, potentially saving hundreds of hours every month.

Consider the Netflix case study. Adrian Cockcroft, VP Cloud Architecture Strategy at AWS and formerly Cloud Architect at Netflix, says in his blog post, “Several hundred development engineers use tools to build code, run it in a test account in AWS, then deploy it to production themselves. They never have to have a meeting with ITops, or file a ticket asking someone from ITops to make a change to a production system, or request extra capacity in advance.”

Cockcroft further adds in the same post, “They use a web based portal to deploy hundreds of new instances running their new code alongside the old code, put one ‘canary’ instance into traffic, if it looks good the developer flips all the traffic to the new code. If there are any problems they flip the traffic back to the previous version (in seconds) and if it’s all running fine, some time later the old instances are automatically removed. This is part of what we call NoOps.”
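The canary flow Cockcroft describes boils down to a health check before flipping traffic. A minimal sketch of that decision, with an assumed error-rate comparison and a made-up 1% tolerance:

```python
def decide_rollout(canary_error_rate, baseline_error_rate, tolerance=0.01):
    """Return 'promote' to flip all traffic to the new code, else 'rollback'."""
    if canary_error_rate <= baseline_error_rate + tolerance:
        return "promote"
    return "rollback"

# Healthy canary: errors comparable to the old version, so flip traffic.
print(decide_rollout(0.002, 0.001))   # promote
# Misbehaving canary: errors well above baseline, so flip traffic back.
print(decide_rollout(0.150, 0.001))   # rollback
```

Real deployment portals layer many more signals (latency, saturation, business metrics) on top of this, but the promote-or-rollback core is the same.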

“NoOps approach leads to business focus, clean application development, shorter cycles, and more so increased business agility.” – Vijay Rayapati, CEO, Minjar Cloud Solutions

Further, since DevOps and microservices work better when applied together, by adopting the microservices architectural style and a common toolset that supports it through code, engineers can bring additional productivity to DevOps and agility to Ops.

DevOps and Microservices Architecture: Moving Hand-in-hand to Enable NoOps

Microservices help developers and DevOps collaborate over requirements, dependencies, and problems, allowing them to work jointly on issues such as a build configuration or build script problem. With microservices, functional components can be deployed in their own archives, and the application can then be organized as a logical whole through a lightweight communication mechanism such as REST over HTTP.

More so, a microservices-based architecture empowers DevOps teams to manage their own lines of code without depending on others to deploy them. By enabling this independence, microservices architecture not only helps increase developer productivity but also makes applications more flexible and scalable.

Here are the highlights of how microservices can help DevOps across all aspects of operations management:

  • Service Deployability: Microservices enable DevOps to incorporate service-specific security, replication, persistence, and monitoring configurations.
  • Service Replication: Kubernetes provides a great way to replicate services easily using a Replication Controller when services need to be replicated via X-axis cloning or Y-axis partitioning. Each service can implement its own scaling logic.
  • Service Resiliency: Since the services are independent by design, even if one service fails, it will not bring down the entire application. DevOps can remediate that particular service without worrying about cascading impact from the individual failure.
  • Service Monitoring: As a distributed system, microservices can simplify service monitoring and logging. Microservices allow DevOps to take proactive action, for example scaling resources for a single service when it is consuming unexpected resources.
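The resiliency point above can be illustrated with a small sketch: each service call is isolated, so one failing service degrades only its own feature rather than the whole response. Service names and handlers here are hypothetical:

```python
def fetch_dashboard(services):
    """Call each service independently; fall back to None on failure."""
    result = {}
    for name, handler in services.items():
        try:
            result[name] = handler()
        except Exception:
            result[name] = None  # degrade gracefully, don't cascade
    return result

def failing_alerts():
    raise RuntimeError("alerts service is down")

services = {"billing": lambda: {"total": 42.0}, "alerts": failing_alerts}
dashboard = fetch_dashboard(services)  # billing succeeds, alerts degrades
```

Production systems usually add timeouts and circuit breakers around each call, but the isolation principle is the same.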

Considering the above points, DevOps should embrace the microservices approach to bring agility to all the Ops tasks carried out by DevOps engineers.

Public Cloud Services: Empowering DevOps to Move Towards NoOps

Thanks to the diverse set of features, tools, and services offered by cloud platforms, today’s developers and DevOps engineers can automate several tasks and autoscale without the help of ITOps professionals. This has reduced the burden of repetitive operational tasks. For instance:

  1. Auto Scaling: A DevOps engineer can create collections of EC2 instances/VMs, specify desired instance ranges, and create scaling policies that define when instances are provisioned or removed from the collection. With this resource-provisioning capability at hand, many Ops tasks become redundant, and the team can focus more on business logic.
  2. AWS OpsWorks helps configure and manage applications, create groups of EC2 instances, and streamline the instance provisioning and management process, again freeing the team to focus on business logic rather than Ops.
  3. A centralized log management tool helps developers and DevOps simplify troubleshooting by monitoring, storing, and accessing log files from EC2, AWS CloudTrail, and other sources.
  4. Using the EKK stack (Amazon Elasticsearch Service, Amazon Kinesis, and Kibana), a developer can focus on analyzing logs and debugging the application instead of managing and scaling the system that aggregates the logs.
  5. AWS CodePipeline, AWS CodeBuild, and AWS CodeDeploy help automate manual tasks and processes, including deployments, development and test workflows, container management, and configuration management.
  6. AWS Config creates an AWS resource inventory, including configuration history, configuration change notifications, and relationships between AWS resources, and provides a timeline of resource configuration changes for specific services. Change snapshots are stored in a specified Amazon S3 bucket, and Amazon SNS notifications can be sent when AWS resource changes are detected. This helps keep vulnerabilities in check.
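The scaling-policy idea in point 1 can be sketched as a simple threshold rule. All thresholds and fleet limits below are made-up examples, not AWS defaults:

```python
def desired_capacity(current, cpu_util, low=30.0, high=70.0,
                     min_size=2, max_size=10):
    """One evaluation period of a simple step-scaling policy."""
    if cpu_util > high:
        current += 1   # scale out by one instance
    elif cpu_util < low:
        current -= 1   # scale in by one instance
    return max(min_size, min(max_size, current))  # stay inside the range

print(desired_capacity(4, 85.0))  # busy fleet grows: 5
print(desired_capacity(4, 10.0))  # idle fleet shrinks: 3
print(desired_capacity(2, 10.0))  # but never below min_size: 2
```

Actual Auto Scaling policies evaluate CloudWatch metrics over cooldown periods and can step by more than one instance, but the decision shape is the same.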

If you want to know how to work the magic with DevOps for AWS Cloud Management, read this post.

The last word: NoOps Brings Agility to Ops Tasks, So NoOps is Synonymous with Agile Ops

In the journey of DevOps, automating mundane Ops tasks leads to NoOps. Essentially, NoOps frees up developers’ time for more innovation and brings agility into Ops (which is Agile Ops). Whatever you call it, Ops Automation, AIOps, Agile Ops, or something else, it is a rolling stone with the right momentum.

What is your take on this? Do share your thoughts.

Why Botmetric Chose InfluxDB — A Real-Time Metrics Data Store That Works

Are you an engineer or architect evaluating or seeking a real-time data store with simple operational management? If so, Botmetric recommends the InfluxDB time series metrics data store. After years of trying out a couple of other data stores, Botmetric zeroed in on InfluxDB. Read on to know why Botmetric chose InfluxDB and the key criteria in picking it over other data storage systems.

The Backdrop: Why Botmetric Chose InfluxDB Metrics Data Store?

Botmetric, the intelligent cloud management platform built for the modern DevOps world, has always helped cloud customers reduce overall cost, improve their security posture, and automate day-to-day operations.

One of the unique differentiators of the Botmetric platform compared to other SaaS tools is its powerful automation framework, wherein DevOps teams can perform automated actions based on either real-time events or scheduled workflows.

To this end, Botmetric executes thousands of jobs every day for its customers, a number expected to reach millions of tasks as the customer base grows. Further, the metadata around all Botmetric automations must be tracked continually to notify end users and provide visibility into what’s done and what’s not.

Essentially, Botmetric delivers intelligent operations using metadata from various operational sources like cloud providers, monitoring tools, logs, etc. It then applies concepts of Algorithmic IT Operations (AIOps) to provide smart insights and adaptive automation so that customers can make quick decisions. To that end, Botmetric collects a lot of time series data from different sources and has always needed an efficient database solution.

Sometime in early 2014, Botmetric was using OpenTSDB as its time series database. While team Botmetric liked OpenTSDB’s scalability, they faced several challenges operating it along with Hadoop, HBase, and ZooKeeper. After six months, the team realized that OpenTSDB was not the right fit for a small and nimble team. Another major issue with OpenTSDB was data aggregation, which was slowing down Botmetric’s development speed. Further, the lack of reliable failover in HBase in 2014 caused data availability issues.

In late 2014, team Botmetric decided to move away from OpenTSDB, and KairosDB on Cassandra was shortlisted as the alternative for storing the time series data. The team liked the quick setup and lower operational burden compared to OpenTSDB in production. Plus, Cassandra offered mature client library support for easier integration.

Even though Cassandra worked well for Botmetric until early 2016, the team had its share of challenges: as the customer base with large data sets grew exponentially, data aggregation became a complex task. The Cassandra clusters had to be scaled vertically with high-end machines and horizontally with more nodes. Moreover, hundreds of millions of records had to be processed into this data store every day while the team was still doing application-level data aggregation on top of Cassandra using CQL. This was a time-consuming exercise for most of the engineers on the team.

Further, from late 2015, Botmetric started moving metadata around cloud infrastructure, billing and usage records, etc. out of the time series store for easier and faster querying, and the complete platform was decoupled into a microservices-based architecture. To that end, the team needed to stream data from its microservices, component usage, monitoring metrics, and so on. Despite using Cassandra and KairosDB in production for over a year, Botmetric’s search for a reliable time series, real-time data store continued.

After several deliberations, in early 2016, Botmetric zeroed in on the InfluxDB metrics data store and deployed the InfluxData TICK stack with Grafana for monitoring all of its microservices’ events. The Botmetric team loved the simplicity, ease of use, support for various client libraries, great aggregation capability for querying, low operational overhead, and more that InfluxDB offered.

With InfluxDB, team Botmetric was able to easily query and aggregate data, unlike with Cassandra CQL. Above all, it offered auto-expiry support for certain datasets, with which Botmetric is now able to reduce the DevOps effort spent cleaning up old data using separate utilities.
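The aggregation the team liked is a one-liner in InfluxQL, e.g. `SELECT MEAN(value) FROM cpu WHERE time > now() - 1h GROUP BY time(10m)`. The pure-Python equivalent below shows what such a query computes; the data points and bucket size are illustrative:

```python
from collections import defaultdict

def mean_by_bucket(points, bucket_seconds):
    """points: (epoch_seconds, value) pairs -> {bucket_start: mean value}."""
    buckets = defaultdict(list)
    for ts, value in points:
        # Align each timestamp to the start of its bucket.
        buckets[ts - ts % bucket_seconds].append(value)
    return {start: sum(vs) / len(vs) for start, vs in buckets.items()}

points = [(0, 1.0), (30, 3.0), (90, 5.0)]
print(mean_by_bucket(points, 60))  # {0: 2.0, 60: 5.0}
```

Pushing this kind of roll-up into the database is exactly what removed the application-level aggregation work the team had been doing on top of Cassandra.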

In the words of Botmetric CEO Vijay Rayapati as cited in one of his blog posts, “InfluxDB is a savior. Its simplicity is amazing and will certainly speed-up your application development time. The simple operational management of InfluxDB will be very helpful if it’s a critical data store for you. You don’t need to break your head during any production debugging. Plus, their active community support is very helpful. We just loved what we saw with the TICK stack deployment for our SaaS platform metrics collection and events monitoring.”

Vijay further adds, “We’ve now retired our entire KairosDB+Cassandra cluster and replaced it with an InfluxDB, Elasticsearch deployment. Today, InfluxDB and TICK stack are central components in the Botmetric technology landscape. We will continue to adopt it as our core data store as we build new real-time use cases that are event driven in nature.”

The Wrap-up

Today, Botmetric refers to InfluxDB as a good choice for a “High Velocity Real-Time Metrics Data Store.” If you are an engineer or architect looking for a real-time data store with simple operational management, your search should end at InfluxDB. You can read the detailed story here, if this case study interests you.

Editor’s Note: This blog post is an adaptation of Vijay Rayapati’s blog post, “Choosing a Real-Time Metrics DataStore That Works – Botmetric Journey.”

The Biggest Roadblocks for Cloud Practitioners and Why You Should Know

Cloud computing has been increasingly favoured over on-premise computing of late. A majority of IT industry players, right from hardware manufacturers and OS and middleware software developers to independent software vendors (ISVs), have embraced the cloud.

IDG’s recently published Enterprise Cloud Computing Survey (2016) indicates that by 2018 the typical IT department will have at least 60% of its apps and platforms hosted in the cloud.

Future of IT Platform is Cloud

Despite this momentum, however, there are many barriers to cloud adoption and acceleration. Another industry report indicates that 60% of engineering firms are slowing down their cloud adoption plans due to a lack of security skills.

The skills gap is considered one of the major pet peeves of cloud practitioners across the globe, and there are other barriers to cloud adoption too. But the existence of challenges must not hinder progress. Why not turn these obstacles into opportunities and problems into possibilities?

As a cloud user, do you want to know the top pet peeves of cloud practitioners and turn them into possibilities or opportunities? If yes, you are in the right place. Read on for these challenges, collected from several cloud experts via an internal survey:

Apprehensions about Losing Control and Visibility over Data

Storing sensitive and proprietary data in an external environment carries risks. Despite cloud providers publishing successful case studies and best-practice guides, enterprise decision-makers are still apprehensive about moving their data to the cloud, because it becomes very difficult to see exactly where the data resides once it is on a public cloud.

The other perspective: If data is stored in the cloud, you can access it from anywhere, anytime, no matter what happens to your machine. Plus, you can retain complete control over your data and even remotely erase it if you suspect it has fallen into the wrong hands. Cloud providers also offer fine-grained Identity and Access Management (IAM) controls.

Moreover, there are many competitive SaaS platforms that bring data security tools integrated with other DevOps features so that cloud users don’t have to worry about losing control over their sensitive data sets.

Lesser Visibility and Control over Operations Compared to On-Prem IT Infra

A majority of businesses want to track the changes made during IT operations. So they worry that they might not have complete visibility into those operations, such as who is accessing what and when, as they do with on-prem IT infrastructure.

The other perspective: It is a myth that the cloud does not provide complete visibility and control. It provides both to the user, provided you have the appropriate authenticated access. Further, adopting DevOps platforms and toolchains such as Docker, Ansible, etc. can empower enterprise teams to track and manage the entire application development and deployment lifecycle.

Fear of Bill Shock

Cloud services are priced entirely differently from the simple fixed-price models of standard servers in a data center. Budgeting for and managing frequent cost changes in the cloud is worrisome for most businesses, because the complex pricing model of the cloud gets them overworked or overwhelmed.

The other perspective: The cloud runs on Opex, not Capex. A well-designed cloud architecture, along with a comprehensive cloud management plan, can always keep cloud costs under control and optimized. No bill shock, to be precise!

The good news is that there are several SaaS-based CloudOps solutions that integrate natively with the core cloud platforms by leveraging their open APIs. They can dynamically provision and decommission system resources based on dynamic parameters like workloads, user traffic, etc. By optimizing resource utilization, these CloudOps platforms can bring down operational costs drastically. Additionally, these platforms feature advanced dashboards that help companies establish budgetary controls and track actual cost accruals against planned costs. Such tools can help enterprises overcome the fear of costs overshooting.
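The budgetary-control idea reduces to a small sketch: compare month-to-date spend against a planned budget prorated by elapsed days, and flag overruns. The figures and the 20% alert margin below are assumptions for illustration:

```python
def budget_status(actual_mtd, monthly_budget, day_of_month,
                  days_in_month=30, margin=0.20):
    """Flag spend that outpaces the prorated plan by more than `margin`."""
    planned_to_date = monthly_budget * day_of_month / days_in_month
    if actual_mtd > planned_to_date * (1 + margin):
        return "over budget"
    return "on track"

# Mid-month check against a $900 monthly budget (hypothetical numbers).
print(budget_status(600, 900, 15))  # over budget
print(budget_status(400, 900, 15))  # on track
```

Dashboards in CloudOps tools essentially run this comparison continuously and turn the "over budget" branch into an alert.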

Acquiring New Skillset for Cloud Management

Cloud platforms have radically altered application development lifecycle automation and continue to do so with emerging cutting-edge technologies. To keep up, businesses have to continually build DevOps teams adept in the new technologies, essentially to manage cloud servers across the different lifecycle stages such as PoC, test, staging, and production.

The other perspective: Instead of hiring a team of engineers for Ops, why not automate known IT Ops using tools and just focus on development? The skills shortage always seems like a problem across the industry. Moreover, CloudOps automation platforms like Botmetric can help hide the technology complexity underneath the cloud by automating many of the frequently performed tasks.

The Bottom Line: Many CloudOps Platforms and Tools to the Rescue

Several third-party software vendors have ventured to fill the gaps in the core cloud platforms and solve most of the concerns voiced by cloud users. As a cloud expert, one should be knowledgeable about these extension tools in order to bridge the gap between cloud platform capabilities and enterprise teams’ needs.

Share your feedback in the comment section below or give us a shout-out on any of our social media channels. We are all ears.

AWS Comes with 61st Price Reduction, EC2 RIs & M4 Prices Slashed

AWS patrons using EC2 RIs and M4 instances, rejoice! AWS has announced yet another price reduction, and this time EC2 RI and M4 prices have been slashed.

AWS has been phenomenal in the public cloud space. As of today, AWS offers a plethora of services that cater to various business needs and workload types, and with its revolutionary pay-as-you-go model it has empowered business agility. Now, with further price reductions, it’s the icing on the cake for businesses running on EC2 RIs and M4 instances.

In the words of Jeff Barr in one of his recent blogs, “Our customers use multiple strategies to purchase and manage their Reserved Instances. Some prefer to make an upfront payment and earn a bigger discount; some prefer to pay nothing upfront and get a smaller (yet still substantial) discount. In the middle, others are happiest with a partial upfront payment and a discount that falls in between the two other options. In order to meet this wide range of preferences we are adding 3 Year No Upfront Standard Reserved Instances for most of the current generation instance types. We are also reducing prices for No Upfront Reserved Instances, Convertible Reserved Instances, and General Purpose M4 instances (both On-Demand and Reserved Instances). This is our 61st AWS Price Reduction.”

All You Need To Know About Price Reductions of EC2 RIs and M4

No Upfront Option for 3 Year Standard RIs

Earlier, AWS offered a No Upfront payment option only for the 1-year term for Standard RIs. Henceforth, there will be a No Upfront payment option for the 3-year term as well, for C4, M4, R4, I3, P2, X1, and T2 Standard RIs.

~17% Price Reductions for No Upfront Reserved Instances

No Upfront 1 Year Standard and 3 Year Convertible RIs for C4, M4, R4, I3, P2, X1, and T2 instance types will now be up to 17% cheaper, depending on instance type, operating system, and region. Refer to the table below for the average reductions for No Upfront Reserved Instances for Linux in several representative regions:

[Table: EC2 RI prices slashed] Image source: https://aws.amazon.com/blogs/aws/category/price-reduction/

~21% Reduced Prices for Convertible Reserved Instances

AWS is now reducing the prices of 3 Year Convertible Reserved Instances by up to 21% for most of the current-generation instances (C4, M4, R4, I3, P2, X1, and T2). Refer to the table below for the average reductions for Convertible Reserved Instances for Linux in several representative regions:

[Table: ~21% reduced prices for Convertible Reserved Instances] Image source: https://aws.amazon.com/blogs/aws/category/price-reduction/

According to AWS, similar reductions will go into effect for nearly all of the other regions too. 

~7% Reduced Price for M4 Instances

Prices for M4 Linux instances have been reduced by up to 7%. M4 has been one of the most popular of the new-generation instance types.
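To see what such a cut means in practice, here is a quick worked example over a month of continuous usage; the $0.10/hour rate is illustrative, not an actual AWS price:

```python
HOURS_PER_MONTH = 730  # AWS's conventional hours-per-month figure

def monthly_cost(hourly_rate, reduction=0.0):
    """Monthly cost of one always-on instance after a fractional price cut."""
    return hourly_rate * (1 - reduction) * HOURS_PER_MONTH

old = monthly_cost(0.10)        # before the cut
new = monthly_cost(0.10, 0.07)  # after a 7% cut
print(round(old - new, 2))      # monthly savings per instance: 5.11
```

Small percentage cuts compound quickly across a fleet: the same 7% on a hundred such instances is over $500 a month.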

Visit the EC2 Reserved Instance Pricing Page and the EC2 Pricing Page, or consult the AWS Price List API for all of the new prices.

The Wrap-Up

Cost modelling, budget reduction, and cost optimization are among the topmost considerations for businesses of any size. Whether it is an enterprise with a 100+ footprint or a small start-up with fewer than 10 employees, this price reduction is great news.

Share your views with us. Just drop a line below in the comment section or give us a shout-out @BotmetricHQ. If you want to holistically reduce your AWS bill, then try Botmetric Cost & Governance. You will get data-driven cost management to monitor AWS finances and make wise decisions to maximize your AWS cloud ROI.

Get Started Now!

April Roundup @ Botmetric: Aiding Teamwork to Solidify 3 Pillars of Cloud Management

Spring is still on at Botmetric, and we continue to evolve with the seasons through new features. This month, the focus was on bringing more collaboration and teamwork to the various tasks of cloud management. The three pillars of cloud management (visibility, control, and optimization) can be solidified only with seamless collaboration. To that end, Botmetric released two cool collaborative features in April: Slack Integration and Share Reports.

1. Slack Integration

What is it about: Integrating the Slack collaboration tool with Botmetric so that a cloud engineer never misses an alert or notification while on a Slack channel and can quickly share it with the team.

How will it help: Cloud engineers can quickly get a sneak peek into specific Botmetric alerts, as well as details of various cloud events, on their desired Slack channel. Be it an alert generated by Botmetric’s Cost & Governance, Security & Compliance, or Ops & Automation, engineers can see these alerts without logging into Botmetric and quickly communicate the problem among team members.

Where can you find this feature on Botmetric: Under the Admin section inside 3rd Party Integrations.

To know more in detail, read the blog ‘Botmetric Brings Slack Fun to Cloud Engineers.’

2. Share/Email Data-Rich AWS Cloud Reports Instantly

What is it about: Sharing/emailing Botmetric reports directly from Botmetric. No downloading required.

How will it help: For successful cloud management, all the team members need complete visibility with pertinent data in the form of AWS cloud reports. The new ‘Share Reports’ feature provides complete visibility across accounts and helps multiple AWS users in the team better collaborate while managing the cloud.

Where can you find this feature on Botmetric: Across all the Botmetric products in the form of a share icon.

To know more in detail, read the blog ‘Share Data-Rich AWS Cloud Reports Instantly with Your Team Directly From Botmetric.’

Knowledge Sharing @ Botmetric

Continuing our new tradition of providing quick bites and snippets on better AWS cloud management, here are a few blogs we covered in the month of April:

Gauge AWS S3 Spend, Minimize AWS S3 Bill Shock

AWS S3 offers a durability of 99.999999999% and features a simple web interface to store and retrieve any amount of data. When it comes to AWS S3 spend, there is more to it than just the storage cost. If you’re an operations manager or a cloud engineer, you probably know that data reads/writes and data moved in/out also count toward the AWS S3 bill. Hence, a detailed analysis of all these can help you keep AWS S3 bill shock to a minimum. To know how, visit this page.
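To make the “more than storage” point concrete, here is a toy estimate that combines storage, request, and transfer line items (the rates below are illustrative only, not current AWS pricing):

```python
def s3_monthly_estimate(storage_gb, put_requests, get_requests, transfer_out_gb,
                        storage_rate=0.023, put_rate=0.005 / 1000,
                        get_rate=0.0004 / 1000, transfer_rate=0.09):
    """Rough S3 bill: storage is only one of several line items.
    Requests and data transfer out also contribute to the total."""
    return (storage_gb * storage_rate          # storage per GB-month
            + put_requests * put_rate          # PUT/COPY/POST/LIST requests
            + get_requests * get_rate          # GET and other requests
            + transfer_out_gb * transfer_rate) # data transferred out
```

With numbers like 100 GB stored, a million PUTs, ten million GETs, and 50 GB transferred out, the request and transfer items together can exceed the storage line, which is exactly the kind of breakdown that keeps bill shock at bay.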

7 Tips on How to Work the Magic With DevOps for AWS Cloud Management

Are you a DevOps engineer looking for complete AWS cloud management? Or are you an AWS user looking to use DevOps practices to optimize your AWS usage? Either way, AWS and DevOps are the modern way of getting things done. You should leverage new-age DevOps tools for monitoring, application performance management, log management, security, data protection, and cloud management instead of trying to build ad hoc automation or dealing with the primitive tools offered by AWS.

Get the top seven tips on how to work the magic with DevOps for AWS cloud management.

The Ultimate Cheat Sheet On Deployment Automation Using AWS S3, CodeDeploy & Jenkins

If you’re a DevOps engineer or an enterprise looking for a complete guide on how to automate app deployment using Continuous Integration (CI)/Continuous Delivery (CD) strategies, and tools like AWS S3, CodeDeploy, Jenkins, and CodeCommit, then bookmark this blog penned by Minjar’s cloud expert.

Botmetric Cloud Explorer: A Handy Topological Relationship View of AWS Resources

Do you want a complete understanding of your AWS infrastructure? To map how resources are connected and where they stand today, for stronger governance, auditing, and tracking? Above all, to get one handy, cumulative relationship view of AWS resources without using the AWS Config service? Read this blog to learn how to get a complete topological relationship view of your AWS resources.

The Cloud Computing Think-Tank Pieces @ Botmetric

5 Reasons Why You Should Question Your Old AWS Cloud Security Practices

While you scale your business on the cloud, AWS keeps scaling its services too. So cloud engineers have to constantly adapt to architectural changes as and when AWS updates are announced. While all these architectural changes are made, AWS Cloud Security best practices and audits need to be revisited from time to time as well.

Tightly Integrated AWS Cloud Security Platform Just a Click Away

As a CISO, you must question your old practices and assess whether they are still relevant in the present day. Here are the excerpts from a think-tank session highlighting the five reasons why you should question your old practices.

The Rise of Anything as a Service (XaaS): The New Hulk of Cloud Computing

The ‘Cloud-driven aaS’ era is clearly upon us. Besides the typical SaaS, IaaS, and PaaS offerings discussed, there are other ‘as-a-Service’ (aaS) offerings too. For instance, Database-as-a-Service, Storage-as-a-Service, Windows-as-a-Service, and even Malware-as-a-Service. It is the era of Anything-as-a-Service (XaaS). Read the excerpts from an article by Amarkant Singh, Head of Product, Botmetric, featured on Stratoscale, which shares views on XaaS, IaaS, PaaS, and SaaS.

April Wrap-Up: Helping Bring Success to Cloud Management

Rain or shine, Botmetric has always striven to bring success to cloud management. And will continue to do so with DevOps, NoOps, AIOps solutions.

If you have missed rating us, you can do it here now. If you haven’t tried Botmetric, we invite you to sign up for a 14-day trial. Until next month, stay tuned with us on social media.

AWS Cloud Security Think Tank: 5 Reasons Why You Should Question Your Old Practices

Agile deployments and scalability seem to be the most dominant trends in the public cloud today, especially on AWS. While you scale your business on the cloud, AWS keeps scaling its services and upgrading its technology from time to time to keep up with the technology disruptions happening across the globe. To that end, your cloud engineers have to constantly adapt to architectural changes as and when updates are announced. While all these architectural changes are made, AWS Cloud Security best practices and audits need to be revisited from time to time as well.

As a CISO, have you ever questioned your old practices and assessed whether they are still relevant in the present day?

Here are a few excerpts from our AWS Cloud Security Think Tank: a collation of deliberations we had recently at Botmetric HQ with our security experts on why anyone on the cloud should question their old AWS cloud security best practices.

1. Relooking at Endpoint Security

“Securing the server end is just one part of enterprise cloud security. If there is a leakage at the endpoints, the net result is an adverse impact on your cloud infrastructure. Newer approaches to assert the legitimacy of the endpoint are more important than ever.” — Upaang Saxena, Botmetric LLC.

As most cloud apps provide APIs, the client authentication mechanisms have to be redesigned. Moreover, as the endpoints are now mobile devices, IoT devices, and laptops that might be anywhere in the world, endpoint security is increasingly moving away from a perimeter-based security model and giving way to an identity-based endpoint security model. Hence, newer approaches to assert the legitimacy of the endpoint are more important than ever.

2. Revisiting Policies Usage

“Use managed policies, because with managed policies it is easier to manage access across users.” — Jaiprakash Dave, Minjar Cloud Solutions

Earlier, only inline IAM (identity-based) policies were available; managed policies came later. So not all old AWS cloud best practices from the inline-policy era hold good in the present day. It is therefore recommended to use the managed policies available now. With managed policies you can manage permissions from a central place rather than having them attached directly to users. They also enable you to properly categorize policies and reuse them. Updating permissions becomes easier when a single managed policy is attached to multiple users. You can attach up to 10 managed policies to a user, role, or group; the size of each managed policy, however, cannot exceed 5,120 characters.
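As an illustrative sketch of this central-management idea (assuming boto3’s IAM client; the policy ARN and user names below are hypothetical), attaching one managed policy to several users could look like this:

```python
# Sketch: attach a single managed policy to many users, so later
# permission updates happen in one place (the policy) rather than
# per user. In practice the client would be boto3.client("iam").

def attach_policy_to_users(iam, policy_arn, usernames):
    """Attach one managed policy to each user; returns the count updated."""
    for name in usernames:
        iam.attach_user_policy(UserName=name, PolicyArn=policy_arn)
    return len(usernames)
```

With a real client, changing what those users may do later means editing the single managed policy, not touching each user.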

3. Make Multiple Account Switch Roles

“We encourage our clients to make multiple account switch roles for access controls as per their security needs.” — Anoop Khandelwal, Botmetric LLC.

Earlier, it was not recommended to switch roles for access controls while using a VPC. Now, however, it is recommended to make multiple account switch roles for access controls as per your security needs. Plus, earlier VPCs came with de facto defaults, which were inherently less than ideal from a security perspective. Today, Amazon VPC provides features that you can use to increase and monitor the security of your Virtual Private Cloud (VPC).

4. Redesigning Architecture for New Attack Vectors

DDoS attacks through compromised IoT devices, such as the Mirai botnet attacks, caught security professionals by surprise. No security analyst predicted an attack of that scale. Hackers will keep designing such new attack vectors to penetrate popular and highly sensitive websites, and it is difficult to anticipate all of them. So cloud professionals have to revisit their architecture and be ready with better contingency measures for such unanticipated attack vectors.

“You (the cloud security engineer) need to relook at your architecture now and then and come up with better contingency measures for new-age attack vectors like massively distributed denial-of-service (DDoS) attacks.” — Abhinay Dronavally, Botmetric LLC.

5. New API Security Mechanisms

Today, most enterprise applications consume data from external web services and also expose their own data. The authentication mechanisms for APIs cannot be the same as human-user authentication, as in earlier days; APIs must fit machine-to-machine interactions. Focus more on integrating API security mechanisms with a specialized API security solution.

“As data breaches can happen through APIs, integration of API security mechanisms is a must.” — Shivanarayana Rayapati, Minjar Cloud Solutions.

Final Thoughts

As the sophistication of attacks keeps increasing, security solutions too have to improve their detection methods. Today’s security solutions leverage Artificial Intelligence (AI) algorithms like Random Forest classification, Deep Learning techniques, etc. to study, organize, and identify the underlying access patterns of various users. A well-thought-through approach is pivotal in securing your AWS cloud. For that matter, any cloud.

Tightly Integrated Cloud Security Platform for AWS Just a Click Away — Get Started!

The Rise of Anything as a Service (XaaS): The New Hulk of Cloud Computing

Cloud computing, as we see it today, has seen tremendous evolution of as-a-service segments, right from the dawn of Software-as-a-Service (SaaS) to Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS). And now Anything-as-a-Service (XaaS).

Analysts forecast that the global XaaS market will grow at a CAGR of 38.22% between 2016 and 2020. Besides the typical SaaS, IaaS, and PaaS offerings discussed, there are other ‘as-a-Service’ (aaS) offerings too. For instance, Database-as-a-Service, Storage-as-a-Service, Windows-as-a-Service, and even Malware-as-a-Service.

No doubt the ‘Cloud-driven aaS’ era is clearly upon us. And cloud computing remains the top catalyst for all these services’ growth. The converse holds true too.

In the words of Amarkant Singh, Head of Product, Botmetric, “The persuasive wave of cloud computing is affecting every industry and every vertical we can think of. Thanks to all of its fundamental models (IaaS, PaaS, and SaaS, plus the latest XaaS), cloud has brought in democratization of infrastructure for businesses. Talking about XaaS: it is the new hulk of cloud computing and is ushering in more ready-made, do-it-yourself components and drag-and-drop development.”

XaaS: Born to Win

The XaaS model was born out of the elasticity that the cloud offers. More so, XaaS provides an ever-increasing range of solutions that ultimately gives businesses the flexibility to choose exactly what they want, tailored for their business, irrespective of size or vertical.

Recently, Stratoscale asked 32 IT experts to share their insights on the differences between IaaS, PaaS, and SaaS and compiled an exhaustive Op-Ed report, IaaS/PaaS/SaaS – the Good, the Bad and the Ugly[1]. Among these experts, Amarkant too penned a few lines for the report.

Here are excerpts from the article:

More companies across the spectrum have gained trust in cloud infrastructure services, pioneered by AWS. While IaaS provides a high degree of control over the cloud infrastructure, it is very capital-intensive and has geographic limitations. On the other hand, PaaS comes with decreased costs but offers limited scalability.

With its roots strongly tied to virtualization, SOA and utility/grid computing, SaaS is gaining more popularity. More so, it is gaining traction due to its scalability, resilience, and cost-effectiveness.

According to a recent survey by IDC, 45% of the budget that organizations allocate for IT cloud computing is spent on SaaS.

As organizations move more of their IT infrastructure and operations to the cloud, they are willing to embrace a serverless/NoOps model. This marks the gradual move towards the XaaS model (Anything as a Service), which cannot be ignored.

XaaS is the new hulk of cloud computing. Born out of the elasticity offered by the cloud, XaaS can provide an ever-increasing range of solutions, allowing businesses to choose exactly the solution they want, tailored for their business, irrespective of size or vertical. Additionally, since these services are delivered through either hybrid clouds or one or more of the IaaS/PaaS/SaaS models, XaaS has tremendous potential to lower costs. It can also offer low-risk infrastructure for building a new product or focusing on further innovation. XaaS adoption has already gained traction, so the day is not far off when XaaS will be the new norm. But at the end of the day, it all depends on how cloud-ready a company is for XaaS adoption.

Concluding Thoughts

Each expert has an idiosyncratic perspective on the what, where, when, and why of XaaS. For some, it stands for Everything-as-a-Service and refers to the increasing number of services delivered over the Internet through the cloud. For others, it is Anything-as-a-Service. Techopedia describes it as a broad category of services related to cloud computing and remote access through which businesses can cut costs and get specific kinds of personal resources. Different perspectives, different views, but one goal: putting cloud in perspective.

Read what other experts are deliberating on XaaS on Stratoscale’s Op-Ed article ‘IaaS/PaaS/SaaS – the Good, the Bad and the Ugly.’[1]

Share your thoughts in the comment section below or give us a shout out on either Facebook, Twitter, or LinkedIn. We would love to hear what’s your take on XaaS.

[1] Stratoscale, 2017, “IaaS/PaaS/SaaS – the Good, the Bad and the Ugly.”

7 Tips on How to Work the Magic With DevOps for AWS Cloud Management

Both cloud and DevOps have gained importance because they help IT address some of the biggest transformative shifts of our times: one, the rise of the service economy; two, the unprecedented, almost continuous pace of disruption; and three, the infusion of digital into every facet of our lives. These are the shifts driving business in the 21st century. And DevOps for AWS cloud management is a match made in heaven.

If you are a DevOps engineer looking for better AWS cloud management, you’re at the right place. Read on to learn how AWS and DevOps practices are a go-to combo.

The Backdrop

Cloud has finally come of age in the last few years. Gartner has projected that the worldwide public cloud services market will grow 18 percent in 2017 to a total $246.8 billion, up from $209.2 billion in 2016. Out of this, the highest growth is expected to come from cloud system infrastructure services (infrastructure as a service [IaaS]), which is projected to grow 36.8 percent in 2017 to reach $34.6 billion.

IDC too has its views:

[Image: worldwide public cloud services market report from IDC. Image source: IDC 2017 Forecast on Public IT Spending]

Several companies are hosting enterprise applications in AWS, suggesting that CIOs have become more comfortable hosting critical software in the public cloud. As per Forrester, the first wave of cloud computing was created by Amazon Web Services, which launched with a few simple compute and storage services in 2006. A decade later, AWS is operating at an $11 billion run rate.

“As a mindset, cloud is really about how you do your computing rather than where you do it.”

A public cloud like AWS already provides a set of flexible services designed to enable companies to build and deliver products more rapidly and reliably using AWS and DevOps practices. These services simplify provisioning and managing infrastructure, deploying application code, automating software release processes, and monitoring your application and infrastructure performance.

“In simple words: AWS Cloud Management becomes much simpler through the use of DevOps and vice-versa.”

An essential element of DevOps is that development and operations are bound together, which means that configuration of the infrastructure is part of the code itself. Unlike the traditional process of doing development on one machine and deployment on another, the machine becomes part of the application. This is almost impossible without the cloud, because in order to get better reliability and performance, the infrastructure needs to scale up and down as needed.

For its part, DevOps has gained the spotlight in the software development field and is growing from strength to strength. DevOps has seen a tremendous increase in adoption in recent years, becoming an essential component of software-centric organizations. But real magic is created when DevOps and cloud come together.

Below are a few useful tips to ensure that you get the most from DevOps for AWS cloud management.

1. Templatize your Cloud Architecture

“Build your Cloud as Code.”

Using AWS CloudFormation’s sample templates, or creating your own, you can describe the AWS resources, including the deployment configuration and any associated dependencies or runtime parameters, required to run your application.

[Image: an AWS CloudFormation sample template. Image source: AWS CloudFormation docs]

This gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

This allows source control of your VPC design, application deployment architecture, network security design, and application configuration in JSON format, and helps everyone on your team understand your cloud design. If you require multi-cloud support for safely creating and managing cloud infrastructure at scale, you can consider using Terraform.

One great thing about CloudFormation is that you don’t need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work. Once the AWS resources are deployed, you can modify and update them in a controlled and predictable way, in effect applying version control to your AWS infrastructure.

“The best part is that CloudFormation is available at no additional charge; you pay only for the AWS resources needed to run your applications.”
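As a rough sketch of the “cloud as code” idea (assuming boto3; the one-bucket template and stack name below are illustrative, not a recommended production template), describing a resource as code and provisioning it might look like:

```python
import json

# Minimal illustrative template: a single S3 bucket described as code.
# The logical resource name "LogsBucket" is hypothetical.
TEMPLATE = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "LogsBucket": {"Type": "AWS::S3::Bucket"},
    },
}

def launch_stack(cfn, stack_name):
    """Provision the template as a stack; cfn would be
    boto3.client('cloudformation') in practice."""
    return cfn.create_stack(StackName=stack_name,
                            TemplateBody=json.dumps(TEMPLATE))
```

Because the template lives in source control, reviewing an infrastructure change becomes an ordinary code review.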

2. Automate with AWS Cloud Management Tools

The cloud makes it easier for you to automate everything using APIs. AWS provides a bunch of services that help organizations practice DevOps, and these are built first for use with AWS. These tools can automate manual tasks, help teams manage complex environments at scale, and keep engineers in control of the high velocity enabled by DevOps.

AWS cloud automation tools help you use automation so you can build faster and more efficiently.

AWS Automation Tools

First and foremost, you might want to automate the build and deploy process of your applications. You can leverage Jenkins or CodePipeline with CodeDeploy to automate your build-test-release-deploy process. This enables anyone on your team to deploy a new piece of code into production, potentially saving hundreds of engineering hours every month.

Using AWS services, you can also automate manual tasks or processes including deployments, development & test workflows, container management, and configuration management.

Doing manual work in the cloud through the console can be quite problematic. You simply cannot deal with the complexity and configuration required for your applications without automating everything, from provisioning, configuration, build, release, and deployment to monitoring and troubleshooting.

“In Cloud, the only thing you should trust is your automation. Automate IT.”

3. Free up Engineers’ Time Using Managed DB and Search

In most cases, there is absolutely no reason for you to run your own SQL databases. AWS offers some great managed services, such as Amazon RDS and Amazon Elasticsearch Service. These can free you from much of the AWS cloud management burden by managing the complexity and handling the underlying infrastructure.

Amazon Elasticsearch Service makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full text search, application monitoring, and more. Similarly, the Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business.

These managed offerings from AWS make everything from patch management, horizontal scalability to read replicas a breeze. The best part is that these will free up your engineers’ time to focus on more business initiatives by offloading a large chunk of operational work to AWS.

4. Simplify Troubleshooting Through Centralized Log Management

“DevOps allows you to do frequent deploys, so you debug quickly and do the release. With centralized log management, debugging gets quicker still.”

The most important debug information of your applications that you need for troubleshooting will be in the log files. Therefore, you need a centralized system to collect and manage it. You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, and other sources. The ELK stack (Elasticsearch, Logstash, and Kibana) or EKK stack (Amazon Elasticsearch Service, Amazon Kinesis, and Kibana) is a solution that eliminates the undifferentiated heavy lifting of deploying, managing, and scaling your log aggregation solution. With the EKK stack, you can focus on analyzing logs and debugging your application, instead of managing and scaling the system that aggregates the logs.

You should look at using CloudWatch Logs to stream all logs from your servers into an ELK stack. You can also look at Sumo Logic or Loggly if you need advanced analytics and grouping of log data. This allows engineers to look at information for troubleshooting problems or handling issues without worrying about SSH access to systems.
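A minimal sketch of pulling matching events out of a centralized CloudWatch Logs group (assuming boto3’s `logs` client; the log-group name and filter pattern below are hypothetical):

```python
def recent_errors(logs, group, pattern="ERROR", limit=50):
    """Fetch messages matching a pattern from a CloudWatch Logs group.
    logs would be boto3.client('logs') in practice; a single page of
    results is enough for a quick troubleshooting look."""
    resp = logs.filter_log_events(logGroupName=group,
                                  filterPattern=pattern,
                                  limit=limit)
    return [e["message"] for e in resp.get("events", [])]
```

An engineer can run this against, say, `/app/prod` without ever opening an SSH session to the boxes that produced the logs.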

5. Get Round-the-Clock Cloud Visibility

DevOps is a continuous process. Put it into action for round-the-clock cloud visibility. Every business needs visibility into its cloud usage from a users, operations, applications, access, and network-flow standpoint.

DevOps is a Continuous Process

You can do this easily in AWS using AWS DevOps tools like CloudTrail logs, VPC Flow Logs, RDS logs, and ELB/CloudFront logs. You will have everything needed to audit what happened, when, and from where, to understand any incident. This will help you understand and troubleshoot events faster.

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain events related to API calls across your AWS infrastructure. CloudTrail provides a history of AWS API calls for your account, including API calls made through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This history simplifies security analysis, resource change tracking, and troubleshooting.

Similarly, VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs. After you’ve created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs.

Flow logs can help you with a number of tasks; for example, to troubleshoot why specific traffic is not reaching an instance, which in turn can help you diagnose overly restrictive security group rules.

You can also use flow logs as a security tool to monitor the traffic that is reaching your instance.
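A small sketch of switching flow logs on for a VPC, delivered to CloudWatch Logs (assuming boto3’s EC2 client; the VPC ID, log group, and role ARN below are placeholders):

```python
def enable_vpc_flow_logs(ec2, vpc_id, log_group, role_arn):
    """Capture ALL traffic (accepted and rejected) for one VPC.
    ec2 would be boto3.client('ec2'); the role must allow delivery
    to CloudWatch Logs."""
    return ec2.create_flow_logs(
        ResourceIds=[vpc_id],
        ResourceType="VPC",
        TrafficType="ALL",
        LogGroupName=log_group,
        DeliverLogsPermissionArn=role_arn,
    )
```

Once enabled, the rejected-traffic records are what make diagnosing overly restrictive security group rules straightforward.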

You can also monitor the MySQL error log, the slow query log, and the general log directly through the Amazon RDS console, the Amazon RDS API, the Amazon RDS CLI, or the AWS SDKs. 

6. Manage ROI Intelligently

“DevOps is the culture of innovating at velocity. Using DevOps concepts you can help keep cloud ROI in check.”

One of the benefits of moving your business to the cloud is reducing your infrastructure costs. Before you find ways to maximize your AWS cloud ROI, you first need the right data to help you make decisions. After all, controlling your cloud costs is easy when all the right data comes together in a single dashboard. There are tools (including from Botmetric) that provide full visibility into your cloud across the company to build a meaningful picture of expenses and analyze resources by business units or departments. With these tools, you have immediate answers to why your AWS cloud costs spiked and what caused them.

“A penny saved is a penny earned. Ensure you track down every unused and underused resource in your AWS cloud and help increase ROI.”

Using Botmetric products, you can fix cost leaks within minutes using the powerful click-to-fix automation. You also have a unified cloud cost savings dashboard to understand utilization across your business to know cost spillage at business unit or cloud account level.

Cloud capacity planning is pivotal to reduce your overall cloud spend. There is no better way to maximize ROI than considering Reserved Instance purchases in AWS for your predictable usage for the year.

With RI, you pay the low hourly usage fee for every hour in your Reserved Instance term. This means you’re charged hourly regardless of whether any usage has occurred during an hour. When your total quantity of running instances during a given hour exceeds the number of applicable RIs you own, you will be charged the On-Demand rate. There are other dynamics to it too.
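The hourly RI billing logic described above can be sketched in a few lines of arithmetic (the rates used in the example are illustrative, not AWS list prices):

```python
def hourly_charge(running, ri_count, ri_rate, od_rate):
    """Charge for one hour: every owned RI is billed at its hourly
    rate whether used or not; running instances beyond the RI count
    are billed at the On-Demand rate."""
    on_demand = max(running - ri_count, 0) * od_rate
    return ri_count * ri_rate + on_demand
```

With 3 RIs at $0.05/hr and On-Demand at $0.10/hr: running 5 instances costs 3 × 0.05 + 2 × 0.10 = $0.35 for the hour, while running none still costs $0.15, which is why unused RIs quietly erode savings.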

Botmetric’s AWS Reserved Instance planner (RI Planner) evaluates cloud utilization to recommend intelligent ways to optimize AWS RI costs. It enables you to plan right. Even better, there will be no more over-reservation or underutilization. You have access to a suite of intelligent RI recommendation algorithms and a smart RI purchase planner to save weeks of effort.

With the recent RI models, you can simplify RI management and not worry about tiny configuration details to take advantage of RIs in a region. You should have mechanisms to alert you in case of unused RIs. With effective RI management, you can keep everyone happy and save money for the company.

7. Ensure Top-Notch AWS Cloud Security

You can achieve far better security in AWS than you potentially can in a data center, without worrying about exorbitant licensing costs for legacy security tools.

AWS provides WAF, DDoS protection, Inspector, Systems Manager, Trusted Advisor, and Config Rules for protecting your cloud, while you can get virtually all other security tools from the marketplace.

AWS CloudTrail, which provides a history of AWS API calls for an account, too facilitates security analysis, resource change tracking, and compliance auditing of an AWS environment.

Moreover, CloudTrail is an essential service for understanding AWS usage and should be enabled in every region – for all AWS accounts regardless of where services are deployed.
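A brief sketch of enabling one trail that records activity in every region (assuming boto3’s CloudTrail client; the trail and bucket names below are placeholders):

```python
def enable_multi_region_trail(cloudtrail, name, bucket):
    """Create a single trail covering all regions, including global
    services such as IAM. cloudtrail would be
    boto3.client('cloudtrail'); the S3 bucket must already exist
    with a policy allowing CloudTrail delivery."""
    return cloudtrail.create_trail(
        Name=name,
        S3BucketName=bucket,
        IsMultiRegionTrail=True,
        IncludeGlobalServiceEvents=True,
    )
```

The `IsMultiRegionTrail=True` flag is what satisfies the “enabled in every region” advice without creating a trail per region by hand.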

As a DevOps engineer, you can also use AWS Config, which creates an AWS resource inventory, including configuration history, configuration change notifications, and relationships between AWS resources. It provides a timeline of resource configuration changes for specific services too. Plus, change snapshots are stored in a specified Amazon S3 bucket, and AWS Config can be set up to send Amazon SNS notifications when AWS resource changes are detected. This will help keep vulnerabilities in check.

Not to forget: add an additional layer of security for your business with Multi-Factor Authentication (MFA) for your AWS root account and all IAM users. The same should be applied to your SSH jumpbox as well, so no one can access it directly. You should also enable MFA Delete for all your critical S3 buckets that hold business information and backup data, to protect them from accidental deletions. Given the advantages that MFA protection brings for enhanced security, there is no reason to avoid it.
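As a quick way to audit the MFA advice above (assuming boto3’s IAM client; pagination is omitted for brevity in this sketch):

```python
def users_without_mfa(iam):
    """Return the names of IAM users with no MFA device attached.
    iam would be boto3.client('iam') in practice; for accounts with
    many users, use a paginator over list_users."""
    missing = []
    for user in iam.list_users()["Users"]:
        devices = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
        if not devices:
            missing.append(user["UserName"])
    return missing
```

Running this on a schedule and alerting on a non-empty result turns the MFA best practice into a continuously enforced check.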

Concluding Thoughts: Adopt Modern DevOps Tools

If the cloud is a new way of computing, then DevOps is the modern way of getting things done. You should leverage new-age DevOps tools for monitoring, application performance management, log management, security, data protection, and cloud management instead of trying to build ad hoc automation or dealing with the primitive tools offered by AWS. Good tools like New Relic, Shippable, and CloudPassage can save time and effort. However, using an intelligent DevOps platform like Botmetric is the way forward if you want simplified cloud operations.

We’re at a stage now where most organisations don’t really need to be educated about the value of cloud computing, so to speak. The major advantages of cloud including agility, scalability, cost benefits, innovation and business growth are fairly well established. Rather, it is a matter of businesses trying to evaluate how they can fit cloud into their overall IT strategies.

With new innovations, changing dynamics, and the increasing demands of DevOps users, businesses are becoming more agile with each passing day. But DevOps isn’t the easiest thing in the world. We hope that your endeavor to get the best of your DevOps and AWS cloud combo becomes a breeze with these seven tips! Do drop a line or two below about what you think. Until next time, stay tuned!

The March Roundup @ Botmetric: Easier AWS Cloud Management with NoOps

Spring is here, finally! The blooming fresh buds, the sweet smell of roses, and the cheerful mood all around. Earth seems to come to life again. Seasons are vital to the transition and evolution of our planet; they also serve the evolution of human consciousness. Likewise, the transition and evolution of your AWS cloud management consciousness plays a vital role in improving the lives, and primarily the productivity, of the DevOps and cloud engineers in your organization.

Your AWS cloud management efforts, carried out by your DevOps and cloud engineers either in silos or with an integrated approach, need to be regularly monitored, nurtured, and evolved over time. And when we say AWS cloud management efforts, we include AWS cost management, AWS governance, AWS cloud security and compliance, AWS cloud operations automation, and DevOps practices.

There are, of course, a variety of AWS services at your disposal to engineer a fully automated, continuous integration and delivery system, and help you be at the bleeding edge of DevOps practices. It is, however, easier said than done.

Having the right tools at hand is what matters most, especially when you are swimming in a tide of several modules. With agile digital transformation catching on quickly in every arena, it’s high time to ensure that every one of your team’s AWS cloud management efforts counts toward optimal ROI and lowered TCO.

To that end, Botmetric has been evolving all its products — Cost & Governance, Security & Compliance, and Ops & Automation — with several NoOps and DevOps features that make the lives of DevOps engineers and cloud engineers easier.

More so, you get more out of your AWS cloud management than you think. Explore Botmetric.

In March, Botmetric rolled-out four key product features. Here’re the four new feathers in the Botmetric’s cap:

1. Define Your Own AWS Security Best Practices & Audits with Botmetric Custom Audits

What is it about: Building your own company-wide AWS security policies to attain comprehensive security of the cloud.

How will it help: Audit your infrastructure and enforce certain rules within your team, as per your requirements. You can put the custom rules or audits on auto-pilot — no need to build and run scripts every time through cron/CLI. Above all, you can automate your AWS security best practices checks.

Where can you find this feature on Botmetric: Under Security & Compliance’ Audit Report Console.

Get more details on this feature here.
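For context on what such custom audits replace, here is a minimal hand-rolled check of the kind you would otherwise run via cron/CLI, sketched in Python with boto3. The rule (flag security groups that allow SSH from anywhere) and the helper names are illustrative assumptions, not Botmetric’s API:

```python
# Illustrative hand-rolled audit: flag security groups open to the
# world on port 22. Rule and names are examples, not Botmetric's API.

def find_open_ssh_groups(security_groups):
    """Return IDs of security groups allowing 0.0.0.0/0 on port 22."""
    flagged = []
    for sg in security_groups:
        for perm in sg.get("IpPermissions", []):
            from_port = perm.get("FromPort")
            to_port = perm.get("ToPort")
            # A missing FromPort means "all ports" in the EC2 response.
            covers_22 = (from_port is None or
                         (from_port <= 22 and (to_port or 0) >= 22))
            open_world = any(r.get("CidrIp") == "0.0.0.0/0"
                             for r in perm.get("IpRanges", []))
            if covers_22 and open_world:
                flagged.append(sg["GroupId"])
                break
    return flagged

def audit_account():
    """Live version (requires boto3 and configured AWS credentials)."""
    import boto3
    ec2 = boto3.client("ec2")
    groups = ec2.describe_security_groups()["SecurityGroups"]
    return find_open_ssh_groups(groups)
```

A script like this has to be scheduled, maintained, and extended for every new rule — which is the overhead the feature removes.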

2. Increase Operational Efficiency by 5X with Botmetric Custom Jobs’ Cloud Ops Automation

What is it about: Writing Python scripts inside Botmetric to automate everyday, mundane DevOps tasks.

How will it help: Empowers DevOps engineers and cloud engineers to run desired automations with simple code logic in Python, and then schedule routine cloud tasks for increased operational excellence. Helps engineers free up a lot of time.

Where can you find this feature on Botmetric: Under Ops & Automation’ Automation Console.

Get more details on this feature here.
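As a flavor of the kind of routine task such a job might automate, here is a sketch in plain Python and boto3: stopping dev-tagged instances outside working hours. The tag convention, the working-hours window, and the function names are our assumptions for illustration, not Botmetric’s actual Custom Jobs API:

```python
# Illustrative scheduled job: stop running instances tagged
# Environment=dev outside 08:00-20:00. Conventions are assumptions.

def instances_to_stop(instances, hour):
    """Pick running dev-tagged instance IDs outside working hours."""
    if 8 <= hour < 20:
        return []
    stop = []
    for inst in instances:
        tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
        if (inst["State"]["Name"] == "running"
                and tags.get("Environment") == "dev"):
            stop.append(inst["InstanceId"])
    return stop

def run_job():
    """Live version (requires boto3 and AWS credentials)."""
    import datetime
    import boto3
    ec2 = boto3.client("ec2")
    pages = ec2.get_paginator("describe_instances").paginate()
    insts = [i for p in pages
             for r in p["Reservations"] for i in r["Instances"]]
    ids = instances_to_stop(insts, datetime.datetime.utcnow().hour)
    if ids:
        ec2.stop_instances(InstanceIds=ids)
```

Scheduling this kind of logic, rather than cron-ing it on a bastion host, is the point of the feature.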

3. Unlock Maximum AWS RDS Cost Savings with Botmetric RDS Cost Analyzer

What is it about: It is an intelligent analyzer that provides complete visibility into RDS spend.

How will it help: Discover unusual trends in your AWS RDS usage and know which component is incurring a significant chunk of the cost. Get a detailed breakdown of RDS cost by AWS instances, instance types, AWS accounts, AWS sub-services, and instance engine.

Where can you find this feature on Botmetric: Under Cost & Governance’ Analyze console.

Get more details on this feature here.
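If you prefer to pull a similar breakdown yourself, AWS’s Cost Explorer API can group RDS spend by dimension. A rough boto3 sketch follows; the date range and grouping key are illustrative, and the live call requires configured AWS credentials:

```python
# Sketch of a do-it-yourself RDS cost breakdown via Cost Explorer,
# similar in spirit to what an RDS cost analyzer surfaces.

def summarize_costs(response):
    """Collapse a Cost Explorer response into {group_key: total_cost}."""
    totals = {}
    for period in response["ResultsByTime"]:
        for group in period["Groups"]:
            key = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            totals[key] = totals.get(key, 0.0) + amount
    return totals

def fetch_rds_costs():
    """Live call (requires boto3 and AWS credentials)."""
    import boto3
    ce = boto3.client("ce")
    return ce.get_cost_and_usage(
        TimePeriod={"Start": "2017-03-01", "End": "2017-04-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "INSTANCE_TYPE"}],
        Filter={"Dimensions": {
            "Key": "SERVICE",
            "Values": ["Amazon Relational Database Service"]}},
    )
```

Swapping the `GroupBy` key gives you the per-account or per-engine views mentioned above.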

4. AWS Reserved Instance Management Made Easy with Botmetric’s Smart RI

What is it about: Automatically modifying reservations as soon as a modification is available, without going to the AWS console.

How will it help: Reduces the effort involved in modifying unused RIs. Automates RI modifications, which can occur multiple times a day, as soon as unused RIs are found. Saves cost that would otherwise be wasted on unnecessary on-demand usage and idle RIs.

Where can you find this feature on Botmetric: Under Cost & Governance’ RI console.

Get more details on this feature here. You can also read it on AWS Week-in-Review.

Knowledge Sharing @ Botmetric

Continuing our new tradition of providing quick bites and snippets on better AWS cloud management, here are a few blogs that we covered in the month of March:

The Road to Perfect AWS Reserved Instance Planning & Management in a Nutshell

98% of Google search results on ‘AWS RI benefits’ show that you can get great discounts and save tremendously compared to on-demand pricing. The fact is, this discounted pricing can be reaped provided you know what RIs are, how to use them, when to buy them, how to optimize them, how to plan them, etc. This blog covers all the details on how to perfect your AWS RI planning and management.

DevSecOps: A Game Plan for Continuous Security and Compliance for your Cloud

DevOps makes it possible for the code to deploy and function seamlessly. And where does “security” stand in this Agile, CI/CD environment? You cannot afford to compromise on security and turn your infrastructure vulnerable to hackers, for sure! So, here comes the concept of “DevSecOps” — the practices of DevSecOps. If you’re looking to bring Security Ops into DevOps, then bookmark this blog.

3 Effective DDoS Protection & Security Solutions Apt for Web Application Workloads on AWS

NexusGuard research citing an 83% increase in Distributed Denial of Service (DDoS) attacks in 2Q2016 compared to 1Q2016 indicates that these attacks will likely remain prevalent beyond 2017. Despite stringent measures, these attacks have been bringing down web applications and denying service availability to their users with botnets. Without a doubt, DDoS mitigation is pivotal. If you’re a security Ops engineer, then this blog is a must-read.

5 Interesting 2017 DevOps Trends You Cannot Miss Reading

In 2017, there is a lot of noise about what the future of DevOps will be. Here is a look at five interesting 2017 DevOps trends you cannot miss reading, and what our thought leaders think.

Don’t Let 2017 Amazon AWS S3 Outage Like Errors Affect You Again

On February 28th, 2017, several companies reported an Amazon AWS S3 Cloud Storage outage. Within minutes, hundreds of thousands of Twitter posts started making the rounds across the globe, sharing how apps went down due to this outage. No technology is perfect. All technologies might fail at some point. The best way forward is to fool-proof your system against such outages in the future, as suggested by Team Botmetric.

To Conclude:

Rain or shine, Botmetric has always striven to improve the lives of DevOps and cloud engineers. And will continue to do so with DevOps, NoOps, AIOps solutions. Get 14-Day Exclusive Botmetric Trial Now.

If you have missed rating us, Botmetric invites you to do it here. Until the next month, stay tuned with us.

DevSecOps: A Game Plan for Continuous Security and Compliance for your Cloud

Cloud is agile. Cloud engineers work continuously on iterations based on the continuous integration/continuous deployment (CI/CD) model of development and deployment. And DevOps is an integral part of the entire CI/CD spectrum. While DevOps makes it possible for the code to deploy and function seamlessly, where does “security” stand in this agile, CI/CD environment? You cannot afford to compromise on security and turn your infrastructure vulnerable to hackers, for sure! So, here comes the concept of “DevSecOps” — the practices of DevSecOps.

The concept of DevSecOps thrives on the powerful guideline: ‘Security is everyone’s responsibility.’ As we witness it, rapid application delivery is dramatically transforming how software is designed, created, and delivered. There is a sense of urgency in pushing the limits of the speed and innovation of development and delivery. The rise of DevOps creates opportunities to improve the software development life cycle (SDLC) in tandem with the moves being made toward agility and continuous delivery. However, how secure is the transition? And how can we make it secure? The answer is DevSecOps.

[mk_blockquote style=”quote-style” font_family=”none” text_size=”12″ align=”left”]We won’t simply rely on scanners and reports to make code better. We will attack products and services like an outsider to help you defend what you’ve created. We will learn the loopholes, look for weaknesses, and we will work with you to provide remediation actions instead of long lists of problems for you to solve on your own. — www.devsecops.org [/mk_blockquote]

First, let’s analyze the true state of security in DevOps. Consider these points:

  • Where does your organization stand in the transition to DevOps?
  • How are security measures included in the transition?
  • What are the opportunities and obstacles in improving security practices in a DevOps environment?

A recent study conducted by the HPE Security Fortify team provides insight into current DevOps security practices at both large and mid-sized enterprises. Analysis of the report highlights multiple gaps between the opportunity to make security a natural part of DevOps and the reality of current implementations.

The research has unearthed a few key facts, such as:

  • Everybody believes that security must be an integral part of DevOps and that DevOps transformations will actually make them more secure. However, with the higher priority placed on speed and innovation, very few DevOps programs have actually included security as part of the process, since it is deemed a much lower priority
Image Source: http://sdtimes.com/hpe-security-fortify-report-finds-application-security-lacking-devops-processes/
  • This problem could worsen in DevOps environments because silos still exist between development and security

So what does this mean, and what’s next?

Make security better; DevOps can do it

Application security and DevOps must go hand-in-hand. An opportunity lies in making security an integral part of development and truly building secure coding practices into the early stages of the software development life cycle (SDLC). Thus, DevSecOps can attain the goal of safely distributing security decisions at speed and scale to those who hold the highest level of context, without sacrificing the required safety.

With the rapid changes happening in DevOps, traditional security ceases to be an option. Very often, traditional security comes far too late in the cycle and is too slow to be cooperative in the design and release phases of a system built by iteration. With the introduction of DevSecOps, risk reduction cannot continue to be abandoned by either business operators or security staff; instead, it must be embraced and made better by everyone within the organization, supported by those with the skills to contribute security value into the system.

DevSecOps as a cooperative system

A true cooperative ecosystem will evolve when business operators are supplied with the right set of tools and processes that help with security decision making, along with security staff who use and tune those tools. Security engineers then more closely align with the DevSecOps manifesto, which speaks to the value that a security practitioner must add to a larger ecosystem. DevSecOps must continuously monitor, attack, and determine defects before non-cooperative attackers (read: external hackers) discover them.

Also, DevSecOps as a mindset and security transformation further lends itself to cooperation with other security changes. Security needs to be added to all business processes. A dedicated team needs to be created to establish an understanding of the business, tools to discover flaws, continuous testing, and the science to forecast decisions as a business operator.

Don’t miss the opportunity!

According to recent research reports, the current state is that most organizations are not implementing security within their DevOps programs. This needs to change: application security must be prioritized as a critical DevOps component. A secure SDLC must be incorporated as a disciplined practice alongside DevOps to define and implement diligent DevSecOps.

DevOps is a much-thought-about and evolved practice. Its promise of bringing down organizational barriers for swift, driven development and delivery has to be extended to security as well. Organizations must put a concentrated approach in place to build security into the development tool chain and strategically implement security automation.

DevOps is good; DevSecOps is better

Information security architects must integrate security at multiple points into DevOps workflows in a collaborative way that is largely transparent to developers, and preserves the teamwork, agility and speed of DevOps and agile development environments, delivering ‘DevSecOps’, summarizes a recent Gartner report on how to seamlessly integrate security into DevOps.

The key challenges discussed in the report are:

  • DevOps compliance is a top concern of IT leaders, but information security is seen as an inhibitor to DevOps agility.
  • Security infrastructure has lagged in its ability to become ‘software defined’ and programmable, making it difficult to integrate security controls into DevOps-style workflows in an automated, transparent way.
  • Modern applications are largely ‘assembled’, not developed, and developers often download and use known vulnerable open-source components and frameworks.

In 2012, Gartner introduced the concept of ‘DevSecOps’ (originally ‘DevOpsSec’) to the market in a report titled, “DevOpsSec: Creating the Agile Triangle.” The need for information security professionals to get actively involved in DevOps initiatives and to remain true to the spirit of DevOps, embracing its philosophy of teamwork, coordination, agility, and shared responsibility were the key identified areas in the report.

In the recent report titled, “DevSecOps: How to Seamlessly Integrate Security Into DevOps”, Gartner estimates that:

  • Fewer than 20% of enterprise security architects have engaged with their DevOps initiatives to actively and systematically incorporate information security into their DevOps initiatives
  • Fewer still have achieved the high degrees of security automation required to qualify as DevSecOps.

This calls for optimization and improvement in overall security posture by designing a set of integrated controls to deliver DevSecOps without undermining the agility and collaborative spirit of the DevOps philosophy.

With DevSecOps on the cloud, security becomes an essential part of the development process itself instead of being an afterthought.

DevSecOps is an objective where security checks and controls are applied automatically and transparently throughout the development and delivery of cloud-enabled services. Simply implementing or relying on standard security tools and processes won’t work. Secure service delivery starts in development, and the most effective DevSecOps programs start at the earliest points in the development process and follow the workload throughout its life cycle. Even if you aren’t actively using DevOps, try to implement the security best practices to accelerate the development and delivery of cloud-enabled services.

Some strategies:

  • Equip DevOps engineers to start with secure development
  • Empower DevOps engineers to take personal responsibility for security
  • Incorporate automated security vulnerability and configuration scanning for open source components and commercial packages
  • Incorporate application security testing for custom code
  • Adopt version control and tight management of infrastructure automation tools
  • Adopt “continuous security” in tandem with “continuous integration” and “continuous deployment”
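As a rough illustration of the “continuous security” point, a CI security gate can be as simple as parsing your scanner’s report and failing the build on serious findings. The report shape and severity labels below are hypothetical; adapt them to whatever scanner you actually run:

```python
# Minimal sketch of a CI security gate: fail the build when the
# vulnerability scan report contains findings at or above a threshold.
# The report format here is a made-up example.

def should_fail_build(findings, threshold="high"):
    """Return True if any finding meets or exceeds the threshold."""
    order = {"low": 0, "medium": 1, "high": 2, "critical": 3}
    return any(order.get(f.get("severity", "low"), 0) >= order[threshold]
               for f in findings)

def gate(report):
    """Exit non-zero, failing the CI stage, if the report warrants it."""
    import sys
    if should_fail_build(report.get("findings", [])):
        sys.exit(1)
```

Wired into the pipeline after the scan step, this makes security a blocking check rather than an afterthought.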

If you haven’t already, get involved in DevSecOps initiatives and start pressing all security stakeholders for better security measures. Begin with the immediate scanning of services in development for vulnerabilities, and make OSS software module identification, configuration, and vulnerability scanning a priority. Make custom code scanning a priority. As quoted by Madison Moore of SDTimes in one of her posts on DevSecOps, “mature development organizations finally realize how critical it is to weave automated security early in the SDLC.” And the Sonatype survey says it all.

A Sonatype Survey on DevSecOps

Image Source: http://sdtimes.com/report-organizations-embracing-devsecops-automation/

The Bottom Line

Successful DevSecOps initiatives must remain true to the original DevOps philosophy: teamwork and transparency, and continual improvement through continual learning.

Interested in knowing more about DevSecOps?  We are just an email away: support@botmetric.com; and very much social: Twitter, Facebook, or LinkedIn. You can also drop in a line below in the comment section and get in touch with Botmetric experts to know more.

The Road to Perfect AWS Reserved Instance Planning & Management in a Nutshell

Ninety-eight percent of Google search results on ‘AWS reserved instance (RI) benefits’ show that you can get great discounts and save tremendously compared to on-demand pricing. The fact is, this discounted pricing can be reaped provided you know what RIs are, how to use them, when to buy them, how to optimize them, how to plan them, etc.

Many organizations have successfully put RIs to their best use and have optimal RI planning and management in place, thanks to their complete knowledge of RIs.

This overarching, in-depth blog post is a beginner’s guide that helps you leverage RIs completely and correctly, so that you can achieve that perfect RI planning and management. It also provides information on how to save smartly on AWS cloud.

Upon completely reading this post, you will know the basic 5Ws of AWS RIs, how to bring RIs into practice, types of AWS Reserved Instances, payment attributes associated with instance reservations, attributes to look for while buying/configuring an RI, facts to be taken into account while committing RIs, top RI best practices, top RI governance tactics that help reduce AWS bill shock, and common misconceptions attached to RIs.

The Essence: Get Your RI Basics Right to Reduce AWS Bill Shock

The Backdrop

Today, RIs are one of the most effective cost-saving offerings from AWS. Reserved instances are charged irrespective of whether they are used or unused, and AWS offers discounted usage pricing for as long as organizations own the RIs. So, opting for reserved instances over on-demand instances without a plan might leave several instances wasted. A solid RI plan, however, will provide the requisite ROI, optimal savings, and efficient AWS spend management for the long term.


AWS RIs are purchased for several reasons, like savings, capacity reservation, and disaster recovery.

Some of them are listed below:

1. Savings

Reserved instances offer the highest-savings approach in the AWS cloud ecosystem. You can lower the costs of resources you are already using, with a lower effective rate in comparison to on-demand pricing. Generally, EC2 and RDS are the biggest line items in your AWS bill. Hence, it’s advisable to start with EC2 and RDS reservations.

A Case-in-Point: Consider an e-commerce website running on AWS on-demand instances. Unexpectedly, it starts gaining popularity among customers. As a result, the IT manager sees a huge spike in his AWS bill due to unplanned sporadic activity in the workload. Now, he is under pressure to both control his budget and efficiently run the infrastructure.

A swift solution to this problem is opting for instance reservation against on-demand resources. By reserving instances, he can not only balance capacity distribution and availability according to workload demands, but also reap substantial savings from the reservation.

P.S: Just reserving the instances will not suffice. Smart RI Planning is the way forward to reap optimal cost savings.

[mk_blockquote style=”quote-style” font_family=”none” text_size=”12″ align=”left”]Botmetric helps you make wise decisions while reserving AWS instances. It also provides great insights to manage and utilize your RIs that ultimately lead to break-even costs. Get a comprehensive free snapshot of your RI utilization with Botmetric’s free trial of Cost & Governance.[/mk_blockquote]

2. Capacity Reservation

With capacity reservation, there is a guarantee that you will be able to launch an instance at any time during the term of the reservation. Plus, with AWS’ auto-scaling feature, you will be assured that all your workloads are running smoothly irrespective of the spikes. However, with capacity reservation, there will be a lot of underutilized resources, which will be charged irrespective of whether they are used or unused.

A Case-in-Point: Consider you’re running a social network app in your US-West-1a AZ. One day you observe some spike in the workload, as your app goes viral. In such a scenario, reserved capacity and auto-scaling together ensure that the app will work seamlessly. However, during off season, when the demand is less, there will be a lot of underutilized resources that will be charged. A regular health check of the resource utilization and managing them to that end will provide both resource optimization and cost optimization.

[mk_blockquote style=”quote-style” font_family=”none” text_size=”12″ align=”left”]Botmetric performs regular health check of your usage of reservations and presents them in beautiful graphical representation for you to analyze usage optimally. Further, with the metrics, you can identify underutilization, cost-saving modification recommendations, upcoming RI expirations, and more from a single pane.[/mk_blockquote]

3. Always DR Ready

AWS supports many popular disaster recovery (DR) architectures. They could be smaller environments ideal for small customer workload data center failures or massive environments that enable rapid failover at scale. And with AWS already having data centers in several Regions across the globe, it is well-equipped to provide nimble DR services that enable rapid recovery of your IT infrastructure and data.

A Case-in-Point: Suppose the East Coast of the U.S. is hit by a hurricane and everybody lines up to move their infrastructure to the US-West regions of AWS. If you have reservations in place in US-West beforehand, your reservation guards you against capacity exhaustion. Thus, your critical resources will run on US-West without waiting in the queue.

[mk_blockquote style=”quote-style” font_family=”none” text_size=”12″ align=”left”]Botmetric scans your AWS infra like a pro with DR automation tasks to check if you have configured backups on RDS and EBS properly.[/mk_blockquote]

How to Bring RI into Practice

The rationale behind RIs is simple: getting AWS customers like you to commit to the usage of specific infrastructure. By doing so, Amazon can better manage their capacity and then pass those savings on to you.

Here is the basic information on RI types, payment options, pricing, and terms (one and three years) for you to leverage RIs to the fullest. These will help you bring RIs into practice.

Types of AWS Reserved Instances

1. Standard RIs: These can be purchased with a one-year or three-year commitment. They are best suited for steady-state usage, when you have a good understanding of your long-term requirements. They provide up to 75% savings compared to on-demand instances.

2. Convertible RIs: These can be purchased only with a three-year commitment. Unlike Standard RIs, Convertible RIs provide more flexibility and allow you to change the instance family and other parameters associated with an RI at any time. These RIs also provide savings, though only up to 45% compared to on-demand instances. Know more about it in detail.

3. Scheduled RIs: These can be launched within the time window you have selected to reserve, allowing you to reserve capacity for a stipulated period of time.

Types of AWS Reserved Instances and their characteristics

Payment Options

AWS RIs can be bought using any of the three payment options:

1. No-Upfront: The name says it all. You need not pay any upfront amount for the reservation; you are billed at a discounted hourly rate within the term regardless of usage. These are available only with a one-year commitment if you buy Standard RIs, and with a three-year commitment if you opt for Convertible RIs.

2. Partial Upfront: You pay a partial amount in advance, and the remainder is paid at a discounted hourly rate.

3. All Upfront: You make the full payment at the beginning of the term regardless of the number of hours utilized. This option provides the maximum percentage of discount.
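If you want to sanity-check a payment option yourself, the arithmetic is simple enough to sketch in a few lines of Python. The prices used in the test below are illustrative placeholders, not current AWS rates:

```python
# Back-of-the-envelope math for comparing RI payment options
# against on-demand pricing for a single instance.

HOURS_PER_YEAR = 8760

def effective_hourly(upfront, hourly, term_years):
    """Blend an upfront fee into an effective hourly rate for the term."""
    return upfront / (term_years * HOURS_PER_YEAR) + hourly

def breakeven_hours(upfront, ri_hourly, on_demand_hourly):
    """Usage hours after which the RI beats pure on-demand pricing."""
    return upfront / (on_demand_hourly - ri_hourly)
```

For example, a $500 all-upfront one-year RI replacing a $0.10/hour on-demand instance pays for itself after roughly 5,000 hours of usage, well inside the year.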

Attributes to Look at While Buying/Configuring an RI

Committing RIs

From our experience, a lot of stakeholders take a step back while committing to reservations, because it’s an important investment that needs a lot of deliberation. The fact is: once you understand the key attributes, you will have all the confidence to commit to RIs.

Realize: How to?

  • Identify the instances that are running constantly or have a high utilization rate (above 60%)
  • Estimate your future instance usage and identify the usage pattern
  • Spot the instance classes that are the possible contenders for reservation

Evaluate: How to?

Once you have realized your RI candidates, you can identify possibilities and evaluate the alternatives with the following actions:

  • Look for suitable payment plans
  • Monitor On-Demand Vs. Reserved expenditure over time
  • Identify break-even point and payback period
  • Look for requirements of Standard or Convertible RIs

Select: How to?

Once you know how to evaluate, you can analyze and choose the best option that fits your planning, and further empower your infrastructure for greater efficiency with greater savings.

Implement: How to?

Once you know your requirements for committing to an RI purchase, implementation is the next stage. It is crucial that you do it right, because discounts might not apply in all cases: for instance, if you choose incorrect attributes or perform an incorrect analysis. In the end, your planned savings might not reflect in your spreadsheets (*.XLS) as calculated.

How to Implement the Chosen RI Like a Pro

The key parameter to reserve an EC2 instance is the instance class. To apply reservation, you can either modify or go for a new RI purchase by selecting platform, region, and instance type to match the reservation.

For Instance:

Consider a company, XYZ LLC, with an on-demand portfolio of

  • 2*m3.large Linux in AZ us-east-1b
  • 4*c4.large Linux in AZ us-east-1c
  • 2*t2.large Linux in AZ us-east-1a

And XYZ LLC now purchases Standard RIs as below:

  • 4*c4.large Linux in AZ us-east-1c
  • 2*t2.large Linux in AZ us-east-1b
  • 4*x1.large Linux in AZ us-east-1a

Based on the above on-demand portfolio and purchases, the following reservations are applied for XYZ LLC:

  • 4*c4.large Linux in AZ us-east-1c. Here’s how: This exactly matches the instance class for which the reservation was made, so the discount applies
  • 2*t2.large Linux in AZ us-east-1b. Here’s how: The existing instance class is in a different AZ but in the same region, so no discount is applied. However, if you change the scope of the RI to the region, the reservation will be applied, though there is no guarantee of capacity
  • 4*x1.large Linux in AZ us-east-1a. Here’s how: The instance family doesn’t match anything in the on-demand portfolio, so the reservation will not be applied to these instances. However, if XYZ LLC had purchased Convertible RIs, modifying the reservation would never be a problem, though they would have to commit for 3 years at a lesser discount.
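The matching rules in the example above can be sketched in a few lines of Python. This is a simplified model assuming Standard, AZ-scoped RIs only; real AWS billing also handles regional scope, instance size flexibility, and other cases:

```python
# Simplified model of Standard, AZ-scoped RI matching: a reservation
# discounts an instance only when platform, AZ, and instance type
# all line up exactly.

def apply_reservations(on_demand, reservations):
    """Both args map (platform, az, instance_type) -> count.

    Returns how many on-demand instances of each class are covered.
    """
    covered = {}
    for key, demand in on_demand.items():
        covered[key] = min(demand, reservations.get(key, 0))
    return covered
```

Running XYZ LLC’s portfolio through this function reproduces the outcome above: only the c4.large purchase earns a discount.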

Making Sense of the RIs for Payer and Linked Accounts

AWS bills, evidently, include charges only on the payer account for all utilization. However, in larger organizations, where linked accounts are divided into business units, reservation purchases are made by these individual units. No matter who makes the purchase, the benefits of the RI float across the whole account family (payer + its linked accounts).

For Instance: Let’s assume X is the payer account, and Y and Z are its two linked accounts. Then, in an ideal situation (where $ denotes a purchase and U denotes where it can be applied):

If X($), then Y(U) or Z(U)

If Z($), then X(U), Y(U), or Z(U)

Hence, within the group, a reservation can be applied to any matching instance class available.

How to Govern RIs with Ease

Monitoring just a bunch of RIs is easy when the portfolio is small. However, for mid-sized and large businesses, RIs generally don’t get proper attention due to the dynamic environment and the plethora of AWS services to manage. This causes a dip in efficiency, lower-than-expected savings, and many more such issues. Nevertheless, this dip in efficiency and the bill shock can be assuaged with a few tweaks:

Make a regular note of unused and underutilized RIs:

[mk_blockquote style=”quote-style” font_family=”none” text_size=”12″ align=”left”]Unused and underutilized states of RIs are key issues that lead to inefficiency.[/mk_blockquote]

In case of unused RIs: The reservations were bought for a projected constant utilization, but the utility ended just a few months after purchase and the reservation is now dormant, or unused. If you modify and eliminate them, they will add to your cost savings.

In case of underutilized RIs: A few RIs are bought with the intention of using them for a continuous workload, but somewhere along the timeline the utility reduced and the reservation is no longer clocking its ideal utilization. If you start reusing them, they will add to your cost savings. Read Botmetric Director of Product Engineering Amarkant Singh’s post on how to go about unused and underutilized RI modifications and save cloud cost intelligently.

Finding the root cause of unused and underutilized RIs:

1. Incorrect analysis: While performing the analysis to determine an RI purchase, miscalculations or a lack of understanding of the environment can cause trouble in the management of RIs.

a. Wrong estimation of time (1 year/3 years): If you don’t understand your projected workload duration, purchasing a reservation for a longer interval (e.g., 3 years) may leave the RI unused

b. Wrong estimation of count: This could be due to overestimation or underestimation of the number of reservations required. If it’s too many, you may modify them for DR capability; but if it’s too few, you may still not realize your expected savings

c. Wrong estimation of projected workload: If you have not understood your workload, chances are that you bought RIs with incorrect attributes like term, number of instances, etc. In such cases, RIs go either unused or underutilized

2. Improper Management: RIs, irrespective of the service, can offer optimal savings only when they are modified, tuned, managed, and governed continuously according to your changing infrastructure environment in AWS cloud.

You should never stop at the reservation itself. For instance, if you have bought the newer Convertible RIs, modify them to the desired class. And if you have older RIs, get them working for you, either by breaking them into smaller instances or by combining them into a larger instance as per your requirements.

[mk_blockquote style=”quote-style” font_family=”none” text_size=”12″ align=”left”]FACT: Almost 92% of AWS users fail to manage Reserved Instances (RIs) properly, thus failing to optimize their AWS spend.[/mk_blockquote]

If you find all this overwhelming, then try Botmetric Smart RI Planner, which helps with apt RI management, right sizes your RIs and helps save cost with automation.

Top RI Best Practices to Live By

There are a few best practices you should follow to ensure your RIs work for you, and not the other way around.

Get Unused RIs back to work

If you have bought the newer Convertible RIs, modifying them to the desired class is child’s play. However, if you have older RIs, getting them back to work is not as easy as with Convertible RIs. Still, a few modifications, like breaking them into smaller instances or combining them into a larger instance according to your needs, will do the trick.

Keep an eye on expired and near-expiry RIs in your portfolio

Always list your RIs in three ways to keep a constant check on them:

Active: RIs that are either new or more than 90 days away from expiration

Near-expiry: RIs that are within 90 days of expiration. Analyze these RIs and plan accordingly for re-purchase

Expired RIs: RIs that have expired. If there is an opportunity for renewal, go ahead with it
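The three-way listing above can be sketched as a small Python helper. The field names are simplified assumptions, not the exact shape of the EC2 describe_reserved_instances response:

```python
# Bucket reservations by days until their end date, mirroring the
# active / near-expiry / expired listing. Field names are simplified.

import datetime

def bucket_by_expiry(ris, today, window_days=90):
    """ris: list of {"id": str, "end": datetime.date}."""
    buckets = {"active": [], "near-expiry": [], "expired": []}
    for ri in ris:
        days_left = (ri["end"] - today).days
        if days_left < 0:
            buckets["expired"].append(ri["id"])
        elif days_left <= window_days:
            buckets["near-expiry"].append(ri["id"])
        else:
            buckets["active"].append(ri["id"])
    return buckets
```

Feeding this from a nightly describe_reserved_instances pull gives you a standing re-purchase checklist.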

Be sure of your workload demands and what suits your profile best. Standard RIs work like a charm, in terms of cost savings and flexibility, only when you have a good understanding of your long-term requirements.

And if you have no idea of your long-term demand, then Convertible RIs are perfect, because you can have a new instance type or operating system at your disposal in minutes, without resetting the term.

[mk_blockquote style=”quote-style” font_family=”none” text_size=”12″ align=”left”]Botmetric has found a smart way to perform the above. It uses the NoOps and AIOps concept to put the RIs back to work. Read this blog to know how.[/mk_blockquote]

Compare on-demand vs. reserved instances to improve utilization

If you want to improve the utilization of your reservations, the game plan is to track on-demand vs. reserved instance utilization. In our experience, RIs offer the deepest discounts over time. Read this blog post to know the five tradeoff factors to consider when choosing AWS RIs over on-demand resources.


For further benefit, an RI utilization report that surfaces the following insights will help:

1. Future reservation requirements

2. Unused or underutilized RIs

3. Opportunities to re-use existing RIs
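At its core, the utilization figure in such a report is simply the share of purchased RI hours actually covered by running instances. A minimal sketch, with illustrative numbers rather than Botmetric's actual calculation:

```python
def ri_utilization_pct(purchased_hours, covered_hours):
    """Percentage of purchased RI hours actually covered by running instances."""
    if purchased_hours <= 0:
        return 0.0
    # Coverage can never exceed what was purchased, so cap at 100%.
    return 100.0 * min(covered_hours, purchased_hours) / purchased_hours


# Example: two RIs over a 720-hour month = 1,440 purchased hours,
# of which 1,080 were covered by matching running instances.
print(ri_utilization_pct(1440, 1080))  # 75.0
```

Anything well below 100% flags unused or underutilized RIs (insight 2 above) and, by extension, opportunities to re-use them.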

Here is a sample Botmetric RI Utilization Graph for your reference:

AWS Reserved Instance Management Now Made Easy with Botmetric’s Smart RI

Before wrapping up, here are a few common RI misconceptions that you must know.

Common RI Misconceptions You Must Know

  • Buying an RI for an EC2 instance type you already run does not give you two instances; an RI is a billing discount applied to one instance
  • RIs are available not only in EC2 and RDS but in five other services as well
  • Purchasing RIs alone, without monitoring and managing them, may not give you any savings; managing and optimizing them is the key
  • Never purchase an RI for an instance ID; purchase it for an instance class
  • Buying a lot of RIs will not, by itself, bring down the AWS bill
  • Managing RIs is a continuous, ongoing process rather than a one-time task; a few key best practices, if followed, can deliver the desired savings and greater efficiency
  • Older RIs cannot get the regional benefit
  • RIs can't be re-utilized if you fail to understand your workload distribution
  • RIs can't be returned; instead, the AWS RI Marketplace lets you sell your RIs to others

The Wrap-Up

RIs, as noted earlier, are the highest-saving option in your dynamic cloud environment. But buying RIs alone is not sufficient: a proper roadmap and management, coupled with intelligent insights, will get you the desired savings.

AWS is always rolling out new changes. Understanding its services and knowing how to use them to your advantage will always prove beneficial for your cloud strategy, irrespective of your business size, and above all for startups. And if you find all this overwhelming, just try Botmetric's Cost and Governance.

Get Started