The March Roundup @ Botmetric: Easier AWS Cloud Management with NoOps

Spring is here, finally! The blooming fresh buds, the sweet smell of roses, and the cheerful mood all around. Earth seems to come to life again. Seasons are vital to the transition and evolution of our planet, and they serve the evolution of human consciousness too. Likewise, the transition and evolution of your AWS Cloud Management consciousness plays a vital role in improving the lives, primarily the productivity, of DevOps and cloud engineers in your organization.

Your AWS Cloud Management efforts, carried out by your DevOps engineers and cloud engineers either in silos or with an integrated approach, need to be regularly monitored, nurtured, and evolved from time to time. And when we say AWS Cloud Management efforts, we include AWS cost management, AWS governance, AWS cloud security and compliance, AWS cloud operations automation, and DevOps practices.

There are, of course, a variety of AWS services at your disposal to engineer a fully automated, continuous integration and delivery system, and help you be at the bleeding edge of DevOps practices. It is, however, easier said than done.

Having the right tools at hand is what matters the most, especially when you are swimming in a tide of several modules. With agile digital transformations catching on quickly in every arena, it's high time you ensure that every AWS Cloud Management effort your team makes counts towards optimal ROI and lowered TCO.

To that end, Botmetric has been evolving all its products (Cost & Governance, Security & Compliance, and Ops & Automation) with several NoOps and DevOps features that make the lives of DevOps engineers and cloud engineers easier.

What's more, you get more out of your AWS cloud management than you think. Explore Botmetric.

In March, Botmetric rolled out four key product features. Here are the four new feathers in Botmetric's cap:

1. Define Your Own AWS Security Best Practices & Audits with Botmetric Custom Audits

What is it about: Building your own company-wide AWS security policies to attain comprehensive security of the cloud.

How will it help:  Audit your infrastructure and enforce certain rules within your team, as per your requirements. You can put the custom rules or audits on auto-pilot — no need to build and run scripts every time through cron/CLI. Above all, you can automate your AWS security best practices checks.

Where can you find this feature on Botmetric: Under Security & Compliance's Audit Report console.

Get more details on this feature here.

2. Increase Operational Efficiency by 5X with Botmetric Custom Jobs’ Cloud Ops Automation

What is it about: Writing Python scripts inside Botmetric to automate everyday, mundane DevOps tasks.

How will it help: Empowers DevOps engineers and cloud engineers to run desired automation with simple code logic in Python, and then schedule routine cloud tasks for increased operational excellence. It helps engineers free up a lot of time.

Where can you find this feature on Botmetric: Under Ops & Automation's Automation console.

Get more details on this feature here.
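For a flavor of the kind of routine task such a job might automate, here is a minimal, hypothetical Python sketch using boto3 (it is not Botmetric's Custom Jobs API, and the Environment=dev tag is an assumption) that stops running development EC2 instances:

import boto3

def stop_dev_instances(region="us-east-1"):
    """Stop running EC2 instances tagged Environment=dev (hypothetical tag)."""
    ec2 = boto3.client("ec2", region_name=region)
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Environment", "Values": ["dev"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [
        instance["InstanceId"]
        for reservation in reservations
        for instance in reservation["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids

if __name__ == "__main__":
    print("Stopped instances:", stop_dev_instances())

Scheduled to run every evening, a small script like this takes one more mundane task off an engineer's plate.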

3. Unlock Maximum AWS RDS Cost Savings with Botmetric RDS Cost Analyzer

What is it about: It is an intelligent analyzer that provides complete visibility into RDS spend.

How will it help: Discover unusual trends in your AWS RDS usage and know which component is incurring a significant chunk of the cost. Get a detailed break-up of RDS cost by AWS instances, instance types, AWS accounts, AWS sub-services, and instance engine.

Where can you find this feature on Botmetric: Under Cost & Governance's Analyze console.

Get more details on this feature here.

4. AWS Reserved Instance Management Made Easy with Botmetric’s Smart RI

What is it about: Automatically modifying reservations as soon as a modification is available, without going to the AWS console.

How will it help: Reduce the effort involved in modifying unused RIs. Automate the modification of RIs, which can occur multiple times a day, as soon as unused RIs are found. Save the cost that would otherwise have been wasted on unnecessary on-demand usage and wasted RIs.

Where can you find this feature on Botmetric: Under Cost & Governance's RI console.

Get more details on this feature here. You can also read it on AWS Week-in-Review.

Knowledge Sharing @ Botmetric

Continuing our new tradition of providing quick bites and snippets on better AWS cloud management, here are a few blogs that we covered in the month of March:

The Road to Perfect AWS Reserved Instance Planning & Management in a Nutshell

98% of Google searches on 'AWS RI benefits' show that you can get great discounts and save tremendously compared to on-demand pricing. The fact is, this discounted pricing can be reaped provided you know what RIs are, how to use them, when to buy them, how to optimize them, how to plan them, and so on. This blog covers all the details on how to perfect your AWS RI planning and management.

DevSecOps: A Game Plan for Continuous Security and Compliance for your Cloud

DevOps makes it possible for code to deploy and function seamlessly. But where does "security" stand in this agile, CI/CD environment? You cannot afford to compromise on security and leave your infrastructure vulnerable to hackers, for sure! So, here comes the concept of "DevSecOps." If you're looking to bring security ops into DevOps, then bookmark this blog.

3 Effective DDoS Protection & Security Solutions Apt for Web Application Workloads on AWS

NexusGuard research citing an 83% increase in Distributed Denial of Service (DDoS) attacks in 2Q2016 compared to 1Q2016 indicates that these attacks will continue to be prevalent even beyond 2017. Despite stringent measures, these attacks have been bringing down web applications and denying service availability to their users with botnets. Without a doubt, DDoS mitigation is pivotal. If you're a security ops engineer, then this blog is a must-read.

5 Interesting 2017 DevOps Trends You Cannot Miss Reading

In 2017, there is a lot of noise about what the future of DevOps will be. Here is a look at five interesting 2017 DevOps trends you cannot miss reading, and what our thought leaders think.

Don’t Let 2017 Amazon AWS S3 Outage Like Errors Affect You Again

On February 28th, 2017, several companies reported an Amazon AWS S3 cloud storage outage. Within minutes, hundreds of thousands of Twitter posts started making the rounds across the globe, with users sharing how their apps went down due to this outage. No technology is perfect; all technologies might fail at some point. The best way forward is to fool-proof your system against such outages in the future, as suggested by Team Botmetric.

To Conclude:

Rain or shine, Botmetric has always striven to improve the lives of DevOps and cloud engineers, and it will continue to do so with DevOps, NoOps, and AIOps solutions. Get the 14-Day Exclusive Botmetric Trial Now.

If you have missed rating us, Botmetric invites you to do it here. Until the next month, stay tuned with us.

DevSecOps: A Game Plan for Continuous Security and Compliance for your Cloud

Cloud is agile. Cloud engineers work continuously on iterations based on the continuous integration/continuous deployment (CI/CD) model of development and deployment. And DevOps is an integral part of the entire CI/CD spectrum. While DevOps makes it possible for code to deploy and function seamlessly, where does "security" stand in this agile, CI/CD environment? You cannot afford to compromise on security and leave your infrastructure vulnerable to hackers, for sure! So, here comes the concept of "DevSecOps."

The concept of DevSecOps thrives on a powerful guideline: "Security is everyone's responsibility." As we witness it, rapid application delivery is dramatically transforming how software is designed, created, and delivered. There is a sense of urgency in pushing the limits of the speed and innovation of development and delivery. The rise of DevOps creates opportunities to improve the software development life cycle (SDLC) in tandem with the moves being made toward agility and continuous delivery. However, how secure is the transition? And how can we make it secure? The answer is DevSecOps.

"We won't simply rely on scanners and reports to make code better. We will attack products and services like an outsider to help you defend what you've created. We will learn the loopholes, look for weaknesses, and we will work with you to provide remediation actions instead of long lists of problems for you to solve on your own." (www.devsecops.org)

First, let’s analyze the true state of security in DevOps. Consider these points:

  • Where does your organization stand in the transition to DevOps?
  • How are security measures included in the transition?
  • What are the opportunities and obstacles in improving security practices in a DevOps environment?

A recent study conducted by the HPE Security Fortify team provides insight into current DevOps security practices at both large and mid-sized enterprises. Analysis of the report highlights multiple gaps between the opportunity to have security as a natural part of DevOps and the reality of current implementations.

The research has unearthed a few key facts, such as:

  • Everybody believes that security must be an integral part of DevOps and that DevOps transformations will actually make them more secure. However, with a higher priority on speed and innovation, very few DevOps programs have actually included security as part of the process, since it is deemed to be of much lower priority
(Image: HPE Survey on Security in DevOps. Source: http://sdtimes.com/hpe-security-fortify-report-finds-application-security-lacking-devops-processes/)
  • This problem could worsen in DevOps environments because silos still exist between development and security

So what about it and what’s next?

Make security better; DevOps can do it

Application security and DevOps must go hand in hand. An opportunity lies in making security an integral part of development and truly building secure coding practices into the early stages of the software development life cycle (SDLC). Thus, DevSecOps can attain the goal of safely distributing security decisions at speed and scale to those who hold the highest level of context, without sacrificing the required safety.

With the rapid changes happening in DevOps, traditional security ceases to be an option. Very often, traditional security comes far too late in the cycle and is too slow to be cooperative in the design and release phases of a system that is built by iteration. However, with the introduction of DevSecOps, risk reduction cannot continue to be abandoned by either the business operators or the security staff; instead, it must be embraced and made better by everyone within the organization and supported by those with the skills to contribute security value to the system.

DevSecOps as a cooperative system

A true cooperative ecosystem will evolve when business operators are supplied with the right set of tools and processes that help with security decision making, along with security staff who use and tune those tools. Security engineers then align more closely with the DevSecOps manifesto, which speaks to the value that a security practitioner must add to the larger ecosystem. DevSecOps must continuously monitor, attack, and determine defects before non-cooperative attackers (read: external hackers) discover them.

Also, DevSecOps as a mindset and security transformation further lends itself towards cooperation with other security changes. Security needs to be added to all business processes. A dedicated team needs to be created to establish an understanding of the business, tools to discover flaws, continuous testing, and the science to forecast how to make decisions as a business operator.

Don’t miss the opportunity!

According to recent research reports, the current state is that most organizations are not implementing security within their DevOps programs. This needs to change, and application security must be prioritized as a critical DevOps component. A secure SDLC must be incorporated as a disciplined practice alongside DevOps to define and implement diligent DevSecOps.

DevOps is a much-thought-about and evolved practice. Its promise of bringing down organizational barriers in favor of swift, driven development and delivery has to be translated into security as well. Organizations must have a concentrated approach in place to build security into the development tool chain and strategically implement security automation.

DevOps is good; DevSecOps is better

Information security architects must integrate security at multiple points into DevOps workflows in a collaborative way that is largely transparent to developers, and preserves the teamwork, agility and speed of DevOps and agile development environments, delivering ‘DevSecOps’, summarizes a recent Gartner report on how to seamlessly integrate security into DevOps.

The key challenges discussed in the report are:

  • DevOps compliance is a top concern of IT leaders, but information security is seen as an inhibitor to DevOps agility.
  • Security infrastructure has lagged in its ability to become ‘software defined’ and programmable, making it difficult to integrate security controls into DevOps-style workflows in an automated, transparent way.
  • Modern applications are largely ‘assembled’, not developed, and developers often download and use known vulnerable open-source components and frameworks.

In 2012, Gartner introduced the concept of 'DevSecOps' (originally 'DevOpsSec') to the market in a report titled "DevOpsSec: Creating the Agile Triangle." The need for information security professionals to get actively involved in DevOps initiatives while remaining true to the spirit of DevOps, embracing its philosophy of teamwork, coordination, agility, and shared responsibility, was the key area identified in the report.

In the recent report titled, “DevSecOps: How to Seamlessly Integrate Security Into DevOps”, Gartner estimates that:

  • Fewer than 20% of enterprise security architects have engaged with their DevOps initiatives to actively and systematically incorporate information security
  • Fewer still have achieved the high degrees of security automation required to qualify as DevSecOps.

This calls for optimization and improvement in overall security posture by designing a set of integrated controls to deliver DevSecOps without undermining the agility and collaborative spirit of the DevOps philosophy.

With DevSecOps on the cloud, security becomes an essential part of the development process itself instead of being an afterthought.

DevSecOps is an objective where security checks and controls are applied automatically and transparently throughout the development and delivery of cloud-enabled services. Simply implementing or relying on standard security tools and processes won’t work. Secure service delivery starts in development, and the most effective DevSecOps programs start at the earliest points in the development process and follow the workload throughout its life cycle. Even if you aren’t actively using DevOps, try to implement the security best practices to accelerate the development and delivery of cloud-enabled services.

Some strategies:

  • Equip DevOps engineers to start with secure development
  • Empower DevOps engineers to take personal responsibility for security
  • Incorporate automated security vulnerability and configuration scanning for open source components and commercial packages (see the sketch after this list)
  • Incorporate application security testing for custom code
  • Adopt version control and tight management of infrastructure automation tools
  • Adopt "continuous security" in tandem with "continuous integration" and "continuous deployment"
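As a concrete, hedged illustration of the scanning strategy above, the Python sketch below wraps the open-source pip-audit tool (an assumption; swap in the scanner your team actually uses) so that a CI job fails when known-vulnerable dependencies are found:

import subprocess
import sys

def scan_dependencies(requirements_file="requirements.txt"):
    """Run pip-audit against a requirements file; return True if no findings."""
    # pip-audit exits with a non-zero status when known vulnerabilities are found
    result = subprocess.run(
        ["pip-audit", "-r", requirements_file],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    return result.returncode == 0

if __name__ == "__main__":
    # Fail the CI build when vulnerable dependencies are reported
    sys.exit(0 if scan_dependencies() else 1)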

If you haven't already, get involved in DevSecOps initiatives and start pressuring all security stakeholders for better security measures. Begin with the immediate scanning of services in development for vulnerabilities, and make OSS software module identification, configuration, and vulnerability scanning a priority. Make custom code scanning a priority. As quoted by Madison Moore of SDTimes in one of her posts on DevSecOps, "mature development organizations finally realize how critical it is to weave automated security early in the SDLC." And the Sonatype survey says it all.

(Image: A Sonatype Survey on DevSecOps. Source: http://sdtimes.com/report-organizations-embracing-devsecops-automation/)

The Bottom Line

Successful DevSecOps initiatives must remain true to the original DevOps philosophy: teamwork and transparency, and continual improvement through continual learning.

Interested in knowing more about DevSecOps?  We are just an email away: support@botmetric.com; and very much social: Twitter, Facebook, or LinkedIn. You can also drop in a line below in the comment section and get in touch with Botmetric experts to know more.

Define Your Own AWS Security Best Practices & Audits Now with Botmetric Custom Audits

Implementing all the critical cloud security controls, including AWS security best practices & audits, according to your organization’s needs is pivotal. These controls act as a cornerstone in bolstering your AWS security posture and thus help tackle the threat landscape better for your cloud infrastructure.

Various research sources cite that verifying security policies, getting complete visibility into infrastructure security, and attaining compliance are the top AWS cloud security challenges, or pet peeves, among CISOs and their teams. That is because each and every company has unique needs and use cases, and addressing each use case requires time and effort.

To address such "unique" challenges, further ensure comprehensive AWS cloud security, and continue our journey towards NoOps (a logical progression of DevOps with the philosophy that humans should solve new problems and machines should solve known problems), Botmetric built an extensible capability, Custom Audits, into its Security & Compliance product.

Try Botmetric Custom Audits Now

The Game Plan: Build & Automate AWS Security Audits like the Way You/Your Team Wants

With Botmetric’s new Custom Audit, you can define your own AWS Cloud security best practices checks as required by your organization.

Botmetric currently offers 200+ audits aligned to current cloud security best practices. These cover most of the common security use cases, like root account access keys, MFA not enabled for users, IPs open to the world, etc. The team, however, realized the need for checks based on a company's own use cases.

With Botmetric's new Custom Audit, you can now audit your infrastructure and enforce certain rules within your team, as per your requirements. Ultimately, you'll have to worry less about AWS security best practices, and you can also fine-tune your infra for Disaster Recovery (DR) and performance.

You can build and configure custom audits using a Python script and define their logic the way you want. Once you configure your custom audits, Botmetric takes the responsibility of running those checks every day and ensures AWS best practices are being followed throughout the company.

Key Takeaways of Botmetric Security & Compliance’s Custom Audits:

  • Enforce several custom rules or audits within your team
  • Put the custom rules or audits on auto-pilot. No need to build and run scripts every time through cron/CLI
  • Focus on solving the core application logic rather than scripting mundane tasks
  • View data of each audit result as well as the last execution time of the audit
  • Filter results based on the region and severity for each audit. Tag the severity level from low to high
  • Download results of custom audits as reports with filters applied to circulate it among the teams internally
  • Get a complete view of AWS health check, taking the custom audits into account along with the built-in audits, thereby further increasing the security, DR, and performance of your infrastructure

You can create Custom Audits through the Configure section available in the Botmetric Security & Compliance Audit Report console. Once up and running, Botmetric will display the list of custom audits along with the details of each custom audit for an easier view.

Botmetric Custom Audits

The Case in Point #1: Inactive IAM Users Login Check

Suppose you want to have a regular check on all the inactive IAM users who have not logged in for 30 days. Generally, a DevOps engineer or a security engineer will write a script in cron/CLI every time to look up the list, or do it manually. For an engineer like me, this is a mundane task. Any day, I would like to automate it.

Imagine you build and configure custom Python code once that looks up your infrastructure every day for active/inactive IAM users and displays the list in front of you with just one click. Awesome, right?

This is what Botmetric Custom Audit does.

 

Inactive AWS IAM Users Login Check

 

Using this custom audit, Botmetric scans the infrastructure every day and shows the list of inactive users who have not logged in for more than 30 days.

 

Inactive AWS IAM Users Login Check
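For readers curious about what such a check involves under the hood, here is a minimal do-it-yourself sketch using boto3 that lists IAM users whose console password has not been used for 30 or more days. It is an illustration only, not Botmetric's implementation:

import boto3
from datetime import datetime, timedelta, timezone

def inactive_iam_users(days=30):
    """List IAM users whose console password has not been used in `days` days."""
    iam = boto3.client("iam")
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    inactive = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            # PasswordLastUsed is absent if the user never logged in to the console
            last_used = user.get("PasswordLastUsed")
            if last_used is None or last_used < cutoff:
                inactive.append(user["UserName"])
    return inactive

if __name__ == "__main__":
    print("Inactive IAM users:", inactive_iam_users())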

The Case in Point #2: EC2 Instances Without Roles Attached

As a best practice, all instances must be accessible only with roles. As a general practice, a DevOps engineer or a security engineer will write a script in cron/CLI every time to look up the list, or do it manually. By writing custom Python code once and configuring it on Botmetric as shown in the use case above, you can put this audit on auto-pilot. Botmetric will scan your AWS infrastructure every day to look up all those EC2 instances that are not attached to roles. Thus, you save time and effort on these mundane AWS security checks.
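Again purely as an illustration of the underlying logic (not Botmetric's code), a boto3 sketch of this audit could look like the following:

import boto3

def instances_without_roles(region="us-east-1"):
    """Return IDs of running EC2 instances with no IAM instance profile attached."""
    ec2 = boto3.client("ec2", region_name=region)
    offenders = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                # The IamInstanceProfile key is missing when no role is attached
                if "IamInstanceProfile" not in instance:
                    offenders.append(instance["InstanceId"])
    return offenders

if __name__ == "__main__":
    print("EC2 instances without an IAM role:", instances_without_roles())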

The Bottom Line: Automate AWS Security Best Practices Checks

AWS offers a list of AWS best practices. However, regular AWS security and compliance checks that take your company's needs into account are critical to attaining a complete security posture. With Custom Audits, you can create your own custom best practice checks that align with your organizational standards or industry standards such as HIPAA, SOC, PCI-DSS, and more.

Currently, this new feature is in Beta. If you are a current Botmetric user, then Team Botmetric invites you to run your desired routine rule-checks through Botmetric and share your feedback.

Want to explore this feature? Then take up a 14-day trial. If you have any questions on AWS security or AWS security best practices, just drop a line below in the comment section or tweet to us at @BotmetricHQ.

Unlock Maximum AWS RDS Cost Savings Now with Botmetric RDS Cost Analyzer

Amazon RDS (Relational Database Service) is an easy-to-set-up, easy-to-operate, and scalable relational database offering from AWS cloud. It provides you with six familiar database engines to choose from: Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, and Microsoft SQL Server. If you are an RDS user, then you or your peers might have noticed that it is often the second-largest figure in your AWS bill. So, if you're looking for quick AWS RDS cost savings, then you're at the right place.

With much delight, I'd like to introduce Botmetric RDS Cost Analyzer, an intelligent analyzer that provides complete visibility into RDS spend with a detailed breakdown by AWS instances, instance types, AWS accounts, AWS sub-services, and instance engine. All this, just so that you become a wise AWS user. Above all, you can figure out unusual trends in your AWS RDS usage and discover which component is incurring the major chunk of the cost.

What Affects AWS RDS Cost Savings?

On average, AWS RDS usage accounts for 15-20% of your AWS bill. Moreover, most of the underlying services like storage, data transfer, instance usage, etc., are used in conjunction with the RDS service when deploying applications on AWS cloud, so these major components affect your AWS RDS cost savings. There are currently six components that cannot be ignored:

Compute: RDS usage is charged per hour, just like EC2. And it varies based on the selected instance type.

PIOPS Storage: PIOPS storage is more expensive; however, it is preferred as it can be configured for high-performance workloads running at up to 30K I/O operations per second.

I/O Costs: You will be charged for I/O usage on the storage media if you have configured standard storage.

Data Transfer: You will be charged for data transferred out to other regions or out to the internet. Plus, you will also be charged for transferring snapshots across regions.

Snapshots: With AWS, you can take point in time snapshots of the storage, which is awesome. However, you’ll be charged for keeping snapshots beyond the time at which they are useful. This can result in recurring AWS RDS costs.

Availability Zone: AWS RDS offers fully managed, highly available Multi-AZ deployments. This requires an instance to run in two different zones, so the average compute cost gets doubled.
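If you want a rough, do-it-yourself view of how these components add up, the hedged sketch below uses the AWS Cost Explorer API via boto3 to break RDS spend down by usage type; the date range and account access are assumptions:

import boto3

def rds_cost_by_usage_type(start="2017-03-01", end="2017-04-01"):
    """Sum a month's RDS unblended cost per usage type via Cost Explorer."""
    ce = boto3.client("ce")
    response = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        Filter={
            "Dimensions": {
                "Key": "SERVICE",
                "Values": ["Amazon Relational Database Service"],
            }
        },
        GroupBy=[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
    )
    breakdown = {}
    for result in response["ResultsByTime"]:
        for group in result["Groups"]:
            usage_type = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            breakdown[usage_type] = breakdown.get(usage_type, 0.0) + amount
    return breakdown

if __name__ == "__main__":
    for usage_type, cost in sorted(rds_cost_by_usage_type().items()):
        print(f"{usage_type}: ${cost:.2f}")

The usage-type lines returned roughly correspond to the components above (instance hours, PIOPS, storage, data transfer, and so on).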

How Botmetric RDS Analyzer Can Help Reduce AWS RDS Cost?

Botmetric's Cost & Governance performs a deep dive into the cost items associated with AWS RDS by instances, instance types, AWS accounts, AWS sub-services, and instance engine.

 

Get Visibility into AWS RDS Analyze

 

Get Started with Cost & Governance RDS Analyzer Now

See how Botmetric can help.

Know your AWS RDS spend by instance:

With Botmetric RDS Cost Analyzer, you now have the capability to discover spend on your RDS service across different RDS instance families. You can further filter it down by AWS account. Botmetric also provides a cost breakdown by instance type, so that you know which instance type incurs the highest cost. You can also discover any unpredictable ongoing spend with a certain instance type.

Know your RDS spend by instance

Know RDS cost breakdown by sub-services:

If you wish to visualize the split of individual costs associated with sub-services in RDS, then Botmetric RDS Cost Analyzer brings together the cost of data transfer, storage, and instance usage, giving you the opportunity to understand the split of cost incurred by these sub-services.

RDS cost breakdown by sub-services:

Discover cost split by RDS instance engine:

Botmetric RDS Cost Analyzer helps you realize which database engine is driving how much cost. This helps you understand your spending patterns among engines such as SQL Server, PostgreSQL, MySQL, etc.

Discover cost split by RDS instance engine

Know RDS cost split by AWS accounts:

Using Botmetric RDS Cost Analyzer, you can filter RDS spend for individual linked accounts under your payer account to understand your AWS RDS usage.

RDS cost by AWS accounts

Export AWS RDS Cost data in CSV:

Best of all, with Botmetric RDS Cost Analyzer, you can export or download the different breakdowns of RDS cost into CSV files, so you can circulate them among your team members and use them for any internal analysis. The data export option covers the cost breakdown by instance types, AWS accounts, instance engine, etc.

Export RDS Cost Data in CSV

Concluding Thoughts

Even though AWS provides a Simple Monthly Calculator to calculate AWS RDS cost, you need complete knowledge of the dynamics surrounding your AWS RDS usage. With Botmetric RDS Cost Analyzer, you can easily analyze your current RDS spending split by month, day, or hour, in formats you love, not just by instances, instance types, accounts, sub-services, and instance engine. Why? So that you can get optimal AWS RDS cost savings. You can also export the spend report as a CSV file or get a beautiful graphical view in either bar graph or pie chart format.

Get Botmetric Cost & Governance today to check out what Botmetric RDS Cost Analyzer offers. See for yourself how it helps increase your AWS ROI.

The Road to Perfect AWS Reserved Instance Planning & Management in a Nutshell

Ninety-eight percent of Google searches on 'AWS reserved instance (RI) benefits' show that you can get great discounts and save tremendously compared to on-demand pricing. The fact is, this discounted pricing can be reaped provided you know what RIs are, how to use them, when to buy them, how to optimize them, how to plan them, etc.

Many organizations have successfully put RIs to their best use and have optimal RI planning and management in place, thanks to the complete knowledge they have.

This overarching, in-depth blog post is a beginner's guide that helps you leverage RIs completely and correctly, so that you can put perfect RI planning and management in place. It also provides information on how to save smartly on AWS cloud.

Upon completely reading this post, you will know the basic 5Ws of AWS RIs, how to bring RIs into practice, types of AWS Reserved Instances, payment attributes associated with instance reservations, attributes to look for while buying/configuring an RI, facts to be taken into account while committing RIs, top RI best practices, top RI governance tactics that help reduce AWS bill shock, and common misconceptions attached to RIs.

The Essence: Get Your RI Basics Right to Reduce AWS Bill Shock

The Backdrop

Today, RIs are one of the most effective cost-saving options offered by AWS. Reserved instances are charged irrespective of whether they are used or unused, and AWS offers discounted usage pricing for as long as organizations own the RIs. So, opting for reserved instances over on-demand instances without a plan might leave several instances wasted. Solid RI planning, however, will provide the requisite ROI, optimal savings, and efficient AWS spend management for the long term.

 

AWS RIs are purchased for several reasons, like savings, capacity reservation, and disaster recovery.

Some of them are listed below:

1. Savings

Reserved instances provide the highest-savings approach in the AWS cloud ecosystem. You can lower the cost of resources you are already using, at a lower effective rate in comparison to on-demand pricing. Generally, EC2 and RDS are the contenders for the highest figures in your AWS bill. Hence, it's advisable to go for EC2 and RDS reservations.

A Case-in-point: Consider an e-commerce website running on AWS on-demand instances. Unexpectedly, it starts gaining popularity among customers. As a result, the IT manager sees a huge spike in his AWS bill due to unplanned, sporadic activity in the workload. Now, he is under pressure to control his budget and run the infrastructure efficiently.

A swift solution to this problem is opting for instance reservations against on-demand resources. By reserving instances, he can not only balance capacity distribution and availability according to workload demands but also reap substantial savings from the reservation.

P.S: Just reserving the instances will not suffice. Smart RI Planning is the way forward to reap optimal cost savings.

Botmetric helps you make wise decisions while reserving AWS instances. It also provides great insights to manage and utilize your RIs that ultimately lead to break-even costs. Get a comprehensive free snapshot of your RI utilization with Botmetric's free trial of Cost & Governance.

2. Capacity Reservation

With capacity reservation, there is a guarantee that you will be able to launch an instance at any time during the term of the reservation. Plus, with AWS’ auto-scaling feature, you will be assured that all your workloads are running smoothly irrespective of the spikes. However, with capacity reservation, there will be a lot of underutilized resources, which will be charged irrespective of whether they are used or unused.

A Case-in-Point: Consider you’re running a social network app in your US-West-1a AZ. One day you observe some spike in the workload, as your app goes viral. In such a scenario, reserved capacity and auto-scaling together ensure that the app will work seamlessly. However, during off season, when the demand is less, there will be a lot of underutilized resources that will be charged. A regular health check of the resource utilization and managing them to that end will provide both resource optimization and cost optimization.

Botmetric performs regular health checks of your reservation usage and presents them in beautiful graphical representations for you to analyze usage optimally. Further, with the metrics, you can identify underutilization, cost-saving modification recommendations, upcoming RI expirations, and more from a single pane.

3. Always DR Ready

AWS supports many popular disaster recovery (DR) architectures. They could be smaller environments ideal for small customer workload data center failures or massive environments that enable rapid failover at scale. And with AWS already having data centers in several Regions across the globe, it is well-equipped to provide nimble DR services that enable rapid recovery of your IT infrastructure and data.

A Case-in-Point: Suppose the East Coast of the U.S. is hit by a hurricane and everybody lines up to move their infrastructure to the US-West regions of AWS. If you have reservations in place beforehand in US-West, then your reservations guarantee capacity even when the on-demand pool is exhausted. Thus, your critical resources will run in US-West without waiting in the queue.

Botmetric scans your AWS infra like a pro, with DR automation tasks to check whether you have configured backups on RDS and EBS properly.

How to Bring RI into Practice

The rationale behind RIs is simple: getting AWS customers like you to commit to the usage of specific infrastructure. By doing so, Amazon can better manage its capacity and then pass those savings on to you.

Here is the basic information on RI types, options, pricing, and terms (1 and 3 years) for you to leverage RIs to the fullest. It will help you bring RIs into practice.

Types of AWS Reserved Instances

1. Standard RIs: These can be purchased with a one-year or three-year commitment. They are best suited for steady-state usage, and for when you have a good understanding of your long-term requirements. They provide up to 75% savings compared to on-demand instances.

2. Convertible RIs: These can be purchased only with a three-year commitment. Unlike Standard RIs, Convertible RIs provide more flexibility and allow you to change the instance family and other parameters associated with an RI at any time. These RIs also provide savings, but only up to 45% compared to on-demand instances. Know more about them in detail.

3. Scheduled RIs: These can be launched within the time window you have selected to reserve. This allows you to reserve capacity for a stipulated amount of time.

Types of AWS Reserved Instances and their characteristics

Payment Options

AWS RIs can be bought using any of the three payment options:

1. No-Upfront: The name says it all. You need not pay any upfront amount for the reservation. Plus, you are billed at a discounted hourly rate within the term regardless of usage. This option is only available with a one-year commitment if you buy a Standard RI, and with a three-year commitment if you opt for a Convertible RI.

2. Partial Upfront: You pay a partial amount in advance, and the remaining amount is paid at a discounted hourly rate.

3. All Upfront: You make the full payment at the beginning of the term regardless of the number of hours utilized. This option provides the maximum percentage of discount.

Attributes to Look at While Buying/Configuring an RI

Committing RIs

From our experience, a lot of stakeholders take a step back while committing to a reservation, because it's an important investment that needs a lot of deliberation. The fact is: once you understand the key attributes, you have all the confidence to commit to RIs.

Realize: How to?

  • Identify the instances that are running constantly or have a higher utilization rate (above 60%)
  • Estimate your future instance usage and identify the usage pattern
  • Spot the instance classes that are the possible contenders for reservation

Evaluate: How to?

Once you know how to realize the RIs, you can identify possibilities and evaluate the alternatives with the following actions:

  • Look for suitable payment plans
  • Monitor on-demand vs. reserved expenditure over time
  • Identify the break-even point and payback period (a small break-even sketch follows this list)
  • Look for requirements of Standard or Convertible RIs
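Here is the break-even sketch referenced above. The prices are purely hypothetical (not real AWS rates); the point is the shape of the calculation, comparing cumulative on-demand cost against a partial-upfront RI:

def ri_break_even_month(on_demand_hourly, ri_upfront, ri_hourly, hours_per_month=730):
    """Return the first month in which cumulative RI cost drops below on-demand."""
    on_demand_total = 0.0
    ri_total = ri_upfront  # the upfront fee is paid at month zero
    for month in range(1, 13):
        on_demand_total += on_demand_hourly * hours_per_month
        ri_total += ri_hourly * hours_per_month
        if ri_total <= on_demand_total:
            return month
    return None  # no break-even within the one-year term

if __name__ == "__main__":
    # Hypothetical prices, not real AWS rates
    print("Break-even in month:",
          ri_break_even_month(on_demand_hourly=0.10, ri_upfront=300.0, ri_hourly=0.035))

With these illustrative numbers, the reservation pays for itself in the seventh month of the term.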

Select: How to?

Once you know how to evaluate, you can analyze and choose the best option that fits your planning, and further empower your infrastructure for greater efficiency with greater savings.

Implement: How to?

Once you know what your requirements are to commit to an RI purchase, implementation is the next stage. It is crucial that you do it right, because discounts might not apply in all cases, for instance, if you happen to choose incorrect attributes or perform an incorrect analysis. At the end of the day, your planned savings might not reflect in your spreadsheets (*.XLS) as calculated.

How to Implement the Chosen RI Like a Pro

The key parameter for reserving an EC2 instance is the instance class. To apply a reservation, you can either modify an existing RI or go for a new RI purchase by selecting the platform, region, and instance type to match the reservation.

For Instance:

Consider a company, XYZ LLC, which has an on-demand portfolio of:

  • 2*m3.large Linux in AZ us-east-1b
  • 4*c4.large Linux in AZ us-east-1c
  • 2*t2.large Linux in AZ us-east-1a

And XYZ LLC now purchases Standard RIs as below:

  • 4*c4.large Linux in AZ us-east-1c
  • 2*t2.large Linux in AZ us-east-1b
  • 4*x1.large Linux in AZ us-east-1a

Based on the above on-demand portfolio and purchases, the following reservations are applied for XYZ LLC (a minimal matching sketch follows the list):

  • 4*c4.large Linux in AZ us-east-1c. Here's how: This matches exactly the instance class the reservation was made for, so the discount applies
  • 2*t2.large Linux in AZ us-east-1b. Here's how: The existing instance class is in a different AZ but in the same region, so no discount is applied. However, if you change the scope of the RI to the region, the reservation will be applied, but there is no guarantee of capacity
  • 4*x1.large Linux in AZ us-east-1a. Here's how: The instance family does not match anything in the running portfolio. In this case, the reservation will not be applied to these instances. However, if XYZ LLC had purchased Convertible RIs, modifying the reservation would never be a problem, but they would have to commit for 3 years with a lesser discount.
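The sketch below encodes the matching logic from the XYZ LLC example: a zonal Standard RI earns its discount only when instance type, platform, and Availability Zone all line up with a running on-demand instance. It is an illustration of the rule, not AWS's billing engine:

from collections import Counter

def apply_reservations(on_demand, reservations):
    """Match zonal reservations to running instances on (type, platform, AZ)."""
    running = Counter()
    for count, itype, platform, az in on_demand:
        running[(itype, platform, az)] += count
    applied, unused = [], []
    for count, itype, platform, az in reservations:
        key = (itype, platform, az)
        matched = min(count, running[key])
        running[key] -= matched
        if matched:
            applied.append((key, matched))
        if matched < count:
            unused.append((key, count - matched))
    return applied, unused

if __name__ == "__main__":
    on_demand = [(2, "m3.large", "Linux", "us-east-1b"),
                 (4, "c4.large", "Linux", "us-east-1c"),
                 (2, "t2.large", "Linux", "us-east-1a")]
    reservations = [(4, "c4.large", "Linux", "us-east-1c"),
                    (2, "t2.large", "Linux", "us-east-1b"),
                    (4, "x1.large", "Linux", "us-east-1a")]
    applied, unused = apply_reservations(on_demand, reservations)
    print("Discount applied:", applied)   # only the c4.large reservation matches
    print("Unused reservations:", unused) # t2.large (wrong AZ) and x1.large (no match)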

Making Sense of the RIs for Payer and Linked Accounts

AWS bills, evidently, include charges only on the payer account for all utilization. However, in larger organizations, where linked accounts are divided into business units, reservation purchases are made by these individual units. No matter who makes the purchase, the benefits of an RI float across the whole account family (the payer account plus its linked accounts).

For instance, let's assume X is the payer account and Y and Z are its two linked accounts. Using $ to mark the account that purchases a reservation and U to mark where it can be applied, in an ideal situation:

If X($), then Y(U) or Z(U)

If Z($), then Y(U) or X(U) or Z(U)

Hence, within the group, the reservation can be applied to any matching instance class available.

How to Govern RIs with Ease

Monitoring just a bunch of RIs is easy when the portfolio is small. However, in mid-sized and large businesses, RIs generally don't get proper attention due to the dynamic environment and the plethora of AWS services to manage. This causes a dip in efficiency, unexpectedly minimal savings, and many more such issues. Nevertheless, this dip in efficiency and bill shock can be assuaged with a few tweaks:

Make a regular note of unused and underutilized RIs:

Unused and underutilized states of RIs are key issues that lead to inefficiency.

In the case of unused RIs: The reservations were bought for a determined, constant utilization, but somehow the utility ended just a few months after purchase and the reservation is now dormant or unused. If you modify and eliminate them, they will add to cost savings.

In the case of underutilized RIs: A few RIs are bought with the intention of using them for a continuous workload, but somewhere along the timeline the utility reduced and the reservation is not clocking its ideal utilization. If you start reusing them, they will add to cost savings. Read this post by Botmetric Director of Product Engineering Amarkant Singh on how to go about unused and underutilized RI modifications and save cloud cost intelligently.

Finding the root cause of unused and underutilized RIs:

1. Incorrect analysis: While performing the analysis to determine an RI purchase, miscalculations or a lack of understanding of the environment can cause trouble in the management of RIs.

a. Wrong estimation of time (1 year/3 years): If you don't understand your projected workload duration, then purchasing a reservation for a longer interval (e.g., 3 years) may leave the RI in an unused state

b. Wrong estimation of count: This could be due to overestimation or underestimation of the number of reservations required. If it's too many, then you may modify them for DR capability. But if it's too few, then you may still not realize your expected savings

c. Wrong estimation of projected workload: If you have not understood your workload, chances are that you bought RIs with incorrect attributes like term, number of instances, etc. In such cases, RIs either go unused or underutilized

2. Improper Management: RIs, irrespective of the service, can offer optimal savings only when they are modified, tuned, managed, and governed continuously according to your changing infrastructure environment in AWS cloud.

You should never stop at reservation. For instance, if you have bought the newer Convertible RIs, then modify them to the desired class. And if you have older RIs, get them to work for you either by breaking them into smaller instances or by combining them into a larger instance, as per the requirement.

FACT: Almost 92% of AWS users fail to manage Reserved Instances (RIs) properly, thus failing to optimize their AWS spend.

If you find all this overwhelming, then try Botmetric Smart RI Planner, which helps with apt RI management, right-sizes your RIs, and helps save cost with automation.

Top RI Best Practices to Live By

There are a few best practices you should follow to ensure your RIs work for you, and not the other way around.

Get Unused RIs back to work

If you have bought the newer Convertible RIs, then modifying them to the desired class is child's play. However, if you have older RIs, getting them back to work is not as easy as with Convertible RIs. But a few modifications, like breaking them into smaller instances or combining them into a larger instance according to your needs, will do the trick.

Keep an eye on expired and near-expiry RIs in your portfolio

Always list your RIs in three ways to keep a constant check on them:

Active: RIs that are either new or have more than 90 days left before expiration

Near-expiry: RIs that are within 90 days of expiration. Analyze these RIs and plan accordingly for re-purchase

Expired RIs: RIs that have expired. If there is an opportunity for renewal, go ahead with it

Be sure of your workload demands and what suits your profile best. Standard RIs work like a charm, in terms of cost savings and flexibility, only when you have a good understanding of your long-term requirements.

And if you have no idea of your long-term demand, then Convertible RIs are perfect, because you can have a new instance type, or operating system at your disposal in minutes without resetting the term.

Botmetric has found a smart way to perform the above. It uses the NoOps and AIOps concepts to put RIs back to work. Read this blog to know how.

Compare on-demand vs. reserved instances to improve utilization

If you want to improve your utilization of reservations, the game plan is to track on-demand vs. reserved instance utilization. It is evident from our experience that RI cost over a period of time offers the greatest discounts. Read this blog post to know the five trade-off factors to consider when choosing AWS RIs over on-demand resources.

Compare on-demand vs. reserved instances to improve utilization

For further benefits, a report on RI utilization that provides the below insights will help:

1. Future requirement of reservation

2. Unused or underutilized RIs

3. Opportunities to re-use existing RIs

Here is a sample Botmetric RI Utilization Graph for your reference:

AWS Reserved Instance Management Now Made Easy with Botmetric’s Smart RI

Before wrapping up, here are a few common RI misconceptions that you must know.

Common RI Misconceptions You Must Know

  • If you buy one EC2 instance and reserve an RI for that type of EC2, you don't own two instances; you own only one
  • RIs are available not only for EC2 and RDS but for five other services as well
  • Purchasing RIs alone and not monitoring and managing them may not give you any savings
  • Managing and optimizing them is the key
  • Never purchase an instance for an instance ID, but for an instance class
  • Buying a lot of RIs will not bring down the AWS bill
  • Managing RIs is very complex; it's a continuous, ongoing process, but a few key best practices, if followed, can give the desired savings and greater efficiency
  • Older RIs cannot have the Region benefit
  • RIs can't be re-utilized if you fail to understand your workload distribution
  • RIs can't be returned; instead, the AWS RI Marketplace facilitates selling your RIs to others

The Wrap-Up

RIs, as noted earlier, are the highest-saving option in your dynamic cloud environment. Buying RIs alone is not sufficient; a proper road map and management, coupled with intelligent insights, can get you the desired savings.

AWS is always coming up with new changes. Hence, understanding its services and knowing how to use them will always prove beneficial for your cloud strategy, irrespective of your business size, and above all for the startup world. And if you find all this overwhelming, then just try Botmetric's Cost & Governance.

Get Started

3 Effective DDoS Protection & Security Solutions Apt for Web Application Workloads on AWS

Distributed Denial of Service (DDoS) attacks have been around for decades. And with NexusGuard research citing an 83% increase in DDoS attacks in 2Q2016 compared to 1Q2016, these attacks seem set to remain prevalent even beyond 2017. Thanks to the proliferation of IoT devices using web applications, the volume and velocity of these attacks have reached a soaring high. Despite stringent measures, these attacks have been bringing down web applications and denying service availability to their users with botnets. Remember Mirai, Mega-D, the Zeus Trojan, Kraken, etc.? Without a doubt, DDoS mitigation is pivotal. Otherwise, get ready for a panic attack (some day).

If you’re a security engineer from an organization hosting Web Application Workloads on AWS, then bookmark this blog.

The Backdrop: What DDoS holds and how to go about DDoS protection

DDoS is one of the most sophisticated and famed web attacks known to the software industry to date. A typical DDoS attack can be launched from anything between a handful of machines and a large network of bots spread across the world. It can target different OSI stack layers, from the network to the session to the application layer, with attack throughputs ranging from tens of Mbps at the application level to tens of Gbps at the network level.

In recent times, hundreds of popular web applications as well as enterprise businesses have been victims of DDoS attacks. Some of the common DDoS attack patterns are XOR.DDoS, SYN floods, SSL floods, large sets of HTTP 500 errors, slow upload connections, large numbers of file downloads, etc.

Image Source: CDNetworks/ XOR.DDoS Infographics/ 2016

Due to the nature of DDoS attacks at different layers, developing and deploying a good defense against them demands a scalable and cost-effective approach. If you are using AWS cloud to host your web applications, then there are a variety of effective, industry-leading solutions that help protect your web applications and counteract these DDoS attacks.

Here are the top options and cost-effective strategies for DDoS protection and mitigation for web application workloads on AWS, using industry-standard solutions:

  • Big-IP F5 Advanced Firewall Manager (AFM)

Big-IP F5 AFM provides DDoS protection services from the network layer to the application level. F5 is one of the popular traditional vendors, and its products are deployed by many enterprises. You can install the F5 virtual appliance with AFM on AWS EC2 and create a cluster of them with DNS-level load balancing to protect your web applications and services. F5 is a good option if you have an existing license and have already migrated workloads to AWS Cloud.

However, managing and scaling F5 AFM nodes to protect against large DDoS attacks is generally cost-prohibitive due to the burden of licensing and operational expenses. The best way forward is to start using F5 from the AWS Marketplace; it's a good solution for customer-hosted DDoS protection. To date, F5 continues building application security for the digital economy.

If you are looking for a cheaper alternative to F5, then aiProtect is a good solution for customer-hosted DDoS protection.

  • aiScaler aiProtect

If you have hosted your web applications on a public cloud service like AWS or in private virtual environments, you can opt for aiScaler's aiProtect. It is known to protect web applications and infrastructure against DDoS attacks by limiting the number of requests from a particular IP address, providing protection against SYN floods and URL-based attacks, and so on.

It protects your application from denial-of-service and other web-based attacks as well, by pre-processing all HTTP traffic. It then isolates more vulnerable components at the network layer. It also provides detailed reporting that helps end attacks permanently. Above all, its PCI-compliant multi-level defense enforces sanity rules on incoming requests and isolates the origin environment, protecting valuable assets while eliminating the most common online attacks.

This product is available from AWS Marketplace with hourly or annual billing. You can easily configure aiProtect on AWS too.

  • Incapsula by Imperva

Imperva is an established player among traditional enterprises. Its Incapsula is a cost-effective solution for startup and SME customers to protect their workloads in the cloud. It is one of the most popular SaaS solutions for DDoS protection. It also offers a standard web application firewall along with CDN capabilities.

Moreover, Incapsula offers a global CDN to offload the caching needs of many web applications. It manages the DNS for applications to protect against attacks. It also has a large DDoS protection network spread across different geographies, with attack protection ranging from 10 Gbps up to hundreds of Gbps.

If you’d like to check your DDoS mitigation strategy, they have a free online tool called DDoS Resiliency Score (DRS) calculator that can evaluate the effectiveness of your strategy.

If you think all the above DDoS prevention solutions are not for you, then try Reblaze. It has great DDoS support for AWS Cloud. Its service is available as a SaaS platform and can be deployed as a hosted solution in a customer VPC to protect a variety of applications.

To Wrap Up: All solutions are good; evaluate your needs first to pick the ideal option

As quoted by CIO's senior writer Sharon Florentine in her article '2017 security predictions,' "If you thought 2016 was bad, fasten your seat belts — next year is going to be even worse."

So, it's better late than never. As a premier AWS Technology Partner, we recommend evaluating Incapsula or Reblaze for a cost-effective solution. If you already have an existing F5 license, it's a good option for DDoS protection of your AWS cloud applications. As for aiProtect, it works best price-wise compared to F5 if you are deploying multi-node DDoS protection in AWS.

If you still want to know more about the DDoS, give us a shout in the comment section below or Tweet to us @BotmetricHQ. We’d love to help you!

Editorial Note: This blog is an adaptation of Minjar’s earlier blog post ‘DDoS Prevention and Security for Web Application Workloads on AWS Cloud.’

AWS Reserved Instance Management Now Made Easy with Botmetric’s Smart RI

AWS users using more than 75% of their RIs realize the best savings, reveals an internal customer survey by Botmetric. One of the most expensive mistakes that AWS customers make is not using Reserved Instances (RIs) aptly. Just reserving instances does not reap optimum cost savings; well-maneuvered AWS Reserved Instance management plays a major role in reducing overall cloud spend. But it takes a lot of effort and time when done manually. Don't worry: Botmetric has your back with its AWS RI Planner.

With much delight, I'd like to introduce Botmetric Smart RI. Using Algorithmic IT Ops (AIOps), it analyzes, modifies, and monitors RIs automatically without you logging into the AWS console, saving you the time and effort that goes into manually modifying RIs. It does all the heavy lifting for you using automation, so you save cost optimally, which would have been impossible to do manually.

Simply put, by configuring this app on Botmetric, you can reduce the effort involved in modifying unused RIs, automate the modification of RIs as soon as unused RIs are found (which can occur multiple times a day), and above all save the cost that would otherwise have been wasted on unnecessary on-demand usage and wasted RIs.

Key Objective Behind Building Botmetric Smart RI

Team Botmetric's motto is to save you as much cloud cost as possible, and also to help you govern your RIs and the cost associated with them optimally. Working day in, day out on RIs, the Botmetric team has observed that RIs go unused for two main reasons:

1. Changing Infrastructure Needs: Usage patterns change. At times, new instances are added, or a few instances need upgrading as new modules, new features, or new projects are launched. Sometimes RIs go unused due to a changing business strategy. This leads to wasted RIs and unnecessary on-demand usage and cost.

2. Miscommunication: Most of the time, the engineer who purchased the RIs and the engineer who launches and monitors the instances are different people. In such cases, there is a high chance of miscommunication between the two. When instances are launched without alignment with the purchased RIs, it leads to RI mismatch.

To get the discounted reserved pricing, it is pivotal that an instance matches the defined rules. For EC2, these rules include Availability Zone (AZ), Scope of Reservation, Platform, Instance Size, and Instance Type. Botmetric scans through your infrastructure for any on-demand instances that match two of the three rules required to take the price discount of any unused RIs, and highlights those instances.

One amazing thing about AWS RIs is that you can modify AWS EC2 RIs any number of times at no cost. AWS introduced this flexibility in reservations by providing support for modifying reserved instances, so that you get better savings.

On the other hand, the only remedies for unused reservations (an RI discount not being applied) are to reallocate the RI by modifying it to a different zone, change the scope of the RI, or sell the reservation on the marketplace at a loss. If you purchased the newly launched Convertible RIs, you also have the ability to change instance types as well as platforms, but that may involve a fresh cost.

Essentially, modifying RIs involves changing these attributes of your reservation at no cost, unless you have the new Convertible RIs and are upgrading or downgrading the reservation. In typical day-to-day AWS Reserved Instance management, instances need to be checked one by one and their attributes changed accordingly. A few of you might even run scheduled monitoring and analysis of RIs from time to time.

It's critical that you check and evaluate hourly data and make the right modifications. To this end, you will have to create a list of unused reservations and possible modifications. In simple words, you need to create an RI analysis report in some form and then act upon it.

It is, however, easier said than done!

Here's why: you need to consider all the possibilities, keeping in context the list of instance types you are running within a family, then figure out whether there are unused reservations, and then work out the modification combinations that match your running on-demand instances. To ease this, Botmetric built Smart RI. For a feel of the manual analysis involved, see the sketch below.
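
For illustration only, here is a minimal sketch of the kind of manual RI analysis described above, assuming boto3 with read-only EC2 permissions. It simply counts active reservations and running instances per instance type and Availability Zone so potential mismatches can be eyeballed; it is not Botmetric's actual algorithm, and the region and grouping are assumptions.

```python
# Minimal sketch (not Botmetric's actual logic): spot potentially unused EC2
# reservations by comparing active RIs against running instances.
from collections import Counter

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Count active reservations per (instance type, AZ).
# (Regional-scope RIs are keyed as "regional" and would need a smarter match.)
reserved = Counter()
for ri in ec2.describe_reserved_instances(
    Filters=[{"Name": "state", "Values": ["active"]}]
)["ReservedInstances"]:
    key = (ri["InstanceType"], ri.get("AvailabilityZone", "regional"))
    reserved[key] += ri["InstanceCount"]

# Count running instances per (instance type, AZ).
running = Counter()
for page in ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            key = (instance["InstanceType"], instance["Placement"]["AvailabilityZone"])
            running[key] += 1

# Flag reservations that have no matching running instances.
for key, count in reserved.items():
    unused = count - running.get(key, 0)
    if unused > 0:
        print(f"Potentially unused: {unused} x {key[0]} in {key[1]}")
```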

How does Botmetric Smart RI help?

Botmetric Smart RI automatically modifies a reservation as soon as a modification is available. It smartly manages RIs so that they stay aligned for discounts by analyzing usage, planning, helping with modifications, and monitoring them from time to time.

Essentially, Botmetric Smart RI helps reduce the effort involved in modifying unused RIs. It automates RI modifications as soon as unused RIs are found, which can happen multiple times a day. And it saves the cost that would otherwise be wasted on unnecessary on-demand usage and idle RIs.

To leverage these benefits, you can easily enable Smart RI by granting the "ec2:ModifyReservedInstances" permission in the policy attached to the cross-account role created for Botmetric.
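
For reference, here is a hedged sketch of granting that permission with boto3; the role name and policy name below are placeholders, so substitute the names you actually created for Botmetric.

```python
# Illustrative sketch: add ec2:ModifyReservedInstances to the cross-account
# role used by Botmetric. RoleName and PolicyName are placeholders.
import json

import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:ModifyReservedInstances",
            "Resource": "*",
        }
    ],
}

iam.put_role_policy(
    RoleName="BotmetricCrossAccountRole",       # placeholder role name
    PolicyName="AllowModifyReservedInstances",  # placeholder policy name
    PolicyDocument=json.dumps(policy_document),
)
```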

Apart from helping with AWS Reserved Instance management, Botmetric also lets you track modified RI details from "Modify History." You can get details such as when the reservation was modified, the new combinations created by the modification, and so on.

Furthermore, Botmetric AWS RI Planner features other cool capabilities that not only save time and weeks of effort, but also add cloud intelligence that helps you with apt RI management, evaluates your cloud utilization, and saves RI cost.

Here they are below, if you’d like to explore them:

Make Confident & Smart RI Plans and Decisions

Using Botmetric AWS RI Planner, you can quickly identify key data from your existing infrastructure that helps with RI planning, all from one single console. Botmetric performs intelligent AIOps on your RI usage, on-demand utilization, and existing Reserved Instances. For each AWS account, Botmetric runs various analyses across all relevant EC2 or RDS machines and provides recommendations on what kind of RI purchase should be made for a specific server.

Analyze Your Existing Reserved Instances Portfolio

Using Botmetric AWS RI Planner, you can get a complete view of all reservations across multiple accounts and multiple AWS regions. It also suggests whether a particular Reserved Instance should be renewed. Among the list of reservations, you can spot the RIs that are expiring soon: Botmetric sends an RI expiry alert with the list of reservations expiring in the next 45 days and shows warning messages for RIs due to expire in the near future.

Get Recommendations for Unused RIs and Manage them Smartly

Botmetric, using machine learning (ML), keeps track of your RI utilization on a daily basis using your billing data and the metadata of your current infrastructure. If you have any unused RIs in your accounts, Botmetric detects them and recommends how these unused EC2/RDS RIs can be utilized, essentially to reduce your monthly RI spend. Above all, it helps you govern and manage unused RIs aptly for maximum savings using Smart RI.

Export Reports to Plan RI Better

With the CSV export option, you can download reports of RI Planner recommendations and share them with your team. This allows you to analyze what RI purchases should be planned based on your utilization patterns and business needs.

EC2 Reserved Instance Change History

With the EC2 RI Change History, you get a list of all the EC2 RI modifications performed through Botmetric. Apart from the details associated with each modified EC2 RI, the console also lets you filter data by linked account, status, and action initiated, for a better understanding of the modified EC2 RIs.

Analyze RI Utilization Data

Using Botmetric AWS RI Planner, you can quickly analyze your entire RI portfolio and utilization from a single graphical view in a matter of minutes. Botmetric recommends which additional machines to purchase RIs for to reduce your monthly spend, surfaces the top RI recommendations, and visualizes on-demand vs. reserved hours.

Concluding Thoughts

You can access the Smart RI app and RI Planner from the Cost & Governance product in the Botmetric Cloud Management Platform. If you are looking for that one AWS Reserved Instance management tool that helps you save optimally, Botmetric AWS RI Planner is the one. Do take the 14-day trial to experience it first hand. If you are already a Botmetric customer, do try it and share your feedback below in the comment section or tweet to us @BotmetricHQ. We'd be happy to know how it's working for you and what we can do to make it better suit your needs.

A free copy of our RI guide is up for grabs if you're looking to understand RIs in depth. Hurry!

5 Interesting 2017 DevOps Trends You Cannot Miss Reading

Editorial Note: This exclusive post is written by Botmetric guest blogger, Kia Barocha.

If you think you heard and read a lot about DevOps in 2016, it's not over yet. Gear up, as there is much more expected in 2017. The key focus for DevOps in 2016 was security, enhancements, and containerization. In 2017, there is a lot of noise about what the future of DevOps will be. Here is a look at what thought leaders think (they definitely believe it will hugely impact business in 2017).

The main challenge to date for many professionals has been to clearly understand DevOps. Some call it a movement, while others think it is a collection of concepts. If we were to properly define it, we would say that it is a combination of two terms, Development and Operations. Or, as the definition goes, "it is a cross-disciplinary community of practice dedicated to the study of building, evolving and operating rapidly-changing resilient systems at scale."

DevOps is a practice where operations and development professionals or engineers participate in the entire service lifecycle, from design through development to production support. It is a recipe for success through cultural shift. It is characterized by autonomous teams and a constantly learning environment. If you are ready for this adoption, it implies that you are ready to change fast, develop fast, test fast, fail fast, recover fast, learn fast, and also prep fast for product launches.

With DevOps, businesses have experienced higher business value and better alignment with IT; it helps break down silos and build a flexible, software-enabled IT infrastructure.

DevOps Trends in 2017

Image Source: https://cloudsmartz.com/wp-content/uploads/2016/01/Devops-Team.jpg

Let’s take a look at the DevOps trends that will really be dominant through 2017:

1. Large enterprises will adopt DevOps

The predominant trend, till now, has been that organizations have experimented with DevOps in small and discrete projects. Experts feel that in 2017, large enterprises will be more comfortable adopting DevOps at scale (at an enterprise level). It will finally take center stage. One of the key benefits DevOps enables is enhanced collaboration between developers, QA and testing professionals, operations personnel, and people from business planning and security teams.

2. Focus on enhanced security

Not that the focus on security wasn't there earlier, but given the way attackers are getting smarter and more sophisticated, there will be much more focus on unifying development, continuous security, and operations efforts. 2017 will be the year user experience and advanced security measures go hand in hand.

3. Consolidation of DevOps tools

There are too many DevOps tools, each good at a different aspect of the delivery cycle. Most of the available tools help automate some part of the software delivery process. Tools like Jenkins, Docker, AWS, GitHub, and JIRA are already quite popular in the market. In 2017, these will consolidate into a select few that can cater to all the requirements across the continuous delivery cycle. This could mean the bigger players acquiring smaller DevOps companies, as well as starting a journey towards NoOps to put their operations on auto-pilot.

4. Confluence of Big Data and DevOps

Just as with IoT, a lot of critical information is generated when software releases are automated. And again as with IoT, these large volumes of data need to be analyzed. There is a dire need to apply machine learning to all this data to produce actionable business reports, predict failures, and manage releases more effectively.

5. More support for hybrid everything

The market is dynamic, and bigger organizations have legacy applications coexisting with microservices and on-premise and cloud infrastructure: basically hybrid everything (infrastructure, tools, processes, applications, etc.). DevOps, in that respect, is ready and can support multiple aspects of an organization's hybrid existence.

If you wish to focus on software innovation and accelerate the release of application updates, look at DevOps as an enterprise-wide investment.

-END-

Guest Blogger’s Profile:

Kia Barocha is a content marketing strategist at ISHIR, a leading Dallas-based software development company, offering high-quality Mobile App Development, Web Design & Development, Cloud Computing Solutions and Application Virtualization Services to the clients across the globe. 

Increase Operational Efficiency by 5X with New Botmetric Custom Jobs’ Cloud Ops Automation

As a Cloud Ops engineer, do you get that feeling that you are stuck like a tiny pet hamster in a wheel, doing the same stuff again and again, and going nowhere? You have plans to automate everyday cloud operation tasks and a roadmap towards Cloud Ops automation, but don't know where to start. Working on mundane operational tasks day in, day out is too taxing. Does this ring a bell?

The best way forward is to schedule all your routine tasks and use simple Python scripts to achieve the desired automation with Botmetric's new Custom Jobs.

Here’s Why Botmetric Built Custom Jobs

The Botmetric Ops & Automation product already offered a list of 25+ pre-defined automated jobs. Using these jobs, you could automate a lot of routine tasks across 7 major AWS services. Many Botmetric customers liked these automated jobs and requested automation for some unique operational tasks in the AWS cloud. Hence, the Botmetric team built a universal solution with the ability to run custom Python scripts through the Botmetric console.

Game-changing Cloud Ops Automation Features in Botmetric Custom Jobs

In the current market, a lot of SaaS products offer automation but fall short when it comes to custom jobs. Botmetric Ops & Automation, since its release, has solved almost 80% of automation requirements.

With Botmetric Custom Jobs you can:

  • Run your own custom scripts: Through one Botmetric console, you can now perform both governance and automation. Botmetric Custom Jobs lets you write the Python scripts you need and automate their scheduled execution through the Botmetric console.
  • Increase operational efficiency: There is a list of tasks that a DevOps engineer performs on an everyday basis, and these tasks differ from one infrastructure type to another. Automating such tasks through scripts frees up a lot of time so the engineer can concentrate on business innovation.
  • Get visibility into executions: Unlike running a script through cron/CLI, with Botmetric you can view status, receive an alert or email notification on success or failure, and get historic execution details.

How to Schedule Custom Jobs on Botmetric?

There are two ways to schedule custom jobs:

1. Create a job with new script

Write your new script in the editor provided and verify the syntax. Provide a name for identification and give an email address to be notified.

2. Utilize saved scripts to create a new job

You can also choose from previously created scripts and schedule a task from one of them.

Essentially, Custom Jobs empowers you to run the automation you need in your environment. With your own simple code logic, written in Python, you can schedule routine tasks for increased operational excellence.

Here are a few use cases to give you a gist of Custom Jobs' potential:

The Case in Point for Creating VPC in a Region

Assume you're headquartered in the Bay Area of the USA and run your business on the cloud, so most of your resources are in us-west. Lately, you have expanded your business to Germany as well. However, you are still launching instances in us-west, and your team starts complaining about latency issues. So you decide to place resources in eu-central, as that region now offers greater benefits. With a simple Python script that creates a VPC in a region with a user-defined CIDR, scheduled as a Custom Job, you can have the VPC ready for any resources launched in that region; a sketch of such a script is shown below.
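
Here is a hedged sketch of what such a script might look like, assuming boto3 is available to the job; the region, CIDR block, and Name tag are placeholders.

```python
# Illustrative Custom Job sketch: create a VPC with a user-defined CIDR.
import boto3

REGION = "eu-central-1"      # placeholder target region
VPC_CIDR = "10.10.0.0/16"    # placeholder user-defined CIDR

ec2 = boto3.client("ec2", region_name=REGION)

# Create the VPC, wait until it is available, then tag it for identification.
vpc_id = ec2.create_vpc(CidrBlock=VPC_CIDR)["Vpc"]["VpcId"]
ec2.get_waiter("vpc_available").wait(VpcIds=[vpc_id])
ec2.create_tags(
    Resources=[vpc_id],
    Tags=[{"Key": "Name", "Value": "eu-central-expansion-vpc"}],  # placeholder
)
print(f"Created VPC {vpc_id} in {REGION}")
```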

The Case in Point for Copying EBS Snapshots Automatically Across Instance Tags

If you are looking for heightened DR policies and want to secure your snapshots, you can use Custom Jobs to write a custom script that copies EBS snapshots for the specified instance tags across regions, keeping a safe copy elsewhere; a sketch follows below.
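
Here is a hedged sketch of one way to do this with boto3; the tag key/value, regions, and the "latest completed snapshot" heuristic are all assumptions for illustration.

```python
# Illustrative sketch: copy the latest EBS snapshot of each volume attached to
# instances carrying a given tag into a second region. Tags and regions are
# placeholders.
import boto3

SOURCE_REGION = "us-west-2"
DEST_REGION = "eu-central-1"
TAG_KEY, TAG_VALUE = "Backup", "true"   # placeholder instance tag

src = boto3.client("ec2", region_name=SOURCE_REGION)
dst = boto3.client("ec2", region_name=DEST_REGION)

# Find volumes attached to instances with the given tag.
volume_ids = []
for page in src.get_paginator("describe_instances").paginate(
    Filters=[{"Name": f"tag:{TAG_KEY}", "Values": [TAG_VALUE]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            for mapping in instance.get("BlockDeviceMappings", []):
                if "Ebs" in mapping:
                    volume_ids.append(mapping["Ebs"]["VolumeId"])

# Copy the most recent completed snapshot of each volume to the DR region.
for vol_id in volume_ids:
    snaps = src.describe_snapshots(
        Filters=[{"Name": "volume-id", "Values": [vol_id]},
                 {"Name": "status", "Values": ["completed"]}]
    )["Snapshots"]
    if not snaps:
        continue
    latest = max(snaps, key=lambda s: s["StartTime"])
    copy = dst.copy_snapshot(
        SourceRegion=SOURCE_REGION,
        SourceSnapshotId=latest["SnapshotId"],
        Description=f"DR copy of {latest['SnapshotId']} ({vol_id})",
    )
    print(f"Copied {latest['SnapshotId']} -> {copy['SnapshotId']} in {DEST_REGION}")
```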

The Case in Point for Automatically Deleting Snapshots

If you are looking to derive savings by optimizing your backups, you can write a custom script that deletes old snapshots after a defined number of days. By automating this through Custom Jobs, you lower wastage and save on unnecessary backup retention; a sketch follows below.
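
A minimal sketch, assuming boto3 and a 30-day retention period chosen purely for illustration; in practice you would add safeguards such as tag-based exclusions before deleting anything.

```python
# Illustrative sketch: delete snapshots owned by this account that are older
# than RETENTION_DAYS. Add your own exclusions before using anything like this.
from datetime import datetime, timedelta, timezone

import boto3

RETENTION_DAYS = 30  # placeholder retention period
cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)

ec2 = boto3.client("ec2", region_name="us-west-2")  # placeholder region

for page in ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"]):
    for snap in page["Snapshots"]:
        if snap["StartTime"] < cutoff:
            print(f"Deleting {snap['SnapshotId']} from {snap['StartTime']}")
            ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])
```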

Try Botmetric Custom Jobs Now

To Conclude

With each passing day, we are moving more towards NoOps, which essentially means that machines automate known problems while humans focus on new ones. Many Botmetric customers have embraced NoOps (knowingly or unknowingly) by automating each and every possible routine task in their environment, so that DevOps time is spent on solving new issues and operational efficiency increases by 5X.

What are you waiting for? Take a 14-day trial and check for yourself how Botmetric helps you automate cloud ops tasks. If you're already a customer and have any questions, please pop them in the comment section below. We will get back to you ASAP. And if you are looking to know about all things cloud, follow us on Twitter.

Don’t Let 2017 Amazon AWS S3 Outage Like Errors Affect You Again

On February 28, 2017, several companies reported an Amazon AWS S3 cloud storage outage. Within minutes, thousands of Twitter posts started making the rounds across the globe, sharing how apps had gone down due to the outage.

Image Source: https://twitter.com/TechCrunch

The issue, which kicked off around 9:44 AM Pacific Time (17:44 UTC) on February 28, 2017, was attributed primarily to an error affecting Simple Storage Service (S3) buckets hosted in the us-east-1 region (as tweeted by AWS).

Image Source: https://twitter.com/awscloud?lang=en

This AWS S3 outage led to major websites and services such as Medium, Docker Registry Hub, Asana, Quora, Runkeeper, Trello, and Yahoo Mail falling offline, losing images, or running haphazardly.

According to sources, this AWS S3 outage disrupted half the internet: Amazon S3 is used by approximately 148,213 websites and 121,761 unique domains, according to SimilarTech.

Even though AWS fixed it after struggling for several hours, its impact was huge.

Image Source: https://status.aws.amazon.com/

What caused it?

Nick Kephart, senior director of product marketing at San Francisco-based network intelligence company ThousandEyes, who monitored the situation closely, said that during the outage information could get into Amazon's overall network, but establishing a network connection with the S3 servers was not possible. Due to this, all traffic was stopped dead in its tracks. Hence, all the sites and apps that hosted data, images, or other information on S3 were affected.

As a company, how could you avoid this situation or fool-proof your system against such outages in the future? Here’s how:

Use Region Level DR Rather Than AZ Level DR

Those who did not go for region-level DR and opted only for AZ-level DR in their setup were affected. When you opt for region-level DR, you copy data from one S3 region to another and keep that S3 data in sync between regions. This keeps a backup available and helps you cope with such outages in the future; a minimal sketch of such a cross-region copy is shown below.
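
Purely for illustration, here is a minimal boto3 sketch of copying objects from a primary bucket to a backup bucket in another region; the bucket names are placeholders, and in practice S3 Cross-Region Replication or the aws s3 sync CLI command is usually the more practical route.

```python
# Minimal sketch: server-side copy of every object from a primary bucket to a
# backup bucket that lives in another region. Bucket names are placeholders.
import boto3

SOURCE_BUCKET = "my-app-data-us-east-1"   # placeholder primary bucket
BACKUP_BUCKET = "my-app-data-eu-west-1"   # placeholder bucket in another region

s3 = boto3.resource("s3")
source = s3.Bucket(SOURCE_BUCKET)
backup = s3.Bucket(BACKUP_BUCKET)

for obj in source.objects.all():
    # copy() performs a managed, server-side copy; the data does not pass
    # through the machine running this script.
    backup.copy({"Bucket": SOURCE_BUCKET, "Key": obj.key}, obj.key)
    print(f"Copied {obj.key} to s3://{BACKUP_BUCKET}/{obj.key}")
```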

Opt for Cross-Region Replication

Cross-region replication makes it easier for you to copy your AMIs/snapshots into a second AWS region so that you can always keep a system running there, or, so to say, run a stand-by environment in another region; a sketch of copying an AMI across regions is shown below.
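
A hedged sketch of copying an AMI into a stand-by region with boto3; the AMI ID, image name, and region names are placeholders.

```python
# Illustrative sketch: copy an AMI from its home region into a stand-by region.
import boto3

SOURCE_REGION = "us-east-1"
STANDBY_REGION = "eu-west-1"
SOURCE_AMI_ID = "ami-0123456789abcdef0"   # placeholder AMI ID

standby_ec2 = boto3.client("ec2", region_name=STANDBY_REGION)

copy = standby_ec2.copy_image(
    Name="standby-copy",                  # placeholder image name
    SourceImageId=SOURCE_AMI_ID,
    SourceRegion=SOURCE_REGION,
)
print(f"Started AMI copy: {copy['ImageId']} in {STANDBY_REGION}")
```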

Go for the AWS CloudFormation Template

Use a CloudFormation template to create a cluster in another region, so that you don't have to wait long to spring back to normal. With CloudFormation, bringing the cluster and the whole environment back on track should not take more than 2 hours; a sketch of launching a stack in a second region is shown below.
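
A hedged sketch of launching an existing template in a recovery region with boto3; the stack name, template URL, and region are placeholders.

```python
# Illustrative sketch: launch the same CloudFormation stack in a second region
# from an existing template.
import boto3

RECOVERY_REGION = "eu-west-1"                                              # placeholder
TEMPLATE_URL = "https://s3.amazonaws.com/my-templates/app-cluster.yaml"    # placeholder

cfn = boto3.client("cloudformation", region_name=RECOVERY_REGION)

cfn.create_stack(
    StackName="app-cluster-recovery",   # placeholder stack name
    TemplateURL=TEMPLATE_URL,
    Capabilities=["CAPABILITY_IAM"],    # only needed if the template creates IAM resources
)

# Block until the stack is fully created.
cfn.get_waiter("stack_create_complete").wait(StackName="app-cluster-recovery")
print("Recovery stack is up in", RECOVERY_REGION)
```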

Bottom line: No Technology is Perfect

Any technology might fail at some point. Having a Plan B in place is the best way forward.

Even though large swaths of the internet went down due to this Amazon AWS S3 cloud storage outage, several sites and apps were not affected by it. The reason: they had their data spread across multiple regions.

If you know of any other way to mitigate such outages, do share it with us in the comment section below or tweet us @BotmetricHQ.