Top 11 Hard-Won Lessons We’ve Learned about AWS Auto Scaling

Auto scaling, as we know it today, is one of the most powerful tools for leveraging the elasticity of the public cloud, particularly Amazon Web Services (AWS). Its ability to improve the availability of an application or service while keeping cloud infrastructure costs in check has been applauded by enterprises across verticals, from fleet management services to NASA’s research teams.

However, at times, AWS Auto Scaling can be a double-edged sword: it introduces a higher level of complexity into the technology architecture and day-to-day operations management. Without proper configuration and testing, it might do more harm than good. Even so, these challenges can be neutralized with a few precautions. To that end, we’ve collated the lessons we have learned over time to help you make the most of Auto Scaling on AWS.

  1. Use Auto Scaling, whether your application is stateful or stateless

There is a myth among many AWS users that AWS Auto Scaling is hard to use and not very useful for stateful applications. In fact, it is neither: you can get started in minutes with a few precautionary measures, such as using sticky sessions and keeping provisioning time to a minimum. Plus, AWS Auto Scaling monitors your instances and replaces them if they become unhealthy.

Here’s how: once Auto Scaling is activated, it provisions instances in an Auto Scaling group behind the load balancer, which maintains the application’s performance. In addition, Auto Scaling’s rebalancing feature ensures that your capacity is automatically distributed across the configured Availability Zones to maximize the application’s resilience. So, whether your application is stateful or stateless, AWS Auto Scaling helps maintain its performance irrespective of compute capacity demands.

  2. Identify the metrics that impact performance during capacity planning

Identify the metrics for an application’s constraining resources, such as CPU utilization and memory utilization. Doing so helps you track how those resources affect the application’s performance, and the analysis yields the threshold values at which resources should be scaled out and scaled in.

  3. Configure AWS CloudWatch to track the identified metrics

The best way forward is to configure Auto Scaling with AWS CloudWatch so that you can fetch these metrics as and when needed. Using CloudWatch, you can track the metrics in near real time, and CloudWatch alarms can be configured to trigger provisioning in an Auto Scaling group based on the state of a particular metric.
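
To make this concrete, here is a minimal sketch using Python and boto3; the group name, policy name, and thresholds are illustrative assumptions, not prescriptions, and assume an existing Auto Scaling group.

```python
import boto3

autoscaling = boto3.client('autoscaling')
cloudwatch = boto3.client('cloudwatch')

# A simple scale-out policy: add two instances when the alarm fires.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName='web-asg',          # assumed, existing group
    PolicyName='scale-out-on-high-cpu',
    AdjustmentType='ChangeInCapacity',
    ScalingAdjustment=2,
    Cooldown=300,
)

# A CloudWatch alarm that triggers the policy when average CPU across
# the group stays above 70% for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName='web-asg-high-cpu',
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'AutoScalingGroupName', 'Value': 'web-asg'}],
    Statistic='Average',
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=[policy['PolicyARN']],
)
```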

  4. Understand the functionality of Auto Scaling groups when using dynamic scaling

Resource configurations have to be specified in the Auto Scaling groups feature provided by AWS. An Auto Scaling group also includes rules defining the circumstances under which resources are launched dynamically. AWS allows you to attach Auto Scaling groups to Elastic Load Balancers (ELBs), so that requests arriving at the load balancer are routed to newly deployed resources as soon as they are commissioned.
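
As a rough illustration of how a group, its capacity rules, and a load balancer fit together, here is a boto3 sketch; the launch configuration, ELB name, and Availability Zones are assumptions made for the example.

```python
import boto3

autoscaling = boto3.client('autoscaling')

# Create a group from an existing launch configuration and attach it
# to a classic ELB, so new instances receive traffic once healthy.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName='web-asg',
    LaunchConfigurationName='web-launch-config',   # assumed to exist
    MinSize=2,
    MaxSize=8,
    DesiredCapacity=2,
    AvailabilityZones=['us-east-1a', 'us-east-1b'],
    LoadBalancerNames=['web-elb'],                 # assumed classic ELB
    HealthCheckType='ELB',                         # use the ELB health check
    HealthCheckGracePeriod=300,
)
```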

  5. Use Custom Metrics for Complex Auto Scaling Policies

A practical auto scaling policy often needs to consider multiple metrics, whereas a CloudWatch alarm watches only one. The best way to work around this restriction is to code a custom metric as a Boolean function, for example in Python with the Boto library, and publish it to CloudWatch. You can combine application-specific metrics with instance metrics such as CPU, memory, and network utilization.
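
A minimal sketch of this idea, assuming hypothetical readings gathered elsewhere by your monitoring code: the Boolean decision is published as a 0/1 custom metric that a single CloudWatch alarm can then watch.

```python
import boto3

cloudwatch = boto3.client('cloudwatch')

def publish_scale_signal(cpu_percent, queue_latency_ms, group_name='web-asg'):
    """Combine several readings into one Boolean scale-out signal."""
    # The combination rule is entirely up to you; this one is illustrative.
    scale_out = cpu_percent > 70 or queue_latency_ms > 500

    cloudwatch.put_metric_data(
        Namespace='MyApp/Scaling',                 # custom namespace (assumed)
        MetricData=[{
            'MetricName': 'ScaleOutNeeded',
            'Dimensions': [{'Name': 'AutoScalingGroupName', 'Value': group_name}],
            'Value': 1.0 if scale_out else 0.0,    # Boolean published as 0/1
            'Unit': 'None',
        }],
    )

# Example: readings collected by your own monitoring code
publish_scale_signal(cpu_percent=82.5, queue_latency_ms=340)
```

An alarm on ScaleOutNeeded >= 1 can then drive a scaling policy exactly as a built-in metric would.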

  6. Use the Simple Queue Service (SQS)

As an alternative to writing complex code for a custom metric, you can architect your application to take requests from a Simple Queue Service (SQS) queue and let CloudWatch monitor the queue length. The size of the computing environment is then scaled based on the number of items in the queue at a given time.
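
For instance, a CloudWatch alarm on the queue’s visible-message count can drive the same kind of scale-out policy shown earlier; the queue name, threshold, and policy ARN below are placeholders for the example.

```python
import boto3

cloudwatch = boto3.client('cloudwatch')

scale_out_policy_arn = 'arn:aws:autoscaling:...'   # ARN of your scale-out policy

# Scale out when the backlog in the work queue stays above 500 visible
# messages for three consecutive one-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName='work-queue-backlog',
    Namespace='AWS/SQS',
    MetricName='ApproximateNumberOfMessagesVisible',
    Dimensions=[{'Name': 'QueueName', 'Value': 'work-queue'}],  # assumed queue
    Statistic='Average',
    Period=60,
    EvaluationPeriods=3,
    Threshold=500,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=[scale_out_policy_arn],
)
```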

  7. Create Custom Amazon Machine Images (AMIs)

To reduce the time taken to provision instances that require a lot of custom software (not included in the standard AMIs), create a custom AMI that already contains the software components and libraries needed to bring up the server instance.

  8. Scale AWS services other than EC2, such as AWS DynamoDB

Along with AWS EC2, other resources such as AWS DynamoDB tables can also be scaled up and down automatically, although the policies are implemented differently. Since storage is the most important service after compute, efforts to optimize it yield both performance and cost benefits.
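
DynamoDB scaling goes through the Application Auto Scaling API rather than EC2 Auto Scaling groups. Here is a hedged sketch for read capacity on a hypothetical table named orders; the capacity limits and target utilization are illustrative.

```python
import boto3

aas = boto3.client('application-autoscaling')

# Register the table's read capacity as a scalable target.
aas.register_scalable_target(
    ServiceNamespace='dynamodb',
    ResourceId='table/orders',                       # assumed table name
    ScalableDimension='dynamodb:table:ReadCapacityUnits',
    MinCapacity=5,
    MaxCapacity=200,
)

# Track 70% read-capacity utilization with a target-tracking policy.
aas.put_scaling_policy(
    PolicyName='orders-read-tracking',
    ServiceNamespace='dynamodb',
    ResourceId='table/orders',
    ScalableDimension='dynamodb:table:ReadCapacityUnits',
    PolicyType='TargetTrackingScaling',
    TargetTrackingScalingPolicyConfiguration={
        'TargetValue': 70.0,
        'PredefinedMetricSpecification': {
            'PredefinedMetricType': 'DynamoDBReadCapacityUtilization'
        },
    },
)
```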

  9. Predictive Analytics for Proactive Management

Setting up thresholds as described above is reactive. You can instead leverage time-series predictive analytics to identify patterns in your traffic logs and scale resources up at predefined times, before the events take place.
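
If your traffic analysis shows, say, a weekday-morning ramp-up, a scheduled action can raise capacity ahead of it. A minimal sketch, with the group name, cron expression, and capacities as assumptions:

```python
import boto3

autoscaling = boto3.client('autoscaling')

# Raise the floor of the group every weekday at 08:00 UTC, ahead of the
# morning traffic ramp identified in the traffic-log analysis.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName='web-asg',
    ScheduledActionName='weekday-morning-ramp',
    Recurrence='0 8 * * 1-5',   # cron format, evaluated in UTC
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=6,
)
```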

  10. Custom-define Auto Scaling policies & provision AZ capacity accordingly

Auto Scaling policies must be defined based on capacity needs per Availability Zone (AZ) to avoid cost spikes, because resource pricing varies across the regions that encompass these AZs. This is especially critical for Auto Scaling groups configured to span multiple AZs together with a percent-based scaling policy.

  11. Use reactive scaling policies on top of the scheduled scaling feature

Using reactive scaling policies on top of the scheduled scaling feature gives you the ability to respond to dynamically changing conditions in your application.
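
One way to layer a reactive policy on top of the schedule is a step-scaling policy whose adjustments grow with the size of the alarm breach. This sketch uses a percent-based adjustment, tying back to the previous lesson; the group name and step sizes are illustrative.

```python
import boto3

autoscaling = boto3.client('autoscaling')

# A reactive, percent-based step policy that sits alongside the schedule:
# small breaches of the alarm threshold add 10% capacity, larger ones 30%.
# The returned PolicyARN would be referenced by a CloudWatch alarm.
autoscaling.put_scaling_policy(
    AutoScalingGroupName='web-asg',
    PolicyName='cpu-step-scale-out',
    PolicyType='StepScaling',
    AdjustmentType='PercentChangeInCapacity',
    MetricAggregationType='Average',
    StepAdjustments=[
        {'MetricIntervalLowerBound': 0,  'MetricIntervalUpperBound': 15, 'ScalingAdjustment': 10},
        {'MetricIntervalLowerBound': 15, 'ScalingAdjustment': 30},
    ],
)
```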

Conclusion:

Embrace an intelligent cloud management platform.

Here’s why: despite configuring CloudWatch and other Auto Scaling features, you cannot always get everything you need. Automating Auto Scaling further using key data-driven insights is the way forward. So, sign up for an intelligent cloud management platform like Botmetric, which surfaces key insights for managing AWS Auto Scaling, provides detailed predictive analytics, and helps you leapfrog your business towards digital disruption.

Also, do listen to Andre Dufour’s keynote on AWS Auto Scaling from the 2016 re:Invent, where he reveals that the Auto Scaling feature will also be available for the Amazon EMR (Elastic MapReduce) service, the AWS ECS container service, and Spot Fleet, along with dynamic scheduling policies.

It is evident that automation in every field is upon us, and there will soon be a time when we reach the NoOps state. If you have any questions about AWS Auto Scaling, minimizing Ops work with scheduled Auto Scaling, or cloud management in general, just comment below or give us a shout on Twitter, Facebook, or LinkedIn. We’re all ears! Botmetric Cloud Geeks are here to help.

How to Automate AMI Creation for Customized EC2 Instances using Cloud Automation Jobs

An AMI (Amazon Machine Image) brings the advantages of customized EC2 instances: it accelerates launching new instances and reduces external dependencies. By creating AMIs, you can launch identical, fully provisioned copies of a working image into multiple environments, whether development or production. However, the process of creating AMIs should be consistent, with changes between revisions identifiable and auditable. How to go about it? By automating AMI creation for customized EC2 instances.

The backdrop: Importance of AMIs

An AMI provides the information required to launch an EC2 instance, which is a virtual server in the cloud. You specify an AMI when you launch an instance, and you can launch as many instances from that AMI as you need. You can also customize an instance launched from a public AMI and save that configuration as a custom AMI for your own use.

In general, an AMI includes three key components:

  • A template for the root volume of the instance, for example an operating system, an application server, and applications
  • Launch permissions that control which AWS accounts can use the AMI to launch instances
  • A block device mapping that specifies the volumes to attach to the instance when it’s launched

Even though AMIs bring the advantages of customized EC2 instances, they do not help when you need additional customization based on run-time information. Hence, we recommend keeping the AMI creation process consistent across revisions, and the best way to create AMIs consistently is to automate the whole process using Botmetric Cloud Automation Jobs.

Automate AMI Creation

Using our Botmetric Cloud Automation Jobs, you can automate the task of creating AMIs of your customized EC2 instances. These jobs can be scheduled on either an interval or a cron basis, depending on your requirements.

In the dashboard, you can select a single instance or a group of instances for creating AMIs based on their tags.

Only instances with these tags are used to create AMIs. The AMIs can be created with the “only Root Volume” and “no Reboot” flags, which control how the AMIs are created (see the sketch after the list below):

  • If only RootVolume is selected in the dashboard: Botmetric creates an AMI that contains only the root volume. If it is deselected, the AMI is created for the instance including all attached block devices
  • If noReboot is selected in the dashboard: Botmetric creates the AMI without restarting the instance. If it is deselected, Botmetric first shuts down the instance, creates the AMI, and then starts the instance again
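
Under the hood, both flags map onto parameters of the EC2 CreateImage API. The boto3 sketch below is not Botmetric’s actual implementation, just an illustration; the instance ID, secondary device name, and AMI name are assumptions.

```python
import boto3
import datetime

ec2 = boto3.client('ec2')
stamp = datetime.datetime.utcnow().strftime('%Y-%m-%d-%H%M')

ec2.create_image(
    InstanceId='i-0123456789abcdef0',            # assumed tagged instance
    Name='web-server-' + stamp,
    NoReboot=True,                                # "no Reboot": do not restart the instance
    BlockDeviceMappings=[
        # "only Root Volume": suppress a secondary volume from the image.
        {'DeviceName': '/dev/xvdb', 'NoDevice': ''},
    ],
)
```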

Additionally, you can choose how many AMIs to retain after new ones are created for an instance, which keeps only the latest AMIs available for that instance.
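
The retention idea can be sketched as follows, assuming the automatically created AMIs carry a tag identifying the backup job; again, this is an illustration rather than Botmetric’s implementation.

```python
import boto3

ec2 = boto3.client('ec2')
KEEP = 3   # number of most recent AMIs to retain

images = ec2.describe_images(
    Owners=['self'],
    Filters=[{'Name': 'tag:BackupJob', 'Values': ['web-server-ami']}],  # assumed tag
)['Images']

# Newest first, then deregister everything beyond the retention count.
images.sort(key=lambda img: img['CreationDate'], reverse=True)
for old in images[KEEP:]:
    ec2.deregister_image(ImageId=old['ImageId'])
    # Note: deregistering an AMI does not delete its backing EBS snapshots.
```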

To Conclude

As AWS experts, we suggest that all our customers adopt a hybrid approach: build the static components of your stack into AMIs, and configure the dynamic aspects that change regularly (such as application code) at run time.

Ultimately, you need to build your AMI creation process around your requirements, such as deployment frequency, reduction of external dependencies, and the need to scale quickly. To learn how to create AMIs for EC2 instances in detail, read our support page here.

This product feature write-up is written by our Software Developer Engineer, Anoop Khandelwal.

Until we come up with the next blog post, do keep in touch with us on Facebook, Twitter, and LinkedIn.

Automating on AWS Cloud – The DevOps Way

With innovations accelerating and user demands increasing, businesses are becoming more agile with each passing day. To sustain operational excellence and achieve overall business goals, organizations need to stay agile, and this shift is being driven downstream by the evolution of DevOps.

But DevOps isn’t as easy as it sounds. Deploying a highly efficient Amazon Web Services (AWS) environment without expensive configuration management tools is possible, but it requires serious effort, and there is plenty of room for error.

AWS offers a wide range of tools and services that help you configure and deploy your AWS resources, such as CloudFormation and Elastic Beanstalk. But these tools cannot manage your AWS environment fully: they only cover the AWS objects you create, and they cannot deal with the software and configuration data present on your EC2 instances.

While the cloud is emerging as a hero for enterprises by giving them a great platform to manage their multifaceted software applications, enterprises are also looking for more flexibility in how they build software. Many have migrated from conventional models to agile or lean development practices, and this shift has spread to operations teams as well, shortening the long-standing gap between traditional Development and Ops teams.

By providing a flexible and highly efficient environment, Amazon Web Services (AWS) has facilitated the growth and profitability of clients such as Netflix, Airbnb, and Etsy, all of whom have embraced DevOps. In this post, we deconstruct the elements of DevOps behind those successes and share some best practices and practical examples.

How do you make sure your RDS and EBS data is backed up on time? Do you keep copies of your AWS snapshots across regions to be prepared for disaster recovery? Botmetric offers Cloud Automation jobs for these use cases and many more.

Here are some of the Cloud Automation jobs that will help you save time and advance your operational agility.

Take EBS volume snapshot based on instance/volume tags

Enable regular snapshots for your AWS EBS volumes. Use Botmetric’s Cloud Automation to schedule a job that creates snapshots automatically for EBS volumes carrying specified instance or volume tags. This also helps you stay DR-ready.
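
A rough sketch of the API calls such a job would make, assuming a Backup=daily volume tag convention of your own choosing:

```python
import boto3

ec2 = boto3.client('ec2')

# Snapshot every EBS volume carrying the Backup=daily tag (assumed convention).
volumes = ec2.describe_volumes(
    Filters=[{'Name': 'tag:Backup', 'Values': ['daily']}],
)['Volumes']

for vol in volumes:
    ec2.create_snapshot(
        VolumeId=vol['VolumeId'],
        Description='Scheduled backup of ' + vol['VolumeId'],
    )
```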

Take RDS snapshot based on RDS tags

Enable regular snapshots for your AWS RDS instances. Use Botmetric’s Cloud Automation to schedule a job that creates snapshots automatically for RDS instances carrying the specified tags.
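
The RDS variant goes through a different API. A sketch assuming the same Backup=daily tag convention:

```python
import boto3
import datetime

rds = boto3.client('rds')
stamp = datetime.datetime.utcnow().strftime('%Y-%m-%d-%H%M')

# Snapshot every RDS instance carrying the Backup=daily tag (assumed convention).
for db in rds.describe_db_instances()['DBInstances']:
    tags = rds.list_tags_for_resource(ResourceName=db['DBInstanceArn'])['TagList']
    if any(t['Key'] == 'Backup' and t['Value'] == 'daily' for t in tags):
        rds.create_db_snapshot(
            DBInstanceIdentifier=db['DBInstanceIdentifier'],
            DBSnapshotIdentifier=db['DBInstanceIdentifier'] + '-' + stamp,
        )
```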

Stop EC2 Instances based on instance tags

Stop instances that are no longer required and save some cost. Botmetric’s Cloud Automation can schedule a job that automatically stops EC2 instances at a specified time.

Start EC2 Instances based on instance tags

Start your stopped instances whenever they are required. Botmetric’s Cloud Automation can schedule a job that automatically starts EC2 instances at a specified time.
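
Both the stop and start jobs above boil down to the same pattern: resolve the tagged instances, then call the corresponding EC2 action on a schedule (via cron or an automation job). A minimal sketch, with the tag key and value as assumptions:

```python
import boto3

ec2 = boto3.client('ec2')

def instances_with_tag(key, value):
    """Return the IDs of instances carrying the given tag."""
    reservations = ec2.describe_instances(
        Filters=[{'Name': 'tag:' + key, 'Values': [value]}],
    )['Reservations']
    return [i['InstanceId'] for r in reservations for i in r['Instances']]

ids = instances_with_tag('Schedule', 'office-hours')   # assumed tag convention
if ids:
    ec2.stop_instances(InstanceIds=ids)     # evening job
    # ec2.start_instances(InstanceIds=ids)  # morning job
```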

Create AMI for EC2 Instances based on instance tags

Use automation to create AMIs for EC2 instances based on their instance tags.

Copy EBS Volume snapshot (based on volume tags) across regions

Enable your data backups to be copied across AWS regions. Use Botmetric’s Cloud Automation to schedule a job that, at specified intervals, automatically copies EBS volume snapshots (selected by volume tags) from a source region to a destination region.
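
Cross-region copies are issued from the destination region. Here is a sketch assuming source us-east-1, destination us-west-2, and snapshots tagged Backup=daily (Botmetric’s job keys off volume tags, which would require an extra lookup):

```python
import boto3

SOURCE_REGION = 'us-east-1'
source = boto3.client('ec2', region_name=SOURCE_REGION)
dest = boto3.client('ec2', region_name='us-west-2')

snapshots = source.describe_snapshots(
    OwnerIds=['self'],
    Filters=[{'Name': 'tag:Backup', 'Values': ['daily']}],  # assumed tag
)['Snapshots']

for snap in snapshots:
    # CopySnapshot is called in the destination region, pointing back at the source.
    dest.copy_snapshot(
        SourceRegion=SOURCE_REGION,
        SourceSnapshotId=snap['SnapshotId'],
        Description='DR copy of ' + snap['SnapshotId'],
    )
```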

Copy RDS snapshot (based on RDS tags) across regions

Using Botmetric’s Cloud Automation, you can schedule a job that, at specified intervals, automatically copies RDS snapshots (selected by RDS tags) across regions.

How can automation help you further?

Auto-Healing with 24×7 DevOps Automation

Automate your most common and repetitive AWS tasks to save up to 30% of your time. Detect and fix critical issues with just the click of a button.

  • Fix issues 10x faster, within seconds, with the ‘CLICK TO FIX’ feature
  • Automate start/stop of EC2 instances to save more time and avoid unnecessary expenses
  • Resolve problems on demand or automatically to save cost and improve your operational agility
  • One-click log activation for load balancers and AWS CloudTrail
  • A quick ‘How-To-Fix’ guide to resolve audit issues

Implementing DevOps automation opens up extremely helpful prospects and improves operational excellence and time-to-market. Automation also helps reduce expenses along several dimensions, including manpower costs, resource costs, complexity costs, and, most valuable in the eyes of industry leaders, time costs.

DevOps has become a key part of enterprise IT planning. The practical way of managing operations and security in the cloud is evolving quickly toward an automation-first approach, and the Cloud Automation jobs offered by Botmetric cover all of the use cases described above.

Take up a 14-day free trial to learn how Botmetric can simplify your AWS cloud automation tasks and make them 10x faster.

With Botmetric’s AWS DevOps Automation, you can supervise your everyday cloud tasks with just a click. Not only that, you can minimize looming security risks while maintaining fast growth and quick time-to-market on the production side. Automation also helps you reduce CloudOps overload: it eliminates repetitive, boring tasks so you can focus on what matters most for your business. Automating your data backups not only frees you from the fear of losing them but also enables you to run your business smoothly. And as we rightly say, DevOps in the cloud is a match made in heaven; implementing these best practices will let you enjoy the freedom of saving more time by automating your routine cloud tasks.

So what does the future hold for DevOps?  Tweet your thoughts to us.