The January Roundup @ Botmetric: Solidifying DevOps with NoOps on the Cloud

The first month of 2017 at Botmetric has ended on a good note. We launched Alert Analytics, one of the most sought-after features that every DevOps engineer on the cloud wants to have.

Plus, we have been contemplating what DevOps and NoOps on the cloud will look like in 2017 and beyond. We spoke about how DevOps drives the cloud more than ever. Likewise, the DevOps community has long been speaking about the emergence of NoOps, especially around continuous software delivery, continuous security, and high-end business contribution.

But there seems to be a lot of confusion in the DevOps community: most DevOps folks believe ‘NoOps’ is a Utopian paradise, and, more so, a threat to IT operations that would downsize the teams involved. The fact, however, is that NoOps simply minimizes the amount of time a DevOps engineer spends on Ops. To put this in perspective, we spoke about what NoOps is through the eyes of a DevOps engineer.

Besides, to further help the DevOps community, Botmetric is conscientiously working on improving its Ops & Automation product around NoOps, to help define the future of cloud computing.

Alert Analytics: A new feather in Botmetric’s cap

Enhancements are an essential part of growth. This month, the Botmetric product engineering team took cloud management a notch higher with the release of Alert Analytics in its Ops & Automation product.

What is it about: Managing operational email alerts generated by systems  

How will it help: It gives DevOps engineers a bird’s-eye view of all the system-generated email alerts from multiple monitoring systems like Datadog, New Relic, etc., and detects anomalies across a cloud infrastructure, thus helping manage alerts efficiently.

Where can you find this feature on Botmetric: In Ops & Automation.

Know more about it on this blog post.

Knowledge Sharing @ Botmetric

Botmetric published several blogs to help make your cloud management a breeze. Do check out some of the trending blogs posted on Botmetric this month.

Cloud Computing in 2017: An Op-Ed From the Cloud Geeks

Know the trends that will reign cloud computing in 2017: Microservices, NoOps, Serverless Computing, Machine Learning, and more.

Blue Whale Docker on AWS as CaaS, and How Enterprises Can Use it!

Docker Datacenter (DDC) on AWS as Container-as-a-Service (CaaS) is changing how enterprises deliver containerized applications, and Docker wants to make it easy to set up internal CaaS operations. Know more.

7 Blind Spots to Watch Out for in Your Public Cloud Strategy

Public cloud, today, is all about driving business innovation, agility, and enabling new processes & insights. And for this to happen, a practical public cloud strategy is the cornerstone. A strategy that is based on your own unique landscape and requirements while also taking all the critical blind spots into account. Read this blog to know these top blind spots.

11 Hard-Won Lessons We’ve Learned about AWS Auto Scaling

AWS Auto Scaling, even though it is one of the most powerful tools for leveraging the elasticity of the public cloud, is a double-edged sword, because it introduces a higher level of complexity into the technology architecture and daily operations management. Without proper configuration and testing, it might do more harm than good. So, we’ve collated a few lessons we have learned over time to help you make the most of Auto Scaling capabilities on AWS.

5 AWS Tips and Tricks to Solidify your EC2 and RDS RI Planning in 2017

Almost 92% of AWS users fail to manage EC2 and RDS Reserved Instances (RIs) properly, thus failing to optimize their AWS spend. An effective AWS cost optimization exercise starts with an integrated RI strategy that combines well-thought-out EC2 and RDS planning. To this end, we have collated the top 5 tips and tricks to solidify your EC2 and RDS RI planning. Besides, we have an exclusive cloud engineer’s guide for you. This guide demystifies RIs and covers all the pro tips, tricks, and best practices one needs to know to build an RI plan that helps save thousands of dollars. Grab your free copy now.

There are many more insightful blogs on public cloud and cloud management at Botmetric. Do read them here.

Rain or shine, Botmetric will continue to provide many more AWS cloud management features. And if you have not caught up with all the 2016 AWS re:Invent announcements, we have a cheat sheet ready for you. Until next month, stay tuned with us on Twitter, Facebook, and LinkedIn.

By the way, have you rated Botmetric, yet? Rate it now.

The Perspective: NoOps Through the Eyes of a DevOps Engineer

What is NoOps for a DevOps person? If you ask a coder, he/she will be whip-smart enough to tell you it is something about ‘No Operations.’ And if you ask a DevOps engineer, you will evoke several emotions. Some say it is about automating everything underlying the infrastructure; others counter-argue that not everything can be automated, and that it just involves minimizing a few steps in IT operations.

Whatever the case, some experts say NoOps is a progression of DevOps.

What NoOps means to the IT industry:

Several research firms, which have been following IT companies for decades, have come up with their own definitions of NoOps. TechTarget defines it as a concept where an IT environment becomes so automated and abstracted from the underlying infrastructure that there is no need for a dedicated team to manage software in-house. In other words: no dedicated team is required for operations, because the application development team can manage operations itself.

On the other hand, Forrester defines NoOps as “the goal of completely automating the deployment, monitoring and management of applications and the infrastructure on which they run.”

Adrian Cockcroft from Netflix, in one of his blog posts, mentioned that Platform-as-a-Service (PaaS) reduces the need for operations, which in turn gives rise to a NoOps culture.

So many versions, and so many ideas, out there. Phew!

What NoOps means to the DevOps fraternity:

Many who take its literal meaning perceive that with NoOps there shall be no operations whatsoever, and that a system will start, manage, report on, and repair the resources in an IT infrastructure by itself.

And for a hard-core DevOps engineer, “NoOps is a progression of DevOps.” For this engineer, it is just a process that minimizes Ops tasks by 10-15% and thus helps free up a developer’s time for more innovation.

In other words (or through the eyes of a DevOps engineer), NoOps means performing DevOps with less Ops involvement, while some parts of the DevOps process are still used for it. So, the two are not much different. This is why NoOps is seen as a progression of DevOps, similar to how DevOps was a progression from IT Ops.

Lucas Carlson, CEO of AppFog, once said, “SysOps is Blu-ray but NoOps is streaming. Blu-ray is going to be around for a long time and there is a strong market for it. There will be people wanting to play Blu-ray disks for decades to come. But streaming is a generational shift.”

The last word: NoOps is a progression of DevOps

The ultimate vision of NoOps is to free up developers’ time for more innovation. Whatever you perceive it to be: zero Ops, Ops controlled by bots, eliminating the Ops workforce, Artificial Intelligence (AI) Ops, or something more, it is a rolling stone with the right momentum.

What is your take on this? Do share your thoughts below in the comment section, or on any of our social media pages: Twitter, Facebook, or LinkedIn. Do read the Botmetric blog post on Alert Analytics to see how Botmetric is driving cloud management with NoOps capabilities.

Introducing Alert Analytics in Botmetric: The Smartest Way to DevOps Alert Management

Are you a DevOps engineer vexed by a deluge of system-generated alert emails? Ah, alert fatigue it is. We heard you. We are now on a pursuit to make every Ops engineer’s life a lot easier with intelligent and efficient alert management. To this end, we have power-packed Botmetric Ops & Automation with Alert Analytics. The new feature gives you a bird’s-eye view of all the alerts from multiple monitoring systems like Datadog, New Relic, etc. It also proactively detects anomalies across your cloud infrastructure and helps you manage alerts efficiently by reducing the noise.

The backdrop: What drove Botmetric to build Alert Analytics

Over the last decade, IT operations have seen a lot of transitions: from physical servers to cloud to containers, from admins to IT Ops, and from IT Ops to DevOps. Across all these transitions, one unavoidable yet irksome activity has stayed with the Ops team throughout: handling a deluge of system-generated alerts.

On average, an Ops engineer spends 1-2 minutes identifying an alert, and then 2 to 7 minutes analyzing the issue and looking for a quick-fix solution. This constant need to be vigilant and cognitive is quite stressful and eats up a lot of time and effort. In many cases, alerts tend to recur and may escape the engineer’s attention. Even with the help of monitoring tools, alert management is a huge challenge, as distributed cloud architectures and containers add to the complexity.

Botmetric CEO Vijay Rayapati, in one of his blog posts, collated a wishlist of what every DevOps engineer wants in place of alert fatigue, like the ability to distinguish signal from noise, the need for scope-aware alerting, etc.

Catering to this need, Botmetric Alert Analytics was built: primarily to help DevOps engineers and architects take a quick look at alerts (either by host or by metric) over a period of time, understand the patterns, and make informed decisions, thus reducing alert fatigue.

Here are the features that the new Alert Analytics offers:

  • Integrates with leading monitoring systems

Alert Analytics integrates seamlessly with many industry-standard monitoring tools. All you need to do is enable the integration from the list of Botmetric-supported integrations. Upon successful integration, it collects alerts from the monitoring systems of your choice, like Datadog, New Relic, CloudWatch, etc. We will be adding a lot more integrations over the coming months!

Configuring monitoring tools in Botmetric

 

  • Analyzes collated alerts for efficient alert management

Botmetric analyzes the collated alerts from the monitoring tools and provides you a bird’s-eye view of all the alerts from the chosen monitoring systems, thus helping you visualize alerts in the form of graphical reports.

Botmetric Alert Analytics Overview

  • Provides operational insights for a supercharged alerts management

With the Analyze option in the new feature, you can quickly discover the alerts that are bothering you the most. The feature also provides alert reports in the form of bar graphs and line graphs based on the metrics and hosts causing the most problems. These graphs can be generated with various filters, like duration, hosts, and metrics, to obtain granular information that further helps in root cause analysis.

Analyze Alerts

You can access this feature in the Botmetric Cloud Management Platform under Ops & Automation. Give it a 14-day try, experience what NoOps is, and write to us with your feedback on how we can improve it further.

Until our next blog post, do stay tuned with us on Twitter, Facebook, and LinkedIn for other interesting news from us!

5 AWS Tips and Tricks to Solidify your EC2 and RDS RI Planning in 2017

Almost 92% of AWS users fail to manage EC2 and RDS Reserved Instances (RIs) properly, thus failing to optimize their AWS spend. An effective AWS cost optimization exercise starts with an integrated RI strategy that combines well-thought-out EC2 and RDS planning. To this end, we have collated the top 5 tips and tricks to solidify your EC2 and RDS RI planning.

  1. Continuously Manage and Govern Both EC2 and RDS RIs Effectively. Don’t Stop at Reservation

RIs, whether EC2 or RDS, offer optimal savings only when they are modified, tuned, managed, and governed continuously according to your changing infrastructure environment in the AWS cloud. For instance, if you have bought the recent Convertible RIs, modify them to the desired instance class. And if you have older RIs, get them to work for you either by breaking them into smaller instances or by combining them into a larger instance, as per your requirement.

  2. Take Caution While Exchanging Standard RIs for Convertible RIs

Standard RIs work like a charm in terms of cost savings, but only when you have a good understanding of your long-term requirements. If you have no idea of your long-term demand, then Convertible RIs are perfect, because you can have a new instance type, operating system, or tenancy at your disposal in minutes without resetting the term.

However, there is a catch: AWS claims there is no fee for exchanging into a Convertible RI. True that. But when you opt for an exchange, be aware that you must acquire new RIs that are of equal or greater value than those you started with; sometimes you will need to make a true-up payment to balance the books. Essentially, the exchange process is based on the list value of each Convertible RI, and the list value is simply the sum of all payments you will make over the remaining term of the original RI.
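To make the list-value and true-up math concrete, here is a small, purely illustrative Python sketch. The numbers and the helper function are hypothetical; AWS computes the authoritative quote for you (for example, via the EC2 GetReservedInstancesExchangeQuote API).

```python
# Illustrative only: rough list-value and true-up math for a Convertible RI exchange.
# The figures below are hypothetical; AWS computes the authoritative quote for you.

def list_value(upfront_remaining, hourly_rate, hours_remaining):
    """List value = sum of all remaining payments over the rest of the term."""
    return upfront_remaining + hourly_rate * hours_remaining

# Original Convertible RI: $200 upfront still unamortized, $0.02/hr, 1 year left.
original = list_value(200.0, 0.02, 8760)

# Target Convertible RI you want to exchange into.
target = list_value(300.0, 0.03, 8760)

# You must exchange into equal or greater value; the difference is the true-up.
true_up = max(0.0, target - original)
print("original ~= $%.2f, target ~= $%.2f, true-up ~= $%.2f" % (original, target, true_up))
```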

  3. Don’t Forget to Use the Regional Benefit Scope for Older Standard RIs

The new regional RI benefit broadens the application of your existing RI discounts. It waives the capacity reservation associated with Standard RIs. With the Regional scope selected, the RI can be used by your instances in any AZ in the given Region, and the RI discount is applied automatically without you worrying about which AZ. If you frequently launch and terminate instances, this option reduces the time and effort you spend looking for optimal alignment between your RIs and your instances in different AZs. For new RI purchases, the scope defaults to Region; with older RIs, however, you need to manually change the scope from AZ to Region.
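If you would rather script the scope change than click through the console, a hedged boto3 sketch along these lines should do it (the RI ID, instance count, and type below are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder RI ID; look yours up with describe_reserved_instances().
ri_id = "ri-0123456789abcdef0"

# Change the scope of an older Standard RI from a specific AZ to the Region,
# so the discount floats across AZs instead of being pinned to one.
response = ec2.modify_reserved_instances(
    ReservedInstancesIds=[ri_id],
    TargetConfigurations=[
        {
            "InstanceCount": 2,        # must match the count being modified
            "InstanceType": "m4.large",
            "Platform": "EC2-VPC",
            "Scope": "Region",         # was "Availability Zone"
        }
    ],
)
print(response["ReservedInstancesModificationId"])
```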

  4. Leverage Content Delivery Networks (CDNs) to Reinforce EC2 RI Planning

CDNs reduce the reliance on EC2 for content delivery while providing an optimal user experience for your applications by leveraging edge locations. With CDNs, the cost of delivering content is limited to the data transfer costs for the services. In AWS, static content such as images and video files can be stored in S3 buckets. Your application’s EC2 instances can be configured as origins in the CDN so that dynamic content is cached at the edge, reducing the dependency on the backend instances.

For CDN usage above a minimum level of 10 TB per month from a single region, AWS provides significant discounts, and the discount increases as the committed capacity grows. If CDNs are included in the capacity planning for EC2, the EC2 usage requirement itself can go down for your business, thus reducing the need for RIs.

  5. Complement RDS RI Planning by Opting for NoSQL Databases and In-memory Data Stores

Just like a CDN, in-memory data stores and data caches can reduce the reliance on and utilization of RDS. AWS also provides an RI option for ElastiCache (the in-memory data store and cache service) and DynamoDB (the NoSQL database). The technical advantages of these database technologies over relational databases contribute indirectly to RDS cost optimization. Leveraging in-memory data stores can also speed up your application’s performance.

To Wrap-Up

You might have heard this several times: effective RI planning can optimize AWS costs by 5X. True that. But the fact is there is no universal formula, magic wand, or one-size-fits-all solution for perfect EC2 and RDS RI planning. Be it 2017 or 2020, the secret recipe for solid AWS RI planning lies in understanding your long-term usage and application requirements and, of course, planning reservations for all resources. To know more, read Botmetric’s expert blog, 7 Stepping Stones to Bulletproof Your AWS RI Planning.

And if you find this overwhelming, then try Botmetric Cost & Governance, which can optimize your cloud spend with smart RI capacity planning, without you having to manage RIs from your AWS console. And if you think we have missed any key points that can help bolster EC2 and RDS RI planning, just drop a comment below, or reach us on any of our social media pages: Twitter, Facebook, or LinkedIn. We are all ears!

 

DevOps Drives Cloud More Than Ever

Does DevOps drive the cloud? This is one of the technology debates buzzing in the software world these days. #DevOps is trending on Twitter, and that too along with #Cloud and #CloudComputing. Needless to say, there is heightened interest among many, like you and me, in these IT buzzwords. To add to it, many industry experts say DevOps is dictating a new approach to cloud development.

With the increased adoption of cloud, many software companies are experiencing a transition from being product-oriented to service-oriented. Earlier, these companies would develop products and hand them over to customers; now, they also take care of operations after the product has been delivered. To this end, the cloud has come out as a clear winner, as it makes service delivery along with product delivery a breeze.

Many companies are now more focused on offering astounding customer experiences as well. Even though DevOps is all about communication, collaboration, integration, automation, and cooperation, it is more than just a set of tools. It is an outlook that helps companies prioritize great products and great customer experiences over complex processes, and it enables careful product development as well as operations.

Together, these factors have led to the exponential growth of DevOps on the cloud. More so, DevOps’ inherent recognition of the interdependence of the different teams that develop, run, and maintain software is now acknowledged far and wide. Right from software development to quality assurance, this approach defines an effective process. It bolsters communication, collaboration, and integration between software developers and IT operations.

Thus, aligning DevOps with the cloud ensures efficiency, and that is a definite plus.

Besides, the software industry is quickly moving towards extreme IT service delivery agility and innovation at a supersonic rate. To cope with this fast-moving world, companies need to quicken their pace, shorten work cycles, and increase delivery efficiency. DevOps gives these efforts a boost: it recognizes the interdependence of software development and IT operations and helps software companies produce software and IT services more rapidly, with frequent iterations.

Cloud, Agile development, and DevOps: Paving the Way for Extreme Digital Disruption

According to Forrester, enterprise cloud computing adoption accelerated in 2016 and will do so again in 2017. While adoption has increased, knowing how to use the cloud to achieve digital disruption is the key question. Well, it’s simple: tie cloud computing to agile development and follow the DevOps approach.

So, for the cloud-DevOps-agile combo to work like a charm, there are a few core objectives that need sheer attentiveness. First, there should be a continuous process that includes all aspects of development, testing, staging, deployment, and operations. Second, the parts of the process must be completely automated from the very beginning, including self- and auto-provisioning of resources in the cloud. Also, the platform on which applications are deployed must support unlimited provisioning of resources via the cloud.

“If Cloud is an instrument, DevOps is the musician that plays it,” said someone once. And we totally agree!

How are you using DevOps for your cloud infrastructure? Botmetric is using DevOps intelligence on the cloud, essentially NoOps, the future of cloud computing.

Try the Botmetric Ops & Automation solution if you want hands-on experience with NoOps. Drop us a line on Twitter, Facebook, or LinkedIn for anything cloud.

Editor’s Note: This post is an adaptation and update of our previously published blog, ‘Does DevOps Drive Cloud?’

Blue Whale Docker on AWS as CaaS, and How Enterprises Can Use it!

We live in an exciting era of data centers and cloud operations. In these data centers, innovative technologies such as Docker containers eliminate the superfluous processes that can bog down a machine, enabling servers to live up to their potential. With the availability of Docker Datacenter (DDC) as Container-as-a-Service (CaaS), the excitement among enterprises is greater than ever.

Making Sense of Docker Datacenter (DDC) as Container-as-a-Service (CaaS)

As you may know, containers make it easy to develop, deploy, and deliver applications that can be brought up and torn down in a matter of seconds. This flexibility makes them very useful for DevOps teams automating continuous integration and deploying containers.

And the Docker Datacenter offering, which can be deployed on-premises or in the cloud, makes it even easier for enterprises to set up their own internal CaaS environments. Put simply, the package helps integrate Docker into enterprise software delivery processes.

Basically, the CaaS platform provides both container and cluster orchestration. And with cloud templates pre-built for DDC, developers and IT operations staff can move Dockerized applications not only into the cloud but also into and out of their premises.

Below is a brief view of the architecture that DDC offers:

Docker Datacenter Architecture
Image Source: Docker | https://www.docker.com/products/docker-datacenter

A pluggable architecture provides flexibility in compute, network, and storage, which are generally part of a CaaS infrastructure, and it does so without disrupting the application code. So, enterprises can leverage existing technology investments with DDC. Plus, Docker Datacenter consists of integrated solutions, including open source and commercial software, and the integration between them includes full Docker API support, validated configurations, and commercial support for the DDC environment. The open APIs allow DDC CaaS to integrate easily into an enterprise’s existing systems, like LDAP/AD, monitoring, logging, and more.

Before we move on to Docker on AWS, it is worth taking a look at the pluggable Docker architecture.

Pluggable Docker Architecture.
Image Source: Docker

As we can see from the image, DDC is a combination of several Docker projects:

  • Docker Universal Control Plane [UCP]
  • Docker Trusted Registry [DTR]
  • Commercially Supported Docker Engine [CSE]

The Universal Control Plane (UCP)

UCP is a cluster management solution that can be installed on-premises or on a virtual private cloud, says Docker. It exposes the standard Docker API, so you can continue to use the tools you already know to manage an entire cluster. With Docker UCP, you can manage the nodes of your infrastructure as well as the apps, containers, networks, images, and volumes running on them.

Docker UCP has its own built-in authentication mechanism and supports LDAP and Active Directory, as well as role-based access control (RBAC). This ensures that only authorized users can access and make changes to the cluster.

In addition, the UCP, which is a containerized application, allows you to manage a set of nodes that are part of the same Docker Swarm. The core component of the UCP is a globally-scheduled service called ‘ucp-agent.’ Once this service is running, it deploys containers with other UCP components and ensures that they continue to run.
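Because UCP exposes the standard Docker API, the tooling you already use keeps working against the cluster. As a rough illustration (the UCP hostname and client-bundle paths below are hypothetical), the Docker SDK for Python can talk to a UCP controller much like it talks to a local engine:

```python
import docker

# Hypothetical UCP endpoint and client-bundle certificates downloaded from UCP.
tls_config = docker.tls.TLSConfig(
    client_cert=("ucp-bundle/cert.pem", "ucp-bundle/key.pem"),
    ca_cert="ucp-bundle/ca.pem",
)
client = docker.DockerClient(base_url="tcp://ucp.example.com:443", tls=tls_config)

# The same calls you would make against a local engine now run against the cluster.
for container in client.containers.list():
    print(container.name, container.status)

print(client.info()["Name"])  # cluster-level info as reported by UCP
```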

DDC Universal Control Panel in Architecture
Image Source: Docker.com

Docker Trusted Registry (DTR)

DTR allows you to store and manage your Docker images, either on-premises or in your virtual private cloud, to support security or regulatory compliance requirements. Docker security is one of the biggest challenges developers face when it comes to enterprise adoption of Docker. DTR uses the same authentication mechanism as Docker UCP: a built-in mechanism that integrates with LDAP and also supports RBAC. This allows enterprises to implement individualized access control policies as necessary.

Commercial Supported Docker Engine (CSE)

CSE is essentially the standard Docker Engine with commercial support and additional orchestration features.

For many enterprises, it seems that the more components there are in the architecture, the more complex it gets, and hence the more painful a DDC deployment will be. That’s a myth, thanks to AWS and Docker: they have already prepared a recipe to deploy the entire DDC on AWS following AWS best practices. The recipe comes as a CloudFormation template with enhanced security taken care of. The diagram below shows the overall commercial architecture.

Cloud Formation Template for a Commercial Architecture
Image Source: AWS

For detailed resource utilization, enterprises just need to check out the CloudFormation stack architecture shown below. It may seem complex, but it is the most secure approach for enterprise production. To get the infrastructure ready, one has to launch the stack.

Cloudformation Stack Architecture
AWS CloudFormation Stack Architecture

The CloudFormation template creates the following resources and performs the following activities:

  • Creates a new VPC, private and public subnets in different AZs, ELBs, NAT gateways, internet gateways, and Auto Scaling Groups, all based on AWS best practices
  • Creates and configures an S3 bucket for DDC, used for certificate backup and DTR image storage
  • Deploys 3 UCP controllers across multiple AZs within the VPC and creates a UCP ELB with preconfigured HTTP health checks
  • Deploys a scalable cluster of UCP nodes and backs up the UCP root CAs to S3
  • Creates 3 DTR replicas across multiple AZs within the VPC, with preconfigured health checks
  • Creates a jumphost EC2 instance for SSH access to the DDC nodes
  • Creates a UCP nodes ELB with preconfigured health checks (TCP port 80), which can be used for your applications deployed on UCP
  • Deploys NGINX + Interlock to dynamically register your application containers
  • Creates a CloudWatch Log Group (called DDCLogGroup) and allows log streams from DDC instances; it also automatically logs the UCP and DTR installation containers

Clicking Launch Stack redirects you to the AWS console, where the CloudFormation template page appears with the Amazon S3 template URL already filled in.
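If you prefer to script the launch instead of clicking through the console, a minimal boto3 sketch might look like the following. The template URL and parameter keys are placeholders; check them against the actual DDC template before running anything.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

stack = cfn.create_stack(
    StackName="docker-datacenter",
    # Placeholder URL: use the S3 template URL pre-filled on the Launch Stack page.
    TemplateURL="https://s3.amazonaws.com/<your-bucket>/docker-datacenter.template",
    Parameters=[
        # Parameter keys vary by template version; these are illustrative only.
        {"ParameterKey": "KeyName", "ParameterValue": "my-ec2-keypair"},
        {"ParameterKey": "InstanceType", "ParameterValue": "m4.large"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # the stack creates IAM resources
)

# Block until the stack reaches CREATE_COMPLETE, then read the UCP/DTR URLs
# from the stack outputs.
waiter = cfn.get_waiter("stack_create_complete")
waiter.wait(StackName=stack["StackId"])
outputs = cfn.describe_stacks(StackName="docker-datacenter")["Stacks"][0]["Outputs"]
for out in outputs:
    print(out["OutputKey"], "=", out["OutputValue"])
```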

Cloudformation template page

Once the stack status shows CREATE_COMPLETE, click on Outputs to get the login URLs of UCP and DTR. Refer to the image below:

Login URL of UCP and DTR

 

After the UCP Login, the user will get a secure login web page as shown below:

Secure Docker Login Webpage

A beautiful, detailed Control Panel will appear as shown below:

A beautiful detailed Control Panel

 

And a DTR panel will look something like this:

DTR Panel

Voila! Now, Docker Datacenter (as CaaS) is ready to use on AWS.

As a prospective DDC user, you might be wondering why anyone would choose DDC for application deployment. Here’s why: the technology that makes an application more robust and nimble will be king, and with the adoption of a microservices architecture, your application becomes more competitive.

To set the context: nowadays, Docker is considered one of the best containerization technologies for deploying microservices-based applications. While implementing it, there are a few key steps that need to be followed:

  1. Package the microservice as a (Docker) container image.
  2. Deploy each service instance as a container.
  3. Perform scaling by changing the number of container instances.
  4. Build, deploy, and start a microservice, which is much faster than with a regular VM.
  5. Write a Docker Compose file that lists all the images and their connectivity, and then build it.

Lots of enterprises are now considering refactoring their existing Java and C++ legacy applications by Dockerizing them and deploying them as containers, says Docker. Hence, this technology was built to provide a distributed application deployment architecture that can manage workloads and can be deployed in both private and, eventually, public cloud environments.

In this way, DDC solves many of the problems faced by today’s enterprises, including the BBC, Splunk, New Relic, Uber, PayPal, eBay, GE, Intuit, The New York Times, Spotify, and others.

For demo purposes, Docker provides users with an example microservices application made up of different services, such as:

  • A Python web app that lets you vote between two options
  • A Redis queue that collects new votes
  • A Java worker that consumes votes and stores them in the database
  • A Postgres database backed by a Docker volume
  • A Node.js web app that shows the results of the voting in real time

Here’s how: go to the Resources section of UCP, click on Applications, and then + Deploy Compose.yml.

Docker resources dashboard

Enter the name of the application and write the Docker Compose YAML, or upload the file, and then click Create. After some time, you will see some logs, and then the application will be deployed.

In the image below, we can see that 5 containers were spun up when the application was deployed. Each container runs its own service or worker.

Docker dashboard

If you want to hit the application from a web browser, get the exact IP and port as shown in the image below:

Docker Dashboard

 

After performing this activity, the application will be up and running. Refer to the illustration below:

An Illustrative App
An Illustrative App

If you want to modify the code, you can do so from the web UI. This means you can access the container CLI from the web as well.

Accessing the container CLI from Web

To Wrap Up:

DDC as CaaS is changing how enterprises deliver containerized applications. The idea is that Docker wants to make it easy for enterprises to set up their own internal Containers-as-a-Service (CaaS) operations.

Are you an enterprise looking to leverage DDC as CaaS on AWS? As a Premier AWS Consulting Partner, we at Minjar have your back! Do share your comments and thoughts with us on Twitter, Facebook, or LinkedIn. You can drop a comment below too.

References used:

  • https://github.com/nicolaka/ddc-aws
  • https://github.com/docker/docker-birthday-3/tree/master/example-voting-app
  • https://www.docker.com/sites/default/files/RA_UCP%20Load%20Balancing-Feb%202016_1.pdf

Top 11 Hard-Won Lessons We’ve Learned about AWS Auto Scaling

Auto Scaling, as we know it today, is one of the most powerful tools for leveraging the elasticity of the public cloud, namely Amazon Web Services (AWS). Its ability to improve the availability of an application or a service, while still keeping cloud infrastructure costs in check, has been applauded by enterprises across verticals, from fleet management services to NASA’s research bases.

However, at times, AWS Auto Scaling can be a double-edged sword, because it introduces a higher level of complexity into the technology architecture and daily operations management. Without proper configuration and testing, it might do more harm than good. Even so, all these challenges can be nullified with a few precautions. To this end, we’ve collated a few lessons we have learned over time to help you make the most of Auto Scaling capabilities on AWS.

  1. Use Auto Scaling, whether your application is stateful or dynamic

There is a myth among many AWS users that AWS Auto Scaling is hard to use and not very useful for stateful applications. The fact is that it is not hard to use: you can get started in minutes, with a few precautionary measures like using sticky sessions, keeping provisioning time to a minimum, etc. Plus, AWS Auto Scaling monitors your instances and replaces them if they become unhealthy.

Here’s how: once Auto Scaling is activated with an Auto Scaling group, it provisions instances accordingly behind the load balancer, maintaining the performance of the application. In addition, Auto Scaling’s rebalance feature ensures that your capacity is automatically distributed among several Availability Zones to maximize the resilience of the application. So, whether your application is stateful or dynamic, AWS Auto Scaling helps maintain its performance irrespective of compute capacity demands.
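For reference, here is a hedged boto3 sketch of a multi-AZ Auto Scaling group behind a load balancer; all names, sizes, and AZs are placeholders you would adapt to your own environment.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Placeholder names: the launch configuration and classic ELB must already exist.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-launch-config",
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    AvailabilityZones=["us-east-1a", "us-east-1b", "us-east-1c"],
    LoadBalancerNames=["web-elb"],   # ELB fronting the group
    HealthCheckType="ELB",           # replace instances the ELB marks unhealthy
    HealthCheckGracePeriod=300,
)
```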

  2. Identify the metrics that impact performance during capacity planning

Identify the metrics for the constraining resources of an application, such as CPU utilization and memory utilization. Doing so helps track how those resources impact the application’s performance, and the analysis yields the threshold values that let you scale the resources up and down at the right time.

  3. Configure AWS CloudWatch to track the identified metrics

The best way forward is to configure Auto Scaling with AWS CloudWatch so that you can fetch these metrics as and when needed. Using CloudWatch, you can track the metrics in real time, and CloudWatch can be configured to trigger the scaling of an Auto Scaling group based on the state of a particular metric.
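As a rough example (group name, policy, and threshold values are placeholders), a CloudWatch alarm on average CPU can be wired to a simple scaling policy like this with boto3:

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# A simple "add one instance" policy on a placeholder Auto Scaling group.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="scale-out-on-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,
)

# Alarm when average CPU across the group stays above 70% for 10 minutes,
# and trigger the scaling policy above.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```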

  4. Understand the functionality of Auto Scaling groups while using dynamic Auto Scaling

Resource configurations have to be specified in the Auto Scaling groups feature provided by AWS. Auto Scaling groups also include rules defining the circumstances under which resources will be launched dynamically. AWS allows assigning Auto Scaling groups to Elastic Load Balancers (ELBs), so that requests coming to the load balancers are routed to the newly deployed resources as soon as they are commissioned.

  5. Use Custom Metrics for Complex Auto Scaling Policies

A practical auto-scaling policy often needs to consider multiple metrics, instead of the single metric a CloudWatch alarm watches. The best approach to circumvent this restriction is to code a custom metric as a Boolean function using Python and the Boto framework. You can use application-specific metrics as well, along with default metrics like CPU utilization, network I/O, etc.
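The sketch below shows one possible shape of that idea: it combines several host-level signals into a single Boolean and publishes it as a custom CloudWatch metric that one alarm can watch. It assumes the psutil library is available on the instance, and the namespace and thresholds are made-up placeholders.

```python
import boto3
import psutil  # assumption: a host-level library used here to read local utilization

cloudwatch = boto3.client("cloudwatch")

def under_pressure():
    """Combine several signals into one Boolean: scale out only if ALL are hot."""
    cpu_hot = psutil.cpu_percent(interval=5) > 70
    mem_hot = psutil.virtual_memory().percent > 75
    return cpu_hot and mem_hot

# Publish 1 or 0 as a custom metric; a single CloudWatch alarm on this metric
# can then drive the scaling policy, even though it encodes multiple conditions.
cloudwatch.put_metric_data(
    Namespace="Custom/AppScaling",   # placeholder namespace
    MetricData=[{
        "MetricName": "ScaleOutSignal",
        "Value": 1.0 if under_pressure() else 0.0,
        "Unit": "None",
    }],
)
```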

  6. Use the Simple Queue Service

As an alternative to writing complex code for a custom metric, you can also architect your applications to take requests from the Simple Queue Service (SQS) and let CloudWatch monitor the queue length, scaling the compute environment based on the number of items in the queue at a given time.

  7. Create Custom Amazon Machine Images (AMIs)

To reduce the time taken to provision instances that contain a lot of custom software (not included in the standard AMIs), you can create a custom AMI that contains the software components and libraries required to create the server instance.
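Baking the golden image can be as simple as imaging a fully configured instance; here is a short boto3 sketch with placeholder IDs and names:

```python
import boto3

ec2 = boto3.client("ec2")

# Image a fully configured "golden" instance so new Auto Scaling instances
# boot with your software and libraries already installed.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # placeholder instance ID
    Name="web-golden-ami-2017-01",
    Description="Web tier with app code and agents pre-installed",
    NoReboot=True,   # skip the reboot if a crash-consistent image is acceptable
)
print("New AMI:", image["ImageId"])
```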

  8. Scale AWS services other than EC2, like DynamoDB

Along with EC2, other resources, such as DynamoDB, can also be scaled up and down using Auto Scaling; however, the implementation of the policies is different. Since storage is the second most important service after compute, efforts to optimize storage yield good performance as well as cost benefits.

  9. Use Predictive Analytics for Proactive Management

Setting up thresholds as described above is reactive. You can instead leverage time-series predictive analytics to identify patterns in the traffic logs and ensure that resources are scaled up at pre-defined times, before events take place.
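Scheduled scaling actions are the simplest way to act on such predictions. A hedged boto3 sketch, with placeholder group names and capacities:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale the group out ahead of a known weekday-morning traffic peak
# (identified from patterns in the traffic logs) and back in after hours.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="weekday-morning-scale-out",
    Recurrence="0 8 * * 1-5",   # cron syntax, in UTC
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=6,
)

autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="nightly-scale-in",
    Recurrence="0 20 * * *",
    MinSize=2,
    MaxSize=12,
    DesiredCapacity=2,
)
```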

  10. Custom-define Auto Scaling policies and provision AZ capacity accordingly

Auto Scaling policies must be defined based on capacity needs per Availability Zone (AZ) to avoid cost spikes, because resource pricing varies across the regions that encompass these AZs. This is especially critical for Auto Scaling groups configured to leverage multiple AZs along with a percent-based scaling policy.

  11. Use reactive scaling policies on top of the scheduled scaling feature

Using reactive scaling policies on top of the scheduled scaling feature gives you the ability to truly respond to the dynamically changing conditions in your application.

Conclusion:

Embrace an intelligent cloud management platform.

Here’s why: despite configuring CloudWatch and other Auto Scaling features, you cannot always get everything you need. Further automating various Auto Scaling features using key data-driven insights is the way forward. So, sign up for an intelligent cloud management platform like Botmetric, which surfaces key insights to manage AWS Auto Scaling, provides detailed predictive analytics, and helps you leapfrog your business towards digital disruption.

Also, do listen to Andre Dufour’s keynote on AWS Auto Scaling from the 2016 re:Invent, where he reveals that Auto Scaling will also be available for the Amazon EMR (Elastic MapReduce) service, the AWS ECS container service, and Spot Fleet, along with dynamic scaling policies.

It is evident: automation in every field is upon us, and there will soon be a time when we reach the NoOps state. If you have any questions about AWS Auto Scaling, how you can minimize Ops work with scheduled Auto Scaling, or anything about cloud management, just comment below or give us a shout-out on Twitter, Facebook, or LinkedIn. We’re all ears! Botmetric Cloud Geeks are here to help.

Top 7 Blind Spots to Watch Out for in Your Public Cloud Strategy

A May 2016 survey cites that 51% of surveyed organizations took over a year to plan their public cloud strategy; some take up to three years. It’s completely understandable why it takes so long: a lot of detailing goes into it, such as understanding the precise costs and challenges the cloud will introduce, knowing how to make the public cloud approach work for the organization, and deciding which tools and technology choices will supplement the cloud adoption.

Despite a detailed, pragmatic approach to building a public cloud strategy, a majority of organizations still fail at some point. Our cloud geeks attribute this to ‘blind spots’ that get overlooked, either due to complexity or due to lack of awareness. Soon enough, in some cases, these blind spots take the team back to the boardroom.

To usher in the right approach towards building a seamless and successful public cloud strategy, we’ve collated the top seven blind spots that smart companies watch out for during their cloud-first and cloud-ready journey.

  1. Not calculating the ‘REAL’ Total Cost of Ownership (TCO)

Many companies have realized that the real benefit of cloud computing is not the cost savings it can bring, but the agility and time-to-market. And the prominent factor that plays a vital role in bringing such nimbleness is the TCO model. However, many companies don’t define the actual TCO; they go by the cost data alone, which may save some operational expenses in the short term but not in the long term. Hence, they end up missing the mark when it comes to IT’s ability to deliver real value to the business.

The way forward is to use TCO models that also identify gray areas and take them into account during calculations. Mainly, these models must capture the actual value of cloud-based technology. They must also take critical factors into account, like the existing infrastructure in place, the existing skills and workforce involved, the cost of all the cloud services in operation, the value of agility and time-to-market, future capital expenditures, and the cost of risk around compliance issues.

  2. Not knowing who owns the data in the cloud, and how to recover it

Understanding the terms of a cloud service is paramount, agreed. But it is even more critical to know who owns the data in the system. The decisiveness lies in carefully checking the terms and conditions of the contract and ensuring the data policy includes all the fine print that establishes that the actual owner owns the data.

By doing so, you, as a user, can own and recover the data on-demand. Above all, your service provider cannot access, use, or share your data in any shape or form without your written permission.

  3. Not having strong Service Level Agreements (SLAs)

While you focus on putting the data policy and terms of the cloud service in place, you should not shift the spotlight away from SLAs. A strong SLA goes a long way in monitoring, measuring, and managing how the cloud service provider’s services are performing. The essence lies in working closely with lawyers who can help define strong contracts, help you get what you want from the service, and determine whether this can be expressed in the contract.

If you still find this less important, consider this scenario: you have SLAs with AWS but have no idea how the SaaS offering built on it is performing. That’s because AWS gives you figures for the performance of the infrastructure, not the software.

  4. Not making complete use of the elasticity of the cloud

Many enterprises fail to develop a cloud strategy that is linked to business outcomes because they miss out on leveraging the real benefit of the elasticity the cloud offers. They purchase instances in bulk to handle peak demand, just as they did with on-premise IT infrastructure, and then turn a blind eye to idle resources that could easily be optimized. They also overlook the fact that ‘anything and everything’ on the cloud can be codified, and that APIs can be used to automate tasks on the cloud completely.

Even when APIs are used, weak APIs and API mismanagement can take a toll on the cloud’s elasticity. Essentially, going NoOps, with efficient APIs and API management, is the way forward.

  5. Not appointing a competent DevOps team

While Continuous Development, Continuous Testing, Continuous Integration, and Continuous Deployment play a significant role in bringing agility into the business process, the workforce working at each of these Continuous Delivery stages (Continuous Delivery being the end goal of DevOps) contributes equally to success in the cloud. Organizations need to identify the right talent and “people-proof” their DevOps team to make it strong, essentially to ensure that there are no roadblocks to achieving any milestone due to skills shortages.

The best way forward is to go the NoOps way, so that Ops teams can spend more time innovating on the cloud rather than just operating it.

  6. Not being able to avoid cloud service provider lock-in

To date, vendor lock-in remains one of the major roadblocks to success in the cloud. Because of it, a majority of IT leaders have consciously been choosing not to invest in the cloud fully, since they value long-term vendor flexibility over long-term cloud success, say experts.

One of the best approaches decreed by cloud experts is to avoid tying business processes and data to the cloud service provider. Another solution, experts say, is not to keep one foot out of the cloud and in on-premise infrastructure, but to embrace the cloud completely in a new way: by managing IT with governance models, putting cost control measures and processes in place, and so on.

  7. Not bridging the cloud security and compliance gaps properly

With the choice of public cloud, which features a shared responsibility model, users are responsible for their data security and for access management of all their cloud resources. While building a cloud strategy, one should respect the fact that the freedom of elasticity the cloud offers comes with greater responsibility. And this responsibility can be discharged only by bridging the cloud security and compliance gaps correctly. How? By adopting ‘Continuous Security’ and making a habit of regular audits and backups, preferably automated.

The Final Word

Today’s public cloud is all about driving business innovation, agility, and enabling new processes and insights that were previously impossible. And for this to happen, a practical public cloud strategy is the cornerstone: a strategy based on your own unique landscape and requirements that also takes all the critical blind spots into account. This is our take. Tell us: what is your public cloud strategy for 2017? Share your learnings and stories in the cloud with us on Twitter, Facebook, and LinkedIn.

Cloud Computing in 2017: An Op-Ed From the Cloud Geeks

Digital transformation has changed the way organizations work, and so has the cloud. Following along the lines of VMware’s ex-CEO Paul Maritz, cloud geeks across the globe are now saying it out loud: ‘Cloud computing in 2017 is about how you do computing, not where you do computing.’

Forrester, in one of its recent reports, says, “Cloud computing will continue to disrupt traditional computing models at least through 2020. Starting in 2017, large enterprises will move to cloud in a big way, and that will supercharge the market. We predict that the influx of enterprise dollars will push the global public cloud market to $236 billion in 2020, up from $146 billion in 2017.”

While these numbers are enough to validate that ‘Cloud is the New Black,’ they also send a clear signal that it is imperative to take the right measures and put in the right strokes to get the most from your cloud computing in 2017.

The Way Forward

In 2016, we saw many enterprises fail to achieve success with cloud computing, especially the public cloud, because they failed to develop a cloud strategy rooted in the definition and delivery of IT services linked to business outcomes. More so, they missed out on leveraging the real benefit of the elasticity the cloud offers. They purchased instances in bulk to handle peak demand, just as they did with on-premise IT infrastructure, and then turned a blind eye to idle resources that could easily be optimized.

They also overlooked the fact that ‘anything and everything’ on the cloud can be codified and that APIs can be used to automate tasks on the cloud completely; essentially, to go the NoOps way while on the cloud.

So, 2017 is the year to put these lessons in perspective and introspect on how you can align them with your cloud strategy, so that IT is seamlessly linked to business outcomes. Here are a few tips from our tech geeks on what to focus on in the cloud in 2017:

1. Implement cost governance as a discipline:

Every business has its own ideas on how best to determine cloud ROI. However, businesses have to think beyond CapEx and OpEx to get the cloud economics right. Our cloud geeks say that to get maximum ROI from your cloud, the first step is to establish the right policies and closely monitor and regulate resource usage every day. By bringing in discipline with the right policies and budgeting, you can easily govern costs. Plus, automating the tasks that continuously monitor and streamline cloud spend will definitely help bring down the TCO.

2. Focus more on compliance:

There’s a myth that has stuck with many companies even today: that public or hybrid clouds present compliance challenges, unlike private clouds where control and customization are much easier. With the increase in cloud adoption, things have changed. Cloud service providers are increasingly open to dialogue when it comes to SLAs, and they provide services that comply with PCI DSS, HIPAA, and other regulatory requirements. Another block many of our customers are concerned about is noncompliance due to the data’s location. It is simple: locate your data and, during an audit, justify its location along with the measures in place to protect it.

3. Automate security:

While compliance and assuaging DDoS attacks play a major role in cloud security, an API-driven strategy that puts all things in the right place at the right time plays an equally important role. Automation is the future, and to get there, APIs are the keys that unlock the door. Automating security helps you codify the practices that make your data comply with your company’s security policy, identify vulnerabilities on running instances, and fix those vulnerabilities in split seconds.
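As one small, hedged example of what security-as-code can look like (the remediation step is illustrative and commented out), this boto3 sketch flags security groups that leave SSH open to the whole internet:

```python
import boto3

ec2 = boto3.client("ec2")

# Flag security groups that expose SSH (port 22) to the entire internet.
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        if rule.get("FromPort") == 22 and any(
            ip_range.get("CidrIp") == "0.0.0.0/0" for ip_range in rule.get("IpRanges", [])
        ):
            print("OPEN SSH: %s (%s)" % (sg["GroupId"], sg["GroupName"]))
            # A remediation step could revoke the offending rule automatically, e.g.:
            # ec2.revoke_security_group_ingress(GroupId=sg["GroupId"], IpPermissions=[rule])
```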

4. Take note of Security of Things (SoT):

In this age of IoT, many are skeptical about the security of connected devices talking over the cloud. At the end of the day, however much technology advances to help protect our networks and devices, security is ultimately a shared responsibility on the cloud. To know more, read the Botmetric blog on ‘Bridging the Cloud Security Gaps: With Freedom Comes Greater Responsibility.’

5. Go serverless:

Serverless architectures have already taken cloud computing by storm. With the amount of interest leading cloud service providers are showing, especially AWS, Azure, Google, and IBM, serverless will be the theme of 2017. So, DevOps teams need to be hands-on in choosing the right services and stay nimble. One observation we made during 2016 was that there is a common misconception in the DevOps community that going serverless is NoOps. It is not! DevOps teams still need to test, deploy, log, and monitor the code.
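To illustrate the point, even a minimal serverless function like the sketch below (the business logic is a placeholder) still needs tests, structured logging, and monitoring around it:

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    """Minimal AWS Lambda handler: 'serverless' does not mean 'no Ops'.

    You still ship logs to CloudWatch Logs, watch invocation errors and
    duration metrics, and test this function before every deployment.
    """
    logger.info("received event: %s", json.dumps(event))
    name = event.get("name", "world")   # placeholder business logic
    return {"statusCode": 200, "body": json.dumps({"message": "hello, " + name})}
```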

6. Leverage Microservices:

Microservices are currently playing a key role in cloud computing, and they will continue to do so. Thanks to container technologies such as Docker, Rocket, and LXD, portability of code (a microservice) across multiple environments is seamless. More so, deploying and managing containerized applications is now easier. Above all, these microservices, along with container technology, help developers autoscale and easily handle load.

7. Use Machine Learning in IT Operations:

Machine learning, and its subset deep learning, is no longer restricted to just Big Data applications. It is slowly seeping into the walls of DevOps on the cloud to help improve IT operations, especially by minimizing human intervention. More so, applying machine intelligence to problem-solving will very soon be the norm. For instance, it comes in handy to fix the alert floods and monitoring fatigue caused by a company’s operational management systems in this 24x7-uptime world. Read the Botmetric blog Assuage Alert Fatigue Mess with DevOps Intelligence to know how machine intelligence can help solve this issue.
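For a flavor of what this can look like, here is a deliberately simple, assumption-heavy sketch that flags unusual spikes in hourly alert counts using a rolling mean and standard deviation; real systems use far richer models, and the data below is made up.

```python
from statistics import mean, stdev

def find_alert_spikes(hourly_counts, window=24, threshold=3.0):
    """Flag hours whose alert count deviates sharply from the recent baseline."""
    spikes = []
    for i in range(window, len(hourly_counts)):
        baseline = hourly_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (hourly_counts[i] - mu) / sigma > threshold:
            spikes.append((i, hourly_counts[i]))
    return spikes

# Hypothetical data: a quiet day with one noisy hour at the end.
counts = [5, 4, 6, 5, 5, 4, 6, 5, 7, 5, 4, 6, 5, 5, 6, 4, 5, 6, 5, 4, 5, 6, 5, 4, 42]
print(find_alert_spikes(counts))   # -> [(24, 42)]
```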

8. Use Robotic Process Automation (RPA):

Even though this technology is in a quite nascent stage now, it has made its footprint felt in cloud computing. From our observation, 2017 will be the year it is used voraciously by many cloud management and SaaS products, wherein they will offer data-driven ‘bots’ that can automatically capture and interpret existing data, manipulate it, trigger responses, and do more, intelligently and smartly. In short: gear up to embrace this next generation of deep learning.

9. Embrace NoOps:

While RPA, machine learning, automated security, and the like are making strides towards efficient cloud computing, NoOps will be the next wave of efficient cloud computing in 2017. Soon, automating all operations will be the norm. Building cloud as code, with just the Dev team, and investing the Ops team’s time into development efforts and innovation, will be the way forward. Are you game?

Cloud Computing in 2017: The State of Cloud Providers

AWS Cloud

In 2006, AWS created the first wave of cloud computing. A decade later, AWS has created the second wave. Apart from the fact that it is currently operating at an $11 billion USD run rate, it has also announced 50+ services, helping change the landscape of enterprise cloud computing. As an AWS technology partner, here are a few tips from team Botmetric that you can take into account in 2017:

1. Use Lambda@Edge to deliver a low latency UX for customized web applications, and more so, run code at CloudFront edge locations without provisioning or managing servers.

2. Integrate Blox, the new open source scheduler for Amazon EC2 container service.

3. Leverage AWS CodeBuild and AWS Elastic Beanstalk without fail.

4. Buy Convertible RIs and leverage Regional Benefit to get the most out of EC2s.

5. Use new features available in S3 Storage. For instance: Object Tagging, S3 Analytics, Storage Class Analysis, S3 Inventory, and S3 CloudWatch Metrics.

Microsoft Azure

According to a leading survey, Microsoft Azure accounts for 28.4% of the global IaaS ecosystem and is quickly catching up with AWS in the race to tap this growing market. With its growing portfolio of services (with support for machine learning, DevTest Labs, Active Directory, Log Analytics, Bot Service, etc.) and continued patronage among Microsoft aficionados, the Azure PaaS is also gaining ground, especially among enterprises.

Google Cloud Platform

With Google accelerating its move into cloud data warehousing and machine learning, it is also in the race with AWS and Azure, taking a 16.5% share of the IaaS market.

To Wrap Up

At Botmetric, we talk about cloud computing and DevOps every day, not just with a bunch of clients, but with other cloud geeks in the ecosystem too, closely observing industry trends and the practical challenges cloud engineers face every day. To this end, we have developed a close-knit collaboration with other partners and seek to make cloud management a breeze for all, using automation. Hence this write-up. We hope we have thrown some light on the trends that will reign over cloud computing in 2017. Let us know if we have missed something here. Plus, do share your views and thoughts on the state of cloud computing in 2017 on Twitter, Facebook, and LinkedIn.