Reserved Instances (RIs) offer significant cost-reduction opportunities in the AWS ecosystem. AWS started with EC2 Reserved Instances, and EC2 remains the most reserved AWS service. But did you know that, apart from EC2, six other AWS services can be reserved?
Currently, Botmetric supports AWS RIs for EC2 and RDS under Cost & Governance. Many Botmetric customers use ElastiCache and have been asking for better reservation recommendations. As reservations are crucial to cost savings, we at Botmetric wanted to empower our customers with a reservation planner for ElastiCache.
Botmetric Cost & Governance has now extended its support to ElastiCache RIs. In the Cost & Governance RI tab you can manage, track and regulate your ElastiCache RIs. ElastiCache RIs under Cost & Governance only support high-utilization RIs at the moment.
ElastiCache RI Dashboard
The ElastiCache RI dashboard is similar to the EC2 and RDS RI dashboards: it lists out the RI summary and savings recommendations for varying tenures.
Unused ElastiCache RIs
Provides a list of reservations that aren't being used by any running node, so that you can launch a matching node type for the reservation.
ElastiCache RI Recommendations
Provides a list of all recommended ElastiCache reservations. Botmetric analyzes your ElastiCache On-Demand utilization and generates recommendations after matching it with your current set of reservations. You can change the tenure from 1 to 3 years to see the varying savings, and you can also change the confidence factor to view more differentiated recommendations.
Existing RI Portfolio
This is a complete portfolio of your ElastiCache RIs; it lists all ElastiCache RIs in your account.
The chief advantage of the existing RI portfolio is helping you identify near-expiry RIs and plan their renewal, or renew already-expired RIs.
RI Utilization Graph
Here you can identify and compare the performance of Reserved vs. On-Demand ElastiCache usage. If the graph shows that your On-Demand hours are constant, there is more scope for reservations.
We at Botmetric want to make cost savings in the AWS cloud simpler and better. We haven't stopped with EC2, RDS and ElastiCache; we are working on Botmetric Smart Recommendations for Redshift and DynamoDB as well. Meanwhile, if you wish to perform cost analysis at any level of detail, you can do so through Custom Reports.
Cost budgeting in a large company is an exhaustive process. A tremendous amount of detail and input goes into this iterative procedure: each senior team member brings a cost budget from his or her team, and the finance leader integrates them and then negotiates with the senior team members to get the numbers where they need to be. Budgeting is a collective process in which each operating unit or Cost Center prepares its own budget in conformity with company goals published by top management. Because the cloud is highly scalable, teams often exceed their budget or lack clear visibility over projected spend, which leads to budget mismanagement and forces IT Directors to re-evaluate budgets and seek the Finance department's approval all over again. At times, IT Directors also wish they could set budgets at a very granular level to diminish this uncertainty. This is where Botmetric's Budgeting can help you create a comprehensive budget model.
So, what is Enterprise Budgeting?
Botmetric's new 'Budgeting' feature under Cost & Governance empowers the financial leaders in your organization to set budgets and track them with seamless workflows and processes. The two inputs imperative to the budgeting process in a large enterprise are a detailed cost model for the entire payer account and a comprehensive cost model for each individual Cost Center based on linked account(s) and tags.
Who will benefit from Enterprise Budgeting?
Enterprise Budgeting is a powerful tool for senior-level professionals such as CFOs, CTOs, IT Directors, Heads of Infrastructure & Engineering, Senior IT Managers and more.
Which Botmetric subscription plans have access to Budgeting?
Currently, we are enabling the Budgeting feature for the Professional, Premium and Enterprise plans only, on a request basis.
Botmetric Workflows Used in Budgeting:
The following workflows can be assigned to people using Budgeting:
User: Users with write permission can only set a budget, which is then sent to a financial admin for approval.
Admin: Admin roles can grant a user read and write access to Budgeting. An admin can set a budget, but only a financial admin can approve it.
Financial Admin: A Botmetric admin can also be a financial admin, whose role is to define the budget goal in Budgeting and approve budgets set by other users. By default, the owner of a Botmetric account is also a financial admin.
Understanding Botmetric’s New Smart Cost Center
A Cost Center can be a department or any business unit in the company whose performance is usually evaluated by comparing budgeted to actual costs. Previously, Botmetric allowed you to create a Cost Center using tag keys like 'owner', 'customer', 'role', 'team', etc. Now, to meet extensive budgeting requirements, Cost Centers in Botmetric can be defined in two ways: based on tag keys alone, or based on accounts and associated tag key-value pairs.
Based on Tag-Key
Here, you choose the tag key that corresponds to your cost centers. Based on the chosen tag key, Botmetric creates all possible cost centers for the tag values corresponding to it.
Based on account(s) or combination of multiple account(s) and tags
You can also create Cost Centers based on account(s) and customize them based on multiple grouping of tag keys. You can create a cost center group such as account1->team1->role1.
Let’s say you have different nomenclature for the same tag-keys such as user:TEAM, user:team, user:Team, then you can multi-select these tags and get complete clarity on your cost center group.
Please note that you can only choose one option at a time: you cannot have some cost centers created from tags and others from the account-and-tags combination.
How to set, track and monitor the budget?
Allocate & Review
Botmetric Budgeting enables the financial leader to define a budget goal for the entire payer account per their estimates for the financial year. You can either enter the budget inputs manually or use Botmetric's estimate to populate the budget inputs across months, quarters and the year. Botmetric looks at the last 12 months of data for yearly budget tracking.
Depending on your company's size, it can take up to 72 hours to enable, process and crunch your data.
Assigning Budget to Individual Cost Center:
Individual Cost Center owner(s) or financial admin(s) can set or edit budget goals for their respective units, either entering the budget inputs manually or using Botmetric's estimate to populate them across months, quarters and the year. If a non-financial admin or user creates the budget for their Cost Center, it is sent to a financial admin for approval. The new roles provided for Budgeting draw a clear demarcation between users and financial admin(s), giving financial admin(s) control over budget approval while leaving other roles enough flexibility to manage their Cost Centers effectively.
Botmetric's Budgeting Overview provides a summarised view: a snapshot of your financial-year performance at the payer-account level. You can compare the actual, allocated and projected spend for the current month, current quarter and financial year. You can also see a list of the top-spending Cost Centers for the current month and current quarter. Moreover, a complete trend graph comparing your actual, allocated and budgeted spend at the payer-account level across 12 months and 4 quarters will help you evaluate your budget at a quick glance.
Cost Center View
Botmetric’s Cost Center Overview provides a comprehensive view to track the performance for each Cost Center. Fine grained resources and service details provide a deeper and instantaneous understanding of where a certain Cost Center is incurring more cost. Ability to shuffle the view among monthly, quarterly and yearly options will allow the user to understand the budget variance over time. Each Cost Center will be evaluated to determine whether its incurred cost is within the allocated budget or it has exceeded the defined budget limit.
Moreover, each Cost Center has a corresponding budget trend graph to show the comparison between actual, allocated and estimated spend. If you have a huge list of Cost Centers in your cloud, the search bar will help you to quickly find the desired Cost Center.
Botmetric's Enterprise Budgeting will empower IT budget owners to define and track budgets at a granular level. It will also streamline budget processes in your organization and bring composure to the chaotic world of budget-goal setting. Sign up for a 14-day free trial and see how it can help your organisation save on cloud costs.
When your enterprise is migrating to the public cloud, you may face various challenges at each stage of the migration. In the course of its ever-evolving partnership with AWS, and more recently with Azure, Minjar has identified five major challenges that any enterprise will face during the migration process.
These challenges are:
Fig.1. Minjar Analysis
Time/Duration: Migration, or rather the replication process, faces a huge challenge when transferring large data sets to the cloud. Once migration is complete, it is important to make sure the processes are consistent and in tandem. The less time taken to automate services on the cloud, the happier your clients.
For example, SingPost SAM was migrated to the AWS Cloud by Minjar. Singapore Post approached Minjar with the requirement of automating their cloud deployments, auto scaling their business solution and tightening their security. The company also required 24×7 monitoring, management and continuous improvement of their AWS infrastructure.
Upon onboarding, Minjar re-architected the SP eCommerce application infrastructure for high availability, enhanced performance and lower latency. Minjar enforced AWS security best practices, created a foolproof environment by implementing DDoS protection, VAPT, WAF and data security, and set up a VPN to connect the AWS environment with Singapore Post's network. Minjar automated their application deployment, achieved a 10x reduction in launch time, and further reduced cost by implementing auto scaling and optimized provisioning.
Complexity: The complexity of migration can be reduced with the help of tools, which provide the groundwork for cloud migration and help avoid task complexity. Minjar helped Cleartrip migrate to the cloud using AWS services such as Virtual Private Cloud (VPC), Elastic Compute Cloud (EC2), Elastic MapReduce (EMR), Relational Database Service (RDS) for MySQL, Route 53, Lambda, Snowball and Simple Storage Service (S3).
Risk: A tip for all enterprises planning to migrate to the cloud: test your tools first. This helps avoid unwelcome surprises that cost your business time and money, often caused by human intervention. Automating day-to-day processes with Botmetric Ops & Automation, a premium product by Minjar, is a one-stop solution for overcoming migration risk.
For example, Shaadi.com evaluated its options and decided to migrate its website infrastructure from its legacy managed cloud service to the Amazon Web Services (AWS) Cloud, as AWS offers a comprehensive portfolio of services, competitive pricing and room for rapid innovation. However, migrating to AWS seemed overwhelming due to the inherent risk of moving its complete online business without impacting its customers. Consequently, the Shaadi.com team selected Minjar as its AWS experts to execute the migration.
Cost: The cost and performance of your migration are mutually dependent. Migrating with agility, security and cost-effectiveness can transform the enterprise's business. Through proper cost management, right provisioning of servers and continuous checks for underutilized resources, you can reduce cloud spend effectively. Botmetric Cost & Governance has helped Minjar manage cost since its inception; with this product, Minjar could easily help companies adopt the cloud in a seamless way.
For Bigbasket, deployment automation was delivered within two weeks, and the infrastructure migration from the Singapore to the Mumbai region was completed as planned in under a month. Automation by the MSP team brought production deployment down from hours to minutes, and automation during the migration cut total migration downtime to under an hour.
Security: This is the crucial point that makes many companies think twice before migrating to the cloud. Minjar uses Botmetric Compliance and Security to overcome it, finding security loopholes and rectifying them through the product. According to a LinkedIn Information Security Community survey, 49% of CIOs and CSOs feel that "one of the major barriers to cloud adoption is the fear of data loss and leakage", and 59% believe that traditional network security tools/appliances "worked only somewhat or not at all" in the cloud.
Samsung, MAPP, Cleartrip, Godrej, Shaadi.com and now Bigbasket have relied on Minjar for their cloud migration and automation. The analysis shows that Minjar has enhanced these businesses by integrating the cloud platform into their operations.
Any change brings challenges, and migrating to the cloud is one of the challenges your enterprise will face over time. We have analyzed and put together a table that speaks volumes when you Do Cloud Magic with Minjar.
Cloud migration needs to be secure, agile and seamless. It should synergize your enterprise's operations by automating activities and helping the business reduce its total spend on resources.
IT budgeting may start as a painful process, but it ends in better strategy and road mapping. Post cloud adoption, your cloud spend grew along with you, and keeping spend on budget was always necessary so that you would have money for the resources of the hour and for required reservations. There are various mechanisms to control budgets, and alerts are the easiest way to control yours.
Botmetric's Cost & Governance has crisp budget alerting that triggers alerts when spend on the payer account exceeds a set amount. Many customers requested more finely filtered budget alerting for more focused cost management.
With new budget alerts, now Botmetric users can:
Set budget alert for linked accounts
Now you can configure budget alerts for linked accounts with daily, weekly and monthly filters. From now on, if any of your linked accounts exceeds your accepted budget figure, you will be notified instantly.
Set budget alert for cost center
Many businesses understand the silos in their cloud infrastructure as cost centers, gauging spend and the addition of resources per cost center. For them, budget alerts for cost centers let them set an accepted budget figure and be notified instantly when it is exceeded.
Set budget alert for custom group
A very powerful budgeting feature: create a group with different rules and filters to define custom budget alerts.
Custom budget alert has rules for:
Accompanied across filters for:
Example: Suppose you want an alert when EC2 RI spend in a linked account for <tag:Production> exceeds $10,000. You can create a custom group for this alert and get notified whenever the set budget is exceeded.
Budget alerts are crucial to keep your cloud finance in place and keep you always informed.
Say goodbye to scheduling downtime while modifying Elastic Block Store (EBS) volumes. No more bottlenecks: modify EBS volumes on the go. Here's why: AWS has announced a new feature in its EBS portfolio, called Elastic Volumes, which helps you automate changes to your EBS workloads without going offline or impacting your operations. You can grow a volume, change its IOPS, or change its volume type as your requirements evolve, all without scheduling downtime. And with today's 24×7 operating models, it is more important than ever to leave no room for downtime.
Elastic Volumes: What is it about
EBS volumes let you optimize capacity, performance, or cost by increasing volume size, adjusting performance, and changing volume type as and when the need arises, thanks primarily to their dynamic nature and their ability to offer persistent, high-performance block storage for AWS EC2.
Prior to the launch of Elastic Volumes, whenever your data volume grew you had to schedule downtime and perform several steps: create a snapshot, restore it to a new, larger volume, and attach that volume to an EC2 instance.
Now, with the launch of Elastic Volumes, AWS has drastically simplified the process of modifying EBS volumes. You can also use CloudWatch or CloudFormation, along with AWS Lambda, to automate EBS volume modifications without any downtime.
AWS, in one of its blogs, says that Elastic Volumes reduce the amount of work and planning needed when managing space for EC2 instances. Instead of a traditional provisioning cycle that can take weeks or months, you can make changes to your storage infrastructure instantaneously, with a simple API call.
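That API call is easy to script. Below is a minimal sketch; the volume ID, target size, and IOPS values are hypothetical examples, and the EC2 client is passed in so the helper can be exercised without live credentials:

```python
def modify_volume_request(volume_id, size_gib=None, volume_type=None, iops=None):
    """Build the arguments for EC2's ModifyVolume API call.

    Only the fields you want to change need to be set; EBS applies the
    modification in place, with the volume still attached and in use.
    """
    params = {"VolumeId": volume_id}
    if size_gib is not None:
        params["Size"] = size_gib
    if volume_type is not None:
        params["VolumeType"] = volume_type
    if iops is not None:
        params["Iops"] = iops
    return params

# Usage (requires AWS credentials; values are illustrative):
# import boto3
# ec2 = boto3.client("ec2")
# resp = ec2.modify_volume(**modify_volume_request(
#     "vol-0123456789abcdef0", size_gib=200, volume_type="io1", iops=10000))
# print(resp["VolumeModification"]["ModificationState"])
```

The same call can sit inside a Lambda function subscribed to a CloudWatch alarm, which is the automation path the paragraph above describes.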
Essentially with AWS Elastic Volumes, as per AWS, you can:
Change workloads: For instance, at some point you realize that Throughput Optimized volumes are a better fit and you need to change the volume type. You can do so easily with this new feature, without any downtime.
Better handle spiking demand: Assume you're running a relational database on a Provisioned IOPS volume set to handle a moderate amount of traffic during the month, and you observe a tenfold increase in traffic during the final three days of each month due to month-end processing. In this scenario, you can use the new feature to provision for the spike, then dial it back down once the spike subsides.
Increase storage: Suppose you provision a volume for 100 GiB and an alarm goes off indicating that it is now at 90% of capacity (disk-almost-full). Using this new feature, you can increase the size of the volume and expand the file system to match, with no downtime and in a fully automated fashion. You can also use Botmetric Ops & Automation's Incidents, Actions & Triggers app to automate the volume-size increase as soon as the disk-almost-full alert is triggered: instead of you working on it manually, Botmetric right-sizes the volume based on the criteria defined in the respective Actions and Triggers. To know more about Botmetric Incidents, Actions & Triggers, read here.
How to go about it:
It’s very simple to configure:
Sign in to AWS Console
Select Amazon EBS
Right click on the Volume you wish to modify
Image Source: Amazon Web Services
Image Source: Amazon Web Services
Check the progress; the modification state moves through modifying, optimizing, and completed.
Image Source: Amazon Web Services
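The same progress can be polled from the API rather than the console. A hedged sketch follows; the `ec2` argument is assumed to be a boto3 EC2 client such as `boto3.client("ec2")`:

```python
import time

def wait_for_modification(ec2, volume_id, poll_seconds=15, max_polls=240):
    """Poll DescribeVolumesModifications until the volume change leaves the
    'modifying' state. Returns the final state seen ('optimizing' or
    'completed'), or raises TimeoutError if it never gets there."""
    for _ in range(max_polls):
        resp = ec2.describe_volumes_modifications(VolumeIds=[volume_id])
        state = resp["VolumesModifications"][0]["ModificationState"]
        if state in ("optimizing", "completed"):
            return state
        time.sleep(poll_seconds)
    raise TimeoutError(f"volume {volume_id} is still modifying")
```

Note that a volume is usable as soon as it enters the optimizing state; you don't have to wait for completed before resizing the file system.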
While the new feature helps increase capacity, tune performance, and change volume types on-the-fly, without disruption, and with single-click, it comes with certain restrictions:
If you encounter an error message while attempting to apply a modification to an EBS volume, or if you are modifying an EBS volume attached to a previous-generation instance type, your volume needs to be detached or the instance stopped for the modification to proceed
The previous generation Magnetic volume type is not supported by the volume modification methods
Decreasing the size of an EBS volume is not supported. However, you can create a smaller volume and then migrate your data to it using application-level tools such as robocopy
After modifying a volume, you need to wait at least six hours before applying further modifications to the same volume
m3.medium instances are treated as current generation; m3.large, m3.xlarge, and m3.2xlarge instances are treated as previous generation
With the launch of Elastic Volumes, AWS EBS is now more elastic. The best part, you can change an EBS volume’s size or performance characteristics when it’s still attached to and in use by an EC2 instance.
We live in an exciting era of datacenters and cloud operations. In these data centers, innovative technologies such as Docker containers eliminate the superfluous processes that can bog down a machine and enable servers to live up to their potential. With the availability of Docker Datacenter (DDC) as Container-as-a-Service (CaaS), the excitement among enterprises is greater than ever.
Making Sense of Docker Datacenter (DDC) as Container-as-a-Service (CaaS)
As you may know, containers make it easy to develop, deploy, and deliver applications that can be brought up and down in a matter of seconds. This flexibility makes them very useful for DevOps teams automating continuous integration and container deployment.
And the Docker Datacenter offering, which can be deployed on-premise or in the cloud, makes it even easier for enterprises to set up their own internal CaaS environments. Put simply, the package helps integrate Docker into enterprise software delivery processes.
Basically, the CaaS platform provides both container and cluster orchestration. And with the availability of cloud templates pre-built for the DDC, developers and IT operations staff can move Dockerized applications not only into the cloud but also into and out of their premises.
Below is the brief architecture that DDC offers:
A pluggable architecture provides flexibility with regard to compute, network, and storage, which are generally part of a CaaS infrastructure. Moreover, the pluggable architecture provides this flexibility without disrupting the application code, so enterprises can leverage existing technology investments with DDC. The Docker Datacenter consists of integrated solutions spanning open-source and commercial software, and the integration between them includes full Docker API support, validated configurations, and commercial support for the DDC environment. The open APIs allow DDC CaaS to integrate easily into an enterprise's existing systems like LDAP/AD, monitoring, logging, and more.
Before we move to comprehending Docker on AWS, it is advisable to have a look at the pluggable Docker Architecture.
As we can see from the image, DDC is a combination of several Docker projects:
Docker Universal Control Plane [UCP]
Docker Trusted Registry [DTR]
Commercial Supported Docker Engine [CSE]
The Universal Control Plane (UCP)
The UCP is a cluster management solution that can be installed on-premise or on a virtual private cloud, says Docker. The UCP exposes the standard Docker API so that you can continue to use the tools that you already know to manage an entire cluster. With the Docker UCP, you can still manage the nodes of your infrastructure as apps, containers, networks, images, and volumes.
The Docker UCP has its own built-in authentication mechanism and supports LDAP and Active Directory as well, along with role-based access control (RBAC). This ensures that only authorized users can access and make changes to the cluster.
In addition, the UCP, which is a containerized application, allows you to manage a set of nodes that are part of the same Docker Swarm. The core component of the UCP is a globally-scheduled service called ‘ucp-agent.’ Once this service is running, it deploys containers with other UCP components and ensures that they continue to run.
Docker Trusted Registry (DTR)
DTR allows you to store and manage your Docker images, either on-premise or in your virtual private cloud, to support security or regulatory compliance requirements. Docker security is one of the biggest challenges that developers face when it comes to enterprise adoption of Docker. The DTR uses the same authentication mechanism as the Docker UCP. It has a built-in authentication mechanism and integrates with LDAP. It also supports RBAC. This allows enterprises to implement individualized access control policies as necessary.
Commercial Supported Docker Engine (CSE)
CSE is a standard Docker Engine with commercial support and additional orchestration features.
For many enterprises, it may seem that the more components in the architecture, the more complex it gets, and hence that deploying DDC will be very painful. That's a myth, thanks to AWS and Docker: they have already prepared a recipe for deploying the entire DDC on AWS following AWS best practices. The recipe is packaged as a CloudFormation template with enhanced security taken care of. The diagram below shows the overall commercial architecture.
For detailed resource utilization, enterprises just need to check out the CloudFormation stack architecture shown below. It may seem complex, but it is the most secure approach for enterprise production. To get the infrastructure ready, one has to launch the stack.
The CloudFormation template creates the following resources and performs the following activities:
Creates a new VPC, private and public subnets in different AZs, ELBs, NAT gateways, internet gateways, and Auto Scaling groups, all based on AWS best practices
Creates and configures an S3 bucket for DDC, used for cert backup and DTR image storage
Deploys 3 UCP controllers across multiple AZs within VPC and creates a UCP ELB with preconfigured HTTP healthchecks
Deploys a scalable cluster of UCP nodes, and backs up UCP Root CAs to S3
Creates 3 DTR replicas across multiple AZs within the VPC, and creates a DTR ELB with preconfigured health checks
Creates a jumphost EC2 instance to be able to SSH to the DDC nodes
Creates a UCP Nodes ELB with preconfigured health checks (TCP port 80), which can be used for your applications deployed on UCP
Deploys NGINX+Interlock to dynamically register your application containers
Creates a CloudWatch log group (called DDCLogGroup) and allows log streams from DDC instances; it also automatically logs the UCP and DTR installation containers
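For teams that prefer automation over clicking through the console, the same stack can be launched programmatically. The sketch below is illustrative only: the template URL and parameter keys are hypothetical placeholders (check the actual template for its real parameters), and the CloudFormation client is passed in:

```python
def ddc_stack_request(stack_name, template_url, key_name, ucp_password):
    """Build the arguments for CloudFormation's CreateStack call for the
    DDC template. Parameter keys here are hypothetical examples."""
    return {
        "StackName": stack_name,
        "TemplateURL": template_url,
        "Parameters": [
            {"ParameterKey": "KeyName", "ParameterValue": key_name},
            {"ParameterKey": "UCPAdminPassword", "ParameterValue": ucp_password},
        ],
        # The template creates IAM roles, so this capability must be granted.
        "Capabilities": ["CAPABILITY_IAM"],
    }

# Usage (requires AWS credentials; the S3 URL is a placeholder):
# import boto3
# cfn = boto3.client("cloudformation")
# cfn.create_stack(**ddc_stack_request(
#     "docker-datacenter",
#     "https://s3.amazonaws.com/<bucket>/ddc.template",
#     "my-keypair", "a-strong-password"))
```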
Clicking Launch Stack redirects the user to the AWS console, where the CloudFormation template page appears with the Amazon S3 template URL already filled in.
Once the stack's status reaches CREATE_COMPLETE, click Outputs to get the login URLs of UCP and DTR. Refer to the image below:
After the UCP Login, the user will get a secure login web page as shown below:
A beautiful, detailed Control Panel will appear as shown below:
And a DTR panel will look something like this:
Voila! Docker DC (as CaaS) is now ready to use on an AWS host.
As a prospective DDC user, you may be wondering why anyone would choose DDC for application deployment. Here's why: the technology that makes an application more robust and nimble is considered king, and with a microservices architecture, your application will be the most competitive.
To set the context: nowadays, Docker is considered one of the best containerization technologies for deploying microservices-based applications. While implementing it, there are a few key steps to follow:
Package the microservice as a (Docker) container image.
Deploy each service instance as a container.
Perform scaling, which is done based on changing the number of container instances.
Build, deploy, and start a microservice, which is much faster than a regular VM.
Write a Docker Compose file listing all the images and their connectivity, and then just build it.
Many enterprises are now considering refactoring their existing Java and C++ legacy applications by dockerizing them and deploying them as containers, says Docker. This technology was built to provide a distributed application-deployment architecture that can manage workloads and can be deployed in private and, eventually, public cloud environments.
In this way, DDC solves many of the problems today's enterprises face; BBC, Splunk, New Relic, Uber, PayPal, eBay, GE, Intuit, The New York Times and Spotify are among its users.
For demo purposes, Docker provides users with a sample microservice application composed of several services:
A Python web app that lets you vote between two options
A Redis queue that collects new votes
A Java worker that consumes votes and stores them in…
A Postgres database backed by a Docker volume
A Node.js web app that shows the results of the voting in real time
Here's how: in UCP, go to Resources, click Applications, and then '+ Deploy Compose.yml'.
Enter the name of the application and type in the Docker Compose YAML, or upload the file, and click Create. After some time you will see logs, and then the application will be deployed.
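A Compose file for the voting demo might look roughly like the sketch below. The image names follow Docker's public example-voting-app and may have changed since; treat them as placeholders:

```yaml
version: "2"
services:
  vote:            # Python web app for casting votes
    image: instavote/vote
    ports:
      - "5000:80"
  redis:           # queue that collects new votes
    image: redis:alpine
  worker:          # Java worker that moves votes from Redis to Postgres
    image: instavote/worker
  db:              # Postgres, backed by a named Docker volume
    image: postgres:9.4
    volumes:
      - db-data:/var/lib/postgresql/data
  result:          # Node.js app showing the results in real time
    image: instavote/result
    ports:
      - "5001:80"
volumes:
  db-data:
```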
In the image below, we can see that five containers were spun up when the application was deployed. Each container runs its own service or worker.
If you want to hit the application from a web browser, get the exact IP and port; refer to the image below:
By performing this activity, the application will be up and running. Refer the illustration below:
If you want to modify the code, you can do so from the web UI, which means you can access the container CLI from the web as well.
To Wrap Up:
DDC as CaaS is changing how enterprises deliver containerized applications. The idea is that Docker wants to make it easy for enterprises to set up their own internal Containers-as-a-Service (CaaS) operations.
Are you an enterprise looking to leverage DDC as CaaS on AWS? As a Premier AWS Consulting Partner, we at Minjar have your back! Share your comments and thoughts with us on Twitter, Facebook or LinkedIn, or drop a comment below.
Auto Scaling, as we know it today, is one of the most powerful tools leveraging the elasticity of the public cloud, Amazon Web Services (AWS) in particular. Its ability to improve the availability of an application or service while keeping cloud infrastructure costs in check has been applauded by enterprises across verticals, from fleet-management services to NASA's research base.
However, at times AWS Auto Scaling can be a double-edged sword, because it introduces higher-level complexity into the technology architecture and daily operations management. Without proper configuration and testing, it might do more harm than good. Even so, these challenges can be nullified with a few precautions. To that end, we've collated a few lessons we've learned over time, to help you make the most of Auto Scaling on AWS.
Use Auto Scaling, whether your application is stateful or dynamic
There is a myth among many AWS users that AWS Auto Scaling is hard to use and not so useful for stateful applications. The fact is that it is not hard to use: you can get started in minutes with a few precautionary measures, like using sticky sessions and keeping provisioning time to a minimum. Plus, AWS Auto Scaling monitors the instances and heals them if they become unhealthy.
Here's how: once Auto Scaling is activated, it automatically creates an Auto Scaling group and provisions instances behind the load balancer accordingly, maintaining the application's performance. In addition, Auto Scaling's rebalance feature ensures that your capacity is automatically distributed across several Availability Zones to maximize the application's resilience. So whether your application is stateful or dynamic, AWS Auto Scaling helps maintain its performance irrespective of compute-capacity demands.
Identify the metrics that impact the performance, during capacity planning
Identify the metrics for an application’s constraining resources, like CPU utilization and memory utilization. Doing so helps track how those resources impact the application’s performance, and the result of this analysis will provide the threshold values that let you scale resources up and down appropriately.
Configure AWS CloudWatch to track the identified metrics
The best way forward is to configure Auto Scaling with AWS CloudWatch so that you can fetch these metrics as and when needed. Using CloudWatch, you can track the metrics in real time, and CloudWatch can be configured to trigger the provisioning of an Auto Scaling group based on the state of a particular metric.
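As an illustrative sketch (the group name and policy ARN are hypothetical), this is the shape of a CloudWatch alarm that watches average CPU across an Auto Scaling group and fires a scale-out policy when it stays high. The payload would be passed to boto3’s `cloudwatch.put_metric_alarm`:

```python
def build_cpu_alarm(asg_name, policy_arn, threshold=70.0):
    """Build the payload for cloudwatch.put_metric_alarm(): trigger the
    scale-out policy when average CPU across the group breaches the
    threshold for two consecutive 5-minute periods."""
    return {
        "AlarmName": f"{asg_name}-cpu-high",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "AutoScalingGroupName", "Value": asg_name}],
        "Statistic": "Average",
        "Period": 300,                   # 5-minute evaluation window
        "EvaluationPeriods": 2,          # require two consecutive breaches
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [policy_arn],    # the scaling policy to invoke
    }

alarm = build_cpu_alarm("web-asg", "arn:aws:autoscaling:policy/scale-out")
# boto3.client("cloudwatch").put_metric_alarm(**alarm)
```

Requiring two evaluation periods avoids scaling on a momentary spike.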
Understand functionality of Auto Scaling Groups while using Dynamic Auto Scaling
The resource configurations have to be specified in the Auto Scaling groups feature provided by AWS. Auto Scaling groups also include rules defining the circumstances under which resources will be launched dynamically. AWS allows you to attach Auto Scaling groups to Elastic Load Balancers (ELBs) so that requests coming to the load balancers are routed to the newly deployed resources whenever they are commissioned.
Use Custom Metrics for Complex Auto Scaling Policies
A practical auto-scaling policy may need to weigh multiple metrics, instead of the single metric a CloudWatch alarm evaluates. The best approach to circumvent this restriction is to code a custom metric as a Boolean function using Python and the Boto framework. You can combine application-specific metrics with standard metrics like CPU or network utilization.
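A minimal sketch of such a Boolean custom metric, with made-up thresholds and metric names: the function collapses several readings into a single 0/1 signal, which could then be published with `put_metric_data` and alarmed on like any other metric:

```python
def should_scale_out(cpu_pct, mem_pct, queue_depth,
                     cpu_limit=70, mem_limit=75, queue_limit=100):
    """Collapse several metrics into one Boolean signal (1 = scale out).

    The thresholds are illustrative; tune them from the capacity-planning
    analysis described earlier."""
    return int(cpu_pct > cpu_limit or mem_pct > mem_limit
               or queue_depth > queue_limit)

signal = should_scale_out(cpu_pct=82, mem_pct=40, queue_depth=10)
# Publish the signal as a custom CloudWatch metric (hypothetical namespace):
# boto3.client("cloudwatch").put_metric_data(
#     Namespace="Custom/App",
#     MetricData=[{"MetricName": "ScaleOutSignal", "Value": signal}])
```

A single alarm on `ScaleOutSignal >= 1` then effectively encodes the whole multi-metric policy.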
Use Simple Queue Service (SQS)
As an alternative to writing complex code for the custom metric, you can also architect your applications to take requests from a Simple Queue Service (SQS) queue, and let CloudWatch monitor the queue length to decide the scale of the computing environment based on the number of items in the queue at a given time.
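One common way to turn queue depth into a scaling decision is a backlog-per-instance calculation. This sketch assumes a throughput figure (messages one instance can work through per period) and clamps the result to the group’s bounds; the queue depth itself would come from SQS, as shown in the comment:

```python
import math

def desired_capacity(queue_depth, msgs_per_instance, min_size, max_size):
    """Derive an Auto Scaling desired capacity from SQS queue depth.

    queue_depth corresponds to the queue's ApproximateNumberOfMessages
    attribute; msgs_per_instance is an assumed per-instance throughput."""
    need = math.ceil(queue_depth / msgs_per_instance)
    return max(min_size, min(max_size, need))

# The live queue depth could be fetched with:
# attrs = boto3.client("sqs").get_queue_attributes(
#     QueueUrl=queue_url, AttributeNames=["ApproximateNumberOfMessages"])
# depth = int(attrs["Attributes"]["ApproximateNumberOfMessages"])
```

The resulting number can be applied via `set_desired_capacity`, or published as a custom metric for target tracking.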
Create Custom Amazon Machine Images (AMIs)
To reduce the time taken to provision instances that contain a lot of custom software (not included in the standard AMIs), you can create a custom AMI that contains the software components and libraries required to create the server instance.
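Baking such an AMI from a fully configured instance is a single API call. A sketch of the request that would go to `ec2.create_image` (the instance ID and naming scheme are placeholders):

```python
def build_image_request(instance_id, app_version):
    """Build the payload for ec2.create_image(): bake a pre-configured AMI
    so new instances launched by Auto Scaling skip lengthy software
    installs at boot time."""
    return {
        "InstanceId": instance_id,
        "Name": f"web-baked-{app_version}",  # hypothetical naming scheme
        "NoReboot": True,                    # snapshot without stopping the source
    }

req = build_image_request("i-0123456789abcdef0", "v1.2")
# image = boto3.client("ec2").create_image(**req)
# The returned ImageId then goes into the launch configuration.
```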
Scale AWS services other than EC2, like AWS DynamoDB
Along with AWS EC2, other resources such as AWS DynamoDB can also be scaled up and down using Auto Scaling, although the implementation of the policies is different. Since storage is the second most important service after compute, efforts to optimize storage will yield good performance as well as cost benefits.
Predictive Analytics for Proactive Management
Setting up thresholds as described above is reactive. You can instead leverage time-series predictive analytics to identify patterns within the traffic logs and ensure that resources are scaled up at pre-defined times, before demand spikes occur.
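Once a recurring traffic pattern is identified, the pre-defined scale-up can be expressed as a scheduled action. This sketch builds the payload for `autoscaling.put_scheduled_update_group_action`; the cron expression and sizes are examples, not recommendations:

```python
def build_scheduled_action(asg_name, cron, desired, min_size, max_size):
    """Build the payload for autoscaling.put_scheduled_update_group_action():
    scale up ahead of a predicted traffic peak instead of reacting to it."""
    return {
        "AutoScalingGroupName": asg_name,
        "ScheduledActionName": f"{asg_name}-peak",
        "Recurrence": cron,          # cron format, evaluated in UTC
        "MinSize": min_size,
        "MaxSize": max_size,
        "DesiredCapacity": desired,
    }

# e.g. grow the fleet every weekday morning before the predicted peak:
action = build_scheduled_action("web-asg", "0 8 * * MON-FRI", 10, 2, 12)
# boto3.client("autoscaling").put_scheduled_update_group_action(**action)
```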
Custom define Auto Scaling policies & provision AZs capacity accordingly
Auto Scaling policies must be defined based on capacity needs per Availability Zone (AZ) to avoid cost spikes, because resource pricing varies across the regions that encompass these AZs. This is especially critical for Auto Scaling groups configured to leverage multiple AZs along with a percent-based scaling policy.
Use reactive scaling policies on top of the scheduled scaling feature
Using reactive scaling policies on top of the scheduled scaling feature gives you the ability to respond to dynamically changing conditions in your application.
Embrace an intelligent cloud management platform.
Here’s why: despite configuring CloudWatch and other Auto Scaling features, you cannot always get everything you need. Further automating various Auto Scaling features using key data-driven insights is the way forward. So, sign up for an intelligent cloud management platform like Botmetric, which surfaces key insights to manage AWS Auto Scaling, provides detailed predictive analytics, and helps you leapfrog your business towards digital disruption.
Also, do listen to Andre Dufour’s keynote on AWS Auto Scaling from 2016 re:Invent, where he reveals that the Auto Scaling feature will be available for the Amazon EMR (Elastic MapReduce) service as well, along with the AWS ECS container service and Spot Fleet, with regard to dynamic scheduling policies.
It is evident: automation in every field is upon us, and there will soon be a time when we reach the NoOps state. If you have any questions about AWS Auto Scaling, how you can minimize Ops work with scheduled Auto Scaling, or anything else about cloud management, just comment below or give us a shout-out on Twitter, Facebook, or LinkedIn. We’re all ears! Botmetric Cloud Geeks are here to help.
A May 2016 survey cites that 51% of surveyed organizations took over a year to plan their public cloud strategy. A few may take up to three years! It’s completely understandable why it takes so long and that a lot of detailing goes into it — understanding the precise costs and challenges that the cloud will introduce, knowing how to make the public cloud approach work for the organization, deciding what tools and technology choices will supplement the cloud adoption, etc.
Despite a detailed, pragmatic approach towards building the public cloud strategy, a majority of organizations still fail at some point. And our cloud geeks attribute it to ‘blind spots’ that get overlooked either due to complexity or lack of awareness. Soon enough, in some cases, these blind spots might take the team back to the boardroom.
To usher in the right approach towards building a seamless and successful public cloud strategy, we’ve collated the top seven blind spots that smart companies watch out for during their cloud-first and cloud-ready journey.
Not calculating the ‘REAL’ Total Cost of Ownership (TCO)
Many companies have realized that the real benefit of cloud computing is not the cost savings it can bring, but the agility and time-to-market. And a prominent factor that plays a vital role in bringing such nimbleness is the TCO model. However, many companies don’t define the actual TCO. They go by the cost data alone, which may save some operational expenses in the short term but not in the long term. Hence, they end up missing the mark when it comes to IT’s ability to deliver real value to the business.
The way forward is to consider TCO models that also identify gray areas and take them into account during calculations. Mainly, these models must capture the actual value of cloud-based technology. They must also take critical factors into account, like the existing infrastructure in place, the existing skills and workforce involved, the cost of all the cloud services in operation, the value of agility and time-to-market, future capital expenditures, and the cost of risk around compliance issues.
Not knowing who owns the data in the cloud, and how to recover it
Understanding the terms of a cloud service is paramount. Agreed. But it is more critical to know who owns the data in the system. The key lies in carefully checking the terms and conditions of the contract and ensuring the data policy includes all the fine print establishing that the actual owner owns the data.
By doing so, you, as a user, can own and recover the data on-demand. Above all, your service provider cannot access, use, or share your data in any shape or form without your written permission.
Not having strong Service Level Agreements (SLAs)
While you focus on putting the data policy and terms of cloud service in place, you should not shift the spotlight away from SLAs. A strong SLA goes a long way in monitoring, measuring, and managing how the cloud service provider’s services are performing. The essence lies in working closely with lawyers who can help define strong contracts, help you get what you want from the service, and determine whether this can be expressed in the contract.
If you still find this less important, then consider this scenario: you have SLAs with AWS but have no idea how a SaaS offering running on it is performing. That’s because AWS provides figures for the performance of the infrastructure, not the software.
Not making complete use of elasticity of the cloud
Many enterprises fail to develop a cloud strategy that is linked to business outcomes, because they miss out on leveraging the real benefit of the elasticity that the cloud offers. They purchase instances in bulk to handle peak demands, as they did with on-premise IT infrastructure, and then turn a blind eye to idle resources that could be optimized easily. They also overlook the fact that ‘anything and everything’ on the cloud can be codified, and that APIs can be used to automate tasks on the cloud completely.
Even if APIs are used, weak APIs and API mismanagement can take a toll on the elasticity of the cloud. Essentially, going NoOps — with efficient APIs and API management — is the way forward.
Not appointing a competent DevOps team
While Continuous Development, Continuous Testing, Continuous Integration, and Continuous Deployment play a significant role in bringing agility into the business process, the workforce working on each of these Continuous Delivery stages (Continuous Delivery being the end goal of DevOps) contributes equally to the success of the cloud. Organizations need to identify the right talent and ‘people-proof’ their DevOps team to make it strong. Essentially, to ensure that there are no roadblocks to achieving any of the milestones due to skills shortages.
The best way forward is to go the NoOps way, so that more Ops teams can work on innovating on the cloud, rather than operating.
Not able to avoid cloud service provider lock-in
To date, vendor lock-in remains one of the major roadblocks to achieving success in the cloud. To this end, a majority of IT leaders have consciously chosen not to invest in the cloud fully, because, say experts, they value long-term vendor flexibility over long-term cloud success.
One of the best approaches decreed by cloud experts is to avoid tying business processes and data to a single cloud service provider. Another solution, experts say, is not to keep one foot out of the cloud and on-premise, but to embrace the cloud completely in a new way: by managing IT with governance models, taking cost control measures, putting the right processes in place, etc.
Not bridging the cloud security and compliance gaps properly
With the choice of the public cloud, which features a shared responsibility model, users are responsible for their data security and the access management of all cloud resources. While building a cloud strategy, one should respect the fact that the freedom of elasticity that the cloud offers is accompanied by greater responsibility. And this responsibility can be administered only by bridging the cloud security and compliance gaps correctly. How? By adopting ‘Continuous Security’ and making a habit of regular audits and backups, preferably automated.
The Final Word
Today’s public cloud is all about driving business innovation and agility, and enabling new processes and insights that were previously impossible. And for this to happen, a practical public cloud strategy is the cornerstone: a strategy based on your own unique landscape and requirements that also takes all the critical blind spots into account. This is our take. Tell us: what’s your public cloud strategy for 2017? Share your learnings and stories in the cloud with us on Twitter, Facebook, and LinkedIn.
Digital transformation has changed the way organizations work, and so has the cloud. Following along the lines of VMware’s ex-CEO Paul Maritz, cloud geeks across the globe have been saying it out loud: ‘Cloud computing in 2017 is about how you do computing, not where you do computing.’
Forrester, in one of its recent reports, says, “Cloud computing will continue to disrupt traditional computing models at least through 2020. Starting in 2017, large enterprises will move to cloud in a big way, and that will supercharge the market. We predict that the influx of enterprise dollars will push the global public cloud market to $236 billion in 2020, up from $146 billion in 2017.”
While these numbers are enough to validate that ‘cloud is the new black,’ they also send a clear signal that it is imperative to take the right measures to get the most from your cloud computing in 2017.
The Way Forward
In 2016, we saw many enterprises fail to achieve success with cloud computing, especially the public cloud, because they failed to develop a cloud strategy rooted in the definition and delivery of IT services linked to business outcomes. More so, they missed out on leveraging the real benefit of the elasticity that the cloud offers. They purchased instances in bulk to handle peak demands, as they did with on-premise IT infrastructure, and then turned a blind eye to idle resources that could be optimized easily.
They also overlooked the fact that ‘anything and everything’ on the cloud can be codified and that APIs can be used to automate tasks on the cloud completely. Essentially, to go the NoOps way while on the cloud.
So, 2017 is the year to put these in perspective and introspect on how you can align them in your cloud strategy so that IT is seamlessly linked to business outcomes. Here are a few tips from our tech geeks on what to focus on in the cloud for 2017:
1. Implement cost governance as a discipline:
Every business has its own ideas on how best to determine cloud ROI. However, they will have to think beyond Capex and Opex to get the cloud economics right. Our cloud geeks say that to get maximum ROI of your cloud, the first step is to establish the right policies, and closely monitor as well as regulate the resource usage every day. By bringing in a discipline with the right policies and budgeting, you can easily govern the costs. Plus, automating the tasks to monitor and streamline the cloud spend continuously will definitely help bring down the TCO.
2. Focus more on compliance:
There’s a myth that has stuck with many companies even today — that the public or hybrid cloud presents compliance challenges, unlike private clouds where control and customization are much easier. With the increase in cloud adoption, things have changed. Increasingly, cloud service providers are open to dialogue when it comes to SLAs, and also provide services that comply with PCI DSS, HIPAA, and other regulatory requirements. Another block many of our customers are concerned about is noncompliance due to data’s location. It is simple: locate your data, and during an audit, justify its location along with the measures in place to protect it.
3. Automate security:
While compliance and mitigating DDoS attacks play a major role in cloud security, an API-driven strategy that puts all things in the right place at the right time plays an equally important role. Automation is the future, and to get there, APIs are the keys that unlock the door. Automating security helps you codify the practices that make your data comply with your company’s security policy, identify vulnerabilities on running instances, and fix those vulnerabilities in split seconds.
4. Take note of the Security of Things (SoT):
In this age of IoT, many are skeptical about the security of connected devices talking to the cloud. In the end, however much technology advances to help protect our networks and devices, security is ultimately a shared responsibility on the cloud. To know more, read the Botmetric blog on ‘Bridging the Cloud Security Gaps: With Freedom Comes Greater Responsibility.’
5. Go serverless:
Serverless architectures have already taken cloud computing by storm. With the amount of interest leading cloud service providers are showing, especially AWS, Azure, Google, and IBM, serverless will be the theme of 2017. So, DevOps teams need to be hands-on in choosing the right services and stay nimble. One observation we made during 2016 was that there is a common misconception in the DevOps community that going serverless means NoOps. It does not! DevOps teams still need to test, deploy, log, and monitor the code.
6. Leverage Microservices:
Microservices are currently playing a key role in cloud computing, and they will continue to do so. Thanks to container technologies such as Docker, Rocket, and LXD — portability of code (microservice) across multiple environments is seamless. More so, deploying and managing containerized applications are now easier. Above all, these microservices along with the container technology will help developers autoscale and easily handle the load.
7. Use Machine Learning in IT Operations:
Machine learning, and its subset deep learning, is no longer restricted to just Big Data applications. It is slowly seeping into the walls of DevOps on the cloud to help improve IT operations, especially by minimizing human intervention. More so, applying machine intelligence to problem-solving will soon be the norm. For instance, it will come in handy to fix the alert floods and monitoring fatigue caused by a company’s operational management systems in this 24/7-uptime world. Read the Botmetric blog ‘Assuage Alert Fatigue Mess with DevOps Intelligence’ to learn how machine intelligence can help solve this issue.
8. Use Robotic Process Automation (RPA):
Even though this technology is at quite a nascent stage now, it has made its presence felt in cloud computing. From our observation, 2017 will be the year it is used voraciously by many cloud management and SaaS products, wherein they will offer data-driven ‘bots’ that can automatically capture and interpret existing data, manipulate it, trigger responses, and do more, intelligently and smartly. In short: gear up to embrace this next generation of deep learning.
9. Embrace NoOps:
While RPA, machine learning, automated security, etc. are making strides towards efficient cloud computing, NoOps will be the next wave in 2017. Soon, automating all operations will be the norm. Building the cloud as code, with just the Dev team, and investing the Ops team in development efforts and innovation will be the way forward. Are you game?
Cloud Computing in 2017: The State of Cloud Providers
In 2006, AWS created the first wave of cloud computing. A decade later, AWS has created the second wave. Apart from the fact that it is currently operating at an $11 billion USD scale, it has also announced 50+ services, helping change the landscape of enterprise cloud computing. As an AWS Technology Partner, here are a few tips from team Botmetric that you can take into account in 2017:
1. Use Lambda@Edge to deliver a low latency UX for customized web applications, and more so, run code at CloudFront edge locations without provisioning or managing servers.
2. Integrate Blox, the new open source scheduler for Amazon EC2 container service.
4. Buy Convertible RIs and leverage Regional Benefit to get the most out of EC2s.
5. Use new features available in S3 Storage. For instance: Object Tagging, S3 Analytics, Storage Class Analysis, S3 Inventory, and S3 CloudWatch Metrics.
Microsoft Azure
According to a leading survey, Microsoft Azure accounts for 28.4% of the global IaaS ecosystem and is quickly catching up with AWS in the race to tap this growing market. With its growing portfolio of services (with support for machine learning, DevTest Labs, Active Directory, Log Analytics, Bot Service, etc.) and continued patronage among Microsoft aficionados, the Azure PaaS is also gaining ground, especially among enterprises.
Google Cloud Platform
With Google accelerating its move into cloud data warehousing and machine learning, it is also in the race with AWS and Azure, capturing a 16.5% share of the IaaS market.
To Wrap Up
At Botmetric, we talk about cloud computing and DevOps every day — not just with a bunch of clients, but with other cloud geeks in the ecosystem too, closely observing industry trends and the practical challenges cloud engineers face. To this end, we have developed a close-knit collaboration with other partners and seek to make cloud management a breeze for all, using automation. Hence this write-up. We hope we have thrown some light on the trends that will reign over cloud computing in 2017. Let us know if we have missed something here. Plus, do share your views and thoughts on the state of cloud computing in 2017 on Twitter, Facebook, & LinkedIn.
We are thrilled to close this year with a bang! It’s hard to sum up 2016 in just a few scribbles, especially when we made many new friends and rolled out so many enhancements & new features: a new platform, new audits, more ingrained intelligence, more cloud optimization options, a revamped website with new UI, and much, much more.
And as the New Year sets in slowly across the world, minute by minute, second by second, and continues its journey with the same charm and diligence, we at Botmetric, likewise, will continue our journey towards making cloud management and optimization a breeze for you. With rolled sleeves. With more focus. With more features. And with more zeal. Everything for you, dear customer.
Here’s 2016 Year-in-Review: The Best Year so far, for Botmetric
We attribute our success to our dear customers. Thank you for the timely feedback, and those wonderful testimonials bedecked with five stars. Based on the feedback and our learning, we revamped Botmetric into a platform of three products that are use-case targeted instead of a ‘one-size-fits-all’ approach:
1. Cost management & Governance: This Botmetric product helps you control your cloud spend, save that extra bit, optimize spend through intelligent analytics and allocate cloud resources wisely for maximum ROI. It is built for businesses and CIO teams to enable them in decision making & maximizing cloud ROI.
2. Security & Compliance: This Botmetric product helps you get compliant and keeps your cloud secure with automated audits, health checks, and best practices. It provides the most comprehensive list of automated health checks. It is built for CISO and Security Engineers to proactively identify issues and fix vulnerabilities before they become problems.
3. Ops & Automation: This Botmetric product helps you save time and effort you invest in automating cloud operations. It has built-in operational intelligence that can analyse problems and fix events in seconds. Above all, speeds-up DevOps. It is built for CTOs and DevOps Engineers seeking alert diagnostics, event intelligence, and out of the box automation.
Choose any one of the above, any two, or all three. Your wish, your products, tailored for your AWS cloud. With these three products, you can realize the full potential of your cloud without any information overload, and find the insights that matter to you in just one click.
To celebrate what you’ve helped us achieve this year, we have put together few 2016 Botmetric facts:
New and Key Botmetric Product Features Rolled-Out in 2016:
5. Cloud Reporting Module: Helps you quickly find your AWS cloud reports from one centralized module without scrolling, and to counter endless searching for what you need.
6. Reserved Instance Planner: Provides reservation recommendations at instance level. This revisited RI planner allows you to filter the recommendations, look at the details of the instance being recommended, and accordingly add it to a RI plan. You can also download the plan and work on budget approvals and actual reservations offline.
An Advanced DevOps Intelligence Feature: Assuages the alert fatigue mess, helps you easily understand alert events through intelligence, and tells you why each is happening. It also checks for patterns in the problems.
We have much more coming up in 2017. So stay tuned with us.
Here’s Botmetric wishing you a very Happy New Year.
Let’s blow the heartiest kisses to the cloud in 2017
Cloud is the new black. Let’s together embrace it more, with dexterity, in 2017.
Share all your 2016 cloud musings, learning, and accomplishments with Botmetric on Twitter, Facebook, or LinkedIn. We’re all ears, and we’ve your back.
Let’s make cloud an easier and a better place to grow our business.
P.S. If you have not signed up with Botmetric yet, then go ahead and take a 14-day free trial. As an AWS user, we’re sure you’ll love it!
Amazon Simple Storage Service (S3) is one of the most widely deployed AWS services, next to AWS EC2. It is used for a wide range of use cases such as static HTML websites, blogs, personal media storage, and enterprise backup storage. From an AWS cost perspective, S3 is one of the top resources to watch. For every enterprise looking to optimize AWS costs, analyzing and formulating an effective cost management strategy for AWS S3 is important. More so, understanding the data lifecycle of the applications hosted is the key step towards implementing a good AWS S3 cost management strategy.
Making the most of AWS S3:
With AWS, you pay for the services you use and the storage units you’ve consumed. If AWS S3 service is a significant component of your AWS cost, then implementing AWS S3 management best practices is the way forward.
For example, if a business has opted for the AWS S3 service and provisioned 100 GB of it but has actually stored only 10 GB of files, then AWS will only charge for the 10 GB, not for the entire 100 GB provisioned initially. Because of this, many AWS administrators tend to overlook S3 from a cost management perspective. However, various other factors affect S3 costs too, of which many are unaware.
To this end, we’ve collated a few basic checks to get S3 cost management right as your AWS S3 usage grows:
1. EC2 instances and S3 buckets should be in the same AWS region, because there is a cost for data transfers out of an AWS region.
2. The naming schema should be chosen such that the access keys generated ensure files are stored and distributed across multiple drives of the AWS S3 system. If the access keys are distributed evenly, fewer file operations are needed to read and write the files, which leads to lower costs, as there is an additional cost overhead for S3 read-write operations.
3. Only temporary access credentials should ever appear in the code of an application that uses S3. S3 resources can be misused if access keys are exposed to a third party, which can prove very costly if the credentials are compromised.
4. Monitoring the actual usage of AWS S3 periodically is one of the best practices. By doing so, misuse of the provisioned S3 resources will come to light, helping curtail data compromise.
5. Files are the key object type stored in S3. All files that are no longer relevant should be removed from S3 buckets, as should temporary files that can be recreated through a computation process. Temporary files generated by incomplete multi-part uploads should also be cleaned up periodically.
6. When using versioning for an S3 bucket, enable the “Lifecycle” feature to delete old versions. Here’s why and how: with Lifecycle Management, you can define time-based rules that trigger ‘Transition’ and ‘Expiration’ (deletion of objects). The Expiration rules give you the ability to delete objects, or versions of objects, older than a particular age. This keeps objects available in case of an accidental or planned delete, while limiting your storage costs by deleting versions once they are older than your preferred rollback window.
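A sketch of what such a rule looks like as a lifecycle configuration payload, assuming a bucket-wide rule and an example 30-day rollback window; it also aborts stalled multipart uploads, per the earlier check:

```python
def build_lifecycle_config(rollback_days=30):
    """Build the payload for s3.put_bucket_lifecycle_configuration():
    expire noncurrent object versions after the rollback window and
    clean up incomplete multipart uploads after a week."""
    return {
        "Rules": [{
            "ID": "expire-old-versions",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix = whole bucket
            "NoncurrentVersionExpiration": {"NoncurrentDays": rollback_days},
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }]
    }

config = build_lifecycle_config(rollback_days=30)
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-versioned-bucket", LifecycleConfiguration=config)
```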
7. Try to send data to S3 in a compressed format, because AWS S3 charges for the storage units you consume.
Ultimately, all data stored in S3 has lifecycle stages: creation, usage, and then infrequent usage. Take content creation on a news website: the daily news, along with its images, can be stored in AWS S3. Current news items are accessed most and hence have to be quickly accessible to a reader. At the end of the week, the older daily news content can be moved to AWS S3 RRS, a lower-cost option for easily reproducible content. At the end of the month, it can be moved to the Standard-Infrequent Access storage class. At the end of the quarter, this content can be moved to the low-cost, rarely accessed archival storage of AWS Glacier.
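An age-out schedule like the one above maps directly onto S3 lifecycle transition rules. This sketch builds one such rule; the “news/” prefix and the day counts are illustrative, and the supported transition targets here are Standard-IA and Glacier:

```python
def build_transition_rule(prefix="news/"):
    """Build a lifecycle rule moving objects to cheaper storage classes
    as they age: Standard-IA after 30 days, Glacier after 90 days."""
    return {
        "ID": "age-out-content",
        "Status": "Enabled",
        "Filter": {"Prefix": prefix},  # only applies to this key prefix
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},
            {"Days": 90, "StorageClass": "GLACIER"},
        ],
    }

rule = build_transition_rule()
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="news-content", LifecycleConfiguration={"Rules": [rule]})
```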
This data lifecycle is applicable across domains including e-commerce and enterprise computing as well. Hence, leverage data’s inherent lifecycle for AWS S3 cost optimization.
You can also take advantage of Amazon S3 Reduced Redundancy Storage (RRS) as an alternative to standard S3 storage, because it’s cheaper.
Once you follow all the above hacks, start observing the bills. And don’t forget to follow other key best practices too: use RRS wherever you can, keep your buckets organized, archive when appropriate, speed up your data processing with proper access key names, use S3 if you are hosting a static website, architect around data transfer costs, and use consolidated billing.
Finally, AWS provides a simple configuration mechanism to specify the rules of the data lifecycle and the transfer of objects across storage types. So, do take the data lifecycle into account when it comes to S3 cost management.
If you are finding it difficult to save on AWS S3 costs, then explore the intelligent Botmetric AWS Cloud Management Platform with a 14-day free trial. It can help you manage your AWS storage resources and keep them at optimal pricing levels at all times. For other interesting news on the cloud, follow us on Twitter, Facebook, and LinkedIn.
Elastic Compute Cloud (EC2) is one of the most popular AWS services, used by almost every Amazon cloud customer. In general, EC2 usage accounts for 70 to 75% of the AWS bill for an average AWS user. Moreover, most of the underlying services like EBS, EIP, ELB, NAT, etc. are used in conjunction with the EC2 service for deploying applications on the AWS cloud.
So, several unique EC2-related line items can show up on your AWS bill, making it even more difficult to comprehend what’s driving all that spending. A high-level view of the spend will not suffice. Because of this, it is critical to analyze EC2 usage and its spend breakdown by various dimensions like resource, instance type, service, account, and more while optimizing AWS costs.
To cater to this need and help our customers understand their AWS EC2 spend easily and efficiently, we have introduced support for “EC2 Cost Analysis” in the all-new Botmetric platform as part of the Cost Management & Governance ‘Analyze’ feature.
Here’re the top features that the new Botmetric EC2 Cost Analysis offers:
1. Know your EC2 spend by instance type: You can quickly drill down and understand your total EC2 cost on AWS cloud split across different EC2 instance families. You can filter this further by various AWS accounts.
2. EC2 cost breakdown by sub-services: We have brought together the cost of EBS, EIP, ELB, Data Transfer, and NAT Gateway under the EC2 cost analysis module so you can easily see your mix of total spend across various EC2-related services. You can filter this cost further for any AWS account or month to drill into specific details. We also encourage you to drill down this analysis for a particular instance family.
3. EC2 cost breakdown across different AWS accounts: You can split the EC2 cost across different AWS accounts in your master payer account so you can categorize them based on your usage.
4. Data export in CSV: We allow you to export different breakdowns of EC2 cost into a CSV file so you can use them for any internal analysis. The data export option lets you see the cost breakdown by instance type, AWS account, related services, specific EC2 resources, etc.
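As an illustration of the kind of export involved (the data here is made up, not real billing output), this sketch serializes per-instance-type costs to CSV. With raw AWS data, the rows could come from the Cost Explorer API grouped by the INSTANCE_TYPE dimension, as noted in the comment:

```python
import csv
import io

def cost_rows_to_csv(rows):
    """Serialize (instance_type, monthly_cost) pairs to CSV, similar in
    spirit to the export described above. Costs are rounded to cents."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["InstanceType", "Cost (USD)"])
    for instance_type, cost in rows:
        writer.writerow([instance_type, f"{cost:.2f}"])
    return buf.getvalue()

out = cost_rows_to_csv([("m4.large", 123.456), ("t2.micro", 8.1)])
# With live data, rows could be derived from:
# boto3.client("ce").get_cost_and_usage(
#     TimePeriod=..., Granularity="MONTHLY", Metrics=["UnblendedCost"],
#     GroupBy=[{"Type": "DIMENSION", "Key": "INSTANCE_TYPE"}])
```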
You can access this feature in Botmetric Cloud Management Platform under Cost & Governance application in the Analyze Module. Please write to us with your feedback on what we can do better and where we can improve it further.
If you want to know some of the AWS EC2 cost saving tips that pro AWS users follow, read this Botmetric blog post. And if you want to know what other new features are available in the new release of Botmetric, then take an exclusive 14-day trial or read the expert blog post by Botmetric Customer Success Manager, Richa Nath. Until our next blog post, do stay tuned with us on Twitter, Facebook, and LinkedIn for other interesting news from us!