4 ways AIOps will bring joy to Cloud Engineers

The world of IT has evolved exponentially over the last decade, with cloud becoming the new normal for enterprise businesses. From on-premise data centers to the rise of cloud and converged architecture, IT operations has undergone a wave of process and procedural changes under the DevOps philosophy. Companies like Amazon, Microsoft and Google have disrupted traditional infrastructure and IT operations by removing the heavy lifting of installing data centers and managing servers, networks, storage and so on, so that engineers can put their focus back on applications and business operations instead of IT plumbing.

Above all, the DevOps philosophy is to save time and improve performance by bridging the gap between engineers and IT operations. However, DevOps hasn't truly delivered what was expected of it, as engineers still have to handle all the issues and events in their infrastructure, whether in the cloud or in an on-prem data center.
A new philosophy is emerging around the question, "What if humans could solve new, complex problems while we let machines resolve known, repetitive, and identifiable problems in cloud infrastructure management?" This is the AIOps philosophy, and it is slowly taking root in cloud-native and cloud-first companies to reduce the dependency on engineers for resolving problems.

Many enterprises have already adopted cloud as a key component of their IT and have limited their DevOps to configuration management and automated application deployments. Nurturing the AIOps philosophy will further eliminate the repetitive need for engineers to manage everyday operations and free up precious engineering time to focus on business problems. While cloud has made automation easy for engineers, it's the lack of intelligence powering their day-to-day operations that still causes operational fatigue, even in the cloud world.

The adoption of cloud and the emergence of AI and ML technologies are allowing companies to use intelligent software automation, rather than vanilla scripting, to make decisions on known problems, predict issues, and provide diagnostic information for new problems, reducing the operational overhead for engineers. The era of pagers waking up engineers in the middle of the night for downtimes and known issues will be gone over the next 18 to 24 months.

In the traditional IT world, the main focus of operations engineers was to keep the lights on, but in the cloud world there are new dimensions, like variable costs, API-driven infrastructure provisioning, the absence of centralised security, and dynamic changes, that further increase the work burden. The only way to help companies reduce their cloud costs, improve security compliance for on-demand provisioning, reduce alert fatigue for engineers, and bring intelligent machine operations to handle problems caused by dynamic changes is through AIOps: put AI to work to make cloud work for your business.

Managing Enterprise Cloud Costs 

According to the RightScale "State of Cloud 2017 Report", managing cloud costs is the #1 priority for companies using cloud computing. Cloud cost challenges cause massive headaches for finance, product and engineering teams within organisations due to dynamic provisioning, auto-scaling support and the lack of garbage collection for unused cloud resources. When hundreds of engineers within an enterprise use cloud platforms like AWS, Azure and Google for their applications, it is impossible for one person to keep track of spend or deploy any centralised approval process. Companies like Botmetric are using machine intelligence and AI technologies to detect cost spikes, provide deep visibility into who used what, and help companies deploy intelligent automation to remove unused resources and auto-resize over-provisioned servers, storage and the like in the cloud.

As IT infrastructure is an important factor in your business success, so is understanding the optimal usage limits for your organization's IT infrastructure needs. Compared to on-prem, cloud looks pretty easy because of its pay-as-you-go model; however, when you grow exponentially you scale your cloud the same way, and that leads to bill shock. Many teams put tagging policies and rigorous monitoring in place, but since controlling cost is not the engineer's natural instinct, you still lack the edge. Continuous automation helps close those gaps, and AIOps puts a process of continuous saving into your cloud paradigm. For example, you can automate the purchase of Reserved Instances in AWS with simple code running on AWS Lambda. Another common and highly favourable use case is turning off dev instances over the weekend and turning them back on at the start of the week, which yields up to 36% savings for many cloud users.
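
To make the weekend shutdown concrete, here is a minimal AWS Lambda sketch in Python, assuming dev instances carry a hypothetical ENV=DEVELOPMENT tag and the function is wired to a scheduled CloudWatch Events rule on Friday evening (a mirror function calling start_instances would run on Monday morning):

import boto3

def lambda_handler(event, context):
    # Find running instances carrying the (assumed) dev tag.
    ec2 = boto3.client('ec2')
    reservations = ec2.describe_instances(
        Filters=[
            {'Name': 'tag:ENV', 'Values': ['DEVELOPMENT']},
            {'Name': 'instance-state-name', 'Values': ['running']},
        ]
    )['Reservations']
    instance_ids = [i['InstanceId'] for r in reservations for i in r['Instances']]
    # Stop them for the weekend; a sibling function restarts them on Monday.
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return {'stopped': instance_ids}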

Ensuring Cloud Security Compliance 

When any engineer within the organisation can provision a cloud resource through an API call, how can businesses ensure that every cloud resource is provisioned with the security compliance configuration their business needs, and satisfy regulatory requirements like PCI-DSS, ISO 27001, HIPAA, etc.? This requires real-time security compliance detection, informing the user who provisioned the resource, and taking actions, like shutting down non-compliant machines, to keep your business protected. The most important part of security these days is continuous monitoring, and this can be achieved with a mechanism that detects and reports the moment an alert is received. Many organizations are developing tools that not only detect security vulnerabilities but also auto-resolve them. By leveraging AIOps and using real-time configuration event data from cloud providers, companies can stay compliant and reduce their business risk.
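
As a small taste of what such continuous detection can look like, here is a hedged Python sketch of a single compliance check, flagging security groups that leave SSH open to the world; real tooling would cover many more rules and feed an alerting or auto-remediation pipeline:

import boto3

def find_open_ssh_groups():
    # Flag security groups allowing port 22 from 0.0.0.0/0.
    ec2 = boto3.client('ec2')
    offenders = []
    for sg in ec2.describe_security_groups()['SecurityGroups']:
        for perm in sg['IpPermissions']:
            world_open = any(r.get('CidrIp') == '0.0.0.0/0'
                             for r in perm.get('IpRanges', []))
            if world_open and perm.get('FromPort') == 22:
                offenders.append(sg['GroupId'])
    return offenders

if __name__ == '__main__':
    print('Non-compliant security groups:', find_open_ssh_groups())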

Reduce Alert Fatigue

The problem of too many alerts is a known issue in the data center world, popularly called Ops fatigue. The traditional chain, a NOC team watching alert emails, an IT support team reviewing and responding to tickets, and engineers looking into the critical problems, broke down in the cloud world, where DevOps engineers manage all of these tasks.

Anybody who has managed production infrastructure, business services, applications or architected systems knows that most problems are caused by known events or identifiable patterns. Noisy alerts are the common denominator in IT operations management. With swarms of alerts flooding inboxes, it becomes very difficult to tell which ones really matter and deserve an engineer's attention. A great solution powered by anomaly detection would filter out unnecessary alerts and suppress duplicates, giving a more concise alert stream that detects real issues and predicts problems. Engineers already have an idea of what to do when certain events or symptoms occur in their application or production infrastructure, but when events or alerts are triggered, most current tools just provide text describing what happened rather than context about why it is happening. So as a DevOps engineer, it's important to create diagnostic scripts or programs that give you the context behind why CPU spiked, why an application went down, or why API latency increased; essentially, to get to the root cause faster, powered by intelligence. You should also deploy anomaly detection powered by machine intelligence and smart automated actions (response mechanisms) for known events, with business logic embedded, so the team can sleep peacefully and never sweat again.
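
As an illustration of the diagnostic-script idea, here is a sketch of a Lambda function subscribed to a CloudWatch CPU alarm's SNS topic; it runs a command on the affected instance through SSM so the alert arrives with context rather than just text. It assumes the alarm carries an InstanceId dimension and that the SSM agent is installed on the instance:

import json
import boto3

def lambda_handler(event, context):
    # Parse the CloudWatch alarm notification delivered via SNS.
    alarm = json.loads(event['Records'][0]['Sns']['Message'])
    instance_id = next(d['value'] for d in alarm['Trigger']['Dimensions']
                       if d['name'] == 'InstanceId')
    # Capture the top CPU consumers at the moment of the spike.
    ssm = boto3.client('ssm')
    response = ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName='AWS-RunShellScript',
        Parameters={'commands': ['ps aux --sort=-%cpu | head -n 10']},
    )
    return response['Command']['CommandId']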

Intelligent Automation For Operations

Engineers responsible for managing production operations (from the ITOps era to the DevOps era) have been frustrated with static tooling that is mostly not intelligent. With the rise of machine intelligence and the adoption of deep learning, we will see more dynamic tooling that helps them in day-to-day operations. In the cloud world, the only magic wand for solving operational problems is to use code and automation as a weapon; operating your cloud infrastructure without intelligent automation only increases complexity for your DevOps teams. You can create everything from automated remediation actions to alert diagnostics. As a DevOps engineer and as a team, you need to focus on using code as the mechanism for resolving problems. If you are building CI/CD today, you should certainly deploy a trigger as part of your pipeline that monitors each deployment's health metrics and invokes a rollback if it detects performance or SLA issues. Simple remedies like this can save hours after every deployment and handle failures gracefully.
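
Here is a minimal sketch of such a post-deployment gate, assuming a CodeDeploy deployment and a pre-existing CloudWatch alarm that tracks the service's health (the names and the watch window are illustrative, not a fixed contract):

import time
import boto3

def post_deploy_health_gate(deployment_id, alarm_name, watch_minutes=10):
    cloudwatch = boto3.client('cloudwatch')
    codedeploy = boto3.client('codedeploy')
    deadline = time.time() + watch_minutes * 60
    while time.time() < deadline:
        # Poll the health alarm for the duration of the watch window.
        alarms = cloudwatch.describe_alarms(AlarmNames=[alarm_name])['MetricAlarms']
        if alarms and alarms[0]['StateValue'] == 'ALARM':
            # Stop the deployment and roll back to the last known-good revision.
            codedeploy.stop_deployment(deploymentId=deployment_id,
                                       autoRollbackEnabled=True)
            return 'rolled_back'
        time.sleep(30)
    return 'healthy'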

We will also see various ITSM vendors bringing AI and ML into their offerings: Intelligent Monitoring (dynamic alert thresholds instead of static ones), Intelligent Deployment (with cluster management and auto-healing tooling), Intelligent APM (not just what's happening, but why it's happening and due to what), Intelligent Log Management (real-time streaming of log events and auto-detection of relevant anomalous events based on the application stack) and Intelligent Incident Management (suppression of noise from different alerting systems, plus diagnostics that help engineers get to the root cause faster).

The state of cloud platforms and ITSM offerings is evolving at a rapid pace, and we have yet to see newer AI- and ML-powered concepts that disrupt cloud operations and infrastructure management, easing the pain for engineers so they can sleep peacefully at night and not worry every time a pager goes off!

You can also read the original post here.

5 Point Guide to AWS DR Automation

Disaster Recovery is a procedure to recover technology infrastructure and systems following a disaster. There are two types of disasters:
Natural – Natural calamities like floods, tornadoes, and earthquakes.
Man-Made – Disasters caused by human negligence or error, such as infrastructure failures, IT bugs, and cyber-terrorism.
In either case, we should not only have backups, but those backups should be copied across multiple regions and multiple accounts.

Here is a 5-point guide for AWS DR automation:

Type of Backups

There are three major levels of recovery an organization should consider while designing its recovery solution:

File Level Recovery – from files stored in S3.

Volume Level Recovery – from snapshots.

Database Level Recovery – from DB Snapshots.
For every AWS infrastructure, there are many kinds of resources that need to be backed up for DR purposes:
  • EC2 Instance Backups (EC2 AMIs)
  • EBS Volume Backups (Snapshots)
  • RDS DB Backups (DB Snapshots)
  • Elasticache DB Cluster Backups (Elasticache Snapshots)
  • Redshift DB Cluster Backups (Redshift Snapshots)
  • Route53 Hosted Zone Backups (S3 Copy of Hosted Zone Files)
  • CloudFormation Template Backups (CloudFormation Templates)

Critical vs Less Critical vs Non-Critical

Depending on the systems and their potential impact on the business, we can classify backup strategies into three types:
  • Most Critical Systems – Frequency: 1 hour; Retention: 1 year
  • Less Critical Systems – Frequency: 1 day; Retention: 180 days
  • Non-Critical Systems – Frequency: 1 week; Retention: 4 weeks (back up manually if required)

Automated vs Manual backups

In a dynamic cloud environment with a wide range of services, it is extremely difficult to manage resources and deal with the continuous changes beneath them.
For example:
If an organization has hundreds of instances of different types, each with a different role to play, it becomes impossible to create and monitor backups manually. With automation, you just need to add tags to every instance defining its role, which lets you create individual policies per role. Let's say you have the following instance definitions:
Tag               Instance Count   Backup Policy
ENV/DEVELOPMENT   30               Once a week
ENV/MONITORING    5                Once a month
ENV/PRODUCTION    60               Every 4 hours
ENV/OTHERS        5                Not required (manual)

In the example shown above, automation is a clear winner over manual backups.
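
To make the tag-driven idea concrete, here is a boto3 sketch that snapshots every EBS volume attached to instances carrying a given tag, following the ENV/PRODUCTION scheme from the table above; an external scheduler (for example a CloudWatch Events rule) would invoke it at the frequency the policy defines:

import boto3
from datetime import datetime

def backup_tagged_volumes(tag_key='ENV', tag_value='PRODUCTION'):
    ec2 = boto3.client('ec2')
    reservations = ec2.describe_instances(
        Filters=[{'Name': 'tag:' + tag_key, 'Values': [tag_value]}]
    )['Reservations']
    for reservation in reservations:
        for instance in reservation['Instances']:
            for mapping in instance.get('BlockDeviceMappings', []):
                ebs = mapping.get('Ebs')
                if not ebs:
                    continue  # skip non-EBS devices
                ec2.create_snapshot(
                    VolumeId=ebs['VolumeId'],
                    Description=tag_value + ' backup ' +
                                datetime.utcnow().strftime('%Y-%m-%d %H:%M'),
                )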

Cost Optimized backups

Organizations should create strategies to clean up old backups that are no longer required. This can drastically reduce AWS infrastructure cost. Also, AWS limits the number of backups that can be created in an account; for example, the EBS snapshot limit is 10,000.

A cost-optimized DR strategy is therefore required to keep backups in check. In Botmetric backup jobs, the 'Snapshots to retain' parameter caps the number of snapshots kept per volume. Similarly, 'AMIs to retain' caps the number of AMIs kept per instance.
Let us understand this with an example: if 'Snapshots to retain' is 180 and the job runs once a day, snapshots up to 180 days (i.e. six months) old are kept.

If 'Snapshots to retain' is 360 and the job runs twice a day, the retention window is still 180 days (six months), but two snapshots per volume are kept for each of those days.

Note: As a safety margin, we keep 'Snapshots to retain' + 1.
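
A minimal sketch of this retention-based cleanup, keeping the newest 'Snapshots to retain' + 1 snapshots per volume (mirroring the safety margin above) and deleting the rest; it assumes the snapshots were created by your own backup jobs:

import boto3

def prune_snapshots(volume_id, retain=180):
    ec2 = boto3.client('ec2')
    snapshots = ec2.describe_snapshots(
        Filters=[{'Name': 'volume-id', 'Values': [volume_id]}],
        OwnerIds=['self'],
    )['Snapshots']
    # Newest first; keep retain + 1 as the safety margin.
    snapshots.sort(key=lambda s: s['StartTime'], reverse=True)
    for snapshot in snapshots[retain + 1:]:
        ec2.delete_snapshot(SnapshotId=snapshot['SnapshotId'])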

DR Automation for Various AWS Resources

Depending on the AWS infrastructure and DR strategy, backups can be taken across regions and across accounts.
In Botmetric, we have a wide variety of jobs for various services:

EC2:
  • Create EC2 AMI based on EC2 instance tags
  • Copy EC2 AMI based on EC2 instance tags across regions
  • Copy EC2 AMI based on EC2 AMI tags across regions
  • Copy EC2 AMI based on EC2 instance tags across accounts

EBS:
  • Create EBS snapshot based on EBS volume tags
  • Create EBS snapshot based on EC2 instance tags
  • Create EBS snapshot based on EC2 instance IDs
  • Copy EBS snapshot based on EBS volume tags across regions
  • Copy EBS snapshot based on EC2 instance tags across regions
  • Copy EBS snapshot based on EBS volume tags across accounts

RDS:
  • Create RDS snapshot based on DB instance tags
  • Copy RDS snapshot based on DB instance tags across regions

Redshift:
  • Create Redshift snapshot based on Redshift cluster tags

Route53:
  • Create Route53 hosted zone backups

In addition, for cleaning up old backups, we have 'Deregister Old EC2 AMIs' and 'Delete Old EBS Snapshots' jobs.
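
In the same spirit as the 'copy across regions' jobs above, here is a hedged sketch of copying a volume's most recent snapshot to a second region; the region names are illustrative:

import boto3

def copy_latest_snapshot(volume_id, src='us-east-1', dst='us-west-2'):
    src_ec2 = boto3.client('ec2', region_name=src)
    dst_ec2 = boto3.client('ec2', region_name=dst)
    snapshots = src_ec2.describe_snapshots(
        Filters=[{'Name': 'volume-id', 'Values': [volume_id]}],
        OwnerIds=['self'],
    )['Snapshots']
    if not snapshots:
        return None
    latest = max(snapshots, key=lambda s: s['StartTime'])
    # The copy is initiated from the destination region.
    return dst_ec2.copy_snapshot(
        SourceRegion=src,
        SourceSnapshotId=latest['SnapshotId'],
        Description='DR copy of ' + latest['SnapshotId'] + ' from ' + src,
    )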

Conclusion

In today's ever-changing cloud environment, the zeal to achieve continuous availability, robustness, scalability and dynamism has spawned the rise of 'Backup as a Service' (BaaS). With AWS DR automation and smart strategies, you can make your business 'disaster-free'. Read about the do's and don'ts of a DR automation strategy.

Botmetric is an intelligent cloud management platform designed to make cloud easy for engineers. Sign up now to see how Botmetric can help you with your disaster recovery planning.

 

May Roundup @ Botmetric: Deeper AWS Cost Analysis and Continuous Security

Cost modelling, budget reduction and cost optimization are among the topmost considerations for businesses irrespective of size. Whether it is an enterprise with a 100-plus footprint or a small start-up with fewer than 10 employees, cost reduction is always great news. This month, we had two awesome pieces of news from AWS regarding cost reduction: the 61st price reduction, slashing EC2 RI and M4 prices, and the release of better cost allocation for EBS snapshots. There was also a key Botmetric Security & Compliance product roll-out on CIS Compliance. So in the month of May, the focus was on AWS cloud cost analysis and continuous security.

Like every month, here we present the May month-in-review, covering all the key activities around Botmetric and the AWS cloud.

Product News You Must Know @ Botmetric

Botmetric continues to build more competencies on its platform. Here’re the May month updates:

CIS Compliance for Your AWS

What it is about: Auditing your infrastructure against AWS CIS Benchmark policies to ensure complete CIS compliance of your AWS infra, without you having to go through a complex process or study the docs.

How it will help: It helps AWS users, auditors, system integrators, partners, and consultants imbibe CIS AWS Framework best practices, ensuring CIS compliance for your AWS cloud.

Where can you find this feature on Botmetric: Under Security & Compliance's Security Audit & Remediation console.

To know more in detail, read the blog ‘Embrace Continuous Security and Ensure CIS Compliance for Your AWS, Always.’

Cost Allocation for AWS EBS Snapshots

What it is about: AWS has been evolving custom tagging support for most of its services, like EC2, RDS, ELB, Beanstalk, etc., and has now introduced cost allocation for EBS snapshots. Botmetric, acting quickly on this AWS announcement, incorporated cost allocation and cost analysis for EBS snapshots.

How it will help: It allows you to use cost allocation tags for your EBS snapshots so that you can assign costs to your customers, applications, teams, departments, or billing codes at the level of individual resources. With this new feature, you can easily analyze your EBS snapshot costs as well as usage.

Where can you find this feature on Botmetric: Under Cost & Governance’s Chargeback console.

To know more in detail, read the blog ‘Cost Allocation for AWS EBS Snapshots Made Easy, Get Deeper AWS Cost Analysis.’

Use of InfluxDB Real-Time Metrics Data Store by Botmetric

What it is about: Botmetric's journey in choosing the InfluxDB real-time metrics data store over a KairosDB+Cassandra cluster, and the key reasons why an engineer or architect looking for a real-time data store with simple operational management should opt for InfluxDB.

How it helped Botmetric: With InfluxDB, Botmetric could speed up application development, and its simple operational management has been helpful. The team was able to easily query and aggregate data, and InfluxDB offered auto-expiry support for certain datasets. Using InfluxDB, Botmetric has reduced the DevOps effort of cleaning up old data with separate utilities.

Knowledge Sharing @ Botmetric

5 Cloud Security Trends Shaping 2017 and Beyond

While the switch to cloud computing provides many advantages in cost savings and flexibility, security is still a prime consideration for many businesses, and it's vital to consider new cloud technologies in 2017 to counter rising threats. This guest post by Josh McAllister covers the top cloud security trends shaping 2017, among them AI and automation, micro-segmentation, software governance, adoption of new security technologies, and ransomware and the IoT. If you are looking to improve your security posture, this blog post is a must-read.

The Biggest Pet Peeves of Cloud Practitioners and Why You Should Know

Despite growing adoption, there are many barriers and challenges to cloud adoption and acceleration, and the same holds for cloud practitioners. Botmetric throws some light on them: apprehensions about losing control and visibility over data, having less visibility and control over operations compared to on-prem IT infra, fear of bill shock, and more. As a cloud user, do you want to know the top pet peeves of cloud practitioners and turn them into possibilities or opportunities? Know about these roadblocks here.

A CFO’s Roadmap to AWS Cloud Cost Forecasting and Budgeting

Despite the exponential increase in cloud adoption, there is one major fear attached to AWS, and indeed to all cloud adoption: how to stay on top of cloud sprawl, and how to perfect AWS cost forecasting and budgeting as an enterprise business. To add to it, for today's CFOs, IT is at the top of the agenda. Are you a CFO trying to up your game and seeking to build a roadmap for AWS cloud cost modelling, spend forecasting and cloud budgeting, and above all to assuage cloud sprawl? Bookmark this blog.

What is NoOps, Is it Agile Ops?

DevOps is there, but today it is being augmented with NoOps using automation. And by taking a NoOps approach, businesses will be able to focus on clean application development, shorter cycles, and more so increased business agility.

On the other hand, in the journey of DevOps, if you automate mundane Ops tasks, it leads to NoOps. Essentially, NoOps frees-up developers’ time to further utilize their time for more innovation and to bring agility into ops (which is Agile Ops). Do read Botmetric’s take on this.

Ultimate Comparison of AWS EC2 t2.large vs. m4.large for Media Industry

Two types of AWS EC2 instances, t2.large and m4.large, feature almost identical configurations. With media sites required to handle a large number of concurrent visitors at any given time, both resources seem perfect, which makes it challenging for a media company to choose the better one in terms of price and performance. To eliminate this confusion, Botmetric has come up with an information break-up of AWS EC2 t2.large vs. m4.large for media companies. If you are a media company on AWS, this post might interest you.

The Wrap-up

Before we wrap up this month, we have a freebie to share. Botmetric has always recommended that AWS users treat tagging and monitoring as stepping stones towards budgeting and cost compliance. To this end, Botmetric has come up with an expert guide that will help you save costs on the AWS cloud with smart tagging. Download it here.

Until next month, stay tuned with us.

What is NoOps, Is it Agile Ops?

Sometime in 2011, Forrester released a report, 'Augment DevOps With NoOps', stating, "DevOps is good, but cloud computing will usher in NoOps." It's been over five years, and several statements quoted in that report still carry a lot of weight. While many have embraced cloud and DevOps, there's a big crowd of DevOps professionals out there who still think NoOps is the end of DevOps. In reality, NoOps is just the progression of DevOps.

And with DevOps being merely the extension of Agile to include Ops as well, can we call NoOps Agile Ops? In this post we will dive deep into how developers are building, testing and deploying applications, automating operations and making use of microservices, leading to NoOps (more so, Agile Ops), where everything rolls out fast, very fast.

Role of Cloud in NoOps and the Continuous-Automation Approach to DevOps for Agility

Before the DevOps concept came into existence, the development team was responsible for estimating servers, memory and network components and producing the final specification of resources, a tedious process. Later, ITOps took over this estimation and specification work, while also managing the resources. To bring in agility, DevOps was born, and developers started leveraging Agile concepts and managing operations too, rolling out applications faster.

Today, several cloud and PaaS platforms help developers automate application lifecycle management activities such as allocating a machine instance to run the application, loading the OS and other architectural software components like application servers and databases, setting up the networking layer, building the application from the latest source in the code repository, deploying it on the configured machine, and so on.

So as developers automate operational tasks, they free up more of their time for business logic and spend less on operations. In many cases they perform 'no Ops tasks at all.' In essence, they have progressed from DevOps towards NoOps.

DevOps is there, but it is being augmented with NoOps using automation.

Mike Gualtieri of Forrester Research, who coined the term NoOps, once said in his blog post, "NoOps means that application developers will never have to speak with an operations professional again." This means that more and more developers are responsible for operations, and operations are getting ingrained in developers' job descriptions. Thanks to increasing cloud adoption, today's operational tasks are increasingly carried out by developers rather than ITOps professionals. Here's why: cloud has brought consistency and elasticity, which make it easier for developers to automate everything using APIs.

For instance, the leading public cloud, AWS, offers a bunch of services and tools that can automate repetitive tasks. Tools and services like Jenkins, AWS CodePipeline, and AWS CodeDeploy help automate the build-test-release-deploy process, enabling developers to deploy new code into production themselves and potentially saving hundreds of hours every month.

Consider the Netflix case study. Adrian Cockcroft, VP of Cloud Architecture Strategy at AWS and formerly Cloud Architect at Netflix, says in his blog post, "Several hundred development engineers use tools to build code, run it in a test account in AWS, then deploy it to production themselves. They never have to have a meeting with ITops, or file a ticket asking someone from ITops to make a change to a production system, or request extra capacity in advance."

Cockcroft further adds in the same post, “They use a web based portal to deploy hundreds of new instances running their new code alongside the old code, put one ‘canary’ instance into traffic, if it looks good the developer flips all the traffic to the new code. If there are any problems they flip the traffic back to the previous version (in seconds) and if it’s all running fine, some time later the old instances are automatically removed. This is part of what we call NoOps.”

“NoOps approach leads to business focus, clean application development, shorter cycles, and more so increased business agility.” – Vijay Rayapati, CEO, Minjar Cloud Solutions

Further, since DevOps and microservices work better when applied together, adopting a microservices architectural style, and a common toolset that supports it through code, lets engineers bring additional productivity to DevOps and agility to Ops.

DevOps and Microservices Architecture: Moving Hand-in-hand to Enable NoOps

Microservices can help developers and DevOps collaborate over requirements, dependencies, and problems, allowing them to work jointly on issues such as a build configuration or build script problem. With microservices, functional components can be deployed in their own archives, and the application can then be organized as a logical whole through a lightweight communication mechanism such as REST over HTTP.

More so, a microservices-based architecture empowers DevOps teams to manage their own lines of code without depending on others to deploy them at any time. By enabling this independence, microservices architecture not only helps increase developer productivity but also makes applications more flexible and scalable.

Here are the highlights of how microservices help DevOps across operations management:

  • Service Deployability: Microservices enable DevOps to incorporate service-specific security, replication, persistence, and monitoring configurations.
  • Service Replication: Kubernetes provides a great way to replicate services easily using a Replication Controller when services need to be replicated via X-axis cloning or Y-axis partitioning. Each service can embed its own scaling logic.
  • Service Resiliency: Since the services are independent by design, even if one service fails, it will not bring down the entire application. DevOps can remedy that particular service without worrying about cascading impact from the individual service failure.
  • Service Monitoring: As a distributed system, microservices can simplify service monitoring and logging. Microservices allow DevOps to take proactive action, for example when a service is consuming unexpected resources, and to scale resources for that service alone.

Considering the above points, DevOps should embrace the microservices approach to bring agility to all the Ops tasks carried out by DevOps engineers.

Public Cloud Services: Empowering DevOps to Move Towards NoOps

Thanks to the diverse set of features, tools, and services offered by cloud providers, today's developers as well as DevOps teams are able to automate several tasks and autoscale without the help of ITOps professionals. This has reduced the burden of repetitive operational tasks, especially for developers and DevOps. For instance:

  1. Auto Scaling: A DevOps engineer can create collections of EC2 instances/VMs, specify desired instance ranges, and create scaling policies that define when instances are provisioned or removed from the collection (see the sketch after this list). With this provisioning capability at hand, the Ops team's manual tasks become redundant, and the team can focus more on business logic than on Ops.
  2. AWS OpsWorks helps configure and manage applications and create groups of EC2 instances, streamlining instance provisioning and management, which again frees the team to focus on business logic rather than Ops.
  3. A centralized log management tool helps developers and DevOps simplify troubleshooting by monitoring, storing, and accessing log files from EC2, AWS CloudTrail, and other sources.
  4. Using the EKK stack, a developer can focus on analyzing logs and debugging the application instead of managing and scaling the system that aggregates the logs.
  5. AWS CodePipeline, AWS CodeBuild, and AWS CodeDeploy help automate manual tasks and processes, including deployments, development and test workflows, container management, and configuration management.
  6. AWS Config creates an AWS resource inventory, including configuration history, configuration change notifications, and relationships between AWS resources, and provides a timeline of resource configuration changes for specific services. Change snapshots are stored in a specified Amazon S3 bucket, and it can be configured to send Amazon SNS notifications when AWS resource changes are detected. This helps keep vulnerabilities in check.

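As mentioned in item 1, here is a minimal sketch of a scaling policy, assuming an existing Auto Scaling group whose name ('web-asg') is purely illustrative; it keeps average CPU near 50% with target tracking:

import boto3

def create_cpu_scaling(asg_name='web-asg'):
    autoscaling = boto3.client('autoscaling')
    # Target tracking: AWS adds/removes instances to hold CPU near 50%.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName=asg_name,
        PolicyName='keep-cpu-at-50',
        PolicyType='TargetTrackingScaling',
        TargetTrackingConfiguration={
            'PredefinedMetricSpecification': {
                'PredefinedMetricType': 'ASGAverageCPUUtilization'
            },
            'TargetValue': 50.0,
        },
    )
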
If you want to know how to work the magic with DevOps for AWS Cloud Management, read this post.

The Last Word: NoOps Brings Agility to Ops Tasks, So NoOps is Synonymous with Agile Ops

In the journey of DevOps, automating mundane Ops tasks leads to NoOps. Essentially, NoOps frees up developers' time for more innovation and brings agility into Ops (which is Agile Ops). Whatever you perceive it as, Ops Automation, AIOps, Agile Ops, or more, it is a rolling stone with the right momentum.

What is your take on this? Do share your thoughts.

April Roundup @ Botmetric: Aiding Teamwork to Solidify 3 Pillars of Cloud Management

Spring is still on at Botmetric, and we continue to evolve with the seasons through new features. This month, the focus was on bringing more collaboration and teamwork to the various tasks of cloud management. The three pillars of cloud management (visibility, control, and optimization) can be solidified only with seamless collaboration. To that end, Botmetric released two cool collaborative features in April: Slack Integration and Share Reports.

1. Slack Integration

What is it about: Integrating the Slack collaboration tool with Botmetric, so that cloud engineers never miss an alert or notification while on a Slack channel and can quickly relay it to their team.

How will it help: Cloud engineers can quickly get a sneak peek into specific Botmetric alerts, as well as details of various cloud events, on their desired Slack channel. Be it an alert generated by Botmetric's Cost & Governance, Security & Compliance, or Ops & Automation, engineers can see these alerts without logging into Botmetric and quickly communicate the problem among team members.

Where can you find this feature on Botmetric: Under the Admin section inside 3rd Party Integrations.

To know more in detail, read the blog 'Botmetric Brings Slack Fun to Cloud Engineers.'

2. Share/Email Data-Rich AWS Cloud Reports Instantly

What is it about: Sharing/emailing Botmetric reports directly from Botmetric. No downloading required.

How will it help: For successful cloud management, all team members need complete visibility, with pertinent data in the form of AWS cloud reports. The new 'Share Reports' feature provides complete visibility across accounts and helps multiple AWS users in a team collaborate better while managing the cloud.

Where can you find this feature on Botmetric: Across all the Botmetric products in the form of a share icon.

To know more in detail, read the blog ‘Share Data-Rich AWS Cloud Reports Instantly with Your Team Directly From Botmetric.’

Knowledge Sharing @ Botmetric

Continuing our new tradition of providing quick bites and snippets on better AWS cloud management, here are a few blogs we covered in the month of April:

Gauge AWS S3 Spend, Minimize AWS S3 Bill Shock

AWS S3 offers 99.999999999% durability and features a simple web interface to store and retrieve any amount of data. When it comes to AWS S3 spend, there is more to it than just the storage cost: if you're an operations manager or a cloud engineer, you probably know that data reads/writes and data moved in/out also count toward the AWS S3 bill. A detailed analysis of all of these can help keep AWS S3 bill shock to a minimum. To know how, visit this page.
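
For a sense of how such a breakdown can be scripted, here is a hedged sketch using the Cost Explorer API (assuming it is enabled on the account; the dates are placeholders) to split S3 spend by usage type, separating storage from requests and data transfer:

import boto3

def s3_cost_breakdown(start='2017-05-01', end='2017-06-01'):
    ce = boto3.client('ce')
    result = ce.get_cost_and_usage(
        TimePeriod={'Start': start, 'End': end},
        Granularity='MONTHLY',
        Filter={'Dimensions': {'Key': 'SERVICE',
                               'Values': ['Amazon Simple Storage Service']}},
        Metrics=['UnblendedCost'],
        GroupBy=[{'Type': 'DIMENSION', 'Key': 'USAGE_TYPE'}],
    )
    # Print cost per usage type (storage, requests, transfer, ...).
    for group in result['ResultsByTime'][0]['Groups']:
        print(group['Keys'][0], group['Metrics']['UnblendedCost']['Amount'])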

7 Tips on How to Work the Magic With DevOps for AWS Cloud Management

Are you a DevOps engineer looking for complete AWS cloud management? Or an AWS user looking to apply DevOps practices to optimize your AWS usage? Either way, AWS and DevOps are the modern way of getting things done. You should leverage new-age DevOps tools for monitoring, application performance management, log management, security, data protection and cloud management, instead of trying to build ad hoc automation or dealing with the primitive tools offered by AWS.

Get the top seven tips on how to work the magic with DevOps for AWS cloud management.

The Ultimate Cheat Sheet On Deployment Automation Using AWS S3, CodeDeploy & Jenkins

If you're a DevOps engineer or an enterprise looking for a complete guide on how to automate app deployment using Continuous Integration (CI)/Continuous Delivery (CD) strategies and tools like AWS S3, CodeDeploy, Jenkins and CodeCommit, then bookmark this blog penned by Minjar's cloud expert.

Botmetric Cloud Explorer: A Handy Topological Relationship View of AWS Resources

Do you want to get a complete understanding of your AWS infrastructure? Map how resources are connected and where they stand today, for stronger governance, auditing, and tracking? Above all, get one handy, cumulative relationship view of AWS resources without using the AWS Config service? Read this blog to learn how to get a complete topological relationship view of your AWS resources.

The Cloud Computing Think-Tank Pieces @ Botmetric

5 Reasons Why You Should Question Your Old AWS Cloud Security Practices

As you scale your business on the cloud, AWS keeps scaling its services too, so cloud engineers have to constantly adapt to architectural changes as AWS updates are announced. And as those architectural changes are made, AWS cloud security best practices and audits need to be revisited from time to time as well.

As a CISO, you must question your old practices and assess whether they are still relevant today. Here are excerpts from a think-tank session highlighting the five reasons why you should question your old practices.

The Rise of Anything as a Service (XaaS): The New Hulk of Cloud Computing

The 'cloud-driven aaS' era is clearly upon us. Besides the typical SaaS, IaaS, and PaaS offerings, there are other 'as-a-Service' (aaS) offerings too: Database-as-a-Service, Storage-as-a-Service, Windows-as-a-Service, and even Malware-as-a-Service. It is the era of Anything-as-a-Service (XaaS). Read the excerpts from an article by Amarkant Singh, Head of Product, Botmetric, featured on Stratoscale, which shares views on XaaS, IaaS, PaaS, and SaaS.

April Wrap-Up: Helping Bring Success to Cloud Management

Rain or shine, Botmetric has always striven to bring success to cloud management, and it will continue to do so with DevOps, NoOps, and AIOps solutions.

If you have missed rating us, you can do it here now. If you haven't tried Botmetric, we invite you to sign up for a 14-day trial. Until next month, stay tuned with us on social media.

The Ultimate Cheat Sheet On Deployment Automation Using AWS S3, CodeDeploy & Jenkins

A 2016 State of DevOps Report indicates that high-performing organizations deploy 200 times more frequently, with 2,555 times faster lead times, recover 24 times faster, and have three times lower change failure rates. Irrespective of whether your app is greenfield, brownfield, or legacy, high performance is possible due to lean management, Continuous Integration (CI), and Continuous Delivery (CD) practices that create the conditions for delivering value faster, sustainably.

And with AWS Auto Scaling, you can maintain application availability and scale your Amazon EC2 capacity up or down automatically according to conditions you define. Moreover, Auto Scaling allows you to run your desired number of healthy Amazon EC2 instances across multiple Availability Zones (AZs).

Additionally, Auto Scaling can also automatically increase the number of Amazon EC2 instances during demand spikes to maintain performance and decrease capacity during less busy periods to optimize costs.

The Scenario

We have an application at www.xyz.com, with web servers set up on Amazon Web Services (AWS). As part of the architecture, the servers use the AWS Auto Scaling service, which scales them according to the metrics and policies we specify. Every time a new feature is developed, we have to manually run the test cases before the code gets integrated and deployed, and then pull the latest code to all the environment servers. Doing this manually presents several challenges.

The Challenges

The challenges of manually running the test cases before the code gets integrated and deployed are:

  1. Pulling and pushing code for deployment from a centralized repository
  2. Working manually to run test cases and pull the latest code on all the servers
  3. Deploying code on new instances that are configured in AWS Auto Scaling
  4. Pulling the latest code on one server, taking an image of that server, and re-configuring it with AWS Auto Scaling, since the servers are auto scaled
  5. Deploying build automatically on instances in a timely manner
  6. Reverting back to previous build

The above challenges require a lot of time and human resources, so we need a technique that saves time and makes life easier by automating the whole process from CI to CD.

Here’s a complete guide on how to automate app deployment using AWS S3, CodeDeploy, Jenkins & Code Commit.

To that end, we're going to use AWS S3, AWS CodeDeploy, Jenkins, and AWS CodeCommit.

Now, let's walk through the flow, how it's going to work, and its advantages before we implement it all. When new code is pushed to a particular Git repo/AWS CodeCommit branch:

  1. Jenkins will run the test cases (Jenkins listens to a particular branch through Git webhooks).
  2. If the test cases fail, it will notify us and stop further post-build actions.
  3. If the test cases pass, it will go to the post-build action and trigger AWS CodeDeploy.
  4. Jenkins will push the latest code as a zip file to AWS S3 on the account we specify.
  5. AWS CodeDeploy will pull the zip file onto all the specified Auto Scaling servers.
  6. For the Auto Scaling servers, we can choose an AMI that has the AWS CodeDeploy agent pre-installed; the agent helps new instances launch faster and pull the latest revision automatically.
  7. Once the latest code is copied to the application folder, it will once again run the test cases.
  8. If the test cases fail, it will roll back the deployment to the previous successful revision.
  9. If they pass, it will run post-deployment build commands on the server and ensure that the latest deployment does not fail.
  10. If we ever want to go back to a previous revision, we can also roll back easily.

This automation using CI and CD strategies makes application deployment smooth, error-tolerant, and faster.

Smart Deployment Automation: Using AWS S3, CodeDeploy, Jenkins & CodeCommit

The Workflow:

Here are the workflow steps of the above architecture:

  1. The application code, along with the Appspec.yml file, is pushed to AWS CodeCommit. The Appspec.yml file includes the necessary script paths and commands that help AWS CodeDeploy run the application successfully.
  2. As the application and Appspec.yml file get committed to AWS CodeCommit, Jenkins is automatically triggered by the Poll SCM function.
  3. Jenkins then pulls the code from AWS CodeCommit into its workspace (the path in Jenkins where all artifacts are placed), archives it, and pushes it to the AWS S3 bucket. This can be considered Job 1.

Here’s the Build Pipeline

A Jenkins pipeline (previously called a workflow) defines the job flow in a specific order. Building a pipeline means breaking the big job into small individual jobs: if the first job fails, an email is triggered to the admin and the build process stops at that step, without moving on to the second job.

To achieve this, install the pipeline plugin in Jenkins.

For the above scenario, the work is broken into three individual jobs:

  • Job 1: When a commit lands, Job 1 runs: it pulls the latest code from the CodeCommit repository, archives the artifact, and emails the status of Job 1 (success or failure, along with the console output). If Job 1 builds successfully, it triggers Job 2.
  • Job 2: This job runs only when Job 1 is stable and succeeds. In Job 2, the artifacts from Job 1 are copied to its workspace and pushed to the AWS S3 bucket. Once the artifacts are sent to the S3 bucket, an email is sent to the admin, and Job 3 is triggered.
  • Job 3: This job invokes AWS CodeDeploy, which pulls the code from S3 and pushes it to either a running EC2 instance or the AWS Auto Scaling instances.

The image below shows the structure of the pipeline.

Conditions:

  1. If Job 1 executes successfully, it triggers Job 2, which is responsible for pushing the successful build of the code to the S3 bucket and then triggering Job 3. If Job 2 fails, an email is again triggered with a message about the job failure.
  2. When Job 3 is triggered, the archive (application code along with Appspec.yml) is handed to the AWS CodeDeploy deployment service; the CodeDeploy agent on the instance runs the Appspec.yml file, which brings the application up and running.
  3. If a job fails at any point, the application remains on the previous build.

Below are the five steps necessary for deployment automation using AWS S3, CodeDeploy, Jenkins & CodeCommit.

Step 1: Set Up AWS CodeCommit in Development Environment

Create an AWS CodeCommit repository:

1. Open the AWS CodeCommit console at https://console.aws.amazon.com/codecommit.

2. On the welcome page, choose Get Started Now. (If a Dashboard page appears instead of the welcome page, choose Create new repository.)

3. On the Create new repository page, in the Repository name box, type xyz.com

4. In the Description box, type Application repository of http://www.xyz.com

5. Choose Create repository to create an empty AWS CodeCommit repository named xyz.com

Create a Local Repo

In this step, we will set up a local repo on our local machine to connect to our repository. To do this, we will select a directory on our local machine that will represent the local repo. We will use Git to clone and initialize a copy of our empty AWS CodeCommit repository inside of that directory. Then we will specify the username and email address that will be used to annotate your commits. Here’s how you can create a Local Repo:

1. Generate SSH keys on your local machine by running ssh-keygen without a passphrase.

2. Cat the id_rsa.pub file and paste its contents into IAM User -> Security Credentials -> Upload SSH Key, and note down the SSH Key ID:

$ cat ~/.ssh/id_rsa.pub

Copy this value.

Next, click on Create Access Keys and download the credentials containing the Access Key and Secret Key.

2. Set the environment variables at the end of the bashrc file:

# vi /etc/bashrc

export AWS_ACCESS_KEY_ID=AKIAINTxxxxxxxxxxxSAQ
export AWS_SECRET_ACCESS_KEY=9oqM2L2YbxxxxxxxxxxxxzSDFVA

3. Set up the config file inside the .ssh folder:

# vi ~/.ssh/config

Host git-codecommit.us-east-1.amazonaws.com
  User APKAxxxxxxxxxxT5RDFGV
  IdentityFile ~/.ssh/id_rsa    # private key

# chmod 400 ~/.ssh/config

4. Configure the Global Email and Username

# git config --global user.name "username"

# git config --global user.email "emailID"

5. Copy the SSH URL to use when connecting to the repository, and clone it:

# git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/xyz.com

6. Now put the application code inside the cloned directory, write the Appspec.yml file, and you are ready to push.

7. The hook scripts referenced by Appspec.yml are as follows. install_dependencies.sh includes:

#!/bin/bash
# Install Apache, PHP and the PHP MySQL bindings.
yum groupinstall -y "PHP Support"
yum install -y php-mysql
yum install -y httpd
yum install -y php-fpm

start_server.sh includes:

#!/bin/bash
# Start the web and PHP services after the new revision is in place.
service httpd start
service php-fpm start

stop_server.sh includes:

#!/bin/bash
# Stop the services, if they are running, before installing the new revision.
isExistApp=`pgrep httpd`
if [[ -n $isExistApp ]]; then
  service httpd stop
fi
isExistApp=`pgrep php-fpm`
if [[ -n $isExistApp ]]; then
  service php-fpm stop
fi

Appspec.yml includes:

version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/xyz.com
hooks:
  BeforeInstall:
    - location: .scripts/install_dependencies.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: .scripts/start_server.sh
      timeout: 300
      runas: root
  ApplicationStop:
    - location: .scripts/stop_server.sh
      timeout: 300
      runas: root

Now push the code to CodeCommit:

# git add .
# git commit -m "1st push"
# git push

8. We can now see the code pushed to CodeCommit.

Step 2: Setting Up Jenkins Server in EC2 Instance

1. Launch the EC2 instance (CentOS7/RHEL7) and perform the following operations:

# yum update -y
# yum install java-1.8.0-openjdk

2. Verify the Java installation and add the Jenkins repository:

# java -version
# wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo
# rpm --import http://pkg.jenkins-ci.org/redhat/jenkins-ci.org.key

3. Install Jenkins:

# yum install jenkins

4. Add Jenkins to system boot:

# chkconfig jenkins on

5. Start Jenkins:

# service jenkins start

6. By default Jenkins starts on port 8080; this can be verified via:

# netstat -tnlp | grep 8080

7. Go to your browser and navigate to http://<server-IP>:8080. You will see the Jenkins dashboard.

8. Configure the Jenkins username and password, and install the AWS- and Git-related plugins.

Here's how to set up a Jenkins pipeline job:

Under Source Code Management, click on Git.

Pass the Git SSH URL; under Credentials, click Add, and for Kind choose 'SSH Username with private key'.

Note that the username will be the same as in the config file of the development machine where the repo was initiated, and we have to copy that machine's private key and paste it here.

In Build Triggers, click on Poll SCM and set the schedule for when the build should start.

For the post-build actions, we archive the files, set Job 2 as the next job to build if Job 1 succeeds, and trigger the status email.

For the time being, we can start building the job to verify that when code is committed, Jenkins starts building automatically and is able to pull the code into its workspace folder. Before that, however, we have to create an S3 bucket and provide credentials (Access Key and Secret Key) in Jenkins, so that after Jenkins pulls code from AWS CodeCommit and archives it, it can push the build to the S3 bucket.

Step 3: Create S3 Bucket

Create S3 Bucket.

After creating the S3 bucket, provide its details in Jenkins along with the AWS credentials.

Now, when we run Job 1, Jenkins pulls the code from AWS CodeCommit and, after archiving, keeps it in the workspace folder of Job 1.

AWS CodeCommit Console Output

From the console output above, we can see that Jenkins pulls the code from AWS CodeCommit and, after archiving, triggers the email. After that, it calls the next job, Job 2.

Console Output

The image above shows that after Job 2 builds, Job 3 is also triggered. Before triggering Job 3, though, we need to set up the AWS CodeDeploy environment.

Step 4: Launch the AWS CodeDeploy Application

Creating IAM Roles

Create an IAM instance profile, attach the AmazonEC2FullAccess policy, and also attach the following inline policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

Create a service role named CodeDeployServiceRole: select the role type AWS CodeDeploy and attach the AWSCodeDeployRole policy, as shown in the screenshots below.

Create an auto scaling group for a scalable environment.

Here are the steps:

1. Choose an AMI and select an instance type for it. Attach the IAM instance profile that we created in the earlier step.

2. Now go to Advanced Settings and type the following commands into the "User Data" field to install the AWS CodeDeploy agent on your machine (if it's not already installed on your AMI):

#!/bin/bash
# Install the CodeDeploy agent at first boot.
yum -y update
yum install -y ruby
yum install -y aws-cli
sudo su -
aws s3 cp s3://aws-codedeploy-us-east-1/latest/install . --region us-east-1
chmod +x ./install
./install auto

3. Select a security group in the next step and create the launch configuration for the Auto Scaling group. Then, using that launch configuration, create the Auto Scaling group.

4. After creating the Auto Scaling group, it's time to create the deployment group.

5. Click on AWS CodeDeploy and click on Create Application.

6. Enter the application name and deployment group name.

7. For the tag type, choose either EC2 instance or AWS Auto Scaling group, and enter the name of the EC2 instance or Auto Scaling group.

8. For the Service Role ARN, select the service role that we created in the "Creating IAM Roles" section of this post.

9. Go to Deployments and choose Create New Deployment.

10. Select Application and Deployment Group and select the revision type for your source code.

11. Note that the IAM role associated with the instance or Auto Scaling group should be the CodeDeploy service role, and that role's ARN must have the CodeDeploy policy attached.

Step 5: Fill CodeDeploy Info in Jenkins and build it

1. Now go back to Jenkins Job 3, click on "Add Post-Build Action", and select "Deploy the application using AWS CodeDeploy".

2. Fill in the details for AWS CodeDeploy Application Name, AWS CodeDeploy Deployment Group, AWS CodeDeploy Deployment Config, AWS Region, S3 Bucket, and Include Files (**), then click on Access/Secret to fill in the keys for authentication.

3. Click on Save and build the project. After a few minutes, the application will be deployed on the Auto Scaling instances.

4. When Job 3 builds successfully, we get console output like the below:

Job 3 console output

5. After this build, the changes will be reflected in the AWS CodeDeploy deployment group.

6. Once you hit the DNS of the instance, you will see your application up and running.

To Wrap-Up

It's proven that teams and organizations who adopt continuous integration and continuous delivery practices significantly improve their productivity. And AWS CodeDeploy with Jenkins is an awesome combo for automating app deployment and achieving CI and CD.

Are you an enterprise looking to automate app deployment using a CI/CD strategy? As a Premier AWS Consulting Partner, we at Minjar have your back! Do share your comments in the section below or give us a shout-out on Twitter, Facebook or LinkedIn.

Share Data-Rich AWS Cloud Reports Instantly with Your Team Directly From Botmetric

Henry Ford once said, "Coming together is a beginning. Keeping together is progress. Working together is success." This adage holds true for finding success in managing the AWS cloud, because complete AWS cloud management is not one person's responsibility; it is a shared responsibility and, more so, teamwork. And for that teamwork to reap benefits, all team members need complete visibility, with pertinent data in the form of AWS cloud reports in hand. To cater to this need, Botmetric has introduced the 'Share Reports' feature, which allows a Botmetric user to share important AWS cost, security or Ops automation reports with multiple AWS users for better collaboration.

If you’re a Botmetric user, you can now:

  • Share the data-rich reports directly from any Botmetric products, thus saving time and effort
  • Educate both Botmetric and non-Botmetric user(s) within your team about various aspects of your AWS infrastructure
  • Highlight items that may need action by other teammates

Why Botmetric Built Share Reports

Currently, Botmetric offers more than 40 reports and 30 graphs and charts that help drive better cloud governance. More so, these data-rich reports offer a great culmination of insights and help keep you updated on your AWS infrastructure.

Earlier, Botmetric empowered its users (those added to your Botmetric account) to download all these reports. At times, however, you'll likely need to send recurring reports to colleagues who are not part of your Botmetric account.

Thus, continuing our mission to provide complete visibility and control over your AWS infrastructure, Botmetric now allows you to email or share those reports directly with non-Botmetric users too. By doing so, Botmetric empowers every cloud custodian in your organization with pertinent data, even if they are not Botmetric users.

The new share functionality lets you share specific reports across Cost & Governance, Security & Compliance, and Ops & Automation with custodians in your organization who are not Botmetric users but want insight into certain AWS cloud items.

The new share reports can be used across the Botmetric platform in two specific ways:

1. Share Historical Reports

Share any of the AWS cloud reports available under the Reports Library on the Botmetric console with other custodians on the team.

2. Export and Share Charts and Graphs as CSV Reports

If you find any crucial information in any of the reports under Botmetric Cost & Governance, Security & Compliance, or Ops & Automation, you can share it using the ‘Share’ icon with any custodian who isn’t a Botmetric user but is responsible for cloud.

For example, you might want to share the list of ports open to the public with the person on your team who is responsible for perimeter security. You can do this from the Audit Reports section of Security & Compliance.

The Bottom Line:

AWS offers more than 70 services, and each service has multiple family types. With so much variety in AWS’ services, you will surely need either holistic or very specific information at some point for analysis. With Botmetric reports and the new shareability feature, you and your team can together manage and optimize your AWS cloud with minimal effort.

If you are a current Botmetric user, then Team Botmetric invites you to try this feature and share your feedback. If you’re yet to try Botmetric and want to explore this feature, then take up a 14-day trial. If you have any questions on AWS cloud management, just drop a line below in the comment section or give us a shout-out at @BotmetricHQ.

Botmetric Brings Slack Fun to Cloud Engineers

Slack (the real-time messaging app) is today one of the most robust communication platforms for teams. Teams of all shapes and sizes, right from NASA to NGOs, use Slack as a go-to tool for both communication and collaboration. Thanks to all the fun and useful integrations folks have built on top of it, there’s a whole lot of really cool stuff you can do with Slack. Above all, it makes engineers’ work fun and more collaborative.

Closely following our tenet of making a cloud engineer’s life easier by the day, we’re excited to bring you the Botmetric and Slack integration. With this integration, cloud engineers can quickly get a sneak peek into specific Botmetric alerts, as well as details of various cloud events, on their desired Slack channel.

Be it an alert generated by Botmetric’s Cost & Governance, Security & Compliance, or Ops & Automation, a cloud engineer will never miss an alert or notification when on Slack.

We Get Notifications on Email Anyway, Is Botmetric-Slack Integration Really Necessary?

Understood: most alerts and notifications for IT infrastructure are delivered to you as emails, but with an alert deluge that becomes annoying. Email is of course one of the most crucial forms of communication in the corporate world, and it has proven success for communication with external stakeholders. For internal communication, however, it is more time consuming. Plus, enterprises have other internal needs, such as file sharing, integrations, and private groups.

As I said earlier: Slack makes engineers’ work fun and more collaborative. As an engineer, have you ever wondered how many emails you wade through each day? Chat is one channel that has proved more collaborative and productive, which is why it has become the preferred tool for most team members. And Slack is one such collaboration tool, with a plethora of third-party integrations that make communication more collaborative, textual, transparent, and efficient.

So, we recommend the Botmetric and Slack integration for seamless alert and notification management and effective communication of AWS cloud issues over chat.

How to Make the Best Use of Botmetric-Slack Integration?

Botmetric generates several alerts on various cloud events, day in and day out. Since these alerts and notifications arrive continuously throughout the day, monitoring them on your favourite channel will ease a cloud engineer’s life. With this integration, you can:

  • Receive desired Botmetric alerts or notifications in real-time onto your desired Slack channel
  • Never miss a Botmetric notification anymore, even while not logged into Botmetric
  • Be more nimble at work and agile on the cloud, increasing productivity

Who Can Use Botmetric-Slack Integration?

Anyone who has subscribed to Botmetric, is on Slack, and would like to receive specific Botmetric alerts or notifications in real time on a Slack channel of their choice can use it.

What Can You Do With Botmetric-Slack Integration?

You can set up several integrations using this feature. A few of them are listed below:

1. You can create very specific integrations. For example, you may choose to receive Ops & Automation alerts on a channel that has only developers in it. Similarly, you can create a separate integration for Cost & Governance where only senior management is present.

2. Integrations can be created per account and/or per notification event type.

3. You can pick and choose notification events for which you wish to receive notifications.

4. If you are using a ticketing application that listens to a Slack channel and creates tickets, this adds another dimension. For example, under Ops & Automation, whenever an incident is created, a message is pushed to a Slack channel; that message can be used to create an automated ticket in your system. A minimal sketch of this mechanism follows.
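For illustration only (this is not Botmetric’s internal implementation), here is a minimal Python sketch of how a message can be pushed to a Slack channel via an incoming webhook; the webhook URL and message text are placeholders:

```python
# Minimal sketch (not Botmetric's implementation): push an alert message
# to a Slack channel via an incoming webhook. The webhook URL is a placeholder.
import json
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def notify_slack(text: str) -> None:
    """Post a plain-text alert to the channel the webhook is bound to."""
    payload = json.dumps({"text": text}).encode("utf-8")
    request = urllib.request.Request(
        WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)

notify_slack("Ops & Automation: new incident created")  # placeholder message
```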

This Botmetric-Slack integration feature can be found under the Admin section inside 3rd Party Integrations.

The Wrap-Up

Communication and collaboration are essential parts of work life. Enterprises with proven team success have two things in common: collaboration and communication. In fact, the efficacy of a business depends on communicating the right thing at the right time, and Slack does just that, helping businesses with apt communication and collaboration. With the Slack and Botmetric integration, your cloud engineers will never miss an alert or notification from Botmetric.

Do try this feature and provide feedback. If you need any assistance in integrating Botmetric with Slack, just drop a line in the comment section below or visit the Botmetric support page. You can also give us a shout-out on Facebook, Twitter, or LinkedIn to know more about it.

If you’re yet to try Botmetric, then we invite you to take a 14-day trial.

Botmetric Cloud Explorer: A Handy Topological Relationship View of AWS Resources

Picture this: a cloud engineer is trying hard to map all his AWS resources to gain a complete understanding of the infrastructure. He also wants to map how the resources are connected and where they stand today, so that he can build stronger governance, auditing, and tracking of resources. All he wishes for is one handy, cumulative relationship view of AWS resources laid out topologically. Of course, there is the AWS Config service at his disposal, but it does not provide that topological view.

Plus, getting a complete relationship view of AWS resources can be taxing, because on AWS we tend to create, delete, and manage resources sporadically. No more worries: Botmetric Cloud Explorer’s Relationship View has your back!

Why Botmetric Cloud Explorer Relationship View?

“Sometimes, it’s good to get a different perspective,” says a famous adage. You don’t get a complete picture of what’s happening while you’re cleaving through complex roads; you figure out what you’re looking for only when you take a different perspective. A bird’s-eye view often helps more than deep diving into complex data. Likewise, when you dive deep into your cloud data, chances are you will get lost, whereas a bird’s-eye view of your AWS resources gives you clarity like nothing else.

Of course, the AWS Config service is at your disposal, but in the long run a relationship view of all AWS resources will help you manage and evaluate those resources with greater accuracy and less effort.

Here at Botmetric, we always strive to give a complete picture of your AWS cloud infrastructure, not just the tip of the iceberg. That’s why we built Cloud Explorer, which provides a handy topology and relationship view of all your AWS cloud resources.

Botmetric Cloud Explorer’s Relationship View gives a topological representation of your complete AWS infrastructure. At a single glance, you can get a complete view of your resources and how they are connected to each other.

The primary function of Relationship View is to track the state of different AWS resources like AWS VPCs, AWS Subnets, EC2 Instances, EC2 volumes, Security Groups, EIP, Route Table, Internet Gateway, VPN Gateway, Network Interface, Network ACL, Customer Gateway, and more.

And if you’re an organization or an enterprise with a huge number of servers under a VPC, Botmetric Cloud Explorer’s Relationship View will show you which server is connected to which subnet. It also gives a topological relationship view of each security group an instance is associated with.

Also, if there are multiple VPCs in your AWS account, Relationship View shows at a glance which subnets belong to which VPC. By dragging a VPC to the side of the topological view, you can see complete details of how the resources are connected to each other under that specific VPC. (For a sense of the relationship data behind such a view, see the sketch below.)
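For illustration only (not Botmetric’s implementation), here is a minimal boto3 sketch that assembles the kind of VPC-to-subnet-to-instance relationship data a topological view is drawn from; the region is a placeholder:

```python
# Minimal sketch (not Botmetric's implementation): build a simple
# VPC -> subnet -> instance map with boto3, the kind of relationship
# data a topological view is drawn from. The region is a placeholder.
from collections import defaultdict

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Map each VPC to its subnets.
subnets_by_vpc = defaultdict(list)
for subnet in ec2.describe_subnets()["Subnets"]:
    subnets_by_vpc[subnet["VpcId"]].append(subnet["SubnetId"])

# Map each subnet to the instances running in it.
instances_by_subnet = defaultdict(list)
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        if "SubnetId" in instance:
            instances_by_subnet[instance["SubnetId"]].append(instance["InstanceId"])

for vpc_id, subnet_ids in subnets_by_vpc.items():
    print(vpc_id)
    for subnet_id in subnet_ids:
        print("  ", subnet_id, instances_by_subnet.get(subnet_id, []))
```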

There are other highlights of Botmetric Cloud Explorer Relationship View too. It provides:

  • Ability to find which security groups are not assigned to any resources
  • Visibility on unused security groups and subnets
  • A real-time view of resources: if you make any change in your infrastructure, that change will immediately reflect in the topological view in Botmetric

Apart from giving a relationship view, Botmetric Cloud Explorer’s Relationship View can be used for knowledge sharing too. It helps your entire team verify the relationship between AWS resources manually, for instance which subnet belongs to which VPC or which security group is associated with which instance. This saves a lot of time!

Above all, to build stronger governance, tracking of resources is pivotal. With Botmetric Cloud Explorer Relationship View, you can easily and quickly identify resources that are not utilized, and thus govern your resources in a timely manner.

How to Access Botmetric’s Cloud Explorer Relationship View?

The Botmetric Cloud Explorer Relationship View can be accessed from Botmetric Ops & Automation product — an intelligent cloud automation console for smarter cloud operations and management. 

One of the prerequisites is to enable AWS Config, in a few steps, for the regions where you want to use this feature. AWS Config provides you with an AWS resource inventory, configuration history, and configuration change notifications, primarily to enable security and governance. Above all, it takes a snapshot of the state of your AWS resources and how they are wired together, then tracks the changes that take place between them, so any modification, addition, or deletion in your AWS infra gets recorded in AWS CloudTrail.
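Before relying on the feature, you can confirm that AWS Config is actually recording in a region. A minimal boto3 sketch, with the region as a placeholder:

```python
# Minimal sketch (assumes appropriate IAM permissions): check whether the
# AWS Config recorder is enabled in a region. The region is a placeholder.
import boto3

config = boto3.client("config", region_name="us-east-1")

statuses = config.describe_configuration_recorder_status()["ConfigurationRecordersStatus"]
if not statuses:
    print("AWS Config is not set up in this region.")
for recorder in statuses:
    state = "recording" if recorder["recording"] else "stopped"
    print(f"Recorder {recorder['name']} is {state}")
```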

Once AWS Config is up and running, you can have the Botmetric Cloud Explorer Relationship View handy.

Conclusion: Topological Relationship View of AWS Resources is Pivotal

As your business scales on the cloud, usage of resources and modifications to them scale too. Instead of diving deep into complex data at first glance, first get a bird’s-eye view of resource usage for better cloud governance. That is exactly what Botmetric Cloud Explorer Relationship View does: it provides a beautiful visualization of your AWS infrastructure.

If you want to know more about this feature, do drop a line below, or take a 14-day free trial.

The March Roundup @ Botmetric: Easier AWS Cloud Management with NoOps

Spring is here, finally! The blooming fresh buds, the sweet smell of roses, and the cheerful mood all around: Earth seems to come to life again. Seasons are vital to the transition and evolution of our planet, and they serve the evolution of human consciousness too. Likewise, the transition and evolution of your AWS cloud management consciousness plays a vital role in improving the lives, primarily the productivity, of the DevOps and cloud engineers in your organization.

Your AWS cloud management efforts, carried out by your DevOps and cloud engineers either in silos or with an integrated approach, need to be regularly monitored, nurtured, and evolved over time. And when we say AWS cloud management efforts, we include AWS cost management, AWS governance, AWS cloud security and compliance, AWS cloud operations automation, and DevOps practices.

There are, of course, a variety of AWS services at your disposal to engineer a fully automated, continuous integration and delivery system, and help you be at the bleeding edge of DevOps practices. It is, however, easier said than done.

Having the right tools at hand is what matters most, especially when you are swimming in a tide of several modules. With agile digital transformation catching on quickly in every arena, it’s high time to ensure that your team’s every AWS cloud management effort counts towards optimal ROI and lowered TCO.

To that end, Botmetric has been evolving all its products (Cost & Governance, Security & Compliance, and Ops & Automation) with several NoOps and DevOps features that make the lives of DevOps and cloud engineers easier.

More so, you get more out of your AWS cloud management than you think. Explore Botmetric.

In March, Botmetric rolled out four key product features. Here are the four new feathers in Botmetric’s cap:

1. Define Your Own AWS Security Best Practices & Audits with Botmetric Custom Audits

What is it about: Building your own company-wide AWS security policies to attain comprehensive security of the cloud.

How will it help: Audit your infrastructure and enforce rules within your team, as per your requirements. You can put the custom rules or audits on auto-pilot, with no need to build and run scripts every time through cron/CLI. Above all, you can automate your AWS security best-practice checks.

Where can you find this feature on Botmetric: Under Security & Compliance’s Audit Report console.

Get more details on this feature here.

2. Increase Operational Efficiency by 5X with Botmetric Custom Jobs’ Cloud Ops Automation

What is it about: Writing Python scripts inside Botmetric to automate everyday, mundane DevOps tasks.

How will it help: Empowers DevOps and cloud engineers to run the desired automation with simple code logic in Python, and then schedule routine cloud tasks for increased operational excellence. It helps engineers free up a lot of time.

Where can you find this feature on Botmetric: Under Ops & Automation’s Automation console.

Get more details on this feature here.

3. Unlock Maximum AWS RDS Cost Savings with Botmetric RDS Cost Analyzer

What is it about: It is an intelligent analyzer that provides complete visibility into RDS spend.

How will it help: Discover unusual trends in your AWS RDS usage and know which component is incurring a significant chunk of the cost. Get a detailed breakup of RDS cost by AWS instances, instance types, AWS accounts, AWS sub-services, and instance engine.

Where can you find this feature on Botmetric: Under Cost & Governance’s Analyze console.

Get more details on this feature here.

4. AWS Reserved Instance Management Made Easy with Botmetric’s Smart RI

What is it about: Automatically modifying reservations as soon as a modification is available, without going to the AWS console.

How will it help: Reduces the effort involved in modifying unused RIs. Automates RI modifications, which can occur multiple times a day, as soon as unused RIs are found. Saves the cost that would otherwise be wasted on unnecessary on-demand usage and idle RIs.

Where can you find this feature on Botmetric: Under Cost & Governance’s RI console.

Get more details on this feature here. You can also read it on AWS Week-in-Review.

Knowledge Sharing @ Botmetric

Continuing our new tradition of providing quick bites and snippets on better AWS cloud management, here are a few blogs we covered in the month of March:

The Road to Perfect AWS Reserved Instance Planning & Management in a Nutshell

Nearly every Google search result on ‘AWS RI benefits’ says you can get great discounts and save tremendously compared to on-demand pricing. The fact is, this discounted pricing can be reaped only if you know what RIs are, how to use them, when to buy them, how to optimize them, how to plan them, and so on. This blog covers in detail how to perfect your AWS RI planning and management.

DevSecOps: A Game Plan for Continuous Security and Compliance for your Cloud

DevOps makes it possible for code to deploy and function seamlessly. But where does “security” stand in this agile, CI/CD environment? You cannot afford to compromise on security and leave your infrastructure vulnerable to hackers, for sure! So here comes the concept of “DevSecOps”. If you’re looking to bring security ops into DevOps, then bookmark this blog.

3 Effective DDoS Protection & Security Solutions Apt for Web Application Workloads on AWS

NexusGuard research citing an 83% increase in Distributed Denial of Service (DDoS) attacks in 2Q2016 compared to 1Q2016 indicates that these attacks will likely remain prevalent beyond 2017. Despite stringent measures, such attacks keep bringing down web applications and denying service availability to users via botnets. Without a doubt, DDoS mitigation is pivotal. If you’re a security ops engineer, then this blog is a must-read.

5 Interesting 2017 DevOps Trends You Cannot Miss Reading

In 2017, there is a lot of noise about the future of DevOps. Here is a look at five interesting 2017 DevOps trends you cannot miss reading, and what our thought leaders think.

Don’t Let 2017 Amazon AWS S3 Outage Like Errors Affect You Again

On February 28th, 2017, several companies reported an Amazon AWS S3 cloud storage outage. Within minutes, hundreds of thousands of Twitter posts started making the rounds across the globe, sharing how apps went down due to this outage. No technology is perfect; any technology might fail at some point. The best way forward is to fool-proof your system against such outages in the future, as suggested by Team Botmetric.

To Conclude:

Rain or shine, Botmetric has always striven to improve the lives of DevOps and cloud engineers, and will continue to do so with DevOps, NoOps, and AIOps solutions. Get a 14-Day Exclusive Botmetric Trial Now.

If you have missed rating us, Botmetric invites you to do it here. Until next month, stay tuned with us.

Increase Operational Efficiency by 5X with New Botmetric Custom Jobs’ Cloud Ops Automation

As a Cloud Ops engineer, do you ever get that feeling that you are stuck like a tiny pet hamster in a wheel, doing the same stuff again and again and going nowhere? You have plans to automate everyday cloud operations tasks and a roadmap towards Cloud Ops automation, but don’t know where to start! Working on mundane operational tasks day in, day out is too taxing. Does this ring a bell?

The best way forward is to schedule all your routine tasks and use simple Python scripts to achieve the desired automation with Botmetric’s new Custom Jobs.

Here’s Why Botmetric Built Custom Jobs

The Botmetric Ops & Automation product already offered 25+ pre-defined automated jobs, which let you automate many routine tasks across 7 major AWS services. Many Botmetric customers liked these automated jobs and went on to request automation for some unique operational tasks in the AWS cloud. Hence, the Botmetric team built a universal solution: the ability to run custom Python scripts through the Botmetric console.

Game-changing Cloud Ops Automation Features in Botmetric Custom Jobs

In the current market, a lot of SaaS products offer automation but fall short on custom jobs. Botmetric Ops & Automation, however, has addressed almost 80% of automation requirements since its release.

With Botmetric Custom Jobs you can:

  • Run your own custom scripts: Through one Botmetric console, you can now perform both governance and automation. Botmetric Custom Jobs lets you write the Python scripts you need and automate their scheduled execution through the Botmetric console.
  • Increase operational efficiency: There is a list of tasks a DevOps engineer performs on an everyday basis, and these tasks differ from one infrastructure type to another. Automating such tasks through scripts frees up a lot of the engineer’s time to concentrate on business innovation.
  • Get visibility into executions: Unlike running a script through cron/CLI, with Botmetric you can view status, receive alert or email notifications on success or failure, and get historic execution details.

How to Schedule Custom Jobs on Botmetric?

There are two ways to schedule custom jobs:

1. Create a job with new script

Write your new script in the editor provided and verify the syntax. Give the job a name for identification and an email address to be notified at.

2. Utilize saved scripts to create a new job

You can also choose one of your previously created scripts and schedule a task from it.

Essentially, Custom Jobs empowers you to run the automation you need in your environment. With simple code logic of your own, written in Python, you can schedule your routine tasks for increased operational excellence.

Here are a few use cases to give you a gist of Custom Jobs’ potential:

The Case in Point for Creating VPC in a Region

Assume you’re headquartered in the Bay Area of the USA and run your business on the cloud, so most of your resources live in us-west. Lately you have expanded your business to Germany, but you are still launching instances in us-west, and your team starts complaining about latency issues. So you decide to place resources in eu-central, as that region now offers greater benefits. With a simple Python script that creates a VPC in a region with a user-defined CIDR, scheduled as a custom job, you can have the VPC ready for any resources launched in that region. A sketch follows.
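A minimal boto3 sketch of such a script; the region, CIDR block, and tag value are placeholders, not Botmetric’s exact job:

```python
# Minimal sketch (not the exact Botmetric job): create a VPC with a
# user-defined CIDR in a target region. Region, CIDR, and tag are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

vpc_id = ec2.create_vpc(CidrBlock="10.1.0.0/16")["Vpc"]["VpcId"]

# Tag the VPC so teammates can identify its purpose at a glance.
ec2.create_tags(
    Resources=[vpc_id],
    Tags=[{"Key": "Name", "Value": "eu-central-workloads"}],  # placeholder
)
print("Created", vpc_id)
```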

The Case in Point for Copying EBS Snapshots Automatically Across Instance Tags

If you are looking for heightened DR policies and want to secure your snapshots, you can use Custom Jobs to write a custom script that copies EBS snapshots across regions for the specified instance tags, scheduling volume snapshots for those tagged instances and securing them. A sketch follows.
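A minimal boto3 sketch of the idea; the tag key, regions, and filter values are placeholder assumptions:

```python
# Minimal sketch (tag key/value and regions are placeholder assumptions):
# snapshot the EBS volumes of instances carrying a given tag, then copy
# each snapshot into a DR region.
import boto3

SOURCE_REGION, DR_REGION = "us-west-2", "eu-central-1"
ec2_src = boto3.client("ec2", region_name=SOURCE_REGION)
ec2_dr = boto3.client("ec2", region_name=DR_REGION)
waiter = ec2_src.get_waiter("snapshot_completed")

reservations = ec2_src.describe_instances(
    Filters=[{"Name": "tag:Backup", "Values": ["true"]}]  # placeholder tag
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        for mapping in instance.get("BlockDeviceMappings", []):
            if "Ebs" not in mapping:
                continue  # skip non-EBS devices
            volume_id = mapping["Ebs"]["VolumeId"]
            snapshot = ec2_src.create_snapshot(
                VolumeId=volume_id,
                Description=f"DR backup of {volume_id}",
            )
            # A snapshot must complete before it can be copied cross-region.
            waiter.wait(SnapshotIds=[snapshot["SnapshotId"]])
            ec2_dr.copy_snapshot(
                SourceRegion=SOURCE_REGION,
                SourceSnapshotId=snapshot["SnapshotId"],
                Description=f"DR copy of {snapshot['SnapshotId']}",
            )
```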

The Case in Point for Automatically Deleting Snapshots

If you are looking to derive savings by optimizing your backups, you can write a custom script that schedules the deletion of snapshots older than a defined number of days. Automating this through Custom Jobs lowers wastage and saves on unnecessary backup retention. A sketch follows.
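A minimal boto3 sketch; the retention window and region are placeholders:

```python
# Minimal sketch (retention window and region are placeholders): delete
# self-owned EBS snapshots older than a defined number of days.
from datetime import datetime, timedelta, timezone

import boto3
from botocore.exceptions import ClientError

RETENTION_DAYS = 30  # placeholder retention window
cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)

ec2 = boto3.client("ec2", region_name="us-west-2")

for snapshot in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
    if snapshot["StartTime"] < cutoff:
        try:
            ec2.delete_snapshot(SnapshotId=snapshot["SnapshotId"])
            print("Deleted", snapshot["SnapshotId"])
        except ClientError as err:
            # Snapshots backing registered AMIs cannot be deleted; skip them.
            print("Skipped", snapshot["SnapshotId"], err.response["Error"]["Code"])
```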

Try Botmetric Custom Jobs Now

To Conclude

With each passing day, we are moving more towards NoOps, which essentially means that machines automate known problems while humans focus on new ones. Many Botmetric customers have embraced NoOps (knowingly or unknowingly) by automating each and every possible routine task in their environment, so that DevOps time is spent on solving new issues, increasing operational efficiency by 5X.

What are you waiting for? Take a 14-day trial and check for yourself how Botmetric helps you automate cloud ops tasks. If you’re already a customer and have any questions, please pop them in the comment section below; we will get back to you ASAP. If you are looking to know about all things cloud, follow us on Twitter.

Champion DevOps to NoOps in 2017 with Algorithmic IT Operations (AIOps)

How awesome would it be if machines resolved known, repetitive, and identifiable mundane problems while humans solved new, complex ones? If, as a DevOps engineer managing cloud operations and handling tons of alerts every day, you have wished this were true: that day isn’t far. Thanks to Algorithmic IT Operations (AIOps), you can reduce stress and workload fatigue, eliminate repetitive alerts and events, improve business agility through intelligent management layers, and above all respond to production incidents ten times faster.

Here are five AIOps practices that can help you navigate your DevOps tasks smoothly through 2017:

  • Adopt a Culture of NoOps

    For engineering teams to nurture the belief that “machines should solve known problems and engineers must focus on solving new problems,” they must first adopt the NoOps philosophy, which essentially means saying no to manual operations and yes to AIOps.

  • Automate Known Problems

    Engineers who have managed production infrastructure, business services, and applications, and who have architected systems, often observe that some problems are caused by known events and some problems have identifiable patterns. In such scenarios, these engineers already have an idea of what to do when certain events or symptoms occur in their application or production infrastructure. Automating actions (response mechanisms) for such known events, with the business logic embedded using algorithms, is the best way forward.

  • Build Diagnostics for Operational Issues

    When events or alerts are triggered by problems, most of today’s tools just provide text describing what happened instead of the context of what is happening or why. If there are diagnostic, algorithmic scripts or programs to tell you why, rather than just when and where, it becomes easier to get the context and find the root cause faster.

  • Use Code as a Weapon

    Yes! You can create everything from automated actions to diagnostics using code, saving hours of time after every deployment. In essence, using CODE as a mechanism for resolving problems should be the way forward. The key is to start applying algorithms to solve IT operational problems. As a DevOps engineer, if you are building continuous integration or continuous delivery today, you should certainly deploy a trigger as part of your CI/CD pipeline that monitors the deployment’s health metrics and invokes a rollback if it detects issues (a sketch of such a trigger follows this list).

  • Adopt Intelligent DevOps Tooling

    With the availability of technologies like Docker, microservices, cloud, and API-driven approaches to deploying applications at scale, the days of static tooling for deployments, provisioning, packaging, monitoring, APM, and log management are almost over. It’s about time: embrace AIOps.
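To make the “use code as a weapon” point concrete, here is a minimal Python sketch of a post-deploy trigger; the health endpoint and rollback command are hypothetical placeholders, and this is just one simple way to wire it, not a prescribed implementation:

```python
# Minimal sketch (the health endpoint and rollback command are hypothetical
# placeholders): poll a health endpoint after a deployment and invoke a
# rollback hook if the new version shows sustained failures.
import subprocess
import time
import urllib.request

HEALTH_URL = "https://example.com/healthz"  # placeholder endpoint
ROLLBACK_CMD = ["./rollback.sh"]            # placeholder rollback hook
CHECKS, INTERVAL_SECONDS, FAILURE_LIMIT = 10, 30, 3

def healthy() -> bool:
    """Return True when the health endpoint answers 200 within 5 seconds."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as response:
            return response.status == 200
    except OSError:
        return False

failures = 0
for _ in range(CHECKS):
    if not healthy():
        failures += 1
    if failures >= FAILURE_LIMIT:  # tolerate transient blips only
        subprocess.run(ROLLBACK_CMD, check=True)
        break
    time.sleep(INTERVAL_SECONDS)
```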

The Wrap-Up

A progression from DevOps to NoOps is a must if you are building your application or solution with the credo that “machines should solve known problems and engineers must focus on solving new problems.” And for that progression to actualize, embracing AIOps is a must.

Botmetric is excited about working on an intelligent event-driven platform for managing incidents and operations for the cloud world. Botmetric, as a platform, handles most of the operational problems a DevOps engineer faces using application discovery, alerts data, cloud configuration, historic patterns and known events. The ultimate goal at Botmetric is to help customers move from DevOps to NoOps philosophy by bringing Algorithmic IT Operations for incident management in the cloud.

What is your take on AIOps and NoOps? Do share your thoughts below in the comment section, or on Twitter. Do read the Botmetric blog post on Alert Analytics to see how Botmetric is driving cloud management with NoOps capabilities.

Editor’s Note: This exclusive blog post is an adaptation of the original article, DevOps to NoOps: Embrace Algorithmic IT Operations in 2017, penned by Botmetric CEO Vijay Rayapati.