April Roundup @ Botmetric: Aiding Teamwork to Solidify 3 Pillars of Cloud Management

Spring is still on at Botmetric, and we continue to evolve with the seasons by shipping new features. This month, the focus was on bringing more collaboration and teamwork to the various tasks of cloud management. The three pillars of cloud management (visibility, control, and optimization) can be solidified only with seamless collaboration. To that end, Botmetric released two cool collaborative features in April: Slack Integration and Share Reports.

1. Slack Integration

What is it about: Integrating the Slack collaboration tool with Botmetric so that a cloud engineer never misses an alert or notification while on a Slack channel and can quickly communicate it to the team.

How will it help: Cloud engineers can quickly get a sneak peek into specific Botmetric alerts, as well as details of various cloud events, on their desired Slack channel. Be it an alert generated by Botmetric's Cost & Governance, Security & Compliance, or Ops & Automation, engineers can see these alerts without logging into Botmetric and quickly communicate the problem to team members.

Where can you find this feature on Botmetric: Under the Admin section inside 3rd Party Integrations.

To know more in detail, read the blog 'Botmetric Brings Slack Fun to Cloud Engineers.'

2. Share/Email Data-Rich AWS Cloud Reports Instantly

What is it about: Sharing/emailing Botmetric reports directly from Botmetric. No downloading required.

How will it help: For successful cloud management, all the team members need complete visibility with pertinent data in the form of AWS cloud reports. The new ‘Share Reports’ feature provides complete visibility across accounts and helps multiple AWS users in the team better collaborate while managing the cloud.

Where can you find this feature on Botmetric: Across all the Botmetric products in the form of a share icon.

To know more in detail, read the blog ‘Share Data-Rich AWS Cloud Reports Instantly with Your Team Directly From Botmetric.’

Knowledge Sharing @ Botmetric

Continuing our new tradition of providing quick bites and snippets on better AWS cloud management, here are a few blogs that we covered in the month of April:

Gauge AWS S3 Spend, Minimize AWS S3 Bill Shock

AWS S3 offers a durability of 99.999999999%, which stands out among object storage options on AWS, and features a simple web interface to store and retrieve any amount of data. When it comes to AWS S3 spend, there is more to it than just the storage cost. If you're an operations manager or a cloud engineer, you probably know that data read/write and data moved in/out also count toward the AWS S3 bill. Hence, a detailed analysis of all these can help you keep AWS S3 bill shock to a minimum. To know how, visit this page.

7 Tips on How to Work the Magic With DevOps for AWS Cloud Management

Are you a DevOps engineer looking for complete AWS cloud management? Or are you an AWS user looking to use DevOps practices to optimize your AWS usage? Either way, AWS and DevOps are the modern way of getting things done. You should leverage new-age DevOps tools for monitoring, application performance management, log management, security, data protection, and cloud management instead of trying to build ad hoc automation or dealing with primitive tools offered by AWS.

Get the top seven tips on how to work the magic with DevOps for AWS cloud management.

The Ultimate Cheat Sheet On Deployment Automation Using AWS S3, CodeDeploy & Jenkins

If you're a DevOps engineer or an enterprise looking for a complete guide on how to automate app deployment using Continuous Integration (CI)/Continuous Delivery (CD) strategies, and tools like AWS S3, CodeDeploy, Jenkins & CodeCommit, then bookmark this blog penned by Minjar's cloud expert.

Botmetric Cloud Explorer: A Handy Topological Relationship View of AWS Resources

Do you want to get a complete understanding of your AWS infrastructure? And map how your resources are connected and where they stand today, for building stronger governance, auditing, and tracking of resources? Above all, get one handy, cumulative relationship view of AWS resources without using the AWS Config service? Read this blog to learn how to get a complete topological relationship view of your AWS resources.

The Cloud Computing Think-Tank Pieces @ Botmetric

5 Reasons Why You Should Question Your Old AWS Cloud Security Practices

While you scale your business on the cloud, AWS keeps scaling its services too. So, cloud engineers have to constantly adapt to architectural changes as and when AWS updates are announced. As these architectural changes are made, AWS cloud security best practices and audits need to be revisited from time to time as well.

Tightly Integrated AWS Cloud Security Platform Just a Click Away

As a CISO, you must question your old practices and ask whether they are still relevant in the present day. Here are the excerpts from a think-tank session highlighting the five reasons why you should question your old practices.

The Rise of Anything as a Service (XaaS): The New Hulk of Cloud Computing

The 'Cloud-driven aaS' era is clearly upon us. Besides the typical SaaS, IaaS, and PaaS offerings discussed, there are other 'As-a-Service (aaS)' offerings too. For instance, Database-as-a-Service, Storage-as-a-Service, Windows-as-a-Service, and even Malware-as-a-Service. It is the era of Anything-as-a-Service (XaaS). Read the excerpts from an article by Amarkant Singh, Head of Product, Botmetric, featured on Stratoscale, which shares views on XaaS, IaaS, PaaS, and SaaS.

April Wrap-Up: Helping Bring Success to Cloud Management

Rain or shine, Botmetric has always striven to bring success to cloud management, and will continue to do so with DevOps, NoOps, and AIOps solutions.

If you have missed rating us, you can do it here now. If you haven't tried Botmetric, we invite you to sign up for a 14-day trial. Until next month, stay tuned with us on social media.

The Ultimate Cheat Sheet On Deployment Automation Using AWS S3, CodeDeploy & Jenkins

A 2016 State of DevOps Report indicates that high-performing organizations deploy 200 times more frequently, with 2,555 times faster lead times, recover 24 times faster, and have three times lower change failure rates. Irrespective of whether your app is greenfield, brownfield, or legacy, high performance is possible due to lean management, Continuous Integration (CI), and Continuous Delivery (CD) practices that create the conditions for delivering value faster, sustainably.

And with AWS Auto Scaling, you can maintain application availability and scale your Amazon EC2 capacity up or down automatically according to conditions you define. Moreover, Auto Scaling allows you to run your desired number of healthy Amazon EC2 instances across multiple Availability Zones (AZs).

Additionally, Auto Scaling can also automatically increase the number of Amazon EC2 instances during demand spikes to maintain performance and decrease capacity during less busy periods to optimize costs.


The Scenario

We have an application, www.xyz.com. The web servers are set up on Amazon Web Services (AWS). As part of the architecture, the servers use the AWS Auto Scaling service, which scales them based on the metrics and policies we specify. So every time a new feature is developed, we have to manually run the test cases before the code gets integrated and deployed. Later, we need to pull the latest code to all the environment servers. There are several challenges in doing this manually.

The Challenges

The challenges of manually running the test cases before the code gets integrated and deployed are:

  1. Pulling and pushing code for deployment from a centralized repository
  2. Working manually to run test cases and pull the latest code on all the servers
  3. Deploying code on new instances that are launched by AWS Auto Scaling
  4. Pulling the latest code on one server, taking an image of that server, and reconfiguring it with AWS Auto Scaling, since the servers are auto scaled
  5. Deploying builds automatically on instances in a timely manner
  6. Reverting back to a previous build

The above challenges require a lot of time and human resources. So we have to find a technique that saves time and makes our life easier by automating the whole process from CI to CD.

Here’s a complete guide on how to automate app deployment using AWS S3, CodeDeploy, Jenkins & Code Commit.

To that end, we're going to use AWS CodeCommit, Jenkins, AWS S3, and AWS CodeDeploy.

Now, let's walk through the flow, how it's going to work, and what the advantages are before we implement it all. When new code is pushed to a particular Git repo/AWS CodeCommit branch:

  1. Jenkins will run the test cases (Jenkins listens to a particular branch through Git webhooks).
  2. If the test cases fail, it will notify us and stop the further post-build actions.
  3. If the test cases pass, it will go to the post-build action and trigger AWS CodeDeploy.
  4. Jenkins will push the latest code, in zip file format, to AWS S3 on the account we specify.
  5. AWS CodeDeploy will pull the zip file onto all the Auto Scaling servers that have been specified.
  6. For the Auto Scaling servers, we can choose an AMI that has the AWS CodeDeploy agent installed by default. This agent helps the AMIs launch faster and pull the latest revision automatically.
  7. Once the latest code is copied to the application folder, it will once again run the test cases.
  8. If the test cases fail, it will roll back the deployment to the previous successful revision.
  9. If they pass, it will run post-deployment build commands on the server and ensure that the latest deployment does not fail.
  10. If we want to go back to a previous revision, we can also roll back easily.

This way of automation using CI and CD strategies makes the deployment of application smooth, error tolerant, and faster.

Smart Deployment Automation: Using AWS S3, CodeDeploy, Jenkins & CodeCommit

The Workflow:

Here are the workflow steps of the above architecture:

  1. The application code, along with the Appspec.yml file, is pushed to AWS CodeCommit. The Appspec.yml file includes the necessary script paths and commands, which help AWS CodeDeploy run the application successfully.
  2. As the application and the Appspec.yml file get committed to AWS CodeCommit, Jenkins automatically gets triggered by the Poll SCM function.
  3. Jenkins then pulls the code from AWS CodeCommit into its workspace (the path in Jenkins where all the artifacts are placed), archives it, and pushes it to the AWS S3 bucket. This can be considered Job 1.

Here’s the Build Pipeline

Jenkins Pipeline (previously Workflow) refers to arranging the job flow in a specific manner. Building a pipeline means breaking the big job into small individual jobs; if the first job fails, it triggers an email to the admin and stops the build process at that step, without moving on to the second job.

To achieve this pipeline, you need to install the Build Pipeline plugin in Jenkins.

According to the above scenario, the work will be broken into three individual jobs:

  • Job 1: When a commit lands in the repository, Job 1 runs. It pulls the latest code from the CodeCommit repository, archives the artifact, and emails the status of Job 1 (whether the build succeeded or failed, along with the console output). If Job 1 builds successfully, it triggers Job 2.
  • Job 2: This job runs only when Job 1 is stable and has run successfully. In Job 2, the artifacts from Job 1 are copied to workspace 2 and pushed to the AWS S3 bucket. Once the artifacts are sent to the S3 bucket, an email is sent to the admin, and Job 3 is triggered.
  • Job 3: This job is responsible for invoking AWS CodeDeploy, which pulls the code from S3 and pushes it to either running EC2 instances or AWS Auto Scaling instances. Once it is done, a status email is sent.

The image below shows the structure of the pipeline.

Smart Deployment Automation: Using AWS S3, CodeDeploy, Jenkins & CodeCommit

Conditions:

  1. If Job 1 executes successfully, it triggers Job 2, which is responsible for pushing the successful build version of the code to the S3 bucket and then triggering Job 3. If Job 2 fails, an email is again triggered with a Job Failure message.
  2. When Job 3 gets triggered, the archive file (application code along with Appspec.yml) is pushed to the AWS CodeDeploy deployment service, where AWS CodeDeploy runs the CodeDeploy agent on the instance and executes the Appspec.yml file, which brings the application up and running.
  3. If the job fails at any point, the application is deployed with the previous build.

Below are the five steps necessary for deployment automation using AWS S3, CodeDeploy, Jenkins & CodeCommit.

Step 1: Set Up AWS CodeCommit in Development Environment

Create an AWS CodeCommit repository:

1. Open the AWS CodeCommit console at https://console.aws.amazon.com/codecommit.

2. On the welcome page, choose Get Started Now. (If a Dashboard page appears instead of the welcome page, choose Create new repository.)


3. On the Create new repository page, in the Repository name box, type xyz.com

4. In the Description box, type Application repository of http://www.xyz.com


5. Choose Create repository to create an empty AWS CodeCommit repository named xyz.com

Create a Local Repo

In this step, we will set up a local repo on our local machine to connect to our repository. To do this, we will select a directory on our local machine that will represent the local repo. We will use Git to clone and initialize a copy of our empty AWS CodeCommit repository inside of that directory. Then we will specify the username and email address that will be used to annotate your commits. Here’s how you can create a Local Repo:

1. Generate SSH keys on your local machine using ssh-keygen, without any passphrase: # ssh-keygen

Smart Deployment Automation: Using AWS S3, Codedeploy, Jenkins and Code Commit

2. Cat the id_rsa.pub file and paste its contents into the IAM User -> Security Credentials -> Upload SSH Keys box. Note down the SSH Key ID.

$ cat ~/.ssh/id_rsa.pub

Copy this value. It will look similar to the following:

Smart Deployment Automation: Using AWS S3, Codedeploy, Jenkins and Code Commit

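If you prefer the command line over the console, the same SSH key upload can also be done with the AWS CLI. The snippet below is a minimal sketch: the user name is a placeholder, and it assumes the IAM user already exists and your CLI credentials have the iam:UploadSSHPublicKey permission.

# upload the public key to an existing IAM user and print the returned SSH Key ID
aws iam upload-ssh-public-key \
  --user-name my-codecommit-user \
  --ssh-public-key-body file://~/.ssh/id_rsa.pub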

  1. Click on Create Access Keys and download the credentials containing the Access Key and Secret Key.

2. Set the environment variables at the end of the bashrc file.

# vi /etc/bashrc

export AWS_ACCESS_KEY_ID=AKIAINTxxxxxxxxxxxSAQ
export AWS_SECRET_ACCESS_KEY=9oqM2L2YbxxxxxxxxxxxxzSDFVA

3. Set the config file inside the .ssh folder. The User value is the SSH Key ID noted earlier, and IdentityFile points to the private key.

# vi ~/.ssh/config

Host git-codecommit.us-east-1.amazonaws.com
  User APKAxxxxxxxxxxT5RDFGV
  IdentityFile ~/.ssh/id_rsa

# chmod 400 ~/.ssh/config
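
Before moving on, it may be worth verifying that SSH authentication to CodeCommit works. A quick check (the exact success message can vary by region and account):

# should respond with a success message and then close the connection
ssh git-codecommit.us-east-1.amazonaws.com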

4. Configure the global email and username:

# git config --global user.name "username"
# git config --global user.email "emailID"

5. Copy the SSH URL to use when connecting to the repository and clone it:

# git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/xyz.com

6. Now put the application code inside the cloned directory, write the Appspec.yml file, and you are ready to push.

Smart Deployment Automation: Using AWS S3, Codedeploy, Jenkins and Code Commit

7. install_dependencies.sh includes:

#!/bin/bash
yum groupinstall -y "PHP Support"
yum install -y php-mysql
yum install -y httpd
yum install -y php-fpm

start_server.sh includes:

#!/bin/bash
service httpd start
service php-fpm start

stop_server.sh includes:

#!/bin/bash
isExistApp=`pgrep httpd`
if [[ -n $isExistApp ]]; then
  service httpd stop
fi
isExistApp=`pgrep php-fpm`
if [[ -n $isExistApp ]]; then
  service php-fpm stop
fi

appspec.yml includes:

version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/xyz.com
hooks:
  BeforeInstall:
    - location: .scripts/install_dependencies.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: .scripts/start_server.sh
      timeout: 300
      runas: root
  ApplicationStop:
    - location: .scripts/stop_server.sh
      timeout: 300
      runas: root

Now push the code to CodeCommit:

# git add .
# git commit -m "1st push"
# git push

8. Now we can see that the code has been pushed to CodeCommit.

Step 2: Setting Up Jenkins Server in EC2 Instance

1. Launch an EC2 instance (CentOS 7/RHEL 7) and perform the following operations:

# yum update -y
# yum install java-1.8.0-openjdk

2. Verify Java and add the Jenkins repository:

# java -version
# wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo
# rpm --import http://pkg.jenkins-ci.org/redhat/jenkins-ci.org.key

3. Install Jenkins:

# yum install jenkins

4. Add Jenkins to system boot:

# chkconfig jenkins on

5. Start Jenkins:

# service jenkins start

6. By default, Jenkins starts on port 8080. This can be verified via:

# netstat -tnlp | grep 8080

7. Go to the browser and navigate to http://<server-ip>:8080. You will see the Jenkins dashboard.

Smart Deployment Automation: Using AWS S3, Codedeploy, Jenkins and Code Commit

8. Configure the Jenkins username and password, and install the AWS- and Git-related plugins.

Here's how to set up a Jenkins pipeline job:

Under Source Code Management, click on Git.

Pass the Git SSH URL, click on Add under Credentials, and for the Kind option choose SSH Username with Private Key.

Note that the username is the same as the one mentioned in the config file of the development machine where the repo was initiated (the SSH Key ID), and we have to copy the private key from the development machine and paste it here.

Smart Deployment Automation: Using AWS S3, Codedeploy, Jenkins and Code Commit

Under Build Triggers, click on Poll SCM and mention the schedule at which you want the build to start.

Smart Deployment Automation: Using AWS S3, Codedeploy, Jenkins and Code Commit

For the post-build actions, we have to archive the artifacts and provide the name of Job 2; if Job 1 builds successfully, it should trigger the email.

Smart Deployment Automation: Using AWS S3, Codedeploy, Jenkins and Code Commit

For the time being, we can start building the job and verify that when code is committed, Jenkins starts building automatically and is able to pull the code into its workspace folder. But before that, we have to create an S3 bucket and pass credentials (access key and secret key) to Jenkins so that when Jenkins pulls code from AWS CodeCommit, it can push the build to the S3 bucket after archiving.

Step 3: Create S3 Bucket

Create an S3 bucket.
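
The bucket can be created from the S3 console or, as a quick alternative, with the AWS CLI. A minimal sketch; the bucket name is a placeholder and must be globally unique:

# create the artifact bucket that Jenkins will push builds to
aws s3 mb s3://my-codedeploy-artifacts --region us-east-1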

After creating S3 bucket, provide the details into Jenkins with AWS credentials.


Now when we run Job 1 in Jenkins, it pulls the code from AWS CodeCommit and, after archiving, keeps it in the workspace folder of Job 1.

AWS CodeCommit Console Output


From the above console output, we can see that it pulls the code from AWS CodeCommit and, after archiving, triggers the email. After that, it calls the next job, Job 2.

Console Output

The above image shows that after building Job 2, Job 3 also gets triggered. Before triggering Job 3, we need to set up the AWS CodeDeploy environment.

Step 4: Launch the AWS CodeDeploy Application

Creating IAM Roles

Create an IAM instance profile, attach the AmazonEC2FullAccess policy, and also attach the following inline policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

Create a service role named CodeDeployServiceRole. Select the role type AWS CodeDeploy and attach the AWSCodeDeployRole policy, as shown in the screenshots below.
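
Alternatively, the same service role can be created from the CLI. The sketch below assumes the trust policy is saved locally as codedeploy-trust.json; the role and file names are placeholders:

# trust policy allowing CodeDeploy to assume the role
cat > codedeploy-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "codedeploy.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name CodeDeployServiceRole \
  --assume-role-policy-document file://codedeploy-trust.json

aws iam attach-role-policy \
  --role-name CodeDeployServiceRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole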

Create an auto scaling group for a scalable environment.

Here are the steps:

1. Choose an AMI and select an instance type for it. Attach the IAM instance profile that we created in the earlier step.


2. Now go to Advanced Settings and type the following commands in the "User Data" field to install the AWS CodeDeploy agent on your machine (if it's not already installed on your AMI):

#!/bin/bash
yum -y update
yum install -y ruby
yum install -y aws-cli
sudo su -
aws s3 cp s3://aws-codedeploy-us-east-1/latest/install . --region us-east-1
chmod +x ./install
./install auto

3. Select Security Group in the next step and create the launch configuration for the auto scaling group. Now using the launch configuration created in the above step, create an auto scaling group.
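
For reference, the launch configuration and Auto Scaling group can also be created with the CLI. This is only a sketch: the AMI ID, security group, and Availability Zones are placeholders, and it assumes the user data above was saved as userdata.sh and the instance profile from the IAM step is named CodeDeployInstanceProfile.

aws autoscaling create-launch-configuration \
  --launch-configuration-name xyz-lc \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --iam-instance-profile CodeDeployInstanceProfile \
  --security-groups sg-0123456789abcdef0 \
  --user-data file://userdata.sh

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name xyz-asg \
  --launch-configuration-name xyz-lc \
  --min-size 2 --max-size 4 --desired-capacity 2 \
  --availability-zones us-east-1a us-east-1b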

4. Now, after creating the Auto Scaling group, it's time to create the deployment group.

5. Click on AWS CodeDeploy and click on Create Application.

6. Mention the application name and the deployment group name.

AWS codedeploy and click on create application

7. In Tag type, choose either EC2 instance or AWS Auto Scaling group, and mention the name of the EC2 instance or Auto Scaling group.

Smart Deployment Automation: Using AWS S3, Codedeploy, Jenkins and Code Commit

8. For Service Role ARN, select the service role that we created in the "Creating IAM Roles" section of this post.

9. Go to Deployments and choose Create New Deployment.

10. Select Application and Deployment Group and select the revision type for your source code.

Smart Deployment Automation: Using AWS S3, Codedeploy, Jenkins and Code Commit

11. Note that the IAM role associated with the instance or Auto Scaling group should be the one set up for CodeDeploy, and the ARN must have the CodeDeploy policy associated with it.
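
Once the application and deployment group exist, a deployment can also be kicked off manually from the CLI, which is handy for testing the setup before wiring it into Jenkins. A sketch with placeholder names, assuming the zipped revision xyz.com.zip is already in the artifact bucket:

aws deploy create-deployment \
  --application-name xyz.com \
  --deployment-group-name xyz-deployment-group \
  --s3-location bucket=my-codedeploy-artifacts,key=xyz.com.zip,bundleType=zip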

Step 5: Fill CodeDeploy Info in Jenkins and build it

1. Now go back to Jenkins Job 3, click on "Add post-build action", and select "Deploy the application using AWS CodeDeploy".

2. Fill in the details: AWS CodeDeploy Application Name, AWS CodeDeploy Deployment Group, AWS CodeDeploy Deployment Config, AWS Region, S3 Bucket, Include Files (**), and click on Access/Secret to fill in the keys for authentication.

3. Click on Save and build the project. After a few minutes, the application will be deployed on the Auto Scaling instances.

4. When Job 3 builds successfully, we will get the console output shown below:

Smart Deployment Automation: Using AWS S3, Codedeploy, Jenkins and Code Commit


5. After this build, the changes will take place in the AWS CodeDeploy deployment group.

Smart Deployment Automation: Using AWS S3, Codedeploy, Jenkins and Code Commit

6. Once you hit the DNS of the instance, you will get your application up and running.

To Wrap-Up

It's proven that teams and organizations that adopt continuous integration and continuous delivery practices significantly improve their productivity. And AWS CodeDeploy with Jenkins is an awesome combo when it comes to automating app deployment and achieving CI and CD.

Are you an enterprise looking to automate app deployment using CI/CD strategy? As a Premier AWS Consulting Partner, we at Minjar have your back! Do share your comments in the section below or give us a shout out on Twitter, Facebook or LinkedIn.

Share Data-Rich AWS Cloud Reports Instantly with Your Team Directly From Botmetric

Henry Ford once said, "Coming together is a beginning. Keeping together is progress. Working together is success." This adage holds true for finding success while managing AWS cloud, because achieving complete AWS cloud management is not one person's responsibility; it is a shared responsibility and, more so, teamwork. And for the teamwork to reap benefits, all the team members need complete visibility, with pertinent data in the form of AWS cloud reports, in hand. To cater to this need, Botmetric has introduced the 'Share Reports' feature, which allows a Botmetric user to share important AWS cost, security, or ops automation reports with multiple AWS users for better collaboration.

If you’re a Botmetric user, you can now:

  • Share the data-rich reports directly from any Botmetric products, thus saving time and effort
  • Educate both Botmetric and non-Botmetric user(s) within your team about various aspects of your AWS infrastructure
  • Highlight items that may need action by other teammates

Why Botmetric Built Share Reports

Currently, Botmetric offers more than 40 reports and 30 graphs and charts. These reports, charts, and graphs help with better cloud governance. More so, these data-rich reports offer a wealth of insights and help keep you updated on your AWS infrastructure.

Earlier, Botmetric empowered its users (those added to your Botmetric account) to download all these reports. However, at times, it's likely you'll need to send these reports to other colleagues too, who may not be part of your Botmetric account.

Thus, continuing our mission to provide complete visibility and control over your AWS infrastructure, Botmetric now allows you to email/share those reports directly with non-Botmetric user(s) too. By doing so, Botmetric empowers every cloud custodian in your organization with pertinent data, even if they are not Botmetric users.

More so, the new share functionality enables you to share specific reports across Cost & Governance, Security & Compliance, and Ops & Automation with custodians in your organization who are not Botmetric users but wish to discover knowledge on certain AWS cloud items.

The new share reports can be used across the Botmetric platform in two specific ways:

1. Share Historical Reports

Share all the AWS cloud reports present under the reports library on the Botmetric console with other custodians in the team.

Share all the AWS Cloud reports for better cloud management

2. Export and Share Charts and Graphs as CSV Reports

If you find any crucial information in any of the reports under Botmetric Cost & Governance, Security & Compliance, or Ops & Automation, you can share it using the 'Share' icon with any other custodian who isn't a Botmetric user but is responsible for the cloud.

Share AWS cloud reports on Cost, Security, Ops with the team using Botmetric

For example, you might want to share the list of ports open to the public with the person in your team who is responsible for perimeter security. You can do this from the Audit Reports section of Security & Compliance.

The Bottom Line:

AWS has more than 70 resources, and each resource has multiple family types. With so much variance in AWS' services, you surely need either holistic information or particular information at some point for analysis. With Botmetric reports and the new shareability feature, you and your team can together manage and optimize your AWS cloud with minimal effort.

If you are a current Botmetric user, Team Botmetric invites you to try this feature and share your feedback. If you're yet to try Botmetric and want to explore this feature, then take up a 14-day trial. If you have any questions on AWS cloud management, just drop a line below in the comment section or give us a shout out at @BotmetricHQ.

AWS Cloud Security Think Tank: 5 Reasons Why You Should Question Your Old Practices

Agile deployments and scalability seem to be the most dominant trends in the public cloud today, especially on AWS. While you scale your business on the cloud, AWS keeps scaling its services and upgrading its technology from time to time to keep up with the technology disruptions happening across the globe. To that end, your cloud engineers have to constantly adapt to architectural changes as and when updates are announced. While all these architectural changes are made, AWS cloud security best practices and audits need to be revisited from time to time as well.

As a CISO, have you ever questioned your old practices and asked whether they are still relevant in the present day?

Here are a few excerpts from our AWS Cloud Security Think Tank: a collation of deliberations we had recently at Botmetric HQ with our security experts on why anyone on the cloud should question their old AWS cloud security best practices.

1. Relooking at Endpoint Security

"Securing the server end is just one part of enterprise cloud security. If there is a leakage at the endpoints, the net result is an adverse impact on your cloud infrastructure. Newer approaches to assert the legitimacy of the endpoint are more important than ever." — Upaang Saxena, Botmetric LLC.

As most cloud apps provide APIs, the client authentication mechanisms have to be redesigned. Moreover, as the endpoints are now mobile devices, IoT devices, and laptops that might be anywhere in the world, endpoint security is increasingly moving away from a perimeter-based security model and giving way to an identity-based endpoint security model. Hence, newer approaches to assert the legitimacy of the endpoint are more important than ever.

2. Revisiting Policies Usage

"Use managed policies, because with managed policies it is easier to manage access across users." — Jaiprakash Dave, Minjar Cloud Solutions

Earlier, only identity-based (IAM) inline policies were available; managed policies came later. So not all old AWS cloud best practices that existed during the inline-policies era hold good in the present day. It is now recommended to use managed policies. With managed policies you can manage permissions from a central place rather than having them attached directly to users. They also enable you to properly categorize policies and reuse them. Updating permissions also becomes easier when a single managed policy is attached to multiple users. Plus, you can add up to 10 managed policies to a user, role, or group; the size of each managed policy, however, cannot exceed 5,120 characters.
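
As a small illustration of the central-management point, attaching one customer managed policy to a group with the AWS CLI updates permissions for every user in that group at once. The group name, policy name, and account ID below are placeholders:

# attach an existing customer managed policy to a group
aws iam attach-group-policy \
  --group-name cloud-engineers \
  --policy-arn arn:aws:iam::123456789012:policy/S3ReadOnlyAudit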

3. Make Multiple Account Switch Roles

“We encourage our clients to make multiple account switch roles for access controls as per their security needs.” Anoop Khandelwal, Botmetric LLC.  

Earlier, it was not recommended to switch roles for access controls while using a VPC. However, it is now recommended to make multiple account switch roles for access controls as per your security needs. Plus, earlier VPCs came with de facto defaults, which were inherently less than ideal from a security perspective. Now, Amazon VPC provides features that you can use to increase and monitor the security of your Virtual Private Cloud (VPC).

4. Redesigning Architecture for New Attack Vectors

DDoS attacks through compromised IoT devices, such as the Mirai bot attacks, caught security professionals by surprise. The scale of the attack was not predicted by any security analyst. New attack vectors like these will be designed by hackers to penetrate popular and highly sensitive websites, and it is difficult to anticipate all potential attack vectors. So cloud professionals have to revisit their architecture and be ready with better contingency measures in case of such unanticipated attack vectors.

"You (the cloud security engineer) need to relook at your architecture now and then and come up with better contingency measures for new-age attack vectors like massively distributed denial of service (DDoS)." — Abhinay Dronavally, Botmetric LLC.

5. New API Security Mechanisms

Today, most enterprise applications consume data from external web services and also expose their own data. The authentication mechanisms for these APIs cannot be the same as human user authentication, like in the earlier days; APIs must fit machine-to-machine interactions. Focus more on integrating API security mechanisms with a specialized API security solution.

“As data breaches can happen through API, integration of API security mechanisms are a must.” — Shivanarayana Rayapati, Minjar Cloud Solutions.

Final Thoughts

As the sophistication of attacks keeps increasing, security solutions too will have to improve their detection methods. Today's security solutions leverage Artificial Intelligence (AI) algorithms like Random Forest classification, deep learning techniques, etc. to study, organize, and identify the underlying access patterns of various users. A well-thought-through approach is pivotal in securing your AWS cloud. For that matter, any cloud.

Tightly Integrated Cloud Security Platform for AWS Just a Click Away — Get Started!

Botmetric Brings Slack Fun to Cloud Engineers

Slack (the real-time messaging app) is today one of the most robust communication platforms among teams. Teams of all shapes and sizes, right from NASA to NGOs, are using Slack as a go-to tool for both communication and collaboration. Thanks to all of the fun and useful integrations the Slack folks have built on top of it, there's a whole lot of really cool stuff you can do with Slack. Above all, it makes engineers' work fun and more collaborative.

Closely following our tenet to make a cloud engineer's life easier by the day, we're excited to bring the Botmetric and Slack integration. With this integration, cloud engineers can quickly get a sneak peek into specific Botmetric alerts, as well as details of various cloud events, on their desired Slack channel.

Be it an alert generated by Botmetric’s Cost & Governance, Security & Compliance, or Ops & Automation, a cloud engineer will never miss an alert or notification when on Slack.

We Get Notifications on Email Anyway, Is Botmetric-Slack Integration Really Necessary?

Understood, most alerts and notifications in IT infrastructure management are delivered to you in the form of emails. But with the alert deluge, that becomes annoying. Email is of course one of the most crucial forms of communication in the corporate world; for communication with external stakeholders, it has proven success. However, for internal communication it is more time-consuming. Plus, enterprises have various needs like file sharing, integrations, private groups, etc. even for internal usage.

As mentioned earlier, Slack makes engineers' work fun and more collaborative. As an engineer, have you ever wondered how many emails you wade through each day? Chat is one channel that has proved to be more collaborative and productive, which is why it has become the preferred tool among team members. And Slack is one such collaborative tool, with a plethora of third-party integration capabilities that make communication more collaborative, textual, transparent, and efficient.

So, we recommend Botmetric and Slack integration for seamless alerts and notifications management for effective communication of AWS cloud issues over chat.

How to Make the Best Use of Botmetric-Slack Integration?

Botmetric generates several alerts day in day out on various cloud events. Since this happens continuously in the form of alerts or notifications throughout the day, monitoring these alerts on the most favoured channel will ease a cloud engineer’s life.

  • Receive desired Botmetric alerts or notifications in real-time onto your desired Slack channel
  • Never miss a Botmetric notification anymore, even while not logged into Botmetric
  • Be more nimble at work and agile on cloud. Increase productivity

Botmetric Brings Slack Fun to Cloud Engineers

Who Can Use Botmetric-Slack Integration?

Anyone who has subscribed to Botmetric and is on Slack and would like to receive specific Botmetric alerts or notifications in real-time on the Slack channel of their choice can use it.

What All Can You do With Botmetric-Slack Integration?

You can perform several integrations using this feature. Few of them are listed below:

1. You can create very specific integrations. For example, you may choose to receive Ops & Automation alerts on a channel that has only developers in it. Similarly, you can also create a separate integration for Cost & Governance where only senior management is present.

Botmetric and Slack Integration

 

2.  Integrations can be created per account and/or per notification event type.

 Botmetric-Slack Integration

3. You can pick and choose notification events for which you wish to receive notifications.

4. If you are using an application for ticketing (one that listens to a Slack channel and creates tickets), then this adds another dimension to it. For example, under Ops & Automation, whenever an incident is created, a message is pushed to a Slack channel. This message can be used to create an automated ticket in your system.
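
For context on what such a channel message looks like behind the scenes, Slack's standard incoming-webhook API accepts a simple JSON payload over HTTPS. The snippet below is purely illustrative of that mechanism; the webhook URL and message text are placeholders, and this is not a description of how Botmetric itself posts alerts:

curl -X POST -H 'Content-type: application/json' \
  --data '{"text":"Alert: new incident created in Ops & Automation"}' \
  https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX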

This Botmetric-Slack integration feature can be found under the Admin section inside 3rd Party Integrations.

The Wrap-Up

Communication and collaboration are an essential part of life. Enterprises whose teams have proven success have two things in common: collaboration and communication. In fact, the efficacy of a business depends on communicating the right thing at the right time. Slack is doing just that: helping businesses with apt communication and collaboration. With the Slack and Botmetric integration, your cloud engineers will never miss an alert or notification from Botmetric.

Do try this feature and provide feedback. If you need any assistance in integrating Botmetric with Slack, just drop a line in the comment section below or visit the Botmetric support page. You can also give us a shout out on Facebook, Twitter, or LinkedIn to know more about it.

If you're yet to try Botmetric, then we invite you to take a 14-day trial.

The Rise of Anything as a Service (XaaS): The New Hulk of Cloud Computing

Cloud Computing, as we see it today, has seen tremendous evolutionary as-a-service segments right from the dawn of Software-as-a-Service (SaaS) to Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS). And now Anything-as-a-Service (XaaS).

Analysts forecast that the global XaaS market will grow at a CAGR of 38.22% between 2016 and 2020. Besides the typical SaaS, IaaS, and PaaS offerings discussed, there are other 'As-a-Service (aaS)' offerings too. For instance, Database-as-a-Service, Storage-as-a-Service, Windows-as-a-Service, and even Malware-as-a-Service.

No doubt the ‘Cloud-driven aaS’ era is clearly upon us. And cloud computing remains the top catalyst for all these services’ growth. The converse holds true too.

In the words of Amarkant Singh, Head of Product, Botmetric, “The persuasive wave of cloud computing is affecting every industry and every vertical we can think of. Thanks to all of its fundamental models – IaaS, PaaS, and SaaS plus the latest XaaS, cloud has brought in democratization of infrastructure for businesses. Talking about XaaS. It is the new hulk of the cloud computing and is ushering in more of ready-made, do-it-yourself components and drag-and-drop development.”

XaaS: Born to Win

The XaaS model was born due to elasticity that the cloud offers. More so, the XaaS provides an ever-increasing range of solutions that ultimately gives businesses the extreme flexibility to choose exactly what they want tailored for their business, irrespective of size/vertical.

Recently, Stratoscale asked 32 IT experts to share their insights on the differences between IaaS, PaaS, and SaaS, and compiled an exhaustive op-ed report, 'IaaS/PaaS/SaaS – the Good, the Bad and the Ugly' [1]. Among these experts, Amarkant too has penned a few lines for the report.

Here are excerpts from the article:

More companies across the spectrum have gained trust in cloud infrastructure services, pioneered by AWS. While IaaS provides a high degree of control over the cloud infrastructure, it is very capital-intensive and has geographic limitations. On the other hand, PaaS comes with decreased costs but offers limited scalability.

With its roots strongly tied to virtualization, SOA and utility/grid computing, SaaS is gaining more popularity. More so, it is gaining traction due to its scalability, resilience, and cost-effectiveness.

According to a recent survey by IDC, 45% of the budget organizations allocate for IT cloud computing is spent on SaaS.

As organizations move more of their IT infrastructure and operations to the cloud, they are willing to embrace a serverless/NoOps model. This marks the gradual move towards the XaaS model (Anything as a Service), which cannot be ignored.

XaaS is the new hulk of cloud computing. Born out of the elasticity offered by the cloud, XaaS can provide an ever-increasing range of solutions, allowing businesses to choose exactly the solution they want, tailored for their business, irrespective of size/vertical. Additionally, since these services are delivered through either hybrid clouds or one or more of the IaaS/PaaS/SaaS models, XaaS has tremendous potential to lower costs. It can also offer low-risk infrastructure for building a new product or focusing on further innovation. XaaS adoption has already gained traction, so the day is not far when XaaS will be the new norm. But at the end of the day, it all depends on how cloud-ready a company is for XaaS adoption.

Concluding Thoughts

Each expert has an idiosyncratic perspective on the what, where, when, and why of XaaS. For some, it stands for everything-as-a-service and refers to the increasing number of services delivered over the Internet through the cloud. For others, it is anything-as-a-service. Techopedia calls it a broad category of services related to cloud computing and remote access through which businesses can cut costs and get specific kinds of personal resources. Different perspectives, different views, but one goal: putting cloud in perspective.

Read what other experts are deliberating on XaaS on Stratoscale’s Op-Ed article ‘IaaS/PaaS/SaaS – the Good, the Bad and the Ugly.’[1]

Share your thoughts in the comment section below or give us a shout out on either Facebook, Twitter, or LinkedIn. We would love to hear what’s your take on XaaS.

[1] Stratoscale, 2017, “IaaS/PaaS/SaaS – the Good, the Bad and the Ugly.”

Botmetric Cloud Explorer: A Handy Topological Relationship View of AWS Resources

Picture this: a cloud engineer is trying hard to map all his AWS resources to have a complete understanding of the infrastructure. He also wants to map how the resources are connected and where they stand today so that he can build stronger governance, auditing, and tracking of resources. All he wishes for is one handy, cumulative relationship view of AWS resources in a topological layout. Of course, there is the AWS Config service at his disposal, but it does not provide that topological view.

Plus, getting a complete relationship view of AWS resources can be taxing, because when on AWS, we tend to create, delete, and manage resources sporadically. No more worries: Botmetric Cloud Explorer Relationship View has your back!

Why Botmetric Cloud Explorer Relationship View?

“Sometimes, it’s good to get a different perspective,” says a famous adage. You don’t get a complete picture of what’s happening when you are cleaving through the complex roads. You get to figure out what you are looking for only when you take a different perspective. Perhaps, a bird’s eye view will help rather than deep diving into complex data. Likewise, when you deep dive into your cloud data, there are chances you will be lost. However, if you get a bird’s eye view of your AWS resources, then it’s nothing like it.

Of course, there is the AWS Config service at your disposal, but in the long run, a relationship view of all AWS resources will help you manage and evaluate these resources with greater accuracy and less effort.

Here, at Botmetric, we always strive to give a complete picture of your AWS cloud infrastructure, not just the tip of the iceberg. That’s why we built Cloud Explorer that provides a handy topology and relationship view of all your AWS cloud resources.

Botmetric Cloud Explorer's Relationship View gives a topological representation of your complete AWS infrastructure. In a single glance, you can get a complete view of your resources and how they are connected to each other.

The primary function of Relationship View is to track the state of different AWS resources like AWS VPCs, AWS Subnets, EC2 Instances, EC2 volumes, Security Groups, EIP, Route Table, Internet Gateway, VPN Gateway, Network Interface, Network ACL, Customer Gateway, and more.

Botmetric Cloud Explorer Relationship View of AWS Resources

 

And, if you’re an organization or an enterprise with a huge number of servers under a VPC, Botmetric Cloud Explorer’s Relationship View will give you a view of which server is connected to which Subnet. Plus, it also gives topological relationship view of each Security Group the instance is associated to.

Also, if there are multiple VPCs on your AWS account, the Relationship View will show you at a glance which subnets belong to which VPC. By dragging the VPC to the side of the topological view, you can see complete details of how the resources are connected with each other under a specific VPC.

Relationship View of AWS Resources

There are other highlights of Botmetric Cloud Explorer Relationship View too, like it provides:

  • Ability to find which security groups are not assigned to any resources
  • Visibility on unused security groups and subnets
  • Real-time view on the resources i.e if you make any change in your infrastructure, then that change in data will immediately reflect on the topological view in Botmetric

Apart from giving a relationship view, Botmetric Cloud Explorer Relationship View can be used as a knowledge-sharing tool too. It helps your entire team verify the relationships between AWS resources and check them manually. For instance, which subnet belongs to which VPC, or which security group is associated with which instance. This saves a lot of time!

Above all, to build stronger governance, tracking of resources is pivotal. With Botmetric Cloud Explorer Relationship View, you can easily and quickly identify the resources that are not utilized and thus help govern the resources timely.

How to Access Botmetric’s Cloud Explorer Relationship View?

The Botmetric Cloud Explorer Relationship View can be accessed from the Botmetric Ops & Automation product, an intelligent cloud automation console for smarter cloud operations and management.

One of the prerequisites is to enable AWS Config, with a few steps, for the regions where you want to use this feature. AWS Config provides you with an AWS resource inventory, configuration history, and configuration change notifications, primarily to enable security and governance. Above all, it takes a snapshot of the state of your AWS resources and how they are wired together, then tracks the changes that take place between them. So any modification, addition, or deletion in your AWS infra gets recorded in AWS CloudTrail.
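
If you prefer scripting the prerequisite, AWS Config can also be enabled per region from the CLI. This is a rough sketch; the IAM role, bucket name, and account ID are placeholders, and the role must already grant AWS Config access to the bucket:

# create the recorder, point it at a delivery bucket, and start recording
aws configservice put-configuration-recorder \
  --configuration-recorder name=default,roleARN=arn:aws:iam::123456789012:role/aws-config-role \
  --recording-group allSupported=true,includeGlobalResourceTypes=true

aws configservice put-delivery-channel \
  --delivery-channel name=default,s3BucketName=my-config-bucket

aws configservice start-configuration-recorder --configuration-recorder-name default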

Once up and running, you can have Botmetric Cloud Explorer Relationship View handy.

Conclusion: Topological Relationship View of AWS Resources is Pivotal

As your business scales on the cloud, usage of resources and modifications to them scale too. Instead of diving deep into the complex data at first glance, you should first get a bird's-eye view of resource usage for better cloud governance. That is what Botmetric Cloud Explorer Relationship View does: it provides a beautiful visualization of your AWS infrastructure.

If you want to know more about this feature, do drop a line below, or take a 14-day free trial.

7 Tips on How to Work the Magic With DevOps for AWS Cloud Management

If you look at it, both cloud and DevOps have gained importance because they help address some key transitions in IT. They have played a big role in helping IT address some of the biggest transformative shifts of our times: one, the rise of the service economy; two, the unprecedented, almost continuous pace of disruption; and three, the infusion of digital into every facet of our lives. These are the shifts that are driving business in the 21st century. And DevOps for AWS cloud management is a match made in heaven.

If you're a DevOps engineer looking for complete AWS cloud management, then you're at the right place. Read on to know how AWS and DevOps practices are a go-to combo.

The Backdrop

Cloud has finally come of age in the last few years. Gartner has projected that the worldwide public cloud services market will grow 18 percent in 2017 to a total $246.8 billion, up from $209.2 billion in 2016. Out of this, the highest growth is expected to come from cloud system infrastructure services (infrastructure as a service [IaaS]), which is projected to grow 36.8 percent in 2017 to reach $34.6 billion.

IDC too has its views:

Image Source: IDC 2017 Forecast on Public IT Spending (worldwide public cloud services market report from IDC)

Several companies are hosting enterprise applications in AWS, suggesting that CIOs have become more comfortable hosting critical software in the public cloud. As per Forrester, the first wave of cloud computing was created by Amazon Web Services, which launched with a few simple compute and storage services in 2006. A decade later, AWS is operating at an $11 billion run rate.

“As a mindset, cloud is really about how you do your computing rather than where you do it.”

And with public cloud like AWS, it already provides a set of flexible services designed to enable companies to more rapidly and reliably build and deliver products using AWS and DevOps practices. These services simplify provisioning and managing infrastructure, deploying application code, automating software release processes, and monitoring your application and infrastructure performance.

“In simple words: AWS Cloud Management becomes much simpler through the use of DevOps and vice-versa.”

An essential element of DevOps is that development and operations are bound together, which means that configuration of the infrastructure is part of the code itself. This basically means that, unlike the traditional process of doing development on one machine and deployment on another, the machine becomes part of the application. This is almost impossible without the cloud, because in order to get better reliability and performance, the infrastructure needs to scale up and down as needed.

On its part, DevOps has gained its spotlight in the software development field, and is growing from strength to strength. DevOps has seen a tremendous increase in adoption in the recent years, becoming an essential component of software-centric organizations. But when DevOps and Cloud come together is when real magic is created.

Below are few useful tips to ensure that you get the most from your DevOps for AWS Cloud Management.

1. Templatize your Cloud Architecture

“Build your Cloud as Code.”

Using AWS CloudFormation’s sample templates or creating your own templates, you can describe the AWS resources including the deployment configuration, and any associated dependencies or runtime parameters, required to run your application.

Image Source: AWS CloudFormation Docs (AWS CloudFormation sample template)

This allows developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

This allows source control of your VPC design, application deployment architecture, network security design, and application configuration in JSON format. In case you require multi-cloud support for safely creating and managing cloud infrastructure at scale, you can consider using Terraform. This can help everyone in your team understand your cloud design.

One great thing about CloudFormation is that you don't need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work. Once the AWS resources are deployed, it is possible to modify and update them in a controlled and predictable way, similar to the manner in which you apply version control to your AWS infrastructure.

"The best part is that CloudFormation is available at no additional charge; customers pay only for the AWS resources needed to run their applications."
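
As a flavor of what "cloud as code" looks like in practice, here is a deliberately tiny sketch: a template that declares a single S3 bucket, created as a stack from the CLI. The stack, file, and resource names are placeholders:

# write a minimal CloudFormation template
cat > template.yml <<'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example stack with one S3 bucket
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
EOF

# create the stack; later changes go through update-stack or change sets
aws cloudformation create-stack \
  --stack-name demo-stack \
  --template-body file://template.yml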

2. Automate with AWS Cloud Management Tools

Cloud makes it easier for you to automate everything using APIs. AWS provides a bunch of services that help organizations practice DevOps, and these are built first for use with AWS. These tools have the capability to automate manual tasks, help teams manage complex environments at scale, and keep engineers in control of the high velocity that is enabled by DevOps.  

AWS cloud automation tools help you use automation so you can build faster and more efficiently.

AWS Automation Tools

You might want to first and foremost automate the build and deploy process of your applications. You can leverage Jenkins, or CodePipeline with CodeDeploy, to automate your build-test-release-deploy process. This will enable anyone on your team to deploy a new piece of code into production, potentially saving hundreds of hours every month for your engineers.

Using AWS services, you can also automate manual tasks or processes including deployments, development & test workflows, container management, and configuration management.

Doing manual work in the cloud through the console can be quite problematic. You simply cannot deal with the complexity and configuration required for your applications without automating everything from provisioning, configuration, build, release, and deployment to monitoring and troubleshooting issues.

“In Cloud, the only thing you should trust is your automation. Automate IT.”

3. Free up Engineers’ Time Using Managed DB and Search

In most cases, there is absolutely no reason for you to run your own SQL databases. AWS offers some great services, such as RDS and Amazon Elasticsearch Service. These can free you from the worry of the underlying AWS cloud management processes by managing the complexity and handling the underlying infrastructure.

Amazon Elasticsearch Service makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full text search, application monitoring, and more. Similarly, the Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business.

These managed offerings from AWS make everything from patch management, horizontal scalability to read replicas a breeze. The best part is that these will free up your engineers’ time to focus on more business initiatives by offloading a large chunk of operational work to AWS.

4. Simplify Troubleshooting Through Centralized Log Management

“DevOps allows you to do frequent deploys, so you debug quickly and do the release. With Centralized Log Management, debugging gets quicker and faster.”

The most important debug information of your applications that you need for troubleshooting will be in the log files. Therefore, you need a centralized system to collect and manage it. You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, and other sources. The ELK stack (Elasticsearch, Logstash, and Kibana) or EKK stack (Amazon Elasticsearch Service, Amazon Kinesis, and Kibana) is a solution that eliminates the undifferentiated heavy lifting of deploying, managing, and scaling your log aggregation solution. With the EKK stack, you can focus on analyzing logs and debugging your application, instead of managing and scaling the system that aggregates the logs.

You should look at using CloudWatch Logs to stream all logs from your servers into the ELK stack provided by AWS. You can also look at Sumo Logic or Loggly if you need advanced analytics and grouping of log data. This will allow engineers to look at information for troubleshooting problems or handling issues without worrying about SSH access to systems.
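
Once logs are flowing into CloudWatch Logs, engineers can query them without touching the servers at all. A small sketch; the log group name and filter pattern are placeholders, and GNU date is assumed for the timestamp:

# search the last hour of a log group for ERROR entries
aws logs filter-log-events \
  --log-group-name /xyz.com/httpd/error_log \
  --filter-pattern "ERROR" \
  --start-time $(date -d '1 hour ago' +%s000)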

5. Get Round-the-Clock Cloud Visibility

DevOps is a continuous process; put it into action for round-the-clock cloud visibility. Every business needs visibility into its cloud usage from a user, operations, application, access, and network-flow standpoint.


You can do this easily in AWS using sources like CloudTrail logs, VPC Flow Logs, RDS logs, and ELB/CloudFront logs. You will have everything needed to audit what happened, when, and from where, so you can understand and troubleshoot any incident faster.

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain events related to API calls across your AWS infrastructure. CloudTrail provides a history of AWS API calls for your account, including API calls made through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This history simplifies security analysis, resource change tracking, and troubleshooting.
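As an illustration, CloudTrail’s event history can be queried programmatically as well as from the console. The sketch below uses boto3’s lookup_events to pull the last day of TerminateInstances calls; the event name is just an example.

```python
import boto3
from datetime import datetime, timedelta

cloudtrail = boto3.client("cloudtrail")

# Who terminated instances in the last 24 hours? (event name is an example)
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "TerminateInstances"}
    ],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
)
for event in events["Events"]:
    print(event["EventTime"], event.get("Username", "unknown"), event["EventName"])
```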

Similarly, VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs. After you’ve created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs.

Flow logs can help you with a number of tasks; for example, to troubleshoot why specific traffic is not reaching an instance, which in turn can help you diagnose overly restrictive security group rules.

You can also use flow logs as a security tool to monitor the traffic that is reaching your instance.
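Enabling flow logs is itself scriptable. A minimal sketch with boto3 is below; the VPC ID, log group name, and IAM role ARN are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Capture all traffic for a VPC into a CloudWatch Logs group.
# The VPC ID, log group, and delivery role below are placeholders.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-delivery",
)
```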

You can also monitor the MySQL error log, the slow query log, and the general log directly through the Amazon RDS console, the Amazon RDS API, the Amazon RDS CLI, or the AWS SDKs. 
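Those RDS log files can also be listed and fetched through the API. A short sketch, assuming a hypothetical instance identifier and the default MySQL slow query log name:

```python
import boto3

rds = boto3.client("rds")

# List the log files available for an instance (the identifier is a placeholder).
for log_file in rds.describe_db_log_files(DBInstanceIdentifier="prod-mysql")["DescribeDBLogFiles"]:
    print(log_file["LogFileName"], log_file["Size"], "bytes")

# Download a portion of the slow query log for inspection.
portion = rds.download_db_log_file_portion(
    DBInstanceIdentifier="prod-mysql",
    LogFileName="slowquery/mysql-slowquery.log",
)
print(portion["LogFileData"])
```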

6. Manage ROI Intelligently

“DevOps is the culture of innovating at velocity. Using DevOps concepts you can help keep cloud ROI in check.”

One of the benefits of moving your business to the cloud is reducing your infrastructure costs. Before you find ways to maximize your AWS cloud ROI, you need the right data to inform your decisions. After all, controlling your cloud costs is easy when all the right data comes together in a single dashboard. There are tools (including from Botmetric) that provide full visibility into your cloud across the company, build a meaningful picture of expenses, and analyze resources by business unit or department. With these tools, you have immediate answers to why your AWS cloud costs spiked and what caused them.

“A penny saved is a penny earned. Ensure you track down every unused and underused resource in your AWS cloud and help increase ROI.”

Using Botmetric products, you can fix cost leaks within minutes using powerful click-to-fix automation. You also get a unified cloud cost savings dashboard to understand utilization across your business and spot cost spillage at the business-unit or cloud-account level.

Cloud capacity planning is pivotal to reduce your overall cloud spend. There is no better way to maximize ROI than considering Reserved Instance purchases in AWS for your predictable usage for the year.

With RIs, you pay a lower hourly usage fee for every hour in your Reserved Instance term; you are charged for that hour regardless of whether any usage actually occurred. When the total number of running instances during a given hour exceeds the number of applicable RIs you own, the excess is charged at the On-Demand rate. There are other dynamics to it too.
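To make the mechanics concrete, here is a tiny back-of-the-envelope calculation. The rates below are made up for illustration and are not AWS prices; the point is only how RI-covered hours and On-Demand overflow combine.

```python
# Illustrative only: the rates below are placeholders, not AWS pricing.
HOURS_IN_MONTH = 730

on_demand_rate = 0.10      # $/hour, hypothetical
ri_effective_rate = 0.065  # $/hour, hypothetical (amortized upfront + hourly fee)
reserved_count = 4         # RIs owned for this instance type/region

running_instances = 6      # instances running during a given hour
covered = min(running_instances, reserved_count)
overflow = running_instances - covered  # billed at the On-Demand rate

hourly_cost = covered * ri_effective_rate + overflow * on_demand_rate
print(f"Estimated monthly cost: ${hourly_cost * HOURS_IN_MONTH:,.2f}")
```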

Botmetric’s AWS Reserved Instance planner (RI Planner) evaluates cloud utilization to recommend intelligent ways of optimizing AWS RI costs. It enables you to plan right, with no more over-reservation or underutilization. You get access to a suite of intelligent RI recommendation algorithms and a smart RI purchase planner that saves weeks of effort.

With the newer RI models, you can simplify RI management without fussing over tiny configuration details to take advantage of RIs within a region. You should also have mechanisms that alert you about unused RIs. With effective RI management, you can keep everyone happy and save money for the company.

7. Ensure Top-Notch AWS Cloud Security

You can achieve far better security in AWS than you typically can in a data center, without worrying about exorbitant licensing costs for legacy security tools.

AWS provides WAF, DDoS protection (Shield), Inspector, Systems Manager, Trusted Advisor, and Config Rules for protecting your cloud, and you can get virtually every other security tool from the AWS Marketplace.

AWS CloudTrail, which provides a history of AWS API calls for an account, also facilitates security analysis, resource change tracking, and compliance auditing of an AWS environment.

Moreover, CloudTrail is an essential service for understanding AWS usage and should be enabled in every region, for all AWS accounts, regardless of where services are deployed.

As a DevOps engineer, you can also use AWS Config, which builds an AWS resource inventory including configuration history, configuration change notifications, and relationships between AWS resources. It also provides a timeline of resource configuration changes for specific services. Change snapshots are stored in a specified Amazon S3 bucket, and AWS Config can be set up to send Amazon SNS notifications when resource changes are detected. This helps keep vulnerabilities in check.
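For example, the configuration timeline for a single resource can be pulled through the API. A minimal sketch, assuming a hypothetical security group ID:

```python
import boto3

config = boto3.client("config")

# Fetch the recorded configuration history for one resource
# (the security group ID is a placeholder).
history = config.get_resource_config_history(
    resourceType="AWS::EC2::SecurityGroup",
    resourceId="sg-0123456789abcdef0",
)
for item in history["configurationItems"]:
    print(item["configurationItemCaptureTime"], item["configurationItemStatus"])
```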

Not to forget: add an additional layer of security for your business with Multi-Factor Authentication (MFA) for your AWS root account and all IAM users. The same should apply to your SSH jumpbox, so no one can access it directly. You should also enable MFA Delete for all critical S3 buckets that hold business information and backup data, to protect them from accidental deletions. Given the security advantages MFA provides, there is no reason to avoid it. It adds real protection to your cloud and your data.
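A quick way to audit the IAM side of this is to list users and flag any without an MFA device. A minimal sketch with boto3:

```python
import boto3

iam = boto3.client("iam")

# Flag every IAM user that has no MFA device assigned.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        devices = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
        if not devices:
            print("MFA not enabled for:", user["UserName"])
```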

Concluding Thoughts: Adopt Modern DevOps Tools

If cloud is a new way of computing, then DevOps is the modern way of getting things done. You should leverage new-age DevOps tools for monitoring, application performance management, log management, security, data protection, and cloud management instead of trying to build ad hoc automation or dealing with primitive tools offered by AWS. Good tools like New Relic, Shippable, and CloudPassage can save time and effort. However, an intelligent DevOps platform like Botmetric is the way forward if you want simplified cloud operations.

We’re at a stage now where most organizations don’t really need to be educated about the value of cloud computing. The major advantages of cloud, including agility, scalability, cost benefits, innovation, and business growth, are fairly well established. Rather, it is a matter of businesses evaluating how cloud fits into their overall IT strategies.

With new innovations, changing dynamics, and the increasing demands of DevOps users, businesses are becoming more agile with each passing day. But DevOps isn’t the easiest thing in the world. We hope these seven tips make your endeavor to get the best out of your DevOps and AWS cloud combo a breeze! Do drop in a line or two below about what you think. Until next time, stay tuned!

Gauge AWS S3 Spend with Botmetric S3 Cost Analyzer, Minimize AWS S3 Bill Shock

AWS Simple Storage Service (S3) is one of the most popular AWS services offered under the umbrella of storage. For most enterprises, AWS S3 spend is among the top five biggest spends across all AWS offerings. There are three primary costs associated with S3: the storage cost charged per GB per month, the API cost for operations on objects (where write requests are roughly ten times as expensive as reads), and the data transfer cost for data moved out of the AWS region.

Despite the expense, however, S3 cannot be ignored. It offers 99.999999999% durability, and it has a simple web interface to store and retrieve any amount of data.

If you’re an operations manager or a cloud engineer, you probably already know that data reads/writes and data moved in/out are also billable parts of S3 usage. So AWS S3 billing involves more than just the storage cost, and a detailed analysis of all these components can help you keep AWS S3 bill shock to a minimum.
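To see how those components add up, here is a back-of-the-envelope estimate. Every rate below is a made-up placeholder, not AWS pricing; only the structure of the calculation matters.

```python
# Illustrative only: all per-unit rates are placeholders, not AWS list prices.
storage_gb   = 2_000      # average GB stored this month
put_requests = 500_000    # write-type requests (PUT/COPY/POST/LIST)
get_requests = 5_000_000  # read-type requests (GET)
data_out_gb  = 300        # GB transferred out of the region

storage_rate    = 0.023   # $/GB-month, hypothetical
put_rate_per_1k = 0.005   # $, hypothetical (roughly 10x the GET rate)
get_rate_per_1k = 0.0004  # $, hypothetical
transfer_rate   = 0.09    # $/GB, hypothetical

total = (
    storage_gb * storage_rate
    + (put_requests / 1000) * put_rate_per_1k
    + (get_requests / 1000) * get_rate_per_1k
    + data_out_gb * transfer_rate
)
print(f"Estimated monthly S3 bill: ${total:,.2f}")
```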

How Does the Botmetric AWS S3 Cost Analyzer Help?

Enterprises that use S3 for content delivery, especially media-tech companies, build up huge S3 costs. As mentioned earlier, S3 cost varies with storage, data transfer, and many other attributes. If you drill down into attributes such as individual S3 sub-services, instances, and tags, and analyze them, you can curtail AWS S3 bill shock to a great extent.

That’s why Botmetric built the AWS S3 Cost Analyzer. The analyzer has extensive built-in filters that let you drill down by:

  • Time range
    • 7 days
    • Current month
    • Previous month
    • Custom time range
  • Buckets
  • Sub-services
    • S3 API requests
    • S3 data transfer
    • S3 Standard Storage
    • S3 Reduced Redundancy Storage
    • S3 Infrequent Access Storage
  • Instances
  • Tag keys

With Botmetric S3 Cost Analyzer, you get complete clarity on your S3 usage and AWS S3 spend.

Know Your AWS S3 Spend by Current Month/Day

With Botmetric S3 Cost Analyzer, you can easily view your overall S3 spend by day or by current month. This drill-down helps you understand your current S3 costs.


Know Your AWS S3 Spend by Buckets

Data/objects in S3 are stored in buckets created under the S3 service. A bucket is a logical unit of storage in AWS’ object storage service, S3. Buckets store objects, which consist of data and metadata that describes the data. There is no limit to the number of objects you can store in a bucket. By default, you can create up to 100 buckets in each of your AWS accounts; if you need more, you can raise the limit by submitting a service limit increase request.
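If you want a quick, generic look at how much Standard storage each bucket holds (one input to per-bucket cost), the daily BucketSizeBytes metric in CloudWatch can be queried per bucket. A minimal sketch, assuming the metric has datapoints for your buckets:

```python
import boto3
from datetime import datetime, timedelta

s3 = boto3.client("s3")
cloudwatch = boto3.client("cloudwatch")

# Print the latest daily Standard-storage size for every bucket in the account.
for bucket in s3.list_buckets()["Buckets"]:
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/S3",
        MetricName="BucketSizeBytes",
        Dimensions=[
            {"Name": "BucketName", "Value": bucket["Name"]},
            {"Name": "StorageType", "Value": "StandardStorage"},
        ],
        StartTime=datetime.utcnow() - timedelta(days=2),
        EndTime=datetime.utcnow(),
        Period=86400,
        Statistics=["Average"],
    )
    points = sorted(stats["Datapoints"], key=lambda p: p["Timestamp"])
    size_gb = points[-1]["Average"] / (1024 ** 3) if points else 0.0
    print(f"{bucket['Name']}: {size_gb:.2f} GB (Standard storage)")
```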

Botmetric S3 Cost Analyzer gives you a complete overview and breakdown of cost for each individual bucket. You can drill down further using filters to better understand your S3 spend bucket-wise. What’s more, you can view the data in an organized, graphical view of your choice (bar chart, pie chart, line chart, or table format).


Know Your AWS S3 Spend by Accounts

If your organization maps business units to separate AWS accounts, understanding S3 spend by account is essential to understanding usage patterns across those business units. Botmetric S3 Cost Analyzer caters to this need with a clear graphical view of your choice.


Know Your AWS S3 Spend by Sub-services

If you look deeper into AWS S3 cost, it depends on sub-service usage too. Hence, Botmetric S3 Cost Analyzer provides a deep drill-down into sub-services such as:

  • S3 Standard Storage
  • S3 API Request
  • S3 Data Transfer
  • S3 Infrequent Access Storage
  • S3 Reduced Redundancy Storage

With a drill-down into sub-services, you can understand the individual cost items associated with each sub-service. Plus, you get to view all these data-driven insights in graphical forms of your choice.


Know Your AWS S3 Spend by Tags

Tags are one of the most important custom features AWS offers its users. They let you label infrastructure with your own naming. Many AWS users prefer pulling reports by tag, because those reports give a better understanding of spend by environment.

With Botmetric S3 Cost Analyzer, you can drill down to the S3 costs associated with the various tags configured in your environment and see them in a graphical view without data overload.
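Outside Botmetric, one generic way to slice S3 spend by a cost allocation tag is the Cost Explorer API’s group-by-tag query. A minimal sketch follows; the tag key and date range are hypothetical, and the tag must be activated as a cost allocation tag for data to appear.

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

# S3 spend for one month, grouped by a hypothetical "environment" tag.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2017-04-01", "End": "2017-05-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={"Dimensions": {"Key": "SERVICE",
                           "Values": ["Amazon Simple Storage Service"]}},
    GroupBy=[{"Type": "TAG", "Key": "environment"}],
)
for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```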


Export AWS S3 Spend Data in CSV

Best of all, with Botmetric S3 Cost Analyzer, you can export or download the different breakdowns of S3 cost as CSV files, so you can circulate them among your team members and use them for any internal analysis. The data export option lets you visualize the cost breakdown by buckets, sub-services, accounts, tags, and more.


Concluding Thoughts

For many businesses, S3 has been the primary storage for cloud-native applications: a bulk repository, a data lake for analytics, a target for backup and disaster recovery, a store for serverless computing, and more. So it’s pivotal to keep a check on Amazon S3 billing.

AWS provides a Simple Monthly Calculator to estimate cost, but to use it well you need to know the dynamics of your AWS S3 usage and how S3 billing works. With Botmetric S3 Cost Analyzer, you can easily analyze and optimize S3 billing split by month or day, in formats you love, without data overload. We’ve also collated the top seven hacks on how to get a handle on AWS S3 spend, if you’d like to have a look.

Apart from the S3 Cost Analyzer, Botmetric also helps you analyze the costs of other major AWS services, such as EC2 and RDS, in detail. Do check them out here.

Get Botmetric Cost & Governance today to check out what the Botmetric S3 Cost Analyzer offers, and see for yourself how it helps increase your AWS ROI.

Plus, if you want to fool-proof your system against outages like the Amazon S3 storage outage that occurred on February 28, 2017, have a look at this Botmetric blog here. Until next time, stay tuned with us.