Here’s Botmetric Wishing Thank You for a Fantastic 2016 & Happy 2017

Dear You,

We are thrilled to close this year with a bang! It’s hard to sum up 2016 in just a few scribbles, especially when we made many new friends and rolled out so many enhancements and new features: a new platform, new audits, more ingrained intelligence, more cloud optimization options, a revamped website with a new UI, and much, much more.

And as the New Year sets in slowly across the world, minute by minute, second by second, and continues its journey with the same charm and diligence, we at Botmetric, likewise, will continue our journey towards making cloud management and optimization a breeze for you. With rolled-up sleeves. With more focus. With more features. And with more zeal. Everything for you, dear customer.

Here’s 2016 Year-in-Review: The Best Year so far, for Botmetric

Continues to be the Highest Rated Product on AWS Marketplace

We attribute our success to our dear customers. Thank you for the timely feedback, and those wonderful testimonials bedecked with five stars. Based on the feedback and our learning, we revamped Botmetric into a platform of three products that are use-case targeted instead of a ‘one-size-fits-all’ approach:

1. Cost Management & Governance: This Botmetric product helps you control your cloud spend, save that extra bit, optimize spend through intelligent analytics, and allocate cloud resources wisely for maximum ROI. It is built for businesses and CIO teams to support decision making and maximize cloud ROI.

2. Security & Compliance: This Botmetric product helps you get compliant and keeps your cloud secure with automated audits, health checks, and best practices. It provides the most comprehensive list of automated health checks. It is built for CISO and Security Engineers to proactively identify issues and fix vulnerabilities before they become problems.

3. Ops & Automation: This Botmetric product saves you the time and effort you invest in automating cloud operations. It has built-in operational intelligence that can analyze problems and fix events in seconds. Above all, it speeds up DevOps. It is built for CTOs and DevOps Engineers seeking alert diagnostics, event intelligence, and out-of-the-box automation.

Choose any one of the above, any two, or all three. Your wish, your products, tailored for your AWS cloud. With these three products, you can realize the full potential of your cloud without any information overload. Find insights that matter to you, in just one click.

There’s More:

To celebrate what you’ve helped us achieve this year, we have put together a few 2016 Botmetric facts:

2016 Botmetric Facts

New and Key Botmetric Product Features Rolled-Out in 2016:

1. Data Transfer Analysis for AWS Cost Management: Provides insights into your bandwidth costs on AWS cloud.

2. EC2 Cost Analysis to Optimize AWS Spend: Helps you understand your AWS EC2 spend easily and efficiently.

3. Internal Chargeback: Allocate cost easily across multiple cost centers and bring in the required parity in your AWS cost management.

4. Compliance Audit Policies for Heightened Security: Helps mitigate vulnerabilities in real time and adopt a comprehensive security management.

5. Cloud Reporting Module: Helps you quickly find your AWS cloud reports from one centralized module without scrolling, and to counter endless searching for what you need.

6. Reserved Instance Planner: Provides reservation recommendations at instance level. This revisited RI planner allows you to filter the recommendations, look at the details of the instance being recommended, and accordingly add it to a RI plan. You can also download the plan and work on budget approvals and actual reservations offline.

Work-in-progress

An Advanced DevOps Intelligence Feature: Assuages alert fatigue, helps you easily understand alert events through intelligence, and tells you why each is happening. It also checks for patterns in the problems.

We have much more coming up in 2017. So stay tuned with us.

Here’s Botmetric wishing you a very Happy New Year.

 

From,

Team Botmetric

 

Let’s blow the heartiest kisses to the cloud in 2017

Cloud is the new black. Let’s together embrace it more, with dexterity, in 2017.

Share all your 2016 cloud musings, learnings, and accomplishments with Botmetric on Twitter, Facebook, or LinkedIn. We’re all ears, and we’ve got your back.

Let’s make cloud an easier and a better place to grow our business.

 

P.S. If you have not signed up with Botmetric yet, go ahead and take a 14-day free trial. As an AWS user, we’re sure you’ll love it!

Top 7 Hacks on How to Get a Handle on AWS S3 Cost

Amazon Simple Storage Service (S3) is one of the most widely deployed AWS services, next to AWS EC2. It is used for a wide range of use cases, such as static HTML websites, blogs, personal media storage, and enterprise backup storage. From an AWS cost perspective, S3 storage is one of the top resources to watch. For every enterprise looking to optimize AWS costs, analysing and formulating an effective cost management strategy for AWS S3 is important. More so, understanding the data lifecycle of the hosted applications is the key step towards implementing a good AWS S3 cost management strategy.

Making the most of AWS S3:

With AWS, you pay for the services you use and the storage units you’ve consumed. If the S3 service is a significant component of your AWS cost, then implementing AWS S3 management best practices is the way forward.

For example, if a business plans for 100 GB of AWS S3 storage but actually stores only 10 GB of files, AWS charges only for the 10 GB used, not the entire 100 GB planned. Because of this pay-for-what-you-store model, many AWS administrators tend to overlook S3 from a cost management perspective. However, there are various other factors that affect S3 cost too, of which many are unaware.

To this end, we’ve collated a few basic checks to get S3 cost management right as your AWS S3 usage grows:

1. Keep EC2 instances and S3 buckets in the same AWS region, because there is a cost involved for data transfers out of an AWS region.

2. Choose the object naming schema such that the generated keys distribute files across multiple partitions of the AWS S3 system. If the keys are distributed evenly, fewer file operations are needed to read and write the files. This leads to lower spend, as there is an additional cost overhead for S3 read-write operations.

3. Only temporary access credentials should ever be embedded in the code of an application that uses S3. If long-term access keys are exposed to a third party, the S3 resources can be misused, which can prove very costly if the credentials are compromised in the future.

4. Monitor the actual usage of AWS S3 periodically. By doing so, misuse of the provisioned S3 resources will come to light, helping curtail data compromise.

5. Files are the key object type stored in S3. Remove from your buckets all files that are no longer relevant, along with temporary files that can be recreated through computation. Also clean up periodically the parts left behind by incomplete multipart uploads.

6. When using versioning for an S3 bucket, enable “Lifecycle” feature to delete old versions. Here’s why and how: With Lifecycle Management, you can define time-based rules that can trigger ‘Transition’ and ‘Expiration’ (deletion of objects). The Expiration rules give you the ability to delete objects or versions of objects, which are older than a particular age. This ensures that the objects remain available in case of an accidental or planned delete while limiting your storage costs by deleting them after they are older than your preferred rollback window.

7. Where possible, compress data before sending it to S3, because AWS S3 charges by the amount of storage you consume.
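Hack 3 above — never baking long-term keys into application code — is usually implemented with AWS STS. Below is a minimal, hypothetical boto3 sketch; the role ARN and session name are placeholders, and `temporary_s3_client` assumes valid AWS credentials are available to the caller:

```python
def session_kwargs(creds):
    """Map the credentials dict returned by STS to boto3 Session arguments."""
    return {
        "aws_access_key_id": creds["AccessKeyId"],
        "aws_secret_access_key": creds["SecretAccessKey"],
        "aws_session_token": creds["SessionToken"],
    }


def temporary_s3_client(role_arn, session_name="app-s3-access"):
    """Exchange long-term credentials for short-lived ones via AssumeRole,
    then build an S3 client from them. Needs boto3 and AWS credentials."""
    import boto3
    resp = boto3.client("sts").assume_role(
        RoleArn=role_arn, RoleSessionName=session_name, DurationSeconds=3600
    )
    return boto3.Session(**session_kwargs(resp["Credentials"])).client("s3")
```

The application code then holds only credentials that expire within the hour, limiting the blast radius of any leak.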

Ultimately, all data stored in S3 has lifecycle stages: creation, active usage, and then infrequent usage. Take content on a news website. The daily news, along with its images, can be stored in AWS S3. Current news items are accessed most and hence have to be quickly accessible to readers. At the end of the week, the older daily news content can be moved to AWS S3 RRS, which is cheaper but offers reduced redundancy. At the end of the month, it can be moved to the Standard-Infrequent Access storage class. At the end of the quarter, this content can be moved to the low-cost, rarely accessed archival tier of AWS Glacier.

This data lifecycle applies across domains, including e-commerce and enterprise computing. Hence, leverage your data’s inherent lifecycle for AWS S3 cost optimization.

You can also take advantage of Amazon S3 Reduced Redundancy Storage (RRS) as an alternative to S3 Standard, because it’s cheaper (at the price of lower durability).

To Conclude:

Once you follow the above hacks, start observing the bills. And don’t forget other key best practices too. Use RRS wherever you can. Keep your buckets organized. Archive when appropriate. Speed up your data processing with a proper key naming scheme. Use S3 if you are hosting a static website. Architect around data transfer costs. Use consolidated billing.

Finally, AWS provides a simple configuration mechanism to specify data lifecycle rules and the transition of objects across storage classes. So do take the data lifecycle into account when it comes to S3 cost management.
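That lifecycle configuration mechanism can be scripted. Here is a minimal, hypothetical boto3 sketch that combines hacks 5 and 6 — tiering objects down as they age, expiring old versions, and aborting stale multipart uploads. The bucket name and day thresholds are placeholder assumptions, not recommendations:

```python
def lifecycle_config(ia_days=30, glacier_days=90, noncurrent_days=30, mpu_days=7):
    """Build an S3 lifecycle configuration: tier objects to cheaper storage
    classes as they age, expire old object versions, and clean up parts
    left behind by incomplete multipart uploads."""
    return {
        "Rules": [
            {
                "ID": "tier-and-clean",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object in the bucket
                "Transitions": [
                    {"Days": ia_days, "StorageClass": "STANDARD_IA"},
                    {"Days": glacier_days, "StorageClass": "GLACIER"},
                ],
                "NoncurrentVersionExpiration": {"NoncurrentDays": noncurrent_days},
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": mpu_days},
            }
        ]
    }


def apply_lifecycle(bucket):
    """Apply the rules to a bucket. Needs boto3 and AWS credentials."""
    import boto3
    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket=bucket, LifecycleConfiguration=lifecycle_config()
    )
```

Calling `apply_lifecycle("my-bucket")` (a placeholder name) sets the policy once; S3 then enforces it automatically, with no recurring scripts to run.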

If you are finding it difficult to save on AWS S3 costs, explore the intelligent Botmetric AWS Cloud Management Platform with a 14-day free trial. It can help you manage your AWS storage resources and keep them at optimal pricing levels at all times. For other interesting news on cloud, follow us on Twitter, Facebook, and LinkedIn.

DevSecOps: The Next Wave of Cloud Security

The adoption of DevOps, agile, and public cloud services among businesses worldwide is increasing by the day. These are seen as major shifts in enterprise IT, and as the next wave after the Internet, thanks to digital democratization, which forces businesses to be nimble to remain competitive. That said, security threats and cybercrime continue to outsmart businesses despite the cutting-edge security walls around them. To this end, DevSecOps was born to bring security into DevOps, just as DevOps bridged the development and operations divide.

Striking the right chord: Security into DevOps, on the cloud

Business leaders now understand that moving to the cloud is not just another tech adoption; it is about speed of service delivery and dynamic scalability. One of the most significant payoffs of DevOps has been better software quality delivered faster, even on the cloud.

Cloud technology dissolves the enterprise perimeter, the key construct around which security solutions have been developed. Earlier, security concerns held many businesses back from jumping on the cloud bandwagon. But once the idea of a perimeter and boundary was challenged anyway by new security requirements, such as those warranted by Bring Your Own Device (BYOD) policies, the IT industry slowly started to embrace the cloud. Security professionals are now leveraging real-time analytics and have adopted “Continuous Security,” in clear parallel to the “Continuous Integration” and “Continuous Deployment” approaches of the DevOps movement.

Image Source: RSAConference, 2016, DevSecOps In Baby Steps

DevSecOps Tools: Filling in the Security Gap

Many enterprises have started to explore ways of making application quality and security testing more scripted, continuous, and automated. With DevSecOps, they are taking an automation approach to security tests throughout development, even on the cloud. They are even integrating security-feature design and implementation into the development lifecycle in ways that weren’t possible before.

For instance, in the DevOps automation cycle, every code commit triggers a build that tests the security and functionality of the application using tools like Amazon Inspector and Selenium. Selenium, earlier used only for test automation, is now emerging as one of the top DevSecOps tools, as it can easily trigger security scanning tests along with other application test scripts. Moreover, this approach ensures that systems are always patched, scanned for vulnerabilities, and checked for correct functioning before deployment.

To sum up: application security is reaching the level that many security professionals have been advocating for years. This is possible only through automation of security and regulatory compliance tests throughout development and deployment. By leveraging automation tools to enforce security and compliance controls, DevSecOps empowers organizations to achieve regulatory compliance at speed and at scale. Furthermore, DevSecOps makes detecting and closing security vulnerabilities on the cloud faster than before.

With DevSecOps on the cloud, security becomes an essential part of the development process itself instead of being an afterthought.

To Conclude:

The provisioning of server infrastructure itself can be a dynamic process on the cloud. DevSecOps processes can trigger both platform and application security checks whenever a new version of an application is deployed. Hence, DevSecOps on the cloud effectively blurs the lines between platform security and application security, as automation of compliance and regulatory tests, along with application-specific quality tests, becomes the norm. Clearly, DevSecOps is set to evolve as the next significant wave of cloud security.

Let us know what you think of this story. If you need to talk to experts on how to leverage DevSecOps for your cloud, write to us at support@botmetric.com or just give us a shout out on Twitter, Facebook, or LinkedIn. You might as well explore Botmetric, an intelligent cloud management platform with integrated DevOps and SecOps features. Do check out how Botmetric can add value to your cloud infrastructure with a 14-day trial run.

Introducing EC2 Cost Analysis in New Botmetric: A Game Plan to Optimize AWS Spend

Elastic Compute Cloud (EC2) is one of the most popular AWS services, used by almost every Amazon cloud customer. In general, EC2 usage accounts for 70 to 75% of the AWS bill of an average AWS user. Moreover, most of the underlying services, like EBS, EIP, ELB, and NAT, are used in conjunction with EC2 for deploying applications on AWS cloud.

So several unique EC2-related line items can show up on your AWS bill, making it even more difficult to comprehend what’s driving all that spending. A high-level view of the spend will not suffice. Because of this, it is critical to analyze EC2 usage and its spend breakdown by dimensions like resource, instance type, service, account, and more while optimizing AWS costs.
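For readers who want to reproduce such a breakdown by hand, a similar query can be sketched against the AWS Cost Explorer API. This is a hypothetical boto3 sketch, not how Botmetric itself works; the date range is a placeholder:

```python
def ec2_cost_by_instance_type(start, end):
    """Build a Cost Explorer request: monthly EC2 compute spend,
    grouped by instance type."""
    return {
        "TimePeriod": {"Start": start, "End": end},
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
        "Filter": {
            "Dimensions": {
                "Key": "SERVICE",
                "Values": ["Amazon Elastic Compute Cloud - Compute"],
            }
        },
        "GroupBy": [{"Type": "DIMENSION", "Key": "INSTANCE_TYPE"}],
    }


def print_ec2_spend(start, end):
    """Run the query and print cost per instance type.
    Needs boto3 and AWS credentials."""
    import boto3
    ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer endpoint
    resp = ce.get_cost_and_usage(**ec2_cost_by_instance_type(start, end))
    for group in resp["ResultsByTime"][0]["Groups"]:
        print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```

Swapping the `GroupBy` key for `LINKED_ACCOUNT` would instead split the same spend across accounts under a master payer account.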

To cater to this need and help our customers understand their AWS EC2 spend easily and efficiently, we have introduced support for “EC2 Cost Analysis” in the ‘All-New’ Botmetric platform as part of the Cost Management & Governance ‘Analyze’ feature.

Here’re the top features that the new Botmetric EC2 Cost Analysis offers:

1. Know your EC2 spend by instance type: You can quickly drill down and understand your total EC2 cost on AWS cloud split across different EC2 instance families. You can filter this further by various AWS accounts.

 

EC2 Cost Breakdown by instance type

 

2. EC2 cost breakdown by sub-services: We have brought together the costs of EBS, EIP, ELB, Data Transfer, and NAT Gateway under the EC2 cost analysis module so you can easily see your mix of total spend across the various EC2-related services. You can filter this cost further for any AWS account or month to drill into specific details. We also encourage you to drill down this analysis for a particular instance family.

 

EC2 cost breakdown by sub services

 

3. EC2 cost breakdown across different AWS accounts: You can split the EC2 cost across the different AWS accounts under your master payer account so you can categorize them based on your usage.

 

EC2 cost across different AWS accounts

 

4. Data export in CSV: We allow you to export different breakdowns of EC2 cost into a CSV file for any internal analysis. The data export option lets you see the cost breakdown by instance types, AWS accounts, related services, specific EC2 resources, etc.

 

Export data in CSV

You can access this feature in Botmetric Cloud Management Platform under Cost & Governance application in the Analyze Module. Please write to us with your feedback on what we can do better and where we can improve it further.

If you want to know some of the AWS EC2 cost saving tips that pro AWS users follow, read this Botmetric blog post. And if you want to know what other new features are available in the new release of Botmetric, take an exclusive 14-day trial or read the expert blog post by Botmetric Customer Success Manager, Richa Nath. Until our next blog post, do stay tuned with us on Twitter, Facebook, and LinkedIn for other interesting news from us!

Bridging the Cloud Security Gaps: With Freedom Comes Greater Responsibility

By 2019, global spending on public cloud services by businesses is expected to reach $141 billion, according to IDC. Approximately two-thirds of CIOs across the globe view cloud computing as a principal disruptive force in their businesses, says another leading survey. With cloud adoption gaining momentum, it is evident that cloud computing is here to stay, despite the security concerns looming over the heads of many enterprises.

Here’s why: apart from the elasticity and agility the cloud offers, it is the freedom to swiftly launch an infrastructure with just a few clicks and have it ready in a few minutes. This is what has made developers and engineers the prime drivers of cloud adoption across organizations. Plus, organizations are saving 14 percent of their budgets on average as an outcome of public cloud adoption, according to Gartner’s 2015 cloud adoption survey. The infographic below lists a few influencing factors.

AlienVault Cloud Security Report 2016
Image Source: AlienVault Cloud Security Report 2016

True. However, this freedom to scale the infrastructure up or down as and when required can very easily wash away that 14 percent saved on budgets if not handled with greater responsibility. Why? Due to cloud security gaps that need to be filled, says Amarkant Singh, Head of Product, Botmetric, in one of his articles.

“With Freedom comes greater Responsibility.” And with the choice of public cloud that features shared responsibility model, you need to pay close attention to key security measures from time to time.

Security in the Cloud: A Shared Responsibility

Customers of public cloud services are responsible for their data security and access management of cloud resources. For instance, if you’re using AWS EC2 public cloud infrastructure service, you are responsible for Amazon Machine Images (AMIs), operating systems, applications, data in transit, data at rest, data stores, credentials, policies, and configurations. According to Amarkant, a public cloud user needs to tackle four major components when it comes to cloud security:

  1. Access Controls
  2. Network Security
  3. Data Security
  4. Activity & access trail

And here’re the top five best practices, as suggested by Amarkant, that will help close the cloud security gaps within your cloud infrastructure:

1. Grant least privileges

Use this as a rule of thumb when granting privileges to users and programs. If you’re using AWS, make full use of its IAM capabilities to define fine-grained permission levels for all access points into your cloud infrastructure. Plus, make multi-factor authentication mandatory for your users. And don’t forget to rotate access credentials regularly.
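As one concrete check in this spirit, here is a hypothetical boto3-style sketch that lists IAM users who still lack an MFA device. The function takes the client as an argument so it can be exercised without live credentials:

```python
def users_without_mfa(iam):
    """Return the names of IAM users with no MFA device attached.
    `iam` is a boto3 IAM client (or any object with the same methods)."""
    missing = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            devices = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
            if not devices:
                missing.append(user["UserName"])
    return missing
```

With real credentials you would call `users_without_mfa(boto3.client("iam"))` and chase down every name it returns.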

2. Enable all the detective services

Leverage all the tools and configurations provided by your cloud service provider to track activities within your cloud. For instance, if you use AWS, you must enable AWS CloudTrail logs (even in regions where you don’t have instances), VPC Flow Logs, ELB access logs, and AWS Config.
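Enabling CloudTrail across all regions can be scripted. A minimal, hypothetical boto3 sketch follows; the trail and bucket names are placeholders:

```python
def trail_settings(name, bucket):
    """Settings for an all-region CloudTrail trail with log-file validation."""
    return {
        "Name": name,
        "S3BucketName": bucket,
        "IsMultiRegionTrail": True,       # also covers regions with no instances
        "EnableLogFileValidation": True,  # tamper evidence for delivered logs
    }


def enable_cloudtrail(name, bucket):
    """Create the trail and start logging. Needs boto3, AWS credentials,
    and a bucket policy that lets CloudTrail write to the bucket."""
    import boto3
    ct = boto3.client("cloudtrail")
    ct.create_trail(**trail_settings(name, bucket))
    ct.start_logging(Name=name)
```

Setting `IsMultiRegionTrail` is what satisfies the “even in regions where you don’t have instances” advice above with a single trail.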

3. Encrypt data that is at rest and in transit

Despite knowing the importance of encryption, very few follow it, even though they store sensitive data on the cloud. Ignorance may be bliss, but it can prove costly when it comes to the security of data. Not to worry: major cloud service providers, like AWS, provide native encryption capabilities in all their data storage services, such as RDS, S3, and EBS. Great! Now, don’t forget to use HTTPS/SSL when transferring data over the Internet or across regions.
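For S3 specifically, default bucket encryption can be switched on programmatically. A hypothetical boto3 sketch, with the bucket name and KMS key as placeholders:

```python
def sse_config(kms_key_id=None):
    """Default server-side encryption config: SSE-KMS when a key is
    supplied, otherwise S3-managed keys (SSE-S3 / AES256)."""
    if kms_key_id:
        default = {"SSEAlgorithm": "aws:kms", "KMSMasterKeyID": kms_key_id}
    else:
        default = {"SSEAlgorithm": "AES256"}
    return {"Rules": [{"ApplyServerSideEncryptionByDefault": default}]}


def encrypt_bucket(bucket, kms_key_id=None):
    """Apply default encryption to the bucket. Needs boto3 and
    AWS credentials."""
    import boto3
    boto3.client("s3").put_bucket_encryption(
        Bucket=bucket, ServerSideEncryptionConfiguration=sse_config(kms_key_id)
    )
```

Once applied, every new object written to the bucket is encrypted at rest even when the uploading client forgets to ask for it.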

4. Architect networks with desired segmentation

While you architect, follow the best practices. In the case of AWS, you can create a VPC and further segment your network into public and private subnets. Do not forget to keep your data stores in a private subnet.

5. Backup the backups

Yes! It is recommended to keep one or more separate cloud accounts just for backups, with only a few users having access to them. Why? Say you’re using AWS EBS and take regular snapshots for backup. If the account is compromised by a hacker, it is highly likely that both the EBS volumes and their snapshots (backups) are deleted.
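One way to keep a snapshot copy out of the primary account’s reach is to share it with the dedicated backup account, which then copies it locally. A hypothetical boto3 sketch; the snapshot and account IDs are placeholders:

```python
def share_snapshot_args(snapshot_id, backup_account_id):
    """Arguments granting a separate backup account permission to use
    an EBS snapshot, so an independent copy can live outside the
    primary account."""
    return {
        "SnapshotId": snapshot_id,
        "Attribute": "createVolumePermission",
        "OperationType": "add",
        "UserIds": [backup_account_id],
    }


def share_snapshot(snapshot_id, backup_account_id):
    """Share the snapshot with the backup account. Needs boto3 and AWS
    credentials. The backup account should then run copy_snapshot so
    the copy survives even if the primary account is compromised."""
    import boto3
    boto3.client("ec2").modify_snapshot_attribute(
        **share_snapshot_args(snapshot_id, backup_account_id)
    )
```

The crucial step is the copy made from the backup account: a shared snapshot alone is still deleted when the owner deletes it.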

To Conclude:

The statement “With freedom comes great responsibility,” when it comes to public cloud security, is neither hype nor an understatement. Bring the required discipline into the team to perform regular audits, follow best practices, and preferably automate key tasks, and see how cloud computing will never cease to amaze you. Try Botmetric Security & Compliance to see how it can help.

Do tell us what’s your cloud security posture, and how you are implementing the critical cloud security controls and tackling the threat landscape for your cloud. Tweet to us. Comment to us on Facebook. Or connect with us on LinkedIn. We’re all ears!

P.S.: Hear the Botmetric webinar recording on AWS Security Do’s and Don’ts – Tackling the Threat Landscape by Amarkant to know more.

Editor’s Note: This blog post is an adaptation of a LinkedIn Pulse post by Amarkant Singh, published on Sep 28, 2016.

Ace Your AWS Cost Management with New Botmetric’s Chargeback Feature

Editor’s Note: This exclusive feature launch blog post is authored by Botmetric CEO Vijay Rayapati.

How awesome would it be if you, as an AWS user, could define, control, and understand cost allocation by the different cost centers in your organization, with the ability to generate chargeback invoices? It’s possible with the new Botmetric Cost Management & Governance application’s Chargeback module.

This Chargeback module carries over the Cost Allocation and Internal Chargeback features from the earlier version of Botmetric, packaged in a new UI with augmented features. Together, these features will help you manage costs associated with your AWS infra in a smart way. Plus, you can better manage budget controls across projects and departments. More than that, users can manage the complexities involved in treating chargebacks when dealing with multiple linked accounts from a central payer account. Additionally, you gain access to many insightful reports for effective tracking and management of AWS cloud costs.

Key benefits of the new Chargeback module in Botmetric include:

  • Define Cost Allocation: We have implemented support for defining cost centers within your company by department, team, or business unit so you can split the overall spend into specific cost centers. This works based on your AWS cost allocation tags and defines a grouping within Botmetric for creating cost centers for your business. You can have multiple cost allocation centers in Botmetric.

Define Cost Allocation with Botmetric Cost & Governance

 

  • Control Unallocated Spend: From the Chargeback module dashboard, you can quickly identify the cost that is unallocated to any cost center in your business. This will allow you to split or allocate the cost into different cost centers and also inform your cloud team on missing tags for cost management.

 

Control Unallocated Spend with Botmetric Cost & Governance

 

  • Spend View within a Cost Center: We launched a drill-down view so that you can understand the spend by various AWS services within a cost center. This allows you to inform specific cost center teams if they are about to surpass their allocated budget for the month and let them know what is causing the increase in spend.

 

Spend View within a Cost Center

 

  • Download Chargeback and Cost Allocation Data: You can download the chargeback data for any cost center in CSV format. We also allow you to download the unallocated cost as a CSV file, so that you can share it with your IT team for inputs on how cost should be allocated across cost centers.

 

Download Chargeback and Cost Allocation Data
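For context on what drives such cost-center splits, AWS cost allocation tags can also be queried directly through the Cost Explorer API. This is a hypothetical boto3 sketch, not Botmetric’s implementation; the `CostCenter` tag key and date range are placeholders:

```python
def cost_by_tag_request(start, end, tag_key="CostCenter"):
    """Cost Explorer query: monthly spend grouped by a cost allocation tag."""
    return {
        "TimePeriod": {"Start": start, "End": end},
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
        "GroupBy": [{"Type": "TAG", "Key": tag_key}],
    }


def spend_per_cost_center(start, end, tag_key="CostCenter"):
    """Run the query and map tag value -> spend. Needs boto3 and AWS
    credentials. Untagged spend surfaces under an empty tag value,
    i.e. the 'unallocated' bucket."""
    import boto3
    ce = boto3.client("ce", region_name="us-east-1")
    resp = ce.get_cost_and_usage(**cost_by_tag_request(start, end, tag_key))
    return {
        g["Keys"][0]: float(g["Metrics"]["UnblendedCost"]["Amount"])
        for g in resp["ResultsByTime"][0]["Groups"]
    }
```

The empty-tag bucket from this query corresponds to the “unallocated spend” that the dashboard above highlights.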

We are excited to release the general availability of enterprise cost allocation and chargeback support in Botmetric for AWS cloud. Now Botmetric customers can better manage their budgeting and internal reporting processes.

Please let us know if we can improve anything in the Botmetric Chargeback module to make it more useful for your business. Do take a 14-day test drive, if you have not yet tried Botmetric.

If you want to know the several other new features available in the new release, read the expert blog post by Botmetric Customer Success Manager, Richa Nath. For other interesting news from us, stay tuned on Twitter, Facebook, and LinkedIn!

Make AWS Cloud Management a Breeze with New Botmetric Cloud Reporting Module

In an effort to help you quickly find your AWS cloud reports from one centralized module without scrolling, and to counter endless searching for what you need, we came up with a completely revamped AWS cloud reporting module in the ‘All-New’ Botmetric.

Over the years, we’ve become more and more aware that the key to optimizing cloud infrastructure is to avoid common mistakes and implement best practices using a right-sized cloud management console that aggregates all the key data in one place. With the new Cloud Reporting feature, you get an aggregated view of your AWS cloud health in a single pane, making AWS cloud management a breeze.

Key benefits of Botmetric’s new AWS cloud reporting module include:

  • A centralized reporting module

We brought together all reports for Botmetric’s new suite of applications in one centralized module so that you can find everything you need in one place. You can now quickly filter the reports by application type (Cost & Governance, Security & Compliance, or Ops & Automation), providing more clarity into your AWS spend, security, governance, and more. In essence, the new Cloud Reporting module saves time and reduces the manual search overhead for you.

Centralized reporting module in 'all-new' Botmetric

  • Daily, weekly, and monthly split of reports

Under the new reporting module, you can quickly locate all daily, weekly, and monthly reports for Cost & Governance, Security & Compliance, or Ops & Automation using the simple left-side navigation filtering. You no longer need to scroll through all reports to find what you need.

View Split of Daily, Weekly, and Monthly Reports in Botmetric AWS Cloud Management

 

  • Filter reports by AWS accounts or month for deep drill down of data:

The new Cloud Reporting module now includes quick filtering support too, so that you can find the reports for a specific AWS account or month. This will help you navigate to historic reports and switch between different AWS accounts from a single pane.

Filter Reports by AWS Accounts or Month with Botmetric AWS Cloud Management Platform

What can you expect in the near future in Botmetric Cloud Reporting module for AWS cloud?

  • Create your own reports: Many of our customers asked us for the ability to create custom reports. We are working on this request. To this end, we’ll be rolling out a custom reporting designer in Q1 2017.
  • Share reports via email: We will be soon launching report sharing support so that you can send specific reports to different stakeholders in your company, in a snap.

You can access the new Cloud Reporting module in the top right-hand corner of the Botmetric Cloud Management Platform, next to Notifications bar. We would love to hear how useful this module has been for you and how we can make it better to suit your needs.

We’ve also rolled out new compliance audit policies and improved security assessment in Botmetric to further simplify things and help you resolve the most critical audit vulnerabilities. Give a quick shout out to us about the ‘All-New’ Botmetric. If you have not tried it, sign up for a 14-day test drive to experience how easy AWS cloud management is with Botmetric.

Do read the expert blog post by Botmetric Customer Success Manager, Richa Nath, to unearth several other new features available in the new release. Stay tuned with us on Twitter, Facebook, and LinkedIn for other interesting news from us!

Step-Up Your AWS Cloud Security With New Botmetric Compliance Audit Policies

Knowing your cloud compliance and security vulnerabilities in real time is the best way to bulletproof your AWS infra and ensure business continuity. With the new Botmetric Security & Compliance, you can quickly assess and mitigate vulnerabilities in real time and adopt comprehensive security management for your cloud. And we are super excited to announce support for compliance audit policies in this application. Using it, you can improve your AWS cloud security and quickly identify critical vulnerabilities from various perspectives, like Data Security, Disaster Recovery, Cloud Performance, and Network.

Here’re the new compliance audit policies in Botmetric that help mitigate critical vulnerabilities and ensure complete AWS cloud security:

  • Specific audit policies for disaster recovery, performance, and security: There are now four default audit policies. Three cover compliance from the cloud disaster recovery, cloud performance, and cloud security audit perspectives, and a fourth includes all the audit items for a comprehensive assessment of your entire AWS cloud.

 

Specific audit policies for disaster recovery, performance, and security

 

  • Critical vulnerabilities assessment: You can quickly find your AWS cloud’s critical vulnerabilities across different compliance policies. This allows you to narrow down the Botmetric-identified issues to the data security assessment or the network security assessment.

 

Critical Vulnerabilities Assessment in Botmetric Security & Compliance Solution

 

  • Support for compliance audit by AWS service: We have included quick filtering support so that you can find issues in a specific AWS service, like EC2, RDS, or VPC, without going over the complete list of audit findings. This will help you save time and focus on what matters most for your business from a security standpoint.

 

Step-Up Your AWS Cloud Security With New Botmetric Compliance Audit Policies

 

  • AWS cloud security audit by network, data, server, and infrastructure dimensions: We have introduced different compliance audit categories, like Network Security, Data Security, Server Security, and Infrastructure Security of the AWS cloud. This allows you to quickly find the relevant security issues for your data or network on AWS cloud without worrying about everything else.

 

Security audit by network, data, server and infrastructure dimensions in Botmetric Security & Compliance Solution
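To make the idea of narrowing audit findings by service, category, and severity concrete, here is a minimal Python sketch. It is purely illustrative: the field names and the sample findings are hypothetical, not Botmetric's actual data model.

```python
# Hypothetical audit findings of the kind a cloud security audit surfaces.
findings = [
    {"service": "EC2", "category": "Server Security",
     "severity": "critical", "issue": "Security group open to 0.0.0.0/0"},
    {"service": "RDS", "category": "Data Security",
     "severity": "critical", "issue": "Automated backups disabled"},
    {"service": "VPC", "category": "Network Security",
     "severity": "warning", "issue": "Flow logs not enabled"},
]

def filter_findings(items, service=None, category=None, severity=None):
    """Narrow a list of findings the way service/category filters do:
    any argument left as None matches everything."""
    def keep(f):
        return ((service is None or f["service"] == service) and
                (category is None or f["category"] == category) and
                (severity is None or f["severity"] == severity))
    return [f for f in items if keep(f)]
```

For example, `filter_findings(findings, severity="critical")` returns only the two critical issues, and adding `service="RDS"` narrows it further to the data security finding.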

We hope the new compliance audit policies and improved security assessments in Botmetric help you resolve your most critical audit vulnerabilities quickly. Please write to us with your feedback on what we can do better and where we can improve. If you are not using Botmetric already, sign up today for a 14-day trial.

Do read the expert blog post by Botmetric Customer Success Manager, Richa Nath, to know what else is new in Botmetric 2.0. Stay tuned with us on Twitter, Facebook, and LinkedIn for other interesting news from us!

P.S.: As a quick read, we've collated the top 21 AWS cloud security best practices that will help improve your AWS cloud security posture. Bookmark the blog page if you are an IT decision maker facing security challenges in your cloud infrastructure.

 

Introducing Data Transfer Analysis for AWS Cost Management in ‘All New’ Botmetric

Editor’s Note: This exclusive launch blog post is authored by Vijay Rayapati, CEO at Botmetric.

We are thrilled to release the Data Transfer Analysis feature in our "All-New" Botmetric for AWS Cost Management. This feature, one of the most requested by our customers, provides insights into your bandwidth costs on the AWS cloud.

Many of our customers use a variety of AWS services like EC2, S3, ELB, RDS, CloudFront, ElastiCache, etc. in their production environments, and they wanted to understand how their data transfer costs split into inter-region bandwidth, out-of-AWS bandwidth, and so on. To this end, we added Data Transfer Analysis to the Analyze module of the Botmetric Cost Management & Governance application.

Here's how the new Data Transfer Analysis feature breaks down your bandwidth costs into a variety of fine-grained insights:

1. Know the total bandwidth cost in AWS: You can see the overall bandwidth costs across your account or split them by region using filters. You can also use the accounts filter to see the total bandwidth cost for each of the AWS accounts under your master payer account.

Know the total bandwidth cost in AWS

2. Know the data transfer cost by services: You can view data transfer costs by AWS service, which helps you understand how the bandwidth cost splits across the different services you use.

Know the data transfer cost by services

3. Know the bandwidth cost by transfer type: You can drill down into cost by data transfer type: Intra Region (within an AWS region, between multiple availability zones), Inter Region (between different AWS regions), and Public Outbound (out of AWS to the internet). You can also split this cost by a specific account or a specific service, such as RDS, using the filters.

Know the bandwidth cost by transfer type

4. Know the data transfer cost by resources: You can see bandwidth costs by specific AWS resources, so you can identify which servers, ELBs, RDS instances, etc. are driving your highest bandwidth costs.

Know the data transfer cost by resources
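Under the hood, this kind of breakdown amounts to classifying billing line items by their usage type. The sketch below is a hypothetical illustration, not Botmetric's implementation; the usage-type strings follow common AWS billing conventions, but exact formats vary by service and region.

```python
import re
from collections import defaultdict

# Hypothetical billing line items: (usage_type, cost_in_usd).
line_items = [
    ("DataTransfer-Out-Bytes", 120.50),      # public outbound
    ("DataTransfer-Regional-Bytes", 14.20),  # intra-region (between AZs)
    ("USE1-USW2-AWS-Out-Bytes", 33.75),      # inter-region (us-east-1 -> us-west-2)
    ("DataTransfer-In-Bytes", 0.00),         # inbound, typically free
]

# Region-to-region usage types commonly look like "<SRC>-<DST>-AWS-Out-Bytes".
INTER_REGION = re.compile(r"^[A-Z]{2,4}\d-[A-Z]{2,4}\d-AWS-(In|Out)-Bytes$")

def transfer_type(usage_type):
    """Bucket a usage-type string into a transfer-type category."""
    if "Regional-Bytes" in usage_type:
        return "Intra Region"
    if INTER_REGION.match(usage_type):
        return "Inter Region"
    if "DataTransfer-Out-Bytes" in usage_type:
        return "Public Outbound"
    return "Other"

def cost_by_transfer_type(items):
    """Sum line-item costs per transfer-type bucket."""
    totals = defaultdict(float)
    for usage_type, cost in items:
        totals[transfer_type(usage_type)] += cost
    return dict(totals)
```

Running `cost_by_transfer_type(line_items)` on the sample data attributes each cost to its Intra Region, Inter Region, or Public Outbound bucket, which is the essence of a transfer-type drill-down.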

We hope this feature helps you better understand your cloud spend across the different dimensions of bandwidth cost. Please write to us with your feedback on what we can do better and where we can improve. If you are not using Botmetric already, sign up today for a 14-day free trial.

Do read the expert blog post by Botmetric Customer Success Manager, Richa Nath, to unearth several other new features in the new release. Stay tuned with us on Twitter, Facebook, and LinkedIn for other interesting news from us!

Streamlining Your Startup Business with AWS Cloud

Editor’s Note: This is a guest blog post authored by Nate Vickery, a business technology expert.

A few years ago, cloud computing emerged as perhaps the most powerful and useful technology in the IT sector, and many businesses, government agencies, and even individual users have already started leveraging its potential. In fact, according to a recent IDG Enterprise Cloud Computing Survey, cloud adoption is still on the rise.

The report revealed that roughly 72% of organizations today have at least one application or a piece of their infrastructure in the cloud, up from 57% in 2012. What's more, last year Forbes reported that 37% of organizations in the United States have already fully embraced cloud computing, and that number is expected to rise to around 80% by the end of the decade.

Startups and the Cloud

So, why exactly should your company migrate to the cloud? Well, for starters, ever since the early days of the cloud, one of its main appeals was (and still is) that it's considerably cheaper than an on-premises environment, so it can save your business a substantial amount of money. According to Cloudworks, one of the main reasons the cloud market will surpass $200 billion in the next two years is its affordability. In addition, around 80% of cloud adopters firmly believe it helps their business reduce IT costs.

Now, while there are some concerns about data security, most small businesses are more than optimistic about the technology because it allows them to stay in the race with their bigger competitors. A recent Rackspace survey revealed that in the past few years, SMBs have managed to increase their profits by up to 75% as a result of cloud computing. Moreover, almost 85% of startups were able to increase their reinvestment back into the business by up to 50%.

What can AWS do for Your Startup?

In the last couple of years, AWS (Amazon Web Services) has become one of the go-to places for small businesses looking for hosting and a plethora of other services for their applications. Now, say you have a tech startup: you need to put as much energy as possible into making your product or services the best they can be. If you are squandering that energy allocating bandwidth, managing cooling, or racking servers, you can't add unique value for your customers.

That unique value is what customers pay for, which means that when you focus your attention and energy properly, you'll win more customers and grow faster. According to a recent Pacific Crest survey, businesses of all sizes that use third-party hosting services such as AWS are expanding twice as fast as businesses that self-host. The survey also revealed that these businesses spend around 25% less on hosting.

It's also worth mentioning that many startups have managed to save money using tools such as Botmetric, which let them monitor and minimize their costs through analytics. However, before we talk about the perks of AWS, we should mention that cloud computing requires a high-speed Internet connection. If your connection is sluggish, check the speeds of the Internet providers near you and see which offers the fastest connection.

An Overabundance of Services

Whether your startup is still in the concept phase or you've already started serving customers, AWS surely has a solution to serve your business needs. Some of its services include:

  • Good Pricing Solutions

Low, variable pay-as-you-go pricing, tiered pricing for S3, and a good number of tools, such as the aforementioned Botmetric, that can help you manage your costs.

  • Software Development Kits

If you want to get your company up and running as quickly as possible, you cannot go wrong with AWS. It provides all the tools you need to start immediately, using technology you're already familiar with.

  • Constant Innovation

Amazon is one of the biggest innovators in cloud services and releases new features on a regular basis. This lets you build your infrastructure on a broad platform and reduce your product's time to market (TTM).

  • Special Startup Perks

Finally, some services are specifically designed to help startups grow: a free usage tier, a month of support, technical training for staff members, and other exclusive offers.

Final Thoughts

As you can see, Amazon Web Services offers startups everything they need from the very beginning and allows them to develop fully without ever outgrowing its capabilities. With Auto Scaling, any organization can maintain application availability and scale EC2 capacity up or down automatically. That is why hundreds (maybe even thousands) of startups around the globe use AWS to develop and scale their business.
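To give a feel for what Auto Scaling does conceptually, here is a toy threshold rule in Python. It is purely illustrative: real EC2 Auto Scaling policies are configured in AWS, not hand-coded like this, and the threshold values below are made up.

```python
def desired_capacity(current, cpu_utilization, min_size=2, max_size=10,
                     scale_up_at=70.0, scale_down_at=30.0):
    """Toy scaling rule in the spirit of EC2 Auto Scaling:
    add an instance above the high-CPU threshold, remove one below
    the low threshold, and always stay within [min_size, max_size]."""
    if cpu_utilization > scale_up_at:
        current += 1
    elif cpu_utilization < scale_down_at:
        current -= 1
    return max(min_size, min(max_size, current))
```

Called periodically with the fleet's average CPU, a rule like this grows the fleet under load and shrinks it when idle, while the min/max clamp protects both availability and cost.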

-END-

Guest Blogger’s Profile:

Nate Vickery is a business technology expert mostly focused on future trends applicable to SMB and startup marketing and management processes. He has also been blogging about the aforementioned topics for the past few years on various leading sites and communities. In the little free time left, Nate edits a business-oriented website, Bizzmarkblog.com.

Assuage Alert Fatigue Mess with DevOps Intelligence

Alert fatigue is considered the #1 pain point for both traditional IT teams and modern DevOps engineers, especially those who provide operational support for their applications and production infrastructure.

And with the increased adoption of cloud and the emergence of microservices architecture for building new-generation systems, we are quadrupling the number of metrics monitored: server metrics, container metrics, app/web/DB server metrics, application metrics. Why? Because of monitoring hell, the need to monitor many more things than we did in the traditional world. And this problem of alerts hell is only going to grow exponentially.

Undoubtedly, DevOps is maturing and there is a plethora of alert email management tools available. However, they do not solve alert overload (especially for non-critical events or events that require no action), so engineers increasingly become numb to them. In other words, the "crying wolf" syndrome sets in, wherein they start ignoring even critical warnings, assuming they are meaningless alerts. The whole objective of sending alert emails is thus defeated.

To this end, Botmetric analyzed what DevOps and operations engineers want in place of these alert emails, and unearthed a few interesting facts:

  • Ability to separate signal from noise
  • Need for scope-aware alerting to reduce the alert flood
  • Pressing need for alert intelligence and event diagnostics over emails
  • Requirement for alert event remediation with workflow handlers
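As one illustration of "signal over noise", here is a minimal sketch of an alert suppressor that drops non-actionable severities and de-duplicates repeated alerts within a time window. The code is hypothetical, not Botmetric's engine; class and field names are illustrative.

```python
from datetime import datetime, timedelta

class AlertSuppressor:
    """Suppress duplicate and non-actionable alerts so only new or
    critical signals reach the engineer."""

    def __init__(self, window_minutes=30):
        self.window = timedelta(minutes=window_minutes)
        self.last_seen = {}  # (source, alert_name) -> last notified time

    def should_notify(self, source, name, severity, now=None):
        now = now or datetime.utcnow()
        if severity == "info":           # non-actionable noise: drop
            return False
        key = (source, name)
        last = self.last_seen.get(key)
        if last is not None and now - last < self.window:
            return False                 # duplicate within window: suppress
        self.last_seen[key] = now
        return True
```

With a 30-minute window, a repeated `cpu_high` alert from the same host is delivered once and then muted until the window elapses, while informational events never page anyone at all.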

To delve deeper, read this post by Botmetric CEO Vijay Rayapati. In it, Vijay throws light on what DevOps and operations engineers seek in place of these alert emails, and how DevOps intelligence can be used to fix alerts hell in the cloud world.

A Crisp Cheat-Sheet to 2016 AWS re:Invent Announcements

AWS, yet again, reigns over the public cloud arena from all dimensions, with a plethora of new announcements at AWS re:Invent 2016. With digital disruption treading into every vertical across the globe, AWS has set the stage to be a key part of this evolution. The congregation, which had over 35,000 attendees, witnessed 25+ new announcements and 10+ updates from Amazon vice president and chief technology officer Werner Vogels. And to you, we present here a cool, zippy cheat sheet of all the new and updated epoch-making announcements at 2016 AWS re:Invent:

NEW ANNOUNCEMENTS

[mk_mini_callout title=”For Compute “]

[/mk_mini_callout]

Amazon EC2 C5: A new powerful compute optimized instance featuring the highest performing processors and the lowest price/compute performance in EC2. Read More »

Amazon EC2 Elastic GPUs: The debutant attachable Elastic GPUs, which are cost effective and provide a flexible way to add graphics acceleration to Amazon EC2 Instances. Read More »

Amazon EC2 F1: A compute instance with programmable hardware for application acceleration. Read More »

Amazon EC2 I3: The latest generation of storage optimized high I/O instances, featuring NVMe based SSDs for the most demanding I/O intensive relational, NoSQL, transactional, and analytics workloads. Read More »

Amazon EC2 R4: The latest generation of memory-optimized instances, 20% more price performant than R3 instances. Read More »

Amazon EC2 T2: The newest Amazon EC2 burstable-performance instances, now available in t2.xlarge and t2.2xlarge sizes. Read More »

Amazon Lightsail: Helps launch and manage a virtual private server with AWS starting at $5/month. Read More »

AWS Batch: Enables developers, scientists, and engineers to run hundreds of thousands of batch computing jobs on AWS. Read More »

Amazon EC2 Systems Manager: Helps automate key management tasks like collecting system inventory, applying OS patches, automating image creation, and configuring OS and applications at scale. Additionally, users can record and govern their instances' software configuration with AWS Config.

C# support on Lambda: Enables use of C# with AWS Lambda. Read More »

Lambda@Edge: Allows developers to deliver a low latency user experience for customized web applications by enabling them to run code at CloudFront edge locations without provisioning or managing servers. Read More »

[mk_mini_callout title=”For Management”]

[/mk_mini_callout]

AWS Personal Health Dashboard: Provides a personalized view of AWS service health.

AWS OpsWorks for Chef Automate: Provides a fully managed Chef server and a suite of automation tools for continuous deployment, automated testing for compliance, and a user interface for visibility into nodes. Read More »

AWS Organizations: Helps IT teams to manage multiple AWS accounts. Read More »

[mk_mini_callout title=”For Security”]

[/mk_mini_callout]

AWS Shield: A managed DDoS protection service that safeguards web applications using Elastic Load Balancing (ELB), Amazon CloudFront, and Amazon Route 53. Read More »

[mk_mini_callout title=”For Migration”]

[/mk_mini_callout]

AWS Snowmobile: An exabyte-scale data transfer service used to move extremely large amounts of data. Read More »

[mk_mini_callout title=”For Containers”]

[/mk_mini_callout]

Blox: A collection of open source software that enables customers to build custom schedulers and integrate third-party schedulers on top of ECS. Read More »

[mk_mini_callout title=”For Database”]

[/mk_mini_callout]

Amazon Aurora with PostgreSQL Compatibility: Delivers up to twice the performance of a typical PostgreSQL database, along with the features you love in Amazon Aurora. Read More »

[mk_mini_callout title=”For Developers looking for Tools”]

[/mk_mini_callout]

AWS CodeBuild: Helps build and test code in the cloud as you scale. It can be used with other AWS services; and it is also integrated with AWS Elastic Beanstalk to enable testing of Elastic Beanstalk apps. Read More »

AWS X-Ray: Helps developers analyze and debug distributed applications in production, such as those built using a microservices architecture. It also shows how the underlying services are performing, so you can identify and troubleshoot the root cause of performance issues and errors. Read More »

[mk_mini_callout title=”For Hybrid Environment”]

[/mk_mini_callout]

AWS Greengrass: Helps run IoT applications seamlessly across the AWS cloud and local devices using AWS Lambda. Read More »

AWS IoT Button: For improved developer experience. Read More »

AWS Snowball Edge: A petabyte-scale data transfer service with on-board storage and compute. Read More »

VMware on AWS Cloud: Helps run VMware workloads on the AWS Cloud. Read More »

[mk_mini_callout title=”For Mobile”]

[/mk_mini_callout]

Amazon Pinpoint: Helps run targeted push notification campaigns to improve user engagement in mobile apps. Read More »

AWS Mobile Hub integration with Amazon Lex: Helps you add speech and text interaction, in addition to touch, to your mobile apps. Read More »

[mk_mini_callout title=”For Analytics”]

[/mk_mini_callout]

Amazon Athena: An interactive query service that helps analyze data in Amazon S3 using standard SQL.

AWS Glue: A fully managed ETL service that simplifies and automates data discovery, transformation, and job scheduling tasks. Read More »

[mk_mini_callout title=”For Application Services”]

[/mk_mini_callout]

AWS Step Functions: Helps coordinate the components of distributed applications and microservices using visual workflows. Read More »

[mk_mini_callout title=”For Artificial Intelligence (AI)”]

[/mk_mini_callout]

Amazon Lex: A deep learning engine (which powers Alexa) for building conversational interfaces using voice and text. Read More »

Amazon Polly: Built to revolutionize speech-enabled products, it turns text into lifelike speech, currently offering support for 24 languages and 47 lifelike voices. Read More »

Amazon Rekognition: Built to revolutionize face and object recognition, it helps you add image analysis to your applications: detect objects, scenes, and faces in images, and search and compare faces. Read More »

KEY UPDATES

  • AWS Config integration with Amazon EC2 Systems Manager: Facilitates continuous monitoring and governance of software on EC2 instances and on-premises systems. Read More »
  • Amazon AppStream 2.0: A fully managed, secure application streaming service that can stream desktop applications from AWS to any device, without rewriting them. Read More »
  • IPv6 support for EC2 instances in Amazon VPC. Read More »
  • Enhanced context for custom authorizers in Amazon API gateway. Read More »
  • AWS CloudFormation updated with resource coverage. Read More »
  • Availability of Regional Edge Caches for Amazon CloudFront: A new type of edge location for CloudFront that further improves performance. Read More »
  • Amazon Aurora and Amazon RDS for PostgreSQL are now HIPAA-eligible services. Read More »
  • Addition of AWS Service Delivery Program to AWS Partner Network. Read More »
  • Amazon Aurora now supports t2.medium instances. Read More »
  • Addition of new Edge Locations in Minneapolis, MN; Berlin, Germany; and London to Amazon CloudFront. Read More »

Give us a shout out on Twitter, Facebook, and LinkedIn if you think we have missed any announcement, or if you want to reach an AWS expert who can clarify your doubts about these new AWS announcements and updates. We will continue to work on improving AWS cloud management for you. Do stay tuned with us for other interesting news too!