Botmetric Brings Unique Click to Fix for AWS Security Group Audit

In today’s day and age, deploying solutions and products on the cloud has become the new norm. However, managing your cloud infrastructure, implementing critical cloud security controls, and preventing vulnerabilities can become quite challenging.

Security & Compliance

Botmetric’s Security & Compliance simplifies discovering and rectifying threats and shortcomings in your AWS infrastructure by providing a comprehensive set of audits and recommendations. This saves a lot of time and makes tasks such as eliminating unused security groups easy.

Botmetric’s Security & Compliance instills a culture of continuous security and DevSecOps by automating industry best practices for cloud compliance and security. For an AWS user, this simplifies discovering and rectifying threats.

Remediation of Security Threats with Botmetric

At Botmetric, we believe in simplifying cloud management for our customers. To that end, we provide a ‘click to fix’ feature for many of our Security & Compliance audits. This feature enables users to implement the best practices recommended by Botmetric with the click of a button. Besides saving a lot of time and effort, it also eliminates the possibility of human error. Moreover, rather than manually fixing each and every resource, Botmetric allows you to select multiple resources and fix them all at once.

Click to Fix Security Group Audit

In an effort to allow our users to easily secure their cloud, we have recently added the ‘click to fix’ feature for all Botmetric security group audits.

Why Did Botmetric Build Click to Fix for AWS Security Group Audits?

Security groups in AWS provide an efficient way to control access to resources on your network. The rules you define in security groups should be scrutinized, for the simple reason that you could end up granting wide-open access, resulting in an increased risk of security breaches. The security group audits provided by Botmetric discover issues such as security groups having rules with TCP/UDP ports open to the public, servers open to the public, port ranges open to the public, and so on. These are serious security loopholes that could leave your cloud open to malicious attacks.

Botmetric’s ‘click to fix’ feature for AWS security group audits deletes the vulnerable security group rule, thereby securing access to your cloud resources and protecting your cloud infrastructure.
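
Under the hood, a fix like this amounts to revoking the offending ingress rule. Below is a minimal boto3 sketch of the equivalent AWS API call, as an illustration rather than Botmetric’s actual implementation; the security group ID and rule are hypothetical placeholders.

```python
import boto3

# Hypothetical example: revoke an SSH rule that is open to the world.
# This mirrors what a "click to fix" on a security group audit boils down to.
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.revoke_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],  # the vulnerable world-open rule
        }
    ],
)
```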

Botmetric: Click to Fix

List of AWS Security Group Audits provided by Botmetric

  • Database Ports

Protecting database ports is crucial, as you wouldn’t want leaked access or open ports to your databases. Botmetric scans for database ports open to the public, to specific IPs, and to private subnets. Securing these ensures your database ports stay safe within a security group.

  • Server Ports

These are equally essential, as many security issues and vulnerabilities have been caused by exposed server ports. Botmetric secures server ports open to the public, to specific IPs, and to private subnets.

  • TCP, UDP, and ICMP Ports

Almost everything we do on the internet relies on these protocols. Here, Botmetric secures open ports exposed to both the public and specific IPs.

A few more security group controls, such as All Traffic and Port Range, are also covered by the audits.

How to Enable Click to Fix for AWS Security Group Audits?

To use click to fix for security group audits, please ensure that you have added the “ec2:RevokeSecurityGroupIngress” permission to the policy of the role whose ARN is configured for Security & Compliance.
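
If you script your IAM setup rather than editing it in the console, a sketch like the following adds that permission as an inline policy on the role; the role and policy names are hypothetical placeholders for whatever role’s ARN you have configured.

```python
import json
import boto3

iam = boto3.client("iam")

# Allow only the single action Botmetric needs for this fix.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:RevokeSecurityGroupIngress",
            "Resource": "*",
        }
    ],
}

iam.put_role_policy(
    RoleName="BotmetricAuditRole",      # hypothetical role name
    PolicyName="AllowRevokeSGIngress",  # hypothetical policy name
    PolicyDocument=json.dumps(policy),
)
```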

The Bottom Line:

At Botmetric, we will continue to add more AWS cloud security and compliance features. We will soon publish a detailed post on the Click to Fix feature for several key AWS security audits. Until then, stay tuned.

This is a newly launched feature by Botmetric. To explore it, take up a 14-day trial. If you have any questions on AWS security or AWS security best practices, just drop a line below in the comment section or tweet to us at @BotmetricHQ.

10 Design Principles for AWS Cloud Architecture

Cloud computing is one of the boons of technology, making storage and access of data easier and more efficient. For the cloud to be reliable, its architecture needs to be impeccable: reliable, secure, high performing, and cost efficient. A good cloud architecture design should take advantage of some of the inherent strengths of cloud computing, such as elasticity and the ability to automate infrastructure management. Cloud architecture needs to be well thought out because it forms the backbone of a vast network; it cannot be designed arbitrarily.

There are certain principles that one needs to follow to make the most of the tremendous capabilities of the Cloud. Here are ten design principles that you must consider while architecting for AWS cloud.

Think Adaptive and Elastic

The architecture of the cloud should be such that it supports growth of users, traffic, or data size with no drop in performance. It should also allow for linear scalability when and where additional resources are added. The system needs to be able to adapt and proportionally serve additional load. Whether the architecture includes vertical scaling, horizontal scaling, or both is up to the designer, depending on the type of application or data to be stored. But your design should be equipped to take maximum advantage of the virtually unlimited on-demand capacity of cloud computing.

Consider whether your architecture is being built for a short-term purpose, in which case you can implement vertical scaling. Otherwise, you will need to distribute your workload across multiple resources to build internet-scale applications by scaling horizontally. Either way, your architecture should be elastic enough to adapt to the demands of cloud computing.

Also, knowing when to use stateless applications, stateful applications, stateless components, and distributed processing makes your cloud much more effective in how it stores and serves data.

Treat servers as disposable resources

One of the biggest advantages of cloud computing is that you can treat your servers as disposable resources instead of fixed components. However, resources should always be consistent and tested. One way to enable this is to implement the immutable infrastructure pattern, which enables you to replace the server with one that has the latest configuration instead of updating the old server.

It is important to keep the configuration and coding as an automated and repeatable process, either when deploying resources to new environments or increasing the capacity of the existing system to cope with extra load. Bootstrapping, Golden Images or a Hybrid of the two will help you keep the process automated and repeatable without any human errors.

Bootstrapping can be executed after launching an AWS resource with default configuration. This will let you reuse the same scripts without modifications.

But in comparison, the Golden Image approach results in faster start times and removes dependencies to configuration services or third-party repositories. Certain AWS resource types like Amazon EC2 instances, Amazon RDS DB instances, Amazon Elastic Block Store (Amazon EBS) volumes, etc., can be launched from a golden image.

When suitable, use a combination of the two approaches, where some parts of the configuration get captured in a golden image, while others are configured dynamically through a bootstrapping action.

Not limited to the individual resource level, you can apply techniques, practices, and tools from software development to make your whole infrastructure reusable, maintainable, extensible, and testable.
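
As a rough illustration of this hybrid approach, the boto3 sketch below launches an instance from a golden image and applies last-mile configuration through a bootstrapping user-data script; the AMI ID and script contents are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Last-mile configuration applied at boot, on top of a pre-baked golden AMI.
bootstrap_script = """#!/bin/bash
yum update -y                        # pick up patches newer than the image
/opt/myapp/bin/configure --env prod  # hypothetical app-specific setup
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder golden-image AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=bootstrap_script,  # boto3 base64-encodes this automatically
)
```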

Automate, Automate, Automate

Unlike traditional IT infrastructure, the cloud enables you to automate a number of events, improving both your system’s stability and the efficiency of your organization. Some of the AWS services you can use for automation are:

  • AWS Elastic Beanstalk: This service is the fastest and simplest way to get an application up and running on AWS. You can simply upload your application code, and the service automatically handles all the details, such as resource provisioning, load balancing, auto scaling, and monitoring.
  • Amazon EC2 Auto Recovery: You can create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically recovers it if it becomes impaired. A word of caution: during instance recovery, the instance is migrated through an instance reboot, and any data held in memory is lost.
  • Auto Scaling: With Auto Scaling, you can maintain application availability and scale your Amazon EC2 capacity up or down automatically according to conditions you define.
  • Amazon CloudWatch Alarms: You can create a CloudWatch alarm that sends an Amazon Simple Notification Service (Amazon SNS) message when a particular metric goes beyond a specified threshold for a specified number of periods.
  • Amazon CloudWatch Events: The CloudWatch service delivers a near real-time stream of system events that describe changes in AWS resources. Using simple rules that you can set up in a couple of minutes, you can easily route each type of event to one or more targets: AWS Lambda functions, Amazon Kinesis streams, Amazon SNS topics, etc.
  • AWS OpsWorks Lifecycle events: AWS OpsWorks supports continuous configuration through lifecycle events that automatically update your instances’ configuration to adapt to environment changes. These events can be used to trigger Chef recipes on each instance to perform specific configuration tasks.
  • AWS Lambda Scheduled events: These events allow you to create a Lambda function and direct AWS Lambda to execute it on a regular schedule.

As an architect for the AWS Cloud, these automation resources are a great advantage to work with.
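
To make one of these concrete, here is a minimal sketch of the fourth item above: a CloudWatch alarm that notifies an SNS topic when average CPU utilization stays above 80 percent for two consecutive periods. The instance ID and topic ARN are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-demo",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,              # evaluate 5-minute averages...
    EvaluationPeriods=2,     # ...over two consecutive periods
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder
)
```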

Implement loose coupling

IT systems should ideally be designed in a way that reduces inter-dependencies. Your components need to be loosely coupled to avoid changes or failure in one of the components from affecting others.

Your infrastructure also needs to have well defined interfaces that allow the various components to interact with each other only through specific, technology-agnostic interfaces. Modifying any underlying operations without affecting other components should be made possible.

In addition, by implementing service discovery, smaller services can be consumed without prior knowledge of their network topology details through loose coupling. This way, new resources can be launched or terminated at any point of time.

Loose coupling between services can also be done through asynchronous integration. It involves one component that generates events and another that consumes them. The two components do not integrate through direct point-to-point interaction, but usually through an intermediate durable storage layer. This approach decouples the two components and introduces additional resiliency. So, for example, if a process that is reading messages from the queue fails, messages can still be added to the queue to be processed when the system recovers.
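
A minimal sketch of this pattern, with Amazon SQS as the durable intermediate layer and a placeholder queue URL, might look like this:

```python
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder

# Producer: emits an event and moves on; it never talks to the consumer directly.
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps({"order_id": 42}))

# Consumer: polls independently; if it crashes, messages simply wait in the queue.
resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    event = json.loads(msg["Body"])         # handle the event here
    sqs.delete_message(QueueUrl=QUEUE_URL,  # acknowledge only after success
                       ReceiptHandle=msg["ReceiptHandle"])
```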

Lastly, building applications in such a way that they handle component failure in a graceful manner helps you reduce impact on the end users and increase your ability to make progress on your offline procedures.

Focus on services, not servers

A wide variety of underlying technology components are required to develop, manage, and operate applications. Your architecture should leverage a broad set of compute, storage, database, analytics, application, and deployment services. On AWS, there are two ways to do that. The first is through managed services that include databases, machine learning, analytics, queuing, search, email, notifications, and more. For example, with Amazon Simple Queue Service (Amazon SQS) you can offload the administrative burden of operating and scaling a highly available messaging cluster, while paying a low price for only what you use. Not only that, Amazon SQS is inherently scalable.

The second way is to reduce the operational complexity of running applications through serverless architectures. It is possible to build both event-driven and synchronous services for mobile, web, analytics, and the Internet of Things (IoT) without managing any server infrastructure.

Database is the base of it all

On AWS, managed database services help remove the constraints around licensing costs and add the ability to support diverse database engines, both of which were problems with traditional IT infrastructure. Keep in mind that access to the information stored in these databases is the main purpose of cloud computing.

There are three different categories of databases to keep in mind while architecting:

  • Relational databases – Data here is normalized into tables and also provided with powerful query language, flexible indexing capabilities, strong integrity controls, and the ability to combine data from multiple tables in a fast and efficient manner. They can be scaled vertically and are highly available during failovers (designed for graceful failures).
  • NoSQL databases – These databases trade some of the query and transaction capabilities of relational databases for a more flexible data model that seamlessly scales horizontally. NoSQL databases utilize a variety of data models, including graphs, key-value pairs, and JSON documents. NoSQL databases are widely recognized for ease of development, scalable performance, high availability, and resilience.
  • Data warehouse – A specialized type of relational database, optimized for analysis and reporting of large amounts of data. It can be used to combine transactional data from disparate sources making them available for analysis and decision-making.
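
To give the NoSQL category above a concrete shape, here is a minimal key-value sketch against Amazon DynamoDB, assuming a hypothetical users table keyed on user_id already exists.

```python
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("users")  # hypothetical table with partition key "user_id"

# Write a flexible, schema-light item; attributes can vary per item.
table.put_item(Item={"user_id": "42", "name": "Ada", "plan": "pro"})

# Read it back by key; key-value reads like this scale horizontally.
item = table.get_item(Key={"user_id": "42"}).get("Item")
print(item)
```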

Be sure to remove single points of failure

A system is highly available when it can withstand the failure of an individual or multiple components (e.g., hard disks, servers, network links etc.). You can think about ways to automate recovery and reduce disruption at every layer of your architecture. This can be done with the following processes:

  • Introduce redundancy to remove single points of failure, by having multiple resources for the same task. Redundancy can be implemented in either standby mode (functionality is recovered through failover while the resource remains unavailable) or active mode (requests are distributed to multiple redundant compute resources, and when one of them fails, the rest can simply absorb a larger share of the workload).
  • Detection and reaction to failure should both be automated as much as possible.
  • It is crucial to have durable data storage that protects both data availability and integrity. Redundant copies of data can be introduced through synchronous, asynchronous, or quorum-based replication.
  • Automated multi-data-center resilience is practiced through Availability Zones across data centers, which reduce the impact of failures.
  • Fault isolation can be added to traditional horizontal scaling by sharding (a method of grouping instances into groups called shards, instead of sending the traffic from all users to every node as in the traditional IT structure).

Optimize for cost

At the end of the day, it often boils down to cost. Your cloud architecture should be designed for cost optimization by keeping in mind the following principles:

  • You can reduce cost by selecting the right instance types, configurations, and storage solutions to suit your needs.
  • Implementing Auto Scaling, so that you can scale horizontally when required or scale down when necessary, can be done without any extra cost.
  • Taking advantage of the variety of Purchasing options (Reserved and spot instances) while buying EC2 instances will help reduce cost on computing capacity.

Caching

Applying data caching to multiple layers of an IT architecture can improve both application performance and cost efficiency. There are two types of caching:

  • Application data caching – Information can be stored and retrieved from fast, managed, in-memory caches in the application, which decreases the load on the database and reduces latency for end users.
  • Edge caching – Content is served by infrastructure that is closer to the viewers, lowering latency and giving you the high, sustained data transfer rates needed to deliver large popular objects to end users at scale.
  • Amazon CloudFront, the content delivery network consisting of multiple edge locations around the world, is the edge caching service, whereas Amazon ElastiCache makes it easy to deploy, operate, and scale an in-memory cache in the cloud.
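
For the application-data layer, the classic pattern is cache-aside: check the cache first, fall back to the database on a miss, then populate the cache. A minimal sketch with a Redis-compatible cache such as ElastiCache follows; the endpoint and the database helper are hypothetical.

```python
import json
import redis

cache = redis.Redis(host="my-cache.example.internal", port=6379)  # hypothetical endpoint

def query_database(user_id):
    # Stand-in for the real database lookup (hypothetical).
    return {"user_id": user_id, "name": "Ada"}

def get_user(user_id):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # cache hit: skip the database
    user = query_database(user_id)           # cache miss: read from the database
    cache.setex(key, 300, json.dumps(user))  # populate the cache with a 5-minute TTL
    return user
```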

Security

Security is everything! Most of the security tools and techniques used in the traditional IT infrastructure can be used in the cloud as well. AWS is a platform that allows you to formalize the design of security controls in the platform itself. It simplifies system use for administrators and those running IT, and makes your environment much easier to audit in a continuous manner. Some ways to improve security in AWS are:

  • Utilize AWS features for Defense in depth – Starting at the network level, you can build a VPC topology that isolates parts of the infrastructure through the use of subnets, security groups, and routing controls.
  • AWS operates under a shared security responsibility model, where AWS is responsible for the security of the underlying cloud infrastructure and you are responsible for securing the workloads you deploy in AWS.
  • Reduce privileged access to programmable resources and servers to avoid security breaches; overuse of guest operating system and service accounts can compromise security.
  • Create an AWS CloudFormation script that captures your security policy and reliably deploys it, allowing you to perform security testing as part of your release cycle and automatically discover application gaps and drift from your security policy.
  • Testing and auditing your environment is key to moving fast while staying safe. On AWS, it is possible to implement continuous monitoring and automation of controls to minimize exposure to security risks. Services like AWS Config, Amazon Inspector, and AWS Trusted Advisor continually monitor for compliance and vulnerabilities, giving you a clear overview of which IT resources are in compliance and which are not.

Now that you know the guidelines and principles to keep in mind while architecting for the AWS Cloud, start designing!

About Botmetric

Botmetric is a comprehensive cloud management platform that makes cloud operations, system administrators’ tasks, and DevOps automation a breeze, so that cloud management is no longer a distraction for a business. Featuring an intelligent automation engine, it provides an overarching set of features that help manage and optimize AWS infrastructure for cost, security, and performance.

Headquartered in Santa Clara, CA, Botmetric today helps startups to Fortune 500 companies across the globe save on cloud spend, bring more agility into their businesses, and protect their cloud infrastructure from vulnerabilities. To know more about Botmetric, visit https://www.botmetric.com/

Top AWS Cloud Security Concerns Today’s Enterprises Need to Let Go

Given the humongous amounts of data being generated on a daily basis, there is no debate about the fact that cloud computing is crucial to running a competitive, modern digital business. The benefits are manifold – agility offered in IT and enterprise business operations, reduced capital outlays, efficiencies in processes, speed and overall productivity gains. It is not surprising at all then that Gartner has projected that the worldwide public cloud services market will grow 18 percent in 2017 to a total $246.8 billion, up from $209.2 billion in 2016. But despite its near explosive growth, cloud security remains a concern.

In today’s IT threat landscape, keeping pace with attackers and ensuring security is more important than ever. Even though there is nothing inherently insecure about the cloud, the fact remains that responsibility for the apps that run on the cloud lies with the user and not the cloud vendor, such as AWS. In fact, AWS features a shared responsibility model, which means that AWS takes responsibility for the facilities, the physical security of hardware, and the virtualization infrastructure, but not for the apps that run on it.

Therefore, it really boils down to changing the mindset when it comes to thinking about cloud security. Here are some highlights for enterprises to deliberate on.

Security is Just the Cloud Vendor’s Responsibility

A number of recent global industry surveys on cloud security have indicated that enterprises consider cloud security to be the sole responsibility of the cloud service providers. That’s an extremely flawed and dangerous assumption. Cloud security is always a collective responsibility shared by the vendor and the user. This is true irrespective of whether you talk about a public or a private cloud. Of course, it has not been easy for the IT security industry to keep up with the rapid growth of the cloud computing industry.

But that means organizations need to go the extra mile, for example by configuring apps to be compatible with the cloud infrastructure they are using. For their part, vendors need to ensure rigorous security on Virtual Machines (VMs), where storage space is shared by multiple clients, and on data centres; only then can the many complexities of this challenge be successfully addressed. Regulatory compliance also helps.

For instance, AWS is compliant with PCI DSS 3.2 and many other standards. This means that users can confidently leverage the certified AWS products and services to meet security and compliance objectives from an infrastructure perspective while focusing only on application-level security. Because Amazon has validated PCI DSS compliance against the latest set of criteria, users, including early adopters of the standard, can benefit from it.

From a regulatory perspective, governments should work to mandate encryption and perhaps enforce penalties for companies that suffer data breaches in the times to come. Security vendors have some catching up to do too, to develop new cloud-first products. Currently, a majority of the security tools offered in the market do not work with the cloud but are meant for traditional networking environments.

It’s important to establish who is responsible for which aspects of security, so that measures can be put in place to ensure the system and data remain safe. For example, the default configuration of the Amazon ELB service, a shared infrastructure service, is susceptible to some known SSL vulnerabilities, and web application firewalls need to be configured for application-level security. In this case, the responsibility to enforce a particular implementation doesn’t lie with Amazon but with the provisioner, who must ensure that adequate configuration and testing take place. Simply pinning the blame on the vendor is not an option. Ultimately, security should be a constant consideration for everyone involved.

Why Keep the IT Department in the Loop?

Often, the biggest risks associated with the cloud result from human factors. For example, business functions sometimes sign up for cloud services for their own purposes without even involving the IT department or the CISO team.

There is a reason why IT departments are cautious about moving mission-critical applications to the cloud. There are some legitimate fears around security, downtime and control. Bypassing the IT department while making cloud investment decisions presents a genuine risk for enterprises. This is because such instances make it unlikely that security audits have been conducted, or safeguards have been put in place.

A sensible security policy is essential, to ensure that IT is involved in the decision-making process when it comes to any type of IT adoption. This information should be used to develop test cases, which can thoroughly test the status of cloud security. If there are any loopholes in security, they need to be addressed immediately via patches. At the same time, updates also need to be implemented on a regular basis.

Of course, an overly draconian cloud security policy too defeats the purpose since it risks being circumvented. Instead, the aim should be to build a solid security policy that empowers departments to achieve what they need without sacrificing security.

Regulated Bring Your Own Device (BYOD) Policies Will Not Help

BYOD can put a whole different spin on security. In fact, 44% of security professionals listed BYOD as their biggest security concern in one recent survey, more than any other aspect of security. On one hand, lost or stolen devices could potentially give unauthorized persons access to cloud services, as well as to sensitive data stored locally or in caches. BYOD also makes it more challenging to diagnose data breaches, as filtering and monitoring systems may not be in place on employees’ own devices.

In addition, family members and friends of staff may have access to a device used at work, so measures need to be put in place to restrict access to sensitive data.

Of course, BYOD brings with it some huge advantages. It gives staff the freedom to use devices that they are comfortable with, often with more convenience and better features than the devices provided by their employers. Implementing an acceptable use policy, as well as controlling access to sensitive data with a password or PIN and Multi-Factor Authentication (MFA), can help.

Shared Resources on Public Clouds are Always Risky

Access to data on Virtual Machines (VMs) is a big concern in public clouds. By definition, public clouds share resources between different customers and use virtualization heavily, and this does create additional security vulnerabilities, both from access levels as well as from exploits in the virtualization software.

In theory, VMs hosted on the same physical server could suffer undetected network attacks between each other in the absence of suitable network detection. Hijacking VM hypervisors and exploiting local storage in memory are also fairly common. Therefore, investigating the controls that providers have in place to secure the cloud environment is important. Vendors accredited to the highest industry standards, such as AWS and Azure, do make this information available. In general, most security issues are enterprise-specific and are standard server security or admin-related problems. Therefore, the best approach to mitigating security risks is tracking critical security patches at the VM level and using the latest versions of machine images. For critical applications, external VAPT security testing and application-level security protection are important.

Less Availability of Data Security Breach Identification Solutions in the Market

The best way to ensure that data integrity is not compromised, whether through deliberate or accidental modification, is to use resource permissions that limit the scope of users who can modify the data, and to leverage data auditing controls that track who accessed what, from where, and when, for compliance purposes.

Even after doing this, the threat of accidental deletion by a privileged user still remains (it could also be an attack in the form of a Trojan using the privileged user’s credentials). Measures such as performing data integrity checks, including Message Integrity Codes (parity, CRC), Message Authentication Codes (MD5/SHA), or Hashed Message Authentication Codes (HMACs), to detect data integrity compromise are helpful. Above all, there are several solutions available solely for these purposes that provide Host-Level Intrusion Detection (HIDS) and File Integrity Monitoring Solutions (FIMS).
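
As a small illustration of the HMAC approach mentioned above, the following Python sketch computes and verifies an HMAC-SHA256 tag over a payload; the shared key is a placeholder.

```python
import hmac
import hashlib

SECRET_KEY = b"placeholder-shared-secret"  # in practice, fetched from a secrets manager

def sign(data: bytes) -> str:
    # Produce an integrity tag for the payload.
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, tag: str) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign(data), tag)

payload = b"backup contents..."
tag = sign(payload)
assert verify(payload, tag)                    # intact data passes
assert not verify(payload + b"tampered", tag)  # any modification is detected
```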

Ignoring DevOps, Or Slow in Adopting it

In many ways, DevOps is bringing in a new wave when it comes to cloud security. In the DevOps automation cycle, for example, every code commit triggers a build that tests the security and functionality of the application bundles using tools like Amazon Inspector and Selenium. In fact, while Selenium was earlier used for test automation only, it has since emerged as one of the top DevSecOps tools, as it can easily trigger security scanning tests along with other application test scripts. At the same time, it also ensures that systems are always patched, and that vulnerabilities are scanned and checked before deployment.

DevOps is giving enterprises a way to make application quality and security testing more scripted, continuous, and automated. DevSecOps enables an automation approach for security tests throughout development, even on the cloud. It even integrates security-feature design and implementation into the development lifecycle in ways that weren’t possible before.

In many ways, DevOps is helping application security reach a level that many security professionals have been advocating for years. The only way to do this is through automation of security and regulatory compliance tests throughout development and deployment. By leveraging automation tools to enforce security and compliance controls, DevSecOps will empower organizations to achieve regulatory compliance at speed and at scale. DevSecOps also makes detecting and closing security vulnerabilities faster than before while on the cloud.

Partial Knowledge of Risks Involved

Knowing your cloud compliance posture and understanding security vulnerabilities completely, in real time, is the first and foremost step. Once you are aware, taking steps to ensure business continuity is relatively easier. A comprehensive security process includes enabling auditing controls, logging access data, and covering network security, IAM controls, data governance, and passive/active protection for VMs and applications.

It is possible to quickly assess and mitigate vulnerabilities in real time and adopt a comprehensive security management for your cloud, for example with cloud management platforms like Botmetric Security and Governance. With such tools, it is possible to optimally improve your AWS cloud security and identify critical vulnerabilities at Cloud level quickly from various perspectives — data security, Disaster Recovery, user access, network security, etc.

Penetration (or VAPT) testing is another process traditionally used to understand the risks and to test whether there is scope for hackers to gain access to the application environment. This is equally useful for cloud systems. And with the cloud come additional vectors for attack.

Integrating Security Across your Processes is Secondary

In the initial stages of adoption, companies experimented with storing mostly non-strategic data into the cloud. But now that they have made the transition to moving business critical apps and data into the cloud, processes to ensure compliance with legal and regulatory norms haven’t quite caught up yet.

Also, many organizations fail to integrate security as a seamless part of their continuous methods like DevOps, and for some, security slows down development. In order to realize the full potential of the cloud, built-for-cloud security products must adhere to the DevOps process.

Amazon VPC is Not Safe

The Amazon Virtual Private Cloud (VPC) provides some great features that you can use to increase and monitor the security for your enterprise data and applications. For example, its security groups act as a firewall for associated Amazon EC2 instances, and they control both inbound and outbound traffic at the instance level. Its network access control lists (ACLs) act as a firewall for associated subnets, and control both inbound and outbound traffic at the subnet level. It also has flow logs that capture information about the IP traffic going to and from network interfaces in the organization’s VPC.

These tools make it possible to monitor the accepted and rejected IP traffic going to and from your instances by creating a flow log for a VPC, subnet, or individual network interface. Additionally, the organization can use AWS Identity and Access Management to control who in the organization has permission to create and manage security groups, network ACLs and flow logs.
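
A minimal boto3 sketch of turning on VPC Flow Logs might look like the following; the VPC ID, log group, and IAM role ARN are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Capture accepted and rejected IP traffic for the whole VPC
# and deliver it to a CloudWatch Logs group.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],  # placeholder VPC ID
    ResourceType="VPC",
    TrafficType="ALL",                      # ACCEPT, REJECT, or ALL
    LogGroupName="vpc-flow-logs",           # placeholder log group
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)
```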

To sum up, the risk isn’t from transitioning to the cloud; rather, it is a result of poor policies that might not be conducive to securing your business, whether it’s in the cloud or on-premise. What is your take?

AWS Cloud Security Think Tank: 5 Reasons Why You Should Question Your Old Practices

Agile deployments and scalability seem to be the most dominant trends in the public cloud today, especially on AWS. While you scale your business on the cloud, AWS too keeps scaling its services and upgrading its technology from time to time, to keep up with the technology disruptions happening across the globe. To that end, your cloud engineers have to constantly adapt to architectural changes as and when updates are announced. While all these architectural changes are made, AWS cloud security best practices and audits need to be revisited from time to time as well.

As a CISO, have you ever questioned your old practices and asked whether they are still relevant in the present day?

Here are a few excerpts from our AWS Cloud Security Think Tank: a collation of deliberations we had recently at Botmetric HQ with our security experts on why anyone on the cloud should question their old AWS cloud security best practices.

1. Relooking at Endpoint Security

“Securing the server end is just one part of enterprise cloud security. If there is a leakage at the endpoints, the net result is an adverse impact on your cloud infrastructure. Newer approaches to assert the legitimacy of the endpoint are more important than ever.” — Upaang Saxena, Botmetric LLC.

As most cloud apps provide APIs, client authentication mechanisms have to be redesigned. Moreover, as endpoints are now mobile devices, IoT devices, and laptops that might be anywhere in the world, endpoint security is increasingly moving away from a perimeter-based security model and giving way to an identity-based endpoint security model. Hence, newer approaches to asserting the legitimacy of the endpoint are more important than ever.

2. Revisiting Policies Usage

“Use managed policies, because with managed policies it is easier to manage access across users.” — Jaiprakash Dave, Minjar Cloud Solutions

Earlier, only identity-based (IAM) inline policies were available; managed policies came later. So not all old AWS cloud best practices that existed during the inline-policies era hold good in the present day. It is therefore recommended to use the managed policies now available. With managed policies, you can manage permissions from a central place rather than attaching them directly to users. They also enable you to properly categorize policies and reuse them. Updating permissions becomes easier when a single managed policy is attached to multiple users. You can attach up to 10 managed policies to a user, role, or group; the size of each managed policy, however, cannot exceed 5,120 characters.
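
As a small sketch of the difference in practice: with boto3, a managed policy is created once and attached wherever needed, instead of duplicating an inline policy per user. The names and account ID below are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Create the managed policy once (subject to the size limit noted above).
doc = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:ListBucket", "Resource": "*"}],
}
iam.create_policy(PolicyName="TeamS3ReadList", PolicyDocument=json.dumps(doc))

# Attach the same policy to many users; updating it updates them all at once.
for user in ["alice", "bob"]:
    iam.attach_user_policy(
        UserName=user,
        PolicyArn="arn:aws:iam::123456789012:policy/TeamS3ReadList",  # placeholder ARN
    )
```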

3. Make Multiple Account Switch Roles

“We encourage our clients to make multiple account switch roles for access controls as per their security needs.” — Anoop Khandelwal, Botmetric LLC.

Earlier, it was not recommended to switch roles for access controls while using a VPC. Now, however, it is recommended to create multiple account switch roles for access controls as per your security needs. Plus, earlier VPCs came with de facto defaults, which were inherently less than ideal from a security perspective. Now, Amazon VPC provides features that you can use to increase and monitor the security of your Virtual Private Cloud (VPC).
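
A minimal sketch of cross-account role switching with boto3 and STS follows; the role ARN is a placeholder for a role in the target account whose trust policy permits the caller.

```python
import boto3

sts = boto3.client("sts")

# Assume a role in another account (placeholder ARN); the trust policy
# on that role controls who may switch into it.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::210987654321:role/ReadOnlyAudit",
    RoleSessionName="security-audit",
)["Credentials"]

# Use the temporary credentials to act inside the target account.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(len(ec2.describe_security_groups()["SecurityGroups"]))
```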

4. Redesigning Architecture for New Attack Vectors

DDoS attacks through compromised IoT devices, such as the Mirai botnet attacks, caught security professionals by surprise. The possible scale of such an attack was not predicted by any security analyst. New attack vectors like these will be designed by hackers to penetrate popular and highly sensitive websites, and it is difficult to anticipate all of them. So cloud professionals have to revisit their architecture and be ready with better contingency measures in case of such unanticipated attack vectors.

“You (the cloud security engineer) need to relook at your architecture now and then and come up with better contingency measures for new-age attack vectors like massively distributed denial of service (DDoS).” — Abhinay Dronavally, Botmetric LLC.

5. New API Security Mechanisms

Today, most enterprise applications consume data from external web services and also expose their own data. The authentication mechanisms for APIs cannot be the same as human user authentication, as in earlier days; APIs must fit machine-to-machine interactions. Focus more on integrating API security mechanisms with a specialized API security solution.

“As data breaches can happen through API, integration of API security mechanisms are a must.” — Shivanarayana Rayapati, Minjar Cloud Solutions.

Final Thoughts

As the sophistication of attacks keeps increasing, security solutions too will have to improve their detection methods. Today’s security solutions leverage Artificial Intelligence (AI) algorithms like random forest classification, deep learning techniques, etc., to study, organize, and identify the underlying access patterns of various users. A well-thought-through approach is pivotal in securing your AWS cloud. For that matter, any cloud.

Tightly Integrated Cloud Security Platform for AWS Just a Click Away — Get Started!

5 Interesting 2017 DevOps Trends You Cannot Miss Reading

Editorial Note: This exclusive post is written by Botmetric guest blogger, Kia Barocha.

If you think you heard and read a lot about DevOps in 2016, it’s not over yet. Gear up as there is much more expected in 2017. The key focus for DevOps in 2016 was to ensure security, enhancements and containerization. In 2017, there is a lot of noise about what will be the future of DevOps. Here is a look at what our thought leaders think (they definitely think that it will impact business hugely in 2017).

The main challenge, till date, for many professionals has been to clearly understand DevOps. Some call it a movement, while others think that it is a collection of concepts. If we were to properly define it, we would say that it is a combination of two terms, which are Development and Operations. Or as the definition goes, “it is a cross-disciplinary community of practice dedicated to the study of building, evolving and operating rapidly-changing resilient systems at scale.”

DevOps is a practice where operations and development professionals or engineers participate in the entire service lifecycle (which means starting from design to development and till the production support stage). It is a recipe of success through cultural shift. It is characterized by autonomous teams and a constantly learning environment. If you are ready for this adoption, it implies that you are ready to change fast, develop fast, test fast, fail fast, recover fast, learn fast, and also prep fast for product launches.

With DevOps, businesses have experienced higher business value and better alignment with IT; it helps break down silos and build a flexible, software-enabled IT infrastructure.

DevOps Trends in 2017

Image Source: https://cloudsmartz.com/wp-content/uploads/2016/01/Devops-Team.jpg

Let’s take a look at the DevOps trends that will really be dominant through 2017:

1. Large enterprises will adopt DevOps

The predominant trend, till now, has been that organizations have experimented with DevOps in small and discrete projects. Experts feel that in 2017, large enterprises will be more comfortable adopting DevOps at large (at an enterprise level). It will finally take center stage. One of the key benefits that DevOps enables is enhanced collaboration between developers, QA and testing professionals, operations personnel, and people from business planning and security teams.

2. Focus on enhanced security

Not that the focus on security was not there earlier, but given the way attackers are getting smarter and more sophisticated, there will be much more focus on unifying development, continuous security, and operations efforts. 2017 will be the year user experience and advanced security measures go hand in hand.

3. Consolidation of DevOps tools

There are too many DevOps tools, each good for a different aspect of meeting the requirements of the delivery cycle. Most of the available tools help automate some aspect of the software delivery process. Tools like Jenkins, Docker, AWS, GitHub, and JIRA are already quite popular in the market. In 2017, these tools will consolidate into a select few that can cater to all the requirements across the continuous delivery cycle. This can mean that the biggies might acquire the smaller DevOps companies, as well as start a journey towards NoOps to put their operations on auto-pilot.

4. Confluence of Big Data and DevOps

Just like IoT, a lot of critical information is generated when software releases are automated. And again like IoT, these large volumes of data need to be analyzed. There is a dire need to apply machine learning to all this data so that there are actionable business reports, ways to predict failures, and more effective release management.

5. More support for hybrid everything

The market is dynamic, and bigger organizations have legacy applications coexisting with microservices and on-premise cloud infrastructure – basically hybrid everything (which includes infrastructure, tools, processes, applications, etc.). DevOps, in that respect, is ready and can support multiple aspects of an organization’s hybrid existence.

If you wish to focus on software innovation, and accelerate release of application updates, look at DevOps as an enterprise-wide investment.

-END-

Guest Blogger’s Profile:

Kia Barocha is a content marketing strategist at ISHIR, a leading Dallas-based software development company offering high-quality Mobile App Development, Web Design & Development, Cloud Computing Solutions, and Application Virtualization Services to clients across the globe.

Here’s Botmetric Saying Thank You for a Fantastic 2016 & Wishing You a Happy 2017

Dear You,

We are thrilled to close this year with a bang! It’s hard to sum up 2016 in just a few scribbles, especially when we made many new friends and rolled-out so many enhancements & new features: a new platform, new audits, more ingrained intelligence, more cloud optimization options, a revamped website with new UI, and much much more.

And as the New Year sets in slowly across the world, minute by minute, second by second, and continues its journey with the same charm and diligence, we at Botmetric, likewise, will continue our journey towards making cloud management and optimization a breeze for you. With rolled sleeves. With more focus. With more features. And with more zeal. Everything for you, dear customer.

Here’s the 2016 Year-in-Review: The Best Year So Far for Botmetric

Continues to be the Highest Rated Product on AWS Marketplace

We attribute our success to our dear customers. Thank you for the timely feedback, and those wonderful testimonials bedecked with five stars. Based on the feedback and our learning, we revamped Botmetric into a platform of three products that are use-case targeted instead of a ‘one-size-fits-all’ approach:

1. Cost management & Governance: This Botmetric product helps you control your cloud spend, save that extra bit, optimize spend through intelligent analytics and allocate cloud resources wisely for maximum ROI. It is built for businesses and CIO teams to enable them in decision making & maximizing cloud ROI.

2. Security & Compliance: This Botmetric product helps you get compliant and keeps your cloud secure with automated audits, health checks, and best practices. It provides the most comprehensive list of automated health checks. It is built for CISO and Security Engineers to proactively identify issues and fix vulnerabilities before they become problems.

3. Ops & Automation: This Botmetric product saves the time and effort you invest in automating cloud operations. It has built-in operational intelligence that can analyse problems and fix events in seconds. Above all, it speeds up DevOps. It is built for CTOs and DevOps Engineers seeking alert diagnostics, event intelligence, and out-of-the-box automation.

Choose any one of the above, any two, or all three. Your wish, your products, tailored for your AWS cloud. With these three products, you can realize the full potential of your cloud without any information overload. Find insights that matter to you, in just one click.

There’s More:

To celebrate what you’ve helped us achieve this year, we have put together a few 2016 Botmetric facts:

New and Key Botmetric Product Features Rolled-Out in 2016:

1. Data Transfer Analysis for AWS Cost Management: Provides insights into your bandwidth costs on AWS cloud.

2. EC2 Cost Analysis to Optimize AWS Spend: Helps you understand your AWS EC2 spend easily and efficiently.

3. Internal Chargeback: Allocate cost easily across multiple cost centers and bring in the required parity in your AWS cost management.

4. Compliance Audit Policies for Heightened Security: Helps mitigate vulnerabilities in real time and adopt a comprehensive security management.

5. Cloud Reporting Module: Helps you quickly find your AWS cloud reports from one centralized module, without scrolling or endlessly searching for what you need.

6. Reserved Instance Planner: Provides reservation recommendations at instance level. This revisited RI planner allows you to filter the recommendations, look at the details of the instance being recommended, and accordingly add it to a RI plan. You can also download the plan and work on budget approvals and actual reservations offline.

Work-in-progress

An Advanced DevOps Intelligence Feature: Assuages the alert-fatigue mess, helps you easily understand alert events through intelligence, and tells you why they are happening. It also checks for patterns in the problems.

We have much more coming up in 2017. So stay tuned with us.

Here’s Botmetric wishing you a very Happy New Year.

 

From,

Team Botmetric

 

Let’s blow the heartiest kisses to the cloud in 2017

Cloud is the new black. Let’s together embrace it more, with dexterity, in 2017.

Share all your 2016 cloud musings, learnings, and accomplishments with Botmetric on Twitter, Facebook, or LinkedIn. We’re all ears, and we’ve got your back.

Let’s make cloud an easier and a better place to grow our business.

 

P.S. If you have not signed up with Botmetric yet, then go ahead and take a 14-day free trial. As an AWS user, we’re sure you’ll love it!

DevSecOps: The Next Wave of Cloud Security

The adoption of DevOps, agile, and public cloud services among businesses worldwide is increasing by the day. These are seen as a major shift in enterprise IT, and as the next wave after the Internet. Thanks to digital democratization, businesses have to be nimble to remain competitive. That said, security threats and cybercrime continue to outsmart businesses despite the cutting-edge security walls around them. To this end, DevSecOps was born to bring security into DevOps, just as DevOps bridged the development and operations divide.

Striking the right chord: Security into DevOps, on the cloud

Business leaders now understand that moving to the cloud is not just another tech adaptation; it is more about speed of service delivery and dynamic scalability. One of the most significant payoffs of DevOps has been better software quality delivered faster, even on the cloud.

Cloud technology dissolves the enterprise perimeter, the key construct around which security solutions have been developed. Earlier, security concerns held back many businesses from jumping onto the cloud bandwagon. And when the idea of a perimeter and boundary was once again challenged by new security requirements, such as those warranted by Bring Your Own Device (BYOD) policies, the IT industry slowly started to embrace the cloud. Security professionals are now leveraging real-time analytics and have adopted “Continuous Security” in clear parallel to the “Continuous Integration” and “Continuous Deployment” approaches of the DevOps movement.

Image Source: RSAConference, 2016, DevSecOps In Baby Steps

DevSecOps Tools: Filling in the Security Gap

Many enterprises have started to explore ways of making application quality and security testing more scripted, continuous, and automated. With DevSecOps, they are taking an automation approach to security tests throughout development, even on the cloud. They are even integrating security-feature design and implementation into the development lifecycle in ways that weren’t possible before.

For instance, in the DevOps automation cycle, every code commit triggers a build that tests the security and functionality of the application using tools like Amazon Inspector and Selenium. Selenium, which earlier was used only for test automation, is now emerging as one of the top DevSecOps tools, as it can easily trigger security scanning tests along with other application test scripts. Moreover, this ensures that systems are always patched, and that vulnerabilities are scanned and checked before deployment.

To sum up: application security is reaching a level that many security professionals have been advocating for years. This is possible only through automation of security and regulatory compliance tests throughout development and deployment. By leveraging automation tools to enforce security and compliance controls, DevSecOps will empower organizations to achieve regulatory compliance at speed and at scale. Furthermore, DevSecOps makes detecting and closing security vulnerabilities faster than before while on the cloud.

With DevSecOps on the cloud, security becomes an essential part of the development process itself instead of being an afterthought.

To Conclude:

The provisioning of the server infrastructure itself can be a dynamic process on the cloud. DevSecOps processes can trigger both platform and application security checks whenever a new version of an application is deployed. Hence, DevSecOps on the cloud effectively blurs the lines between platform security and application security, as the automation of compliance and regulatory tests along with application-specific quality tests will become the norm. Clearly, DevSecOps is set to evolve as the next significant wave for cloud security.

Let us know what you think of this story. If you need to talk to experts on how to leverage DevSecOps for your cloud, write to us at support@botmetric.com or just give us a shout-out on Twitter, Facebook, or LinkedIn. You might as well explore Botmetric, an intelligent cloud management platform with integrated DevOps and SecOps features. Do check out how Botmetric can add value to your cloud infrastructure with a 14-day trial run.

Bridging the Cloud Security Gaps: With Freedom Comes Greater Responsibility

By 2019, global spending on public cloud services by businesses is expected to reach $141 billion, according to IDC reports. Approximately two-thirds of CIOs across the globe view cloud computing as a principal disruptive force in their businesses, says another leading survey. With cloud adoption gaining momentum, it is evident that cloud computing is here to stay, despite the cloud security concerns looming over the heads of many enterprises.

Here’s why: apart from the elasticity and agility the cloud offers, there is the freedom to swiftly launch infrastructure with just a few clicks and have it ready in a few minutes. This is what has made developers/engineers the prime drivers of cloud adoption across organizations. Plus, organizations save 14 percent of their budgets on average as an outcome of public cloud adoption, according to Gartner’s 2015 cloud adoption survey. The infographic below lists a few influencing factors.

Image Source: AlienVault Cloud Security Report 2016

True. However, this freedom to scale the infrastructure up or down as and when required can very easily wash away that 14 percent saved on budgets if not handled with greater responsibility. Why? Because of cloud security gaps that need to be filled, says Amarkant Singh, Head of Product, Botmetric, in one of his articles.

“With freedom comes greater responsibility.” And with the choice of a public cloud that features a shared responsibility model, you need to pay close attention to key security measures from time to time.

Security in the Cloud: A Shared Responsibility

Customers of public cloud services are responsible for their data security and for access management of cloud resources. For instance, if you’re using the AWS EC2 public cloud infrastructure service, you are responsible for Amazon Machine Images (AMIs), operating systems, applications, data in transit, data at rest, data stores, credentials, policies, and configurations. According to Amarkant, a public cloud user needs to tackle four major components when it comes to cloud security:

  1. Access Controls
  2. Network Security
  3. Data Security
  4. Activity & access trail

And here are the top five best practices, as suggested by Amarkant, that will help close the cloud security gaps in your cloud infrastructure:

1. Grant least privileges

Use this as a thumb rule when granting privileges to users and programs. If you’re using AWS, you must make full use of its IAM capabilities to define very fine-grained permission levels for all access points into your cloud infrastructure. Plus, make multi-factor authentication mandatory for your users. And don’t forget to rotate access credentials regularly.

2. Enable all the detective services

Leverage all the tools and configurations provided by your cloud service provider. This will help track activities within your cloud. For instance, if you use AWS, you must enable AWS CloudTrail logs (even in regions where you don’t have instances), VPC Flow Logs, ELB access logs, and AWS Config.
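
For instance, a minimal boto3 sketch for enabling a multi-region CloudTrail trail looks like this; the trail name and bucket are placeholders, and the bucket needs a policy that allows CloudTrail to write to it.

```python
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Create one trail that records API activity in every region,
# including regions where you run no instances.
cloudtrail.create_trail(
    Name="org-audit-trail",             # placeholder trail name
    S3BucketName="my-cloudtrail-logs",  # placeholder bucket with a CloudTrail write policy
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="org-audit-trail")
```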

3. Encrypt data that is at rest and in transit

Despite knowing the importance of encryption, very few follow it, even when they store sensitive data on the cloud. Ignorance may be bliss, but it can prove costly when it comes to data security. Not to worry: major cloud service providers like AWS provide native encryption capabilities in all their data storage services, such as RDS, S3, and EBS. Now, don’t forget to use HTTPS/SSL when transferring data over the Internet or across regions.
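
Two small boto3 examples of these native capabilities, with placeholder names: server-side encryption for an S3 object, and an EBS volume encrypted at rest.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

# Encrypt an object at rest with S3-managed keys (SSE-S3).
s3.put_object(
    Bucket="my-sensitive-data",  # placeholder bucket
    Key="reports/q4.csv",
    Body=b"col1,col2\n1,2\n",
    ServerSideEncryption="AES256",
)

# Create an EBS volume that is encrypted at rest.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,  # GiB
    Encrypted=True,
)
```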

4. Architect networks with desired segmentation

While you architect, do follow the best practices. In the case of AWS, you can create a VPC and further segment your network into public and private subnets. Do not forget to keep your data stores in a private subnet.
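
A bare-bones boto3 sketch of that segmentation, with placeholder CIDR blocks: one VPC with a public subnet for frontends and a private subnet for data stores.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Public subnet: an internet gateway route would be attached here.
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")

# Private subnet: keep databases and other data stores here,
# with no direct route to the internet.
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24")
```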

5. Back up the backups

Yes! It is recommended to have one or more separate cloud accounts just to keep backups, and only a few users should have access to these accounts. Why? Suppose you’re using AWS EBS and take regular snapshots for backup. If the account is compromised by a hacker, it is highly likely that both the EBS volumes and their snapshots (backups) will be deleted.

To Conclude:

The statement “With freedom comes great responsibility”, when it comes to public cloud security, is neither hype nor an understatement. Bring in the required discipline within the team to perform regular audits, follow best practices, and preferably automate key tasks, and see how cloud computing will never cease to amaze you. Try Botmetric Security & Compliance to see how it can help.

Do tell us what your cloud security posture is, and how you are implementing the critical cloud security controls and tackling the threat landscape for your cloud. Tweet to us. Comment to us on Facebook. Or connect with us on LinkedIn. We’re all ears!

PS: Hear the Botmetric webinar recording on AWS Security Do’s and Don’ts – Tackling the Threat Landscape by Amarkant to know more.

Editor’s Note: This blog post is an adaptation of a LinkedIn Pulse post by Amarkant Singh, published on Sep 28, 2016.

November Round-up: Cloud Management Through the Eyes of the Customer

Cloud has indeed been a game changer for many enterprises across verticals. It holds the key to digital disruption, say many. And as cloud adoption increases by the day, the challenges that come along the path have not deterred enterprises from embracing it further, even as they scale. In an effort to give these cloud-ready and cloud-first companies the mojo to win the cloud game, the GenNext Botmetric platform was released this November with complete and intelligent cloud management features.

A Milestone Achieved & Many More Extra Miles to Go

Botmetric 2.0 went live recently. The primary goal of building this new unified and intelligent cloud management platform was to provide a great user experience and a simplified, consistent design, with intelligent insights and in-context customer engagement. Essentially, Botmetric is a customer-obsessed company, conscientiously nurturing a customer-first culture ever since it was born. For this reason, it went a step ahead, saw cloud management through the eyes of the customer, and rebuilt a whole new platform of three applications offering unified cloud management:

  • Cost Management and Governance: Built for CIO & Business heads
  • Cloud Security and Compliance: Built for CISO & IT Security Engineers
  • Ops & Automation: Built for CTO & DevOps Engineers

During this journey, the Botmetric team took a strategic call to move to a microservices architecture to build this appified platform, essentially to speed up its sprint process. And to further nurture delightful and seamless customer engagement, Botmetric adopted the Intercom app.

The ‘All-New’ Botmetric is now ingrained with cutting-edge cloud intelligence. To get further details about the product, read the exclusive launch blog post by our zealous Customer Success Manager, Richa Nath.

Over the next few months, Botmetric will add a few more feathers to its cap. So stay tuned with us on Twitter, Facebook, and LinkedIn.

Knowledge Sharing @ Botmetric

Continuing our new tradition of providing quick bites and snippets on better AWS cloud management, here are a few blogs that we covered in the month of November:

5 Surefire AWS Security Best Practices (not Just) for Dummies

5 Salient Amazon DynamoDB Features Every Beginner DevOps Guy Should Know

How Secure is AWS for E-Commerce Businesses? Doubt No More

There are many more insightful blogs on cloud and cloud management. Do read them here.

Finally, continued excitement at AWS re:Invent 2016

If you are at the event, Botmetric invites you to meet our leaders for a quick chat and get first-hand experience of the ‘All-New’ Botmetric.

If you have not signed up with Botmetric yet, then go ahead and take a 14-day trial. As an AWS user, we are sure you’ll love it. To catch up with what happened last month, read Botmetric’s October round-up blog. Rain or shine, Botmetric will continue to provide many more AWS cloud management features. Until next month, keep in touch with us.

Expert Speak: What’s New in Botmetric 2.0?

Editor’s Note: This exclusive blog post is by our zealous Customer Success Manager, Richa Nath. It includes featured excerpts from her interview with Amarkant Singh, Head of Product, Botmetric.

2017 is the year the cloud will zoom like a rocket, says an IDG report. To that end, organizations big or small will have to be ready for the challenges of getting the processes, technology, and people required to proceed on a cloud-driven, cloud-first journey. This is where Botmetric, a unified cloud management platform, has been helping users manage their public cloud since 2014.

As a Customer Success Manager at Botmetric, my team and I have put our best foot forward (and will continue to do so) to help several customers manage and optimize their public cloud easily. From the time Botmetric launched till date, we have not shifted our focus one bit from what we have always believed: “We want to be the best friend in managing your cloud.”

On the occasion of Thanksgiving Day, we released a new version of Botmetric. Our previous blog post, a cover story by Botmetric CEO Vijay Rayapati, already shed light on why the new Botmetric was built even though the platform already had good feedback from customers.

Today, I would like to share a few excerpts from an exclusive tech talk I arranged with Amarkant Singh, Head of Product, Botmetric, on the day of the launch. These excerpts will spill more beans on what’s new in Botmetric 2.0!

Here it goes:

[mk_mini_callout title=”Richa: “]

The new Botmetric platform now has three applications: Cost Management & Governance, Security & Compliance, and Ops & Automation. What else should our customers know about the new Botmetric?

[/mk_mini_callout]

Amar: The new Botmetric is ingrained with cutting-edge cloud intelligence that will further help manage, optimize, and govern the cloud in the easiest way possible. Moreover, it is now customer-centric. As you already know, a CIO will rarely be interested in Ops and Automation, while a CISO always wants to look only into security and compliance. In short, I can say that the new Botmetric is user-targeted within a company: a unified and intelligent cloud management platform.

[mk_mini_callout title=”Richa: “]

There is a new feature called Smart RI Management in the Botmetric Cost Management & Governance app, and under this feature there is a Confidence Score. How does this matter in RI planning for businesses on AWS?

[/mk_mini_callout]

Amar: The new Smart RI Management is the GenNext of the previous RI management module. This new feature is built with intelligence and includes the Confidence Score, a metric that adds more insight to decision making when reserving cloud capacity. We always wanted to give our customers the opportunity to make intelligent RI decisions to increase their cloud ROI. The Confidence Score is a number calculated from multiple factors, such as the number of days a server was used, CPU usage, network I/O, and so on. In essence, Smart RI Management allows customers to filter by Confidence Score to suit their RI planning.

[mk_mini_callout title=”Richa: “]

The new Cost Management & Governance app has EC2 Cost Analysis. How does it help Botmetric customers in understanding their AWS Cloud utilization?

[/mk_mini_callout]

Amar: In my experience, EC2 is the biggest contributor to a customer’s AWS expenses, and it’s hard for customers to analyze the various line items within their EC2 costs. With the launch of EC2 Cost Analysis, Botmetric customers can now surface insights such as which instances they are spending the most on, and what the cost split is for sub-services like EBS and EIP. In addition, there are other associated costs, such as subscriptions, NAT Gateway, and more, which need to be taken care of too. We are very excited to offer EC2 cost analytics to our customers to simplify their analysis.

[mk_mini_callout title=”Richa: “]

I have several customers asking me if there is anything Botmetric can do about data transfer cost insights. Now that it’s included in the new Botmetric, can you throw some light on it? And tell our customers how data transfer analysis helps in cost reduction?

[/mk_mini_callout]

Amar: This is one of the features most requested by our users, because data transfer is a gray area even for AWS cloud power users. If your infrastructure involves heavy data transfer, you would definitely want to know which services contribute most to data transfer expenditure and how it is spread across AWS’s different bandwidth costs. Deriving this data transfer cost analysis from AWS bills is a tedious, manual task. With Botmetric’s new AWS Data Transfer Analysis feature, customers can understand their bandwidth costs with ease.

[mk_mini_callout title=”Richa: “]

RI recommendations were one of our customers’ favorite features. The new Botmetric has ‘Smart’ RI recommendations. How do you think they will add more value than the previous RI recommendation feature?

[/mk_mini_callout]

Amar: Yes, we received a lot of good feedback for the AWS Reserved Instance planning module in Botmetric. The new Smart RI recommendation, like its predecessor, is a process defined within Botmetric to simplify cloud capacity purchases for businesses. It starts with identifying the capacity to be reserved (EC2 RIs in terms of instances) and then identifying what kind of RI financial model provides the best ROI for the customer. It computes an intelligent Confidence Score to help customers make decisions on their AWS RI purchases. Within the detailed RI planner, customers can now filter on the Confidence Score, view the types of RIs lined up for recommendation, and add them to the plan. This plan can then be downloaded as a CSV report and consumed offline to get approvals and plan budgets. We are working on further simplifying this task so that our customers with large cloud infrastructures can do it quickly.

[mk_mini_callout title=”Richa: “]

Before I move to other questions on Security & Compliance, do you want to talk about any other cool features available in the new Cost Management & Governance app?

[/mk_mini_callout]

Amar: Yes. We have also included an Eliminate feature in the Cost Management and Governance app, where customers can perform multiple clean-ups in their cloud account with zero errors, and with just one click. This will definitely save hours of manual work and save money for our customers.

[mk_mini_callout title=”Richa: “]

In the new Security & Compliance app, we have introduced a Health Score. How does it help customers understand the security of their infrastructure?

[/mk_mini_callout]

Amar: The thought behind creating the Health Score was to help our customers reach a score of 100 for a fully secure and compliant infrastructure. The score is computed from our security and compliance analysis, taking into account the customer’s known history from previous audits and our recommended best practices for securing infrastructure on the cloud. In essence, it helps customers keep track of their cloud security compliance.

[mk_mini_callout title=”Richa: “]

Many customers who tried the Beta have asked me about custom policies for audits. So, how do custom policies help customers in Security & Compliance?

[/mk_mini_callout]

Amar: We created various categories of audits to make it easy for customers to understand their network security, disaster recovery compliance, and so on. If customers feel that the pre-defined policies are not tailored to their needs, Botmetric gives them the opportunity to create custom policies by selecting the security and compliance audits, with the specific configuration, suited to their business. It is not as complicated as it sounds, at least for cloud security engineers 🙂

[mk_mini_callout title=”Richa: “]

In the Security & Compliance app, there is an exclusive Security Group (SG) audit. What is it?

[/mk_mini_callout]

Amar: The AWS Security Group (SG) is one of the most used network services in AWS infrastructure, defining access security (as a virtual firewall) for a customer’s cloud. Most network access rules are handled by it: the SG defines the rules for inbound and outbound traffic and regulates access to specific IPs in a customer’s cloud infrastructure. Since security groups are at the core of network security in the AWS cloud, a dedicated audit section for such a critical service was much needed, so that our customers have higher vigilance and an easy way to locate any critical vulnerabilities quickly. I strongly recommend this feature to network engineers responsible for managing access security for their AWS cloud.

[mk_mini_callout title=”Richa: “]

Under the Ops & Automation app, there is a Resource Analyzer, which has a new visual map-view feature. What is the thinking behind it, and how will it help?

[/mk_mini_callout]

Amar: Many customers have asked for a single view of their global cloud infrastructure in AWS. With Resource Analyzer, a customer can get an aggregated view of cloud resource distribution across the globe, discover the key cloud services they consume, and eliminate unused resources. We are working on making it much more intelligent, to pinpoint problems and detect issues in customer cloud environments. Many of our beta customers have heavily used this feature to download reports on their 1st, 2nd, and 3rd most utilized regions.

[mk_mini_callout title=”Richa: “]

What are the major features in the new Ops & Automation application that you think customers should be aware of?

[/mk_mini_callout]

Amar: Firstly, the new Ops and Automation application has an execution history feature that tracks every action taken by Botmetric on your behalf when automating operational jobs and handling alert events. It is one of the features most requested by our customers, as many wanted to view previous executions and track changes in Automation. This will help the many customers who perform internal audits and need visibility into their cloud automation changes. Secondly, we have included extensive filters in Resource Analyzer to help customers explore all resources across their various cloud accounts and regions. Moreover, it helps customers drill down to a cloud availability zone and their custom tags, and then download the report so they can consume it offline.

If you’re on AWS, then do explore the new Botmetric platform for intelligent cloud management, for free, with an exclusive 14-day trial, and do give us a shout-out on Twitter, Facebook, and LinkedIn to tell us what you think about it. Stay tuned for other interesting news from us!

 

How Secure is AWS for ECommerce Businesses? Doubt No More

How secure is AWS for ecommerce businesses? As the IT leader of an ecommerce company, the onus of conducting a thorough risk assessment of AWS is always on you. To this end, the question of how secure AWS is for your business might keep echoing in your mind time and again. So, do you see the security of your ecommerce business as a sword perpetually hanging over your head?

Just so you know, AWS is not completely responsible for the security of any system built on AWS; however, it provides many tools that help reinforce security best practices, including audit tools, compliance checkers, and more. The AWS Shared Responsibility Model explains how.

The backdrop: how secure is AWS for ecommerce?

Gartner says that “Through 2020, 95 percent of cloud security failures will be the customer’s fault.” The report clearly indicates that cloud security failures through 2020 will be caused by users rather than cloud service providers. So, as a user of AWS and the IT leader of an ecommerce company, you should be able to differentiate between security ‘of’ the cloud and security ‘in’ the cloud.

When we say security of the cloud, we refer to the security of AWS’s physical and staff resources. When we say security in the cloud, we refer to the security of systems built on top of AWS. Even though AWS provides a simplified system for administrators to both implement and audit standard security measures, it by no means replaces these traditional measures, nor does it promise the security of your systems. Ultimately, the security of your system is your responsibility.

And one of the stepping stones towards securing your system is ensuring that your online business is compliant with industry security standards like the Payment Card Industry Data Security Standard (PCI-DSS).

AWS and the PCI-DSS Standard

The good news is that AWS helps ecommerce businesses comply with the PCI DSS Level 1 standard for physical security. This means that the underlying physical infrastructure has been audited and approved by an authorized independent Qualified Security Assessor. It’s interesting to note that AWS was the first cloud platform to earn PCI DSS Level 1 compliance. AWS also provides all the other building blocks necessary for PCI DSS Level 2 as part of its ecosystem.

Security Measures of PCI-DSS Compliance Level 2 & Other Standards

AWS, in collaboration with Anitian, a leading PCI compliance assessor, has published a whitepaper on the best practices that ecommerce sites hosted on AWS should follow. To ensure that the PCI-DSS, ISO 27001, and other recommendations are implemented effectively, the following security measures need to be deployed along with the AWS apps.

  • Implement Web Application Firewalls (either AWS WAF or third-party solutions such as ModSecurity) and ensure that sufficient rules are configured to protect against the OWASP Top 10 attacks.
  • Ensure that all system defaults, such as port numbers, protocols like SSH, and usernames/passwords, are modified periodically.
  • Encrypt the entire data lifecycle, including “Data in Transit”, “Data in Use”, and “Data at Rest”. For “Data in Transit”, AWS ELB (Elastic Load Balancing) should be deployed to enable SSL/TLS, which encrypts all data in transit. All AWS resources holding critical data should be placed in appropriate security groups and NACLs, so that only secured protocols are used for data communication between them. For “Data at Rest” in EBS and S3, AES-256 encryption should be used. Private keys can be stored in a Key Management System (KMS) such as AWS KMS. (A sketch of enforcing in-transit encryption on S3 follows this list.)
  • Periodically scan for bots and other malware using vulnerability scanners like OpenVAS, OWASP ZAP, and Nexpose. Doing so will ensure that no ports are left open due to negligence. Logging mechanisms like AWS CloudTrail should be enabled, and tools like Amazon CloudWatch can be used to monitor and detect anomalies in system behavior and performance.
  • Proper management of identification and authentication for the people who can access network resources is critical, because it prevents hackers from gaining access to the network through identity theft. System administration should be limited to a very small set of people to reduce the probability of identity theft. The AWS IAM (Identity and Access Management) tool should be linked to Active Directory services using AWS Directory Service to secure identity management. Constant monitoring of access protocols like SSH will also help detect any malicious intrusions into the network.
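To make the data-in-transit point concrete, here is a minimal sketch, an illustration with placeholder names rather than anything from the whitepaper, of an S3 bucket policy that denies any request not made over TLS.

```python
# A minimal sketch: deny all non-TLS requests to an S3 bucket,
# enforcing encryption of data in transit.
import boto3, json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-bucket",      # placeholder bucket
            "arn:aws:s3:::example-bucket/*",
        ],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket="example-bucket", Policy=json.dumps(policy)
)
```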

To Conclude

Even if an ecommerce website has obtained PCI DSS Level 2 compliance, that does not mean it is secure from cyber-attacks like DDoS. Security is not a destination or a one-time configuration setup; it is an ongoing journey. Hence, constant monitoring of the security posture is essential. Moreover, leading organizations today advocate integrating security testing into the DevOps process, so that security tests like vulnerability scanning are performed every time a software update is made.

Check out Botmetric’s Security and Compliance application, which can help DevOps teams reinforce, manage, monitor, and govern the AWS cloud security measures mentioned above. Sign up for a 14-day trial to get hands-on experience of what Botmetric offers.

As an IT leader of an ecommerce company, if you want to know other AWS security facts and tips, do read the Botmetric blog, 5 Surefire AWS Security Best Practices (Not Just) For Dummies. And to know about 21 AWS Cloud Security Best Practices, read the Botmetric blog here. Also, get in touch with us on Twitter, LinkedIn, or Facebook to learn other facts about AWS and AWS security management.

6 AWS Cloud Security Best Practices for Dummies

In today’s IT threat landscape, keeping pace with attackers and ensuring security is more important than ever. And with the choice of public cloud, especially AWS with its shared responsibility model, AWS users need to pay close attention to a few security measures from time to time. The reason: even though AWS takes responsibility for the facilities, the physical security of hardware, and the virtualization infrastructure, the responsibility for anything provisioned on AWS, and for the apps that run on top of it, still lies with the user. To ease things up for beginners on AWS, we have collated a few salient AWS Cloud Security Best Practices that you should follow to safeguard your complete infrastructure.

1. Enable MFA for IAM user

One of the most commonly ignored yet most important security measures is enabling Multi-Factor Authentication (MFA) for all your IAM users. This adds an extra layer of protection to your AWS account access and negates the possibility of a username/password being compromised. Enabling MFA gives you a secure two-step login that ensures the authenticity of a user. You can explore more AWS Cloud Security Best Practices for IAM here.
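As an illustration (not Botmetric’s audit code), a short boto3 sketch like the following can flag IAM users that have no MFA device attached:

```python
# A minimal sketch: list IAM users without an MFA device.
import boto3

iam = boto3.client("iam")

def users_without_mfa():
    flagged = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            mfa = iam.list_mfa_devices(UserName=user["UserName"])
            if not mfa["MFADevices"]:
                flagged.append(user["UserName"])
    return flagged

if __name__ == "__main__":
    print("IAM users without MFA:", users_without_mfa())
```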

2. Termination Protection

EC2 is a key AWS resource in your cloud architecture, and any intrusive change to it can be catastrophic. To protect your mission-critical EC2 instances, always enable termination protection (the disableApiTermination instance attribute). This will prove crucial in avoiding the erroneous termination of EC2 instances in your environment.
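For example, the following boto3 sketch, with a placeholder instance ID, enables this attribute:

```python
# A minimal sketch: enable termination protection on one instance.
import boto3

ec2 = boto3.client("ec2")
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",     # placeholder instance ID
    DisableApiTermination={"Value": True},
)
```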

3. CloudTrail

If you want to stay on top of every activity that is going on in your AWS environment, the best solution is to enable CloudTrail. CloudTrail is an AWS service that records API calls made on your account and delivers the log files to an S3 bucket. It helps track changes to your resources and user activity, and also helps ensure your environment is compliant. Most cloud practitioners consider CloudTrail one of the AWS Cloud Security Best Practices to definitely have in place.
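Once CloudTrail is on, its records can also be queried programmatically. As a hedged example, this boto3 sketch looks up who launched EC2 instances in the last 24 hours:

```python
# A minimal sketch: query CloudTrail for recent RunInstances calls.
import boto3
from datetime import datetime, timedelta, timezone

ct = boto3.client("cloudtrail")
events = ct.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName",
                       "AttributeValue": "RunInstances"}],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
)
for e in events["Events"]:
    print(e["EventTime"], e.get("Username"), e["EventName"])
```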

4. Admin Credentials Count

It is recommended not to keep too many IAM admin access keys. Multiple super-users can cause abrupt changes in the environment, which can be harmful to a planned architecture. Hence, a maximum of 2-3 users, depending on the environment, is considered the ideal number of admin users holding access keys.
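A simple way to audit this, sketched below with boto3 as an illustration, is to count users with the AWS-managed AdministratorAccess policy attached directly; grants made via groups or roles would need additional checks.

```python
# A minimal sketch: count IAM users with AdministratorAccess
# attached directly to the user.
import boto3

iam = boto3.client("iam")
ADMIN_POLICY_ARN = "arn:aws:iam::aws:policy/AdministratorAccess"

admins = []
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        attached = iam.list_attached_user_policies(UserName=user["UserName"])
        if any(p["PolicyArn"] == ADMIN_POLICY_ARN
               for p in attached["AttachedPolicies"]):
            admins.append(user["UserName"])

print(f"{len(admins)} admin users: {admins}")
```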

5. Old IAM Access Keys

As an admin, you should regularly change/rotate the IAM access keys for users in your account. Even if you have already granted users the permissions required to rotate their own access keys, keys should be changed once every 60 days to maintain a better security posture.
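As a minimal sketch of such a check (an illustration, not a Botmetric feature), the following boto3 snippet flags active access keys older than 60 days:

```python
# A minimal sketch: flag active IAM access keys older than 60 days.
import boto3
from datetime import datetime, timedelta, timezone

iam = boto3.client("iam")
cutoff = datetime.now(timezone.utc) - timedelta(days=60)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])
        for key in keys["AccessKeyMetadata"]:
            if key["Status"] == "Active" and key["CreateDate"] < cutoff:
                print(f"Rotate: {user['UserName']} key {key['AccessKeyId']} "
                      f"created {key['CreateDate']:%Y-%m-%d}")
```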

6. Security Groups

A Security Group functions as a virtual firewall that controls the inbound and outbound traffic for one or more instances, and a security group is associated with each instance at launch. A security group in your environment may have a port open to any IP, or may be open to public access altogether, which can cause a data breach. To avoid exposure to security vulnerabilities, we recommend keeping open only those ports that are associated with the relevant IPs and security groups.
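To illustrate the kind of check involved, here is a minimal boto3 sketch that lists ingress rules open to the world; the revoke call is left commented out because deleting rules is destructive.

```python
# A minimal sketch: find security-group ingress rules open to 0.0.0.0/0.
import boto3

ec2 = boto3.client("ec2")

for page in ec2.get_paginator("describe_security_groups").paginate():
    for sg in page["SecurityGroups"]:
        for perm in sg["IpPermissions"]:
            for ip_range in perm.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    print(sg["GroupId"], perm.get("FromPort"),
                          perm.get("ToPort"))
                    # To close the hole (destructive, so review first):
                    # ec2.revoke_security_group_ingress(
                    #     GroupId=sg["GroupId"], IpPermissions=[perm])
```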

The basic AWS Cloud Security Best Practices above will take you a long way. However, if you want to go further and follow industry best practices to secure your AWS cloud, you should try Botmetric’s Cloud Insights for a complete perimeter check of your AWS infrastructure against 85+ best practices.

Get Botmetric today! Take up a 14-day trial, and see how Botmetric auto-diagnoses all the vulnerabilities in your cloud infrastructure, proactively checks the IAM policies applied for access controls, provides smart recommendations, solves issues with a single click, and ensures your data in the cloud is safe and secure.

Tell us what your security posture is, and how you are implementing the critical cloud security controls and tackling the threat landscape for your cloud. Tweet to us. Comment to us on Facebook. Or connect with us on LinkedIn. We’re happy to hear from you!