November Round-up: Cloud Management Through the Eyes of the Customer

Cloud has been a game changer for enterprises across verticals, and many say it holds the key to digital disruption. As cloud adoption grows by the day, the challenges along the way have not deterred these enterprises from embracing it further, even as they scale. To give these cloud-ready and cloud-first companies the mojo to win the cloud game, the GenNext Botmetric platform was released this November with complete and intelligent cloud management features.

A Milestone Achieved & Many More Extra Miles to Go

Botmetric 2.0 went live recently. The primary goal of building this new unified and intelligent cloud management platform was to provide a great user experience and a simplified, consistent design, with intelligent insights and in-context customer engagement. Botmetric is a customer-obsessed company that has conscientiously nurtured a customer-first culture since its inception. For this reason, it went a step further, looked at cloud management through the eyes of the customer, and rebuilt the platform as three applications offering unified cloud management:

  • Cost Management and Governance: Built for CIO & Business heads
  • Cloud Security and Compliance: Built for CISO & IT Security Engineers
  • Ops & Automation: Built for CTO & DevOps Engineers

During this journey, the Botmetric team took a strategic call to move to a microservices architecture to build this appified platform, essentially to speed up its sprint process. And to further nurture delightful and seamless customer engagement, Botmetric adopted the Intercom app.

The ‘All-New’ Botmetric is now ingrained with cutting-edge cloud intelligence. For further details about the product, read the exclusive launch blog post by our zealous Customer Success Manager, Richa Nath.

Over the next few months, Botmetric will add a few more feathers to its cap. So stay tuned with us on Twitter, Facebook, and LinkedIn.

Knowledge Sharing @ Botmetric

Continuing our new tradition of providing quick bites and snippets on better AWS cloud management, here are a few blog posts we published in November:

5 Surefire AWS Security Best Practices (not Just) for Dummies

5 Salient Amazon DynamoDB Features Every Beginner DevOps Guy Should Know

How Secure is AWS for E-Commerce Businesses?

There are many more insightful blog posts on cloud and cloud management. Do read them here.

Finally, continued excitement at AWS re:Invent 2016

If you are at the event, Botmetric invites you to meet our leaders for a quick chat and get first-hand experience of the ‘All-New’ Botmetric.

If you have not signed up with Botmetric yet, go ahead and take a 14-day trial. If you are an AWS user, we are sure you’ll love it. To catch up with what happened last month, read Botmetric’s October round-up blog. Rain or shine, Botmetric will continue to deliver many more AWS cloud management features. Until next month, keep in touch with us.

Top 5 Tips on Auto-Scaling DynamoDB

AWS DynamoDB, the internet-scale NoSQL database technology with built-in high availability and scalability, features powerful horizontal scaling capabilities. It also has a unique pricing model, so optimizing its cost is slightly different from other AWS services. Because the optimization is tied to its architectural aspects, there are a few points to take care of while auto-scaling AWS DynamoDB.

The Backdrop:

A NoSQL database is now a common component of the technology stack of e-commerce apps, which require high scalability. Amazon DynamoDB, created by leveraging the best practices at amazon.com, offers ultra-fast access rates because table data is stored on solid-state drives (SSDs) that are managed internally by the platform itself. DynamoDB stores data tables internally as partitions of storage blocks, with automatic data replication across multiple Availability Zones.

Auto-scaling AWS DynamoDB: How to go about it?

To optimize and auto-scale AWS DynamoDB, users need a thorough understanding of its internal data partitioning strategy. Here’s a rule of thumb for computing the approximate number of partitions:

Approximate number of internal DynamoDB partitions = (R + W * 3) / 3000.

where

R = Provisioned Read IOPS per second for a table

W = Provisioned Write IOPS per second for a table
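
As a quick illustration, here is a minimal Python sketch that applies the rule of thumb above; the function name and the sample capacity values are purely illustrative, not part of any AWS API.

```python
import math

def approx_partitions(read_units, write_units):
    """Rule-of-thumb estimate of internal DynamoDB partitions from
    provisioned read (R) and write (W) IOPS, per the formula above:
    (R + W * 3) / 3000, rounded up to the next whole partition."""
    return math.ceil((read_units + write_units * 3) / 3000)

# Example: a table provisioned with 6000 read units and 2000 write units
print(approx_partitions(6000, 2000))  # -> 4
```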

Data tables are automatically split into multiple partitions when the data crosses the partition block size. Tables should be designed so that the table’s partition key, which determines in which partition a data element is stored, distributes key values uniformly across all partitions. It is the architect’s responsibility to ensure that the generated partition keys are spread evenly across the range between the topmost and bottom-most values. If the keys frequently map to a narrower range, one partition will receive far more accesses than the others, and the cost of the table will be driven by the access rate within that single hot partition instead of the average access rate across partitions.

Along with a basic understanding of the partition model, users should also make sure the following conditions are met before attempting to set up auto-scaling for DynamoDB:

  1. The read (R) and write (W) access patterns over time are uniform
  2. The partition keys distribute data uniformly across partitions
  3. The average size of a partition is less than 10 GB

If the above conditions are met, users can confidently try auto-scaling the DynamoDB database. Here are five tips to help you set up auto-scaling for AWS DynamoDB.

  1. Analyze Data Traffic Patterns to Predict and Adjust Limits

Analytics on data traffic can be as simple as a moving average of the data volume over a period of time. Check whether traffic spikes are above or below the moving average: if a spike is above it, increase the provisioned throughput parameters; if traffic is below it, you can confidently lower them.
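
Here is a minimal sketch of that idea in Python, assuming you have already pulled per-interval consumed-capacity samples (for example, from CloudWatch) into a list; the function name, window size, and headroom factor are illustrative and not part of any AWS API.

```python
from statistics import mean

def suggested_capacity(consumed, window=12, headroom=1.2):
    """Compare the latest consumed-capacity sample against a simple
    moving average of the last `window` samples and suggest a new
    provisioned value (read or write units)."""
    moving_avg = mean(consumed[-window:])
    latest = consumed[-1]
    if latest > moving_avg:
        # Spike above the average: raise the provisioned throughput.
        return int(latest * headroom)
    # Traffic at or below the average: safe to lower the provisioning.
    return int(moving_avg * headroom)
```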

  2. Factors of Standard-Deviation as Risk Mitigation

One important factor to consider is risk management. Since the app will throw errors when the provisioned throughput parameters are exceeded, we have to account for unpredictable surges in data traffic. The risk measure to adopt is a standard technique in financial trading algorithms: add an extra quantity equal to one or two standard deviations of the traffic to the mean traffic value before setting the thresholds. This buffer absorbs unexpected data traffic and avoids throttling errors.
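
A small sketch of that buffer calculation, assuming the same kind of consumed-capacity samples as in the previous tip; the function name and the multiplier `k` are illustrative.

```python
from statistics import mean, stdev

def threshold_with_buffer(samples, k=2):
    """Set the provisioned-throughput threshold at the mean observed
    traffic plus k standard deviations, so an unexpected surge does
    not immediately cause throttling errors."""
    return mean(samples) + k * stdev(samples)
```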

  3. Ensure Uniform Key Distribution and Adjust Threshold Limits after Partitions are Created

At the design level, the architect should ensure that the partition keys generate values that are evenly distributed and do not skew towards any particular value in the key range.
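
As a rough, purely illustrative check, you can hash a sample of candidate partition-key values and count how evenly they spread across a notional number of partitions; DynamoDB's real partitioning hash is internal to the service, so this only approximates the skew.

```python
from collections import Counter

def key_skew(keys, partitions=4):
    """Count how a sample of partition-key values would spread across
    `partitions` buckets. A heavily skewed counter suggests one
    partition will absorb most of the traffic and drive up cost."""
    return Counter(hash(k) % partitions for k in keys)

# Example: user IDs as candidate partition keys
print(key_skew([f"user-{i}" for i in range(10000)], partitions=4))
```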

  4. Cache Popular Items in Memory

The architect should also identify the most frequently accessed key-value pairs and move those values to an in-memory cache outside DynamoDB so that cost can be reduced drastically.
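
Below is a minimal read-through cache sketch using boto3; the table and key names are hypothetical, and in production you would more likely use a shared cache such as ElastiCache rather than an in-process dictionary.

```python
import boto3

dynamodb = boto3.client("dynamodb")
_cache = {}  # simple in-process cache; illustrative only

def get_product(product_id):
    """Serve hot items from memory, falling back to DynamoDB."""
    if product_id in _cache:
        return _cache[product_id]          # hot item served from memory
    item = dynamodb.get_item(
        TableName="ProductCatalog",        # hypothetical table name
        Key={"ProductId": {"S": product_id}},
    ).get("Item")
    _cache[product_id] = item              # cache for subsequent reads
    return item
```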

  5. Scale Up Faster and Scale Down Slower

While setting up auto-scaling, it is important to understand that AWS restricts scale-downs to just four per table per day. Hence, scale up quickly whenever thresholds are breached, but scale down slowly, only after three or four additional lower-level thresholds have been breached, to ensure we do not exceed the scale-down limit of four per day.
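
The actual capacity change can be applied with the DynamoDB UpdateTable API; here is a minimal boto3 sketch, with the rationing of scale-downs left to the caller.

```python
import boto3

dynamodb = boto3.client("dynamodb")

def set_capacity(table_name, read_units, write_units):
    """Apply new provisioned throughput to a table. Scale-up calls can
    be made aggressively; scale-down calls should be rationed because
    AWS limits the number of decreases per table per day."""
    dynamodb.update_table(
        TableName=table_name,
        ProvisionedThroughput={
            "ReadCapacityUnits": read_units,
            "WriteCapacityUnits": write_units,
        },
    )
```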

To Wrap Up:

AWS DynamoDB is a critical component for many large enterprises, especially e-commerce players, because the product catalog is the most accessed information resource of the system. Managing it optimally, without overpaying, is a task for DevOps. If you are a DevOps professional, take a quick look at Botmetric to see how it can ease your day-to-day operations.

5 Salient AWS DynamoDB Features Every DevOps Professional Must Know

Today, web and mobile applications can go viral at any moment, and DevOps teams have to be prepared for such contingencies. While it is easy to spawn more server instances using AWS EC2, databases can become bottlenecks in auto-scaling scenarios. AWS DynamoDB is an internet-scale database that can help DevOps scale quickly. How? AWS DynamoDB stores data on solid-state drives (SSDs) and replicates it synchronously across multiple AWS Availability Zones in an AWS Region to provide built-in high availability and data durability.

As a fully managed NoSQL database service featuring fast performance at any scale, AWS DynamoDB spares developers the challenges of provisioning hardware and software, setting up and configuring a distributed database cluster, and managing ongoing cluster operations. Above all, it handles all the complexities of scaling, partitioning, and re-partitioning your data over more machine resources to meet your I/O performance requirements. Further, it automatically replicates data across multiple Availability Zones to meet stringent availability and durability requirements.

Making Sense of the Benefits Offered by AWS DynamoDB

AWS DynamoDB was actually born out of AWS’ need for a highly reliable, ultra-scalable key/value database. Chiefly, this database was targeted at use cases that were core to the Amazon e-commerce operation, such as the shopping cart and session service. Here are the five salient Amazon DynamoDB features every DevOps professional must know:

1. AWS DynamoDB Scalability

Today’s web-based applications often encounter database scaling challenges when faced with growth in users, traffic, and data. To address these scalability issues, AWS offers DynamoDB, a fully managed NoSQL database service designed for fast performance at any scale.

With Amazon DynamoDB, developers building scalable cloud-based applications can start with a small capacity and then increase the request capacity of specific tables as their app grows in popularity.

Their tables can also scale up to the pre-configured limit. Developers can typically achieve data retrieval in single-digit milliseconds. DevOps need not worry about managing high availability and data durability because Amazon DynamoDB automatically replicates data synchronously across multiple AWS Availability Zones (AZs).

2. AWS DynamoDB Pricing Mechanism

AWS DynamoDB lets DevOps configure read and write units in seconds, since the pricing scheme is based on read and write capacity units per second. A DynamoDB table is provisioned with a number of write units and a number of read units allocated to it.
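
For example, read and write units are set through the ProvisionedThroughput parameter when a table is created (and can later be changed with UpdateTable); the table and attribute names in this boto3 sketch are illustrative.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Provisioning is expressed in read and write capacity units per second.
dynamodb.create_table(
    TableName="Orders",
    AttributeDefinitions=[{"AttributeName": "OrderId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "OrderId", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 50},
)
```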

3. AWS DynamoDB Data Model and Indexing

In AWS DynamoDB, all tables are collections of items, and each item is in turn a collection of attributes. Each table must have one attribute as its primary key. There are two categories of data types: scalar and multi-valued. A scalar type can be a string, a number, or a binary value, whereas a multi-valued type can be a string set, a number set, or a binary set. Every table is indexed by its primary key.
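
A minimal boto3 sketch writing one item with scalar and multi-valued attributes, using the low-level typed notation; the table and attribute names are illustrative.

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.put_item(
    TableName="ProductCatalog",
    Item={
        "ProductId": {"S": "p-100"},                   # string scalar (primary key)
        "Price":     {"N": "24.99"},                   # number scalar (sent as a string)
        "Tags":      {"SS": ["sale", "new-arrival"]},  # string set (multi-valued)
    },
)
```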

Along with the normal indexing mechanism for keys, AWS DynamoDB also provides a hash-and-range key mechanism for indexing a range of values. In earlier versions of DynamoDB, indexes were mandatory and had to be specified at table creation time. Now, Global Secondary Indexes (GSIs) have been introduced to index alternate keys as well.
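
Once a GSI exists, the alternate key can be queried directly by naming the index; the index, table, and attribute names in this boto3 sketch are assumptions for illustration.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Query an alternate key (customer + order date range) through a GSI.
response = dynamodb.query(
    TableName="Orders",
    IndexName="CustomerId-OrderDate-index",
    KeyConditionExpression="CustomerId = :c AND OrderDate BETWEEN :d1 AND :d2",
    ExpressionAttributeValues={
        ":c":  {"S": "cust-42"},
        ":d1": {"S": "2016-11-01"},
        ":d2": {"S": "2016-11-30"},
    },
)
print(response["Items"])
```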

4. Data Partition and Replication

DynamoDB stores data in partitions. A partition is an allocation of storage for a table on solid-state drives (SSDs). Partitions are automatically replicated across multiple AZs within an AWS Region. These partitions are self-managed by DynamoDB and are transparent to users. The user’s database table remains available according to the provisioned throughput requirements, such as read and write units per second. Moreover, the indexes created for the tables are also stored in these partitions and are completely transparent to the user.

5. AWS DynamoDB APIs

DynamoDB uses JSON as a transport format, not as a storage format. The AWS SDKs use JSON to send data to DynamoDB, and DynamoDB responds with JSON, but DynamoDB does not persist data in JSON format. According to AWS, DynamoDB provides a low-level API that allows developers to interact with the database; the AWS SDKs construct low-level DynamoDB API requests on your behalf and process the responses from DynamoDB. To call the AWS DynamoDB APIs directly, every HTTP(S) request must be correctly formatted and carry a valid digital signature. The API can be used to put and get items from the database.
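
For example, a low-level GetItem call made through boto3 is sent as a signed JSON request, and the service answers with JSON in DynamoDB's typed notation; the table and key names here are illustrative.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# The SDK turns this call into a correctly formatted, signed HTTPS request
# whose body is JSON; the response is JSON using DynamoDB's typed notation.
response = dynamodb.get_item(
    TableName="ProductCatalog",
    Key={"ProductId": {"S": "p-100"}},
)
# Typical response fragment (typed JSON):
# {"Item": {"ProductId": {"S": "p-100"}, "Price": {"N": "24.99"}}}
print(response.get("Item"))
```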

To Wrap Up

AWS DynamoDB is handy for developers who want to implement e-commerce-scale, data-centric apps. It is also a great tool for DevOps as it facilitates auto-scaling and reduces the complexity of managing high availability and scaling for peak usage times.

If you are a DevOps person, check out how Botmetric can track unused AWS DynamoDB tables and help you optimize your overall AWS costs. Take up a 14-day Botmetric trial to find out how. And do share your thoughts and experience with us on Twitter, LinkedIn, and Facebook.