Blue Whale Docker on AWS as CaaS, and How Enterprises Can Use it!
We live in an exciting era of data centers and cloud operations. In these data centers, innovative technologies such as Docker containers eliminate the superfluous processes that bog machines down and enable servers to live up to their potential. With the availability of Docker Datacenter (DDC) as Container-as-a-Service (CaaS), enterprise excitement is higher than ever.
Making Sense of Docker Datacenter (DDC) as Container-as-a-Service (CaaS)
As you may know, containers make it easy to develop, deliver, and deploy applications that can be brought up and torn down in a matter of seconds. This flexibility makes them very useful for DevOps teams automating continuous integration and container deployment.
The Docker Datacenter offering, which can be deployed on-premises or in the cloud, makes it even easier for enterprises to set up their own internal CaaS environments. Put simply, the package helps integrate Docker into enterprise software delivery processes.
Basically, the CaaS platform provides both container and cluster orchestration. And with the availability of cloud templates pre-built for the DDC, developers and IT operations staff can move Dockerized applications not only into the cloud but also into and out of their premises.
Below is a brief look at the architecture that DDC offers:
A pluggable architecture provides flexibility with regard to compute, network, and storage, which are generally part of a CaaS infrastructure. Moreover, this pluggable architecture provides flexibility without disrupting application code, so enterprises can leverage existing technology investments with DDC. Docker Datacenter consists of integrated solutions spanning open source and commercial software, and the integration between them includes full Docker API support, validated configurations, and commercial support for the DDC environment. The open APIs allow DDC CaaS to integrate easily with an enterprise's existing systems, such as LDAP/AD, monitoring, logging, and more.
Before we move to comprehending Docker on AWS, it is advisable to have a look at the pluggable Docker Architecture.
As we can see from the image, DDC is a combination of several Docker projects:
- Docker Universal Control Plane [UCP]
- Docker Trusted Registry [DTR]
- Commercially Supported Docker Engine [CSE]
The Universal Control Plane (UCP)
The UCP is a cluster management solution that can be installed on-premise or on a virtual private cloud, says Docker. The UCP exposes the standard Docker API so that you can continue to use the tools that you already know to manage an entire cluster. With the Docker UCP, you can still manage the nodes of your infrastructure as apps, containers, networks, images, and volumes.
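For example, once you download a client certificate bundle from UCP's web UI, the standard Docker CLI can be pointed at the whole cluster. The commands below are an illustrative sketch; the bundle filename follows UCP's usual naming and will vary per user:

```shell
# Unpack the UCP client bundle (downloaded from your user profile in the UCP web UI)
unzip ucp-bundle-admin.zip -d ucp-bundle
cd ucp-bundle

# env.sh exports DOCKER_HOST, DOCKER_TLS_VERIFY, and DOCKER_CERT_PATH,
# redirecting the familiar Docker CLI to the UCP controller
source env.sh

# The same commands you use locally now operate cluster-wide
docker info
docker ps
```

This is exactly the "tools that you already know" point: no new CLI to learn, only a different endpoint.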
The Docker UCP has its own built-in authentication mechanism, and it supports LDAP and Active Directory as well, along with role-based access control (RBAC). This ensures that only authorized users can access and make changes to the cluster.
In addition, the UCP, which is a containerized application, allows you to manage a set of nodes that are part of the same Docker Swarm. The core component of the UCP is a globally-scheduled service called ‘ucp-agent.’ Once this service is running, it deploys containers with other UCP components and ensures that they continue to run.
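Because the UCP components are themselves containers, you can check on them with an ordinary Docker command on a controller node (a sketch; exact container names vary by UCP version):

```shell
# List UCP's own containers on a controller node; their names carry a ucp- prefix
docker ps --filter "name=ucp-"
```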
Docker Trusted Registry (DTR)
DTR allows you to store and manage your Docker images, either on-premise or in your virtual private cloud, to support security or regulatory compliance requirements. Docker security is one of the biggest challenges that developers face when it comes to enterprise adoption of Docker. The DTR uses the same authentication mechanism as the Docker UCP. It has a built-in authentication mechanism and integrates with LDAP. It also supports RBAC. This allows enterprises to implement individualized access control policies as necessary.
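In day-to-day use, pushing an image to DTR looks just like pushing to any Docker registry. The registry address and repository path below are placeholders:

```shell
# Authenticate against the DTR instance (hypothetical address)
docker login dtr.example.com

# Tag a locally built image with the DTR repository path, then push it
docker tag my-app:1.0 dtr.example.com/engineering/my-app:1.0
docker push dtr.example.com/engineering/my-app:1.0
```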
Commercially Supported Docker Engine (CSE)
CSE is essentially a standard Docker Engine with commercial support and additional orchestration features.
For many enterprises, it may seem that the more components an architecture has, the more complex it becomes, and that deploying DDC will therefore be very painful. That's a myth, thanks to AWS and Docker: they have already prepared a recipe to deploy the entire DDC on AWS following AWS best practices. The recipe comes as a CloudFormation template designed with enhanced security in mind. The diagram below shows the overall commercial architecture.
For a detailed view of resource utilization, enterprises just need to check out the CloudFormation stack architecture shown below. It may look complex, but it is the most secure approach for enterprise production. To get the infrastructure ready, one has to launch the stack.
The CloudFormation template creates the following resources:
- Creates a new VPC, private and public subnets in different AZs, ELBs, NAT gateways, internet gateways, and Auto Scaling groups, all based on AWS best practices
- Creates and configures an S3 bucket for DDC to use for certificate backup and DTR image storage
- Deploys 3 UCP controllers across multiple AZs within the VPC and creates a UCP ELB with preconfigured HTTP health checks
- Deploys a scalable cluster of UCP nodes, and backs up the UCP root CAs to S3
- Deploys 3 DTR replicas across multiple AZs within the VPC, and creates a DTR ELB with preconfigured health checks
- Creates a jumphost EC2 instance for SSH access to the DDC nodes
- Creates a UCP nodes ELB with preconfigured health checks (TCP port 80), which can be used for your applications deployed on UCP
- Deploys NGINX+Interlock to dynamically register your application containers
- Creates a CloudWatch log group (called DDCLogGroup) and allows log streams from DDC instances; it also automatically logs the UCP and DTR installation containers
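For those who prefer the terminal over the console, the same stack can be driven with the AWS CLI. The stack name, template URL, and key pair below are placeholders, not the real template location:

```shell
# Launch the DDC CloudFormation stack (template URL is a placeholder)
aws cloudformation create-stack \
  --stack-name docker-datacenter \
  --template-url https://s3.amazonaws.com/my-bucket/ddc-template.json \
  --parameters ParameterKey=KeyName,ParameterValue=my-keypair \
  --capabilities CAPABILITY_IAM

# Block until the stack reaches CREATE_COMPLETE
aws cloudformation wait stack-create-complete --stack-name docker-datacenter

# Read the UCP and DTR login URLs from the stack outputs
aws cloudformation describe-stacks --stack-name docker-datacenter \
  --query 'Stacks[0].Outputs'
```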
Clicking Launch Stack redirects the user to the AWS console, where the CloudFormation template page appears with the Amazon S3 template URL already filled in.
Once the stack status shows CREATE_COMPLETE, click on Outputs to get the login URLs of UCP and DTR. Refer to the image below:
Visiting the UCP URL brings up a secure login page, as shown below:
A beautiful, detailed Control Panel will appear as shown below:
And a DTR panel will look something like this:
Voilà! Docker Datacenter (as CaaS) is now ready to use on AWS.
As a prospective DDC user, you may be wondering why anyone would choose DDC for application deployment. Here's why: the technology that makes an application more robust and nimble will be considered king, and with the adoption of a microservices architecture, your application will be all the more competitive.
To set the context: nowadays, Docker is considered one of the best containerization technologies for deploying microservices-based applications. Implementing it involves a few key steps:
- Package the microservice as a (Docker) container image.
- Deploy each service instance as a container.
- Scale by changing the number of container instances.
- Build, deploy, and start microservices, which is much faster than with regular VMs.
- Write a Docker Compose file that declares all the images and their connectivity, then build it.
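Under swarm mode, which UCP manages for you, the first few steps above can be sketched with the Docker CLI as follows (image and service names are examples):

```shell
# 1. Package the microservice as a container image
docker build -t my-service:1.0 .

# 2. Deploy each service instance as a container (2 replicas to start)
docker service create --name my-service --replicas 2 -p 8080:8080 my-service:1.0

# 3. Scale by changing the number of container instances
docker service scale my-service=5
```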
Many enterprises are now considering refactoring their existing Java and C++ legacy applications by dockerizing them and deploying them as containers, says Docker. This technology was built to provide a distributed application deployment architecture that can manage workloads and can be deployed in both private and public cloud environments.
In this way, DDC solves many of the problems faced by today's enterprises, among them BBC, Splunk, New Relic, Uber, PayPal, eBay, GE, Intuit, the New York Times, and Spotify.
For demo purposes, Docker provides users with a sample microservices application composed of several services:
- A Python web app that lets you vote between two options
- A Redis queue that collects new votes
- A Java worker that consumes votes and stores them in the database
- A Postgres database backed by a Docker volume
- A Node.js web app that shows the results of the voting in real time
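A Compose file for such a voting application might look roughly like this. The image names follow Docker's public example-voting-app repositories, but treat this as a sketch and verify them before use:

```yaml
version: "2"
services:
  vote:        # Python web app that lets you vote between two options
    image: docker/example-voting-app-vote
    ports:
      - "5000:80"
  redis:       # Redis queue that collects new votes
    image: redis:alpine
  worker:      # Java worker that consumes votes and stores them in the database
    image: docker/example-voting-app-worker
  db:          # Postgres database backed by a Docker volume
    image: postgres:9.4
    volumes:
      - db-data:/var/lib/postgresql/data
  result:      # Node.js web app that shows the results in real time
    image: docker/example-voting-app-result
    ports:
      - "5001:80"
volumes:
  db-data:
```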
Here's how: in the UCP Resources view, click on Applications and then on + Deploy Compose.yml.
Enter the name of the application and write the Docker Compose YAML, or upload the file, then click Create. After some time you will see some logs, and the application will be deployed.
In the image below, we can see that 5 containers were spun up when the application was deployed; each container runs its own service or worker.
To hit the application from a web browser, find the exact IP and port, as in the image below:
With that done, the application will be up and running. Refer to the illustration below:
If you want to modify the code, you can do so from the web UI, which also gives you access to the container CLI.
To Wrap Up:
DDC as CaaS is changing how enterprises deliver containerized applications. The idea is that Docker wants to make it easy for enterprises to set up their own internal Containers-as-a-Service (CaaS) operations.
Are you an enterprise looking to leverage DDC as CaaS on AWS? As a Premier AWS Consulting Partner, we at Minjar have your back! Do share your comments and thoughts with us on Twitter, Facebook, or LinkedIn. You can drop a comment below too.
Latest posts by Nikit Swaraj
- Blue Whale Docker on AWS as CaaS, and How Enterprises Can Use it! - January 19, 2017