Top 7 Blind Spots to Watch Out for in Your Public Cloud Strategy
A May 2016 survey found that 51% of surveyed organizations took over a year to plan their public cloud strategy, and a few took up to three years. It's understandable why it takes so long and why so much detailing goes into it: understanding the precise costs and challenges the cloud will introduce, knowing how to make the public cloud approach work for the organization, deciding which tool and technology choices will support the cloud adoption, and so on.
Despite a detailed, pragmatic approach to building a public cloud strategy, a majority of organizations still stumble at some point. Our cloud geeks attribute this to 'blind spots' that get overlooked, either due to complexity or lack of awareness. In some cases, these blind spots can soon send the team back to the boardroom.
To help you build a seamless and successful public cloud strategy, we've collated the top seven blind spots that smart companies watch out for during their cloud-first and cloud-ready journey.
- Not calculating the ‘REAL’ Total Cost of Ownership (TCO)
Many companies have realized that the real benefit of cloud computing is not the cost savings it can bring, but the agility and time-to-market. And the prominent factor in achieving that nimbleness is a sound TCO model. However, many companies never define the actual TCO. They go by raw cost data alone, which may save some operational expense in the short term but not in the long term. As a result, they miss the market when it comes to IT's ability to deliver real business value.
The way forward is to adopt TCO models that also identify gray areas and take them into account during calculations. Above all, these models must capture the actual value of cloud-based technology, and they should factor in critical elements such as the existing infrastructure in place, the existing skills and workforce involved, the cost of all cloud services while in operation, the value of agility and time-to-market, future capital expenditures, and the cost of risk around compliance issues.
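To make the idea concrete, here is a minimal sketch of a TCO comparison that folds in those gray areas rather than raw infrastructure cost alone. All figures and category names are hypothetical, for illustration only; a real model would use your own landscape's numbers.

```python
# A toy TCO comparison. Credits (e.g. the value of agility and
# time-to-market) are modeled as negative costs. All numbers are
# hypothetical annual figures, for illustration only.

def total_cost_of_ownership(costs: dict) -> float:
    """Sum annual cost components; negative values are credits."""
    return sum(costs.values())

on_prem = {
    "infrastructure": 120_000,   # servers, storage, networking
    "staffing": 90_000,          # existing skills and workforce
    "future_capex": 40_000,      # hardware refresh provisions
    "compliance_risk": 10_000,   # estimated cost of compliance exposure
}

cloud = {
    "service_fees": 100_000,     # cloud services while in operation
    "staffing": 60_000,          # retraining / smaller ops footprint
    "migration": 25_000,         # one-off transition cost, amortized
    "compliance_risk": 8_000,
    "agility_value": -30_000,    # faster time-to-market, as a credit
}

print(f"On-prem TCO: ${total_cost_of_ownership(on_prem):,.0f}")
print(f"Cloud TCO:   ${total_cost_of_ownership(cloud):,.0f}")
```

The point of the sketch is the shape of the model, not the numbers: once agility, workforce, future capex, and risk are represented as line items, the comparison stops being a simple compute-price check.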
- Not knowing who owns the data in the cloud, and how to recover it
Understanding the terms of a cloud service is paramount. Agreed. But it is even more critical to know who owns the data in the system. The key lies in carefully checking the terms and conditions of the contract and ensuring the data policy includes all the fine print confirming that you, the customer, own the data.
By doing so, you, as a user, can own and recover the data on-demand. Above all, your service provider cannot access, use, or share your data in any shape or form without your written permission.
- Not having strong Service Level Agreements (SLAs)
While you focus on putting the data policy and terms of cloud service in place, you should not shift the spotlight away from SLAs. A strong SLA goes a long way in monitoring, measuring, and managing how the cloud service provider's services are performing. The essence lies in working closely with lawyers who can help define strong contracts, determine what you want from the service, and ensure it is expressed in the contract.
If you still find this less important, consider this scenario: you have SLAs with AWS but no idea how the SaaS offering you run on it is performing. That's because AWS gives you figures for the performance of the infrastructure, not the software.
- Not making complete use of elasticity of the cloud
Many enterprises fail to develop a cloud strategy that is linked to business outcomes, because they miss out on leveraging the real benefit of the elasticity the cloud offers. They purchase instances in bulk to handle peak demand, just as they did with on-premise IT infrastructure, and then turn a blind eye to idle resources that could easily be optimized. They also overlook the fact that 'anything and everything' on the cloud can be codified, and that APIs can be used to automate cloud tasks completely.
Even where APIs are used, weak APIs and API mismanagement can take a toll on the cloud's elasticity. Essentially, going NoOps, with efficient APIs and API management, is the way forward.
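As a small illustration of codifying an elasticity check, the sketch below flags instances whose average CPU stays below a threshold so they can be stopped or rightsized. In practice the metrics would come from a provider API (for example, CloudWatch via boto3); here the instance IDs and CPU figures are hypothetical so the logic stays self-contained.

```python
# A minimal idle-resource check: given average CPU utilization per
# instance, return the instances that look like candidates for
# stopping or downsizing. The data below is hypothetical.

def find_idle_instances(cpu_averages: dict, threshold: float = 10.0) -> list:
    """Return instance IDs whose average CPU (%) is below the threshold."""
    return sorted(i for i, cpu in cpu_averages.items() if cpu < threshold)

# Hypothetical 7-day CPU averages keyed by instance ID.
metrics = {"i-0a1": 2.5, "i-0b2": 47.0, "i-0c3": 6.1, "i-0d4": 88.3}

idle = find_idle_instances(metrics)
print("Candidates to stop or rightsize:", idle)
```

Run on a schedule and wired to the provider's stop/resize APIs, a check like this is exactly the kind of task that can be codified and automated instead of handled by hand.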
- Not appointing a competent DevOps team
While Continuous Development, Continuous Testing, Continuous Integration, and Continuous Deployment play a significant role in bringing agility into the business process, the people working at each of these stages of Continuous Delivery (the end goal of DevOps) contribute equally to the success of the cloud. Organizations need to identify the right talent and 'people-proof' their DevOps team to make it strong, essentially ensuring that no milestone is blocked by a skills shortage.
The best way forward is to go the NoOps way, so that Ops teams can spend more time innovating on the cloud rather than just operating it.
- Not able to avoid cloud service provider lock-in
To date, vendor lock-in remains one of the major roadblocks to success in the cloud. To this end, a majority of IT leaders have consciously chosen not to invest fully in the cloud because, say experts, they value long-term vendor flexibility over long-term cloud success.
One of the best approaches recommended by cloud experts is to avoid tying business processes and data to a single cloud service provider. Another, experts say, is not to keep one foot out of the cloud and in on-premise, but to embrace the cloud completely in a new way: by managing IT with governance models, instituting cost-control measures and processes, and so on.
- Not bridging the cloud security and compliance gaps properly
Under the public cloud's shared responsibility model, users are responsible for their data security and for access management across all cloud resources. While building a cloud strategy, one should respect the fact that the freedom of elasticity the cloud offers comes with greater responsibility. And this responsibility can be discharged only by bridging the cloud security and compliance gaps correctly. How? By adopting 'Continuous Security' and making a habit of regular audits and backups, preferably automated.
The Final Word
Today's public cloud is all about driving business innovation and agility, and enabling new processes and insights that were previously impossible. For that to happen, a practical public cloud strategy is the cornerstone: one based on your own unique landscape and requirements while also taking all the critical blind spots into account. That's our take. What's your public cloud strategy for 2017? Share your learnings and cloud stories with us on Twitter, Facebook, and LinkedIn.