There are many compelling reasons to migrate applications and workloads to the cloud, from scalability to easier maintenance, but moving data is never without risk. When IT systems or applications go down, the cost to the business can be enormous: 98% of organizations surveyed by ITIC put the cost of a single hour of downtime at over $100,000.
Mistakes are easy to make in the rush to compete. There’s a lot that can go wrong, particularly if you don’t plan properly.
“Through 2022, at least 95% of cloud security failures will be the customer’s fault,” says Jay Heiser, research vice president at Gartner.
If you want to avoid being in that group, then you need to know the pitfalls in advance. To that end, here are seven traps that companies often fall into.
1. No data protection strategy
It’s vital that your company data is safe both at rest and in transit. You need to be certain that it’s recoverable if disaster strikes. Consider the threat of corruption, ransomware, accidental deletion, and unrecoverable failures in cloud infrastructure. If the worst should happen, and you expect more than an apology or a refund from your provider, then a coherent, durable data protection strategy is essential. Put it to the test to make sure it works.
2. No data security strategy
It’s common practice for the data in a data center to be commingled and colocated on shared devices with countless unknown entities. Cloud vendors may promise that your data is kept separately, but regulatory concerns demand that you make sure. Think about access control, because basic cloud file services often fail to provide the same user authentication or granular control as traditional IT systems. The Ponemon Institute puts the average global cost of a data breach at $3.6 million. You need a multi-layered data security and access control strategy to block unauthorized access and ensure your data is safely and securely stored in encrypted form wherever it may be.
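One way to keep data opaque to anyone sharing the provider's hardware is to encrypt it on your side before it is uploaded, so the cloud only ever stores ciphertext. Below is a minimal sketch using the widely used Python `cryptography` package's Fernet recipe; the plaintext and the idea of an "upload" step are illustrative assumptions, not a specific vendor workflow.

```python
# Client-side encryption sketch: data is encrypted before it leaves your
# premises, so the cloud provider only ever holds opaque ciphertext.
from cryptography.fernet import Fernet

# In practice the key would live in a key-management service, never in code.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"customer-records.csv contents"   # hypothetical payload
ciphertext = cipher.encrypt(plaintext)

# ciphertext is what you would hand to the cloud SDK's upload call;
# only holders of the key can recover the original bytes.
recovered = cipher.decrypt(ciphertext)
assert recovered == plaintext
```

The important design point is that key custody stays with you: even if the provider's shared storage is breached, the attacker gets ciphertext without the key.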
3. No rapid data recovery strategy
With storage snapshots and previous versions managed by dedicated NAS appliances, rapid recovery from data corruption, deletion, or other potentially catastrophic events was possible. But few cloud-native storage systems provide snapshotting or offer easy rollback to previous versions, leaving you reliant on current backups. You need flexible, instant storage snapshots that provide rapid recovery and rollback capabilities for business-critical data and applications.
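The snapshot-and-rollback idea can be shown in miniature. Real NAS or cloud snapshots work at the block level and are far more efficient; this is just an application-level sketch of the concept, with a hypothetical key/value store standing in for your data.

```python
# Minimal sketch of point-in-time snapshots with rollback.
import copy

class SnapshotStore:
    def __init__(self):
        self.data = {}          # live application data (hypothetical)
        self._snapshots = []    # immutable point-in-time copies

    def snapshot(self):
        """Capture the current state and return a snapshot id."""
        self._snapshots.append(copy.deepcopy(self.data))
        return len(self._snapshots) - 1

    def rollback(self, snap_id):
        """Restore the state captured by an earlier snapshot."""
        self.data = copy.deepcopy(self._snapshots[snap_id])

store = SnapshotStore()
store.data["config"] = "v1"
snap = store.snapshot()          # take a snapshot before risky changes
store.data["config"] = "corrupted"
store.rollback(snap)             # instant recovery, no backup restore
assert store.data["config"] == "v1"
```

The point is the recovery time: rolling back to a snapshot is near-instant, where restoring from a nightly backup can take hours and lose a day's work.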
4. No data performance strategy
A shared, multi-tenant infrastructure can lead to unpredictable performance and many cloud storage services lack the facility to tune performance parameters. Too many simultaneous requests, network overloads, or equipment failures can lead to latency issues and sluggish performance. Look for a layer of performance control for your data that enables all your applications and users to get the level of responsiveness that’s expected. You should also ensure that it can readily adapt as demand and budgets grow over time.
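One simple form of the performance-control layer described above is per-tenant throttling, so a single noisy tenant or runaway job cannot starve everyone else. Here is a minimal token-bucket sketch; the rates and capacities are arbitrary illustration values.

```python
# Token-bucket throttle sketch: each tenant gets a bucket that refills at
# a fixed rate, capping how many requests it can issue in a burst.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; otherwise reject the request."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=2)   # 2-request burst, 5/sec sustained
results = [bucket.allow() for _ in range(3)]
# The first two requests fit the burst; the third is throttled.
assert results == [True, True, False]
```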
5. No data availability strategy
Hardware fails, people make mistakes, and outages are an unfortunate fact of life. It’s best to plan for the worst: create replicas of your most important data and establish a means to switch over quickly whenever sporadic failure comes calling. Look for a cloud or storage vendor willing to provide an SLA guarantee for your business. Where necessary, create a failsafe option, with a secondary storage controller, to ensure your applications do not experience any outage.
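At the application layer, the switch-over can be as simple as falling back to a replica endpoint when the primary fails. The sketch below uses hypothetical stand-in functions for the primary and replica reads; a real implementation would also add health checks and retry limits.

```python
# Failover sketch: try the primary endpoint, fall back to the replica.
def read_with_failover(read_primary, read_replica):
    try:
        return read_primary()
    except ConnectionError:
        # Primary is unreachable: serve from the replica so the
        # application stays available during the outage.
        return read_replica()

def failing_primary():
    raise ConnectionError("primary storage unreachable")  # simulated outage

value = read_with_failover(failing_primary, lambda: "data-from-replica")
assert value == "data-from-replica"
```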
6. No multi-cloud interoperability strategy
As many as 90% of organizations will have adopted a hybrid infrastructure by 2020, according to Gartner analysts. There are plenty of positive driving forces as companies look to optimize efficiency and control costs, but you must properly assess your options and the impact on your business. Consider how easily you could switch vendors in the future, and how much code would have to be rewritten. Vendors want to entangle you with proprietary APIs and services, but you need to keep your data and applications multi-cloud capable to stay agile and preserve choice.
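A common defensive pattern is to keep storage access behind an interface you own, so moving providers means writing one new adapter rather than rewriting every caller. The sketch below uses a hypothetical in-memory adapter in place of a real vendor SDK wrapper.

```python
# Provider-neutral storage interface sketch: callers depend on ObjectStore,
# never on a vendor SDK, so adapters can be swapped without code rewrites.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in adapter; a real one would wrap a specific cloud SDK."""
    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        self._blobs[key] = data

    def get(self, key):
        return self._blobs[key]

def archive_report(store: ObjectStore, report: bytes) -> None:
    # Application code sees only your interface, not a vendor API.
    store.put("reports/latest", report)

store = InMemoryStore()
archive_report(store, b"Q3 numbers")
assert store.get("reports/latest") == b"Q3 numbers"
```

The trade-off is that your interface can only expose features all candidate providers support, which is exactly the discipline that keeps you portable.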
7. No disaster recovery strategy
A simple mistake, such as a developer pushing a code drop to a public repository and forgetting to remove the company’s cloud access keys from the code, could be enough to compromise your data. Maybe your provider will be hacked and lose your data and backups. It’s critically important to keep redundant, offsite copies of everything required to fully restart your IT infrastructure in the event of a disaster or successful break-in.
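The leaked-key scenario is cheap to guard against with an automated scan before code is pushed. This sketch matches the well-known AWS access key ID format (`AKIA` followed by 16 uppercase alphanumerics) as one example pattern; a real pre-commit hook would cover other providers and secret formats too.

```python
# Pre-commit style scan sketch: flag strings that look like cloud access
# key IDs before code reaches a public repository.
import re

# AWS access key IDs start with "AKIA"; other providers need their own patterns.
KEY_PATTERN = re.compile(r"AKIA[0-9A-Z]{16}")

def find_leaked_keys(source: str) -> list:
    """Return any substrings that look like access key IDs."""
    return KEY_PATTERN.findall(source)

code = 'client = connect(key="AKIAABCDEFGHIJKLMNOP")'   # hypothetical leak
leaks = find_leaked_keys(code)
assert leaks == ["AKIAABCDEFGHIJKLMNOP"]   # the commit should be blocked
```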
The temptation to cut corners and keep costs down with data management is understandable, but it’s short-term thinking that could end up costing you a great deal more in the long run. Take the time to craft the right strategy and you can drastically reduce the risk.
This article is published as part of the IDG Contributor Network.