Many companies moving from a private cloud to a public cloud service are unaware of the increased threats they face.
Because most companies that have followed relatively traditional IT strategies are now considering putting mission-critical applications and data into the public cloud, it’s worth examining the differences in private versus public clouds when it comes to threats that applications and data encounter. When I talk to customers about the differences, I use a metaphor of what’s happening onstage versus backstage in these two deployment scenarios.
In private data centers and public clouds, I define onstage as all the virtual machines (VMs) and containers a company runs inside the data center. We tend to protect what’s onstage in two ways: first by examining the inner behavior of each workload and second by watching the traffic entering and exiting the workload. The security industry supplies all manner of agents for the first use and provides physical and virtual firewalls and switches for the second.
In private data centers, what I define to be backstage is your hypervisor or container operating system, your storage, your server management (the Intelligent Platform Management Interface and the like), and your firewall, switches, and routers. Many sophisticated attacks involve backstage components because most defenders don’t think about needing to detect attacks on them. The Shadow Brokers and WikiLeaks data dumps made this clear: attacks against switches, firewalls, and routers figure prominently in nation-states’ offensive cyber arsenals.
In public clouds, there’s a lot more backstage activity, and even some of the same things that are backstage in private clouds expose a substantially larger attack surface in public clouds.
New Threat Vectors Emerge in the Public Cloud
Take storage as an example. In your private data center, you may have a network-attached storage (NAS) system. It’s your NAS, and no one can get to it without getting past your perimeter firewall first, so threats can be contained. Of course, an attacker could first compromise some end-user system in your network and then pivot to the NAS, copy data from it, and send it to an external data drop, but the exfiltration of data would be seen by your firewall.
Now consider storage in Amazon Web Services. You store data in Amazon’s Simple Storage Service (S3). But if you fail to configure things correctly, you might expose the S3 bucket with your data to anyone on the Internet. That’s what happened in July to some Verizon data.
My point is that in the cloud, your virtual firewall can’t protect you from this type of threat because this traffic is outside your virtual network. S3 is effectively a multitenant NAS from which anyone knowing the right URL (as in the Verizon case) or possessing the right API key (in cases when the S3 bucket is better protected) can copy information.
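To make the misconfiguration concrete, here is a minimal sketch of how one might check a bucket policy document for anonymous read access. The policy shown is a hypothetical example of the kind of overly permissive configuration behind incidents like the Verizon leak; in practice you would fetch the policy from AWS (for example, via an API call) rather than hard-code it, and a real audit would also need to consider ACLs and account-level public-access settings.

```python
import json

def is_publicly_readable(policy_json: str) -> bool:
    """Return True if any statement in the bucket policy allows
    s3:GetObject to an anonymous principal ("*")."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        is_anonymous = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if is_anonymous and any(a in ("s3:GetObject", "s3:*", "*") for a in actions):
            return True
    return False

# Hypothetical policy that exposes every object in the bucket to
# anyone on the Internet ("example-bucket" is a placeholder name).
public_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
})

print(is_publicly_readable(public_policy))  # True
```

The point of the check is that nothing in your virtual network ever sees this exposure; it exists purely in the control plane, which is why it has to be audited there.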
Attackers Use New Services to Accomplish Their Goals
Compute services like Lambda are a part of AWS’s serverless compute infrastructure. If an attacker gets inside a workload you have running in AWS, or gets hold of the right API key without compromising your workload, she can install a Lambda function that is run whenever some event occurs.
For example, the attacker could install a function that runs every time one of your S3 buckets is modified. The function would copy any changed objects to Glacier backup storage, something that would appear relatively normal — except the attacker’s Lambda function would copy data to the attacker’s Glacier storage, not yours.
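A sketch of what such a malicious handler could look like. The bucket, object, and destination names are hypothetical placeholders, and the actual copy call (for example, a boto3 `copy_object` to attacker-controlled storage) is stubbed out; the code only demonstrates how an S3 event notification hands the attacker the name of every object you change.

```python
def exfiltrate_handler(event, context=None):
    """Sketch of a malicious Lambda handler wired to an S3
    ObjectCreated trigger: for each changed object it computes a copy
    into storage the attacker controls."""
    ATTACKER_VAULT = "attacker-archive"  # hypothetical attacker-owned destination
    copies = []
    for record in event.get("Records", []):
        s3 = record.get("s3", {})
        bucket = s3.get("bucket", {}).get("name")
        key = s3.get("object", {}).get("key")
        if bucket and key:
            # A real function would issue an S3/Glacier copy call here;
            # this sketch just records the intended source and destination.
            copies.append((f"{bucket}/{key}", f"{ATTACKER_VAULT}/{key}"))
    return copies

# Simplified shape of the event AWS delivers for an S3 object trigger:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "victim-bucket"},
                "object": {"key": "customers.csv"}}}
    ]
}
print(exfiltrate_handler(sample_event))
# [('victim-bucket/customers.csv', 'attacker-archive/customers.csv')]
```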
Note that you may never choose to use Lambda, so you might not even consider its security implications. But the fact that you don’t use the service doesn’t prevent an attacker from exploiting its existence.
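One practical countermeasure is to periodically compare the functions actually deployed in your account against the set you have approved, so a service you never chose to use can’t quietly run attacker code. A minimal sketch, assuming the deployed list would come from the Lambda list-functions API in practice (the function names below are hypothetical):

```python
def unexpected_functions(deployed, approved):
    """Return names of deployed Lambda functions that were never
    approved -- a simple signal that a service you don't use is in play."""
    return sorted(set(deployed) - set(approved))

# In a real audit, `deployed` would be fetched from the Lambda API;
# these placeholder names illustrate the comparison.
deployed = ["billing-report", "s3-sync-copy", "nightly-cleanup"]
approved = ["billing-report", "nightly-cleanup"]
print(unexpected_functions(deployed, approved))  # ['s3-sync-copy']
```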
Attack Surfaces inside New Services You Utilize
In the examples above, your only intent was to utilize compute and storage functions — basically what most people in the industry would refer to as infrastructure as a service.
But what if you want to make use of some more exotic services to speed up your time-to-market? Let’s say you’re deploying on Azure and you’re intrigued by the promise of the Azure Bot Service, which enables you to “reach customers on multiple channels.” When you utilize such services, you’re effectively in the land of platform as a service. You’re using services that are part of the Azure platform, which promises to make your organization more efficient but also could make it harder for you to migrate to another cloud platform.
The question you must ask yourself is, how secure is the Azure Bot Service? There is no guarantee that your functions built on this service will run in a VM or container dedicated to only your use. For scalability, your function might run in the same workload as many other subscribers’ functions utilizing the same service. While escaping out of a VM into a hypervisor to get into another VM on the same physical server is pretty difficult, will it be as difficult for an attacker to break out of the Azure Bot Service? Given the imbalance of scrutiny that hypervisor code and the Azure Bot Service code are likely to undergo, I’d guess the answer is no.
In your own private cloud, the ratio of onstage to backstage attack vectors on which you may need to focus is about 90:10. In public clouds, it’s more like 60:40, because elements (for example, storage) that exist in both places are multitenant in public clouds, because public clouds provide services (such as AWS Lambda) that can be exploited to attack you, even if you’re not using them, and because any platform-specific serverless services (like Azure Bot Service) you utilize will potentially expose you to difficult-to-quantify multitenant threats.
The lesson: don’t assume that the same tools you use in your private cloud will adequately protect you in the public cloud.
Oliver Tavakoli is the chief technology officer at Vectra Networks, Inc.