STRATEGIC DEVELOPER   By Andrew C. Oliver, Columnist, InfoWorld | FEB 27, 2020

Outages are inevitable and vendors are unreliable. You can’t move fast enough unless you already have your service running on two or more clouds.

Something is rotten in the state of Denmark—in all of Europe actually—and Amazon has been tight-lipped about it. It seems there might have been a hack or a well-executed denial-of-service attack. I realize this was in October, but Google autocomplete suggests that “AWS DDoS attack” be followed by a year. These things happen frequently.

Denial-of-service attacks are as old as the internet, if not older, and so is the lack of candor on the part of your data center operator or hosting provider. The thing that protected us all in the past from watching the whole net go black is the same thing that will protect us again: multiple data centers run by different providers. That is to say, multicloud.


A multicloud strategy starts with the obvious: deploying (or maintaining your ability to deploy) on multiple vendors’ clouds, meaning you keep your software on AWS and Azure and maybe even on GCP. You forgo any vendor services that might prevent your ability to move, and you pursue a data architecture that allows you to scale across data centers.
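To make that concrete, here is a minimal sketch of the kind of seam that keeps a vendor service from becoming a lock-in point: application code talks only to a small storage interface of your own, and the AWS or Google implementation behind it is a swappable detail. The class names and bucket names below are hypothetical; the calls shown are the standard boto3 and google-cloud-storage ones.

```python
# A sketch of keeping vendor storage behind your own interface.
from abc import ABC, abstractmethod


class BlobStore(ABC):
    """The only storage API the rest of the application sees."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class S3BlobStore(BlobStore):
    def __init__(self, bucket: str):
        import boto3  # AWS SDK for Python
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()


class GCSBlobStore(BlobStore):
    def __init__(self, bucket: str):
        from google.cloud import storage  # GCP client library
        self._bucket = storage.Client().bucket(bucket)

    def put(self, key: str, data: bytes) -> None:
        self._bucket.blob(key).upload_from_string(data)

    def get(self, key: str) -> bytes:
        return self._bucket.blob(key).download_as_bytes()


# Swapping clouds becomes a one-line change at the composition root
# ("my-app-assets" is a hypothetical bucket name):
store: BlobStore = S3BlobStore("my-app-assets")  # or GCSBlobStore("my-app-assets")
store.put("reports/2020-02.json", b'{"status": "ok"}')
```

The same pattern applies to queues, search, and document stores: the narrower the interface your application depends on, the cheaper the eventual move.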

Single-cloud advantages and drawbacks

Relying on a single vendor’s cloud allows you to eat from the buffet of sometimes lower-cost alternatives from the cloud provider, and adding these is usually seamless. If you’re an AWS customer, you use Amazon Elasticsearch Service instead of building your own search cluster. If you’re on Google, you can use its document database, Google Cloud Datastore, instead of rolling your own.

However, as with every vendor platform strategy, there is a cost: your freedom. Okay, that sounds heavy, but hear me out. Sure, your cloud vendor’s service is cheaper now, but will it always be? Moreover, will it one day be unceremoniously canceled as the cloud vendor shifts strategy? They may never even really announce it. And what if your region’s AWS data center goes down, slows, or becomes unreliable for an extended period of time? Can you take the loss?

Some of these vendor-provided services (especially Amazon’s) are forks of more famous open source alternatives that are supposed to maintain API compatibility. However, they’re famously a release or more behind. That might be generally okay for large, slow-to-upgrade enterprises, but “generally” isn’t always.

Even large enterprises must move quickly when circumstances necessitate it. If there is a big security flaw that can’t be patched in the current release, you move. If the next release has something that is absolutely required for higher scale and you need that scale, or some other feature needed for your own next release, then being on your cloud vendor’s schedule puts you behind the curve.

When consuming cloud vendor services, it is important to ask what every actor or scriptwriter asks: “What’s their motivation?” Sure, they might want the extra 30% markup above their IaaS offering, but more likely, they want to keep you on their platform and get every last one of your compute dollars.

However, as reliable as the cloud vendors have become, none of them is completely reliable. There are multiple regional and even multi-region outages each year, and some last for a while. If you can’t just up and install your code somewhere else (or better yet, have it there already as part of your process), then when disaster strikes, and it will, you’re just waiting.
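As a rough illustration of what “have it there already” buys you, here is a hedged sketch of client-side failover between the same service deployed on two clouds. The endpoint URLs, path, and function name are hypothetical; requests is the common Python HTTP library.

```python
# A sketch of falling back to a standby deployment on a second cloud
# when the primary is down or slow. Endpoints are hypothetical.
import requests

ENDPOINTS = [
    "https://api.example.com",          # primary, e.g. fronted by AWS
    "https://api-standby.example.com",  # warm standby on a second cloud
]


def fetch_orders(timeout: float = 2.0) -> dict:
    last_error = None
    for base in ENDPOINTS:
        try:
            resp = requests.get(f"{base}/v1/orders", timeout=timeout)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException as err:
            last_error = err  # this cloud is unreachable; try the next one
    raise RuntimeError("all clouds unreachable") from last_error


if __name__ == "__main__":
    print(fetch_orders())
```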

Finally, when it’s time to negotiate pricing, how flexible do you think your cloud vendor is going to be if they know you can’t leave?


(Full disclosure: I work for Couchbase. They have partnerships with multiple cloud vendors including Amazon.)

Multicloud advantages and drawbacks

A multicloud strategy necessitates both vendor-neutral and more resilient architectural choices. This means more up-front complexity. It means negotiating, at times, with multiple vendors. It also means ensuring that the integration points between the technologies exist, and exist securely.

However, a multicloud strategy gives you more freedom and security than using a single provider. We learned this lesson during the platform wars, when many companies standardized first on mainframes, then on DEC, HP, and Sun, before trying to standardize on Windows NT.

Single-vendor platforms often fail to live up to their promise. Remember that in the 1990s, and even into the early 2000s, Microsoft’s technologies were often well-integrated but immature. Then came rapid changes. Seasoned developers remember the data access technologies DAO, RDO, OLE DB, and ADO, which were all released and advocated in rapid succession. Let’s not even speak of the .NET transition and the mis-marketing (e.g. Windows .NET) that occurred. It isn’t just Microsoft. I started my career writing OS/2 device drivers. Then IBM launched Warp 4 and it warped out of existence.

Despite the up-front costs of platform independence, companies that pursue it tend to produce more resilient architectures. These companies adopt standard interfaces between applications. They pick best-of-breed technologies that fit the use case as opposed to just whatever the platform is pushing (remember Visual SourceSafe?). Best of all, when a vendor proves to be an unreliable partner, or jacks up the price too much, platform-independent companies have the freedom to exit.


Minimum requirements for a multicloud strategy

The biggest requirement for multicloud is to rely on open standards and industry standards for key touch points. Here are some of the obvious ones:

  • Kubernetes. The open source container management platform is now the industry standard for deploying services. If you are creating standard Kubernetes deployments that run on your laptop, they should run on multiple cloud providers (see the sketch after this list).
  • Open source. Use open source tools and technologies for your core architecture. This ensures that as platform strategies change, you can opt for a different path.
  • Open standards. This isn’t to say that you need to get really involved in the way your application server clusters itself, but all of the touch points with other software should follow open and vendor-neutral industry standards (e.g. JSON).
  • Caution towards branded services. If you need a fixed IP and various DNS services, Amazon brands its versions of these pretty common network tools. Of course, you don’t need to run your own distributed DNS, and you do have to use your provider’s means of providing a fixed IP. That doesn’t really lock you in, since it is just configuration and equivalents work the same way on Azure and GCP. However, you should be a bit more circumspect when using a machine learning service, for instance.
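As a quick illustration of the Kubernetes point above, here is a hedged sketch that applies one standard Deployment manifest to clusters on two different providers using the official Python client. The kubeconfig context names and the manifest path are hypothetical placeholders.

```python
# A sketch of deploying the same standard manifest to clusters on two clouds.
# Requires the official `kubernetes` client and PyYAML.
import yaml
from kubernetes import client, config

CONTEXTS = ["eks-us-east-1", "aks-eastus"]  # hypothetical: one context per cloud


def deploy_everywhere(manifest_path: str = "deployment.yaml") -> None:
    with open(manifest_path) as f:
        manifest = yaml.safe_load(f)

    for ctx in CONTEXTS:
        # Build an API client bound to this cluster's kubeconfig context.
        api_client = config.new_client_from_config(context=ctx)
        apps = client.AppsV1Api(api_client)
        apps.create_namespaced_deployment(namespace="default", body=manifest)
        print(f"deployed to {ctx}")


if __name__ == "__main__":
    deploy_everywhere()
```

If one provider’s cluster starts needing provider-specific annotations to work, treat that as a signal the deployment is drifting away from the standard and toward lock-in.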

In the end, “Just do it!” There is no way to ensure you can move quickly except to already have your service running on two or more clouds. Even if you’re mostly going to direct traffic to one cloud for various cost or accounting reasons, you should have some standbys and tests running on another provider. Then when the inevitable outage or eventual financial shakedown happens, you’re already there.

Andrew C. Oliver is director of product marketing and evangelism at Couchbase, a provider of open source NoSQL database products.