5 approaches to cloud automation

September 1, 2021

Originally published on The Enterprisers Project.

How do you avoid the messy, painful business of manually provisioning and maintaining cloud resources – and keep an eye on costs? For starters, you can’t automate what you can’t see.

There’s plenty of overlap between infrastructure automation and one of its more modern (and large) subcategories: cloud automation. That makes sense because many of the principles and approaches to cloud automation covered here aren’t that different from those for on-premises infrastructure.

“Much of the automation you might put in place for a hybrid cloud infrastructure will be similar to, or even the same as, automation you might want in an on-prem environment,” says Gordon Haff, technology evangelist at Red Hat. “For example, it’s important for a CI/CD pipeline to continuously test and scan wherever the associated infrastructure is located.”

Likewise, many of the key concepts of infrastructure automation – such as containers, orchestration, microservices architecture, and automated build pipelines (or CI/CD) – still very much apply when talking about cloud automation. We dug into those concepts recently in our article, How to approach infrastructure automation.

Five cloud automation approaches

In this article, we’re focused more specifically on cloud automation approaches. How do you avoid the messy, painful business of manually provisioning and maintaining cloud resources? A question like this one becomes even more important when you’re asking it in the context of hybrid cloud or multi-cloud environments.

1. Ensure full visibility as the foundation for cloud automation

You can’t automate what you can’t see, at least not in a manner conducive to positive results.

“The first need is visibility across all environments,” says Jesse Stockall, chief architect at Snow Software. “The discovery capabilities of cloud management platforms provide an inventory of all resources in a single pane of glass.”

The major cloud platforms offer built-in discovery and visibility capabilities, and Stockall says they might suit your needs if you’re working with a single provider or environment.

“But in hybrid, multi-cloud, and even multiple subscription/account environments, native tooling can’t aggregate all the data into a single view,” Stockall says.

Without that aggregation, you’d have to manage this need manually with a variety of tools instead of automatically bringing everything into a single place – a poor fit for more complex, diverse environments. A cloud management or monitoring platform that offers the “single pane of glass” Stockall describes – essentially, automatically unifying all of the needed data in one place – can streamline long-term operations.
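To make the aggregation idea concrete, here’s a minimal Python sketch that merges per-provider discovery results into one unified inventory. The provider names, field names, and the lambda “discovery” callables are all hypothetical stand-ins – in practice each callable would wrap a real provider’s discovery API.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    provider: str       # e.g. "aws", "azure", "on-prem" (illustrative labels)
    account: str        # subscription/account identifier
    resource_id: str
    resource_type: str

def aggregate_inventory(discovery_sources):
    """Merge per-provider discovery results into one unified inventory.

    `discovery_sources` maps a provider name to a callable returning that
    provider's resources as dictionaries.
    """
    inventory = []
    for provider, discover in discovery_sources.items():
        for raw in discover():
            inventory.append(Resource(provider=provider, **raw))
    return inventory

# Hypothetical discovery callables standing in for real provider APIs.
sources = {
    "aws": lambda: [{"account": "prod-1", "resource_id": "i-123",
                     "resource_type": "vm"}],
    "azure": lambda: [{"account": "sub-9", "resource_id": "vm-7",
                       "resource_type": "vm"}],
}
unified = aggregate_inventory(sources)
```

The point isn’t the data model – it’s that a single loop over all sources gives you one queryable view, which native single-provider tooling can’t.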

For example, if you are using Kubernetes or Red Hat’s OpenShift enterprise Kubernetes platform, there is a lot of cloud-native tooling either built in, in the process of being integrated, or available as open source add-ons – such as Prometheus for monitoring and observability, Jaeger for distributed tracing, and Grafana for building dashboards. For more detail, read 5 open source Kubernetes projects to watch in 2021.

2. Use auto-scaling wherever possible

One general benefit of hybrid cloud and multi-cloud is the ability to scale rapidly without having to build out your own physical infrastructure to handle peak or excess capacity. (In fact, early definitions of hybrid cloud were based on that premise: the ability to “burst” to a public cloud from on-premises infrastructure as needed. That’s too narrow to encompass today’s potential use cases, but it remains a key capability of hybrid cloud strategies.)

If you’re still manually adding cloud resources when they’re needed, however, you’re missing out on one of the fundamentals of cloud automation: auto-scaling. Felipe Gimenez, associate manager of cloud operations at Mission, recommends using it wherever possible.

“If you’ve ever received an ‘insufficient capacity’ error when trying to launch an application, you undoubtedly already know the productivity losses and frustration that not having enough instances can cause,” Gimenez says. “But [customers] who are using your applications to make purchases – or employees [who] depend on your mission-critical software – don’t have time to wait around for more of an instance type to become available. Tapping into automation tools can ensure that your cloud resources scale instantly to fit demand and server loads.”

This has become one of the big appeals of Kubernetes: It offers multiple approaches to autoscaling resources. The major cloud platforms also offer native tools.
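Kubernetes’ Horizontal Pod Autoscaler, for instance, scales a workload proportionally to how far an observed metric is from its target. A simplified Python sketch of that calculation (the replica bounds here are illustrative defaults, not Kubernetes’):

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    """HPA-style scaling: if the metric is at twice its target, roughly
    double the replicas; clamp the result to [min, max]."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(desired, max_replicas))

# CPU utilization at 90% against a 50% target: 4 replicas become 8.
desired_replicas(4, 90, 50)
```

The real autoscaler adds stabilization windows and tolerances to avoid thrashing, but the proportional core is this simple – which is why hand-scaling rarely competes with it.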

Gimenez adds that autoscaling can also help keep cloud costs under control by only adding resources when they’re actually needed, whether you use a single cloud or have a hybrid cloud or multi-cloud environment. There’s some strategic decision-making involved that may depend on variables like the platforms and tools you’re using, not to mention your autoscaling goals.

“Do you want to make sure customers never experience slow response times, or can you afford some slowness to keep costs down?” Gimenez asks, for example. “Variables like these will inform how you define the best automated scaling strategy for your business.”

3. Develop a plan for cost monitoring and optimization

Speaking of spending, this is another area where automation can make a significant difference. It’s also one where the considerations for public cloud are very different from those in on-premises environments.

“One aspect of public clouds is starkly different from infrastructure that’s running in your own datacenter,” Haff says. “That’s the pay-by-use billing model, which makes keeping close track of costs in one or more public clouds an imperative.”

There’s not really a catch-all solution here, but more likely a mix of tools and tactics – especially in hybrid cloud and multi-cloud settings.

“Understanding all the different costs associated with public clouds and optimizing for future spend requires a lot more know-how than just pressing a button,” Haff says. “However, savvy admins will use a combination of [largely automated] policies and alerts to steer users towards appropriate resource types, shutdown inactive resources, and inform them if usage has shot up for some reason.”

Cloud providers offer various reporting and planning tools, and there are third-party options, too. The general idea here: If you’re managing cloud spending in an entirely manual or ad hoc fashion, you’re probably spending more than is necessary or simply tying up people’s time.
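As a rough illustration of the “inform them if usage has shot up” policy Haff mentions, an automated alert can be as simple as comparing each day’s spend to a trailing average. The spike factor and window below are arbitrary assumptions, not recommendations:

```python
def spend_alerts(daily_spend, spike_factor=1.5, window=7):
    """Return the indices of days where spend exceeds `spike_factor`
    times the trailing `window`-day average -- a toy stand-in for the
    alert policies cloud cost tools let you automate."""
    alerts = []
    for i in range(window, len(daily_spend)):
        baseline = sum(daily_spend[i - window:i]) / window
        if daily_spend[i] > spike_factor * baseline:
            alerts.append(i)
    return alerts

# A week at $100/day followed by a $200 day trips the alert.
spend_alerts([100] * 7 + [200])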

“There’s no all-in-one tool to automate public cloud cost control, especially when multiple clouds are involved,” Haff says. “So it’s important to become familiar with and use the many options that are available to get a handle on costs so they don’t race out of control – and lead to an uncomfortable conversation with your CFO.”

Let’s look at two more important approaches that help:

4. Use (and automate) resource tagging

Stockall notes that many of the common goals and/or strategies of cloud optimization – whether spending or resource utilization or workload fit – require some method of managing classification and ownership.

Resource tagging is one of the key methods of doing so, and it’s another area where automation is key, perhaps especially in hybrid cloud or multi-cloud environments.

“It’s impossible to make informed decisions about resource optimization, decommissioning, and cost allocation if you don’t know who owns the resources,” Stockall says.

Resource tags themselves can be an automation enabler, but your actual resource tagging will likely also be best served by automating it. See the Red Hat blog post, Tagging resources for IT and business alignment, for how-to advice and details on overall business benefits.
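A minimal sketch of what an automated tagging policy checks, assuming a hypothetical required-tag set and a simple dictionary shape for resources:

```python
REQUIRED_TAGS = {"owner", "cost-center", "environment"}  # example policy

def untagged_resources(resources, required=REQUIRED_TAGS):
    """Return the IDs of resources missing any required tag -- the kind
    of check an automated policy would run on every new resource."""
    return [r["id"] for r in resources
            if not required <= set(r.get("tags", {}))]

resources = [
    {"id": "vm-1", "tags": {"owner": "alice", "cost-center": "cc-42",
                            "environment": "prod"}},
    {"id": "vm-2", "tags": {"owner": "bob"}},
]
untagged_resources(resources)  # ["vm-2"]
```

In practice the same check runs as a provisioning-time gate (reject or auto-tag the resource) rather than an after-the-fact report, which is what makes the cost-allocation data trustworthy.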

5. Build automated, repeatable pipelines

Just as auto-scaling can more dynamically and efficiently respond to user demand, the same principle can be applied throughout a software pipeline: automated and repeatable infrastructure and application provisioning wherever possible.

“This can be self-service deployment from a catalog or automated DevOps pipelines,” Stockall says. “Automated provisioning ensures standards and best practices are followed, avoids error-prone manual tasks, and lets you treat infrastructure as ‘cattle, not pets.’”

As with infrastructure automation in general, the idea here is to standardize and automate wherever possible – not just in production but in all phases of the pipeline (whether you call it CI/CD or not) that your code and its dependencies travel through to get there. You’re looking to eliminate those so-called snowflake deployments that tend to hog people’s attention and effort.
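The “cattle, not pets” idea rests on declaring desired state and letting automation reconcile reality against it. A toy Python sketch of that reconciliation step (service names are invented for illustration):

```python
def reconcile(desired, actual):
    """Idempotent provisioning: compute the create/delete actions that
    move the actual environment to the desired state. Running it again
    once the actions are applied yields no further actions."""
    to_create = desired - actual
    to_delete = actual - desired
    return to_create, to_delete

desired = {"web", "db", "cache"}
actual = {"web", "legacy-job"}
create, delete = reconcile(desired, actual)
```

Infrastructure-as-code tools apply exactly this pattern at scale: the declared state lives in version control, and the pipeline converges every environment toward it instead of anyone hand-crafting servers.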

Finally, don’t forget that cloud automation – like most forms of IT automation – is not a set-and-forget deal.

“Full lifecycle management, including continual optimization and automated decommissioning, is the last element to ensure your workloads do not live forever and are continually optimized for the duration,” Stockall says.
