Cloud Balancing: Don't Put All Your Apps in One Basket


By Erin Harrison, Executive Editor, Cloud Computing  |  August 03, 2012

As more organizations move their technologies into the cloud, getting the balance of where applications reside right is paramount to performance and, most certainly, to avoiding potential outages. You may remember when Amazon Web Services was hit last year with a multi-day service outage on the East Coast after a “misaligned network” brought down several EC2 services in its Northern Virginia data center. Cloud balancing acts as an insurance policy against such outages. In plainer terms, it’s the notion of “don’t put all your eggs in one basket” – or in this case, don’t put all your applications in one cloud.

Cloud balancing, which is the routing of application requests across applications or workloads that reside in multiple clouds, can also allow organizations with smaller budgets and/or IT staff to benefit from the upside of global application delivery, according to F5’s Lori MacVittie, senior technical marketing manager.

“Most people would say it’s load balancing across a cloud, or two clouds and a data center,” MacVittie recently explained in an interview with Cloud Computing. A number of metrics are involved in cloud balancing with “the focus shifting from organization size to the user community in order to deliver applications as quickly as possible.”

But cloud balancing is not simply load balancing across clouds. In fact, CIOs and IT managers need to stop thinking of cloud as an autonomous system and start treating it as part of a global application delivery architecture, she says.

Cloud balancing uses a global application delivery solution to determine, on a per user/customer basis, the best location from where to deliver an application. The decision-making process, MacVittie says, should include traditional global server load balancing parameters such as:

  • Application response time;
  • Location of the user;
  • Availability of the application at a given implementation location;
  • Time of day; and
  • Current and total capacity of the data center/cloud computing environment in which an application is deployed.
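The decision process above can be sketched as a scoring function. This is a minimal illustration under assumed weights and hypothetical site data; real global server load balancing products weigh these parameters with far more sophistication.

```python
# Hypothetical GSLB site-selection sketch. Field names, weights, and the
# penalty values are illustrative assumptions, not any product's logic.

def score_site(site, user_region):
    """Lower score is better; combines the parameters listed above."""
    if not site["available"]:              # availability at this location
        return float("inf")
    score = site["response_ms"]            # application response time
    if site["region"] != user_region:      # location of the user
        score += 50                        # assumed cross-region penalty
    utilization = site["current_load"] / site["capacity"]  # current vs. total capacity
    score += utilization * 100             # assumed capacity weighting
    return score

def choose_site(sites, user_region):
    """Pick the best delivery location for this user."""
    return min(sites, key=lambda s: score_site(s, user_region))

sites = [
    {"name": "dc-east", "region": "us-east", "available": True,
     "response_ms": 40, "current_load": 80, "capacity": 100},
    {"name": "cloud-west", "region": "us-west", "available": True,
     "response_ms": 55, "current_load": 20, "capacity": 100},
]

# A west-coast user is steered to the lightly loaded west-coast cloud,
# even though the east-coast data center responds faster in isolation.
print(choose_site(sites, "us-west")["name"])  # -> cloud-west
```

Time of day, the remaining parameter, would typically enter as a scheduled adjustment to capacity or weights rather than a per-request input.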

Context is Critical

When IT managers start to look at cloud balancing from a technical and business standpoint, the considerations are somewhat intuitive but the decision-making process must involve the full user community – it’s a vital aspect that will ultimately help define the strategy. Context, MacVittie says, is critical.

A misstep MacVittie sees all too often is lack of consideration for the user base, which is critical to proper cloud balance. Evaluating the user community is becoming an increasing concern among small to medium-sized businesses (SMBs), which have to look at their traffic more from an external view.

“Size does matter. It’s about the user base as opposed to the business size; a small company can have enormous traffic that they have to handle. That’s going to be a switch for organizations in general,” MacVittie says. “Most successful start-ups have less than 40 people but they have millions of users. We have to shift that to look at the user community that they’re serving as opposed to the employee side.”

Balancing vs. Bursting

Although they may appear similar, cloud balancing and cloud bursting are not interchangeable terms. In a cloud balancing scenario, an organization with an application running in the cloud has to look at spreading it across multiple regions within one cloud provider or distributing it across multiple cloud providers.

Cloud bursting, by contrast, “is a capability where companies have an internal data center and are able to more dynamically provision their application out into the cloud,” according to Josh Odom, platform product line leader at Rackspace.

In fact, many large enterprises have invested enough in their own internal infrastructure that the most logical scenario is a hybrid deployment model in which they can leverage the best of both worlds.

“Many organizations already have a substantial investment in their own internal IT infrastructure. With cloud bursting, businesses are taking advantage of it so they can withstand those peaks that they would have to purchase more infrastructure for,” Odom explains. “Having that ability to burst out into the cloud allows them to experiment and understand what the cloud can do for them without investing in an all-in strategy.”
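The bursting idea Odom describes can be reduced to a capacity check: internal infrastructure absorbs normal load, and only the overflow is provisioned in the cloud. The capacities and the per-instance figure below are assumptions for illustration, not any provider's numbers.

```python
# Hypothetical cloud-bursting calculation. INTERNAL_CAPACITY and
# PER_INSTANCE are assumed values; a real deployment would read these
# from monitoring and the provider's instance specifications.

INTERNAL_CAPACITY = 1000   # requests/sec the internal data center handles
PER_INSTANCE = 200         # requests/sec one burst cloud instance handles

def instances_needed(load):
    """Cloud instances to provision for load beyond internal capacity."""
    overflow = max(0, load - INTERNAL_CAPACITY)
    return -(-overflow // PER_INSTANCE)   # ceiling division

print(instances_needed(900))    # within internal capacity -> 0
print(instances_needed(1500))   # 500 req/sec overflow -> 3 instances
```

The point of the model is the one Odom makes: the peak capacity is rented only while the peak lasts, instead of being purchased as idle hardware.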

Cloud balancing – which is a form of hybrid cloud computing – involves one or more corporate data centers and one or more public cloud data centers.

“Cloud balancing is akin to very large organizations who need to have a distributed model in order to address performance and availability or need to address regulations,” MacVittie says.

On the other hand, the business case for cloud bursting primarily revolves around seasonal or event-based peaks of traffic that push infrastructure over its capacity but are not consistent enough to justify the cost of investing in additional hardware that would otherwise sit idle, she adds.

However, in either case, business use cases, metrics and goals must all be part of the equation.

“In both cases the cloud-deployed application is still located at your ‘address’ – or should be – and you’ll need to ensure that it looks to consumers of that application like it’s just another component of your data center,” MacVittie explains.

The good news is, businesses are becoming savvier when it comes to balancing across multiple clouds, Odom says.

“More organizations are getting serious about moving applications into the cloud but they are moving smaller applications. We are seeing more companies wanting to go all in or bring subsets of their applications to Rackspace,” observes Odom.

Customers use Rackspace cloud load balancers for a variety of scenarios, but the two most common uses are the following:

Rapid Growth – In this scenario, a business is growing rapidly and needs more than a single cloud server. With cloud load balancers, traffic can be balanced across two or more cloud servers.

High Availability – Organizations running mission-critical business applications on the cloud need to be up 100 percent of the time. With cloud load balancers, customers can build a very robust high availability cloud configuration quickly and easily.
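A toy sketch can illustrate both scenarios at once: round-robin distribution across cloud servers for growth, with unhealthy servers skipped for availability. The server names are hypothetical, and Rackspace's actual cloud load balancers are configured through their service, not with code like this.

```python
from itertools import cycle

class Balancer:
    """Round-robin load balancer that skips servers marked unhealthy."""

    def __init__(self, servers):
        self.servers = servers
        self.healthy = set(servers)
        self._ring = cycle(servers)

    def mark_down(self, server):
        # A failed health check removes the server from rotation.
        self.healthy.discard(server)

    def next_server(self):
        # Walk the ring until a healthy server turns up.
        for _ in range(len(self.servers)):
            s = next(self._ring)
            if s in self.healthy:
                return s
        raise RuntimeError("no healthy servers")

lb = Balancer(["web1", "web2", "web3"])
lb.mark_down("web2")                            # simulate a failed server
print([lb.next_server() for _ in range(4)])     # traffic continues on web1/web3
```

Rapid growth is handled by adding servers to the pool; high availability by the health-check skip, so the loss of one server never stops traffic.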

As companies consider cloud-based services, cloud balancing is an important tactic that will play a significant role in cloud deployments, opening up new possibilities for organizations with limited resources to benefit from cloud computing.

Cloud balancing still has a long way to go in terms of standards, but nevertheless, MacVittie concludes, cloud computing has introduced a cost-effective alternative to building out secondary – even tertiary – data centers as a means to improve application performance and ensure application availability.


Balancing Best Practices

Building out multiple data centers is an expensive and lengthy process, and difficult to do on demand, which is why many organizations have turned to the cloud. Here are some best practices to consider to ensure you aren’t putting all your apps in one cloud:

  • Consider user patterns: For seasonal spikes, a combination of bursting and balancing is the best approach. Look at where your users are coming from: is a sudden spike arriving from all over the world or from a tight user community? A broader, distributed cloud balancing solution should be based on where users are located; if a spike is tied to a specific event, you want to handle it locally, says Rackspace’s Josh Odom.
  • Cloud bursting: The so-called bridge model is going to be one of your best bets, because it requires the least infrastructure change, gives you the most control, and doesn’t force you to change policies, according to F5’s Lori MacVittie.
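Odom's rule of thumb above can be summarized as a simple decision: a spike spread across the user base favors balancing across clouds, while a localized, event-driven spike favors bursting in place. The threshold below is an assumption for illustration only.

```python
# Hypothetical strategy picker based on the geographic shape of a spike.
# The 50% cutoff is an assumed threshold, not a published guideline.

def choose_strategy(spike_regions, total_regions):
    """spike_regions: set of regions where traffic spiked;
    total_regions: number of regions the application serves."""
    if not spike_regions:
        return "no action"
    if len(spike_regions) / total_regions > 0.5:
        # Broad, global spike: deliver from clouds near the users.
        return "cloud balancing"
    # Localized, event-driven spike: burst extra capacity in place.
    return "cloud bursting"

print(choose_strategy({"us-east"}, 4))           # local event -> cloud bursting
print(choose_strategy({"us", "eu", "apac"}, 4))  # global surge -> cloud balancing
```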

Edited by Brooke Neuman