Cloud: How Did We Get Here and What's Next?

By Special Guest
Pete Johnson, Technical Solutions Architect for Cloud, Cisco Systems Inc
March 10, 2017

It wasn’t too long ago that companies used on-premises solutions for all of their IT and data storage needs. Now, with the growing popularity of Cloud services, the world of IT is rapidly changing. How did we get here? And more importantly, what is the future of IT and data storage?

It All Starts with Server Utilization

In the mid-1990s, when HTTP found its way outside of Tim Berners-Lee’s CERN lab and client-server computing emerged as the de facto standard for application architectures, the result was an Internet Boom in which every enterprise application had its own hardware. When you ordered that hardware, you had to think about ordering enough capacity to handle your spikes in demand as well as any high availability needs you might have.

That resulted in a lot more hardware than you really needed for some random Tuesday in March, but it also ensured that you wouldn’t get fired when the servers crashed under heavy load. Because the Internet was this new and exciting thing, nobody cared that you might be spending too much on capital expense.

But then the Internet Bubble burst and CFO types suddenly cared a whole lot. Why have two applications each run on their own server, using 30 percent of the hardware most days, when you could run them both on the same physical server and utilize more of it on the average day? While that reasoning looks great on a capitalization spreadsheet, what it failed to take into account was that if one application introduced a memory leak, it could bring the other application down with it, giving rise to the noisy neighbor problem.

What if there were a way to partition physical resources with some sort of isolation technique, so that applications sharing a server were less likely to bring each other down?

The Birth of Virtualization and Pets vs. Cattle

The answer turned out to be the hypervisor, which carves a physical machine’s resources into isolated virtual machines. This technique didn’t completely eliminate the noisy neighbor problem, but it reduced it significantly. Early uses of virtualization enabled IT administrators to better utilize hardware across multiple applications and pool resources in a way that wasn’t possible before.

But in the early 2000s, developers started to think about their architectures differently. In a physical server-only world, resources were scarce and took months to expand. Because of that scarcity, production deployments had to be treated carefully and change control was tight. This era of thinking has come to be known as treating machines as pets: you give them great care and feeding, you often give them names, and you go to great lengths to protect them. In a pets-centric world, you were lucky if you released new features quarterly, because any change to the system increased the chances that something would fail.

What if you thought about that differently, though, given that you can create a new virtual machine in minutes as opposed to waiting months for a physical one? Not only does that cause you to think about scaling differently, no longer planning for peak hardware if the pooled resources are large enough (remember that; it’ll be important later), but it changes how you think about deployments too.

Consider the operating system patch upgrade. With pets thinking, you patch the virtual or physical machine that already exists. With this new thinking, treating virtual machines like cattle, you create a new virtual machine with the patch already applied and shut down the old one. This line of thinking led to more rapid releases and agile software development methodologies. Instead of quarterly releases, you could release hourly if you wanted to, since you could now introduce changes, or roll them back, far more easily. That led to line-of-business teams turning to software developers as change agents for increased revenue.
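To make the contrast concrete, here is a minimal, hedged sketch of a cattle-style patch rollout using boto3, the AWS SDK for Python. It assumes the patched operating system has already been baked into a machine image; the AMI and instance IDs are placeholders, and in practice you would also shift traffic before retiring the old machine.

```python
# Cattle-style "replace, don't repair" rollout, sketched with boto3.
# Assumes a patched machine image (AMI) already exists; IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

PATCHED_AMI = "ami-0123456789abcdef0"   # image with the OS patch baked in
OLD_INSTANCE = "i-0123456789abcdef0"    # the machine being replaced

# 1. Launch a replacement instance from the patched image.
response = ec2.run_instances(
    ImageId=PATCHED_AMI,
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
new_instance = response["Instances"][0]["InstanceId"]

# 2. Wait for the replacement to come up before touching the old one.
ec2.get_waiter("instance_running").wait(InstanceIds=[new_instance])

# 3. (Shift traffic to the new instance here, e.g. behind a load balancer.)

# 4. Shut down the old instance rather than patching it in place.
ec2.terminate_instances(InstanceIds=[OLD_INSTANCE])
print(f"Replaced {OLD_INSTANCE} with {new_instance}")
```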

Cloud: Virtualization Over HTTP and the Emergence of Hybrid Approaches

If you take the virtualization model to its next logical step, the larger the shared resource pool, the better. Make it large enough and you could share resources with people outside your organization. And since you can create virtual machines in minutes, you could rent them out by the hour. Welcome to the public cloud.

While there is a ton of innovative work going on in public cloud that takes cattle-based thinking to its extremes, larger companies in particular are noticing that a private cloud is appropriate for some of their applications. Specifically, applications with sensitive data and steady-state demand are attractive candidates for private cloud, which still offers the ability to create virtual machines in minutes even though, at the end of the day, you own the capital asset.

Given this idea that some applications run best on a public cloud while others run best on a private cloud, the concept of a cloud management platform has become popular to help navigate this hybrid cloud world. Typically these tools offer governance, benchmarking, and metering/billing so that a central IT department can put some controls around cloud usage while still giving its constituents in the line-of-business teams the self-service, on-demand provisioning they expect with cattle-style thinking.

What’s Next: Chickens and Feathers (Containers and FaaS)

Virtualization gave us better hardware utilization and helped developers come up with new application architectures that treat application components as disposable entities, created and destroyed on a whim, but it doesn’t end there. Containers, which use a lighter-weight resource isolation technique than hypervisors do, can be created in seconds, a huge improvement over the minutes it takes to create a virtual machine. This is encouraging developers to think about smaller, more portable components. Some would extend the analogy to call this chickens-style thinking, in the form of microservices.
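As a small illustration of how disposable these “chickens” are, the sketch below uses the Docker SDK for Python (installed with pip install docker) and assumes a local Docker daemon is available: it starts a throwaway container, reads its output, and deletes it, all in a few seconds.

```python
# Create, use, and discard a container in seconds, via the Docker SDK for Python.
# Assumes a local Docker daemon and the "docker" package ("pip install docker").
import docker

client = docker.from_env()

# Run a small public image with a one-off command, detached so we get a handle back.
container = client.containers.run(
    "alpine:latest",
    ["echo", "hello from a disposable container"],
    detach=True,
)

container.wait()                    # block until the command finishes
print(container.logs().decode())    # read whatever it printed
container.remove()                  # throw the container away when done
```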

What’s better than creating a unit of compute in seconds? Doing so in milliseconds, which is what Function-as-a-Service (FaaS) is all about. Sometimes this technology is called Serverless, which is a bit of a misnomer since there is indeed a server providing the compute; what differentiates it from containers is that developers don’t need to know anything about the hot standby container within which their code runs. That means a unit of compute can sit on disk when not in use instead of occupying memory while waiting for a transaction to come in. While the ramifications of this technology aren’t yet fully understood, a nanoservice approach like this extends the pets vs. cattle vs. chickens analogy to include feathers.
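To ground the idea, here is what a FaaS unit of compute can look like in practice: an AWS Lambda-style Python handler, shown purely as an illustration. The event fields are hypothetical, and the platform, not the developer, manages the container this code eventually runs in.

```python
# An AWS Lambda-style handler, shown as one illustration of the FaaS model.
# The "event" fields are hypothetical; the platform manages the underlying container.
import json

def handler(event, context):
    # This code sits on disk until an event arrives; the platform loads and
    # runs it on demand, typically within milliseconds.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```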

Conclusion

Just in the last 25 years or so, our industry has come a remarkably long way. Financial pressures forced applications to share hardware with siblings they might have had nothing to do with, but which they could still bring down. Virtualization allowed us to isolate resources and enabled developers to think about their application architectures very differently, leading to unprecedented innovation speed. Lighter-weight resource isolation techniques make even more rapid innovation possible through containers and microservices. On the horizon, FaaS technologies show potential to push the envelope even further.

Speed and the ability to adapt to this ever-changing landscape rule the day, and that will be true for a long time to come.

About the Author

A 20+ year tech industry veteran, Pete Johnson is the Technical Solutions Architect for Cloud in the Global Partner Organization at Cisco Systems Inc. He can be found on Twitter at @nerdguru.




Edited by Alicia Young