Scale-Out Virtual Environments and the Future of Storage Infrastructure

International research firm IDC recently found that more than 50 percent of all server workloads today run in a virtual machine (VM). The capabilities of virtualization technologies have driven a clear paradigm shift in data center infrastructure and management, and virtualization has become a mainstream technology used by data centers and service providers alike.

Yet the rising popularity of virtualization demands unprecedented levels of storage, since it allows organizations to run a large number of applications simultaneously. This jump in storage demand has prompted a renewed focus on storage strategies that deliver sophisticated management, efficiency and flexibility.

A Lot to Offer

The increase in demand for virtualization solutions can be attributed to the flexibility and cost savings they offer. Most significantly, virtualization lets organizations make more efficient use of data center hardware. Typically, the physical servers in a data center sit idle for the majority of the time. By running multiple virtual servers on each physical machine, organizations raise CPU and hardware utilization and save money.

Another notable benefit of virtualization is greater flexibility. Virtual machines are far more convenient to work with than physical machines. For example, if an organization needs to change hardware, the data center administrator can simply migrate the virtual server to the newer hardware, gaining improved performance at very little cost. Before virtual servers, administrators had to install the new server and then reinstall and migrate all the data stored on the old one, a far more complex process. It is substantially easier to migrate a virtual machine than a physical one.
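
To make the contrast concrete, here is a minimal sketch of what such a move can look like using the open-source libvirt Python bindings. The host names ("old-host", "new-host") and the VM name ("web-vm") are hypothetical placeholders, and error handling is omitted for brevity:

    import libvirt

    # Connect to the current host and to the newer hardware.
    # The URIs and names below are illustrative placeholders only.
    src = libvirt.open("qemu+ssh://old-host/system")
    dst = libvirt.open("qemu+ssh://new-host/system")

    dom = src.lookupByName("web-vm")

    # Live-migrate the running guest and keep it defined on the destination.
    flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PERSIST_DEST
    dom.migrate(dst, flags, None, None, 0)

    src.close()
    dst.close()

For a live migration like this, both hosts typically need access to the same shared storage, which is exactly the dependency discussed in the sections that follow.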

Virtualization for All

Data centers that host a significant number of servers (roughly 20 to 50 or more) are looking to transition those servers into virtual machines. For one, these organizations can realize substantial cost reductions and gains in flexibility. Virtualized servers are also far easier to manage: administering dozens of physical servers is a cumbersome job for data center staff, while virtualization lets administrators run the same total number of servers on fewer physical machines, easing their workload substantially.

Keeping Up With the Demand

Due to the growing trend toward virtualization, considerable stress is being placed on traditional data center infrastructure and storage devices. In a sense, the problem is a direct result of the popularity of VMs. Early VM deployments relied on local storage inside the physical server, which made it impossible for administrators to migrate VMs between physical servers. Introducing shared storage, either network-attached storage (NAS) or a storage area network (SAN), to the VM hosts solved this problem and paved the way for stacking on more and more VMs. Eventually the situation matured into today's server virtualization model, in which all physical servers and VMs are connected to the same shared storage.

The challenge? Data congestion.

A single point of entry becomes a performance bottleneck very quickly, and with all data flowing through a single gateway, data can get congested during periods of heightened activity. The number of VMs and quantity of data are projected to grow exponentially, setting the stage for a shift in data center infrastructure design.

Lessons to be Learned

Early adopters of virtualized servers have already encountered this issue and are taking steps to reduce its impact. As other organizations transition their data centers towards virtual environments, they will run into this growing challenge as well.

There is hope for organizations that want to virtualize while avoiding the data congestion of traditional scale-out environments. By eliminating the single point of entry, they can ensure that their storage architecture keeps pace with their growing VM usage. Most NAS and SAN solutions today funnel all traffic through a single access point that controls the flow of data, leading to congestion when demand spikes. Instead, organizations should pursue options that provide multiple data gateways and distribute information evenly across all servers. That way, when many users access the system at once, it can sustain performance and keep latency low.
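
One common way to achieve that kind of even spread is hash-based placement, where every node owns a share of the keyspace and any node can act as a gateway. The following is a toy Python sketch of the idea, not any particular vendor's implementation; the node names and object counts are invented for illustration:

    import bisect
    import hashlib
    from collections import Counter

    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    class HashRing:
        """Consistent-hash ring: each node owns a slice of the keyspace."""
        def __init__(self, nodes, vnodes=100):
            # Virtual nodes smooth the distribution across physical nodes.
            self._points = sorted(
                (_hash(f"{node}#{i}"), node)
                for node in nodes
                for i in range(vnodes)
            )
            self._hashes = [h for h, _ in self._points]

        def node_for(self, key: str) -> str:
            idx = bisect.bisect(self._hashes, _hash(key)) % len(self._points)
            return self._points[idx][1]

    ring = HashRing(["node-a", "node-b", "node-c", "node-d"])
    placement = Counter(ring.node_for(f"vm-image-{i}") for i in range(10_000))
    print(placement)  # roughly 2,500 objects per node, so no single choke point

Because every node applies the same placement rule, a request can enter the system at any node and be routed directly to the owner of the data, rather than queuing behind a single controller.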

Although this approach represents the most direct solution, the upcoming generation of storage architecture offers another alternative.

Unified Computing and Storage

To meet the storage challenge of scale-out virtual environments, an entirely new approach is taking shape. Running VMs within the storage nodes (or, equivalently, running the storage inside the VM hosts), so that each node acts as both a storage node and a compute node, is quickly becoming the future of storage infrastructure.

Ultimately, this flattens the entire infrastructure. In a traditional deployment that uses shared storage in a SAN, for example, the VM hosts form the upper layer and the storage sits below them as a solitary system with a single entry point. To solve the data congestion problem that this dual-layer design creates, organizations are moving away from it and toward an architecture in which the virtual machines and the storage operate on the same layer.
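
As a rough illustration of that flattened, single-layer model, the toy sketch below treats every node as both a VM host and a storage node, with a simple hash rule deciding which node owns each object. The node names and the placement rule are assumptions made for the example, not a description of any specific product:

    import hashlib

    NODES = ["node-a", "node-b", "node-c"]

    def owner_of(key: str) -> str:
        # A deliberately simple placement rule, for illustration only.
        digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return NODES[digest % len(NODES)]

    class HyperconvergedNode:
        """One node that acts as both compute (VM host) and storage."""
        def __init__(self, name):
            self.name = name
            self.objects = {}  # this node's slice of the cluster-wide store

        def write(self, key, blob, cluster):
            # The request enters through this node and goes straight to the
            # owning peer; there is no central gateway in the data path.
            cluster[owner_of(key)].objects[key] = blob

        def read(self, key, cluster):
            return cluster[owner_of(key)].objects.get(key)

    cluster = {name: HyperconvergedNode(name) for name in NODES}

    # A VM on node-a writes its disk object; a VM on node-b reads it back.
    cluster["node-a"].write("vm-42-disk", b"...", cluster)
    print(owner_of("vm-42-disk"), cluster["node-b"].read("vm-42-disk", cluster))

Every node is an entry point, so load spreads out as the cluster grows instead of converging on a dedicated storage head.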

Moving Forward

The growing trend of infrastructure virtualization is not slowing down anytime soon, and neither are the benefits associated with it. The same IDC study referenced above found that respondents anticipated hardware utilization rates of 60 to 80 percent. More and more companies will implement virtualization and will subsequently run into the performance lag described above. By following in the footsteps of the early adopters and adhering to the best practices they established, however, organizations can build a successful scale-out virtual environment that optimizes performance and keeps infrastructure expenditures low.

Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Stefan has designed and built numerous enterprise-scale data storage solutions that are cost effective for storing huge data sets. From 2004 to 2010 Stefan worked in this field at Storegate, the wide-reaching Internet-based storage solution for consumer and business markets, with the highest possible availability and scalability requirements. Previously, Stefan worked on system and software architecture on several projects with Swedish giant Ericsson, the world-leading provider of telecommunications equipment and services to mobile and fixed network operators.




Edited by Stefania Viscusi