
The Shortcut Guide to Virtualization and Service Automation

by Greg Shields

SYNOPSIS

You have read the news stories and seen the promise that virtualization brings to the enterprise data center. In just a few short years, the idea of virtualization and the business benefits it brings have spread to virtually every facet of IT. Whether consolidating servers, individual desktops, the network itself, your critical storage, or the applications that drive your data processing, virtualization is the hot topic throughout Information Technology today.

Yet while all of these purported benefits are valid, the smart enterprise recognizes that virtualization’s promise is only truly achieved when virtualization augments the processes of business. The intent of this guide is to assist the smart enterprise in understanding virtualization’s fit into the rest of the IT environment. A major part of that fit lies in aligning the promise of virtualization technology with the automation benefits associated with virtualization management. What you’ll find in reading this guide is that beyond the technologies and technological improvements virtualization brings to the table, a set of management enhancements also arrives. Those enhancements are a function of the levels of automation that naturally bundle with the move to virtualization.


CHAPTER PREVIEWS

Chapter 1: Virtualization and Service Management

Without the right integration into your enterprise’s business processes, virtualization is little more than technology hype.

It is likely you’ve read the news stories and seen the promise that virtualization brings to the enterprise data center. In just a few short years, the idea of virtualization and the business benefits it brings have spread to virtually every facet of Information Technology (IT). Whether consolidating servers, individual desktops, the network itself, your critical storage, or the applications that drive your data processing, virtualization is the hot topic throughout IT today. Virtualization’s play within the enterprise organization promises a host of easily recognizable benefits:

  • Reduced cost for power and cooling. More workloads per hardware chassis means fewer servers to power and keep cool.
  • Right-sizing of resources for workloads. Yesterday’s data center best practices recommended similar hardware compositions whenever possible for cost savings, yet this practice wastefully over-provisioned hardware for light workloads. Virtualization allows resources to be allocated in proportion to each workload’s actual demand.
  • Elimination of legacy hardware. Data centers are collections of hardware procured for various projects over time, yet maintaining that legacy hardware over extended periods becomes a growing liability. Virtualization provides a mechanism to retain the service while retiring the hardware platform beneath it.
  • Reduced total cost of ownership. Improvements in the ability to spin up new services and manage those that already exist reduce IT’s management cost of doing business. These gains arrive through significant increases in the speed at which common IT tasks can be accomplished.
  • Enhanced agility. Deploying new services and scaling those that already exist become faster once virtualized, thanks to virtualization’s intrinsic ability to rapidly deploy configurations across devices and environments.

Although all of these purported benefits are valid, the smart enterprise recognizes that virtualization’s promise is only truly achieved when virtualization augments the processes of business. Alone, and without the right indicators in place, virtualization becomes yet another technology in a long string of those that improve the lives of individual IT administrators yet don’t demonstrably impact the bottom line.

The intent of this guide is to assist the smart enterprise in understanding virtualization’s fit into the rest of the IT environment. A major part of that fit lies in aligning the promise of virtualization technology with the automation benefits associated with virtualization management. What you’ll find in reading this guide is that beyond the technologies and technological improvements virtualization brings to the table, a set of management enhancements also arrives. Those enhancements are a function of the levels of automation that naturally bundle with the move to virtualization.

In this guide, we’ll examine those elements of automation from the perspective of IT automation frameworks, specifically focusing on those framed by the Information Technology Infrastructure Library (ITIL) version 3. This guide isn’t intended to teach you the fundamentals of ITIL, nor is it necessarily intended to fit virtualization technologies into this process framework. It is, however, intended to use that existing framework as a guidepost for explaining how virtualization and service automation join to improve the fulfillment of needs for IT’s customers.


Chapter 2: Virtualization Automation: The Pure-play Approach

The previous chapter discussed virtualization within the context of service automation. Viewed through the lens of process frameworks such as ITIL v3, virtualization and the automation benefits it natively brings to the table assist the smart organization with service fulfillment across the entire service life cycle:

  • Service Strategy. Data analysis, including performance and metrics analysis across virtualized workloads, can be used to identify areas in need of additional services or augmentation. The improved visibility gained through virtualization’s common basis across all services provides strategy teams with better data to make decisions about where to apply resources.
  • Service Design. Once needs are identified, the design of new services is accelerated by virtualization’s ability to roll configuration changes backward and forward at will. Virtualization’s snapshotting and rapid deployment capabilities provide service design teams with a more flexible platform upon which to fine-tune designs before rolling them into production. These capabilities serve testing requirements as well, because virtualization enables test environments to be quickly reverted to nominal states at the conclusion of each test phase (a minimal sketch of this snapshot-and-revert pattern follows this list).
  • Service Transition. Once a service is ready for movement into production, virtualization elevates the level at which individual configuration items are logged by configuration control. With the virtual machine template itself serving as the configuration item, rather than each individual configuration setting, the level of effort involved in service documentation and change control is reduced. Validation activities prior to service operation gain the same benefits seen in the testing phase.
  • Service Operation. Services in operation require regular care and feeding, which can negatively impact workloads if not undertaken with careful precision. Virtualized workloads allow invalid changes to be rolled back when necessary, returning the environment to a pristine state with little impact on service quality. Additionally, the resiliency of services during non-nominal conditions is improved through virtualization’s enhancements to backup and restore as well as disaster recovery processes.
  • Continual Service Improvement. Lastly, throughout all of these steps is the constant need for gap identification and resolution. With the right management tools in place, metrics for templatized virtualized workloads are more easily measured against each other and against those of workloads in the physical world. These metrics give service improvement teams the hard data they need to locate areas for improvement.
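
To make the snapshot-and-revert pattern concrete, here is a minimal sketch using the libvirt Python bindings. It is an illustration only: the qemu:///system connection URI, the virtual machine name test-vm, and the snapshot name pre-test are assumptions, and other virtualization platforms expose equivalent operations through their own management APIs.

    # Minimal sketch: snapshot a test VM, run a test phase, then revert it
    # to its nominal state. Names and URI are illustrative assumptions.
    import libvirt

    SNAPSHOT_XML = """
    <domainsnapshot>
      <name>pre-test</name>
      <description>Known-good baseline captured before a test phase</description>
    </domainsnapshot>
    """

    def run_test_phase(run_tests):
        conn = libvirt.open("qemu:///system")    # connect to the hypervisor
        try:
            dom = conn.lookupByName("test-vm")   # the virtual machine under test
            snap = dom.snapshotCreateXML(SNAPSHOT_XML, 0)  # capture the baseline
            try:
                run_tests(dom)                   # caller-supplied test routine
            finally:
                dom.revertToSnapshot(snap, 0)    # return to the nominal state
        finally:
            conn.close()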

Although these automation improvements are a natural function of virtualization itself, they don’t necessarily arrive with the tools natively associated with virtualization platforms. Enterprises that rely on platform-specific tools alone may not be able to realize all of these benefits.

The right tools are indeed necessary to manage virtualized workloads as well as to gather and later visualize this data. Specialized management utilities, discussed in the next section, collect metrics that characterize workload performance, capacity, and behavior across multiple servers and services. Only through the effective analysis of these metrics in relation to environment needs can business services be appropriately improved to meet the ever-changing needs of business.
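
As a simplified illustration of that kind of cross-workload analysis, the short sketch below computes average and peak processor utilization per virtualized workload and flags candidates for augmentation. The sample figures and the 80 percent threshold are assumed values; a real management utility would collect such samples directly from the virtualization platform.

    # Sketch: compare utilization metrics across virtualized workloads.
    # Sample data and the 80% capacity threshold are assumptions.
    from statistics import mean

    cpu_samples = {                 # hypothetical percent-CPU samples per workload
        "web-frontend": [22, 31, 27, 25],
        "order-db":     [78, 85, 91, 88],
        "reporting":    [12,  9, 14, 11],
    }

    CAPACITY_THRESHOLD = 80         # workloads averaging above this need review

    for workload, samples in sorted(cpu_samples.items()):
        avg, peak = mean(samples), max(samples)
        status = "needs review" if avg >= CAPACITY_THRESHOLD else "ok"
        print(f"{workload:14s} avg={avg:5.1f}%  peak={peak:3d}%  -> {status}")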


Chapter 3: Automating Change Management in Virtualized Environments

Virtualization today is an absolute conversation starter. Organizations both big and small recognize it as a major game changer, forcing IT professionals to shift the ways in which we think about managing our environments. But virtualization arrives as a business enabler as well, and it realizes that promise because it delivers real value. In an IT ecosystem where technologies arise every day that merely hint at solving business problems, virtualization is unique in its ability to rise above the hype cycle and truly ease the processes of business.

What’s particularly interesting about virtualization’s play is its potential for penetration across the broad spectrum of environment shapes and sizes. Whereas many game-changing technologies don’t show their full value until they’re incorporated at large scale, virtualization can assist the small business as much as the enterprise. Although the small business will recognize a different facet of value than the enterprise does, part of virtualization’s value lies in how it touches almost every aspect of a business’s change management needs:

  • Cost savings. With advancements in technology constantly increasing the speed of new hardware, modern data processing has shifted from a focus on demand limits to one of virtually unlimited supply. The average non-virtualized computer system today performs useful work only around 5% of the time, which means that 95% of its available processing cycles go to waste. Consolidating physical workloads atop a virtual platform enables IT to use computing resources far more efficiently (a back-of-the-envelope sketch of this arithmetic follows this list). The end result is an overall reduction in the need for physical hardware, shrinking data center footprints and lowering the cost of doing business.
  • Availability and resiliency. Virtual workloads are by nature more resilient than those installed directly on physical hardware, a resiliency that is a function of their decoupling from that hardware. When data processing is abstracted from the physical layer, the loss of any single piece of hardware no longer results in a loss of service. Smart businesses see the value of improved availability and its direct impact on the bottom line.
  • Automation. Virtualization’s impact on process completion happens as a function of its capabilities for automation. The discussion in this chapter is written to give you an understanding of what those automation capabilities are and where they can be applied to meet the needs of change management.
  • Process alignment. Taking a new service from idea to implementation is a difficult task, and acquiring the hardware resources necessary to make it a reality can add costs to the point where new and needed services are no longer fiscally viable. Virtualization’s rapid prototyping and rapid deployment capabilities (especially when managed through the service solutions described in Chapter 2) reduce barriers to entry for service development and ease the steps required for effective change control. In the end, virtualization’s alignment of technology with process brings levels of agility never before seen in IT.
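
The consolidation arithmetic referenced in the cost-savings bullet can be sketched in a few lines. The 5 percent utilization figure comes from the text above; the fleet size and the 60 percent target host utilization are assumptions chosen only to illustrate the calculation, which also ignores memory and I/O constraints.

    # Back-of-the-envelope consolidation arithmetic. The 5% figure is from
    # the chapter; the server count and 60% target loading are assumptions.
    import math

    physical_servers = 100        # lightly loaded physical servers today
    avg_utilization = 0.05        # ~5% useful work per server
    target_host_util = 0.60       # how heavily to load each virtualization host

    useful_work = physical_servers * avg_utilization          # in "busy server" units
    hosts_needed = math.ceil(useful_work / target_host_util)  # consolidated host count

    print(f"Useful work: {useful_work:.1f} server-equivalents")
    print(f"Hosts needed after consolidation: {hosts_needed} (down from {physical_servers})")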


Chapter 4: Problem Resolution in Virtualized Environments

This final chapter looks at the all-important processes that surround the identification and resolution of IT problems. Virtualization and the service-centric management technologies that wrap around it come together in ways that greatly enhance problem identification. Because this step usually consumes the largest amount of time in the resolution process, speeding problem identification has a substantial impact on overall data center health.

It is with this concept of “health” that this chapter will spend much of its time, as it is the health of systems that determines whether they are capable of fulfilling their stated missions. An unhealthy piece of IT infrastructure cannot provide good quality service to its customers and is a ready source of IT problems, while a healthy one needs no special attention.

And yet identifying which systems are healthy and which are not is a complicated undertaking. What makes a physical or virtual machine unhealthy? Is there a functional problem with the system itself? Is it performing its task or failing at it? Is the system even operational, or has it gone down for one reason or another? All of these are important questions to ask when considering the problem identification process, but the determination of a system’s health goes even deeper. Consider some of the deeper-level questions that must be asked:

  • Is it a problem if a system’s disks are reading and writing data at a lower rate than normal?
  • Is it a problem if memory or processor utilization is greater than a specific amount?
  • Are there problems with the underlying virtualization platform? Are they manifesting into issues with the virtual machines themselves?
  • Is there a downstream system this one relies upon whose problems impact the system we are looking at?

With virtually every IT service requiring more than one element (server, network, storage, and so on) for its proper functionality, the job of problem identification is a complex one. As you’ll soon discover, making the move to virtualization also adds layers of complexity that further complicate problem identification and resolution.
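
The threshold questions above can also be encoded as simple automated checks. The sketch below is illustrative only: the metric names, baselines, and thresholds are assumptions rather than any particular product’s rules, and a production monitoring tool would evaluate far richer data.

    # Sketch: encode the health questions above as automated checks.
    # All metric names, baselines, and thresholds are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class SystemMetrics:
        disk_mb_per_s: float            # observed disk throughput
        baseline_disk_mb_per_s: float   # normal disk throughput for this system
        cpu_percent: float
        memory_percent: float
        platform_healthy: bool          # underlying virtualization platform status
        dependencies_healthy: bool      # downstream systems this one relies upon

    def health_problems(m):
        problems = []
        if m.disk_mb_per_s < 0.5 * m.baseline_disk_mb_per_s:
            problems.append("disk throughput well below its normal baseline")
        if m.cpu_percent > 90 or m.memory_percent > 90:
            problems.append("processor or memory utilization above threshold")
        if not m.platform_healthy:
            problems.append("underlying virtualization platform reporting issues")
        if not m.dependencies_healthy:
            problems.append("a downstream dependency is itself unhealthy")
        return problems

    vm = SystemMetrics(40.0, 120.0, 95.0, 70.0, True, False)
    print(health_problems(vm) or "healthy")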