Eric Johnson
Virtualization has been increasingly used for leveraging
underutilized compute resources, but there are questions about whether
we are trying to use virtual machines (VMs) in situations for which they
were not intended. Virtual computing has a much longer history than
most imagine. Like many technological approaches that have been
repurposed over time (Tag Switching/Multiprotocol Label Switching, or
MPLS, comes to mind), virtual computing initially had a different value
proposition, enabling legacy applications to execute on modern systems.
Today, as we use VMs more extensively, we tend to gloss over the fact that,
from a systems engineering perspective, no matter how much we abstract the
virtual from the physical, we are still left with physical system constraints.
In cloud computing, one of the key factors that will distinguish successful,
even viable, architectures from those that are unsuccessful or unreliable will
be the degree to which these virtualization implementations map to the
physical world.
For example, we can mount dozens of virtual machines on a system, but we will
be gated by the physical channels within the physical host. These channels can
be exhausted long before other system resources, such as CPU, reach
saturation. Even before that point, applications executing on the host may
receive sub-optimal service.
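A rough back-of-the-envelope sketch illustrates the point. The figures below
(host NIC capacity, core count, per-VM demand) are illustrative assumptions
rather than measurements of any particular platform, but the shape of the
result holds: the shared channel runs out well before the CPUs do.

    # Hypothetical consolidation host: figures are illustrative assumptions only.
    HOST_NIC_GBPS = 10.0   # shared physical network channel
    HOST_CORES = 32

    VM_NIC_GBPS = 0.5      # average bandwidth each VM drives
    VM_CORES = 0.5         # average CPU each VM consumes

    def utilization(vm_count):
        """Return (NIC utilization, CPU utilization) as fractions of host capacity."""
        nic = vm_count * VM_NIC_GBPS / HOST_NIC_GBPS
        cpu = vm_count * VM_CORES / HOST_CORES
        return nic, cpu

    for vms in (10, 20, 30, 40):
        nic, cpu = utilization(vms)
        print(f"{vms:>2} VMs -> NIC {nic:4.0%}, CPU {cpu:4.0%}")

    # Under these assumptions the NIC is fully consumed at 20 VMs while the CPUs
    # sit near 31%: the channel gates the host long before compute saturates.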
The same holds true for virtual switches: they are really just logical
forwarding elements, essentially shared adapters. That distinction segues into
network virtualization.
In the case of network virtualization, many are using this term in a
manner analogous to server virtualization, and in doing so they misuse
the concept. Virtualization enables single resources to look and feel
like many resources and conversely many resources to look and feel like a
single resource; the network virtualization being discussed simply
doesn’t natively accomplish that, and to expect that behavior natively
only invites disappointment. Compute virtualization natively leverages unused
resources within a contained system, and it is a significant tool when used
properly, that is, with due regard for physical constraints.
Network virtualization occurs in a system of systems. It has commonly been
codified as the relocation of control-plane logic from the switch vendor's
implementation to a control plane managed remotely, running the customer's own
control-plane logic. On this basis, most believe that establishing multiple
control planes is adequate to “virtualize” the network.
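To make that codification concrete, here is a minimal conceptual sketch of the
separation being described: the forwarding tables (data plane) stay on the
switches, while the route computation (control plane) moves to a remote
controller that holds a global view of the topology. The classes, topology and
names are hypothetical illustrations, not any vendor's or standard's API.

    from collections import deque

    class Switch:
        """Data plane only: forwards by table lookup and holds no routing logic."""
        def __init__(self, name):
            self.name = name
            self.table = {}                 # destination -> next-hop switch name

        def install(self, dest, next_hop):
            self.table[dest] = next_hop     # entry pushed down by the controller

        def forward(self, dest):
            return self.table.get(dest)

    class Controller:
        """Remote control plane: global topology view, computes and installs routes."""
        def __init__(self, links, switches):
            self.links = links              # adjacency: switch name -> neighbor names
            self.switches = switches

        def program_routes_to(self, dest):
            # Breadth-first search outward from the destination gives every
            # switch its next hop toward that destination.
            next_hop = {dest: dest}
            queue = deque([dest])
            while queue:
                node = queue.popleft()
                for neighbor in self.links[node]:
                    if neighbor not in next_hop:
                        next_hop[neighbor] = node
                        queue.append(neighbor)
            for name, hop in next_hop.items():
                if name != dest:
                    self.switches[name].install(dest, hop)

    # Hypothetical three-switch chain: A - B - C
    links = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
    switches = {name: Switch(name) for name in links}
    Controller(links, switches).program_routes_to("C")
    print(switches["A"].forward("C"))       # -> "B": A's next hop toward C

Whatever leverage the controller has in this sketch comes entirely from its
global view of the topology, which is precisely the ingredient the next point
turns on.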
Yet this methodology by itself is inadequate to utilize unused network
resources. For network virtualization to natively use underutilized network
resources, and to be more truly analogous to compute virtualization, it
requires greater global knowledge, higher-layer knowledge, and more dynamic
affirmative capabilities than simply abstracting multiple control planes from
the data plane provides.
Positioned properly, both compute and network virtualization are incredibly
powerful tools for architecting and building the unified data centers and
networks we all want and need. However, the widely discussed network
virtualization approaches of today cannot provide affirmative measures of
service delivery, and until they do, these abstractions will offer more
academic value than added value for production environments.
Eric Johnson is Chairman and CEO of ADARA Networks. He is a subject matter
expert and speaker on advanced technology, networking, security, cloud
computing and architecture, and an adviser to Congress and the Department of
Defense.