Enterprise Strategy Group | Getting to the bigger truth.™

Posts Tagged ‘VMware’

Oracle Server Virtualization: The Quiet Killer Technology

Tuesday, November 9th, 2010

While Wall Street and the IT industry remain ga-ga over VMware, the company has some serious challenges ahead. According to ESG Research, large organizations are embracing server virtualization technology for workload consolidation, but continue to struggle with more sophisticated server virtualization implementation issues like:

  1. Performance management. Server virtualization creates numerous integrated “moving parts” when applications running on dedicated servers suddenly move to shared hardware.
  2. Complex multi-tiered application deployment. Server virtualization is fine for discrete workloads controlled by IT but application owners are much more cautious. This is especially true when it comes to applications that depend upon multiple horizontal and vertical tiers. Furthermore, middleware is particularly dicey since it anchors all application-to-application communications.

Given these complexities, many firms simply eschew server virtualization for complex mission-critical applications and point VMware and Xen at more basic workloads. This helps cut capital costs and optimizes hardware but it doesn’t really change IT fundamentals.

It is these very complex application workloads where Oracle has a distinct advantage. Rather than starting with server virtualization and looking up the technology stack, Oracle starts at the business application and looks down. In this way, Oracle can align its portfolio of business applications, development tools, and middleware with tight integration for server virtualization. Oracle is already doing this with WebLogic Server Virtualization Option and Oracle Virtual Assembly Builder. Oracle also tightly integrates these suites on its virtualization technology without the need for an operating system. Finally, Oracle is plugging its application and infrastructure management tools into server virtualization as well.

Server virtualization and cloud nirvana comes down to simple and automated provisioning and configuration of an application “stack” that includes business apps, middleware, databases, operating systems, networks, and storage. Once deployed, real-time management kicks in to ensure availability, security, and high performance. Oracle hasn’t got all of these pieces but it appears to me that it has more of them than others.
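To make the "stack" idea concrete, here is a minimal sketch (my own illustration, not any vendor's actual API) of what automated, bottom-up provisioning of such a stack might look like:

```python
# Hypothetical sketch of automated "stack" provisioning: layers are deployed
# bottom-up, and a failure at any layer halts the rollout. All names here are
# illustrative, not any vendor's real API.
STACK_LAYERS = ["storage", "network", "os", "database", "middleware", "business_app"]

def provision_stack(deploy):
    """Deploy each layer in dependency order.

    `deploy` is a callable that takes a layer name and returns True on success.
    Returns the list of successfully deployed layers.
    """
    deployed = []
    for layer in STACK_LAYERS:
        if not deploy(layer):
            raise RuntimeError(f"provisioning failed at layer: {layer}")
        deployed.append(layer)
    return deployed

# Stub deployer that always succeeds, just to show the call pattern:
print(provision_stack(lambda layer: True))
```

Once a stack is captured this way, the real-time management described above (availability, security, performance) can key off the same layer inventory.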

In my view, Oracle has one other distinct advantage. Server virtualization is a deployment option as far as Oracle is concerned so Oracle’s server virtualization market share won’t make or break the company or its multitude of business units. VMware on the other hand is anchored to virtualization so it must evolve its technology from its hypervisor roots into an enterprise and cloud computing platform in order to drive further growth and scale. VMware may be up to this task but Oracle has a much more straightforward server virtualization path ahead.

Get Ready for Multiple Virtualization Platforms

Tuesday, October 26th, 2010

My colleague Mark Bowker and I are at a Virtualization, Cloud Computing, and Green IT conference in Washington DC this week. In one of the panels we hosted, an IT executive from a cabinet-level agency mentioned that the agency was qualifying Microsoft Hyper-V even though it already has an enterprise license in place with VMware. When asked why the agency was doing this, he responded, “we are a Windows shop and have a great relationship with Microsoft. VMware has been great but we simply believe that the world is moving to heterogeneous virtualization platforms and we want to be ready for this.”

This IT executive is not alone. In a recent ESG Research study, 55% of the organizations surveyed say that their primary virtualization solution is VMware (VMware Server, ESX, ESXi, etc.). This relationship with VMware doesn’t preclude them from using other hypervisors, however. In fact, 34% of survey respondents are using two virtualization solutions and 36% are using three or more. This was a survey of 463 North American-based IT professionals working at organizations with more than 500 employees.

My take-aways are as follows:

  1. Users should plan for multiple virtualization platforms. Standardization is great but it is likely that some applications and workloads will work best on one hypervisor versus another. This will demand training and management of disparate environments so standard processes and tools will be crucial.
  2. Training is key. Vendors need to realize that users need help with training and skills development before they buy the next virtualization widget.
  3. Vendors should develop broad partnering strategies. Two years ago, dedicating all virtualization resources to VMware was probably a good bet, but this is no longer the case. Need proof? Cisco recently struck up a relationship with Citrix even though it has lots of resources invested in VMware and its “three amigos” relationship.

Yeah, I know, everyone would like one standard IT solution to meet all their needs. It hasn’t happened in the past and it won’t happen with virtualization either. The sooner that IT professionals and the industry recognize this the better.

Server Virtualization Security: A Lot More Work Is Needed

Monday, October 25th, 2010

If you attended VMworld in late August, you know that virtualization security was featured extensively. Ditto for VMworld Europe where VMware CEO Paul Maritz included a few security slides in his keynote presentation. Maritz and VMware get it–virtualization security has been somewhat neglected until recently. If server virtualization is truly to become next-generation cloud infrastructure, security must be integrated throughout the technology.

VMware vShield and partner products are a great start toward bridging this virtualization security gap. Unfortunately, security technology is only part of the problem. ESG recently surveyed 463 large mid-market (i.e., 500-1,000 employees) and enterprise (i.e., more than 1,000 employees) organizations in North America to gauge how they were using server virtualization technology. The goal was to understand current use, future plans, successes, and challenges. It turns out that security problems are pretty persistent. For example:

  1. Security is often an afterthought. You know the “throw it over the wall” IT story? It happens here with security. Server virtualization projects are often well along the way before the security team gets involved. In these cases, server virtualization infrastructure adds security risk from the get-go.
  2. Security professionals lack server virtualization skills. When the security team gets called into the project, they aren’t really qualified to help. Since projects tend to continue, server virtualization security risks increase while the security team gets up to speed.
  3. There are no best practices. This may be changing but security professionals complain that server virtualization security doesn’t fit neatly into existing security frameworks and operating models.

In aggregate, there is a people problem (i.e., security skills), an organizational problem (i.e., project management/cooperation), and a process problem (i.e., no best practices). Yes, these issues do ease over time but it is clear to me that they never go away. At some point, highly-regulated organizations are likely to slow down server virtualization projects to address these security gaps. When this happens, server virtualization/cloud vendors will see sales slow to a crawl.

VMware is a technology company so it is doing what comes naturally–addressing security holes with new products and industry relationships. Nevertheless, VMware needs additional help from standards bodies, IT and security professional organizations, and professional services firms. ESG’s research clearly illustrates that server virtualization is a paradigm-shifting technology that changes IT organizations and processes. The real revolutionary potential of server virtualization won’t occur until IT organization and process changes become as pervasive as hypervisors.

Cloud Computing? We Still Haven’t Mastered Server Virtualization!

Tuesday, October 19th, 2010

According to ESG Research, only 7% of large mid-market (i.e., 500-1,000 employees) and enterprise (i.e., more than 1,000 employees) organizations are not using server virtualization technology and have no plans to do so. Alternatively, 61% are using server virtualization technology extensively in test/development AND production environments.

Okay, so server virtualization technology is everywhere, but how are large organizations using it? Many technology vendors would have you believe that enterprises are using server virtualization as the on-ramp to cloud computing. The industry crows about server virtualization’s use for IT automation and self-service, as VMs are rapidly provisioned, dynamically re-configured, and moved constantly from physical server to physical server for load balancing and resource optimization.

It’s a great vision, it just isn’t happening today. Most organizations use server virtualization for web applications and file and print services but far fewer have taken on transaction-oriented applications or databases. Many firms still struggle with performance issues when trying to align physical networks, storage devices, and servers with virtualization technology. As for VM mobility (i.e., vMotion), only 30% of the organizations surveyed by ESG use VM mobility on a regular basis. Why eschew VM mobility? It turns out that 24% of organizations say they have no need to use VM mobility functionality at this time.

The ESG data does suggest that server virtualization represents a paradigm shift driving huge changes in IT organizations, processes, and technologies, but these transitions will take time to work their way out. Many enterprises will get to a state of more dynamic data center transformation–around 2013 or so.

Take my word for it, the IT rhetoric around server virtualization is visionary hype rather than actual reality. I’ve got tons of data to back this up. There are more average Joe IT shops out there than whiz-bang organizations like Microsoft, and there always will be.

Networking and Virtualization Vendors Should Join the Open vSwitch Effort

Thursday, September 16th, 2010

My colleague Mark Bowker and I are knee-deep in new research data on server virtualization. Within this mountain of data, we are discovering some existing and impending networking issues related to network switching.

Today, many server virtualization projects are led by server administrators, with little or no participation from the networking team. As you may imagine, this means that the server team configures all virtual switches to the best of its ability, without considering how physical switches are already configured. As things scale, the server team realizes the error of its ways and quickly calls the networking group in to help out. This is where things really break down. Before doing anything, the networking folks have to learn the virtualization platform, understand how the physical and virtual networks should interoperate, and then roll up their sleeves and start gluing everything together.

This is a painful learning curve but I believe that future issues will be far more difficult. As organizations increase the number of VMs deployed, networking configurations get more difficult — especially when VMs move around. Users regularly complain about the number of VLANs they have to configure, provision, and manage. This situation will grow worse and worse as VMs become the standard unit of IT.

In my mind, it makes no sense for virtualization vendors like Citrix, Microsoft, Oracle, and VMware to recreate the richness of physical L2 switches in the virtual world. So what can be done? Well one alternative is to eliminate virtual switches entirely and do all switching at the physical layer via the Virtual Ethernet Port Aggregator (VEPA) standard being developed in the IEEE.

I believe this will happen, but in the meantime there is another alternative being discussed this week at the Citrix Industry Analyst Event — Open vSwitch. As described on the project web site, “Open vSwitch is a multilayer virtual switch licensed under the open source Apache 2.0 license. The goal is to build a production quality switch for VM environments that supports standard management interfaces (e.g., NetFlow, RSPAN, ERSPAN, CLI), and is open to programmatic extension and control.”
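To ground the “standard management interfaces” claim, here is a sketch of the ovs-vsctl commands an administrator would typically run to create a bridge and wire up NetFlow export. The command syntax is real Open vSwitch CLI as I understand it, but the bridge name, uplink port, and collector address are made-up examples; the commands are composed rather than executed, since actually running them requires an Open vSwitch installation:

```python
# Compose the ovs-vsctl invocations for a basic Open vSwitch setup:
# create a bridge, attach a physical uplink, and enable NetFlow export.
# Names and addresses are illustrative only.
def ovs_bridge_commands(bridge, uplink, netflow_collector):
    return [
        ["ovs-vsctl", "add-br", bridge],            # create the virtual switch
        ["ovs-vsctl", "add-port", bridge, uplink],  # attach the physical NIC
        ["ovs-vsctl", "--", "set", "Bridge", bridge, "netflow=@nf",
         "--", "--id=@nf", "create", "NetFlow",
         f'targets="{netflow_collector}"'],         # point NetFlow at a collector
    ]

for cmd in ovs_bridge_commands("br0", "eth0", "10.0.0.10:5566"):
    print(" ".join(cmd))
```

The point for the networking team is that these are the same flow-monitoring and management hooks they already use on physical switches, exposed on the virtual switch.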

Here’s why this makes sense to me:

  1. Given a pool of collective resources, a collaborative open effort would provide more advanced switching functionality sooner rather than later.
  2. An open alternative would expose APIs that could be easily integrated with leading switch management tools from Brocade, Cisco, Extreme, Force 10, HP, Juniper, etc.
  3. Vendors would not have to integrate with each hypervisor independently. This would improve code quality and again speed time-to-market.

At the very least, Citrix, Microsoft, and Oracle should back this as a way to push back on VMware’s market share lead.

I’ve been around long enough to know the strengths and limitations of open source and standards but I think that with the right support, this one could have legs. I know that vendors have their own businesses to look after but isn’t another end goal to create products that the market wants? I think Open vSwitch would fit this bill.

VMware vShield: A Good Start, but . . .

Wednesday, September 1st, 2010

You’ve got to hand it to VMware — it clearly understands the strengths and weaknesses of the ESX environment and is focused on improving the platform. Case in point: this week’s VMworld, when the company announced the VMware vShield family of security products.

From the early announcement, it seems that vShield is composed of:

  • vShield Edge. To enable secure multi-tenancy, vShield Edge virtualizes data center perimeters and offers firewall, VPN, Web load balancer, NAT, and DHCP services.
  • vShield App. VMware calls this a hypervisor-based, application-aware firewall that creates application boundaries based upon policies. It’s a bit confusing, but I believe it manages and secures VM-to-VM traffic within a logical virtual application. VMware needs to clarify this, as the term “application firewall” has a completely different meaning.
  • vShield Endpoint. This one’s much easier to understand: rather than run endpoint security software on each virtual endpoint, vShield Endpoint virtualizes security components like signature databases, scanning engines, and schedulers. Much more efficient than pretending that virtual endpoints are physical devices.
  • vShield Zones. Again, a bit confusing, but it seems like basic ACL capability built into vSphere.

Now I’m not at VMworld, so I’m reading between the lines. Nevertheless, I like the direction VMware is taking. ESG Research indicates that security is a big issue with server/desktop virtualization. This is true for everyone from virtualization newbies to sophisticated shops.

The vShield products are a great foundation for VMware, but I believe there is still a lot of work to do beyond clearing up the messaging. I suggest that VMware:

  1. Dedicates ample resources for user education. ESG Research points to a general lack of virtualization knowledge and skills, especially with security professionals. Note to VMware: If security professionals don’t understand the ESX environment, they won’t buy your products.
  2. Clarifies its partnering strategy. I can’t really tell if VMware intends to partner with or compete with companies like F5, Juniper Networks, Check Point Software, etc. I’m sure I’m not the only one.
  3. Works on standards. If my standard firewall is a Juniper SRX, I really don’t want a one-off VMware product in my virtual infrastructure. If vShield can’t “talk” to other products through some new security standards, no one will want it.
  4. Stops talking about “better than physical security.” I get the concept, but the vast majority of users don’t have the baseline knowledge about server virtualization to believe this. Improved security should be a destination/vision and not an overly bold tag line.

Heterogeneous Server Virtualization

Wednesday, June 2nd, 2010

Hats off to VMware for its leadership in server virtualization. That said, I am hearing more and more stories about heterogeneous server virtualization in large organizations.

Does this mean that VMware is faltering? No. As virtualization has gone from cutting edge to mainstream, however, IT organizations are gaining virtualization experience, testing other platforms, and finding good fits for KVM, Microsoft, Oracle, and XenServer–alongside VMware.

At the beginning of 2010, ESG Research asked 345 mid-market (i.e., less than 1,000 employees) and enterprise (i.e., more than 1,000 employees) firms which server virtualization solutions were currently deployed at their organizations. The data supports the scuttlebutt I heard on my recent travels:

  • VMware Server: 38%
  • Microsoft Virtual Server: 36%
  • VMware ESX: 32%
  • Citrix XenServer: 30%
  • Oracle VM: 19%
  • VMware ESXi: 18%
  • Microsoft Hyper-V: 17%
  • Sun xVM Server: 10%
  • KVM: 9%
Based on anecdotal evidence, I don’t think this is a phase–it looks like multiple server virtualization platforms in the enterprise is the future. What does this mean?

  1. Server virtualization will get more complex. IT will need specialization on multiple platforms.
  2. Vendors need to pick multiple dance partners. VMware is clearly a safe bet but IT infrastructure and tools vendors need to think beyond VMware alone. Microsoft and Citrix will likely recruit partners with an endpoint focus while KVM and Oracle will be more of a data center play.
  3. Services opportunities abound. IT complexity and skills deficits are on the horizon. Services vendors that can bridge these gaps will prosper in 2011 and 2012.

FedRAMP Seeks to Unify Cloud Computing Security Standards Across the U.S. Government

Wednesday, May 5th, 2010

Yesterday, I hosted a panel at the Cloud Computing summit focused on cloud security for the federal government. The panel was made up of some smart folks: Alex Hart from VMware, Bob Wambach, and one of the primary authors of the Cloud Security Alliance guidelines, Chris Hoff from Cisco.

While these folks offered great contributions, most questions were focused on the fourth member of the panel, Peter Mell from NIST, the chair of the Federal Cloud Computing Advisory Council. Why? Let’s just say that Mell may be the single individual most focused on cloud security in the world. He has been tasked with defining cloud computing standards for the entire federal government–a big responsibility since President Obama and Federal CIO Vivek Kundra continue to trumpet the benefits of cloud computing and push federal agencies to adopt pilot projects.

Mell’s work will soon come to fruition when the feds introduce the Federal Risk and Authorization Management Pilot program (FedRAMP). FedRAMP has two primary goals:

  1. Aggregate cloud computing standards. Today, many agencies have their own sets of standards, which complicates procurement and frustrates federally-focused technology vendors. FedRAMP is intended to consolidate cloud computing requirements into one set of standards that span the entire federal government.
  2. Ease agency certification processes. Let’s say Microsoft’s federal cloud is FISMA-certified by the Dept. of Agriculture. In today’s world, this wouldn’t matter to any other agency–they would still be required to certify Microsoft’s cloud before procuring services. Kundra, Mell, et al. recognize the redundancy and waste here. With FedRAMP, once a cloud provider passes the Certification and Accreditation (C&A) process of one agency, all other agencies get a free pass.
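The reciprocity logic is simple enough to sketch in a few lines. This is a toy model of my own, not the actual FedRAMP process: before FedRAMP, each agency honors only its own certifications; under FedRAMP, certification by any one agency suffices.

```python
# Toy model contrasting per-agency certification with FedRAMP-style
# reciprocity. Agency and provider names are illustrative strings only.
certifications = set()  # (agency, provider) pairs

def certify(agency, provider):
    certifications.add((agency, provider))

def can_procure_pre_fedramp(agency, provider):
    # Old world: only this agency's own C&A counts.
    return (agency, provider) in certifications

def can_procure_fedramp(agency, provider):
    # FedRAMP reciprocity: any agency's C&A counts.
    return any(p == provider for _, p in certifications)

certify("Dept. of Agriculture", "Microsoft federal cloud")
print(can_procure_pre_fedramp("GSA", "Microsoft federal cloud"))  # False
print(can_procure_fedramp("GSA", "Microsoft federal cloud"))      # True
```

The entire savings FedRAMP promises lives in that one-line difference between the two checks.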

Since FedRAMP is still a work in progress, the audience made up of federal IT people had a lot of questions about all of the fine points. Thus Mell was in the hot seat for most of the time.

Peter Mell deserves a lot of credit. Federal agencies have often acted independently with regard to IT, so Mell and his team are herding cats.

If FedRAMP works, cloud service providers can deliver to a single set of standards. This will encourage innovation and bolster competition. On the agency side, FedRAMP could pave the way for a wave of cloud computing consumption over the next few years. What happens if FedRAMP fails? The federal government becomes difficult to service, so most cloud service providers treat it as a market niche. If that happens, the federal government could lose its cloud computing leadership and momentum very, very quickly.

Cisco Announcement: More than the CRS-3

Wednesday, March 10th, 2010

Cisco is getting a lot of flak for billing its announcement yesterday as something that will “change the Internet forever.” I certainly understand this sentiment–will a new high-end core router, albeit with very impressive performance ratings, really change the Internet forever?

The answer is pretty simple: the router alone won’t change the Internet, but the underlying architecture? That’s another story.

Looking a bit below the surface, Cisco wants to build integrated network services that span the entire cloud. Internet data centers will be able to share network services like traffic management, prioritization, and security with service providers and cloud services, using provisioning tools rather than complex networking devices. Want to burst processing or gain instant access to more storage? The network (in this scenario, the Cisco network) will help expedite and manage this. The fact that Cisco is arming CRS-3 with a network positioning system should be a strong hint at where it ultimately wants to go.

Endpoints are also included in the architectural mix. PCs, smart phones, home routers, and even cable TV set tops will have “always-on” access to network services across wired and wireless public and private networks based upon business and security policies. Video and IP telephony instantly gain network priority over gaming or random web surfing. Even in your home, Cisco’s aim is to let you (and your service provider) create network policies for IP traffic, access control, and overall security.

To me, the “change the Internet” message is a one-two punch: embed the foundation technology everywhere and then provide Cisco’s strong enterprise and service provider customers with ample ways to use the services, improve communications and productivity, and make money.

Okay, so if this is a “seed and harvest” strategy, Cisco is still in the “seed” part of the process. Nevertheless, with Cisco UCS, CRS-3, set top boxes, VPN clients, etc., Cisco is planting a lot of seeds in a lot of places.

Cisco still has a lot of work ahead, but the roots are moving into place. I don’t think that the CRS-3 will impact Brocade, Dell, Extreme Networks, Force 10, HP, IBM, or Juniper overnight, but each incremental piece of the overall architecture makes the story more compelling for consumers, enterprises, and service providers. This is where the “change the Internet” message becomes more real.

RSA 2010: Cloud Security Announcements Already Dominate

Tuesday, March 2nd, 2010

It’s pouring in San Francisco, but ironically, the RSA Conference is already pointed toward clouds–in this case, cloud computing security.

There were two announcements yesterday around securing private clouds. New initiative king Cisco announced its “Secure Borderless Network Architecture,” which is actually pretty interesting. Cisco wants to unite applications and mobile devices through an “always-on” VPN. In other words, Cisco software will enforce security policies for mobile devices regarding which applications they can use and when–without user intervention. Pretty cool, but you would need a whole bunch of new Cisco stuff to make this happen.

On another front, industry big-wigs EMC, Intel, and VMware are pushing for a “hardware root of trust” for cloud computing. The goal here is to create technology that lets cloud providers share system state, event, and configuration data with customers in real time. In this way, customers can integrate cloud security with their own security operations processes and management. This is extremely important for regulatory compliance. (Note: Another reason why EMC/RSA bought Archer Technologies).

These interesting announcements probably presage a 2010 RSA Conference trend: “all cloud all of the time.” Since ESG Research indicates that only 12% of midsized (i.e., 100 to 999 employees) and enterprise (i.e., more than 1,000 employees) organizations will prioritize cloud spending in 2010, all of this cloud yackety yack may be a bit over the top.

Two other announcements worth noting here:

  1. An actual leading voice on cloud computing security, the Cloud Security Alliance (CSA), teamed up with IEEE to survey users about cloud computing security. Users overwhelmingly want to see industry standards, and soon. Bravo CSA and IEEE, I couldn’t agree more.
  2. I like the F5 Networks/Infoblox announcement around DNSSEC. The two companies will offer integration technology between F5 load balancers and Infoblox DNSSEC. This partnership blends the security of DNSSEC with the reality of distributed web-based apps and infrastructure. Kudos to the companies, the federal government will be especially pleased.

See you at the show!

© 2010 Enterprise Strategy Group, Milford, MA 01757