Enterprise Strategy Group | Getting to the Bigger Truth™

Posts Tagged ‘Citrix’

The Smart-Fat and Smart-Thin Edge of the Network

Wednesday, November 17th, 2010

Take a look at ESG Research and you’ll see a number of simultaneous trends. Enterprises are consolidating data centers, packing them full of virtual servers, and hosting more and more web applications within them. This means massive traffic coming into and leaving data centers.

Yes, this traffic needs to be switched and routed, but this is actually the easiest task. What’s much harder is processing this traffic at the network for security, acceleration, application networking, etc. This processing usually takes place at the network edge, but additional layers are also migrating into the data center network itself for network segmentation of specific application services.

Think of it this way: There is a smart-fat network edge that feeds multiple smart-thin network segments.

The smart-fat network edge aggregates lots of network device functionality into a physical device, cluster of devices, or virtual control plane. This is the domain of vendors like Cisco, Crossbeam Systems, and Juniper Networks for security and companies like A10 Networks, Citrix (NetScaler), and F5 Networks for application delivery. These companies will continue to add functionality to their systems (for example, XML processing, application authentication/authorization, business logic, etc.) to do more packet and content processing over time. It wouldn’t surprise me at all if security vendors added application delivery features and the app delivery crowd added more security.

Once the smart-fat network edge has processed all traffic, packets and content will be handled further within the data center (i.e., the smart-thin network edge). This will most likely be done using virtual appliances like the Citrix VPX. Why? Virtual appliances can be provisioned on the fly with canned policies or customized for specific workloads. They can also follow applications that migrate around internal data centers or move to public clouds.

A few other thoughts here:

  1. I’m sure we’ll see new startups focused on smart-thin virtual appliances but I don’t expect them to succeed. Existing vendors will simply deliver virtual appliance form factors and dominate this business.
  2. Legacy vendors have the best opportunity here as many users will want common command-and-control for the smart-fat edge and the smart-thin edge. Nevertheless, this further network segmentation does provide an opportunity for aggressive vendors to usurp customer accounts and market share.
  3. Smart-fat edge systems are delivered as physical devices today but this won’t necessarily be true in the future. I can see virtual appliances with horizontal scalability running on Dell, HP, or IBM blade servers.

The smart-fat, smart-thin architecture is already playing out in cloud computing and wireless carrier networks today and I expect it to become mainstream in the enterprise segment over the next 24 months. The technology is ready today but many users have no idea how to implement this type of architecture or capitalize on its benefits. Vendors who can guide users along with knowledge transfer, best practices, and reference architectures are most likely to reap the financial rewards.

Get Ready for Multiple Virtualization Platforms

Tuesday, October 26th, 2010

My colleague Mark Bowker and I are at a Virtualization, Cloud Computing, and Green IT conference in Washington DC this week. In one of the panels we hosted, an IT executive from a cabinet-level agency mentioned that the agency was qualifying Microsoft Hyper-V even though it already has an enterprise license in place with VMware. When asked why the agency was doing this, he responded, “we are a Windows shop and have a great relationship with Microsoft. VMware has been great but we simply believe that the world is moving to heterogeneous virtualization platforms and we want to be ready for this.”

This IT executive is not alone. In a recent ESG Research study, 55% of the organizations surveyed said that their primary virtualization solution is VMware (VMware Server, ESX, ESXi, etc.). This relationship with VMware doesn’t preclude them from using other hypervisors, however. In fact, 34% of survey respondents are using two virtualization solutions and 36% are using three or more. This was a survey of 463 North American-based IT professionals working at organizations with more than 500 employees.

My take-aways are as follows:

  1. Users should plan for multiple virtualization platforms. Standardization is great but it is likely that some applications and workloads will work best on one hypervisor versus another. This will demand training and management of disparate environments so standard processes and tools will be crucial.
  2. Training is key. Vendors need to realize that users need help with training and skills development before they buy the next virtualization widget.
  3. Vendors should develop broad partnering strategies. Two years ago, dedicating all virtualization resources to VMware was probably a good bet but this is no longer the case. Need proof? Cisco recently struck up a relationship with Citrix even though it has lots of resources invested in VMware and its “three amigos” relationship that also includes EMC.

Yeah, I know, everyone would like one standard IT solution to meet all their needs. It hasn’t happened in the past and it won’t happen with virtualization either. The sooner that IT professionals and the industry recognize this the better.

Networking and Virtualization Vendors Should Join the Open vSwitch Effort

Thursday, September 16th, 2010

My colleague Mark Bowker and I are knee-deep in new research data on server virtualization. Within this mountain of data, we are discovering some existing and impending networking issues related to network switching.

Today, many server virtualization projects are led by server administrators, with little or no participation from the networking team. As you may imagine, this means that the server team configures all virtual switches to the best of its ability, without considering how physical switches are already configured. As things scale, the server team realizes the error of its ways and quickly calls the networking group in to help out. This is where things really break down. Before doing anything, the networking folks have to learn the virtualization platform, understand how the physical and virtual networks should interoperate, and then roll up their sleeves and start gluing everything together.

This is a painful learning curve but I believe that future issues will be far more difficult. As organizations increase the number of VMs deployed, networking configurations get more difficult — especially when VMs move around. Users regularly complain about the number of VLANs they have to configure, provision, and manage. This situation will grow worse and worse as VMs become the standard unit of IT.

In my mind, it makes no sense for virtualization vendors like Citrix, Microsoft, Oracle, and VMware to recreate the richness of physical L2 switches in the virtual world. So what can be done? Well one alternative is to eliminate virtual switches entirely and do all switching at the physical layer via the Virtual Ethernet Port Aggregator (VEPA) standard being developed in the IEEE.
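For readers who want to see what VEPA-style forwarding looks like in practice, a hairpin (reflective relay) model is already available in the Linux macvtap driver, which hands a VM's traffic straight to the adjacent physical switch instead of switching it in software. A minimal sketch, assuming a Linux host with iproute2 and a physical NIC named eth0 (the interface names are illustrative):

```shell
# Create a macvtap interface in VEPA mode on top of the physical NIC.
# In VEPA mode, all VM traffic -- even VM-to-VM on the same host -- is
# sent out eth0 to the external switch, which must support hairpin
# (reflective relay) forwarding to turn it back around.
ip link add link eth0 name macvtap0 type macvtap mode vepa
ip link set macvtap0 up

# Inspect the interface; a hypervisor such as KVM/QEMU would then attach
# a VM's virtual NIC to the /dev/tapN device backing macvtap0.
ip -d link show macvtap0
```

The appeal of this model is that the physical switch's existing ACLs, QoS, and monitoring apply to VM-to-VM traffic; the cost is an extra round trip to the top-of-rack switch for flows that never needed to leave the host.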

I believe this will happen but in the meantime there is another alternative being discussed this week at the Citrix Industry Analyst Event — Open vSwitch. As described on the Open vSwitch web site, “Open vSwitch is a multilayer virtual switch licensed under the open source Apache 2.0 license. The goal is to build a production quality switch for VM environments that supports standard management interfaces (e.g., NetFlow, RSPAN, ERSPAN, CLI), and is open to programmatic extension and control.”
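To make the “standard management interfaces” point concrete, here is roughly what day-to-day operation looks like with Open vSwitch’s ovs-vsctl tool. This is a hedged sketch assuming an OVS installation with a physical uplink eth0 and a VM interface vnet0; the names and addresses are illustrative:

```shell
# Create a virtual bridge and attach the physical uplink plus a VM port.
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0
ovs-vsctl add-port br0 vnet0

# Put the VM port on VLAN 10 -- the kind of per-VM VLAN provisioning
# that is painful to keep in sync by hand as VMs migrate.
ovs-vsctl set port vnet0 tag=10

# Export flow records to a NetFlow collector (address is illustrative).
ovs-vsctl -- set Bridge br0 netflow=@nf \
  -- --id=@nf create NetFlow targets=\"192.0.2.10:2055\"

# Hand forwarding decisions to an external OpenFlow controller, which is
# the "programmatic extension and control" piece of the pitch.
ovs-vsctl set-controller br0 tcp:192.0.2.20:6633
```

Because every virtual port is addressable through one database-backed CLI, the same commands work regardless of which hypervisor hosts the VM, which is exactly the integration burden the open effort would lift from switch vendors.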

Here’s why this makes sense to me:

  1. Given a pool of collective resources, a collaborative open effort would provide more advanced switching functionality sooner rather than later.
  2. An open alternative would expose APIs that could be easily integrated with leading switch management tools from Brocade, Cisco, Extreme, Force 10, HP, Juniper, etc.
  3. Vendors would not have to integrate with each hypervisor independently. This would improve code quality and again speed time-to-market.

At the very least, Citrix, Microsoft, and Oracle should back this as a way to push back on VMware’s market share lead.

I’ve been around long enough to know the strengths and limitations of open source and standards but I think that with the right support, this one could have legs. I know that vendors have their own businesses to look after but isn’t another end goal to create products that the market wants? I think Open vSwitch would fit this bill.

Heterogeneous Server Virtualization

Wednesday, June 2nd, 2010

Hats off to VMware for its leadership in server virtualization. That said, I am hearing more and more stories about heterogeneous server virtualization in large organizations.

Does this mean that VMware is faltering? No. As virtualization has gone from cutting edge to mainstream, however, IT organizations are gaining virtualization experience, testing other platforms, and finding good fits for KVM, Microsoft, Oracle, and XenServer alongside VMware.

At the beginning of 2010, ESG Research asked 345 mid-market (i.e., less than 1,000 employees) and enterprise (i.e., more than 1,000 employees) firms which server virtualization solutions were currently deployed at their organizations. The data supports the scuttlebutt I heard on my recent travels:

  • VMware Server: 38%
  • Microsoft Virtual Server: 36%
  • VMware ESX: 32%
  • Citrix XenServer: 30%
  • Oracle VM: 19%
  • VMware ESXi: 18%
  • Microsoft Hyper-V: 17%
  • Sun xVM Server: 10%
  • KVM: 9%

Based on anecdotal evidence, I don’t think this is a phase; it looks like multiple server virtualization platforms in the enterprise are the future. What does this mean?

  1. Server virtualization will get more complex. IT will need specialization on multiple platforms.
  2. Vendors need to pick multiple dance partners. VMware is clearly a safe bet but IT infrastructure and tools vendors need to think beyond VMware alone. Microsoft and Citrix will likely recruit partners with an endpoint focus while KVM and Oracle will be more of a data center play.
  3. Services opportunities abound. IT complexity and skills deficits are on the horizon. Services vendors that can bridge these gaps will prosper in 2011 and 2012.

The Branch Office Network Form Factor Debate

Thursday, May 13th, 2010

There is an interesting debate happening in the networking industry that centers around branch office equipment. ESG Research points out that branch office servers and applications are moving to the data center and this move is driving more investment in WAN optimization technologies from Blue Coat, Cisco, Citrix, and Riverbed. At the same time, cheap bandwidth and cloud services are changing the network infrastructure. Large organizations are moving away from back-hauling all traffic through the data center and setting up a real network perimeter at the branches themselves.

While networking changes continue, there is also another trend happening. Lots of legacy networking and IT functionality (WAN optimization, firewall, IDS/IPS, file servers, print servers, domain controllers, etc.) is now available as a virtual machine. A single device can now take on multiple functions.

The debate centers on the “hybridization” of networking and server functionality at the branch office. Should branches deploy edge networking devices packaged with Intel processors for running VMs, or should they simply implement Intel blade servers from Dell, HP, and IBM at the network perimeter and then use VMs for all networking and server needs?

The answer to this question could really impact the industry. For example, Fortinet is the king of UTM devices for branch offices but what if these appliances are suddenly replaced with standard Intel servers and virtual appliance software? Obviously this wouldn’t be good news for Fortinet.

For the most part, leading vendors are not pushing one model or another. Cisco WAAS equipment comes packaged with a Windows server while the Riverbed Service Platform (RSP) can run a Check Point firewall, a Websense gateway, an Infoblox DNS/DHCP server, or basic Windows services.

So which model wins? Both (yeah, I know it’s a cop-out, but I truly believe this). It’s likely that smaller branches will go with Intel servers and VMs while larger remote offices will stick with networking gear. Large organizations will also lean toward their favorite vendors. Cisco’s networking dominance means it wins either way, while Riverbed will likely do well in its extensive installed base and succeed at the expense of second-tier WAN optimization players like Silver Peak.

In truth, there is no right or wrong approach to the branch office network, but the vendor debate ought to be very entertaining.

Final thoughts on Interop — and Las Vegas

Friday, April 30th, 2010

Okay, I’m back in sunny Boston after four days at Interop. I’m now convinced that no normal person should be subjected to Las Vegas for more than this amount of time. Everyone I ran into yesterday was looking forward to leaving. I flew out at 2:15 and found that people with later flights were jealous. This says it all.

Enough about the fake city, however. As for Interop, a lot of people thought that the 2009 downer indicated that Interop might not be around much longer. In less than a year, the buzz has returned on the strength of improved financials, more market demand, and cloud computing. Here are my final thoughts on the show:

  1. I was certainly entertained by the Xirrus booth that featured a real boxing ring with live sparring. That said, Xirrus positioned this as the Wi-Fi battle between Arrays and APs. Hasn’t this debate been settled? Personally, I think that Wi-Fi must evolve into a smart mesh that seamlessly integrates into the wired network. Aerohive seems especially innovative in this regard.
  2. I was impressed last year by 3Com’s product line and bravado but wondered if it really had the resources to impact Cisco. Now that 3Com is part of HP, those concerns go away. At the very least, Cisco margins will be impacted every time HP is in a deal, but HP’s product line and resources may represent the first real Cisco challenger since Wellfleet Networks. HP’s problem? Marketing. When Cisco leads with its compelling borderless network vision, HP can’t simply respond with price/performance. What’s HP’s vision of the network in a cloud computing-based world? To challenge Cisco, it needs its own vision and thought leadership — areas where HP hasn’t been strong in the past.
  3. The WAN optimization market continues to flourish with Blue Coat, Cisco, and Riverbed leading the pack. To me, the next battle royale here is desktop virtualization. Which vendor will offer the best support? Too early to tell, but this certainly provides a new opportunity for Citrix and its Branch Repeater product.
  4. It seems like the application acceleration market has become a two-horse race between F5 and Citrix/NetScaler. I was impressed by some new feature/functionality from Brocade and also like scrappy startup A10 Networks, which plays the “hot box” role in this market. Of course, Cisco plays in this market as well. I need to ask my friends in San Jose for an update as the competition is aggressive and confident.
  5. Yes, Juniper wasn’t at Interop. Should we read anything into this as some people have suggested? No. Just look at Juniper’s financial results and you’ll see that the company is doing quite well. With all due respect to the folks who run Interop, it is no longer a requirement to attend industry trade shows.

One final thought. I don’t think anyone really knows what the network will look like in a world with cloud computing, advanced mobile devices, and ubiquitous wireless broadband. In my opinion, this means that the network business is up for grabs in a way it hasn’t been in the past. This should make next year’s Interop just as exciting — I just wish it were at the Moscone Center.

PS: Thanks to all the folks who provided feedback on my comments about Arista Networks. Clearly, I owe Jayshree a call.

Interop 2010: What to Expect Beyond Cloud Computing Rhetoric

Tuesday, April 20th, 2010

Like the RSA Security conference in March, Interop will likely offer non-stop hyperbole about all things related to cloud computing. Nevertheless, I expect a lot of additional and very useful dialogue around the following topics:

  1. 40Gb Ethernet. While 10GbE is still ramping up, expect vendors to turn up the heat on 40GbE. Why? High-density data centers running thousands of virtual machines will need 40GbE sooner rather than later. Aside from basic connectivity, 40GbE could also be a tipping point where Ethernet replaces Fibre Channel and Infiniband. On a completely separate note, more video traffic will drive the need for 40GbE network backbones.
  2. 802.11n. The Trapeze acquisition a few years ago was a bit of a downer for the Wi-Fi crowd, but 802.11n is the networking equivalent of that old ditty, “Happy Days Are Here Again.” Lots of organizations are ripping out old b/g networks, especially those in health care, education, and state/local government. New 802.11n equipment is also the first true wireless access layer, so it is likely to replace a lot of wired access switches. On the vendor front, the Meru IPO bolstered Wi-Fi visibility while Aruba and Cisco continue to chug along. I also like Aerohive technology, which seems like a very good fit as 802.11n networks scale.
  3. Network security. ESG Research indicates that network security is a high priority for both networking and security groups. This is due to several factors including high-bandwidth network security requirements, firewall consolidation, server virtualization, and new types of threats. Look for lots of talk about Layer 7 visibility, high-end UTM boxes, and virtual capabilities.
  4. Server virtualization. Networks must be aware of virtual machines to enforce network security and segmentation policies. Look for more one-off relationships between networking vendors, Citrix, VMware, and Microsoft. I also anticipate more support for the VEPA standard.
  5. WAN optimization. Think of this as the other side of server consolidation. While market saturation limits new business, there is a lot of WAN optimization consolidation going on. Good news for market leaders Blue Coat, Cisco, and Riverbed.
  6. Questions around HP/3Com. These discussions fit into the Interop scuttlebutt category. How will this merger work? Will there be personnel changes? Can HP really challenge Cisco in the enterprise? All of these questions and many more will be debated ad nauseam next week.

See you in Vegas.

People May Be the Weakest Link in the Server Virtualization Chain

Tuesday, February 9th, 2010

Last week, I participated in a webinar on virtualization along with Extreme Networks and Microsoft. During the session, 113 audience members were asked two polling questions. Here are the questions and the results:

1. In your opinion, which of the following factors is holding your organization back from using server virtualization more prominently throughout the enterprise? (Choose all that apply)

  • Lack of virtualization skills/knowledge within IT (42%)
  • Security / regulatory concerns (10%)
  • Organizational complexity – separate groups manage different elements (32%)
  • Software licensing/support from ISVs (10%)

2. As you move forward with virtualization, which of the following IT groups need to become more educated and involved in the project? (Choose all that apply)

  • Security / Compliance group (45%)
  • Server Group (52%)
  • Networking group (72%)
  • Application developers (31%)
  • Storage Group (50%)

ESG Research indicates that server virtualization is one of IT’s top priorities and it will generate a lot of IT spending in 2010. Ironically, it seems that most of this spending will go to hypervisors, virtualization tools, servers, and storage rather than to training and IT collaboration.

In my humble opinion, server virtualization technology is at a tipping point. Yes, we’ve squeezed a lot of value out of it to consolidate Windows server workloads, but future “dynamic virtual infrastructure” will require a lot more thought around IT processes and architecture. This means a lot of collective IT thought and preparation by virtualization-savvy IT folks.

If we are going to reach this plateau, the ESG and webinar data indicate that we had better pay attention to people and process problems — not just technology problems. Without this, the whole virtualization gravy train could slow down or come to an abrupt stop.

ESG Research Points to Lots of Windows 7 Migration in 2010

Friday, January 29th, 2010

For the last few years, I used Windows Vista on my laptop PC and felt like it was pretty good. I guess I was part of a small minority – most organizations eschewed Vista and stuck with tried-and-true XP.

Now that Windows 7 is out, it appears the tide has turned. According to ESG Research, 44% of SMEs (i.e., organizations with fewer than 1,000 employees) and enterprises (i.e., organizations with more than 1,000 employees) will conduct significant upgrades from older versions of Windows to Windows 7 in 2010. By the end of 2011, 60% of large and small organizations will have conducted significant upgrades to Windows 7. For the purposes of this research, ESG defined a “significant upgrade” as one involving at least 25% of total PCs. That’s a lot of PCs!

These upgrades will take place across the board: small and large companies, vertical industries, etc.

Regardless of what you thought about Windows Vista, it is clearly time to move on. ESG believes that the impending massive migration to Windows 7 means:

  1. A lot of user training. Companies must budget for training and prepare users and business managers for this requirement. Smart companies will refresh user knowledge about security while they have the opportunity. Services and training companies should be very busy.
  2. Increased utilization of the Windows infrastructure. Windows 7 will open the door to lots of Windows server functionality. Smart CIOs will explore options like Network Access Protection (NAP), server and domain isolation, server core, Active Directory group policies, etc.
  3. A new opportunity for virtualization technology. Rather than test and roll out applications for Windows 7, large organizations may choose application virtualization technologies from Citrix, Microsoft, or VMware instead. The Windows 7 upgrade could also be used as an opportunity to make two changes at once (i.e., Windows 7 and desktop virtualization) or to create a few solid corporate desktop images for future virtualization plans.

XP was a great version of Windows but it was first released in 2001 so many organizations are moving on. IT managers and technology vendors should prepare for this inevitability by viewing Windows 7 as an invitation to train users, bolster security, take advantage of Windows functionality, and sell complementary products and services.

F5 Networks Financials Tell a Bigger Market Story

Friday, January 22nd, 2010

This week, F5 Networks announced delightful Q1 financial results to Wall Street. Company revenue was up 11%, topping Wall Street estimates; the company hired nearly 100 new employees; and F5 now has a market cap in excess of $4 billion. Share price actually jumped 3% after the announcement.

These results say a lot about F5 Networks, a leader in the Application Delivery Controller (ADC) market staffed by a bunch of smart people who are also pretty fun to hang out with. In my opinion, however, these results suggest a few other market trends:

  1. Investments in data center consolidation, SOA, and web applications remain significant. F5’s core ADC sales accounted for 93% of its total revenue. Yes, BIG-IP/TMOS/VIPRION is F5’s strongest product set, but the numbers also suggest that purchasing in the Internet data center continues to be very strong.
  2. Big web applications are becoming ubiquitous. Enterprises are now building the type of scalable web applications that used to be the exclusive domain of folks like Amazon, eBay, Google, and Microsoft.
  3. ADCs are part of the web application infrastructure. It used to be that ADCs were thought of as load balancers in the networking domain, but no longer. With features like iRules, protocol acceleration, caching, and security, ADCs are being used to reduce business risk, improve the user experience, and enhance application business logic.
  4. ADCs and WAN Optimization are coming together. As Internet-facing applications span multiple data centers, there is a whole lot of application-layer data center-to-data center activity going on (i.e., global load balancing, file transfers, VMotion, etc.). These requirements unify ADC and WAN optimization functionality, which plays well for vendors like F5 with supercomputing-like processing capability at the data center network edge.
  5. IT needs new networking/application specialists. F5 financial results and the whole evolution of ADC functionality suggest the need for a new IT skill set. I believe there is a growing requirement for hybrid IT specialists who understand both networking and application requirements. These people will become architects and application performance gurus — and make a ton of dough. F5 should work with application vendors like Microsoft or Oracle to create a certification program in this area.
  6. Whither Cisco? I know I’ll get some grief from San Jose, but it seems like Cisco continues to lose focus on the ADC market as it has in security. I know others like A10 Networks and Citrix are doing well in the ADC market as well, so I don’t get what’s going on. My guess is that Cisco has a next-generation product in the works, but until this hits the streets, it seems like Cisco is losing more than it is winning.

Congratulations to F5 for its innovation, focus, and sales execution. Others should take notice: new Internet data centers are clearly where the action is.

© 2010 Enterprise Strategy Group, Milford, MA 01757