Enterprise Strategy Group | Getting to the bigger truth.™

Posts Tagged ‘HP’

The Smart-Fat and Smart-Thin Edge of the Network

Wednesday, November 17th, 2010

Take a look at ESG Research and you’ll see a number of simultaneous trends. Enterprises are consolidating data centers, packing them full of virtual servers, and hosting more and more web applications within them. This means massive traffic coming into and leaving data centers.

Yes, this traffic needs to be switched and routed, but this is actually the easiest task. What’s much harder is processing this traffic at the network for security, acceleration, application networking, etc. This processing usually takes place at the network edge, but additional layers are also migrating into the data center network itself for network segmentation of specific application services.

Think of it this way: There is a smart-fat network edge that feeds multiple smart-thin network segments.

The smart-fat network edge aggregates lots of network device functionality into a physical device, cluster of devices, or virtual control plane. This is the domain of vendors like Cisco, Crossbeam Systems, and Juniper Networks for security and companies like A10 Networks, Citrix (Netscaler), and F5 Networks for application delivery. These companies will continue to add functionality to their systems (for example, XML processing, application authentication/authorization, business logic, etc.) to do more packet and content processing over time. It wouldn’t surprise me at all if security vendors added application delivery features and the app delivery crowd added more security.

Once the smart-fat network edge treats all traffic, packets and content will be processed further within the data center (i.e., smart-thin network edge). This will most likely be done using virtual appliances like the Citrix VPX. Why? Virtual appliances can be provisioned on the fly with canned policies or customized for specific workloads. They can also follow applications that migrate around internal data centers or move to public clouds.

A few other thoughts here:

  1. I’m sure we’ll see new startups focused on smart-thin virtual appliances but I don’t expect them to succeed. Existing vendors will simply deliver virtual appliance form factors and dominate this business.
  2. Legacy vendors have the best opportunity here as many users will want common command-and-control for the smart-fat edge and the smart-thin edge. Nevertheless, this further network segmentation does provide an opportunity for aggressive vendors to usurp customer accounts and market share.
  3. Smart-fat edge systems are delivered as physical devices today but this isn’t necessarily true for the future. I can see virtual appliances with horizontal scalability running on HP, IBM, or other vendors’ blade servers in the future.

The smart-fat, smart-thin architecture is already playing out in cloud computing and wireless carrier networks today and I expect it to become mainstream in the enterprise segment over the next 24 months. The technology is ready today but many users have no idea how to implement this type of architecture or capitalize on its benefits. Vendors who can guide users along with knowledge transfer, best practices, and reference architectures are most likely to reap the financial rewards.

The CIA and the Encrypted Enterprise

Friday, October 29th, 2010

The international horse show wasn’t the only event in Washington DC this week; I participated in the Virtualization, Cloud, and Green Computing event in our nation’s capital. One of the guest speakers was Ira “Gus” Hunt, CTO at the CIA. If you haven’t seen Gus speak, you are missing something. He is very strong on the technical side and extremely energetic and entertaining.

Gus focused on cloud computing activities at the CIA (I’ll blog about this soon), but I was intrigued by one of his slide bullets that referred to something he called the “encrypted enterprise.” From the CIA’s perspective, all data is sensitive whether it resides on an enterprise disk system, lives in a database column, crosses an Ethernet switch, or gets backed up on a USB drive. Because of this, Hunt wants to create an “encrypted enterprise” where data is encrypted at all layers of the technology stack.

The CIA is ahead here, but ESG hears a similar goal from lots of other highly regulated firms. When will this happen? Unfortunately, it may take a few years to weave this together as there are several hurdles to overcome including:

  1. An encryption architecture. Before organizations encrypt all their data, they have to understand where the data needs to be decrypted. For example, remote office data could be encrypted when it is sent to the corporate data center, but it needs to be decrypted before it can be processed for large batch jobs like daily sales and inventory updates. There is a balancing act between data security and business processes here, demanding a distributed, intelligent encryption architecture that maps encryption/decryption with business and IT workflow.
  2. Key management. Most encryption products come with their own integrated key management system. Many of these aren’t very sophisticated and an enterprise with hundreds of key management systems can’t scale. What’s needed is a distributed secure key management service across the network. Think of something that looks and behaves like DNS with security built in from the start. The Key Management Interoperability Protocol (KMIP) effort may get us there in the future as it is supported by a who’s who of technology vendors including EMC/RSA, HP, IBM, and Symantec, but it is just getting started.
  3. Technical experience. How should I encrypt my sensitive Oracle database? I could use Oracle tools to encrypt database columns. I could encrypt an entire file system using Windows EFS or tools from vendors like PGP. I could buy an encrypting disk array from IBM, or I could combine EMC PowerPath software with Emulex encrypting Host-based Adapters (HBAs). Which is best? It depends on performance needs, hardware resources, and financial concerns like asset amortization. Since there is no “one-size-fits-all” solution here, the entire enterprise market is learning on the fly.
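The remote-office workflow in point 1 can be sketched in a few lines. This is an illustrative sketch only, not a recommended design: it uses the third-party Python cryptography package, and in a real deployment the key would come from a central key management service rather than being generated in place.

```python
# Sketch of the remote-office workflow: data is encrypted before it
# leaves the branch, then decrypted at the data center before the
# nightly batch job can process it. Field values are invented.
from cryptography.fernet import Fernet

# In practice this key would come from a central key manager.
key = Fernet.generate_key()
branch_cipher = Fernet(key)

# Remote office: encrypt the daily sales record before transmission.
plaintext = b"store=42,sku=1001,qty=7"
ciphertext = branch_cipher.encrypt(plaintext)

# Data center: the batch job cannot run on ciphertext, so it must
# decrypt first -- exactly the workflow/security trade-off above.
datacenter_cipher = Fernet(key)
recovered = datacenter_cipher.decrypt(ciphertext)
assert recovered == plaintext
```

The point of the sketch is the last step: any spot in the workflow that needs plaintext is a spot the encryption architecture has to account for.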

A lot of the technical limitations are being worked on at this point, so the biggest impediment may be based upon people and not technology. We simply don’t have a lot of experience here, so we need to proceed with research, thought, and caution. To get to Gus Hunt’s vision of the “encrypted enterprise,” we need things like reference architectures, best practices, and maturity models as soon as possible. Look for service providers like CSC, HP, IBM, and SAIC to offer “encrypted enterprise” services within the next 24 months.

Cisco’s “Kitchen Sink” Product Announcements

Thursday, October 7th, 2010

Did you see the series of announcements Cisco made this week? It was pretty impressive. This is the traditional season in which Cisco announces products and new initiatives, but this week’s announcements were very extensive — new switches, routers, security devices, wireless access points, WAN optimization equipment, etc.

In its marketing mastery, Cisco related all of these announcements to two core strategic initiatives, data center virtualization and borderless networks. In other words, Cisco is talking about the way IT applications and services are hosted (central data centers, virtualization, cloud), and the way they are accessed (wired and wireless networks, security, access control).

Cisco is clearly demonstrating that it plays in a different space than it used to. It’s all about industries, business processes, and enterprise IT now; the network simply glues all the pieces together. So why all these announcements at once? Doesn’t this water down the individual piece parts? I don’t think so. Cisco is actually doubling down on integration across its products with an overall strategy aimed at:

  1. Competing on all fronts. In one day, Cisco delivered a response to a spectrum of IT vendors like Aruba, Check Point, Juniper Networks, and Riverbed. Cisco may not have the “best-of-breed” product in each category but it is reinforcing the message that the whole is greater than the sum of its parts.
  2. Out-executing the big competition. Cisco is betting that it can deliver technology integration and enterprise IT initiatives faster than its primary competitors — HP and IBM. There is some precedent here: HP and IBM business units haven’t always worked together well, so Cisco believes it can capitalize on its organizational structure and market momentum.

Now I realize that the “integrated stack” story has limited value today since customers have a history of buying servers from HP, wired networks from Cisco, Wi-Fi from Aruba, storage from yet another vendor, etc. That said, IT is radically changing. For example, ESG Research indicates that server virtualization is driving a lot more cooperation across disparate functional IT groups. As these organizations come together, it’s only natural that they will look for common solutions from fewer vendors.

In the meantime, service providers and financially strapped organizations (e.g., state/local government, higher education, real estate, etc.) will look for IT savings anywhere they can, even if it means moving away from some vendors with relatively stronger point products in the process.

Cisco also has a services opportunity in that it gets to play services Switzerland and partner with companies like Accenture, CSC, and Unisys in competition with IBM Global Services and HP/EDS.

Lots of people knock Cisco products and point to better, faster, cheaper alternatives. Maybe, but the overall Cisco story seems pretty strong to me. As of Tuesday, Cisco has a bunch of new products that support its corporate strategy and make its story even stronger.

IBM Buys Blade Networks — An Obvious Marriage For Server Virtualization and Dynamic Data Centers

Monday, September 27th, 2010

Last week, 20-somethings on Wall Street were buzzing about self-serving rumors that IBM would buy Brocade. Well that didn’t happen (and I don’t think it ever will), but IBM did make a networking acquisition when it scooped up Blade Networks today. Terms of this deal were not disclosed.

Why Blade and not Brocade? Several reasons:

  1. IBM anticipates increasingly dense blade server sales. ESG Research indicates a general trend from rack-mounted to blade servers. Why? Today, an average server hosts between five and ten VMs. As this ratio substantially increases over the next 2-3 years, IT managers will need blade server flexibility and manageability to cope with scale and complexity. Blade Networks provides another piece for tight integration between blades, virtual switches, and physical switches.
  2. Blade Networks runs JUNOS. I don’t think IBM cares about Blade’s top-of-rack switches. Rather than own this piece, it can now plug its dense blade servers into Juniper data center top-of-rack, aggregation, and core switches. Lots of form factors and the chance to leverage Juniper’s deep commitment toward flattening the network with its 3-2-1 initiative and the ultra-secret “Project Stratus.”
  3. The price was right. With 3Com and ProCurve in tow, HP has been pretty public about its intention to push Blade Networks aside. This really left IBM as the only logical place for Blade Network investors to turn. My guess is that the acquisition price was fair, but not overly generous.

IBM is also probably anticipating a technology change in the HPC market as 40 and 100 gigabit Ethernet replaces InfiniBand. Once again, Blade Networks will provide a turnkey blade solution for scientific computing and smart planet analytics. Blade also provides port and device consolidation for the burgeoning trend toward Ethernet-based storage.

I really don’t think that IBM wants a stand-alone networking business again, so an acquisition of Brocade, Extreme, Force 10, or even Juniper seems unlikely. With Blade, IBM can deliver a data center unit, complete with memory, processors, and networking/storage IO, in a tightly integrated can. My guess is that IBM will sell a ton of these.


IBM To Buy Brocade And Other Stupid M&A Rumors

Thursday, September 23rd, 2010

I was at Oracle Open World yesterday when I heard the rumor that IBM was going to buy Brocade. At the time, I was meeting with a group that had collective industry experience of more than 100 years. We all laughed this off as hearsay.

The fact is that IBM already OEMs equipment from Brocade (as well as Juniper) so it is not lacking in engineering experience or alternatives. Does IBM want to start a stand-alone networking business? Does it want to OEM Fibre Channel switches to HP and other server vendors? Does it want to bet on Brocade/Foundry Ethernet switches against the rest of the industry? No, no, and no.

This is not the only silly rumor we’ve heard lately. Last week, Microsoft was going to buy Symantec. Yeah sure, there are no antitrust implications there. And does Microsoft really want to buy a company that has about a dozen products that are redundant to its own?

How about Oracle buying HP? Larry may be spinning this up for fun, but it’s simply crazy talk. Oracle, a software company focused on business applications and industry solutions, wants to get into the PC and printer businesses? Yeah, I know, “What about servers and storage?” To which I answer, “What about Sun?”

These rumors are circulating because of the recent uptick in M&A activity, but my strong bet is that nothing remotely similar will happen. The rumors must then be coming from one of two sources:

  1. Wall Streeters executing a “pump and dump” play. Given the activity in Brocade’s stock yesterday, this is likely. I hope the SEC is all over this unethical practice.
  2. Bloggers and Tweeters trying to “stir the pot.” Maybe the Internet has become the great equalizer between intelligent discourse and ignorance.

Not all mergers make sense, but there tends to be some business logic inherent in most transactions. Let’s try and remember that before spreading rumors for personal or unethical gain.

Networking and Virtualization Vendors Should Join the Open vSwitch Effort

Thursday, September 16th, 2010

My colleague Mark Bowker and I are knee-deep in new research data on server virtualization. Within this mountain of data, we are discovering some existing and impending networking issues related to network switching.

Today, many server virtualization projects are led by server administrators, with little or no participation from the networking team. As you may imagine, this means that the server team configures all virtual switches to the best of its ability, without considering how physical switches are already configured. As things scale, the server team realizes the error of its ways and quickly calls the networking group in to help out. This is where things really break down. Before doing anything, the networking folks have to learn the virtualization platform, understand how the physical and virtual networks should interoperate, and then roll up their sleeves and start gluing everything together.

This is a painful learning curve but I believe that future issues will be far more difficult. As organizations increase the number of VMs deployed, networking configurations get more difficult — especially when VMs move around. Users regularly complain about the number of VLANs they have to configure, provision, and manage. This situation will grow worse and worse as VMs become the standard unit of IT.
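The VLAN pain scales multiplicatively. As a toy model (the host and VLAN counts here are invented for illustration, not from ESG data), if any VM may migrate to any host, every application VLAN has to be trunked to every host and its uplink switch ports:

```python
# Toy model of VLAN sprawl: every VLAN a VM belongs to must be
# provisioned on every host that might receive that VM after a
# live migration. All names and numbers are illustrative.
from itertools import product

hosts = [f"host{i}" for i in range(1, 5)]   # 4 hypervisors
vlans = [100, 110, 120, 130, 140, 150]      # 6 application VLANs

# If any VM can land on any host, each VLAN must be trunked to
# every host: provisioning work grows as hosts x VLANs.
trunk_entries = list(product(hosts, vlans))
print(len(trunk_entries))  # 24 switch-port/VLAN entries to configure
```

Double the hosts or the VLANs and the configuration burden doubles with them, which is why this gets worse as VMs become the standard unit of IT.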

In my mind, it makes no sense for virtualization vendors like Citrix, Microsoft, Oracle, and VMware to recreate the richness of physical L2 switches in the virtual world. So what can be done? Well one alternative is to eliminate virtual switches entirely and do all switching at the physical layer via the Virtual Ethernet Port Aggregator (VEPA) standard being developed in the IEEE.

I believe this will happen, but in the meantime there is another alternative being discussed this week at the Citrix Industry Analyst Event — Open vSwitch. As the project describes it, “Open vSwitch is a multilayer virtual switch licensed under the open source Apache 2.0 license. The goal is to build a production quality switch for VM environments that supports standard management interfaces (e.g., NetFlow, RSPAN, ERSPAN, CLI), and is open to programmatic extension and control.”
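For readers who haven’t used it, Open vSwitch’s management plane is driven by the ovs-vsctl CLI. A minimal sketch follows; the interface name and collector address are examples, and the commands assume Open vSwitch is installed on the host:

```shell
# Create a virtual bridge and attach a VM-facing port.
ovs-vsctl add-br br0
ovs-vsctl add-port br0 tap0

# One of the standard management interfaces mentioned above:
# export NetFlow records from the virtual switch to a collector.
ovs-vsctl -- set Bridge br0 netflow=@nf \
          -- --id=@nf create NetFlow targets=\"192.168.1.10:5566\"
```

This is exactly the kind of per-hypervisor plumbing that a networking team can recognize and manage with its existing flow-monitoring tools.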

Here’s why this makes sense to me:

  1. Given a pool of collective resources, a collaborative open effort would provide more advanced switching functionality sooner rather than later.
  2. An open alternative would expose APIs that could be easily integrated with leading switch management tools from Brocade, Cisco, Extreme, Force 10, HP, Juniper, etc.
  3. Vendors would not have to integrate with each hypervisor independently. This would improve code quality and again speed time-to-market.

At the very least, Citrix, Microsoft, and Oracle should back this as a way to push back on VMware’s market share lead.

I’ve been around long enough to know the strengths and limitations of open source and standards but I think that with the right support, this one could have legs. I know that vendors have their own businesses to look after but isn’t another end goal to create products that the market wants? I think Open vSwitch would fit this bill.

The Many Reasons Why IBM/OpenPages Makes Sense

Wednesday, September 15th, 2010

Earlier today, IBM announced its intention to acquire OpenPages, a privately-held software company focused on identifying and managing risk and compliance.

There is obvious value in this deal based upon market interest in risk management alone. In the past ten years we’ve seen the subprime mortgage securities collapse, a rise in global terrorism, and explosive growth in cybercrime. Certainly businesses need better risk management tools to cope with these kinds of events.

With OpenPages, IBM gets to throw its hat further into the risk management ring, but that’s not all. OpenPages provides IBM with strong synergies around other IBM business opportunities like:

  1. Analytics. IBM has invested billions and dedicated thousands of people to create an advanced data analytics capability. Now that this expertise is in place, IBM has an analytics foundation to look at just about any type of data-centric issues. With OpenPages, IBM can combine risk management and analytics products with its existing IT and vertical industry strengths for new product and services sales.
  2. Information security. Over the past 10 years, information security has slowly evolved from tactical threat management to regulatory compliance controls. Given the global cybercrime wave, this is no longer enough — large organizations need real-time IT visibility and solid threat management analytics. IBM can combine OpenPages with the compliance management assets it purchased from Consul as well as its traditional Tivoli security products. If customers need help here, IBM Global and Managed services will be happy to chip in.
  3. “Smarter planet” projects. IBM has always told a great story around “smarter planet” projects like health care networks and next-generation smart grids. True, these visionary initiatives can cut cost and improve efficiency but what happens to the smart grid in the event of a Category 5 hurricane or a cyber supply chain attack that makes 1 million “smart toasters” part of a global botnet? With OpenPages, IBM can now build a “smarter planet” while keeping an eye focused on increasing risks.

Clearly the OpenPages acquisition wasn’t as newsworthy as HP buying ArcSight or Intel buying McAfee, but it certainly aligns with IBM’s strategy, complements existing products and services, and gives IBM sales reps another solution to sell to customers.

HP Buys ArcSight: More Than Just Security Management

Monday, September 13th, 2010

The waiting and guessing games are over; today, HP announced its intent to buy security management software leader ArcSight for $1.5 billion. I didn’t think HP would pull the trigger on another billion+ dollar acquisition before hiring a new CEO, but obviously I was wrong.

ArcSight is a true enterprise software company. As I recall, many of the early ArcSight management team members actually came from HP OpenView. With this model in mind, ArcSight went beyond technology and invested early in top field engineers, security experts, and sales people. This vaulted the company to a leadership position and it never looked back.

For HP, ArcSight fits with its overall focus on IT operations software solutions for Business Technology Optimization. In the future, security information will be one of many inputs that helps CIOs improve IT management and responsiveness. It won’t happen overnight, but think of all sources of IT management data (e.g., log data, SNMP, network flow data, configuration data, etc.) available for query, analysis, and reporting in a common repository. This is what HP has in mind over the long haul.
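That long-haul vision can be sketched with nothing more than a relational store. The schema, device names, and sample records below are invented for illustration, not HP’s actual design:

```python
# Minimal sketch of the "common repository" idea: disparate IT
# management feeds (syslog, SNMP, flow records) normalized into
# one queryable store. All field names and rows are illustrative.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE events (
    source   TEXT,     -- 'syslog', 'snmp', 'netflow', ...
    device   TEXT,
    severity INTEGER,  -- lower = more severe
    message  TEXT)""")

rows = [
    ("syslog",  "fw-edge-1", 3, "denied tcp 10.1.1.5:445"),
    ("netflow", "core-sw-2", 5, "flow 10.1.1.5 -> 10.2.2.9"),
    ("snmp",    "fw-edge-1", 2, "linkDown ifIndex=12"),
]
db.executemany("INSERT INTO events VALUES (?,?,?,?)", rows)

# One query across what used to be three separate tools.
hits = db.execute(
    "SELECT source, message FROM events WHERE device = 'fw-edge-1' "
    "ORDER BY severity").fetchall()
print(hits)
```

The payoff is the single query at the end: one question asked once, across data that previously lived in separate security, network, and systems management silos.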

In the meantime, HP should get plenty of ArcSight bang-for-the-buck over the next 12-24 months by:

  1. Aligning ArcSight and EDS. Security is a top activity within professional services firms. Given ArcSight’s enterprise play, EDS will likely double down on IT risk management and push ArcSight wherever it can.
  2. Using ArcSight as a door opener in the federal market. Yes, HP already sells plenty of products and services to Uncle Sam, but it now has access to a CISO community with deep pockets. With CNCI 2.0 and FISMA 2.0 upon us, this will only increase.
  3. Bringing ArcSight into the virtual data center strategy. According to ESG Research, many enterprises don’t do a good job of coordinating security with server virtualization. This is a big problem given virtualization growth — which is why VMware was so vocal about its recent vShield announcement. HP can and should bring ArcSight into its strategic vision for CIOs with massive data center projects.

In spite of its security services and thought leadership, HP’s name has been notably absent from IT security leadership discussions in the past. ArcSight should change that.

A few other quick thoughts:

  1. In the past, ArcSight was built exclusively on top of Oracle databases. Great in terms of enterprise functionality, but it made the product expensive to buy, expensive to operate, and somewhat weak in terms of queries across large data sets. Look for HP to accelerate plans to decouple ArcSight from Oracle ASAP.
  2. If HP is still in buying mode, the obvious question is, “who is next?” Would anyone be surprised if HP made a move for Check Point, F5, or Riverbed soon?

IBM: An Encryption Key Management Leader

Thursday, September 9th, 2010

While many folks were sunning themselves at the beach this past summer, IBM introduced some pretty important security technology: the Tivoli Key Lifecycle Manager (TKLM). Basically, TKLM is designed to create, manage, secure, and store encryption keys as a service.

What’s so special about this? First, key management is one of those IT security disciplines that will go from relatively esoteric to an enterprise requirement in the next year or so. Why? More and more data is being encrypted each day, so key management is becoming increasingly important. Stolen encryption keys could compromise the confidentiality of sensitive data, while lost encryption keys could transform critical data into meaningless ones and zeros. Pretty soon, all large enterprises will have something resembling TKLM.

As far as IBM TKLM goes, it looks good to me because:

  1. It is one of the first products built on the KMIP standard. The OASIS Key Management Interoperability Protocol (KMIP) is at the heart of TKLM. IBM has already tested TKLM interoperability with key management products from HP, RSA, and SafeNet. This gives distributed organizations the ability to create a federated key management architecture without mandating one vendor technology or another.
  2. IBM took an architectural approach. Yes, TKLM is mainly linked to storage encryption today, but the product is built with other encryption in mind (laptops, file systems, databases, applications, etc.). By offering TKLM support on System z, IBM will gain a beachhead at large organizations that will then build a TKLM architecture from the data center to the distributed network.
  3. TKLM is a comprehensive solution. Many key management systems are built for symmetric key management alone. TKLM, by contrast, is designed to manage symmetric and asymmetric keys as well as digital certificates. Again, enterprises will appreciate this more complete solution.
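The symmetric-plus-asymmetric coverage in point 3 is worth a concrete sketch. Using the third-party Python cryptography package (purely illustrative, not IBM’s implementation), a symmetric data key can be generated and then wrapped under an RSA key pair, which is the kind of escrow a central key lifecycle manager performs:

```python
# Illustrative key-wrapping sketch: a symmetric data key is
# protected under an asymmetric key pair for central storage.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Symmetric data key (what most key managers handle today).
data_key = Fernet.generate_key()

# Asymmetric key pair (the other half of a complete solution).
private_key = rsa.generate_private_key(public_exponent=65537,
                                       key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Wrap the data key so only the private-key holder can recover it.
wrapped = public_key.encrypt(data_key, oaep)
unwrapped = private_key.decrypt(wrapped, oaep)
assert unwrapped == data_key
```

A key manager that can hold both halves of this exchange, plus the certificates that vouch for the public keys, is what separates a comprehensive solution from a symmetric-only one.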

In general, neither key management nor TKLM will get much visibility or industry recognition — key management is just a bit too geeky for most IT folks. Nevertheless, next-generation cloud computing will depend upon ubiquitous trust and data security. IBM gets this more than most. Think of TKLM as part of its security plumbing for a smarter planet.

Friday, September 3rd, 2010

Anyone remotely interested in identity management should definitely download a copy of the National Strategy for Trusted Identities in Cyberspace (NSTIC) document.

At a very high level, the strategy calls for the formation of a standards-based, interoperable identity ecosystem to establish trusted relationships between users, organizations, devices, and network services. The proposed identity ecosystem is composed of three layers: an execution layer for conducting transactions, a management layer for identity policy management and enforcement, and a governance layer that establishes and oversees the rules of the entire ecosystem.

There is far more detail than this blog can cover, but suffice it to say the document is well thought out and pretty comprehensive in its vision. This is exactly the kind of identity future we need to make cloud computing a reality. Kudos to federal cybersecurity coordinator Howard Schmidt and his staff for kicking this off.

I will post my feedback on the official website, but a few of my suggestions are as follows:

  1. Build on top of existing standards. The feds should rally those working on things like Project Higgins, Shibboleth, Liberty, Web Services, Microsoft Geneva, OpenID, etc. Getting all these folks marching in the same direction early will be critical.
  2. Get the enterprise IAM vendors on board. No one has more to gain — or lose — than identity leaders like CA, IBM, Microsoft, Novell, and Oracle. Their participation will help rally the private sector.
  3. Encourage the development of PKI services. PKI is an enabling technology for an identity ecosystem but most organizations eschew PKI as too complex. The solution may be PKI as a cloud service that provides PKI trust without the on-site complexity. This is why Symantec bought the assets of Verisign. The Feds should push Symantec and others to embed certificates in more places, applications, and devices.
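The PKI-as-a-service idea in point 3 ultimately comes down to making certificate issuance routine for devices and applications. As a minimal illustration, here is how a certificate for an embedded device might be generated with the third-party Python cryptography package; the device name is hypothetical, and a real service would have a CA sign a request rather than self-signing:

```python
# Illustrative only: generate a key pair and a self-signed
# certificate for a hypothetical embedded device.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([
    x509.NameAttribute(NameOID.COMMON_NAME, "smart-meter-001"),
])
now = datetime.datetime.utcnow()

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)          # self-signed for the sketch
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)
print(cert.subject.rfc4514_string())
```

The mechanics are simple; what makes PKI hard is operating this at scale, which is exactly why a cloud service that hides the complexity is attractive.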

There will be lots of other needs as well. The document recommends identity and trust up and down the technology stack, but it doesn’t address the expense or complexity of implementing more global use of IPsec, BGPsec, and DNSSEC. There is also the need for rapid maturity in encryption, key management, and certificate management. Good news for RSA, PGP, nCipher (Thales), IBM, HP, Venafi, and others.

The key to me is building a federated, plug-and-play, distributed identity ecosystem that doesn’t rely on any central authority or massive identity repository. This is an ambitious goal but one that can be achieved — over time — if the Feds get the right players on board and push everyone in the same direction.

© 2011 Enterprise Strategy Group, Milford, MA 01757
