Enterprise Strategy Group | Getting to the bigger truth.™

Posts Tagged ‘Cloud Computing’

Worthwhile Cloud Computing Security Resources for CIOs

Tuesday, November 23rd, 2010

I recently participated in a Cloud Innovation Council CIO roundtable discussion focused on cloud computing in the insurance industry. As expected, the CIOs said that they were concerned about cloud computing security in areas like identity management, data security, and network security.

There was another issue, however, that came as a bit of a surprise to me. These IT executives said that cloud computing was so new that they really didn’t have a standard methodology to assess and audit cloud computing providers’ security. Yes, they had a general idea of what they wanted to know but were uncomfortable with informal evaluations and longed for some best practice guidelines.

This situation falls into the “I don’t know what I don’t know” category. Industry hype around cloud computing is off the charts, but when insurance industry CIOs really need guidance, all that noise makes it difficult to find help. For these CIOs and others in the same boat, I suggest looking into two efforts focused on cloud computing security requirements and assessment processes.

The first is the great work being done by the Cloud Security Alliance (CSA). Now normally I am a bit skeptical of IT industry consortia, but the CSA really has looked thoroughly at cloud security and written several detailed documents around best practices. CSA has even looked beyond basic security and now offers several guidelines on cloud GRC as well.

In addition to the CSA, it is also worth looking into the cloud security work being done at the National Institute of Standards and Technology (NIST). While NIST’s work has a federal government focus, the agency recently published its Federal Risk and Authorization Management Program (FedRAMP). According to the CIO.gov website, FedRAMP “has been established to provide a standard approach to Assessing and Authorizing (A&A) cloud computing services and products.” The FedRAMP pages on CIO.gov also link to assessment guideline documents.

With all of the money being spent on cloud computing marketing, you’d think there would be more focus on CSA and FedRAMP but this is not the case. As always, the IT industry loves to solve future, not current, problems. I hope that this blog calls attention to CSA and FedRAMP and provides some assistance to IT and security professionals in the process.

The Smart-Fat and Smart-Thin Edge of the Network

Wednesday, November 17th, 2010

Take a look at ESG Research and you’ll see a number of simultaneous trends. Enterprises are consolidating data centers, packing them full of virtual servers, and hosting more and more web applications within them. This means massive traffic coming into and leaving data centers.

Yes, this traffic needs to be switched and routed, but that is actually the easiest task. What’s much harder is processing this traffic in the network for security, acceleration, application networking, etc. This processing usually takes place at the network edge, but additional layers are also migrating into the data center network itself to segment specific application services.

Think of it this way: There is a smart-fat network edge that feeds multiple smart-thin network segments.

The smart-fat network edge aggregates lots of network device functionality into a physical device, cluster of devices, or virtual control plane. This is the domain of vendors like Cisco, Crossbeam Systems, and Juniper Networks for security and companies like A10 Networks, Citrix (Netscaler), and F5 Networks for application delivery. These companies will continue to add functionality to their systems (for example, XML processing, application authentication/authorization, business logic, etc.) to do more packet and content processing over time. It wouldn’t surprise me at all if security vendors added application delivery features and the app delivery crowd added more security.

Once the smart-fat network edge has processed all incoming traffic, packets and content will be processed further within the data center (i.e., at the smart-thin network edge). This will most likely be done using virtual appliances like the Citrix VPX. Why? Virtual appliances can be provisioned on the fly with canned policies or customized for specific workloads. They can also follow applications that migrate around internal data centers or move to public clouds.
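To make this concrete, here is a minimal sketch of policy-follows-workload provisioning at the smart-thin edge. It is illustrative only: the orchestrator class, its methods, and the canned policies are hypothetical stand-ins for whatever APIs vendors like Citrix or F5 actually expose.

```python
# Hypothetical sketch: provisioning smart-thin virtual appliances whose
# policies follow workloads. None of these names map to a real vendor API.

CANNED_POLICIES = {
    "web-tier": {"waf": True, "ssl_offload": True, "rate_limit_rps": 500},
    "db-tier":  {"waf": False, "ssl_offload": False, "rate_limit_rps": 50},
}

class Orchestrator:
    def provision_appliance(self, segment, policy):
        """Spin up a virtual appliance (ADC, firewall, etc.) on the named
        network segment with the given policy."""
        print(f"Provisioning appliance on {segment} with {policy}")

    def on_workload_migration(self, workload, new_segment):
        """When a workload moves, re-provision its appliance so the policy
        follows the application to its new home."""
        policy = CANNED_POLICIES[workload["tier"]]
        self.provision_appliance(new_segment, policy)

orch = Orchestrator()
orch.provision_appliance("dc1-web-segment", CANNED_POLICIES["web-tier"])
orch.on_workload_migration({"name": "billing-app", "tier": "web-tier"},
                           "public-cloud-segment")
```

The design point is the second method: because the appliance is software, re-provisioning it at the workload’s destination is just another API call.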

A few other thoughts here:

  1. I’m sure we’ll see new startups focused on smart-thin virtual appliances, but I don’t expect them to succeed. Existing vendors will simply deliver virtual appliance form factors and dominate this business.
  2. Legacy vendors have the best opportunity here as many users will want common command-and-control for the smart-fat edge and the smart-thin edge. Nevertheless, this further network segmentation does provide an opportunity for aggressive vendors to usurp customer accounts and market share.
  3. Smart-fat edge systems are delivered as physical devices today, but this won’t necessarily be true in the future. I can see virtual appliances with horizontal scalability running on HP or IBM blade servers.

The smart-fat, smart-thin architecture is already playing out in cloud computing and wireless carrier networks today and I expect it to become mainstream in the enterprise segment over the next 24 months. The technology is ready today but many users have no idea how to implement this type of architecture or capitalize on its benefits. Vendors who can guide users along with knowledge transfer, best practices, and reference architectures are most likely to reap the financial rewards.

Technology CEO Council’s Lightweight Federal IT Recommendations

Wednesday, November 3rd, 2010

Have you heard of the Technology CEO Council?  Neither had I until recently.  The council is made up of a strange mix of tech CEOs from organizations including Applied Materials, Dell, EMC, IBM, Intel, Micron, and Motorola.  Why this group and not Adobe, Cisco, HP, Juniper Networks, Microsoft, Oracle, and Symantec?  Beats me.

Anyway, the group published a paper in early October called “One Trillion Reasons: How Commercial Best Practices to Maximize Productivity Can Save Taxpayer Money and Enhance Government Services.”  The paper stresses the need to reduce federal spending and suggests some IT initiatives in support of this objective.  The initiatives include:

  1. Consolidate information technology infrastructure
  2. Streamline government supply chains
  3. Reduce energy costs
  4. Move to shared services
  5. Apply advanced business analytics to reduce improper payments
  6. Reduce field operations footprint and move to electronic self-service
  7. Monetize government assets

The paper is available at www.techceocouncil.org.

I agree with the spirit of this paper as there are plenty of ways to use IT cost savings to reduce overall federal spending.  That said, the paper is pretty weak and self-serving.  Specifically:

  • The Feds are already doing most of these things today.  Federal CIO Vivek Kundra is already driving data center consolidation.  Agencies were asked to submit initial input on June 30, 2010, and finalized plans are due on December 31.  Lots of federal agencies, including CIA, DHS, DISA, and NASA, are well along the road to cloud computing as well.  Perhaps the Feds should be more aggressive, but the same could be said of any organization.
  • The paper ignores legislative challenges.  The paper suggests consolidating common IT services such as payroll, finance, and human resources.  Once again, this is nothing new; this type of consolidation was suggested in 2001 as part of Karen Evans’s Federal Enterprise Architecture.  Moving beyond inter-departmental cooperation toward a federal IT organization could indeed save money, but it would require overhauling (or at least tweaking) the Clinger-Cohen Act of 1996.  This could be a long, arduous process.
  • What about security?  Federal IT spending is dominated by military and intelligence agencies with deep security requirements.  You can’t simply consolidate around these requirements.  Yes, security standards and regulations should be changed to keep up with the times; this is exactly what’s happening with FISMA 2.0 and the FedRAMP strategy to streamline cloud computing certification and accreditation (C&A).  Again, these things take time, thought, and care, not just ideas and papers.

The CEOs also need to remember that their own internal IT organizations are far different from those in the federal government. When EMC executives mandate a massive VMware project, all of IT jumps into formation.  It doesn’t work that way in the public sector.

There were certainly some good points in the paper, but overall it is really a marketing piece put out by a lobbying organization.  In my humble opinion, there is some irony in this paper and organization–while the Technology CEO Council puts out a paper about how the federal government can save money on IT, companies like Dell, EMC, IBM, and Intel are happily wasting dough on a half-baked lobbying/PR organization.  Funny world.

DISA, Cloud Computing, and The Last Mile in Afghanistan

Thursday, October 28th, 2010

If you’re interested in cloud computing, you should look into the activities at the Defense Information Systems Agency (DISA). DISA provides IT services for the DoD, including network services, computing services, and complex application development. DISA is also a leading example of cloud computing in the U.S. federal government. For example, it has created its Rapid Access Computing Environment (RACE) to automatically provision resources for application testing and development. RACE is complemented by FORGE.mil, a series of open source collaborative development components. DISA will also lead the effort to consolidate thousands of e-mail and SharePoint domains across the military into global enterprise services.

I participated in the Virtualization, Cloud, and Green Computing summit in Washington, DC over the past few days and heard a review of DISA’s cloud progress from its CIO, Henry Sienkiewicz. Henry was talking leading-edge stuff and, as a geeky analyst, I was all ears.

When it came to the Q&A portion of his presentation, however, I was quickly brought back to earth by the reality of DISA’s mission. The first question came from an Air Force officer who was leaving Washington, DC that evening, headed back to the Middle East. In contrast to the whiz-bang cloud computing efforts in Washington, the officer asked what DISA could do to help with network communications in Afghanistan. Both the Army and the Air Force are responsible for IT activities in theater, and they go about their business in different ways. Army people tend to go in and set up quickly, ready to move IT assets at any time. The Air Force, on the other hand, takes a more strategic view and sets up for longer engagements. Neither approach is right or wrong; the problem is that Army and Air Force troops don’t really coordinate their efforts, leading to redundancy, inefficiency, and IT downtime.

The second real problem is bandwidth. While we here in the States have a choice between fiber providers, there isn’t any glass in the ground in Afghanistan. Army guys may run fiber and then leave it in the ground when they leave, but most communication is based upon satellites. This makes for a very thin pipe–not nearly enough to take advantage of rich DISA cloud applications running in Ft. Meade, MD.
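Some back-of-the-envelope arithmetic shows just how thin that pipe is. The link speeds below are illustrative assumptions, not actual DoD figures; the point is the difference in orders of magnitude.

```python
# Rough transfer-time arithmetic; link speeds are illustrative assumptions.

def transfer_hours(payload_gb, link_mbps):
    """Hours to move a payload over a link, ignoring latency and loss."""
    payload_megabits = payload_gb * 8 * 1000  # GB -> megabits (decimal units)
    return payload_megabits / link_mbps / 3600

payload_gb = 10  # e.g., a modest imagery or application data set
for name, mbps in [("shared satellite (1.5 Mbps)", 1.5),
                   ("metro fiber (1 Gbps)", 1000.0)]:
    print(f"{name}: {transfer_hours(payload_gb, mbps):.2f} hours")
```

At satellite speeds, a 10 GB payload takes the better part of a day; over fiber, it takes about a minute. And this ignores satellite latency, which punishes the chatty protocols rich cloud applications tend to use.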

CIO Sienkiewicz said he was aware of the problems and responded to the requests in general terms. When I spoke to the Air Force officer later, he told me that Sienkiewicz had approached him after the talk to reassure him that he understood the situation in theater. It seems that DISA’s CIO started his career in the Army infantry, so he was extremely empathetic. Sienkiewicz really doesn’t own this problem, but my guess is that he will try to work with others at DoD to fix it.

There is a lesson to be learned in this dialogue. We in IT love to work on vision and hate to fix the mundane things that are broken. The Air Force officer’s issue is nothing new; telecommunications carriers have been struggling with the “last mile” of the network forever. In this case, however, the last mile isn’t between a telecom central office and a residential neighborhood demanding HDTV, it is between “boots on the ground” and command-and-control units engaged in life-and-death communications. Cloud computing’s rapid deployment, resource optimization, and burstable capacity-on-demand are extremely beneficial, assuming we have the networks in place to take advantage of these resources. For the sake of our troops, let’s all hope that these prosaic yet critical network issues are addressed ASAP.

Cloud Computing? We Still Haven’t Mastered Server Virtualization!

Tuesday, October 19th, 2010

According to ESG Research, only 7% of large mid-market (i.e., 500-1,000 employees) and enterprise (i.e., 1,000 employees or more) organizations are not using server virtualization technology and have no plans to do so. Alternatively, 61% are using server virtualization technology extensively in test/development AND production environments.

Okay, so server virtualization technology is everywhere, but how are large organizations using it? Many technology vendors would have you believe that enterprises are using server virtualization as the on-ramp to cloud computing. The industry crows about server virtualization’s use for IT automation and self-service, as VMs are rapidly provisioned, dynamically re-configured, and moved constantly from physical server to physical server for load balancing and resource optimization.

It’s a great vision; it just isn’t happening today. Most organizations use server virtualization for web applications and file and print services, but far fewer have taken on transaction-oriented applications or databases. Many firms still struggle with performance issues when trying to align physical networks, storage devices, and servers with virtualization technology. As for VM mobility (i.e., vMotion), only 30% of the organizations surveyed by ESG use it on a regular basis. Why eschew VM mobility? It turns out that 24% of organizations say they have no need for the functionality at this time.
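For those who haven’t seen it in action, the decision logic behind VM mobility is easy to sketch. Everything below is illustrative: the thresholds are invented, and the “migration” is a print statement standing in for a real hypervisor call such as vMotion.

```python
# Illustrative only: thresholds are invented, and the migration is a
# stand-in for a real hypervisor call (e.g., VMware vMotion).

CPU_HIGH = 0.80  # migrate away when a host runs hotter than this
CPU_LOW = 0.50   # destination hosts must be cooler than this

def rebalance(hosts):
    """Move one VM from the busiest host to the least busy one."""
    hosts = sorted(hosts, key=lambda h: h["cpu"])
    coolest, hottest = hosts[0], hosts[-1]
    if hottest["cpu"] > CPU_HIGH and coolest["cpu"] < CPU_LOW:
        vm = hottest["vms"].pop()
        coolest["vms"].append(vm)
        print(f"Migrating {vm}: {hottest['name']} -> {coolest['name']}")

rebalance([
    {"name": "esx-01", "cpu": 0.92, "vms": ["web-03", "db-01"]},
    {"name": "esx-02", "cpu": 0.31, "vms": ["web-01"]},
])
```

The sketch also hints at why so many shops see no need for the feature: if hosts never cross thresholds like these, it simply never fires.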

The ESG data does suggest that server virtualization represents a paradigm shift driving huge changes in IT organizations, processes, and technologies, but these transitions will take time to play out. Many enterprises will get to a state of more dynamic data center transformation around 2013 or so.

Take my word for it: the IT rhetoric around server virtualization is visionary hype rather than actual reality. I’ve got tons of data to back this up. There are more average Joe IT shops out there than whiz-bang organizations like Microsoft, and there always will be.

First Impressions from Oracle Open World

Wednesday, September 22nd, 2010

I’m here in San Francisco for Oracle Open World. Just arrived, but I already have some first impressions.

  1. There are signs, billboards, and brochures boasting about Oracle’s commitment to integrated hardware and software. This is the ultimate irony to an industry old-timer like me, as Oracle led the open systems charge in the 1990s, lambasting Digital Equipment and IBM for their autocratic systems control. I’ll have to poke around for some old Oracle ads and compare them to its new integrated stack mantra.
  2. VMware is here and if I stop by its booth, I get a free espresso. Funny thing is that as far as I know, Oracle doesn’t support its apps or databases running on top of VMware. Based upon ESG Research, I believe that within two years or so, large organizations will use Oracle virtualization infrastructure to run Oracle workloads alongside VMware, Xen, or Hyper-V workloads. When this happens, I can’t imagine VMware will be very visible at Oracle Open World, let alone spring for coffee.
  3. Rumor has it that Oracle will either announce its own Ethernet switch or buy one of the remaining independents. Personally, I hope Oracle doesn’t go down this road and decides to work with everyone else.
  4. I fully expect Oracle to jump much deeper into the security waters. This is becoming a requirement for being in the systems business.
  5. Michael Dell spoke this morning about virtualization in the data center. Application and database folks used to refer to this stuff as “plumbing.” Why do they care now? Because distributed applications with huge Hadoop backends need to be tuned with virtual servers, networks, and storage IO in mind.
  6. I’m looking for Oracle to be one of the leaders that transform today’s monolithic, enterprise-focused identity management technology into a more Web-friendly, democratic model.
  7. It’s funny that after Larry ripped apart cloud computing as nothing but industry hype, many vendors are here at OOW preaching, you guessed it, cloud computing.

More soon, time to walk the floor and get indoctrinated.

RSA Security Extends Compliance to Virtualization

Tuesday, August 31st, 2010

In between the cloud rhetoric and virtualization hyperbole at this year’s VMworld, I’m starting to see a few significant announcements.

RSA Security made one of these by introducing virtualization intelligence in its Archer compliance suite.

What’s the big deal? IT operations needs standard server configurations to meet compliance mandates, and auditors need visibility into both physical and virtual servers. Neither group wants to jump through hoops to get what they need. This is a pretty big deal: when ESG asked security professionals what security-specific developments need to take place in order to enable more widespread server virtualization usage, 27% responded that their organizations needed “compliance management tools that recognize virtual server events.” This was the third most popular of all possible responses.

RSA is on to something here. When I move workloads to the cloud, you can be damn sure that my auditors want to know what’s going on. I’d like to see more vendors follow RSA’s lead, and I’d really like to see security and cloud computing vendors start to discuss data standards for compliance, event management, and log file formats as well as secure transport protocols. Alas, I’m getting ahead of myself.
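To illustrate what such a data standard might buy us, here is a minimal sketch that normalizes a hypervisor event into one common compliance record. The field names and format are pure assumption on my part; the whole point above is that no shared standard exists yet.

```python
# Hypothetical normalization of virtual server events for compliance tools.
# Field names and the record format are assumptions, not an actual standard.

import json
from datetime import datetime, timezone

def normalize_vm_event(source, raw_event):
    """Map a hypervisor-specific event into a common audit record."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": source,                     # which management system
        "vm_id": raw_event.get("vm"),
        "action": raw_event.get("type"),      # e.g., created, migrated, deleted
        "actor": raw_event.get("user", "system"),
        "compliance_scope": raw_event.get("scope", "none"),
    }

record = normalize_vm_event("vcenter-prod", {
    "vm": "vm-1042", "type": "migrated", "user": "ops-admin", "scope": "pci",
})
print(json.dumps(record, indent=2))
```

Once every virtualization platform emits records like this, compliance suites such as Archer can treat physical and virtual events the same way, which is exactly what those surveyed security professionals are asking for.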

The RSA announcement won’t get much pickup, as it lacks the buzz of some cloudy/virtualization vision thing. Nevertheless, it is exactly what customers are looking for.

Federal Government Remains Curious — but Skeptical — of Cloud Computing

Monday, May 3rd, 2010

I’m in Washington co-chairing a Cloud Computing summit along with my colleague Mark Bowker. Thus far, we’ve covered cloud computing drivers, virtualization, cloud computing governance/compliance, and new skill sets needed for the cloud.

The audience is made up of federal IT workers, for the most part. These folks are under the gun since the Obama administration is pushing cloud projects and setting aside budget dollars to persuade federal agencies to get on board with proof-of-concept efforts. Federal CIO Vivek Kundra has added fuel to the fire, acting as the poster child for federal cloud computing as a way to save taxpayer money and improve IT service.

The federal audience is certainly hungry for knowledge, but very leery about the cloud in general. The feedback today indicates that:

  1. Federal IT doesn’t know where to start. Perhaps industry hype has blurred the focus, but there were a lot of questions about which IT activities/applications were a good fit for the cloud. We talked about the “low-hanging fruit” like cloud storage for non-sensitive data (see the sketch after this list) and perhaps e-mail, but the feds want more information. Beyond these obvious candidates, what’s next?
  2. Security and governance scare the heck out of the Washington crowd. Remember that a high percentage of data is considered confidential. In spite of FISMA-compliant cloud efforts, federal IT workers remain unconvinced. Vendors will have to do a lot of hand-holding inside the Beltway.
  3. State and local governments are much more open to the cloud. This is true for one good reason: they are out of money. A CIO from Colorado talked about the state buying services from Amazon and Google. The CIO stated, “You have to give up some control, but you can gain financial benefits.”
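As promised in the first item above, here is what that low-hanging fruit can look like in practice: a gated upload of non-sensitive data to commodity cloud storage. The sketch uses boto3 (a modern AWS library, anachronistic for this post, but the idea is the same); the bucket name and classification labels are my own assumptions.

```python
# Sketch: push data to cloud storage only if its classification allows it.
# Bucket name and classification scheme are invented for illustration.

import boto3

NON_SENSITIVE = {"public", "unclassified"}

def upload_if_non_sensitive(bucket, path, classification):
    """Upload a file to S3 only when its data classification permits."""
    if classification.lower() not in NON_SENSITIVE:
        raise ValueError(f"'{path}' is classified '{classification}'; keep it in-house")
    s3 = boto3.client("s3")
    s3.upload_file(path, bucket, path)  # args: Filename, Bucket, Key
    print(f"Uploaded {path} to s3://{bucket}/{path}")

upload_if_non_sensitive("agency-public-data", "reports/2010-q1.csv", "public")
```

The classification gate is the important part: the technology is trivial, but the governance decision about what counts as non-sensitive is exactly where the federal audience gets stuck.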

Federal IT people really want more basic information and education about the cloud; vendors should note this and ramp up their knowledge transfer capabilities. Furthermore, it is important to talk in federal terms like FISMA and NIST rather than delivering a more generic presentation. Think security and governance from the get-go.

Finally, the feds are really afraid of vendor lock-in, so standards are important here. When and if the federal government agrees upon cloud standards, vendors must go along to get along. If the feds fail to agree upon standards, all bets are off and the federal cloud becomes a big free-for-all. The private sector, public sector, and technology industry should all work together to make sure that this won’t happen.

Final thoughts on Interop — and Las Vegas

Friday, April 30th, 2010

Okay, I’m back in sunny Boston after four days at Interop. I’m now convinced that no normal person should be subjected to Las Vegas for more than this amount of time. Everyone I ran into yesterday was looking forward to leaving. I flew out at 2:15 and found that people with later flights were jealous. This says it all.

Enough about the fake city, however. As for Interop, a lot of people thought that the 2009 downer indicated that Interop might not be around much longer. In less than a year, the buzz has returned on the strength of better financials, more market demand, and cloud computing. Here are my final thoughts on the show:

  1. I was certainly entertained by the Xirrus booth that featured a real boxing ring with live sparring. That said, Xirrus positioned this as the Wi-Fi battle between Arrays and APs. Hasn’t this debate been settled? Personally, I think that Wi-Fi must evolve into a smart mesh that seamlessly integrates into the wired network. Aerohive seems especially innovative in this regard.
  2. I was impressed last year by 3Com’s product line and bravado but wondered if it really had the resources to impact Cisco. Now that 3Com is part of HP, those concerns go away. At the very least, Cisco margins will be impacted every time HP is in a deal, and HP’s product line and resources may represent the first real Cisco challenger since Wellfleet Networks. HP’s problem? Marketing. When Cisco leads with its compelling borderless network vision, HP can’t simply respond with price/performance. What’s HP’s vision of the network in a cloud computing-based world? To challenge Cisco, it needs its own vision and thought leadership, areas where HP hasn’t been strong in the past.
  3. The WAN optimization market continues to flourish with Blue Coat, Cisco, and Riverbed leading the pack. To me, the next battle royale here is desktop virtualization. Which vendor will offer the best support? Too early to tell, but this certainly provides a new opportunity for Citrix and its Branch Repeater product.
  4. It seems like the application acceleration market has become a two-horse race between F5 and Citrix/NetScaler. I was impressed by some new feature/functionality from Brocade and also like scrappy startup A10 Networks, which plays the “hot box” role in this market. Of course, Cisco plays in this market as well. I need to ask my friends in San Jose for an update as the competition is aggressive and confident.
  5. Yes, Juniper wasn’t at Interop. Should we read anything into this as some people have suggested? No. Just look at Juniper’s financial results and you’ll see that the company is doing quite well. With all due respect to the folks who run Interop, it is no longer a requirement to attend industry trade shows.

One final thought. I don’t think anyone really knows what the network will look like in a world with cloud computing, advanced mobile devices, and ubiquitous wireless broadband. In my opinion, this means that the network business is up for grabs in a way it hasn’t been in the past. This should make next year’s Interop just as exciting; I just wish it were at the Moscone Center.

PS: Thanks to all the folks who provided feedback on my comments about Arista Networks. Clearly, I owe Jayshree a call.

State of Michigan Rolls Out Cloud Computing Services in Easy-to-Consume Bites

Thursday, April 1st, 2010

Nothing changes overnight in IT. Even organizations on the cutting edge tend to cautiously adopt new technologies. Heck, they are doing this with Windows 7 even though we’ve all been running Windows since the early 1990s.

The same holds true for cloud computing. Yes, cloud is the “new new thing,” to quote Michael Lewis, and it does have the potential to radically alter the way applications are written, deployed, operated, and managed. That said, expect a slow, steady migration like all other IT transitions.

The State of Michigan is a great example of this mindset. Michigan is a leader in the transition to cloud computing, and state CIO Ken Theis has been a visible public sector cloud evangelist. Michigan is also in the process of building the technology foundation for cloud computing: a new 100,000-square-foot data center and a statewide fiber network.

And yet, with all of this activity, the state is being extremely deliberate in its cloud computing deployment. The initial pilot is really focused on a subset of a subset of cloud computing: cloud-based storage capacity. Cloud storage is offered to local governments, universities, and departments for non-sensitive data. The state department of transportation is a prime consumer of this service.

Michigan wants to have five to eight services running by the end of the month. Eventually, it will offer many more services, compete with public sector options, and create a chargeback system for its cloud consumers. The goal? Improve automation and data sharing across the state while lowering costs.
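Chargeback is the piece that turns an internal service into something genuinely cloud-like, and it is simple to sketch. The rate and usage figures below are invented for illustration; they are not Michigan’s actual numbers.

```python
# Minimal chargeback sketch: meter each consumer's storage, bill monthly.
# The rate and usage figures are invented, not Michigan's actual numbers.

RATE_PER_GB_MONTH = 0.25  # dollars per GB-month (illustrative)

usage_gb = {
    "dept-of-transportation": 1200,
    "state-university": 450,
    "county-clerk": 80,
}

for consumer, gb in sorted(usage_gb.items()):
    print(f"{consumer:24s} {gb:6d} GB  ${gb * RATE_PER_GB_MONTH:>9,.2f}/month")
```

Once consumers see a monthly bill tied to what they actually use, the incentive to clean up stale data and right-size requests follows naturally.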

Michigan is well aware of the security holes in cloud computing today, so it will stick with non-sensitive applications and data for now while it watches cloud’s progress.

With this plan, is Michigan really a cloud visionary? I believe it is, but Theis is also being prudent and patient. To paraphrase Alexander Pope, “fools rush in where wise men fear to tread.” Michigan may be taking its time, but it is learning lessons and gaining experience now so it can improve services and cut costs sooner rather than later.
