Thursday, August 8, 2013

Storage Seems Exciting and Networking Seems Dull

I have to apologize for not posting, as I've had a pretty busy week traveling to the Bay Area for meetings with investors and other assorted smart people. As some of you know, trying to define a product - and a business around it - is generally a difficult thing. Lots of things to think about, so let's dive in.

One of the things I keep bringing up in conversation around here is the notion I rudely expressed in the title of this post. Taking it down to the next level: if you were to come up with a new datacenter innovation, would you want to make it part of the networking infrastructure, or something else? What the hell has happened to innovation in the networking space, anyway? Sadly, the title of the post is pretty close to my conclusion... for now. After talking to a lot of folks from a lot of different parts of the value chain, I heard a lot of different perspectives, but they all pointed to the same conclusion. It's enough to get you depressed. Here's what people see:

Adoption curve - It used to be that network ports were like chocolate-covered coffee beans in my office: you couldn't get enough of them. Moreover, there were manifest bottlenecks everywhere in the topology. Nothing was more urgent to a growing business than getting more, faster ports as soon as possible. Alas, this has changed a lot. The transition from 1G Ethernet to 10G is still not complete years after the technology was first introduced, and workloads that can fill those pipes are not commonplace. Except for some unique, large-scale situations, IT does not see the need to go beyond current technology for a while.

Sales motion - The adoption rate above has led to a much longer sales cycle, and the expected life span of networking gear has grown commensurately. Servers and storage get replaced every two or three years, and both get add-on investments as capacity is needed. Networking gear replacement has become a five-to-eight-year event. Put most succinctly by a reseller friend, "I can sell you storage today, and return to sell you more in a few months. I sell you switches, and then I have no reason to call you for 5 years." Add to that the obvious truth that one vendor dominates the space in a way that causes resellers great discomfort, and the general reluctance to compete is understandable. It just isn't fertile ground for growing new businesses.

Burden of innovation - While nothing is more thorough than the interoperability testing that goes on in networking, it has a cost: network infrastructure is completely closed. What does that mean? If you want to propose an innovation in the networking space, it is nearly impossible to build it on existing gear, for technical reasons. Standards have reduced network management to a completely decoupled state, where the control plane is inaccessible to software running outside the switches; there is nothing to be done through the standard interfaces. That means the way to innovate is to build switches, and if you build switches, you quickly run into the adoption curve and sales motion problems above.

For a lot of reasons, the space may remind people of the old stove-piped systems of the '70s and '80s. The typical approach back then was to build a new hardware platform, build an operating system (or buy some UNIX code from AT&T), and bring up the new machine. Only then could you build the pieces that made you different.

Isn't this what SDN is about? Well, yes. Software Defined Networking has the potential to change this pathology, but the evolution of the technology is taking some bizarre twists that make it a scary place to start a business. That is probably the subject of another post, but suffice it to say: if a reasonable person can't see a path to implementing a new forwarding algorithm on a network without building the entire stack from scratch, it will all fail.
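
To make that last point concrete: on an OpenFlow-capable switch, a new forwarding behavior can live in a controller application instead of in the vendor's firmware. Here is a minimal sketch using the open-source Ryu controller framework (one option among several); the match fields, addresses, and port number are invented purely for illustration.

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class TinyForwarder(app_manager.RyuApp):
        # Speak OpenFlow 1.3 to whatever switch connects to us.
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def on_switch_connect(self, ev):
            dp = ev.msg.datapath
            ofp = dp.ofproto
            parser = dp.ofproto_parser
            # A made-up policy: IPv4 traffic for 10.0.0.2 goes out port 2.
            # The "forwarding algorithm" is whatever Python you care to write here.
            match = parser.OFPMatch(eth_type=0x0800, ipv4_dst='10.0.0.2')
            actions = [parser.OFPActionOutput(2)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                          match=match, instructions=inst))

Point a switch (or an Open vSwitch instance) at the controller, run it with ryu-manager, and nothing in the switch firmware gets rebuilt. Whether enough real hardware lets you do that end to end is exactly the open question.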

So there you have it. The only counterpoint I can offer is that I remember a time when networking was exciting and storage was dull. I'd love to hear more thoughts from all of you... especially if you disagree. What say ye?

4 comments:

  1. A lot has to do with what problems you're solving for the customer. As you alluded to above, customers don't have a lot of issues with their networking kit, other than cost.

    Look at the endpoints first. Servers have standardized on multiple 1G LAN-on-motherboard ports. You can scale up bandwidth by bonding them or adding 10G cards, but at some point you run out of processor bandwidth, so scaling up gives you diminishing returns. On the other hand, you can scale out simply, easily, and cheaply.

    So do endpoints need more bandwidth?

    Looking at the switch side, the same question arises - scale up or scale out? I don't have an answer for this, but I surmise that customers are happy with switch port density vs. cost, and are scaling out rather than scaling up.

    As you noted, the control plane belongs to a single dominant vendor. Are customers having problems not addressed by the current environment? The real question may be whether network configurations are relatively static. If so, and the customer treats them as set-and-forget, then there's not much desire for change.

    The 5+ year refresh cycle may be an indication that customers are amortizing a large investment over a longer time frame, but that is countered by larger investments with quicker refresh cycles in servers and storage. So it is much more likely that, overall, networking vendors have developed a mature product that meets most of their customers' needs.

    So, where does SDN come into play? I think it's really about the control plane inside the virtualization engine. This is an area where the dominant switch vendors can't play, but it is still a switch.
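
    To sketch what I mean (assuming the virtual switch is Open vSwitch and using its stock ovs-vsctl/ovs-ofctl tools; the bridge, port names, and addresses below are made up), the hypervisor-side control plane is just software you can drive directly or hand off to a controller:

        import subprocess

        def sh(cmd):
            # Run a hypervisor-side command; fail loudly if it doesn't work.
            subprocess.check_call(cmd, shell=True)

        # Build a software switch in the hypervisor and attach two VM-facing ports.
        sh("ovs-vsctl add-br br0")
        sh("ovs-vsctl add-port br0 vnet0")
        sh("ovs-vsctl add-port br0 vnet1")

        # Program a forwarding decision locally: IPv4 traffic for 10.0.0.2 -> port 2...
        sh("ovs-ofctl add-flow br0 priority=100,ip,nw_dst=10.0.0.2,actions=output:2")

        # ...or delegate the whole control plane to an external SDN controller.
        sh("ovs-vsctl set-controller br0 tcp:192.0.2.10:6633")

    None of that ever touches the physical switch vendor.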

    Replies
    1. There's no shortage of problems with the hierarchical tree topology most large datacenters deploy. You did point out one thing that I forgot to mention: The end nodes really need to be capable of saturating multiple 10G links before problems develop. Not happening. Hmm.

  2. Just as CPE routers and WiFi access points have become a $50 commodity, I believe datacenter switches will suffer the same fate. There are currently numerous Chinese ODMs selling generic 10G switches based on merchant silicon reference platforms. The switch hardware business will become dull indeed, especially as all these platforms have the same basic features and wire-speed performance. The only things left to decide are metal or plastic cases, the number of LEDs, how many buttons, and how many units to buy.

    Just as with CPE routers and WiFi access points, it is the software that differentiates these boxes. In that market, Jungo, Ashley Laurent, and later the open-source OpenWRT and its derivatives rule the world. The business is then to combine hardware and firmware and sell them to one big access provider. A case in point is Actiontec+Jungo as used by Verizon Fios.

    I believe the business of providing SDN/OpenFlow firmware, controllers, and a management platform for that ecosystem is still emerging, and it is far from dull. Somebody will become the Jungo or the Ashley Laurent of datacenter switches. One interesting company is Cumulus Networks. I don't think you need to build the stack from scratch; Cumulus started from the Linux stack.
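
    As a rough sketch of why that matters (Cumulus-style swpN front-panel port names assumed here; the exact knobs vary by platform and kernel), a Linux-based switch gets configured with the same iproute2 tooling you would use on any server:

        import subprocess

        def sh(cmd):
            # It's just Linux: drive the box with ordinary shell commands.
            subprocess.check_call(cmd, shell=True)

        # Bridge two front-panel ports, exactly as you would on a Linux server.
        sh("ip link add br0 type bridge")
        sh("ip link set swp1 master br0")
        sh("ip link set swp2 master br0")
        for iface in ("br0", "swp1", "swp2"):
            sh("ip link set %s up" % iface)

    The promise is that the switch silicon then forwards whatever the kernel has been told to do, and the familiar Linux software ecosystem comes along for free.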

  3. I actually agree with you, Santa. The problem is still a market vs. technology thing. Without bandwidth demands getting out of hand, or management becoming out of control, it is hard to justify a change. After talking to a number of customers, I got the message loud and clear.

    The service provider market might be another matter... but we will have to wait for that to develop.
