Saturday 22 August 2015

Why Open Source Software Defined Storage Will Win

Last week NetApp posted its first quarterly loss in many years. It's not something I take pleasure in, since I have many friends still working at that company and it still has some very interesting technologies. The storm clouds are gathering though, and I can't help but liken NetApp's situation to that of Sun Microsystems, another former employer, as the tech bubble burst in the early 2000s, especially when I look at some of the comments reported from the NetApp earnings call.

Back in the day, Sun was making a lot of money and good margins with high-performance proprietary hardware.

Then along came Linux, running some simple tasks on commodity hardware. It was good enough to do the simple jobs, had a lot of the essential features & functionality that UNIX provided, and the hardware was at such a low price that the economics were impossible to ignore. Some of the first adopters were the same companies that had been the early Sun customers, those who had replaced mainframes & minicomputers with cheaper, yet effective, Sun hardware.

Unfortunately the lesson of their own early success wasn't remembered by Sun's senior management, who thought they could win customers back by offering Linux on Sun hardware. Of course, the software wasn't the point here – it was the low-cost hardware that was attracting attention. Some of us in the field & parts of engineering tried to convince Sun's management to put more effort behind Solaris for x86, but just at a critical juncture Solaris 9 was released on SPARC, along with the message that x86 support would be put on the back-burner... the die was cast and Sun's fate was sealed as commodity hardware pushed the boundaries of performance and drove the need for proprietary hardware into an upper niche. I still contend that if Sun had instead fully committed to x86 support & embraced a software-first approach, Solaris would have dominated the market & Linux would have been relegated to a position similar to the one FreeBSD occupies today.

What does this have to do with software defined storage & NetApp? Well, it appears that NetApp's senior management is taking a similar approach: they see that customers "want scale out and software defined storage functionality", but seem to think that the only response is a solution running on NetApp hardware. Like Sun, they (and the other major storage vendors) are chained to their high-margin proprietary hardware. Breaking free of this kind of entanglement is at the crux of Christensen's Innovator's Dilemma.

Meanwhile open source, software-defined storage solutions running on commodity hardware, such as SUSE Enterprise Storage based on Ceph – the Linux of enterprise storage, if you like – are starting to gain attention, especially for large bulk data stores facing exponential growth and price sensitivity. For the moment these solutions are best suited to relatively simple (albeit large) deployments, but the technology is not standing still. Ceph already features high-end technologies like snapshots, zero-copy cloning, cache tiering and erasure coding, and enterprises are finding that it is "good enough" at a dramatically lower price point. The open source nature of the development means that progress is rapid, and the software-defined nature means that hardware costs are driven relentlessly down. These are the same dynamics we saw in the transition from UNIX to Linux, and they are likely to have the same impact on the proprietary enterprise storage vendors.
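To give a sense of how approachable this has become, here is a minimal sketch of storing and reading back an object through Ceph's librados Python bindings (python-rados). It assumes a running Ceph cluster with the usual config file and admin keyring in place; the pool and object names are just placeholders for illustration.

    import rados

    # Connect using the default config and admin keyring paths.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    pool = 'demo-pool'  # placeholder pool name for this sketch
    if not cluster.pool_exists(pool):
        cluster.create_pool(pool)  # replicated pool with default settings

    ioctx = cluster.open_ioctx(pool)
    try:
        # Write an object into the pool, then read it back.
        ioctx.write_full('hello-object', b'Hello, software defined storage!')
        print(ioctx.read('hello-object'))
    finally:
        ioctx.close()
        cluster.shutdown()

From the application's point of view nothing changes as the cluster grows by adding more commodity nodes and Ceph rebalances the data underneath, which is exactly where the economics start to bite.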

So, change is here: I hope NetApp can navigate the course better than Sun did; it will be interesting to see how they respond.

Meanwhile, enterprises looking to rein in the exponentially growing costs of enterprise storage can now look to open source for answers, and take advantage of the power and economics of commodity hardware.