Tuesday, May 09, 2006

Are the server wars over?

Apparently my comment notification system isn't working right--or I just received a comment on a two-month-old post. The misconceptions in that comment were significant enough that I thought it deserved its own post.

I had been engaged in a bit of repartee with Nick Carr (one of the best in the blogosphere, IMHO, so click on an ad next time you're on his site) about whether the server market is doomed (one of the few subjects on which I disagree with Mr. Carr). My post is here (Because 1.21 gigaflops just aren't 1.21 gigawatts).

A reader--Bert Armijo of HotCluster--left the following comment:
I'm afraid you're using the exceptions to build your rule.

90% of all compute cycles are exactly the same. If you doubt that, take a quick look at the server marketshare data. The processor, memory and I/O architecture battle has been settled. So, as long as you're running typical business or web applications and not HPC, computing resources can be considered a commodity.

How to deliver that resource is decided. It's TCP/IP over the internet.

What's been missing is a standardized way to organize the usage of remote cycles to power a distributed application. This challenge has also been solved and now we get to see how the market reacts.
As I said in my follow-on comment: There's a standard processor? Is there a standard OS? Are there standard "typical business applications?"

At Mr. Armijo's suggestion, I decided to take a quick look at the server marketshare data. Although he never said it explicitly, I can only assume that he means that the world has standardized on Linux. However, according to this article on LinuxInsider, there is no clear leader in the server market.

In terms of revenue, Windows has the largest share of the market at somewhere north of $17.7B in 2005. Second place? UNIX, of course, at about $17.5B. Linux systems come in third, with about $5B in revenue in 2005--the first time Linux systems have finished that high. Linux is growing quickly, of course, experiencing 20% revenue growth.

Surprising? A lot of people would think so--although not the server manufacturers themselves. I recently met with a major manufacturer that ships UNIX, Linux, and Windows servers, and over 40% of their servers (in terms of units) ship with Windows on them.

My point here? Simply that there is no clear market leader. UNIX was the server leader for more than a decade, and it is slowly slipping out of sight (the sad news about SGI this week was one more indicator). Windows has taken the lion's share of the UNIX slice of the pie, but Linux is growing very quickly (and will continue to grow).

Mr. Armijo also says that the processor battle has been settled. Has it? AMD has been enormously successful in the 64-bit arena--so much so that Intel has rethought its entire strategy for the 64-bit platform.

So to say there's a standard is to willingly put blinders on. There are multiple operating systems out there. There are multiple hardware platforms out there. And many are thriving. Many will continue to thrive.

Mr. Armijo didn't explicitly call out any standard operating system, and it's possible that I've mistaken his point. But not all software runs on every operating system, and that's part of my point when I say "compute cycles aren't all alike." Software needs an OS. Those "typical business applications" that he talks about run on different operating systems, and many are dependent on a single OS.

To repeat myself: I'm a believer in utility computing. I think it will have an ever-increasing role in business. But it's not a panacea, and it's not easy. Using utility computing is still a difficult, nasty business (see CRN's review of Sun Grid).

Moreover, there are the concerns I raised in my previous post:
Not all compute problems can be "shipped out" easily. There are huge data concerns. First of all, there are the ubiquitous privacy and security issues: some data people just don't want leaving their building.

Beyond that, though, there's the issue of data size and compute-to-byte ratio. If I need to do a quick search of a huge dataset I just collected from my genome lab or my jet design wind tunnel, it may not make sense to move that to a "computing powerplant." Heck, I may be collecting tons of data in real time, and I need to analyze it in real time. I need my computes where my data is. As Jim Gray says, "Put the computation near the data." If your data are with you, that means your computes are with you as well.
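
To put a rough number on that compute-to-byte argument, here's a back-of-the-envelope sketch in Python. Every figure in it (dataset size, WAN bandwidth, local scan rate) is an illustrative assumption, not a measurement; it just compares how long it takes to ship a big dataset to a remote "computing powerplant" versus scanning it where it sits.

```python
# Back-of-the-envelope: ship the data to a remote "computing powerplant"
# vs. scanning it in place. All figures below are illustrative assumptions.

DATASET_GB = 2000            # e.g., one large wind-tunnel or genome run (assumed)
WAN_MBIT_PER_SEC = 100       # assumed upload bandwidth to the remote utility
LOCAL_SCAN_GB_PER_SEC = 0.5  # assumed sequential scan rate on local disks

# Time to push the dataset over the wire (GB -> megabits -> seconds -> hours)
transfer_hours = (DATASET_GB * 8 * 1000) / WAN_MBIT_PER_SEC / 3600

# Time to scan the same dataset locally (seconds -> hours)
local_scan_hours = DATASET_GB / LOCAL_SCAN_GB_PER_SEC / 3600

print(f"Shipping {DATASET_GB} GB over the WAN: ~{transfer_hours:.1f} hours")
print(f"Scanning it locally:                  ~{local_scan_hours:.1f} hours")
```

With assumptions like those, the upload alone takes close to two days before the remote utility has done any work at all, while a local scan finishes in about an hour--which is exactly the "put the computation near the data" point.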
For the last time: I'm not arguing against utility computing. But if you think it will apply to every computing problem--that enterprises are going to get all of their CPU cycles over the net and no one will buy servers anymore--you're just not looking at the real world.

(I also used some numbers directly from this IDC press release, which was the source for the LinuxInsider article.)
