Friday, March 03, 2006

Because 1.21 gigaflops just aren't 1.21 gigawatts

1.21 Gigawatts?  What was I thinking?

In his normally insightful blog this week, Nicholas Carr made a rather off-the-wall suggestion: the server industry is doomed. As evidence, he points to trends he sees happening--shifts away from high-end servers toward either blades or grids of commodity, off-the-shelf (COTS) hardware.

He cites two examples: Sumitomo Mitsui Bank replaced 149 traditional servers with 14 blade servers, and Google runs all of its software on machines it assembles itself.

Of course, I don't think these two examples prove much at all. After all, the blade systems he's referring to have to be supplied by somebody--and I expect the traditional server manufacturers to keep moving in that direction. Blades won't kill the server market; they'll be part of it.

As for Google ("It buys cheap, commodity components and assembles them itself into vast clusters of computers"): not every company is Google--we can't all buy so many machines that no one even notices when one dies. We can't all keep people on staff to build our own machines and then spend their days roaming our aisles of racks, pulling out the dead ones. Many of us depend on the quality that the server manufacturers deliver. Google isn't a typical company, or even a typical web company; one might say they're unique. So I don't think their lack of name-brand servers is a harbinger of doom.

But Carr gets way off-track when he then suggests that utility computing will kill the server industry:

If large, expert-run utility grids supplant subscale corporate data centers as the engines of computing, the need to buy branded servers would evaporate. The highly sophisticated engineers who build and operate the grids would, like Google's engineers, simply buy cheap subcomponents and use sophisticated software to tie them all together into large-scale computing powerplants.

I've seen many references to utility computing before, and I just don't buy it.

Partly, it's just physics. All electrons look alike (let's not get into electron spin here: as far as my appliances are concerned, every electron looks the same). It doesn't matter to me whether the power that's lighting up my life, running my refrigerator, and powering my PC comes from a wind farm, a hydroelectric plant, or a diesel turbine. Well, for environmental reasons, I might prefer the first two, but the point is that by the time an electron gets to me, I can't tell where it came from.

Computes just aren't the same. Computes look different on different operating systems. Not all software runs on all operating systems. Different people prefer different toolsets, and they always will. Some OSs are better at some things than others, and people choose the ones that are appropriate for them. Yes, we've all read about "write once, run everywhere" software--but only a small minority of software actually runs that way. OSs are different, and they will continue to be different. People will continue to write software that takes advantage of particular OSs.

Not all compute problems can be "shipped out" easily. There are huge data concerns. First of all, there are the ubiquitous privacy and security issues: there is some data that people just don't want leaving their building.

Beyond that, though, there's the issue of data size and compute-to-byte ratio. If I need to do a quick search of a huge dataset I just collected from my genome lab or my jet-design wind tunnel, it may not make sense to move that data to a "computing powerplant." Heck, I may be collecting tons of data in real time and need to analyze it in real time. I need my computes where my data is. As Jim Gray says, "Put the computation near the data." If your data is with you, that means your computes are with you as well.
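
To put rough numbers on that, here's a back-of-the-envelope sketch in Python. Every figure in it is an assumption picked for illustration--a 2 TB dataset, a 100 Mbit/s uplink to a hypothetical computing powerplant, and ten local disks streaming about 60 MB/s each--not a measurement of any real system.

```python
# Back-of-the-envelope: ship the data to a remote "computing powerplant"
# versus scan it where it already sits. All numbers are illustrative
# assumptions, not measurements.

DATASET_BYTES = 2 * 10**12             # assume a 2 TB dataset from the lab
WAN_BYTES_PER_SEC = 100e6 / 8          # assume a 100 Mbit/s uplink
LOCAL_SCAN_BYTES_PER_SEC = 10 * 60e6   # assume 10 disks streaming ~60 MB/s each

transfer_hours = DATASET_BYTES / WAN_BYTES_PER_SEC / 3600
local_scan_hours = DATASET_BYTES / LOCAL_SCAN_BYTES_PER_SEC / 3600

print(f"Shipping the data out: ~{transfer_hours:.0f} hours")    # roughly 44 hours
print(f"Scanning it in place:  ~{local_scan_hours:.1f} hours")  # roughly 0.9 hours
```

With those made-up numbers, just moving the data takes the better part of two days, while the local scan finishes in under an hour--which is exactly why the computes belong next to the data.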

Don't get me wrong: I'm a big believer in distributed computing, and I'm a big believer in grid computing. But I don't think that, in the future, I'm going to flip on the "compute switch" the way I flip on a light switch today.

Is the server market changing? Of course it is. Blades, virtualization, distributed computing: these are all changing the needs of the market. There will continue to be a high-end market. There will continue to be a low-end market. But utility computing will not kill servers.

1.21 gigawatts? What was I thinking?