Friday, March 03, 2006

Because 1.21 gigaflops just aren't 1.21 gigawatts

1.21 Gigawatts?  What was I thinking?

In his normally insightful blog this week, Nicholas Carr made a rather off-the-wall suggestion. He posits that the server industry is doomed, and as evidence he points to a trend he sees: a shift away from high-end servers toward either blades or grids of commodity, off-the-shelf (COTS) hardware.

He cites two examples: Sumitomo Mitsui Bank replaced 149 traditional servers with 14 blade servers, and Google runs all of its software on machines it assembles itself.

Of course, I don't think these two examples prove much at all. After all, the blade systems he refers to need to be supplied by somebody--and I expect the traditional server manufacturers to keep moving in that direction. Blades won't kill the server market; they'll be part of it.

As for Google ("It buys cheap, commodity components and assembles them itself into vast clusters of computers"): not every company is Google. We can't all buy so many machines that no one even notices when one dies. We can't all keep people on staff to build our own machines and then spend their days roaming our aisles of racks pulling out the dead ones. Many of us depend on the quality that the server manufacturers deliver. Google isn't a typical company, or even a typical web company; one might say it's unique. So I don't think its lack of name-brand servers is a harbinger of doom.

But Carr gets way off-track when he then suggests that utility computing will kill the server industry:
If large, expert-run utility grids supplant subscale corporate data centers as the engines of computing, the need to buy branded servers would evaporate. The highly sophisticated engineers who build and operate the grids would, like Google's engineers, simply buy cheap subcomponents and use sophisticated software to tie them all together into large-scale computing powerplants.
I've seen many references to utility computing before, and I just don't buy it.
[Image: a windmill]
Partly, it's just physics. All electrons look alike (let's not get into electron spin here: as far as my appliances are concerned, every electron looks the same). It doesn't matter to me if the power that's lighting up my life, running my refrigerator, and powering my PC came from a wind farm, a hydroelectric plant, or a diesel turbine. Well, for environmental reasons, I might prefer the former two, but the point is that when an electron gets to me, I can't tell where it came from.

Computes just aren't the same. Computes look different on different operating systems, and not all software runs on all operating systems. Different people prefer different toolsets, and they always will. Some OSs are better at some things than others, and people choose the OS that's appropriate for the job. Yes, we've all read about "write once, run everywhere" software--but only a small minority of software actually runs that way. OSs are different, they will continue to be different, and people will continue to write software that takes advantage of particular OSs.
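
To make that concrete, here's a minimal sketch in Python of how quickly OS differences surface; the choice of commands is purely illustrative, not anything standard:

    import platform
    import subprocess

    # Even a trivial job--"what's running on this box?"--has to be
    # written against a particular OS and its tools.
    if platform.system() == "Windows":
        cmd = ["tasklist"]        # Windows-only utility
    else:
        cmd = ["ps", "-ef"]       # POSIX-only utility

    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout.splitlines()[0])  # even the output format differs

Multiply that little fork in the road by every library, patch level, and toolchain a real application touches, and "a compute" stops looking like a fungible kilowatt-hour.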

Not all compute problems can be "shipped out" easily. There are huge data concerns. First of all, there are the ubiquitous privacy and security issues: some data people just don't want leaving their building.

Beyond that, though, there's the issue of data size and compute-to-byte ratio. If I need to do a quick search of a huge dataset I just collected from my genome lab or my jet design wind tunnel, it may not make sense to move that to a "computing powerplant." Heck, I may be collecting tons of data in real time, and I need to analyze it in real time. I need my computes where my data is. As Jim Gray says, "Put the computation near the data." If your data are with you, that means your computes are with you as well.
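
A back-of-the-envelope sketch makes the point. All of the numbers below--dataset size, link speed, local scan rate--are made-up assumptions for illustration, not measurements:

    # Ship the data to a remote "computing powerplant," or scan it locally?
    dataset_bytes = 10 * 1024**4            # assume 10 TB fresh from the lab
    wan_bits_per_sec = 100 * 10**6          # assume a fast 100 Mb/s WAN link
    local_scan_bytes_per_sec = 200 * 10**6  # assume a modest local disk array

    upload_days = dataset_bytes * 8 / wan_bits_per_sec / 86400
    local_hours = dataset_bytes / local_scan_bytes_per_sec / 3600

    print("Upload to the grid: about %.1f days" % upload_days)   # ~10.2 days
    print("Scan it locally:    about %.1f hours" % local_hours)  # ~15.3 hours

Under those assumptions, the upload alone takes over ten days before the remote grid does any work at all, while the local scan finishes the same day. The data's gravity keeps the computes at home.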

Don't get me wrong: I'm a big believer in distributed computing, and I'm a big believer in grid computing. But I don't think that, in the future, I'm going to flip on the "compute switch" the way I flip on a light switch today.

Is the server market changing? Of course it is. Blades, virtualization, distributed computing: these are all changing the needs of the market. There will continue to be a high-end market. There will continue to be a low-end market. But utility computing will not kill servers.

1.21 gigawatts? What was I thinking?

3 comments:

  1. I expanded on this post, but I was unable to get trackback to work. To see my post, follow this link.

    Kim

  2. Anonymous (5:31 PM)

    I'm afraid you're using the exceptions to build your rule.

    90% of all compute cycles are exactly the same. If you doubt that, take a quick look at the server market-share data. The processor, memory and I/O architecture battle has been settled. So, as long as you're running typical business or web applications and not HPC, computing resources can be considered a commodity.

    How to deliver that resource has already been decided: it's TCP/IP over the internet.

    What's been missing is a standardized way to organize the usage of remote cycles to power a distributed application. This challenge has also been solved and now we get to see how the market reacts.

  3. Sorry I missed your comment before, Bert. My comment notification must be broken!

    There's a standard processor? There's a standard OS? Are there standard "typical business applications"?

    No one operating system ships on more than 30 or 40% of servers out there--Linux and Windows are both gaining market share as UNIX slowly bows out.

    So when you say "The processor, memory and I/O architecture battle has been settled. So, as long as you're running typical business or web applications and not HPC, computing resources can be considered a commodity." I'm not even sure what you mean. Yes, there is "commodity, off-the-shelf hardware" out there--but it's the configuration of that hardware that makes running different pieces of software on it difficult.

    What OS(s) are installed? What patches? What libraries?

    For people who write and run custom software, those are big issues.
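
    To make that concrete, here's a minimal sketch of the kind of environment fingerprint a piece of custom software quietly depends on (just the obvious fields; a real list would run much longer):

        import platform
        import sys

        # The configuration my custom software silently assumes:
        print("OS:     ", platform.system(), platform.release())
        print("Arch:   ", platform.machine())
        print("Python: ", sys.version.split()[0])

    Run that on two "commodity" boxes from the same rack and you can still get two different answers--and two different sets of bugs.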

    Now, don't get me wrong: I think there is a market for some utility computing out there. I stand by my point, though: it won't kill the server market.
