Herb Sutter just posted a link to his latest article in Dr. Dobb's Journal, and in it he advocates breaking the law.
Don't worry -- he's not about to get any jail time. He's offering a great take on how to break Amdahl's Law. It's very practical advice for getting the most out of your multiprocessor and multicore machines.
One of the strategies he describes comes from Gustafson's Law: to increase the speedup of your parallelized application, do more parallel work.
Amdahl's Law says that the total time taken by your application is computed thusly:

    total time = s + p/n

(where s is the serial component of your work, p is the parallel component, and n is the number of cores), and therefore

    speedup = (s + p)/(s + p/n)

John Gustafson pointed this out: what happens when you vastly increase p and n? You get a much better speedup value.
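A quick sketch makes the contrast concrete. The function below is just the speedup formula above; the example workload (1 unit of serial time, 9 units of parallel time) is an assumption chosen for illustration:

```python
def speedup(s, p, n):
    """Speedup of a job with serial time s and parallel time p on n cores."""
    return (s + p) / (s + p / n)

# Amdahl's view: the job is fixed, so the serial part dominates eventually.
print(speedup(1, 9, 4))    # ~3.08 on 4 cores
print(speedup(1, 9, 400))  # ~9.78 -- capped near 10 no matter how many cores

# Gustafson's view: grow the parallel work along with the core count.
for n in (4, 400):
    print(speedup(1, 9 * n, n))  # 3.7, then 360.1 -- speedup keeps scaling
```

With the problem size fixed, 400 cores buy you almost nothing over 10x; scale the parallel work with the machine and the speedup scales too.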
It's a very practical look at what we see in our business every day: having access to much more parallel computing power lets you take on much, much larger jobs. You can do analysis you would never dream of doing on a single core of a single machine.
Of course, Herb is talking about the difference between single core machines and multicore machines -- but the exact same concepts apply to multiple machines. If going from 1 to 4 cores changes the very nature of the work you can do, imagine what going from 4 to 400 does!
And, as Herb points out, this is very, very common.
As I like to say, "Any algorithm worth doing once is worth doing many times." If you've got earth's best stock-price algorithm, would you run it on only one stock? If you have a fantastic rendering tool, would you use it to render only one frame? Of course not. And dividing up your data doesn't make something embarrassingly parallel -- it makes it delightfully parallel!
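The "do it many times" pattern is often one library call away. Here's a minimal sketch using Python's standard process pool; `score` and the ticker list are hypothetical stand-ins for your real per-item algorithm and data:

```python
from concurrent.futures import ProcessPoolExecutor

def score(ticker):
    # Stand-in for a real per-stock algorithm: any pure function of one item.
    return (ticker, float(len(ticker)))

if __name__ == "__main__":
    tickers = ["AAPL", "MSFT", "GOOG", "AMZN"]
    # Same code, many inputs, many cores: each ticker is scored independently.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(score, tickers))
    print(results)
```

Nothing about `score` changes when you go from one input to a million; the data split does all the work, which is exactly what makes the problem delightfully parallel.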