Ian Foster has a terrific post up about a program happening at the University of Notre Dame -- they've put an HPC cluster in a greenhouse, where the heat it generates is actually welcome. They're saving money on heating in the greenhouse, and on cooling in the datacenter.
He then describes an idea from Paul Brenner from ND's Center for Research Computing:
Paul then described a fascinating idea: placing low-cost (but high-heat) "grid heating appliances" (CPU+memory+network) in campus offices... By scheduling jobs only to cold rooms, a grid scheduler can do double duty as a source of both low-cost computing and free heating (or is it heating and free computing?).

I love that.
My question is: who's going to write the first thermostat-to-grid-scheduler interface module? It would be absolutely fantastic to see a scheduler that dynamically allocates jobs based on room temperatures.
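The core of such a module would be a simple placement policy: of the rooms still below their thermostat setpoints, send the next job to the one that most needs heat. Here's a minimal sketch in Python; all names (`Room`, `pick_room`, the field names) are hypothetical, and a real implementation would hook into an actual scheduler's plugin API rather than stand alone.

```python
# Hypothetical sketch of a temperature-aware job-placement policy.
# A real version would live inside a grid scheduler and read live
# thermostat data; here rooms are just plain records.

from dataclasses import dataclass
from typing import Optional, List

@dataclass
class Room:
    name: str
    temp_c: float    # current room temperature (Celsius)
    target_c: float  # thermostat setpoint

def pick_room(rooms: List[Room]) -> Optional[Room]:
    """Return the room furthest below its setpoint, i.e. the one that
    most needs heating. Returns None if no room wants heat, in which
    case jobs would fall back to the regular datacenter."""
    cold = [r for r in rooms if r.temp_c < r.target_c]
    if not cold:
        return None
    return min(cold, key=lambda r: r.temp_c - r.target_c)

rooms = [
    Room("office-101", temp_c=17.0, target_c=21.0),
    Room("office-102", temp_c=20.5, target_c=21.0),
    Room("office-103", temp_c=22.0, target_c=21.0),
]
print(pick_room(rooms).name)  # → office-101
```

In practice you'd also want hysteresis (stop scheduling a degree or two below the setpoint so rooms don't overshoot) and job migration when a room warms up, but the basic thermostat-as-scheduler-input idea is just this small.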
Of course, it's a bit of a pipe dream. Clusters that generate lots of heat also tend to generate lots of noise, and you can't have that just anywhere.
Still, creative ideas like this can lead to practical innovations -- you can imagine a university eliminating a large datacenter in favor of "compute closet/heat rooms" throughout the campus. Or a large datacenter where the generated heat is used to heat water -- as the datacenter in Uitikon is doing.