Thursday, January 31, 2008

Hey, Excel: Resolver One understands .NET. Are you learning?

I've posted extensively about Excel and Excel Services, and without a doubt my biggest disappointment with Excel 2007 was the lack of .NET integration. Excel forces a developer to jump back into the 20th century to do COM development. I lamented this; I wish that the Excel team had adopted .NET.

Well, earlier this week I found a spreadsheet that goes far beyond my expectations for .NET integration: Resolver One. (BTW, when Harry Pierson and Larry O'Brien both mention a product within a couple of days of each other, check it out).

Resolver One is a powerful spreadsheet tool based on Python. It translates your spreadsheet logic into Python, and it even lets you write your spreadsheet logic in Python.

But what really impressed me was the .NET integration. They've got a very interesting take on how to integrate -- much more interesting than Microsoft's VSTO.

Rather than simply letting you put .NET code behind your spreadsheet, Resolver One actually allows you to put .NET objects into your spreadsheet cells. That's right -- simply add a reference to your DLL, and then you can enter =MyClass() into a cell. It will instantiate your class and store it "in the cell." If your constructor takes arguments, no problem: enter something like =MyClass(B1, B2) to pass the contents of cells B1 and B2 to your constructor.

How can you take advantage of this object? Well, now other cells can use that object -- pulling out its properties, for example. So if your object is in cell C1 and you want to pull one of its properties into cell D1, you can enter =C1.MyProperty.
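Since Resolver One compiles your spreadsheet into Python, here's a rough sketch (in plain Python, with a made-up Rectangle class -- not Resolver One's actual generated code) of what the objects-in-cells idea amounts to:

```python
# Toy sketch of the idea: cells can hold live objects, and other
# cells can reference those objects' properties.

class Rectangle:
    """A hypothetical user class referenced from the spreadsheet."""
    def __init__(self, width, height):
        self.width = width
        self.height = height

    @property
    def area(self):
        return self.width * self.height

# Cells B1 and B2 hold plain values...
cells = {"B1": 3.0, "B2": 4.0}

# ...cell C1 holds =Rectangle(B1, B2): a real object, not a string...
cells["C1"] = Rectangle(cells["B1"], cells["B2"])

# ...and cell D1 holds =C1.area, pulling a property out of that object
cells["D1"] = cells["C1"].area

print(cells["D1"])  # 12.0
```

The point is that the cell grid becomes a namespace of live objects rather than a grid of strings and numbers.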

Other great things about this product: it's $99 for a commercial license, and free for non-commercial use.

This .NET integration made it a snap for me to adapt Resolver One to command an app on the grid. I had my standard "MonteCarloPi" grid application running after about 5 minutes of coding (I'll post that source soon). And when I say "5 minutes," I mean 5 minutes. It was ridiculously easy, and that's a very good thing. With 5 minutes of coding, I had my simple, single-threaded .NET object running in parallel on all of the multicore machines in my office, and the results were being displayed in the spreadsheet.

In a way, Resolver One's use of .NET reminded me a lot of our own use of .NET. Whereas previous spreadsheets (and grid computing software) required you to take an enormous step backwards in programming technology in order to write programs, Resolver One (and Digipede) actually leverage the programming models of .NET to make the developer more at home and more productive.

All in all, it's a very impressive product. Microsoft, are you watching?

Update 2008-01-31 12:02 Harry Pierson just listed Resolver One as one of the highlights of LANG.net, and points out that the talks should be online soon.

Update 2008-01-31 12:32 Half an hour later, Larry O'Brien calls Resolver One the most impressive application from LANG.net.


Thursday, January 24, 2008

Off topic: Where's my HD Movie Rental?

This is way off topic for a grid computing blog -- but I have to get it off my chest.

I will never, ever consider buying a set-top box dedicated to online movie rentals (see Vudu, AppleTV, Netflix).

But if Microsoft or Sony announces HD rentals through an XBox 360 or the PS3, I'm buying one immediately.

Why doesn't this exist?

I want a multi-purpose box. Plays my DVDs. Plays games. HD Movie rentals online.

The XBox and PS3 already have HD out and internet connections. Can they just write this software, make some deals with studios, and be done with it? Bill, this is your chance to beat Steve to the punch for once.

Wednesday, January 23, 2008

Your Grid Ate My Battery!

I've read a lot lately about the prospect of doing cloud computing on mobile devices.

I first read about the idea at ThinkInGrid (in Spanish, no less--they've since switched to English and it's much easier for me to digest!).

Lately, with the attention on cloud computing and with Google announcing Android for mobile devices, I've seen more references. Nikita over at GridGain blogged about it and subsequently got a bunch of attention: Bob Lozano picked up on the thread, and Olga Kharif at Business Week even did a column on it.

However, with the exception of Alex Miller's post on the subject, everyone seems to skip over a very salient point: battery life is perhaps the most important feature in a mobile device.

CPUs use tons of power. Mobile OSs work very hard to preserve battery life as much as possible. My phone has games built into it, and if I make the mistake of not exiting one of those games after playing, my battery will be gone within an hour.

While I love the idea of cloud computing, I'd never trade a single cycle of my phone's CPU in exchange for shorter battery life. I don't want a thousand free minutes--I want to be able to leave Bluetooth on all the time without draining my battery!

And this doesn't even get into the question of bandwidth.

I do think that there is life beyond servers for grid computing: we have customers with desktops and even laptops contributing cycles. But if it doesn't have an AC adaptor plugged into it, it's not going to do very much good on a grid...

Photo credit: gracey

Friday, January 18, 2008

Herb Sutter: Lawbreaker!


Herb Sutter just posted a link to his latest article in Dr. Dobb's Journal, and in it he advocates breaking the law.

Don't worry -- he's not about to get any jail time. He's offering a great take on how to break Amdahl's Law. It's very practical advice for getting the most out of your multiprocessor and multicore machines.

One of the strategies he suggests comes from Gustafson's Law: to increase the speedup of your parallelized application, do more parallel work.

Amdahl's law says that the total time taken by your application is computed thusly:

totaltime = s + p/n
(where s is the serial component of your work, p is the parallel component, and n is the number of cores), and therefore
speedup = (s + p)/(s + p/n)
John Gustafson pointed this out: What happens when you vastly increase p and n? You get a much better speedup value.
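To put numbers on that, here's a quick Python sketch of the speedup formula above (the serial fraction s = 1 and parallel work p = 9 are just illustrative values I picked):

```python
def speedup(s, p, n):
    """Amdahl's-law speedup: serial time s, parallel work p, n cores."""
    return (s + p) / (s + p / n)

# Fixed problem size: with s = 1 and p = 9, speedup is capped at 10x
# no matter how many cores you throw at it
print(round(speedup(1, 9, 4), 2))    # 3.08
print(round(speedup(1, 9, 400), 2))  # 9.78

# Gustafson's point: grow the parallel work along with the core count,
# and the cap effectively disappears
print(round(speedup(1, 900, 400), 2))  # 277.23
```

With a fixed workload, 400 cores barely beat 4; scale the parallel work up 100x and those same 400 cores deliver a massive speedup.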

It's a very practical look at what we see in our business every day: having access to much more parallel computing power gives you the ability to take on much, much larger jobs. You do analysis that you would never dream of doing on a single core on a single machine.

Of course, Herb is talking about the difference between single core machines and multicore machines -- but the exact same concepts apply to multiple machines. If going from 1 to 4 cores changes the very nature of the work you can do, imagine what going from 4 to 400 does!

And, as Herb points out, this is very, very common.

As I like to say, "Any algorithm worth doing once is worth doing many times." If you've got earth's best stock-price algorithm, would you run it on only one stock? If you have a fantastic rendering tool, would you use it to render only one frame? Of course not. And dividing up your data doesn't make something embarrassingly parallel--it makes it delightfully parallel!


Tuesday, January 15, 2008

Sun Abandons Computers?

Strange news out of Santa Clara...

Two of my favorite reads, Nicholas Carr's Rough Type and John E. West's insideHPC, both pointed me to the same blog post by Sun's Data Center Architect, Brian Cinque. When those guys link to the same post, you know it's worth reading.

Cinque gives the broad strokes of Sun's plans for data center consolidation, which include reducing total square footage and electricity consumption by 50% over the next 5 years. Admirable enough.

But Cinque continues...within two years of that, Sun plans to own ZERO data centers.

While he neglects to give any other details, it seems clear that this can only mean one thing: Sun is going to spend the next five years consolidating into a small number of data centers...and then they're going to sell them.

Clearly, after a 5-year effort of consolidation and rearchitecture, they're not going to spend the next 2 years scrapping all that work. Rather, they have decided that they'll let someone else take care of running them. Why? Well, ostensibly, it's because they think someone else can do it better/faster/cheaper than they can.

It's a bit surprising to me. Sun has spent the last year and a half convincing people to run their grid applications on Sun Grid--now they're telling the market that someone else is more qualified to run their own data centers. If I were a Sun customer, I'd be running to EC2 immediately.

On the other hand, maybe it's not so surprising at all, given that Sun now seems to be a database company...


Tuesday, January 08, 2008

Partnering with Microsoft: The Ugly


My colleague John Powers has frequently posted about the good, the bad, and the ugly of partnering with Microsoft--and today's post would definitely qualify as "the ugly."

He's going through the annual process of renewing our Gold Certified status, and, as usual, he's suffering through hours of terrible user experience on their website.

I'll give a short summary here, but you should go read John's post for the full, gory detail.

Unfortunately, part of the process involves getting customer references, and that means having our best customers try to use the Microsoft partner website. We are now subjecting our best, most valuable customers to the frustrations of a slow, buggy site. Our customers are already doing us a favor by filling out these forms--now Microsoft is making them take (a lot of) extra time, suffering through numerous timeouts, crashes, etc.

Put another way: Microsoft is asking us to piss off our most important customers for the sake of our partner status.

We see a lot of value in the partner program and in Gold Certified status; we're willing to put up with a lot of hassle. But asking our customers to deal with that hassle is too much.


Thursday, January 03, 2008

MSFT Supercomputing video online

Right before my weeks-long blogging blackout at the end of last year, I wrote that I was headed to Supercomputing in Reno. I never wrote an SC07 wrapup (although John Powers wrote one here), and it seems a bit late to do one now.

However, Patrick O'Rourke of Microsoft just told me about some videos they produced at SC07 that have gone online recently, including one that features an interview with me.

Check out the HPC++ page on the Microsoft site, and look for the Windows HPC in Financial Services link. It's a 9-minute video that features an interview with Jeff Wierer of Microsoft and a few different partners.

Patrick saved the best for last, by the way--my mug appears about 7 minutes in. If you watch the whole thing you'll notice, of course, that we're the only partner who can stake the claim to being all .NET.

(Note: ironically, I can't get that video to play in IE, but it works fine in Firefox. Don't tell my Microsoft friends I've got Firefox on my machine).


Wednesday, January 02, 2008

Distributed F# Sample

The week between Christmas and New Year's Day is typically a slow one; I took advantage of some quiet time in the office to play with F#. What with F# becoming a "first class" .NET language and shipping with VS 2008, I wanted to see what it would take to adapt one of our standard samples (using a Monte Carlo calculation to compute Pi in parallel across different cores and machines on the grid).

I was a bit rusty in functional programming (I took Lisp in about 1990), and the eventing took a bit of research to understand, but I got something working (see code below). I'm not sure if I've made any grievous errors -- if you see any, please let me know!

I know Don Syme recently posted about using Parallel Extensions in F#; next I'll have to take a look to see if I can extend that from using many cores to using many cores on many boxes.

Anyway, here's the F# code I ginned up:
// F# Digipede Sample

#light
open System

let mutable AccumulatedPi = 0.0

let numTasks = 100
let numIterationsPerTask = 1000000

// This function determines if a point lies within the Unit Circle
let InUnitCircle x y = sqrt(x * x + y * y) <= 1.0
// This is the class that will be distributed for remote computation; the DoWork
// method is overridden, and will be invoked on the objects when
// they are on the remote nodes
type MyWorker =
  class
    inherit Digipede.Framework.Api.Worker
    val mutable _Pi : double
    val _NumIterations : int
    new(a) = { _NumIterations = a; _Pi = 0.0 }

    override x.DoWork() =
      let r = new Random(x.Task.TaskId)
      let mutable XCoord = 0.0
      let mutable YCoord = 0.0
      let mutable NumberInUnitCircle = 0

      for i = 1 to x._NumIterations do
        XCoord <- r.NextDouble()
        YCoord <- r.NextDouble()
        NumberInUnitCircle <- NumberInUnitCircle + if InUnitCircle XCoord YCoord then 1 else 0

      x._Pi <- 4.0 * double NumberInUnitCircle / (double x._NumIterations)
    end


// This function will be called (in the "master application") for each task that
// completes (see reference below)
let OnTaskCompleted (args) =
  let myArgs = (args :> Digipede.Framework.Api.TaskStatusEventArgs )
  let returnedWorker = (myArgs.Worker :?> MyWorker)
  printf " Task %d calculated %f \n" myArgs.TaskId returnedWorker._Pi
  AccumulatedPi <- AccumulatedPi + returnedWorker._Pi
  printf " Current total is %f\n" AccumulatedPi


// Ok, this is the code that is the "master" application. It will define and submit the job

// First, we instantiate a Digipede Client and tell it where the Digipede Server is
let Client = new Digipede.Framework.Api.DigipedeClient()
Client.SetUrlFromHost("myservername")

// Now, tell the system what class we're distributing. It will automatically
// determine which binaries need to be distributed to use this class
let JT = Digipede.Framework.JobTemplate.NewWorkerJobTemplate(typeof<MyWorker>)
// this line tells the Digipede Agent to run these tasks "per core" on multi-core machines
JT.Control.Concurrency <- Digipede.Framework.ApplicationConcurrency.MultiplePerCore

// Create a job, and add tasks to it. Each task gets a new MyWorker object
let aJob = new Digipede.Framework.Job()
aJob.Name <- "F# Job"
for i = 1 to numTasks do
  let myTask = new Digipede.Framework.Task()
  let mw = new MyWorker(numIterationsPerTask)
  myTask.Worker <- (mw :> Digipede.Framework.Api.Worker)
  aJob.Tasks.Add(myTask)


// Next, I set up some events.
(* This is like an anonymous delegate. It will get executed each time we get notified that a task has completed. *)

do aJob.TaskCompleted.Add
(fun args -> ( printf "Task %d executed on %s \n" args.TaskId args.TaskStatusSummary.ComputeResourceName ) )

do aJob.TaskFailed.Add (fun args -> ( printf "Task %d failed \n" args.TaskId) )

// Another way to handle events is to give it a function to call on task completion
// I'm doing both, which is redundant
do aJob.TaskCompleted.Add(fun args -> OnTaskCompleted args)


let sr = Client.SubmitJob(JT, aJob)

printf "Submitted job %s\n" (sr.JobId.ToString())
printf "Waiting for results\n"

let mybool = Client.WaitForJob(sr.JobId)
printf "Job finished\n"

AccumulatedPi <- AccumulatedPi / (double numTasks)
printf "Calculated pi is %f\n" AccumulatedPi
Threading.Thread.Sleep(5000)


As I said before: this is my very first dabble into F#, and I may be making mistakes. But it runs, and I've figured out a couple of ways to handle events.
