Monday, September 22, 2008

HPC Server 2008 RTM!

Congrats to Kyril Faenov, Ryan Waite, and the rest of the HPC team up in Redmond.

Today at the HPC on Wall Street show in New York, Microsoft announced that the second version of their high performance computing tool has been released to manufacturing.

I got to sit down with Kyril (who runs the HPC team) back at Supercomputing. He talked about some of the new features coming in the latest version, and broke them into four categories:

  1. Scalability: They really want to address the top end of the market, which meant adding features to ensure that Windows clusters can scale as large as the big Linux clusters. That included addressing issues all over the place, from their MPI stack (by the way, they're seeing a 30% improvement in LINPACK) to their management tools.
  2. Ease of use: More is available out of the box, including better management tools, improved diagnostics, and reporting capabilities.
  3. Integration with other applications: The HPC team worked overtime to improve integration with all sorts of stuff, from Microsoft's own tools (like System Center and Active Directory) to shared storage from other vendors (like Panasas, Ibrix, and IBM) and standards groups (HPC Basic Profile, GGF, etc).
  4. Applications: Kyril mentioned that more and more "traditional" ISVs are now running on Windows. By "traditional," of course, he meant "traditionally running on Linux or Unix clusters."
What's interesting to me is what Kyril didn't emphasize (and, indeed, what's barely mentioned in today's press release): their new .NET tools for load balancing SOA applications using the WCF Router, and their integration with Windows Clustering to provide head-node failover for high availability. 

Maybe they're not important enough to emphasize in the PR, but they address needs that many users have been asking for: better development tools (including the team's first real foray into .NET development tools) and a good strategy for high-availability applications.

Following on last week's announcement of a partnership with Cray, it's clear that Microsoft is working to expand the footprint that they've created with the first version of this product, which was called Compute Cluster Server.

I'm not sure if they've announced anything official about prices, but Kyril said that CCS's price point (as much as 80% off a full Server 2003 license) had been "very well received." So I don't expect a price change.

By the way, I asked Kyril about the name change -- did the words "high performance" mean that they were ready to take on the upper echelons of the Top 500 list?

Kyril smiled. "We've earned our stripes."

With an entry at number 23 in the latest list, I guess that's starting to be true.


Thursday, August 28, 2008

Is my blog named wrong?

Somehow I found a link to Wordle, a very cool tool that creates word cloud graphics based on text or URLs. Naturally, I ran http://westcoastgrid.blogspot.com through it to see what my grid cloud would look like...



...and promptly found out that my "grid cloud" is actually more of a "cloud cloud."

That last sentence points out several things I've noticed lately:

  • Those of us who have been writing about Grid Computing are increasingly writing about cloud computing, and of course that's no surprise. While clouds are opening up distributed computing to a much wider audience than ever, using a cloud well means managing many machines effectively -- or writing software that runs effectively on many machines at once. In either case, the grid computing industry has been thinking about (and solving!) these problems for years. If you want a firsthand look at the expertise these "grid" folks have in "cloud" efforts, hop onto the Google Cloud Computing group and check out Rich Wellner's contributions. As I said, the grid folks have been thinking about these problems for years (albeit in a slightly different implementation).
  • The term "Cloud" already has far too many meanings in the marketplace (another parallel to grid, come to think of it)
  • If I'm writing more about cloud computing than grid computing, is it time to rename my blog?
I've got a few posts I've been thinking about in terms of the intersection of cloud and grid (and where cloud is going) -- I'll try to get back on the blogging bandwagon and pump some of those out.

In the meantime, I'm not changing the name of the blog. It may be an antiquated name, but at least people know where to find me.


Friday, August 01, 2008

I'm all for scalability

I love being quoted by that coffee-roasting, free-diving, Hawai'i-living .NET expert Larry O'Brien, so I was quite pleased to read my name in his latest SD Times column. He quoted a tweet (yes, I love Twitter) in which I quoted a fellow CloudCamp attendee saying "Designing your app to scale is guaranteed failure—it will take too long to write."

Unfortunately (and due primarily to the 140 character Twitter limit), Larry didn't realize that I didn't agree with the guy I was quoting -- I just found it amusing.

I've actually blogged quite a few times about designing scalability into an app. In a 2005 post (Of course scalability matters!), I said this:

Most importantly, [designing scalable software] means acknowledging the possibility, however remote, that you may actually succeed and build something that people eventually use. Many people.

This point applies equally to those designing web sites and those planning on deploying SaaS. If you are going to make it available on the web, and you're not designing for scalability, then you just aren't planning for success: you're planning for failure.
I followed that up with a post a month later, and I was quite pleased to learn that Werner Vogels's viewpoint coincided with my own.

So I wholeheartedly agree with Larry's sentiment:
However, I’m uncomfortable with the idea of dealing with scaling only when it becomes a problem. While laissez-faire attitudes have come to dominate code and design approaches, I still resist the idea of abandoning upfront architectural work.
In fact, when I overheard the comment at CloudCamp, my first reaction was this: the only reason building scalability into your product would hurt you is if your idea is so unoriginal that someone else is 5 minutes behind you.

So: thanks for the mention, Larry. I'm on your side.

(And I really am going to ride over to Sweet Maria's next week, so send me an e-mail)


Tuesday, July 29, 2008

Sarah Perez Looks at Microsoft's Cloud

Sarah Perez at ReadWriteWeb has a pretty darn good post up about Microsoft's cloud efforts (at least the publicly announced cloud efforts).

While it's not terribly in-depth, it does highlight the breadth of Microsoft's efforts in cloud computing: the Connected OS, the Software Stack, the Developer Tools, and the Datacenter effort. I don't think I'm going out on too much of a limb to say that Microsoft is taking a broader approach to the cloud than any other vendor out there.

Their vision of a "connected OS" is deeper than anything I see from any vendor. Their software stack spans consumer apps and enterprise apps.

Their development tools (many of which have yet to be announced) are broad and varied, and will continue to become richer. Always remember: Microsoft is a tools vendor first and foremost.

And, of course, they're building data centers at a pace that only they and Google can.

As with many efforts from Microsoft, I expect this to be ragged at times. It's a big company, and they are making this a huge effort. There may be conflicting offerings. There will definitely be failures.

But as Sarah points out, Ray Ozzie's Microsoft 2.0 is focused on this. So, mark my words: there will be some major successes.

Monday, July 28, 2008

Robert continues to nail Cloud Taxonomy

Perhaps partially inspired by Matias Wolsky's SaaS Taxonomy Map, my friend and colleague Robert W. Anderson has written a great post called The Cloud Services Stack -- Infrastructure. In his breakdown of the varying forms of services being offered in the cloud, he proves himself to be the Linnaeus of the XaaS products on the market today.

With so many "What is Cloud Computing" posts and articles on the net that have only served to blur rather than sharpen distinctions, I think his post should be required reading. Building on his earlier post (Cloud Services Continuum), he's accurately analyzing the landscape, providing a context that allows us to group (and therefore, ultimately, compare) the differing cloud offerings.

It's not just a useful exercise, it's a necessary exercise. So many posts (and even articles in mainstream publications) say things like "You've got lots of choices, including Amazon EC2 and Google's AppEngine." Those two offerings are so very different that they can hardly be considered competitors--yet because they're lumped into the very broad category of "Cloud," people keep mentioning them in the same breath.

Rob's diagram breaks out three main parts to the cloud services stack: SaaS (or, as he sometimes calls it, Applications as a Service), Platform as a Service, and Infrastructure as a Service. It's just as useless to try to compare an IaaS offering to a PaaS offering (e.g., EC2 and AppEngine) as it is to compare GMail and GoGrid--they simply occupy different niches in the ecology.

But, interestingly, Rob's Venn diagram makes it clear that unlike the Linnaean taxonomy of the biological kingdom, the groups that make up cloud offerings are overlapping rather than hierarchical. For instance, several offerings that started as SaaS (NetSuite, FaceBook, and SalesForce.com) have added PaaS functionality to their suites.

Similarly, Twitter and Identi.ca have SaaS offerings that are being pushed all the way down to the Infrastructure as a Service level, being used to provide a messaging layer in the cloud. Biztalk Lab's Workflow Services sits astride the PaaS/IaaS boundary. That's not to say that all offerings can be compared, but rather that an offering can have multiple facets.

The other thing I find quite interesting is the fragmented nature of the IaaS market -- Rob separates it generally into three submarkets: Storage, Virtual Hardware, and "Other." (The same could be said, I suppose, of the SaaS market, but that's a much more mature, better understood, and less interesting topic.) I'll have more to write about this particular market later, because I think there is lots of room for analysis here.

Public domain image from the Wikimedia Commons.


Tuesday, June 17, 2008

News flash: .NET runs on XP and Vista

I was surprised to read the post that Matt Asay (of CNET) wrote (that hit TechMeme in a big way), touting Evans Data's report that claimed that only 8% of developers are targeting Windows Vista, while 49% are targeting XP.

It showed an astounding ignorance of how software is developed for the two operating systems -- I haven't read the report, but the post certainly calls into question whether Evans Data and Asay have any idea what developing software for these platforms is like.

In almost every case, it's not an either/or situation: .NET runs perfectly on both OSs, and almost all software written in .NET runs on both. If Evans and Asay don't know that, they're not qualified to be writing reports like these (or writing about reports like these).

If the survey forced developers to choose between XP and Vista, it was either designed by someone who doesn't understand the platform, or designed to push developers into a choice that would produce a controversial headline.

And for Asay to blindly quote the survey shows a predilection for Microsoft-bashing (and ignorance as well). The comments on his post, however, make it clear that many of his readers do understand the software world, and understand that the dichotomy is a false one.

The study didn't mention Mac OS X at all, but Asay still finds reason to put in a good word for it. I wonder if it ever occurred to him that most software that runs in Leopard would still run in Tiger. Or possibly even Panther. And that developers can target the platform without targeting a particular version.

Anand Iyer of Microsoft has a good post up here that discusses this in more detail.


Friday, June 13, 2008

Calling Early Adopters

Our customers range from the very pro-Microsoft to the quite agnostic--we've tried to walk a line that allowed the .NET-embracers to get the most out of technology, while letting non-programmers with a pile of desktops tap into their computing power quickly and easily.

It's sometimes a difficult line to walk.

Anyway, this post goes out to the former group: the .NET-lovers, the early adopters.

Rob and I have posted (and he's done some twittering) about some of the technologies we've been investigating over the last couple of weeks: Workflow Foundation, Microsoft Distributed Cache (Velocity), Mono. We've done some very cool, very interesting things here in the lab -- and we are interested to hear your perspective on them.

We'd like to talk to you if you're using any of these, or if you're interested in using them. How are you doing it? How would you like it to interoperate with your grid?

Drop me a line at dan at you-know-who-ipede dot net and we'll set up a LiveMeeting.


Tuesday, June 10, 2008

In the Digipede Lab: Velocity and Mono


I haven't been keeping up with the other Digipede bloggers...but in case you haven't seen it on John's and Rob's blogs, we've got some interesting stuff happening in the lab here at Digipede world headquarters.

Rob talked about the work he's done playing with Mono. He's pretty understated about how cool that work was. Of course, as Rob says, it can launch Linux-specific binaries. But also:

it is able to run our .NET development patterns.
That, to me, is the real potential here (and it's also what makes Mono so cool) -- taking the awesome developer experience of .NET and making it available on multiple platforms. We've said it over and over again: developer experience matters, and reducing the time it takes a developer to get his software running on the grid is extremely important. This could let people leverage our development tools even more.

Rob's and John's caveats all stand: this is not a product, it's not slated for release, etc.

Running elsewhere in the lab: Velocity!

I'm very excited about Microsoft's foray into the distributed object cache field -- mostly because I talk with our customers, and our customers have been begging for this.

I started doing performance testing here in our lab, and I can tell you this: it can dramatically improve performance for moving data on the grid.

Hey, Digipede customers -- if you want to know more about these proofs-of-concept, e-mail me directly: dan at youknowwhere dot net.

Photo credit: blary54


Tuesday, June 03, 2008

Increasing Velocity

At TechEd yesterday, BillG and friends made a very interesting announcement: Microsoft is releasing a distributed, in-memory object cache (code named Velocity). For details, check out the Velocity Blog.

MDavey is already on record asking the right questions: how will it interact with the grid (thanks for the mention, Matt)? Will there be push? How does it compare with the commercial object cache solutions already on the market?

Can't wait to get my hands on that CTP!


Thursday, May 29, 2008

Digipede On Board!


I know I can't use the word "inside" (apparently, Intel owns it now). But we've now got tiny little decals that let you tell the world your computer is on the grid!

If you're a Digipede customer and you want some of these nifty stickers to throw around your datacenter, give me a shout (dan {at} digipede {dot} net).


Tuesday, May 13, 2008

Something for everybody

I know there are two distinct flavors to my readers: those who care about distributed / grid / application-virtualization / parallel computing in general, and those who read specifically to hear about the latest Digipede news. This post has a little something for everyone.

Bill McColl has a great post up over at Computing at Scale entitled "Domain-Specific Parallel Programming." His most important point: "(thirty years of research and funding) has gone into parallel supercomputing, an area that is in many ways the opposite of industrial and commercial computing." He then contrasts supercomputing programmers ("Ph.D. level scientists with deep experience of parallel software development") with developers in the commercial world ("have only a limited range of programming skills, and usually no experience whatsoever of parallelism").

He's absolutely correct--and the latter set needs access to powerful distributed computing just as much as the former.

On the Digipede front, I've just put up a couple of posts on the Digipede Community site specifically for developers using the Digipede Framework SDK: here's a list of the new features in version 2.1 of the SDK, and here's a little sample that uses some of the new API functionality.

Finally, for my social-networking-addicted-readers (both of you): I got totally annoyed at the criticism of Scoble's claim that Twitter beat the USGS with news of the earthquake. The lesson isn't "You should get earthquake news from Twitter because the USGS takes two minutes." The lesson is "There is an amazing new, very widespread information gathering and distributing network--wider and faster than anything that has ever existed." That is news, and it was worth trumpeting.


Wednesday, May 07, 2008

Release Me!


Whew!

As I'm sure Deatle is about to announce, Digipede Network version 2.1 just gained "general availability" status.

The drop in the frequency of my posts here is one indicator that a boatload of work has gone into this release -- by all rights, it should probably be called v3.0. Here's a quick rundown of my favorite features:

  • Certified for Windows Server 2008: Announced previously, but cool nonetheless.
  • Risk-free sharing: "Pool Rank" permits risk-free sharing of resources: you can add your departmental servers to the enterprise grid and ensure that they always work on your jobs first. That means that by joining the grid, you can only improve your application performance.
  • Job concurrency: The improved Digipede Agent software can manage different applications simultaneously, maximizing utilization of compute nodes on the grid. This allows your multi-core machines to be used most efficiently.
  • Management APIs: New management APIs give developers programmatic ability to create, modify, and delete resource pools.
  • Improved task concurrency: More detailed specification of task concurrency lets you specify the number of cores per task (for multithreaded applications) or the number of tasks per core.
  • Improved server efficiency: The Digipede Server has been vastly improved in its use of storage and memory. It will handle more applications, larger applications, better than ever before.
And, of course, a host of other small changes.

If you're already a customer, make sure you let us know when you're ready to upgrade. And if you've been waiting for a chance to evaluate, now is the perfect time.

I'll be hosting some webcasts over the next couple of weeks to go over new features. If you're interested in signing up for one of those, head over here...

Photo credit: M42

Friday, April 25, 2008

Use that heat!

Ian Foster has a terrific post up about a program happening at the University of Notre Dame -- they've put an HPC cluster in a greenhouse, where the heat it generates is actually welcome. They're saving money on heating in the greenhouse, and on cooling in the datacenter.

It's genius!

He then describes an idea from Paul Brenner from ND's Center for Research Computing:

Paul then described a fascinating idea: placing low-cost (but high-heat) "grid heating appliances" (CPU+memory+network) in campus offices... By scheduling jobs only to cold rooms, a grid scheduler can do double duty as a source of both low-cost computing and free heating (or is it heating and free computing?).
I love that.

My question is: who's going to write the first thermostat to grid-scheduler interface module? It would be absolutely fantastic to see a scheduler that is dynamically allocating jobs based on temperatures in rooms.
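
Just to make the idea concrete, here's a toy sketch in C#. Every type and property in it is made up -- there's no real thermostat or scheduler API here -- but it shows the dispatch rule I have in mind: send the next job to the coldest room that still wants heat.

using System.Collections.Generic;
using System.Linq;

// Entirely hypothetical types -- a toy "thermostat-aware" dispatch rule.
class Room
{
    public string Name;
    public double TemperatureF;   // current thermostat reading
    public double SetPointF;      // what the occupants want
    public bool HasIdleNode;      // is a "grid heating appliance" free in this room?
}

static class HeatAwareScheduler
{
    // Pick the coldest room that is below its set point and has a free node.
    public static Room PickRoomForNextJob(IEnumerable<Room> rooms)
    {
        return rooms
            .Where(r => r.HasIdleNode && r.TemperatureF < r.SetPointF)
            .OrderBy(r => r.TemperatureF)
            .FirstOrDefault();   // null means nobody needs heat -- queue the job
    }
}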

Of course, it's a bit of a pipe dream. Clusters that generate lots of heat also tend to generate lots of noise, and you can't have that just anywhere.

Still, creative ideas like this can lead to practical innovations -- you can imagine a university eliminating a large datacenter in favor of "compute closet/heat rooms" throughout the campus. Or a large datacenter where the generated heat is used to heat water -- as the datacenter in Uitikon is doing.


Thursday, April 24, 2008

MDavey in DDJ: PFX, PLINQ, and Digipede

Matt Davey of Lab49 must live in a world where the days are 30 hours long.

Read his blog and you'll soon find out that he's an expert in user interfaces (Lab49 has been working with Microsoft for a while on cutting-edge UI with Silverlight and WPF), but he's also delved quite deeply into complex event processing as well as distributed computing.

He also manages to write articles for Dr. Dobb's -- oh, and don't forget that he's a consultant, so you know he's working for clients as well.

I don't know where he finds the time.

But I'm glad he does. In his article in the current Dr. Dobb's, he discusses parallelism and concurrency, PLINQ and ParallelFX. He writes about his experience taking PLINQ and implementing it to run on a compute grid (using the Digipede Network). Check it out.

One thing he doesn't mention is that some people developing in .NET are solving their multicore problem using Digipede alone -- the API makes it dead simple to take single-threaded code and run it in parallel (on separate threads or in separate processes) on multicore and multi-processor machines.

As an aside: we are just about ready to release v2.1 of the software. It's been heads-down around here for quite a while as we get ready for this, which is by far our best release ever. Haven't had time to blog about it (or anything else, for that matter), but all should return to normal very soon.


Friday, March 07, 2008

Desktop software is hard, eh Google?

I was excited to hear about Google Calendar Sync. I got a Blackberry Curve last week, and I've been playing around with the best way to get both my personal (Google) and business (Outlook) calendars sync'd with it.

We don't use Blackberry Enterprise Server, so the Outlook syncing seemed to be a bit spotty. It got some events over the air (maybe the ones for which I was e-mailed invitations), but didn't get all of them. If I plugged it in and used Blackberry Desktop Manager it worked fine -- but I didn't want to have to plug it in.

But with the Google Sync download for my phone, it started grabbing my Google Calendar items with no problem.

Google Calendar Sync seems like it's an ideal solution for me: it can keep my Outlook calendar in sync with a Google Calendar, and my phone can grab those events directly from Google over the internet. Fantastic!

So I headed over to Google, downloaded the software, and ran the install. And this is what I saw: "Google Calendar Sync supports Microsoft Outlook 2003 and 2007 only."

Now, this is a brand new laptop that has only ever had Outlook 2007 on it. It's never been uninstalled, reinstalled, or anything hinky. Should be pretty vanilla.

What's worse is that there's nothing I can do. No setup pages to look at, no documentation to read. I guess I'm just SOL.

Has anyone else seen this? And more importantly: has anyone else solved this?


Tuesday, March 04, 2008

Where Was HPC?


Nathan Trueblood, John Powers, and I went to the Windows Server 2008 launch last week (of course, we had to show off our shiny new Certified for Server 2008 logo).

It was surprisingly well attended (and I wasn't the only one who was surprised; apparently the catering staff was as well. In the continuing battle of Microsoft vs. Ciruli on the lunch front, I lost -- no lunch for me. How hard is it to count attendees at an event that requires pre-registration??).

Anyway, there were thousands of people there. We had hundreds walking around with the cool-looking Digipede stickers, and one lucky sticker-wearer went home with an XBox.

With a triple-product launch, Microsoft had an enormous contingent there, both attending and demonstrating. In the Microsoft pavilion, they had 30 booths -- most of them centered around Server 2008. Many of those booths weren't for products that were launching: SharePoint Server was there, Microsoft Forefront, Exchange Server. Many of the booths were related to Server 2008: Hyper-V, File and Storage Solutions for Server 2008, Scalability with Server 2008.

But you know what had no mention at all? HPC Server 2008.

It was conspicuous in its absence.

Now, HPC Server 2008 won't be out for months...then again, neither will SQL Server 2008 and it was launched at this event.

So, what's the deal? While I think HPC Server 2008 will go far beyond what Windows Server 2003 CCE did (both in terms of capabilities and sales), missing an event like this shows that Microsoft still isn't thinking of the server market as a continuum. They're dividing server users into HPC (high performance computing) and what may as well be called LPC (low performance computing).

In reality, of course, there's no strict division. It's a continuum. And Microsoft should be doing everything it can to bridge the gap between HPC and "the rest of us." As Jim Gray used to refer to it: Indoor Computing. It runs the gamut.

I guess the HPC crew are huddled in Redmond, preparing for their release later this year...too bad they couldn't find the time to market to the thousands of Windows Server fans who gathered in LA last week.

Tuesday, February 26, 2008

Win an XBox 360 from Digipede at Server 2008 Launch

Are you going to the Windows Server 2008 / Visual Studio 2008 / SQL Server 2008 global launch in LA tomorrow?

If you'll be there, stop by the Digipede kiosk in the Partner Pavilion and pick up a nifty sticker...and it could win you an XBox 360!

The stickers are cool -- they feature Deatle, our lovable, binary mascot. And if we see you wearing one in the afternoon break, you could end up going home with an XBox 360.

See you there...

Wednesday, February 20, 2008

How Does Your Grid Help Your Multicore Problem?


One of the benefits of Digipede's object oriented programming model is the ease with which it lets you take advantage of the multiprocessor and multicore machines on your grid. The Digipede Agent knows how many cores are on each box, and it can execute work accordingly, taking advantage of the individual cores without forcing the developer to do multithreaded programming.
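
For the developers in the audience, here's roughly what that looks like in C#. It's only a sketch -- the class name, the work inside DoWork, and the server name are all made up -- but the API calls mirror the ones in the F# sample I posted back in January: subclass Worker, keep DoWork single-threaded, and let the concurrency setting tell the Agent to fill every core.

using System;
using Digipede.Framework;
using Digipede.Framework.Api;

// Hypothetical work unit: ordinary single-threaded .NET code.
class PriceOneScenario : Worker
{
    public double Result;

    public override void DoWork()
    {
        Random r = new Random(this.Task.TaskId);   // per-task seed
        Result = r.NextDouble();                   // stand-in for real work
    }
}

class Program
{
    static void Main()
    {
        DigipedeClient client = new DigipedeClient();
        client.SetUrlFromHost("myservername");     // hypothetical server name

        JobTemplate jt = JobTemplate.NewWorkerJobTemplate(typeof(PriceOneScenario));
        // Let the Agent run multiple tasks per box -- one (or more) per core.
        jt.Control.Concurrency = ApplicationConcurrency.MultiplePerCore;

        Job job = new Job();
        for (int i = 0; i < 100; i++)
        {
            Task t = new Task();
            t.Worker = new PriceOneScenario();
            job.Tasks.Add(t);
        }

        var result = client.SubmitJob(jt, job);
        client.WaitForJob(result.JobId);
    }
}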

We've been talking about this for a while, but John Powers made a short video that makes it crystal clear.

Check it out...


Thursday, February 14, 2008

Worst .NET Bug I've Ever Seen

Question: What exception(s) will this code produce? And why?

while (true) {
    using (Stream sw = File.Open(strFileName, FileMode.Create)) {
        using (BinaryWriter bw = new BinaryWriter(sw)) {
            BinaryFormatter bf = new BinaryFormatter();
            bf.Serialize(bw.BaseStream, this);
        }
    }
}


Answer: Well, it seems like it shouldn't produce any exceptions. It should run forever: create a file, write data to it, close the file. Same thing, over and over again.

The usings should ensure that the BinaryWriter and Stream are closed each time through the loop.

But that's not what happens. Run it enough times, and you'll get an exception: System.IO.IOException: Cannot create a file when that file already exists. How can that file be in use? You clearly closed it last time through the loop!

Even stranger: if you follow your using with code that is doing something else with the file (like, say, moving it), you'll occasionally see a System.UnauthorizedAccessException: Access to the path is denied exception. This is a file that you clearly are authorized to access--you just created it!

Note: adding explicit calls to sw.Close() and bw.Close() doesn't change the behavior--you still get exceptions eventually.

This seems like some unholy combination of a problem between .NET, Win32, and the OS, combined with an incorrect exception being thrown sometimes.

Unfortunately, this little nasty reared its head on a customer site. And, naturally, it wasn't tightly packaged like the code above. Occasionally, inexplicably, exceptions were being thrown. It's hard to reproduce in the wild, and it took us a few days to track down and boil down.

Wow. Any .NET experts care to weigh in on why this would throw exceptions?

Watch Robert's blog to see how we ended up fixing this...
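
In the meantime, if you've hit something similar: one common (if inelegant) workaround is to retry the create a few times with a short delay, on the theory that something else -- an indexer, a virus scanner -- is briefly hanging onto a handle to the just-closed file. That's not necessarily how we fixed it; it's just a sketch of the band-aid approach.

using System;
using System.IO;
using System.Threading;

static class SafeFile
{
    // Hypothetical helper -- retries the create a few times before giving up,
    // swallowing the spurious exceptions described above in the meantime.
    public static Stream CreateWithRetry(string fileName, int maxAttempts)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return File.Open(fileName, FileMode.Create);
            }
            catch (IOException)
            {
                if (attempt >= maxAttempts) throw;
            }
            catch (UnauthorizedAccessException)
            {
                if (attempt >= maxAttempts) throw;
            }
            Thread.Sleep(50 * attempt);   // brief, growing back-off
        }
    }
}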

Update 2/15/2008 9:12: After a conversation with Robert, I felt I should make it clear: our software didn't have a loop like the one written above; we came up with that when trying to reproduce the behavior. He'll have more details later...


Friday, February 08, 2008

Quick visit to the 212

John Powers and I will be at the Web Services on Wall Street conference on Monday, February 11th, at the Roosevelt Hotel in Manhattan.

This conference was created by the same folks who put on the High Performance on Wall Street conference -- and that was probably the best event I attended last year.

If you're in Midtown on Monday, stop on by...

Monday, February 04, 2008

Digipede Network Free for MS MVPs

I'm very excited about this announcement.

As of today, Microsoft MVPs can get a free license to the Digipede Network Professional Edition, with 10 agent processor licenses.

Digipede joins a list of over 100 companies that make licenses free for this vibrant community of technical specialists, and we're proud to do it.

Many people outside the .NET world don't realize what a community Microsoft has fostered, and MVPs are a perfect example. They aren't Microsoft employees, but they spend a good deal of time engaging in the community, essentially helping Microsoft to evangelize the platform. They help other users on message boards, they facilitate users' groups, they put on code camps.

I can't wait to see what some of these folks do with the Digipede Framework SDK!

If you're an MVP, head over here to claim your software...


Thursday, January 31, 2008

Hey, Excel: Resolver One understands .NET. Are you learning?

I've posted extensively about Excel and Excel Services, and without a doubt my biggest disappointment with Excel 2007 was the lack of .NET integration. Excel forces a developer to jump back into the 20th century to do COM development. I lamented this; I wish that the Excel team had adopted .NET.

Well, earlier this week I found a spreadsheet that goes far beyond my expectations for .NET integration: Resolver One. (BTW, when Harry Pierson and Larry O'Brien both mention a product within a couple of days of each other, check it out).

Resolver One is a powerful spreadsheet tool based on Python. It translates your spreadsheet logic into Python, and it even lets you write your spreadsheet logic in Python.

But what really impressed me was the .NET integration. They've got a very interesting take on how to integrate -- much more interesting than Microsoft's VSTO.

Rather than simply letting you put .NET code behind your spreadsheet, Resolver One actually allows you to put .NET objects into your spreadsheet cells. That's right -- simply add a reference to your DLL, then you can enter =MyClass() into a cell. It will instantiate your class and store it "in the cell." If your constructor takes arguments, no problem: enter something like =MyClass(B1, B2) to pass the contents of cells B1 and B2 to your constructor.

How can you take advantage of this object? Well, now other cells can use that object -- pulling properties out for example. So if your object is in cell C1 and you want to pull out one of its properties into cell D1, you can enter =C1.MyProperty.
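
Here's a guess at what a minimal class like that might look like in C#. There's nothing Resolver-specific in the code -- the names just match the hypothetical formulas above:

namespace GridDemo
{
    // A plain .NET class; Resolver One instantiates it via =MyClass(B1, B2).
    public class MyClass
    {
        private readonly double _a;
        private readonly double _b;

        public MyClass(double a, double b)
        {
            _a = a;
            _b = b;
        }

        // Readable from another cell as =C1.MyProperty
        public double MyProperty
        {
            get { return _a * _b; }
        }
    }
}

Compile that into a DLL, add the reference in Resolver One, and the two formulas above do the rest.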

Other great things about this product: it's $99 for a commercial license, and free for a non-commercial license.

This .NET integration made it a snap for me to adapt it to command an app on the grid. I had my standard "MonteCarloPi" grid application running after about 5 minutes of coding (I'll post that source soon). And when I say "5 minutes," I mean 5 minutes. It was ridiculously easy, and that's a very good thing. With 5 minutes of coding, I had my simple, single-threaded .NET object running in parallel on all of the multicore machines in my office, and the results were being displayed in the spreadsheet.

In a way, Resolver One's use of .NET reminded me a lot of our own use of .NET. Whereas previous spreadsheets (and grid computing software) required you to take an enormous step backwards in programming technology in order to write programs, Resolver One (and Digipede) actually leverage the programming models of .NET to make the developer more at home and more productive.

All in all, it's a very impressive product. Microsoft, are you watching?

Update 2008-01-31 12:02 Harry Pierson just listed Resolver One as one of the highlights of LANG.net, and points out that the talks should be online soon.

Update 2008-01-31 12:32 Half an hour later, Larry O'Brien calls Resolver One the most impressive application from LANG.net.


Thursday, January 24, 2008

Off topic: Where's my HD Movie Rental?

This is way off topic for a grid computing blog -- but I have to get it off my chest.

I will never, ever consider buying a set-top box dedicated to online movie rentals (see Vudu, AppleTV, Netflix).

But if Microsoft or Sony announces HD rentals through an XBox 360 or the PS3, I'm buying one immediately.

Why doesn't this exist?

I want a multi-purpose box. Plays my DVDs. Plays games. HD Movie rentals online.

The XBox and PS3 already have HD out and internet connections. Can they just write this software, make some deals with studios, and be done with it? Bill, this is your chance to beat Steve to the punch for once.

Wednesday, January 23, 2008

Your Grid Ate My Battery!

I've read a lot lately about the prospect of doing cloud computing on mobile devices.

I first read about the idea at ThinkInGrid (in Spanish, no less--they've since switched to English and it's much easier for me to digest!).

Lately, with the attention on cloud computing and with Google announcing Android for mobile devices, I've seen more references. Nikita over at GridGain blogged about it and subsequently got a bunch of attention: Bob Lozano picked up on the thread, and Olga Kharif at Business Week even did a column on it.

However, with the exception of Alex Miller's post on the subject, everyone seems to skip over a very salient point: battery life is perhaps the most important feature in a mobile device.

CPUs use tons of power. Mobile OSs work very hard to preserve battery life as much as possible. My phone has games built in to it, and if I make the mistake of not exiting one of those games after playing, my battery will be gone within an hour.

While I love the idea of cloud computing, I'd never trade a single cycle of my phone's CPU in exchange for shorter battery life. I don't want a thousand free minutes--I want to be able to leave Bluetooth on all the time without draining my battery!

And this doesn't even get into the question of bandwidth.

I do think that there is life beyond servers for grid computing: we have customers with desktops and even laptops contributing cycles. But if it doesn't have an AC adaptor plugged into it, it's not going to do very much good on a grid...

Photo credit: gracey

Friday, January 18, 2008

Herb Sutter: Lawbreaker!


Herb Sutter just posted a link to his latest article in Dr. Dobb's Journal, and in it he advocates breaking the law.

Don't worry -- he's not about to get any jail time. He's offering a great take on how to break Amdahl's Law. It's very practical advice for getting the most out of your multiprocessor and multicore machines.

One of the strategies he offers comes from Gustafson's Law: to increase the speedup of your parallelized application, do more parallel work.

Amdahl's law says that the total time taken by your application is computed thusly:

totaltime = s + p/n
(where s is the serial component of your work, p is the parallel component, and n is the number of cores), and therefore
speedup = (s + p)/(s + p/n)
John Gustafson pointed this out: What happens when you vastly increase p and n? You get a much better speedup value.

It's a very practical look at what we see in our business every day: having access to much more parallel computing power gives you the ability to take on much, much larger jobs. You do analysis that you would never dream of doing on a single core on a single machine.

Of course, Herb is talking about the difference between single core machines and multicore machines -- but the exact same concepts apply to multiple machines. If going from 1 to 4 cores changes the very nature of the work you can do, imagine what going from 4 to 400 does!
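
To put rough numbers on that (mine, not Herb's): take 1 unit of serial work and 99 units of parallelizable work, and plug them into the speedup formula above.

using System;

class AmdahlDemo
{
    static void Main()
    {
        double s = 1.0, p = 99.0;                  // made-up workload: 1% serial
        foreach (int n in new[] { 1, 4, 400 })
        {
            double speedup = (s + p) / (s + p / n);
            Console.WriteLine("n = {0,3}: speedup = {1:F1}x", n, speedup);
        }
        // Prints 1.0x, 3.9x, and 80.2x. Grow p along with n (Gustafson's point)
        // and the serial fraction matters less and less.
    }
}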

And, as Herb points out, this is very, very common.

As I like to say, "Any algorithm worth doing once is worth doing many times." If you've got earth's best stock-price algorithm, would you run it on only one stock? If you have a fantastic rendering tool, would you use it to render only one frame? Of course not. And dividing up your data doesn't make something embarrassingly parallel--it makes it delightfully parallel!


Tuesday, January 15, 2008

Sun Abandons Computers?

Strange news out of Santa Clara...

Two of my favorite reads, Nicholas Carr's Rough Type and John E. West's insideHPC, both pointed me to the same blog post by Sun's Data Center Architect, Brian Cinque. When those guys link to the same post, you know it's worth reading.

Cinque gives the broad strokes of Sun's plans for data center consolidation, which include reducing total square footage and electric consumption by 50% over the next 5 years. Admirable enough.

But Cinque continues...within two years of that, Sun plans to own ZERO data centers.

While he neglects to give any other details, it seems clear that this can only mean one thing: Sun is going to spend the next five years consolidating into a small number of data centers...and then they're going to sell them.

Clearly, after a 5-year effort of consolidation and rearchitecture, they're not going to spend the next 2 years scrapping all that work. Rather, they have decided that they'll let someone else take care of running them. Why? Well, ostensibly, it's because they think someone else can do it better/faster/cheaper than they can.

It's a bit surprising to me. Sun has spent the last year and a half convincing people to run their grid applications on Sun Grid--now they're telling the market that someone else is more qualified to run their own data centers. If I were a Sun customer, I'd be running to EC2 immediately.

On the other hand, maybe it's not so surprising at all, given that Sun now seems to be a database company...


Tuesday, January 08, 2008

Partnering with Microsoft: The Ugly


My colleague John Powers has frequently posted about the good, the bad, and the ugly of partnering with Microsoft--and today's post would definitely qualify as "the ugly."

He's going through the annual process of renewing our Gold Certified status, and, as usual, he's suffering through hours of terrible user experience on their website.

I'll give a short summary here, but you should go read John's post for the full, gory detail.

Unfortunately, part of the process involves getting customer references, and that means having our best customers try to use the Microsoft partner website. This means that we are now subjecting our best, most valuable customers to the frustrations of trying to use a slow, buggy site. Our customers are already doing us a favor by filling out these forms--now Microsoft is making them take (a lot of) extra time, suffering through numerous timeouts, crashes, etc.

Put another way: Microsoft is asking us to piss off our most important customers for the sake of our partner status.

We see a lot of value in the partner program and in Gold Certified status; we're willing to put up with a lot of hassle. But asking our customers to deal with that hassle is too much.


Thursday, January 03, 2008

MSFT Supercomputing video online

Right before my weeks-long blogging blackout at the end of last year, I wrote that I was headed to Supercomputing in Reno. I never wrote an SC07 wrapup (although John Powers wrote one here), and it seems a bit late to do one now.

However, Patrick O'Rourke of Microsoft just told me about some videos they produced at SC07 that have recently gone online, including one that features an interview with me.

Check out the HPC++ page on the Microsoft site, and look for the Windows HPC in Financial Services link. It's a 9 minute video that features an interview with Jeff Wierer of Microsoft and a few different partners.

Patrick saved the best for last, by the way--my mug appears about 7 minutes in. If you watch the whole thing you'll notice, of course, that we're the only partner who can stake the claim to being all .NET.

(Note: ironically, I can't get that video to play in IE, but it works fine in Firefox. Don't tell my Microsoft friends I've got Firefox on my machine).


Wednesday, January 02, 2008

Distributed F# Sample

The week between Christmas and New Year's Day is typically a slow one; I took advantage of some quiet time in the office to play with F#. What with F# becoming a "first class" .NET language and shipping with VS 2008, I wanted to see what it would take to adapt one of our standard samples (using a Monte Carlo calculation to compute Pi in parallel across different cores and machines on the grid).

I was a bit rusty in functional programming (I took Lisp in about 1990), and the eventing took a bit of research to understand, but I got something working. (See the code below.) I'm not sure if I've made any grievous errors -- if you see any, please let me know!

I know Don Syme recently posted about using Parallel Extensions in F#; next I'll have to take a look to see if I can extend that from using many cores to using many cores on many boxes.

Anyway, here's the F# code I ginned up:
// F# Digipede Sample

#light
open System

let mutable AccumulatedPi = 0.0

let numTasks = 100
let numIterationsPerTask = 1000000

// This function determines if a point lies within the Unit Circle
let InUnitCircle x y = sqrt(x * x + y * y) <= 1.0
// This is the class that will be distributed for remote computation; the DoWork
// method is overridden, and will be invoked on the objects when
// they are on the remote nodes
type MyWorker =
  class
    inherit Digipede.Framework.Api.Worker
    val mutable _Pi : double
    val _NumIterations : int
    new(a) = { _NumIterations = a; _Pi = 0.0 }

    override x.DoWork() =
      let r = new Random(x.Task.TaskId)
      let mutable XCoord = 0.0
      let mutable YCoord = 0.0
      let mutable NumberInUnitCircle = 0

      for i = 1 to x._NumIterations do
        XCoord <- r.NextDouble()
        YCoord <- r.NextDouble()
        NumberInUnitCircle <- NumberInUnitCircle + if InUnitCircle XCoord YCoord then 1 else 0

      x._Pi <- 4.0 * double NumberInUnitCircle / (double x._NumIterations)
    end


// This function will be called (in the "master application") for each task that
// completes (see reference below)
let OnTaskCompleted (args) =
  let myArgs = (args :> Digipede.Framework.Api.TaskStatusEventArgs )
  let returnedWorker = (myArgs.Worker :?> MyWorker)
  printf " Task %d calculated %f \n" myArgs.TaskId returnedWorker._Pi
  AccumulatedPi <- AccumulatedPi + returnedWorker._Pi
  printf " Current total is %f\n" AccumulatedPi


// Ok, this is the code that is the "master" application. It will define and submit the job

// First, we instantiate a Digipede Client and tell it where the Digipede Server is
let Client = new Digipede.Framework.Api.DigipedeClient()
Client.SetUrlFromHost("myservername")

// Now, tell the system what class we're distributing. It will automatically
// determine which binaries need to be distributed to use this class
let JT = Digipede.Framework.JobTemplate.NewWorkerJobTemplate(typeof<MyWorker>)
// this line tells the Digipede Agent to run these tasks "per core" on multi-core machines
JT.Control.Concurrency <- Digipede.Framework.ApplicationConcurrency.MultiplePerCore

// Create a job, and add tasks to it. Each task gets a new MyWorker object
let aJob = new Digipede.Framework.Job()
aJob.Name <- "F# Job"
for i = 1 to numTasks do
    let myTask = new Digipede.Framework.Task()
    let mw = new MyWorker(numIterationsPerTask)
    myTask.Worker <- (mw :> Digipede.Framework.Api.Worker)
    aJob.Tasks.Add(myTask)


// Next, I set up some events.
(* This is like an anonymous delegate. It will get executed each time we get notified that a task has completed. *)

do aJob.TaskCompleted.Add
    (fun args -> ( printf "Task %d executed on %s \n" args.TaskId args.TaskStatusSummary.ComputeResourceName ) )

do aJob.TaskFailed.Add (fun args -> ( printf "Task %d failed \n" args.TaskId) )

// Another way to handle events is to give it a function to call on task completion
// I'm doing both, which is redundant
do aJob.TaskCompleted.Add(fun args -> OnTaskCompleted args)


let sr = Client.SubmitJob(JT, aJob)

printf "Submitted job %s\n" (sr.JobId .ToString() )
printf "Waiting for results\n"

let mybool = Client.WaitForJob(sr.JobId)
printf "Job finished\n"

AccumulatedPi <- AccumulatedPi / (double numTasks)
printf "Calculated pi is %f\n" AccumulatedPi
Threading.Thread.Sleep(5000)


As I said before: this is my very first dabble into F#, and I may be making mistakes. But it runs, and I've figured out a couple of ways to handle events.
