Monday, November 12, 2007

See you at SuperComputing!

One more time: see you in Reno at SuperComputing. I've got a bunch of meetings, but when I'm not in a meeting I'll be standing in the AMD booth showing off a .NET deskside "supercomputer." (I have to put that in quotes because there will be real supercomputers there...)

I'll probably try to do some blogging from there, but I can guarantee you that your best bet for many, many informative blog posts will be John E. West's insideHPC. That guy finds more HPC nuggets than seems humanly possible, and he somehow manages to write about them in a way that makes it fun to read!

So watch this space, but watch that space, too.

If you're in Reno and can't find me, I'm at 510-816-7551.


Friday, November 09, 2007

Feedburned!

There seems to have been a dramatic downturn in my subscriber numbers overnight--I'm down about 40%. Wow!

I'm the type of attention-seeking, fragile-ego'd blogger whose happiness depends on stats like these!

Checking my FeedBurner stats, though, I can see what happened: Google Feedfetcher went from reporting 80 users to reporting 0! That's weird, because I use Feedfetcher...you'd think there would be at least one.

Google, can't you buy Feedburner so you can straighten this out?



Thursday, November 08, 2007

What Do Digipede and FreshBooks Have In Common?

I saw a post about going the extra mile by Josh Catone at Read/WriteWeb, and I wanted to give credit to one of my colleagues. Josh recounts the story of Oceanic's experience with FreshBooks and notes the importance of good customer service.

I made a reference to this the other day, but I want to tell the whole story to give a hat tip to my colleague Nathan.

A couple of weeks ago, one of our partners wanted to do some unsupported work with the Digipede Network. Matt Davey of Lab49 wanted to install the Digipede Network Developer Edition (which includes the server, an agent, and the SDK) on Vista. While we support Vista for the agent and the SDK, we hadn't yet upgraded our server installs to work with IIS 7 on Vista (not a problem, since most customers run our Server on Server 2003).

When Matt had a problem with the install (predictably), he contacted my colleague Nathan Trueblood. Rather than tell a valuable partner that we didn't support that configuration yet, Nathan took the time to figure out how to make things work for Matt. He could have said "Just install a VM and run it in there," he could have told him to wait a couple of months, he could have done a lot of other things.

Instead, he went the extra mile. He tweaked an install, and even had to use LogMeIn to help configure things properly on Matt's machine.

Matt loved it: "I have to say digipede probably have the best customer support in the world." He also went on to create a very cool LINQ/Digipede sample.

As Josh at Read/WriteWeb put it:

But doing the little things that allow you to form a connection with your customers on a personal level can score you a lot of capital with them.
Good job, Nathan.

Social Networking Post

If you read this blog for grid computing and technical discussion, feel free to ignore this post.

If you're into the world of Web 2.0-style networking, here's how you can follow my every move:

On Facebook, I'm Dan Ciruli.
On LinkedIn, I'm Dan Ciruli.
I twitter as Oaktowner.

Can't wait that long? IM me:
AIM: ciruli
MSN: dan@digipede.net
GTalk: ciruli
skype: danciruli

Wednesday, November 07, 2007

Vista: Burn!

I've neither loved nor hated my Vista experience. It's got lots of GIFWOM (Gratuitous Interface Fluff/Waste of MIPS*), but on the whole it hasn't radically changed my computer-using experience.

Today has been extremely frustrating, though. We're participating in Microsoft's Server 2008 Early Adopter Program, and we're excited to get the RC0 of Server 2008 installed on some of our machines for testing.

They haven't released a VHD yet, though, so we're stuck with downloading a 2.6GB ISO file, burning it to DVD, then installing from DVD. Fine.

However, my machine (a Compaq nc8430), my OS (Vista Business), my DVD drive (an "HL-DT-ST DVDRAM GSA-4084N ATA Device"), and Roxio 9 all seemed to get in an enormous fight. And when they fight, I lose.

Burns got to 95%, then stopped. Burns got one sector in, then stopped. The drive stopped ejecting. Roxio wouldn't shut down for any reason.

After several attempts and several reboots, I got pretty frustrated. A little Googling (er, Live Searching) brought me to this post from Rick Hallihan.

I downloaded the Windows Resource Kit and had at my disposal a command-line DVD burner.

dvdburn.exe d: en_windows_server_2008_rc0_enterprise_datacenter_standard_x64_dvd.iso was all it took. A few minutes later, I was installing.

If it's this easy on the command line, why is it so hard through a GUI?

*Credit for the term GIFWOM goes to my old friend and colleague Jeff Weidner (and if you want to remember what the internet was like in 1997, when people were first putting up "home pages," I highly recommend that you click on that link!)

Updated 2007-11-08: added a link to the WRK.


Wednesday, October 31, 2007

Upcoming: High Performance Events

For those aching to meet your West Coast Grid host face-to-face in the near future, I'll be at a couple of public events in November.

For the nth year in a row, I'll be at SuperComputing. This year's glorious location is the Biggest Little City in the World, which is fine with me (assuming they can feed all 9,000 attendees simultaneously, a feat which was beyond the ken of Seattle in '05). I don't mind mixing a little blackjack in with my HPC.

If you want to find the Digipede entourage in Reno, head for the AMD booth. We'll be demonstrating software running on some very cool new hardware. The manufacturer hasn't made an official announcement, so I can't name names or give details, but these guys are making some screaming hardware: the deskside, personal supercomputer is a reality. It's a great fit for a partnership: how can a developer keep 40 cores running on 5 motherboards busy? We've got the perfect SDK for you: turn that box into a .NET powerhouse.

Anyway, see you in Reno.

And for those of you across the pond, wait a couple weeks and you'll get your chance, too.

The National e-Science Centre (note the spelling of "Centre"--this is the UK we're talking about) has invited us to participate in High Throughput Computing Week at the e-Science Institute in Edinburgh.

I am very excited about this. Europe is years ahead of the US in terms of large grids, and David Wallom from Oxford is one of the top people in the UK in this regard. I'm happy we're going to be able to do a half-day hands-on lab, but I'm even happier at the opportunity to spend 4 days with some great grid thinkers.

And it certainly won't hurt that we'll be in the land for which my favorite beverage was named. David assures me we'll be within stumbling distance of a pub...

Thursday, October 25, 2007

Why West Coast Grid? PowerShell and LINQ are two great reasons

My very first post explained why I call this blog "West Coast Grid:" simply put, we think there's an advantage to specializing on one platform (in our case, the Microsoft platform, which happens to be developed here on the West Coast).

There are many advantages to working on one platform: deep integration with .NET, tight ties to the development environment, easy interoperability with Office and other tools, and so on.

Two cutting-edge examples of why stack integration is so useful have sprung up in just the last couple of days.

First, Digipede CTO Robert Anderson posted a PowerShell SnapIn for the Digipede Framework. I'm really excited about the SnapIn for a couple of reasons.

  • First of all, it allows for command-line management of the Digipede Network (of course, PowerShell's .NET interface would allow this anyway, but Robert's cmdlets make it easier and more script-friendly).
  • Secondly, we decided to open-source this SnapIn. That means that this is, in effect, an in-depth code sample for the Digipede Framework Management namespace. Most of our code samples have focused on the different distributed application patterns, so this is quite valuable. And, by releasing the source, we hope that our users may feel inspired to offer their own improvements!
You can download the binaries and the source here.

The other great example came from outside our organization.

Matt Davey, the Lab49 WPF guru who has been doing a bunch of writing about the use of cutting-edge Microsoft technologies in finance, posted a very simple "GridLINQ" this morning, using Digipede and LINQ together. It's simple, but very powerful. And, according to him, it took him about an hour to put together!
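
For readers who haven't seen the LINQ-to-grid idea before, here is a minimal sketch of the shape of it. This is my own illustration, not Matt's GridLINQ and not the Digipede API: a LINQ-style extension method that fans the selector out using asynchronous delegate calls, which stand in for where a real grid submission would go.

    // A toy "GridSelect": start every work item first (the "submit" step),
    // then harvest results (the "collect" step). On a real grid, BeginInvoke
    // and EndInvoke would be replaced by job submission and result retrieval.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class GridLinqSketch
    {
        public static IEnumerable<TResult> GridSelect<TSource, TResult>(
            this IEnumerable<TSource> source, Func<TSource, TResult> selector)
        {
            var pending = source
                .Select(item => selector.BeginInvoke(item, null, null))
                .ToList();                                        // kick off all the work
            return pending.Select(ar => selector.EndInvoke(ar));  // gather the results
        }

        static void Main()
        {
            // Toy workload: pretend each call is an expensive calculation.
            foreach (int square in Enumerable.Range(1, 10).GridSelect(n => n * n))
                Console.WriteLine(square);
        }
    }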

One side note: In a recent post (where, by the way, he said that we "probably have the best customer support in the world"), Matt mentioned that he had some trouble using Digipede and Vista together. While we do support Vista for the Digipede Agent and the Digipede Framework SDK, we haven't yet released a version of the Digipede Server that runs on Vista (which is where he ran into some installation problems). Not to worry, we'll be releasing a version of the server that supports Vista (and Server 2008) in the near future!


Tuesday, October 23, 2007

Another dead technology: the horseless carriage


I love finding new blogs about grid computing, and I did just that over the weekend. Guy Tel-Zur's Blog is "Mainly dedicated to IT, Parallel Processing and Grid Computing."

But I have to take issue with the first post I found: The End of Grid Computing? Guy says that "something is not going well with 'Grid Computing.'"

His evidence? Google Trends. He notes that the term "grid computing" is not as commonly searched for as it once was, and concludes that something is not going right. I think there's a far simpler explanation, however: people just aren't using the term "grid computing" as much as they used to.

To test my hypothesis, I looked at the Google Trends results for "information superhighway." Not surprisingly, it turned up no results. Now, in 1996, this was almost a standard synonym for "internet." As far as I can tell, the internet is alive and well despite the lack of searching on "information superhighway."

Simply put, the term "grid computing" is going out of fashion (and for good reason). As far back as 2005, I posted on the fact that the term meant so many different things to so many different people, it was effectively useless. To some people, "grid computing" means harnessing individual machines for SETI. To some Oracle, it's a clustered database. To others, it's two clusters. To others, it's a cluster with heterogeneous hardware.

Guy notes that since 2003, the term "virtualization" has gained Google Trends momentum--but he fails to note that companies like Platform Computing and DataSynapse no longer use "grid computing" in their marketing materials, preferring the trendier "application virtualization."

So, no, grid computing is not dead. But "grid computing" may be on its way out.


Monday, October 15, 2007

So does "utility computing" mean we'll each buy generators?

While Rob and I have both blogged about the differences between compute utilities and electric utilities (Rob here and I in one of my most popular posts here), the computing world in general continues to use the electric utility analogy when talking about utility computing.

Way back in March of last year I took Nick Carr to task (well, that may be overstating it a bit--he's a hell of a writer and technologist--let's just say I "attempted" to take him to task) for concluding that utility computing would lead to the death of the server market. "Nonsense," I said: "did the advent of electric utilities spell the death of the generator market?"

So it was with great amusement that I read Nick's latest post ("Caterpillar: Web 2.0 giant"): he says one of the big winners in the Web 2.0 boom is none other than Caterpillar. Caterpillar? Yep, Caterpillar. In addition to the equipment that builds our highways, they make the generators that power the information superhighway (well, at least the generators used to supplement or back up existing power sources). Caterpillar's large generator sales are up 41%, and there is a year-long wait for a 2MW model.

In other words, the generator market is booming. It hasn't been killed by the electric utility industry. I maintain the same thing is true of computing: utility computing sure as hell won't kill the server market.

There's one other aspect to this: these generators are selling so well even though they are primarily used as backups. Datacenters buy power as cheaply as possible (think hydroelectric), yet the incredible demand for generators exists because power is so critical--its ubiquity and utility have made it indispensable. Datacenters need these expensive generators just in case their main power sources go down.

The same may be said of the server market. As more and more companies become dependent on utility computing, they will require more servers, not fewer. As John Clingan said:

Value drives adoption. Adoption drives volume. Volume drives down price. Lower price results in broader applicability. Broader applicability results in more servers.
In other words: the more important servers get, the more will be sold.

So why is all of this important? Well, again, the metaphor is dead.

But if electric utilities haven't killed the generator market, why would compute utilities kill the server market?


Friday, October 12, 2007

RDB is a good idea, but should you write your own?


Dr. Dobb's, always worth reading, has a couple of interesting articles this month.

Matt Davey of Lab49 has a good read on WPF and Complex Event Processing (but where are the illustrations?). He blogs here.

However, even more exciting for me, there's an article on grid computing (regular readers will recall that Robert Anderson and I wrote an article on scaling an SOA on a grid for them last year).

In this month's article (entitled "Grid-Enabling Resource-Intensive Applications"), Timothy Hoehn and Bob Zeidman of Zeidman Consulting compare several different strategies and methods for grid-enabling an application. For the most part, I love the conclusions they draw:

Among the architectures they examined, Distributed Objects provided the most scalable, flexible solution. They preferred it over Client/Server ("Overhead of managing socket connections can be tedious. If server fails, clients are useless."), Peer to Peer ("Harder to manage from a central location"), and Clustering ("Needs homogeneous hardware. High Administrative overhead for processes and network.").

They also examine "communication strategies," and again I like the way they think: they prefer Remote Method Invocation to Sockets or Remote Procedure Calls.

Next, they examine the "Push" and "Pull" distribution models, and they conclude that Pull offers some obvious advantages.
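
The pull model is easy to picture in code. Here's a tiny sketch of my own (not from the article, and not Digipede's implementation): workers ask a queue for a task whenever they have capacity, instead of a master pushing work at them and having to track who is busy.

    // Toy pull-model worker. On a real grid, TryDequeue would be a network
    // call to the central server; here it is just a thread-safe local dequeue.
    using System;
    using System.Collections.Generic;

    class WorkItem
    {
        public int Id;
        public Func<int> Run;                      // the computation to perform
    }

    class PullModelSketch
    {
        static readonly Queue<WorkItem> queue = new Queue<WorkItem>();

        static WorkItem TryDequeue()
        {
            lock (queue)
            {
                return queue.Count > 0 ? queue.Dequeue() : null;
            }
        }

        static void WorkerLoop(string worker)
        {
            WorkItem item;
            while ((item = TryDequeue()) != null)  // pull until the queue drains
                Console.WriteLine("{0} ran task {1}: {2}", worker, item.Id, item.Run());
        }

        static void Main()
        {
            for (int i = 1; i <= 5; i++)
            {
                int n = i;                         // copy for the closure
                queue.Enqueue(new WorkItem { Id = n, Run = () => n * n });
            }
            WorkerLoop("agent-1");                 // one agent here; a grid runs many
        }
    }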

Finally, they discuss three different "frameworks:" .NET, JNI, and EJB. However, they're unable to actually do a comparison here. Tight timelines (and a lack of C# expertise) kept them from working in .NET; they preferred EJB to JNI.

Timothy and Bob then implemented their solution, getting good speedup on large jobs (they don't offer actual numbers, but I certainly believe them).

So, the things I like: they validated many of the decisions we made in our own product, which has a pull model that distributes objects and invokes methods on them. That's very cool.

But the one thing that I didn't like about their methods: they wrote all of the grid infrastructure themselves!

I understand that they weren't trying to do a "vendor bake off," and I really do appreciate the research they did.

But essentially recommending that people write their own grid is a bit like saying "We've done some research into databases; if you need one, we recommend you write a relational database." In other words, the conclusion is perfectly valid right up until the "write it yourself" part. There are many vendors out there who have written very good tools here--it would be silly to write your own database. You'd spend far more in development time than you would on a far superior software product.

The same is true in distributed computing. You could write your own...but with the selection of high quality vendors out there, why would you?

And Tim and Bob, if you happen to come across this post, I really did like the article. I think you reached all the right conclusions. I'd love the chance to see what it would take to grid-enable CodeMatch using my favorite grid computing toolset!

Photo Credit: Emily Roesly


Monday, October 08, 2007

Another voice on low-latency computing

I haven't blogged about the Univa-UD merger (mostly because it confuses me a bit, but that may be because I haven't talked firsthand to any of the players). But I liked a piece written by Rich Wellner of the new Univa UD. I found it here on Grid Today, but it was originally published over on Grid Gurus.

Rich takes on the fallacy that utilization rate is the most important measure of a grid's (or cluster's) effectiveness. He cites a theme he hears "over and over again: 'How can grid computing help us meet our goal of 80 percent utilization?'"

As Rich points out, having a grid running at 80% does nothing to help your business directly. Quoting again..."How does 80 percent create a new chip? How does 80 percent get financial results or insurance calculations done more quickly?"

The questions are rhetorical, because the answer is obvious: it doesn't. The goal of a grid is not to use your hardware more (or more efficiently): it is to get your answers faster. Create that chip faster. Get those results more quickly.

Sometimes, the best way to answer questions is to take them to logical extremes. What's the best way to increase your utilization? Why, it's obvious: reduce the number of machines. Make the cluster smaller, and its utilization will go up. Only have 75% utilization on your 200 node cluster? Throw away half of the machines--sure, your wait times during peaks will more than double, but your utilization may hit 100%! Will your users be happy now?

In truth, I've never had a customer ask a question like that. A much more common question: how can I reduce the time it will take to run my end-of-trading day jobs?

That fits exactly what Rich sees:

For most businesses, it's queue time and latency that matters more than utilization rates. Latency is the time that your most expensive resources -- your scientists, designers, engineers, economists and other researchers -- are waiting for results from the system.
That's what real-world users are concerned with. "If I add 100 nodes to my grid, how will that affect wait times during the day? How will it affect processing times on my most important jobs?"

As an aside, here's something I've noticed in potential customers. Sometimes, I'll have someone call up and say "We've got all of these CPUs that sit around all night doing nothing, can you guys help us use them?" Of course, the answer is "Yes," but there's not a great likelihood of a sale there. They have hardware they could use more efficiently, but they don't have a need.

Sometimes, someone calls up and says "I'm running analysis jobs that take 27 hours on a single machine and I need them to run in under an hour--can you guys help?" Again, the answer is yes--but now there's a very good likelihood of a sale, because there is an actual need.

Photo credit: Jane M. Sawyer

Friday, September 28, 2007

Ultra Low Latency Grid Computing

High Performance on Wall Street was hands down the best HPC conference I've ever attended. The quality of the sessions was high. The quality of the attendees was high. Even the quality of the vendors was high (full disclosure: we were one of those high-quality vendors). There was nary a "booth babe" nor "tchotchke hound" in sight--everyone was there to talk HPC.

The theme of this year's conference was "Low Latency," and my favorite panel was specifically about low latency and multicore processors. There was great talk about shaving microseconds off of compute times for complex event processing, but one of the customers (I can't remember if it was someone from Citigroup or from Deutsche Bank) made an interesting point: ultra low latency is not the biggest problem he faces, because it's really only involved in a small number of his applications. He said that latency is an issue for them, but brought out the old "when all you have is a hammer, the whole world looks like a nail" cliché: there's no need to apply ultra-low latency tactics to all of the computing they do.

Most of the applications they run do not need to respond to an external event in microseconds or milliseconds--most are running either at user request or in batch processes. And he brought up one of his biggest problems: the time it takes them to develop an application to run on their grid.

In other words, he's not worried about the milliseconds it takes to start processing: he's worried about the days, weeks, and months it takes to develop applications and adapt them for distribution on the grid.

It struck home with me, of course, because reducing developer time is what we've been preaching since we first released our SDK. It's also the thing that our customers have liked the most about our products: developers can adapt their applications to run on a grid in as few as 20 lines of code.
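
To show what that kind of adaptation looks like in practice, here's a rough sketch. The GridJob type below is a local stand-in I wrote for illustration--it is not the Digipede SDK--but the shape of the change is the point: move the per-item work into a small serializable class, then hand those objects to a job instead of looping over them yourself.

    // Sketch only: GridJob is a local stand-in, not the real Digipede API.
    using System;
    using System.Collections.Generic;

    [Serializable]                                 // task objects travel to other machines
    class PriceTask
    {
        public double Spot;
        public double Result;
        public void Execute() { Result = Spot * 1.05; }   // placeholder for the real model
    }

    // On a real grid, Run() would serialize the tasks, execute them on remote
    // agents, and hand back the completed objects.
    class GridJob
    {
        readonly List<PriceTask> tasks = new List<PriceTask>();
        public void Add(PriceTask t) { tasks.Add(t); }
        public void Run() { foreach (PriceTask t in tasks) t.Execute(); }
    }

    class TwentyLineSketch
    {
        static void Main()
        {
            var tasks = new List<PriceTask>();
            for (int i = 0; i < 5; i++)
                tasks.Add(new PriceTask { Spot = 100 + i });

            var job = new GridJob();
            foreach (PriceTask t in tasks)
                job.Add(t);
            job.Run();                             // the line where work would leave the box

            foreach (PriceTask t in tasks)
                Console.WriteLine("{0} -> {1}", t.Spot, t.Result);
        }
    }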

I'm not the only one noticing the importance--Nikita Ivanov of GridGain (they have an open source grid product for Java) just posted about the same topic.

So is a grid ideal for computations that need to start in the next couple of microseconds? Nope.

But if you need your app running on the grid tomorrow instead of next week some time, we can help you shave billions of microseconds off of your development time.

Now that is what I call ultra low latency grid computing!

Photo credit: Darren Hester


Friday, August 24, 2007

Video: distributed rendering

A good development framework can make software development a joy--unencumbered by the banalities of software development, you can spend your time on the interesting and productive elements of programming. I know I'm waxing poetic a bit, but I'm really excited by what I just saw.

Yesterday, we gave a project to our newest employee, Pandelis, in order to have him familiarize himself with our development framework. We asked him to create a distributed rendering demonstration. We gave him POVRay, a free rendering tool. I also gave him the application we frequently demonstrate with--distributed Mandelbrot (you may have seen this video, which shows that demonstration).

This morning, he showed me the result: a video of the distributed POVRay renderer in action.

In a few hours' time, he took the "wrap and adapt" approach: threw a .NET class around the data, had that class invoke the renderer, and he was done. No modifications to the POVRay source code. No extra work preinstalling anything on the nodes. No extra code to handle serialization, node failure, etc.
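
The pattern is simple enough to sketch. This is my own illustrative code, not Pandelis's demo, and the executable path and flags below are placeholders rather than the exact POVRay invocation--the point is the wrapper: a small class carries the data for one frame and shells out to the renderer.

    // "Wrap and adapt" sketch: wrap the external renderer in a .NET class so
    // that instances of the class can be shipped to grid nodes and executed.
    using System;
    using System.Diagnostics;

    [Serializable]                                 // so the object can travel to a node
    class RenderTask
    {
        public string SceneFile;                   // e.g. a .pov scene description
        public string OutputFile;                  // image file to produce

        public void Execute()
        {
            var psi = new ProcessStartInfo
            {
                FileName = @"C:\Tools\povray.exe",                      // placeholder path
                Arguments = string.Format("+I{0} +O{1}", SceneFile, OutputFile),
                UseShellExecute = false,
                CreateNoWindow = true
            };
            using (Process p = Process.Start(psi))
            {
                p.WaitForExit();                   // block until this frame is rendered
                if (p.ExitCode != 0)
                    throw new InvalidOperationException("Render failed: " + SceneFile);
            }
        }
    }

    class WrapAndAdaptSketch
    {
        static void Main()
        {
            var task = new RenderTask { SceneFile = "scene.pov", OutputFile = "frame001.png" };
            task.Execute();                        // runs locally here; on a grid, on a node
            Console.WriteLine("Rendered " + task.OutputFile);
        }
    }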

As a total bonus (and complete coincidence), I had a sales call today with a company that does a ton of graphics work. Pandelis quickly built an installer (Visual Studio makes simple MSIs so easy to deal with, btw), and I literally ran the MSI, did one practice render, then demo'd it to the potential customer. He was very impressed.

Great job, Pandelis!

Speaking of distributed rendering, why does it take so long for my video to be available after I upload it? Google, do you need some help with your distributed processing? ;-)

Updated 2008/01/02: Fixed a link.


Thursday, August 23, 2007

New Papers: Using Digipede with MATLAB, Java

Over at the Digipede Community site I posted two papers yesterday. These are fairly technical documents that give step-by-step details on how to use the Digipede Network with some popular technologies.

The first paper explains how to integrate MATLAB M code into a Digipede Worker; it's a fairly simple process thanks to MATLAB Builder for .NET (and we have several customers doing this in production today). Later, we'll release some more touch-points between Digipede and MATLAB: distributing an executable compiled with the MATLAB Compiler, submitting a Digipede job from within MATLAB code, and using Digipede with the MATLAB Distributed Computing Toolbox.

The second paper explains how to use the Digipede Network with Java. Now, it's possible to invoke COM from within Java if you really want to, so technically it has always been possible to launch and monitor jobs. However, our .NET users know that the real power in the Digipede Network is the distribution and execution of objects. Well, thanks to our partners at JNBridge, you can now take advantage of our powerful object execution model natively from within Java--you can distribute and execute Java objects on your .NET grid. That is powerful stuff. Robert Anderson put this together; check it out.


Friday, August 17, 2007

Outlook! In a flash!

This morning my Outlook client greeted me with the following behavior: the login screen was flashing repeatedly--appearing and disappearing in less than a second, then doing it again a few seconds later.

A system restart seems to have corrected it.

Anyone seen this before?

Monday, August 13, 2007

CCS and Digipede: More Differences

Last week I posted about the differences between Windows Compute Cluster Server 2003 and the Digipede Network--the post generated a lot more traffic than I would have guessed. Apparently, many people have been wondering how our products differ. I said I'd follow up with more details, so here goes.

Developer focus. In summary, CCS includes tools for scientific developers; the Digipede Network includes tools for enterprise .NET developers. For more details, see my previous post.

What does it run on? The biggest difference between our product and theirs is that, at its heart, CCS is an operating system (Windows Server 2003 Compute Cluster Edition). It runs on bare iron. The Digipede Network is a grid computing framework that runs on top of the operating system and on top of the .NET Framework as well. We're two floors up from CCS.

Mind your bitness! Compute Cluster Edition is built on Server 2003 x64--it only runs on 64-bit hardware. The Digipede Network runs on top of Server 2003, XP, Vista, and Server 2000--32- or 64-bit editions.

Microsoft technologies manage your machines, Digipede manages your grid. Microsoft makes tools that make it easier to manage large numbers of machines (MMC, MOM, etc.). These tools are designed to help you manage computers. The Digipede Network doesn't help an administrator manage machines; it helps an administrator manage a grid. Which users can submit to which machines? What times are the desktops running grid jobs? What priority are the overnight batch runs? These are the things that the Digipede Network takes care of.

Homogeneity and heterogeneity. A cluster is nearly always a tightly coupled group of homogeneous machines: identical hardware, identical operating systems, all linked (often with a high bandwidth backplane). A grid, on the other hand, usually has a heterogeneous blend of machines: different OSs, different hardware, different connectivity.

Dedicated vs. shared. Clusters are nearly always what we refer to as "dedicated" hardware: the machines in the cluster are used purely for computation. However, grids can have any combination of dedicated hardware and "shared" hardware--that is, hardware that is used for computation on the grid but may also be used for other purposes: file servers, app servers, even desktops. Some of our customers use all dedicated hardware, but a fair number are supplementing their dedicated hardware by "cycle scavenging" from other resources in their enterprise (the white paper I linked to recently details one customer's experience with cycle scavenging).

Grid.Contains(Cluster) = true. As a side effect of the previous two points, you may realize that a grid can contain one (or many) clusters. Many of our customers have clusters as a part of their grid. Some jobs may run only on the cluster, but by using the cluster as a part of the grid, they have one comprehensive tool that has information about all of their work.

Final analysis: two great tastes that taste great together. This is the point I make over and over: these are very complementary products. The Digipede Network runs on top of CCS, extending it both in terms of capabilities (adding .NET and enterprise features) and in terms of capacity (by running loosely coupled and 32-bit jobs on other machines, ensuring that your cluster is available for those tightly coupled MPI jobs).

Wednesday, August 08, 2007

Digipede Network and CCS: What's the Dif?

One of the questions I get frequently from potential customers is "How is the Digipede Network different from Windows Compute Cluster Server?" I've spent the last 16 months or so reciting the same answer over and over again; when John Powers suggested I write a blog entry about it, I was (frankly) surprised I hadn't already.

But, searching through my del.icio.us tags in the sidebar, I see that I haven't. So here goes. This will take a couple of posts; in this one, I'll concentrate on my favorite part of the answer: how the Digipede Network differs from CCS from the developer's perspective.

Let's start with why CCS exists in the first place: in order to compete in the scientific and technical computing space, Microsoft knew they had to revamp their OS.

Windows Compute Cluster Edition is basically Windows Server 2003 x64 Edition. Microsoft took Server 2003 and added support for the type of hardware frequently seen in clusters (high-bandwidth networking like InfiniBand, for example), plus additional support for Remote Direct Memory Access (necessary for high-performance MPI implementations).

Next, they added in the Compute Cluster Pack. CCP is a set of tools that sits on top of the OS and provides additional software support for technical computing: specifically, an MPI stack, a cluster job scheduler, and a set of management tools.

In other words, CCP sits on top of CCE to make the OS into a tool usable for scientific and technical computing. CCS (Compute Cluster Server) is simply CCE and CCP together.

Ok, that's the Microsoft product line. How does Digipede fit in?

Well, while CCP has plenty of tools for the scientific developer, it has almost nothing for the enterprise developer. If you're developing in .NET, it doesn't translate naturally to a cluster paradigm--in order to work with CCP, you'd have to compile your .NET down to a command-line executable, and do all of your data passing either in files or on the command line.

The Digipede Network, however, can put your CCE nodes at the disposal of a .NET developer. Without restructuring your application, without moving to a command-line paradigm, without deploying your EXE to the different nodes. By automatically deploying .NET assemblies (and related files), then distributing and executing .NET objects natively, the Digipede Network adds a layer of .NET support onto CCE.
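
To make the contrast concrete, here's roughly what the command-line style looks like--my own minimal example, not Microsoft's sample code. Every input has to be parsed from text, every output has to go out through stdout (and then be fished back out of redirected files), and something else has to deploy the EXE. The object-based approach described above replaces all of that plumbing with a .NET object that goes out and comes back.

    // Minimal command-line worker of the kind CCP expects: arguments in,
    // stdout out, nonzero exit code to signal failure.
    using System;

    class CommandLineWorker
    {
        static int Main(string[] args)
        {
            if (args.Length < 1)
            {
                Console.Error.WriteLine("usage: CommandLineWorker <input>");
                return 1;
            }

            double x = double.Parse(args[0]);      // every input parsed from text
            double result = x * x;                 // stand-in for the real analysis

            Console.WriteLine(result);             // stdout gets redirected to a file
            return 0;
        }
    }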

We handle the .NET parts of things. They handle the OS. Want 64 new nodes on your grid? Buy a cluster of CCE nodes. You'll save a boatload of dough over the Windows Server 2003 license costs, and you'll get good deployment tools. Throw Digipede Agents on those nodes, and suddenly you've got a high-powered, .NET supercomputer on your hands.

Want to see the difference between CCS and CCS + Digipede? Watch these two MSDN webcasts. In the first, Ming Xu and Sanjay Kulkarni from Microsoft put a calculation on a CCS cluster behind a spreadsheet.

In the second, I do the same kind of thing, but I use the Digipede Network on top of CCS.

The difference? In the first demo in that webcast, Ming and Sanjay want to put some .NET logic behind a spreadsheet. They had to write a Web Service that the UDF in Excel talks to, then deploy that Web Service to the head node on the cluster. When that Web Service is called, it communicates with the CCS job scheduler to start a job. They wrote a command-line executable that actually did the analysis, deployed that EXE on a file share, then had their CCS job invoke that command-line executable. That command-line executable in turn writes to stdout, and stdout from each task is redirected into a file on the file share. The Web Service polls the job scheduler to see when the job completes, waits for the job to finish, collects the results by opening each file, then returns them to the UDF (which, you recall, had originally invoked the Web Service).

Whew! That's a lot of moving parts. It uses technologies common to technical and scientific computing (e.g., using command line executables, handling data passing using command lines and files on a share), but perhaps not as familiar to enterprise developers working in Excel.

In the second demo in this webcast, I implement a similar pattern, but I do it behind Excel Services (the same thing can happen behind Excel). The big difference is that it's much, much simpler. My User Defined Function in Excel Services simply passes .NET objects to the Digipede Network. Those objects are automatically distributed around the cluster, executed, and returned to the UDF. There are many fewer moving parts. I didn't have to predeploy or prestage my executables or DLLs, and I didn't have to mess with web services, command lines, command line EXEs, or getting data into and out of files.

To be clear, I am definitely not slamming CCS--it's a very good product, and it's making inroads in exactly the market it was aimed at: scientific and technical computing. What we've done is what good partners do: extend that product so it can be used in another way entirely--as a plank in a grid computing platform.

More later on other differences between CCE (the OS) and the Digipede Network (the grid computing platform).

Update 2008-01-03: See my follow-up post for more differences...



Cool Grid White Paper


Many of our customers are what you might think of as typical grid computing customers: finance, government, digital content creation, storm simulation. These are the applications that people typically think of when they think about grid computing.

But we have many customers far outside of the typical grid computing realm. These are people who have .NET applications that need speed and scale, and are using grid computing to solve their problems. The breadth of these applications is amazing--people use .NET to do a lot of interesting things.

One of those interesting customers is Trekk Cross Media. Trekk is a marketing and communications company that can tap into some great tech expertise to create fantastic solutions. Jeff Stewart, their VP of Technical Services, recently gave a talk and presented a paper at the ESRI International User Conference. The paper shows how they used grid computing (the Digipede Network, of course) to dramatically increase the throughput of their custom mapping application.

Great job, Jeff!

Check it out here.


Monday, July 30, 2007

Gettin' Griddy Wit It on .NET Rocks!


I can't resist a Will Smith reference, especially when it comes from the keyboard of one of the esteemed .NET digirati--in this case, Richard Campbell, co-host of .NET Rocks!

Last April, Carl and Richard interviewed me to find out about this new-fangled "grid computing for .NET" thing I kept babbling about. It was a blast (and totally made me want to do that for a living)--and I found out that Richard is not only a .NET guru, but also a closet SETI@Home fan (and when I say "closet" fan, I mean he's got a closet filled with water-cooled, over-clocked machines!).

Well, it's been a year and we decided it was time to update the world on what's happening in the land of grid. It's been a big year for Digipede, we have an upcoming release, and we just won a big award...there was plenty to talk about.

As usual, the interview passed in a blur--these guys are so darn good at what they do--but I don't think I had any huge gaffes. In fact, Carl invited me (on air!) to do an episode of dnrTV (which means I can hold him to it).

If you're doing .NET development and you don't already subscribe to their podcast, you should. They both blog, too; subscribe here and here.

And in any case, make sure you listen to tomorrow's show!

Thursday, July 26, 2007

Enterprise: It's the other HPC


My recent post debating the merits of Windows Compute Cluster Server as an operating system concentrated on cost and ease of use. However, focusing on those two factors to the exclusion of other, often more important, factors was unfortunate.

Because for most of our customers, neither cost nor ease of use is what's driving their platform choice.

Robert Anderson alluded to this as he yawned about my post: "And Dan doesn’t even mention the developer-productivity story..."

Rob hints at one of the major reasons our customers choose this platform: Microsoft has a cutting-edge platform of development tools, and they are continuing to add and innovate. In fact, in this arena, it feels like their rate of innovation is increasing. .NET continues to improve, with .NET 3.0 out (with WCF, WPF, and Windows Workflow) and .NET 3.5 on the way. Visual Studio 2008 will be out this year, and they're also doing great things with IronPython (open sourced, I might add) and PLINQ.

The other major reason has less to do with the current state of the art than with history: many of our customers have a huge code base that runs on Windows. Especially in finance, these guys have been buying and developing software on this platform for at least a decade and a half. And, like many enterprise customers, they're still running some pretty darn old code. And, like many enterprise customers, they use many third-party applications and libraries.

So when it comes time for them to buy a cluster, the cost of changing operating systems for the new cluster is much larger than the cost of the OS, the cost of installing that OS, and the cost of managing that OS.

The much, much larger cost would be to change those back-end systems, to rewrite or port a generation's worth of code, and to retrain new developers.

So when I hear people say things like "ISVs aren't sure about clusters of Windows machines, so they're not porting yet," I can tell immediately that the person is talking about the traditional HPC world--a world where clusters were UNIX and are now Linux, and where the software that runs on them followed the same path.

That's a different world than where I come from--an enterprise software world, where most applications are running on Windows, where the most common application of all is Excel, and where it would require a port to move to another OS.

As always, I'm not criticizing either world. They just are. Lots of people are developing in .NET, lots of people have loads of COM applications, and with ever-increasing data volumes they need distributed computing to keep up. Compute Cluster Server is one thing that's helping them get it done.

Photo credit: rosevita via MorgueFile


Thursday, July 19, 2007

Yet another poof piece


Joe Landman over at Scalability.org has another "poof" piece up about Compute Cluster Server. Why is it a poof piece? Because when you examine his logic, it goes up in smoke.

First, let me make it clear: Joe has serious chops. His company makes terrific, screamingly fast storage products. And he has vast knowledge and experience in HPC.

However, when it comes to Microsoft, he also has a chip on his shoulder the size of the Upper Peninsula--and that lets some faulty logic into the many posts he devotes to ripping on CCS.

In his latest, he compares acquisition costs of clusters with Linux and CCS (note he doesn't try to make any TCO comparisons, perhaps because he's read the independent studies that show Windows is cheaper in the long run).

Joe quite correctly notes that Windows Server is a commercial OS that costs money; no secret there--Microsoft charges money for software. I'm not sure if the $469 is correct for volume purchases, but it's probably close enough.

But then he starts adding spurious costs...

As the size of the system scales, so do per unit costs. In the case of windows, the $469/unit cost means that a moderate 16 node system + head node + file server adds another $8500 to the purchase cost. Not to mention the yearly additional costs of the OS support, the necessary per node anti-virus, the necessary per node anti-spam … That would add in another about $1500 or so. So call it an addition $10,000 per 16 node cluster.
I'm not going to debate the OS costs, but anti-spam? On a cluster?

Our customers don't let their clusters have a peep at the internet. Even within their networks, access is strictly controlled. No need to waste time and money on anti-spam on those nodes, and to pretend that they do need it is just silly. As I said before, Joe's a really smart guy: he doesn't need to fall back on specious arguments like this.

Another strange argument Joe makes is this:
Meanwhile we are left with the indelible impression of a small market segment (CCS) that is not growing as fast as the cluster market as a whole (which means it may be shrinking in relative terms).
It's unclear exactly what he's trying to say. Is it that Windows servers aren't selling? Unlikely, since according to Linux-Watch.com, not only is Windows server revenue triple Linux server revenue, it's also growing faster.

Maybe he was saying that the growth in HPC isn't in the small market segment...but that isn't true, either. According to IDC, the greatest growth in the HPC market is in the capacity (under $1,000,000) segment.

He goes one step further on ease of use: he compares a one-line shell command (to list the tasks on a node) with a 30-line PowerShell script to point out how much easier Linux clusters are to manage. It's a faulty comparison, and Joe knows it. He chose a script that iterates through every node on a cluster, writing out all sorts of information about the node and the tasks currently running on it. He's comparing one apple to a bag of oranges, and complaining that the oranges are too heavy.

The fact that PowerShell is available for Compute Cluster Server is a very good thing--Windows has long lacked powerful scripting tools (I'm sure Joe will back me up on that), and PowerShell combines a good scripting tool with .NET functionality. Will it absolutely save keystrokes over comparable Linux tools? Beats me. Maybe not. But it will make it far easier to integrate CCS clusters with other .NET resources, and that's a huge thing for shops with Microsoft expertise in house.

Ah, Microsoft expertise. That's my final point.

In concentrating on acquisition cost and ignoring total cost of ownership, Joe is ignoring the thing that is selling the most CCS clusters: people who are already managing many Windows servers can easily manage CCS clusters.

The largest market for CCS is not organizations that have UNIX or Linux clusters. People who already have Linux expertise are already buying Linux clusters--Linux and UNIX have dominated the HPC space.

The people who are buying CCS clusters are people who don't have Linux experience. They've got Windows desktops. They've got Windows servers. They've got Active Directory and SharePoint and loads of .NET developers. And they want to add a cluster. What's the natural way for them to do that? Windows. I'm not going to try to pretend that CCS is objectively better than or easier than Linux--I think that depends on your personal experience and expertise. But, remember that outside of HPC, Linux's market penetration is very, very small. And not everyone finds it easy.

So for a small to medium size business (or a department within a large business looking at a cluster), add this cost in to your Linux cluster acquisition: the $100,000 you'll have to pay to hire a decent IT guy to run an OS you've never seen. Wow. TCO just got a lot higher for that cluster.

That's why CCS is valuable. Not because it's necessarily "better." Not because it's necessarily "easier."

Because it's familiar. Because it is very easy to integrate into a corporate infrastructure that already has so many Microsoft products. That's why Microsoft insists that CCS is bringing HPC to the masses.

I know Joe understands this--he's written well reasoned posts on the topic before. But when Joe starts making such specious arguments as he makes in this post, he's no better than the "marketing types" he derides.

Look, I'm not deriding Linux as an OS, or as an HPC OS. It's been very successful, and it will continue to have success. The fact is: if you're using UNIX or Linux, it probably doesn't make sense to port to Windows.

But if you're already using Windows, it certainly doesn't make sense to port to Linux.

Update 2007-07-17 4:39pm: John West at insideHPC has a much more balanced look and honest critique of the same article. I encourage you to check it out. Also, I forgot to include a link to the article Joe was writing about over at Search Data Center.


Update 2007-07-23 10:45am:
Joe responded to my post (and another at insideHPC that appeared before mine) with three long, well-thought-out posts.

I'm updating this rather than adding a new post (because neither of us wants his blog to become a series of posts about this subject only).

First of all, I'm sorry about my "Upper Peninsula" joke, Joe. I was attempting to make a joke (and take into account your state of residence)--if it came across as an ad hominem attack, I failed in my attempt.

Secondly, Joe did a TON of research into the "independent" TCO study I linked to. As it turns out, its data gathering and data analysis methods were suspect. I admit to lazy research: I did one Google search, read the summary, and linked. Joe summed it up nicely: "Try not to buy into such studies without thoroughly understanding them, and their limitations, and their applicability to your situation. Doing otherwise will leave you with egg on your face. Lets call this one done."

Third, he spends a long time acidly agreeing that I was correct about server growth (Microsoft server sales are growing faster than Linux). Remember, Microsoft is growing faster on much larger sales, meaning their absolute growth is much larger than Linux's. Also, please remember that I'm not trying to say "Linux is dead"--I was simply trying to refute Joe's assertion that "we are left with the indelible impression of a small market segment (CCS) that is not growing as fast as the cluster market as a whole (which means it may be shrinking in relative terms)." In your words, Joe, you were "wrong, so very wrong."

Fourth, he mentions the Mono project. I love the Mono project--Miguel de Icaza has a great thing going there--and I wish it were farther along. I wish Microsoft and Novell would put their "partnership" to good work, and fund this thing to completion. Unfortunately, Microsoft hasn't stepped up to the plate (I'm sure the internal politics are huge). This is a case where I think Microsoft is missing the boat. I can't wait until they wake up and heartily endorse this thing.

Finally, apples-to-apples: Joe defends his apples-to-oranges comparison of a command line tool with a scripting tool! If you want terse command line tools to work with CCS, then use the command line tools. He seems to think that PowerShell is the only way to interact with CCS, and it's just not. And if you prefer to use a third party tool like LSF, Platform sure claims it works on CCS. This is the thing I don't get. In the sake of being unbiased, Joe did hours of valuable research into the TCO studies--then complained about how complicated PowerShell was, as if he was unaware of the existence of CCS's command-line tools and hadn't done a 2-second Google search. It gives away his bias.

Joe was obviously ticked off by this post (which he called a "fisking," a term that I wasn't familiar with). My entire point was that Windows fits well for some people, Linux for others. As outlandish as that position may be, I maintain it's true.

Photo Credit: Kenn Kiser

Monday, July 16, 2007

Worldwide Partner Conference Wrap-up


John is taking some well-earned time off and has passed the WWPC-blogging mantle to me, so I'll do the wrap-up post.

As I alluded to in my previous post, the most rewarding part of a Partner Conference is the chance to meet with so many Microsofties in one place.

Of course, the keynotes were fun and flashy (and, for at least the third year in a row, the music was provided by the incomparable EB Fraley band—that guy has become the “other” face of the partner conference). They’re usually pretty darn rah-rah, and there is much to rah about this year (again). As Don Dodge noted last year, Microsoft is growing by approximately one Google per year. It’s phenomenal. Windows Server is not only selling more than Linux, it’s also growing faster than Linux. BizTalk sales were up 30% last year. We didn’t get any firm numbers (the fiscal year just ended), but they’re optimistic about another banner year.

Most of that cheering and backslapping about sales is to get the partners excited about selling more Microsoft products (96% of Microsoft sales happen through partners, and each dollar of Microsoft license is accompanied on average by about fifteen dollars of partner revenue—how’s that for an enormous ecosystem?). But for an ISV, that stuff isn’t as compelling as the sessions.

My favorite session was Chris Bernard and Chris Treadway’s session on Silverlight. Very impressive technology, and it was cool to see a bit of code. I would like to have seen a bit more, but then again I suppose I should have gone to TechEd if I wanted to see code. Wilhelmina Duyvestyn gave a good session on Windows Server 2008 (informative enough that I’ll do a separate post on that product later).

But, as I’ve said, the real value was in the networking. We went to a dinner for the ISV Award finalists, where John sat with Dan’l Lewin of the Emerging Business Team while I sat next to Justine White who manages marketing for the ISV Group. Chris Olsen and Naseem Tuffaha of the ISV sales and marketing team were there, too. We went to a reception sponsored by Ansys and Mathworks, and ran into Dieter Mai and Travis Hatmaker from the HPC Team. We stayed at the same hotel as Amy Lucia and her team—she’s the director of marketing for US ISV Strategy.

John had a 30 minute sit-down with Andy Lees, who heads up the Servers and Tools division. I ran into Gianpaolo Carraro, SaaS Architect, at the ISV party.

Add to those Microsoft meetings the countless encounters with partners—at meals, on buses, everywhere you turn.

John has complained many times about how difficult it is to use the partner online tools. It’s hard to find partners, it’s hard to find sales, it’s hard to do just about anything. But the three days of the Worldwide Partner Conference make all of that easy. I’ve said it before, and I’ll end with it now: anyone serious about working in this ecosystem simply can’t miss this conference.



Thursday, July 12, 2007

I Owe Allison Watson Lunch

Ok, not really. But if I run into her this year, I'll offer to buy her lunch.

Last year's Microsoft Worldwide Partner Conference was a logistical nightmare. The city of Boston just didn't seem capable of hosting 10,000 people: the 2-mile bus ride to the hotel routinely took an hour, and the convention center simply could not feed everyone (I went three consecutive days without getting lunch, which I blogged about in a post famously titled "Allison Watson Owes Me Lunch.")

This year, however, is night-and-day in comparison. Logistically, everything here has gone just about perfectly. The buses run frequently and quickly. The food in the convention center is not only abundant, but actually good. There are snacks (both healthy and unhealthy) available constantly.

And the city of Denver has definitely played great host. The employees at the convention center greet us with friendly smiles everywhere. This morning as we walked in to breakfast, they were actually cheering us in (I kept thinking, "Did Margo Day put them up to this?").

Even when the logistics suck like last year, Worldwide Partner Conference is worth attending for any Microsoft partner. I'll have one more WPC post where I'll talk about some of the things I've learned, but suffice it to say: the content is always good.

More important than the content, though, are the contacts. Microsoft sends thousands of people here, from Ballmer on down. They're surprisingly accessible, and they're all here to meet partners. You get good information, and you get good networking.

But when you add in an actual pleasurable experience in a great city, it makes it even more worthwhile. So if I bump into Allison in the halls of Redmond this year--I'll ask her if she wants me to treat her to lunch! Oh, and Allison: if Denver asks to have us back, tell them yes!


The Winners!


At some point in the not-too-distant future, I promise this blog will return to its roots of grid computing and .NET. Lately, I feel like most of my posts have been along the lines of "What's Going on at Digipede."

But this one is just too good to pass up.

Last night we were awarded the "ISV/Software Solutions, Innovation Partner of the Year" at the Microsoft Partner Awards ceremony. In other words, the most innovative software partner.

It was a huge rush and an absolute blast. They do their best to make it a glamorous event, and it felt terrific to get this acknowledgment from the folks in Redmond. They are very excited about what we're doing.

Congratulations to the team back at Digipede HQ--especially the guys over in development, who have worked so hard on this product.


Wednesday, July 11, 2007

CRN Article: Digipede is an "ISV You Must Know"

This hit the interwebs a couple of days ago, but I've been traveling and have fallen behind on my feed reading. What a great surprise!

Barbara Darrow of CRN published an article called "25 ISVs You Must Know." It's a list of 25 companies developing on the Microsoft platform that, well, in her words, you must know!

I was excited to see that we made the list (see page 2). And I was even more excited to read the quotes she got from one of our systems integrator partners, West Monroe Partners:

West Monroe Partners, a Chicago-based consultancy, is fully aboard. "The ability of this product to accelerate application performance for our customers is tremendous," said Nathan Ulery, technology solutions practice leader at West Monroe Partners.

The consultancy's first implementation required two days of training and a day of setup at the customer site. In that time, they "grid-enabled" the client's component that performed myriad complex transactions.
That's the kind of thing we've been saying for a couple of years now--get your grid up and running in a day, and get apps running in just a couple of days. We've had many customers experience the exact same thing: developers adapt their .NET apps to run on the grid in a matter of hours, not a matter of weeks. But many of our customers are too secretive to say this kind of thing in public.

So thanks to Barbara for the mention and thanks to Nate Ulery for the great quote!

Tonight John and I will attend the Microsoft Partner Awards dinner--and we'll find out if we got the ISV Innovation Partner of the Year Award. More on that, and on some of the announcements we've heard here at the Worldwide Partner Conference, later.

Update 4:54pm: Wandering around here at WWPC, I just realized that this article is a big feature in a handout here called SolutionsInc. Fun to see this kind of stuff in print!



Thursday, June 28, 2007

WWPC Bound--Are You?


For the third year in a row, I'll be heading to the Microsoft Worldwide Partner Conference. This year, it's in Denver. Last year's event was in Boston and, except for the deplorable lack of lunches, extremely worthwhile.

We are dedicated Microsoft partners, and I always enjoy meeting and networking with other partners. The sessions at these things are good, but the networking is great.

With that said, the tools they give the partners to do networking always suck. It's difficult to use the networking tools and the "tables" are always booked. As John pointed out, their various tools can't even agree on time zones. And, of course, I had to create another user name and password (oh, and a screen name, too--what is this? MySpace? Am I fourteen? I don't need a screen name to hide behind, I'm trying to actually meet business partners).

This year, they've added a new tool: "blogs." And, once again, they've mucked it all up. Rather than create an RSS aggregator (as PDC did in 2005), they've created their own blogging tool that allows you to have a blog hosted on and only available from their site. It's like someone who has no idea what blogging is about decided that they should have blogs. Guys: many of us already have blogs. We don't want some new, hidden-from-the-world, proprietary blogging platform. By the way: I did create a blog on their site (it's called "I Already Have a Blog"), and I'll be crossposting WPC-related material there.

Networking problems aside, I expect to have a great time in Denver. On Tuesday, I'll go to INVESCO for the US party (I really hope that they continue Margo Day's tradition of the high-five tunnel on the way in!). On Wednesday the 12th, John and I will be attending the Partner Awards (and, with any luck, picking up our Innovation ISV Partner of the Year Award).

So: are you going? If so, want to meet up? Who do you think the "Classic eighties band" will be?

Technorati tags:

Tuesday, June 26, 2007

Microsoft Partner DVD Available


Want to try over 100 add-ins and packages for Visual Studio 2005?

Microsoft has just released the 2007 VSIP Partner DVD. VSIP is the Visual Studio® Industry Partners program (disclosure: our CTO Robert W. Anderson sits on the advisory board), and the Partner DVD has 5 gigabytes' worth of products, add-ins, and evals that help make Visual Studio more productive.

Naturally, we put something on it: the Digipede Network Developer Edition. It's not a short-term eval--it's a full-featured Digipede Network-on-a-box.

It's free, but you have to order it rather than download it (did I mention that it's FIVE GIGABYTES?).

Order it here.

Technorati tags: , ,

Really. Small is better.

My feed reader caught a good post about SOA over at ZDNet this morning. In SOA business case: smaller is better, Joe McKendrick quotes heavy hitters from BEA and HP and makes the case for starting small with SOA.

There’s been quite a bit of debate as of late as to whether SOA should start small and incrementally, or be introduced from the top down as a transformative venture. The word out of the recent BEA Systems executives annual Arch 2 Arch customer conference in Nice, France, is ‘start small, and build from there.’
I've been preaching this for a while now, and lately I've gotten some firsthand experience seeing it in action.

We have several customers who are "starting small" with their SOA. In each case, they've identified a single application (or a small number of them) and begun by adapting those to run within the construct of an SOA.
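
To make "adapting an application to run as a service" concrete, here's roughly what that first, small step can look like in .NET 3.0 terms. This is a minimal sketch with invented names (IPricingService, PriceInstrument)--not any customer's code--just the shape of carving one piece of existing functionality out as a self-hosted WCF service.

using System;
using System.ServiceModel;

// The contract for the one piece of functionality being carved out.
[ServiceContract]
public interface IPricingService
{
    [OperationContract]
    double PriceInstrument(string instrumentId, double notional);
}

// Existing business logic, wrapped so other systems can call it as a service.
public class PricingService : IPricingService
{
    public double PriceInstrument(string instrumentId, double notional)
    {
        // Call into the code you already have; this stub just fakes a result.
        return notional * 0.0125;
    }
}

class Program
{
    static void Main()
    {
        // Self-host the service on one box; nothing else in the
        // enterprise has to change yet.
        using (ServiceHost host = new ServiceHost(typeof(PricingService),
            new Uri("http://localhost:8000/pricing")))
        {
            host.AddServiceEndpoint(typeof(IPricingService),
                new BasicHttpBinding(), "");
            host.Open();
            Console.WriteLine("Pricing service is running. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}

One contract, one service, one box--and you grow from there.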

Why start small?

The first and most obvious reason is budget. Starting small often allows for a budget as small as a few tens of thousands of dollars (including refactoring the app to run as a service, a software infrastructure, and hardware to run it on). Rearchitecting your enterprise's entire infrastructure often costs millions of dollars (yep, a TWO-order-of-magnitude difference).

Starting small also lets you learn lessons along the way. Your developers will better understand both SOA in general and how you've chosen to implement it on site. Your IT department will learn about implementation and deployment in incremental steps. And your users won't be confronted with a raft of new systems all at once.

Another great benefit of starting small is the ability to show incremental success. Some of our best customer successes have come from customers who started with very small implementations--as small as 10 nodes (of course, being a grid vendor, I measure the size of a project by the number of machines it's running on). As other users and developers see the success of a service running 10 times faster on the grid, they want a piece. They want to replicate that success themselves.

Rather than a corporate mandate to implement a scalable SOA, we have the users themselves clamoring for more applications on the grid. We have developers working on their own to adapt applications. Success begets success--even small success.

When IBM and their grid partners amble up to a CIO to talk SOA, everyone knows it's a multimillion-dollar project--and a huge risk. But when a department spends a fraction of its budget, SOA-ups an app, and paves the way for further success, it makes everyone look good.

By the way, speaking of looking good, our customer who started with a single service oriented app running on 10 nodes is now up to about 400 CPUs, and their grid is growing fast--this is definitely an example of success begetting success.

Technorati tags:

Wednesday, June 13, 2007

HPC in Finance: Real World Story

Marc Jacobs is the smartest guy whose blog you're not reading (unless, of course, you are).

He's one of those classics-majors-turned-IT-superstars with an incredibly rare combination of talents: he can dive in and understand deep technical problems, then explain them in a way that makes you feel like you're reading a DeLillo novel. I know many people who can boil a problem down so even the layperson can understand it--Marc goes way beyond that. He turns a technical hurdle into a piece of prose that you want to enjoy like you would a work of fiction.

Until recently, he developed distributed trade generation and portfolio optimization systems for an enormous hedge fund; now, he's at Lab49 and is also writing a book about developing distributed applications.

This guy knows distributed applications.

He began blogging here not long ago (I'm sure his readership is already killing mine), and he recently started a series of posts that any of my readers will find compelling: High-Performance Computing in Finance: A Customer’s Perspective. He's breaking it up into 7 parts, and he's already published parts 1, 2, and 3.

Check it out.

Technorati tags:

Tuesday, June 12, 2007

We're going to the finals!

I'm very proud to say that Digipede has been chosen as a finalist for Microsoft's ISV Innovation Partner of the Year! The awards will be handed out in July at the Worldwide Partner Conference in Denver. We attend every year, so I was going to be there anyway...but now I'm really, really excited!
Microsoft has hundreds of thousands of partners, and thousands attend the WWPC. I believe around 2,000 applied for Partner of the Year awards; it is an extreme honor to be named one of the three finalists, along with Fractal Edge and Tecnologia de Gerencia Comercial (from Brazil--after all, these are Worldwide partners!).

We've worked hard to extend the Microsoft stack in innovative ways, and it is great to see them acknowledge and appreciate it. We couldn't have come this far without lots of help from our friends throughout Microsoft's organization.

By the way, if you are an ISV developing on the Microsoft platform, you should absolutely join the Partner Program and attend WWPC. More than any other event, it can help you learn how to succeed in the partner ecosystem. Microsoft works hard to help its partners--but you have to work hard at it, too. This is the perfect place to learn how to do that...

...and maybe see us pick up an award while you're at it!

Technorati tags: , ,

Friday, June 08, 2007

Worst Name Ever: VSTO != VSTO


I have been using Excel as a vehicle for demonstrations since our product was in Alpha testing--after all, one of the greatest aspects of a .NET-based grid computing product is the ability to integrate well with ubiquitous tools like Excel.

With Visual Studio Tools for Office and Visual Studio 2005, it was a snap for me to build a "supercomputing" spreadsheet that performed analysis across a grid of many machines. I routinely give demonstrations where I show how, with only 20 lines of .NET code, a spreadsheet with a .NET add-in can be adapted to run on a grid. I love it, potential customers love it--everyone loves it.
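
If you haven't seen one of those demos, here's the general shape of it--a minimal sketch only, not the actual demo code or the real grid API. FakeGridClient and the placeholder math are mine, purely for illustration; the point is that the Excel-facing part just reads a range, hands a batch of work off, and writes results back.

using System;
using Excel = Microsoft.Office.Interop.Excel;

// Stand-in for a grid client. In a real demo this is where the batch
// gets farmed out to the grid; here it computes locally so the
// sketch is self-contained.
public class FakeGridClient
{
    public double[] RunBatch(double[] inputs)
    {
        double[] results = new double[inputs.Length];
        for (int i = 0; i < inputs.Length; i++)
            results[i] = Math.Pow(inputs[i], 2) + 1.0;   // placeholder "analysis"
        return results;
    }
}

public static class GridDemo
{
    // Read inputs from column B, run the "analysis" as a batch,
    // and write the results into column C.
    public static void RunAnalysisOnGrid(Excel.Worksheet sheet)
    {
        Excel.Range inputRange = sheet.get_Range("B2", "B101");
        int count = inputRange.Rows.Count;

        double[] inputs = new double[count];
        for (int i = 0; i < count; i++)
            inputs[i] = Convert.ToDouble(((Excel.Range)inputRange.Cells[i + 1, 1]).Value2);

        FakeGridClient grid = new FakeGridClient();
        double[] results = grid.RunBatch(inputs);

        for (int i = 0; i < count; i++)
            ((Excel.Range)sheet.Cells[i + 2, 3]).Value2 = results[i];
    }
}

Swap the stand-in for real grid submission calls and you're in the same ballpark as the demo: roughly 20 lines of plumbing around the calculation you already had.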

Last week, I got a new laptop (an HP Compaq nc8430 running Vista) with shiny new software--including Office 2007! After getting everything set up, one of my first jobs was to port our old Office 2003 demos to run in Office 2007. Well, to be clear, they already ran in 2007 in "compatibility mode," but I had been building them on my old laptop with Office 2003. I wanted to build them on my new machine.

So I installed Visual Studio 2005, Visual Studio 2005 Tools for Office, and the brand-spanking-new Visual Studio 2005 Tools for Office Second Edition for the Microsoft Office 2007 System. Phew.

And what happens when I try to open my project?
A compatible version of Excel 2003 is not installed on this computer

"A compatible version of Excel 2003 is not installed on this computer?" Ouch. Does this thing really mean to tell me that I need to install Office 2003 to open this project? That makes no sense, right? I mean, I know Excel 2007 can open my XLS file.

Well, a bunch of googling (er, searching) led me to this post by Martin Sawicki on the VSTO blog. Put your thinking cap on, and read this carefully:
The important thing to note here is that "Cypress", now officially known as Visual Studio 2005 Tools for Office Second Edition Beta, is NOT the "v3" of VSTO. It's a new product in its own right that will be available long before "v3". Now, the most confusing part perhaps is that despite its naming, VSTO 2005 Second Edition Beta is actually largely orthogonal to and independent of VSTO 2005. You can install VSTO 2005 SE Beta on top of VSTO 2005, but you can also install it on top of Visual Studio 2005 Professional, which doesn't contain any VSTO 2005 functionality. The "Second Edition" has its own unique feature set that does not overlap with VSTO 2005, as far as the design-time functionality goes. (It does, however, borrow a number of ideas from the "v3" CTPs, especially related to ribbon, task pane, and add-in support.)
Ignore all that stuff about betas (the post was written last September). The important part: Visual Studio Tools for Office 2005 SE is a completely different product from Visual Studio Tools for Office.

You can understand why I was confused.

I understand this is a big company, and I understand that naming things is hard. But Microsoft is approaching ridiculousness with this! Remember, this is coming from the company that released .NET 3.0--which isn't a new version of .NET at all, because it's really .NET 2.0 with some stuff added on top--while telling us a truly new version of .NET will come later.

But my problem here isn't with the name--it's that they seem to have discontinued the "old" Visual Studio Tools for Office line, leaving those of us who had a codebase written in it out in the cold. From what I can tell, I've got to rewrite my project if I want it to run in Office 2007 using the latest tools.

Naming the new product the same as the old one not only left me feeling abandoned but also gave me the mistaken impression that I'd be able to simply upgrade--when, instead, I've got to start from scratch with the new version.

I'll keep you posted as I delve further into this.

Technorati tags: ,