Friday, September 30, 2005

K.I.S.S.? BILU!

Anyone who has ever studied UI (or UX, as they're calling it these days) knows the acronym K.I.S.S.: Keep It Simple, Stupid!

Grid computing guru Greg Nawrocki notes this in his latest post: Complexity...

is one of the primary barriers of widespread adoption. Quite simply, Grid computing needs to be a transparent technology before it is widespread. How many of us would be browsing the web if we had to hand assemble http queries in a telnet window?

All I can say to that is, "And how!"

With all of the attention that has been paid to flexibility, interoperability, multi-OS support, and a lot of the other great features that distributed computing systems have today, it's clear that one thing got dropped off the list: usability. I read a lot of blogs and newsgroups, and I am continually amazed at the number of people who spend their time messing with Perl scripts in order to run distributed processes (and this is on top of the grid software they're running). Perl scripts? That's the moral equivalent of hand assembling your http queries!

When we started designing the Digipede Network, we had a mantra: "Radically easier to Buy, Install, Learn and Use." We repeated it to ourselves over and over again. We concentrated on ease-of-use in every phase of the product--from how it would be sold and installed, to how developers would work with it, to how it would work for people who have never written a line of code in their lives.

Why? Because we believe that distributed computing has the potential for a much, much wider audience than it has gained so far. Greg points out that "It's refreshing to see the discussion expand beyond the traditional (pharma, financial services, energy) markets" by talking about the 451Group's new report on the use of grid in the Digital Media Industry. But at Digipede, we see potential beyond specific verticals. We think that distributed computing is a tool that can be used by anyone in enterprise computing.

Anyone can use distributed computing?

Yep.

If it's radically easier to BILU.

Thursday, September 29, 2005

Channel 9ized!

I made a video for the Show Off event at PDC--you could submit a video of up to 5 minutes in length, showing yourself doing something cool. (I blogged about the video process here, here, and here).

What did I do for my video? I took a spreadsheet that had some .NET code running behind it, and I grid-enabled it. It took about 20 lines of code using the Digipede Framework SDK in Visual Studio. Pretty sweet, actually. It went from running for over 4 minutes to running in about 20 or 30 seconds.

Channel 9 put the videos up; you can see a list of all of them here. If you want to jump straight to mine, it's over here.

Wednesday, September 28, 2005

Robert on ISVChalkTalk

My colleague Robert was interviewed by David "Doc" Holladay of ISV Chalk Talk at the PDC. A video of the conversation is available here.

Take a look at what Digipede's CTO has to say!

[update]
Robert posted about it...

Networks keep getting faster

I saw this post on Grid Today about the network that Force10 set up for the iGrid 2005 conference. It's pretty darn impressive stuff: the Force10 Terascale E1200 supports 56 line-rate 10 Gigabit and 1,260 Gigabit Ethernet ports.

It's a reminder of one reason why distributed computing continues to make better and better sense. CPU speeds increase about as quickly as Moore's law predicts (via increased transistor density), but bandwidth increases faster--and so does the amount of data available, as storage density (measured in bits per square inch) grows faster still.

This Scientific American article details how Moore's Law just can't keep up with the relative increases in bandwidth and storage.

This image shows a graph of the performance improvements of CPU speed (doubles every 18 months), data storage density (doubles every 12 months), and bandwidth (doubles every 9 months).

What does this tell us? CPUs are losing. Even though they keep getting faster and faster, the amount of data they can store locally and the speed at which they can receive it increase much faster than their ability to process it.

And one thing we've learned about data is that the more we can store, the more we do store. Today's databases are several orders of magnitude larger than those of just a couple of years ago. No matter what field you are in, you are gathering, storing, analyzing and reporting on much more data now than you ever were. Sensor networks have more sensors. Supply chains have RFID tracking each individual item. Megabyte databases have become gigabyte databases and gigabyte databases have become terabyte databases.

And the speed increases for storage and bandwidth are so much greater than those for CPUs that even multi-core technology won't help in the long run. It will provide an extra doubling or two, but that will only make up for a year or two of innovation on the storage/bandwidth side.
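Plugging those doubling periods into the compound-growth formula shows how quickly the gap opens, and how briefly a one-time boost from extra cores holds it off. The periods are the ones quoted above; the rest is arithmetic:

```python
import math

def growth(doubling_months: float, months: float) -> float:
    """Improvement factor after `months`, given a doubling period."""
    return 2 ** (months / doubling_months)

decade = 120  # months

cpu       = growth(18, decade)  # CPU speed doubles every 18 months
storage   = growth(12, decade)  # storage density: every 12 months
bandwidth = growth(9,  decade)  # bandwidth: every 9 months

# The bandwidth/CPU gap itself doubles every 18 months, since
# 2**(m/9) / 2**(m/18) == 2**(m/18). So a one-time 2x boost from an
# extra core is overtaken in 18 months:
catch_up_months = 18 * math.log2(2)
```

Over a decade on these rates, CPU speed grows about 100x while bandwidth grows about 10,000x; the extra doubling buys roughly a year and a half.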

So, where does that leave us? Distributing our processing, of course. The answer isn't faster chips: it's more chips brought to bear on the problem. It's the ability to coordinate many machines to work on those huge datasets. The increasing speed of the network and storage make it more practical than ever to move bits before acting on them so that more work can be done in parallel.

And all indications are that these trends will just continue. More storage, more bandwidth, and the need for more computes.

Grid up!

Sunday, September 25, 2005

Off to the MTC

Several of my colleagues and I will be off at the Microsoft Technology Center in Mountain View over the next few days, putting the Digipede Network on lots and lots of machines. It should be fun. Posts might be fewer and farther between, because I'll have less time to spend reading and pondering.

Technical note: I've tweaked my blog a bit; the center column is now wider (I'm wordier than I ever thought I'd be!). I also changed the rules for posting comments; you don't have to be a Blogger member anymore, but you do have to type a word for verification. Maybe that'll shake loose a few comments from a reader or two!

Friday, September 23, 2005

More from the Googleplex

Looking over the lists of Google's innovations that Stephen E. Arnold details in The Google Legacy, I found this quote:

Another key notion of speed at Google concerns writing computer programs to deploy to Google users. Google has developed short cuts to programming. An example is Google's creating a library of canned functions to make it easy for a programmer to optimize a program to run on the Googleplex computer. At Microsoft or Yahoo, a programmer must write some code or fiddle with code to get different pieces of a program to execute simultaneously using multiple processors. Not at Google. A programmer writes a program, uses a function from a Google bundle of canned routines, and lets the Googleplex handle the details. Google's programmers are freed from much of the tedium associated with writing software for a distributed, parallel computer.


A great idea!

We had the same idea when we created the Digipede Framework SDK. A developer who needs to scale or speed up an application (whether it's a web app, an n-tiered app, or something that handles many transactions) doesn't want to become a master of distributed computing. Sure, it's not rocket science these days to start a process on another machine. But what happens when machines go down? What happens when new machines come online? How do you install the right software, guarantee execution, and reassign tasks as necessary?

Quickly, this becomes much more complicated than the program the developer was improving in the first place.

This is exactly why the Digipede Framework SDK is so valuable. It frees developers from thinking about the vagaries and subtleties of moving processes around the network. It lets them spend their time working on their software. And it gives them the speedup or scalability they need in a fraction of the time it would have taken to do it themselves.

Thursday, September 22, 2005

What is a googleplex?

Any numbers hound (and Rob's four-year-old son) knows that a googol is a 1 with a hundred zeroes after it (10 to the hundredth power), and a googolplex is a one with a googol zeroes after it (10 to the googolth power).

After thinking about buying The Google Legacy by Stephen E. Arnold, I now know what a Googleplex is.

A dollar sign with a 180 after it.

$180? For a PDF? Wow. Um. Thanks for the free chapter.

[Update 8/2/2006]
I see lots of people who hit this post after searching on "What is a googleplex?". To you people, I have two things to say:

1. A googol is 10 to the hundredth power:

10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,
000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000.
A googolplex is 10 to the googolth power; I'm not going to write that number here. It's huge.
2. Don't type "what is a" into Google. It doesn't help your search. Just type "googleplex".
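For the curious, Python's arbitrary-precision integers make point 1 easy to check--and make clear why I'm not writing out point 1's big brother:

```python
googol = 10 ** 100                       # Python integers are arbitrary precision
assert str(googol) == "1" + "0" * 100    # a 1 followed by 100 zeros

# A googolplex is 10 ** googol. Written out in decimal it would have a
# googol-plus-one digits -- far too many to ever print.
googolplex_digit_count = googol + 1
```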

Googleplex: The Next Grid Thing?

All sorts of posts around the blogosphere have been talking about Stephen E. Arnold's new ebook, The Google Legacy.

His website summarizes it thusly:

  • Google's computing platform -- named the Googleplex by Arnold after the name given by the company to its Mountain View headquarters complex -- is a better (faster, cheaper and simpler to operate) computer processor and operating system than systems now available from competitors. Its price advantage is five or six to one over other hardware. Massively parallelized and distributed, its processing capability can be expanded indefinitely. As a virtual system or network utility, the user simply faces no need for backup or setup or restore.
  • Google has re-coded Linux to meet its needs. This recoding enables Google to deploy numerous current and future applications -- 50 or more -- without degrading performance.
  • Google products have the potential to be assembled into a version of MS Office -- including word processing -- and many other applications.

I'm only one chapter in, but I see flaws in his arguments (along with some great points and fascinating insight).

Google is a very interesting company that has always done things very differently than others, and they have consistently created fantastic technology.

However, this doesn't make them "about to unseat Microsoft from its throne."

Google has deployed its own version of Linux. That's great. But what they have is a purpose-built operating system. It may be built to accept many different applications (they're running dozens of public applications and who knows how many secret ones), but, in all likelihood, it was not built to run on every desktop, server and cluster node in an enterprise. It would surprise me greatly to find out that Google has any interest at all in making an all-purpose operating system. One of the hardest things about making any public OS is the hundreds of thousands of drivers that have to be written to handle peripherals; does Google want to get into the business of making sure every printer on earth works with their OS?

While Google has been making an increasing number of applications available to the public (and allowing web services to get at some of the data), they have not made a general application framework available to the public. And here is where I think Arnold goes too far in his appraisal of Google's reach. For an enterprise distributed computing system, enterprises want an OS that is deployed within their enterprise. They want an OS that they develop on all the time. And they want their IP (their software and their data) to stay within their walls.

Google is creating an architecture and OS that are stretching the capabilities of distributed computing and, without a doubt, proving that the power of commodity machines is immense and scalable. But I wouldn't go looking for enterprises to replace their existing platform operating systems with Googleplex anytime soon.

Wednesday, September 21, 2005

Sun's view of the Enterprise Grid

Paul Strong of Sun has a very good article in the most recent issue of ACM Queue. His subject (and the subject of the entire issue) is enterprise grid.

He makes some great points. He starts by looking at the enterprise data center.

Today’s data center could be viewed as a primordial enterprise grid. The resources form a networked fabric and the applications are disaggregated and distributed. Thus, the innate performance, scaling, resilience, and availability attributes of a grid are in some sense realized. The economies of scale, latent efficiency, and agility remain untapped, however, because of management silos. How can this be changed? And what is the difference between an enterprise grid and a traditional data center?

This is exactly right. The data center has great potential for grid-enablement. He points to Microsoft's Dynamic Systems Initiative and Sun's N1 Software as examples of how the data center is evolving. But Microsoft is quick to point out that DSI is an initiative, not a product:
The Dynamic Systems Initiative (DSI) is a commitment from Microsoft and its partners to help IT teams capture and use knowledge to design more manageable systems and automate ongoing operations, reducing costs and freeing up their time so they can proactively focus on what is truly important.

Similarly, N1 is software that helps run a datacenter.

What neither of them does is dynamically take advantage of the compute resources in a datacenter to ensure that the tasks that need compute power are receiving it, and that otherwise-idle machines are lending compute power when they can.

Paul's article spends a lot of time talking about why people might want a grid, then extols the benefits of virtualization, seamless use of heterogeneous resources, "holistic architecture," and abstraction. These are all long-term benefits that people are looking forward to; organizations like the Enterprise Grid Alliance and the Global Grid Forum are creating standards to ensure that these goals become reality.

However, those goals are fairly abstract and, for most organizations, not the place to start experimenting with grid computing. Above all, the promise of grid is what Paul Strong notes in the first page of his article:
You can apply far more resources to a particular workload across a network than you ever could within a single, traditional computer. This may result in greater throughput for a transactional application or perhaps a shorter completion time for a computationally intensive workload.

Oddly enough, this is in the section of the article entitled "Hype." Strange, because this is the promise of grid that can actually be realized today. Grid computing is about making things go faster, and people are doing this now. It isn't hype. It's been happening for a few years with a few different Linux and UNIX solutions; it's happening now on Windows (thanks to products like the Digipede Network).

I look forward to the hype of grid computing--millions of PCs available just by plugging a computer in the wall, massive virtualization, seamless repurposing of millions of heterogeneous machines. But I also like where grid is now: tapping dozens, hundreds, or thousands of machines in the enterprise, making slow things happen faster.

Tuesday, September 20, 2005

Beware the Borg!

This entry on the A.Word.A.Day mailing list led to a discussion of collective nouns (as it turns out, a sounder is a group of wild boars).

It reminded me of a discussion we had at PDC. A session had just let out, the Expo Hall had just opened, and about a hundred people were converging on us. Someone said "Here come the geeks!" (not in a pejorative sense, of course--this is a conference where it's cool to be a geek). That, in turn, led to a spirited discussion of what the collective noun is for a group of geeks!

The answer (and, yes, I believe it's the answer):

A Borg of geeks.

Any better ideas?

Monday, September 19, 2005

PDC Round up, WCG Style

I know there have been about 10,000 PDC roundups posted so far (how long until the bloggers outnumber the non-bloggers at an event like this?), but I thought I'd write a post to talk about how the various Microsoft products announced affect grid computing, West Coast style.

Windows Workflow Foundation
Workflow has a slightly different meaning in distributed computing than it does in enterprise computing. Often, workflow in enterprises has a lot of human interaction (an order generates an e-mail, it needs to be approved by a manager, the order is forwarded to the warehouse, etc.). In distributed computing, it tends to have more to do with the interdependencies between the tasks within a job and between jobs themselves. Paul Andrew, the Technical Product Manager for WWF, has a post here that has all of the WWF presentations from PDC. I'm going to look those over tonight and tomorrow; I look forward to knowing more about this.

Windows Server 2003 R2
Bob Muglia provided details on this release, which will be out this year. In addition to .NET 2.0 and WSE 3.0 (which we already knew about), this release is going to include MMC 3.0--and this will enable managed code snap-ins to MMC. Having managed code snap-ins opens a world of opportunity; we'll have to consider whether we want to hop on that bandwagon.

Compute Cluster Solution: CCE + CCP = CCS
This was our first hands-on experience with Microsoft's toolkit; it was great to play with it in the lab. It was also good to hear them explain CCE/CCS/CCP (Compute Cluster Edition is the OS [which is complete], Compute Cluster Pack is the toolkit, and Compute Cluster Solution is the combination of the two). CCS is going to put Microsoft squarely into competition with Linux in scientific computing (a market that has largely been ceded to Linux to this point). I can't wait for this to come out; Microsoft's success in HPC can only help us. Of course, by the time CCS is released commercially (2006Q2), Digipede will already be a leader in enterprise distributed computing.

ISV Chalk Talk
Am I really excited about the existence of a new blog? I am. Because whenever Microsoft announces more support for ISVs, I think it's a great idea.

There really is a bunch more. I'm going to have to make this (at least) two posts.

Friday, September 16, 2005

Grid Computing: Free (the) Enterprise!

We spent a lot of time at PDC explaining to people what "grid computing" is. Sometimes, we used the term "distributed computing" (more on that in a later post).

But even for those who know what grid computing means, we often had to explain that we weren't necessarily talking about HPC. Microsoft has an OS and a set of tools for that. The type of application that needs HPC is generally scientific or engineering in nature; they are "chatty" algorithms that require messaging technology such as MPI, and they frequently require very high connection speeds (e.g., Infiniband).

However, there is a much larger universe of problems that can be addressed by using distributed computing. At Digipede, none of our customers has addressed a traditional HPC problem. Our customers have been using the Digipede Network to solve problems that are much more "enterprise computing" than "HPC."

There are many things that go on in the enterprise that can happen in parallel: a perfect example is report writing. I don't know if anyone could name a company that doesn't have report generation built on top of databases. Almost always, the report generation involves transforming the data into a user-consumable format. Invariably, that's a compute-intensive operation (sometimes lasting minutes, sometimes lasting seconds). Even reports that only take a few seconds to generate can become compute hogs if there are lots of them.

And this is just the type of thing that can be distributed. If you need to generate 1,000 reports, why not have a hundred (or more!) PCs working on them? Even if each report only takes a couple of seconds, that's over a half-hour on one machine. Distributing that across tens or hundreds of machines uses existing hardware efficiently (in other words, there's no need to buy new hardware to scale your report generation), and it makes employees more efficient because they no longer have to wait for their reports.
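The back-of-envelope arithmetic behind that paragraph:

```python
reports = 1_000
seconds_each = 2          # even a report that only takes a couple of seconds
machines = 100

serial_minutes = reports * seconds_each / 60          # one machine, back to back
parallel_seconds = reports * seconds_each / machines  # ignoring scheduling overhead

# serial_minutes is ~33 -- "over a half-hour" on one machine;
# parallel_seconds is 20 -- the same work spread across 100 PCs.
```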

That's a simple example, but a clear indicator of how enterprise computing can benefit from distributed execution. I'll have more examples over the coming days.

Thursday, September 15, 2005

Two for the Show

The second day at PDC was even more exciting than the first.

This is a great conference for exhibitors--the best I've ever seen. I've been in exhibit halls before where the number of exhibitors seemed to be equal to the number of attendees; in those conferences, exhibitors are practically dragging people into the booths. Here, there are about 10,000 attendees (or so I've heard) and probably around 100 booths. The aisles in here are frequently packed. We've had so many people come by the booth. Surprisingly, many of them aren't ISVs--most seem to be "in-house" developers (or, as we think of them, potential customers).

Yesterday we had some great visitors who weren't potential customers. Scoble stopped by. (That guy always seems to have a crowd around him: first of all, he knows everyone; second, everyone who doesn't know him wants to know him, so there's always a gaggle of people around him. He's a rock star.) Anyway, he stopped by briefly and was glad to hear that our SDK is coming out. He may stop by with his camera today. That would be awesome.

Then, at the end of the day, Jim Gray stopped by. Jim Gray is positively a luminary in the field of distributed computing. His writings are very instructive. He happened by the booth when I was the only Digipeder here, so I had him all to myself. It was a great 15 minutes or so. I told him about the system in great detail (we've talked to him several times before, but not when we had a computer with us and could show diagrams, etc). I did a demo of "Digipede-enabling" the code behind a spreadsheet. He loved it. He talked about how chip manufacturers will soon be putting 4 cores to a chip (or more), and then manufacturers will be putting 4 chips in a box. Now you've got 16 or more processors; how do you keep them busy?

He went on to say that "no one" is thinking about how programmers will take advantage of that many processors--it's a large enough number that it's difficult to manage that many threads. However, he said that he thought that our Framework is the perfect programming model. It allows developers to think the way they already think--in terms of objects. We already have the ability to tell a machine to "start as many objects as you have CPUs."

Suffice it to say, in a world where he feels that the development tools are at risk of falling behind the hardware trends, he called Digipede "one of the only points of light" in this area. It was high praise, and it meant a lot coming from him.

Wednesday, September 14, 2005

In a State of REST

In catching up on <savas:weblog /> this morning, I came across his post about a paper by Ian Foster (well, Savas is a little humble here; he claims it's by Dr. Foster, then later owns up to the fact that he's a co-author). The paper is about modeling state for web services.

It's very serendipitous, because I had a potential customer walk into the booth and ask me a question about this very thing. They have an app they've written that has a web-service interface. It's (sometimes, at least) a very compute-intensive application. They'd like to be able to scale it fairly widely, and they'd like to take advantage of idle resources. Their concern is about state.

I haven't had time to read the paper in depth yet--I'm getting ready for another day at PDC. I'm looking forward to it, though. Dr. Foster is one of the world leaders in this field; I can always benefit from reading what he has to say.

Tuesday, September 13, 2005

One down...

The first full day at PDC is complete!

Smart folks abound
It's a great gathering of really, really smart people. More than at any conference I've ever been to (and it feels like I've been to about a million of them), the people I meet are well informed about technology. The people who stop by the booth ask great questions, and the other booths I stop by have interesting products. I guess it's a self-selecting group: only people who are interested in what's coming next in technology attend a conference like PDC, so of course everyone here is interested in new technology. It's just nice to have a conference full of smart people.

It's about time for a chalk talk!
Rob did an interview for Doc Holladay's ISV Chalk Talk site. "Doc" has started a blog about "cutting edge tech trends," so it's cool that he asked Rob to be on. It's also great that he's got a blog that's aimed at ISVs. After attending the Worldwide Partner Conference I kind of felt like the ISVs get ignored in the huge partner ecosystem. Doc shows that Microsoft is listening to ISVs--we need places to go to find each other, too. It's going to be a great site--blog, forums, videos, podcasts--it'll be full of content.

Compute Cluster Solution
I had a good talk with Ryan Waite, Program Manager for the Compute Cluster Solution. They released their product to beta today, and they're pretty excited about it. He showed us their COM API, and talked a bit about their scheduler and its features. We talked about how the Digipede Network would work as a companion product to CCS. Most of the problems we solve are different than the problems you'd use CCS to solve, but it will be great to offer the products together so that CCS users can wring the most out of their cluster while taking advantage of all of the other machines in their enterprise.

One more cool thing
I think the coolest announcement from Microsoft I've read about (and it's hard to keep up; there are tons of them) is LINQ (Language Integrated Query). It

allows query expressions to benefit from the rich metadata, compile-time syntax checking, static typing and IntelliSense that was previously available only to imperative code.

In other words, you can write queries directly in your C# (or other languages); you get all of the benefits of fully typed variables. Compare that to how you structured queries before: write SQL queries as strings in your code, and have no possibility of checking syntax or types until runtime. It's awesome.
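LINQ itself is a C# (and VB) feature, but the contrast drawn above--a query as an opaque string versus a query the language itself understands--can be illustrated in any language. Here's a rough Python analog (Python only checks names at runtime, so this mimics the shape of the idea, not the compile-time checking):

```python
orders = [
    {"id": 1, "total": 250.0},
    {"id": 2, "total": 75.0},
    {"id": 3, "total": 410.0},
]

# The old way: the query is an opaque string. A typo ("totaal", "ordres")
# won't surface until the database rejects it at runtime.
sql = "SELECT id FROM orders WHERE total > 100"

# The LINQ-style way: the query is ordinary code, so tooling can check
# names and types up front (in C#, at compile time, with IntelliSense).
big_orders = [o["id"] for o in orders if o["total"] > 100]
```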

All dressed up...

...and ready to go!

PDC 2005 is underway! Right now, the throngs are off in the West Hall listening to Bill G. give his keynote. Over here in the Big Room, all of us exhibitors are doing the last minute preparation.

Strange networking problems here. We were here until about 9 last night and couldn't get our VPN or SSH2 working. We're not sure why; supposedly, any NAT-T compliant VPN client should work. Neither our networking experts nor the team here can figure it out.

With his typical stroke of genius, Rob suggested we try SSH1. Boom! Everything worked.

Our neighbors here at booth 123 are Odyssey Software, SavvySoft (who makes a really interesting Excel tool), Codesmith, and Sun.

Sun, here to promote the interoperability between Java and .NET, are giving away t-shirts that say "I went to PDC and got this Java t-shirt from Sun." Kind of funny, because we're giving away postcards that show a guy in a t-shirt that says "My CTO went to PDC 2005 and all I got was radically improved performance." Different marketing budgets.

Of course, with our postcard you can get a free copy of the software, which may not be a flashy tchotchke but is a hell of a lot more useful!

Monday, September 12, 2005

PDC 2005: The Darkness

No, "The Darkness" is not a reference to the band that tried to bring back heavy metal hair last year. It's a reference to the blackout at PDC this afternoon.

Like many other exhibitors, I was in here setting up my booth at about 1:00 pm, when all the lights went out. Of course, the folks here in the exhibit hall had the same reaction that has occurred in every single blackout I've ever been in since first grade: a big cheer went up. It was followed by a strange silence, wherein the collective group was thinking "Umm...what do we do now?" Within about 30 seconds, a backup system turned on and we had dim light again. It wasn't light enough to read a book, but I kept assembling my booth.

Hilariously, about 10 minutes later, there was an announcement, explaining that there was a power outage.

I took off to get some food--not so easy in this neighborhood--and saw that the whole area was without power. By the time I bought a burrito (in a darkened restaurant that was persevering without power) and got back, the power had been restored.

The A/V department arrived and delivered a beautiful 42" monitor. Unfortunately, it came with a different stand than the one we ordered. Fortunately, the stand that they delivered works better with our booth than the one we ordered.

The headache of the hour is that we can't connect to our VPN through their firewall right now. Of course, our software is distributed computing software. I didn't bring a stack of servers with me; I decided it would be easier to just VPN to my stack of servers in Oakland. That's going to be difficult if I can't VPN anywhere! I'm sure we'll be able to figure it out.

Sunday, September 11, 2005

Kyril Faenov at PDC (& West Coast Grid)

Kyril Faenov, the product manager for Compute Cluster Edition, is giving a session at PDC. The description is:

This session provides an overview of the Windows Server Compute Cluster Solution and guidance on designing applications that can exploit parallel processing capability to address high-performance scenarios. We cover parallel programming techniques, job scheduler integration, message passing interfaces and related concepts. See how you can exploit cluster processing to deliver the performance your customers need.

I wish I could go; I'll be stuck in our booth. My colleague Robert will be there for sure.

I'm really excited for CCE to come out; it's a critical move for Microsoft in the cluster market--and I think the Digipede Network is, too.

Windows CCE will address some of the big technical and license issues that have made Linux dominant over Microsoft in the cluster space. With it, Microsoft will start to make significant inroads in that market.

The Digipede Network will complement it well; we have talked for a couple of years about the "extended cluster," or "extending the cluster to the enterprise." For people who would like to, say, combine a 64-node cluster with the power of several hundred desktops, the Digipede Network will give them a way to take advantage of all of that power.

That's one of the big advantages of West Coast Grid computing--you don't need a different operating system on your cluster than on the rest of your machines. And you certainly don't need to find a certain flavor of a certain operating system, either. That happens a lot with Linux clusters: "This software works with this flavor of the OS, but if you want to use it with that other flavor, you're going to have to recompile and re-link with different libraries."

That seems like the "bad old days" of computing--dedicated, purpose-built hardware for a particular application. It's not flexible, and it's not extensible.

Friday, September 09, 2005

Ready for PDC!

We can't wait to get down to PDC 2005. We finished designing our booth (we found a great company who was terrific to work with). That should be, as they say, in the mail.

We also have some cool new sample applications we'll be showing in PDC.

I finally finished our video for the Show Off event. It was a ton of work; I hope they like what we did!

I actually had some problems with Visual Communicator. Their support staff was extremely responsive, but they couldn't figure out the problem. I had difficulties with the audio getting really messed up and jerky. Finally, my co-workers Kim and Nathan suggested something really obvious: defrag my hard drive! Of course, we're talking about some huge files here--the five minutes of video is approximately a gig--and there's no way I had a contiguous gig on my hard drive.

That made an improvement. I then cut the video into pieces so I was only dealing with files of about 500 megs; that did the trick.

The nice folks at Show Off had granted me an extension until this morning, and I just finished the video (of course, compressing it to high-quality WMV took about an hour--we need to talk to the Serious Magic folks about Digipede-enabling their software!).

I'm really looking forward to PDC. I'll be blogging from LA next week, giving updates.

And then I'll be able to start talking about what this blog is really all about: West Coast Grid!

Thursday, September 01, 2005

Exhibitor Passes Needed!

As anyone going to PDC knows, they've been sold out for a while now. Trouble is, we need two more exhibitor passes!

Anyone have a line on how to track some down?

We're on the waiting list, but I'd love a way to circumvent it if it's possible!

Three ideas to Show Off

We have three good ideas for our Show Off video.

First--a fictional story involving, of course, a .NET programmer whose software runs too slowly, a pointy-haired boss who demands that it run faster, and a genius who suggests increasing application performance by using the Digipede Network.

Second--a mock news report, where an anchor reports on an impending tragedy (of course, a .NET developer whose software runs too slowly), with on-the-scene reports as another programmer (or team? hero?) suggests using the Digipede Network.

Third--an infomercial for the Digipede Network.

The first is the type of thing I've done many times before, but it's harder to imagine it working on a technical topic. The second is particularly well suited for the Visual Communicator software we're going to use to put it together. And the third has a few really funny angles (then again, it's always really hard to do comedy right).

One week left to finish this thing, and I'm travelling for the long weekend. I might have some late nights next week!