Tuesday, January 31, 2006

Slow and steady wins the race

I don't know if I've ever made a New Year's resolution in my life. They've always rung hollow for me--if I need to improve myself, I don't need to wait for the change of year to start doing it. In that sense, they seem more like a way to postpone self-improvement than to initiate it (e.g., the fellow who says to himself in mid-November "I really need to take off a few pounds; that'll be my New Year's resolution," then proceeds to gorge himself through two months of holidays before dieting for the first few weeks of the New Year).

Well, this year that changed. I finally found a resolution worth doing. Not for the first time in my life, I'm going to copy my older brother.

Last year, my brother Dave had a great resolution: on January 1 he did a pushup and a situp. On the 2nd, he did two pushups and two situps. And so on. By April, he was doing 100 of each per day. By July, 200 per day. And, of course, on December 31st, he did 365 pushups and 365 situps. And during the course of the year, his physique changed drastically. In September, when he turned 40, his wife gave a toast at his birthday dinner and said that he looked better then than he had in their entire relationship.

So I decided to do it this year. It's a fun resolution because it starts so easy (it felt almost silly when I got down on the floor and did one pushup; but I did it, and I was on my way). Even now, 31 days in, it's a very brief "workout," but it's starting to feel like an honest set of pushups (and crunches).

The sheer numbers involved are staggering. Over the course of the year, I'll do 66,795 pushups (you can figure this out in Excel, or you can use the fun method: 1 + 2 + ... + n = n * (n + 1) / 2). I know I'll be breaking them into sets eventually, but for now it's fun to see how long I'll be able to do them in one set.
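
For the non-Excel crowd, here's a quick C# sketch of the same arithmetic (nothing here but the triangular-number formula above; the class and method names are just for illustration):

    // Total pushups for the year: 1 + 2 + ... + n is the triangular number
    // n * (n + 1) / 2, so 365 days works out to 66,795.
    using System;

    class PushupCount
    {
        static void Main()
        {
            int days = 365;
            Console.WriteLine("Total for the year: {0:N0}", days * (days + 1) / 2); // 66,795

            // Running total through any given day -- e.g., day 31 (January 31) is 496.
            int day = 31;
            Console.WriteLine("Through day {0}: {1:N0}", day, day * (day + 1) / 2);
        }
    }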

What's making this even more fun is that a bunch of my friends are joining me: John, Robert, and Nathan from Digipede are doing it, and my old friend Marc. It's great having other people doing it, because we can harangue each other to make sure no one is sliding.

I'll do my 500th pushup and situp of the year tomorrow morning; I'll hit 1,000 on Valentine's Day and 10,000 on May 21st. On my birthday in August I'll do my 26,000th situp. And in the month of December alone I'll do over 10,000. I hope I (and all of my friends) can stick with it. I think the inspiration for me is to see what I can accomplish by dedication, tenacity, and an incremental increase in effort. And the next trick will be to apply those principles to other parts of my life (certainly working at a startup for the last two years feels similar!).

Duke Listens! : Weblog

Over at Paul Lamere's blog last week, he had a post about The simplest possible grid computing platform:

In his Ongoing blog Tim Bray mentions that he'll be giving a talk at this year's JavaOne called: Sigrid: The simplest possible grid computing platform. If I make it to JavaOne this year, I'll put this talk at the top of my list. There's a big gap right now in Grid Computing. There are Grid Computing APIs, but these seem to be designed for the traditional big iron apps like those used by chemical companies and Wall Street. It is hard to write grid apps with these APIs. In order to get more programmers and companies to think about moving their apps to the grid, it has to be easier to write grid apps. In particular this means:
  • Use a friendly programming language like Java
  • Be able to develop and test on a desktop system
  • Have a minimal API

That's a great idea, Tim. Of course, not everyone programs in Java, so that solution isn't for everyone.

I love the qualifications, though, because they apply perfectly to the Digipede Network. Friendly language? See my post from yesterday for a comparison of C# and Java (of course, you can use the Digipede Network in any language that has a .NET or COM interface). Test on a desktop system? I run the whole thing on my laptop routinely. Minimal API? If you haven't seen my webinar where I grid-enable an existing app in 20 lines of code, drop me a line and I'll invite you to the next one and show you the coolest API known to grid computing!

As soon as we get permission, we'll publish case studies about companies that have ported their systems to our APIs in less than a day!


Monday, January 30, 2006

(Nearly) VSLiveBlogging

It looks like Robert is LiveBlogging from VSLive (well, almost live).

I don't know how good his connectivity will be, but if you're interested in what's happening at VS Live SF, follow him at Expert Texture.


AgentMine.com on C# and Java

Over on AgentMine.com there is a great post comparing C# to Java in light of Paul Graham’s essay “Java’s Cover”:

In 2001, Paul Graham wrote an essay about Java in which he summed up his opinion of Java thusly:

Java seems like a stinker to me.

Graham then goes on to give a 12-point list of why he thinks Java stinks. I won't go into it point-by-point, because AgentMine does.

AgentMine then does something very interesting: he takes the same 12 points and applies them to C#. He looks at them to decide if, well, C# seems like a stinker. As it turns out, C# has a much better score than Java.

His final analysis:

The results:
Microsoft’s C# language should only have 25% of the total level of suck that Java has.

It's tongue-in-cheek, but it's very informative. Take a peek.


Monday, January 23, 2006

And the winner is...

I love awards ceremonies as much as the next guy.

Well, to tell you the truth, I don't love awards ceremonies. I habitually avoid the Oscars/Emmies/Grammies/Tonies/YouNameIties broadcasts unless someone really cool is throwing a party. In fact, I think I've been avoiding awards ceremonies ever since my band, Lawsuit, was nominated for a SAMMIE Award (Sacramento Area Music Award) and invited to play at the ceremony, then suffered the indignity of performing immediately before finding out that we didn't win. Apparently the booker liked us more than the voters.

Anyway, here's an award ceremony I am excited for:
CODiE Award 2006 Finalist
The Digipede Network has been named a finalist for a 2006 CODiE Award in the Distributed Computing Solution category!

For those of you not in the know, the CODiEs are awards presented by the Software & Information Industry Association (SIIA). This is their 21st year; they are the longest running, most prestigious independent award in the industry. Getting recognition from their panel of experts is fantastic.

The competition is fierce and widely varied: Everdream, Gigaspaces, Novell, and Solace Systems are all finalists as well.

Every SIIA member company gets a vote; the CODiE Awards Gala will be on May 16th (my mom's birthday!) at the St. Francis Hotel in San Francisco.

I hope that the wonderful, intelligent, good-looking SIIA voters (hey, a little brown nosing never hurt!) like what we're doing!

Sunday, January 22, 2006

More .NET SaaS

For a while now, I've been preaching the importance of both .NET and grid computing in the area of SaaS; they're both enabling technologies.

I'm always happy to find validation.

Over on the dotnetSaaS blog, Glen Cameron has a long quote from Romesh Wadhwani of Symphony Technology Group.

Romesh writes a lot about the software industry (some of which is right on, some of which I don't entirely agree with). One part I do like is this:

Enterprises want unified platforms and we're increasingly seeing a move towards the use of grid computing. But the grid is still nascent. There is a lot of infrastructure, underlying operating systems, middleware and visualization tools that are missing or not robust enough today.

There is a significant opportunity to develop the enabling tools and technology that accelerate the delivery of applications and solutions on grid computing platforms.

Absolutely correct. I'd argue that the essential middleware components are increasingly available. Adoption is starting to happen now.

And, on a side note: I hope that Sam Ramji knows that there's a blog out there called dotnetSaaS. Sam promotes SaaS for Microsoft's Emerging Business Team, and he'll be glad to see it being promoted!


Monday, January 16, 2006

Windows grid: cheaper than you think

In my post yesterday (Grids: All HPC? All *nix? Not anymore), I promised to follow up on one of the arguments used against putting a Microsoft OS on a compute grid: cost.

In Kim's eBig talk, one of the audience members explained why he thought that Microsoft was behind in grid computing, and he gave a concrete example: the cost involved in setting up a 256-node compute cluster. If you're paying $400 retail for each copy of Windows Server 2003, that adds $100,000 onto the price of your cluster (even $300 copies of XP Pro would run about $75K). As the audience member observed, that's nothing to sneeze at.

When you compare it to $0 for 256 copies of your favorite Linux flavor, the Microsoft solution does seem expensive.

However, on further reflection, it becomes clear that there are costs beyond the OS, and that OS cost alone is not a fair comparison.

First of all, the example assumes a purpose-built cluster. This goes against one of the primary reasons for grid computing: taking advantage of the computers that already exist. If you want 256 Linux boxes in a typical organization, you need to go buy them. But if you need the power of 256 Windows machines--you probably already own them! Because Windows is the dominant operating system, your organization probably has thousands of Windows machines that can contribute to your grid.

Even if you need to buy some new hardware, you can have an "extended" cluster featuring a combination of dedicated hardware (in your cluster) and shared hardware (underutilized servers and/or idle desktops). By using existing hardware, you save not only OS costs but hardware costs as well.

Furthermore, the example only counts the operating system cost--not the cost of any other aspects of the system. Some of the most popular *nix-based distributed computing solutions cost one to two thousand dollars per node! Sure, you saved $300 by getting a free OS--then you spent more than triple that on your grid solution. That dwarfs the cost of the OS.

And it doesn't count what is often the largest cost of all--the cost of setting up the grid. Many of the *nix solutions are what we like to call "thinly disguised consulting projects." They are so complicated that setting them up involves hundreds (or sometimes thousands) of hours of consulting time. Some of the big consulting companies have excellent grid computing practices--but I assure you, they're not cheap. Put a small team of consultants on your payroll at $200 an hour for a couple of months and watch how quickly they, too, outpace your OS costs.
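
To put rough numbers on that, here's a back-of-the-envelope sketch in C#. The per-node and hourly figures are the ballpark numbers quoted above; the assumption that half the Windows nodes already exist in-house is mine, purely for illustration:

    // Rough total-cost sketch for a 256-node grid (illustrative numbers only).
    using System;

    class GridCostSketch
    {
        static void Main()
        {
            int nodes = 256;

            // "Free OS" route: $0 for Linux, but $1,000-$2,000 per node for a typical
            // *nix grid solution, plus a consulting engagement to set it all up.
            double nixGridPerNode = 1500;            // midpoint of $1,000-$2,000
            double consulting = 200 * 40 * 8;        // $200/hr, ~40 hrs/week, ~2 months
            double nixTotal = nodes * nixGridPerNode + consulting;

            // Windows route: assume half the nodes are machines you already own (desktops
            // and underutilized servers), $400 per new OS license, and grid software at
            // under $200 per node, with no consulting project.
            int newNodes = nodes / 2;
            double windowsTotal = newNodes * 400 + nodes * 200;

            Console.WriteLine("*nix cluster (grid software + setup): {0:C0}", nixTotal);
            Console.WriteLine("Windows extended grid:                {0:C0}", windowsTotal);
        }
    }

The exact figures will vary wildly from shop to shop, of course; the point is simply that the OS line item is far from the whole story.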

So, are the Microsoft OSs too expensive for compute grids? Probably not. Most likely, you can get an extended cluster using some existing hardware (and OS), some new hardware, add the Digipede Network (at under $200 a node), and set it up without an expensive consulting project.

One more X-factor here: as far as I know, Microsoft has not yet announced pricing for its Compute Cluster Solution, which will be out later this year. Who knows? When that price is announced, it may make the OS decision even more economical...

Sunday, January 15, 2006

Grids: All HPC? All *nix? Not anymore.

Kim's eBig talk on Thursday night went very well. It was her first public speaking engagement as a Digipede evangelist, and I thought she did great.

Her audience was diverse and obviously very well versed in the concepts (and practices) of distributed computing.

One of the great things that Kim was able to do was to let the audience understand that distributed computing does not apply only to HPC or technical computing anymore. Even the attendees who had experience using grid systems or HPC systems before understood that, while distributed computing was once the hallmark of HPC, its applications today go far beyond technical computing.

One question that came up was why so much distributed computing occurs in the *nix OSs, and not on the Windows platform. A little discussion ensued, and the ideas that came forth were pretty accurate.

First, it was pointed out that much of the research into distributed computing has come from academia. Academics, of course, have always preferred UNIX and the Linux alternatives to Windows (I know that when I graduated from Berkeley in Computer Science in 1991, I had never used Windows at school).

Second, the commercial push to grid (as always, I'll alternate between the terms "grid" and "distributed" without going into the distinctions between the two) has come from firms with strong *nix leanings: IBM and Sun. Both have UNIX histories; more recently, of course, IBM has been pushing Linux.

The third reason that the group came up with was the price of the OS. If you are buying 256 boxes for a dedicated cluster, the OS becomes a large part of the cost.

There's no doubting the first two reasons. They're indicative of the history of grid computing, but things are changing. The third, though, is a bit of a red herring. Tomorrow I'll get into the reasons why.

Thursday, January 12, 2006

Something to do on a Thursday...

If you're in the Bay Area, you should mosey on over to Pleasanton to see Digipede's #1 Evangelista Kim Greenlee as she gives an eBig presentation entitled "Grid Computing for Windows."

Kim's going to give an introduction to and history of grid computing, but then she's going to drill down on Windows. How do the concepts of grid computing translate to a Windows environment? How can you take advantage of Windows machines?

If you're interested, register here.

Wednesday, January 04, 2006

Four Stars for Digipede Network

As I mentioned previously, Software Development Magazine published a review of the Digipede Network in the January edition. The verdict? Four stars!

Now they've got the article available online. Read the full article here.

On Demand in 2006

eWeek has been doing a series called "Innovations 2006 Analysis," in which it takes in-depth looks at technologies that will have a big impact in 2006. One of the technologies it highlighted was On-Demand Software, or Software as a Service.

The attention paid to SaaS sometimes confuses me a bit--SaaS has certainly been possible for 10 years now (my previous company, Energy Interactive, had a successful product called Energy Profiler Online, which has been sold as a service for nearly a decade).

So why the attention now?

The underlying technologies that make SaaS able to compete with traditional software offerings are now available. The biggest difference between 1996 and 2006? High-bandwidth internet availability. It seems archaic to think about now, but when we began selling EPO to electric utilities, one of our big concerns was the time it would take their users (who were often connected to the net via modem) to download the graphics we used for buttons!

Another change: the quality and security of OSs/web servers/databases/etc. No matter what platform you're talking about, great strides have been made in the security and quality of service available from the underlying tools. Microsoft's initial forays into these areas (early versions of IIS, NT, and SQL Server) did not provide the quality necessary for vendors to guarantee quality of service; the open-source community had some decent tools, but they were nascent at the time. In the ensuing decade, both camps have made significant improvements--Microsoft's tools have improved dramatically, and the open-source community has continued to improve Apache, MySQL, Linux, etc.

But I'll add one more innovation that has made SaaS a viable alternative to traditional software delivery: distributed computing. Key technologies have emerged in the last couple of years that make it easier to scale software across many machines; as noted in my previous post here, scalability is one of the keys to providing software online.

Back in 1997 when we were writing EPO, scalability was one of our biggest concerns--there was no easy way to take the compute-intensive operations inherent in the software and distribute them across many servers. We had to write that code ourselves. It was laborious work, and it wasn't really part of our core competency at the time.

Nowadays, products like the Digipede Network make distributed execution easy. Instead of spending months hardcoding solutions to allow their software to scale, SaaS developers can write a few lines of code that add inherent scalability to their product. That, in turn, enables people starting SaaS companies to spend their programmer dollars adding features to their software. They end up with a product that not only has a better feature set (because they spent more time and money on it), but is also more scalable and robust (because they aren't using a jury-rigged solution for distribution).

2006 may be the year of SaaS--but if that's true, then it's the year for distributed computing as well.