If you haven't seen the video showcasing OnLive's online gaming platform from this year's Game Developers Conference, you should check it out here.
OnLive's product is simple: deliver a high-end gaming experience of multi-platform games (PC, Xbox 360, PS3) through a browser. Supposedly, with very little porting, game developers can adapt their games to run on the OnLive server platform.
Users connect a controller to their PC, open up a browser, and faster than you can say "Lara Croft is hotter than the sun," they're playing real games via the internet. All of the controller's movements are sent via your high-bandwidth internet connection (at least 1.5 Mbps for SD, 5 Mbps for HD) to one of the servers in OnLive's farms, where the game is actually running. Only the video itself is sent back to your screen -- you don't need a GPU; you don't need a high-end system at all.
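The architecture boils down to a thin-client loop: tiny input packets go up the wire, and compressed video frames come back down. Here's a minimal sketch of that loop -- all names are hypothetical, zlib stands in for a real low-latency video codec, and a plain function call stands in for the network; this is an illustration of the idea, not OnLive's actual protocol:

```python
import zlib

def client_capture_input(tick):
    # Controller state is tiny -- just a few bytes per tick go upstream.
    return {"tick": tick, "buttons": 0b0001, "stick_x": 0.5}

def server_render_frame(input_state):
    # Stand-in for running the real game and GPU render in the server farm.
    width, height = 16, 9  # toy resolution
    return bytes((input_state["tick"] + x) % 256 for x in range(width * height))

def server_encode(frame):
    # A real system would use a low-latency video codec; zlib stands in here.
    return zlib.compress(frame)

def client_decode(payload):
    return zlib.decompress(payload)

def round_trip(tick):
    # One tick of the loop: input up, rendered and compressed video down.
    state = client_capture_input(tick)
    frame = server_render_frame(state)
    payload = server_encode(frame)
    return client_decode(payload)

if __name__ == "__main__":
    frame = round_trip(1)
    print(len(frame))  # the client only ever handles decoded video
```

The asymmetry is the whole trick: upstream traffic is a handful of bytes per tick, so all the latency and bandwidth pressure sits on the downstream video -- which is why the codec and the round-trip time, not client hardware, determine whether the experience feels playable.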
For people who want to game on their TV without a PC, OnLive is manufacturing a small box (the size of a paperback book) that plugs into your home network and accepts wired (USB) or wireless controllers.
In addition to offering games from multiple platforms without investing in lots of expensive hardware, OnLive claims to have some value add on top of the games themselves: improved social networking, the ability to save "brag clips" of your best moves, and the ability to watch other people play games are all built in.
It seems that latency would be a huge issue, even on those high-bandwidth connections -- they claim it's imperceptible, but only real game play will tell.
If their product does everything that they say it does, though -- this could be the "killer app" that cloud computing has been waiting for. This could quickly turn a multibillion-dollar market on its ear. Why invest hundreds of dollars in a console when there is an option that requires none (and could potentially play more games)? Why invest $60 per title in games?
While some industries (say, enterprise software) may have difficulty convincing customers to move data, try something new, and pay per use, the video game market will have no such hurdles. They're marketing to a generation that has never purchased a CD, that uses SaaS in the cloud, and that would love to avoid the sunk cost a console represents (I myself have a PS1 and a PS2 gathering dust downstairs).
OnLive has that rare opportunity to be groundbreaking in two industries (gaming and cloud computing) simultaneously.
And if it is the killer app, there is a strange side effect: while many people have been assuming that server vendors would be the first industry upended by the move to cloud computing, it may be the console manufacturers (Nintendo, Sony, and Microsoft) who are affected most.
Update 3/25/2009 4:16 - added the last paragraph
Thursday, March 26, 2009
Friday, March 13, 2009
I hadn't actually forgotten that I have a blog...but judging from the date of my last post, it certainly looks like I have.
Certainly 2008 was a tough year, and I ended it by entering that fraternity known as "fatherhood" -- so I've spent less time blogging than I should.
I have still been following the grid and cloud spaces quite closely, though, and plan to start crystallizing more of my thoughts here.
In the meantime, some Digipede-in-the-news: Penny Crosman at Wall Street & Technology wrote an article called Adapting Legacy Applications to Multicore Servers that featured Digipede very prominently.
I was glad to see it, because it's one of the benefits we've been touting for quite a while now: many enterprises have a decade (or more!) of legacy code that they run, and any multi-core/multi-machine strategy (whether it is internal to their data center or external in a cloud) absolutely has to address the issue of how to adapt that code to take advantage of newer hardware.
A Digipede customer is quoted in the article quite extensively, but my favorite quote by far is this one:
...staff have become almost obsessed with throwing applications on the grid because it's so easy to do.
Posted by Dan Ciruli at 4:50 PM
Monday, September 22, 2008
Congrats to Kyril Faenov, Ryan Waite, and the rest of the HPC team up in Redmond.
Today at the HPC on Wall Street show in New York, Microsoft announced that the second version of their high performance computing tool has been released to manufacturing.
I got to sit down with Kyril (who runs the HPC team) back at Super Computing. He talked about some of the new features coming in the latest version, and broke them into four categories:
- Scalability: They really want to address the top end of the market, which meant adding features to ensure that Windows clusters can scale as large as the big Linux clusters. That included addressing issues all over the place, from their MPI stack (by the way, they're seeing a 30% improvement in LINPACK) to their management tools.
- Ease of use: More is available out of the box, including better management tools, improved diagnostics, and reporting capabilities.
- Integration with other applications: The HPC team worked overtime to improve integration with all sorts of stuff, from Microsoft's own tools (like System Center and Active Directory) to shared storage from other vendors (like Panasas, Ibrix, and IBM) and standards groups (HPC Basic Profile, GGF, etc).
- Applications: Kyril mentioned that more and more "traditional" ISVs are now running on Windows. By "traditional," of course, he meant "traditionally running on Linux or Unix clusters."
Posted by Dan Ciruli at 10:41 AM
Thursday, August 28, 2008
Somehow I found a link to Wordle, a very cool tool that creates word cloud graphics based on text or URLs. Naturally, I ran http://westcoastgrid.blogspot.com through it to see what my grid cloud would look like...
...and promptly found out that my "grid cloud" is actually more of a "cloud cloud."
That last sentence points out several things I've noticed lately:
- Those of us who have been writing about Grid Computing are increasingly writing about cloud computing, and of course that's no surprise. Clouds are opening up the prospect of distributed computing to a much wider audience than ever, but using a cloud effectively may mean managing many machines at once -- or writing software that runs on many machines simultaneously. In either case, the grid computing industry has been thinking about (and solving!) these problems for years, albeit with slightly different implementations. If you want a firsthand look at the expertise these "grid" folks bring to "cloud" efforts, hop onto the Google Cloud Computing group and check out Rich Wellner's contributions.
- The term "Cloud" already has far too many meanings in the marketplace (another parallel to grid, come to think of it).
- If I'm writing more about cloud computing than grid computing, is it time to rename my blog?
In the meantime, I'm not changing the name of the blog. It may be an antiquated name, but at least people know where to find me.
Posted by Dan Ciruli at 11:06 AM
Friday, August 01, 2008
I love being quoted by that coffee-roasting, free-diving, Hawai'i-living .NET expert Larry O'Brien, so I was quite pleased to read my name in his latest SD Times column. He quoted a tweet (yes, I love Twitter) in which I quoted a fellow CloudCamp attendee saying "Designing your app to scale is guaranteed failure—it will take too long to write."
Unfortunately (and due primarily to the 140 character Twitter limit), Larry didn't realize that I didn't agree with the guy I was quoting -- I just found it amusing.
I've actually blogged quite a few times about designing scalability into an app. In a 2005 post (Of course scalability matters!), I said this:
Most importantly, [designing scalable software] means acknowledging the possibility, however remote, that you may actually succeed and build something that people eventually use. Many people.
I followed that up with a post a month later, and I was quite pleased to learn that Werner Vogels's viewpoint coincided with my own.
This point applies equally to those designing web sites and those planning on deploying SaaS. If you are going to make it available on the web, and you're not designing for scalability, then you just aren't planning for success: you're planning for failure.
So I wholeheartedly agree with Larry's sentiment:
However, I’m uncomfortable with the idea of dealing with scaling only when it becomes a problem. While laissez-faire attitudes have come to dominate code and design approaches, I still resist the idea of abandoning upfront architectural work.
In fact, when I overheard the comment at CloudCamp, my first reaction was this: the only reason building scalability into your product would hurt you is if your idea is so unoriginal that someone else is 5 minutes behind you.
So: thanks for the mention, Larry. I'm on your side.
(And I really am going to ride over to Sweet Maria's next week, so send me an e-mail)
Posted by Dan Ciruli at 10:22 AM
Tuesday, July 29, 2008
Sarah Perez at ReadWriteWeb has a pretty darn good post up about Microsoft's cloud efforts (at least the publicly announced cloud efforts).
While it's not terribly in-depth, it does highlight the breadth of Microsoft's efforts in cloud computing: the Connected OS, the Software Stack, the Developer Tools, and the Datacenter effort. I don't think I'm going out on too much of a limb to say that Microsoft is taking a broader approach to cloud than any other vendor out there.
Their vision of a "connected OS" is deeper than anything I see from any vendor. Their software stack spans consumer apps and enterprise apps.
Their development tools (many of which have yet to be announced) are broad and varied, and will continue to become richer. Always remember: Microsoft is a tools vendor first and foremost.
And, of course, they're building data centers at a pace that only they and Google can.
As with many efforts from Microsoft, I expect this to be ragged at times. It's a big company, and they are making this a huge effort. There may be conflicting offerings. There will definitely be failures.
But as Sarah points out, Ray Ozzie's Microsoft 2.0 is focused on this. So, mark my words: there will be some major successes.
Posted by Dan Ciruli at 9:01 AM
Monday, July 28, 2008
Perhaps partially inspired by Matias Wolsky's SaaS Taxonomy Map, my friend and colleague Robert W. Anderson has written a great post called The Cloud Services Stack -- Infrastructure. In his breakdown of the varying forms of services being offered in the cloud today, he proves himself to be the Linnaeus of XaaS products.
With so many "What is Cloud Computing" posts and articles on the net that have only served to blur rather than sharpen distinctions, I think his post should be required reading. Building on his earlier post (Cloud Services Continuum), he's accurately analyzing the landscape, providing a context that allows us to group (and therefore, ultimately, compare) the differing cloud offerings.
It's not just a useful exercise, it's a necessary exercise. So many posts (and even articles in mainstream publications) say things like "You've got lots of choices, including Amazon EC2 and Google's AppEngine." Those two offerings are so very different that they can hardly be considered competitors--yet because they're lumped into the very broad category of "Cloud," people keep mentioning them in the same breath.
Rob's diagram breaks out three main parts to the cloud services stack: SaaS (or, as he sometimes calls it, Applications as a Service), Platform as a Service, and Infrastructure as a Service. It's just as useless to try to compare an IaaS offering to a PaaS offering (e.g., EC2 and AppEngine) as it is to compare GMail and GoGrid -- they simply occupy different niches in the ecology.
But, interestingly, Rob's Venn diagram makes it clear that unlike the Linnaean taxonomy of the biological kingdom, the groups that make up cloud offerings are overlapping rather than hierarchical. For instance, several offerings that started as SaaS (NetSuite, Facebook, and SalesForce.com) have added PaaS functionality to their suites.
Similarly, Twitter and Identi.ca have SaaS offerings that are being pushed all the way down to the Infrastructure as a Service level, being used to provide a messaging layer in the cloud. BizTalk Labs' Workflow Services sits astride the PaaS/IaaS boundary. That's not to say that all offerings can be compared, but rather that an offering can have multiple facets.
The other thing I find quite interesting is the fragmented nature of the IaaS market -- Rob separates it generally into three submarkets: Storage, Virtual Hardware, and "Other." (The same could be said, I suppose, of the SaaS market, but that's a much more mature, better understood, and less interesting topic.) I'll have more to write about this particular market later, because I think there is lots of room for analysis here.
Public domain image from the Wikimedia Commons.
Posted by Dan Ciruli at 3:34 PM