I was fascinated to discover a novel use for Sony's PlayStation 3 game consoles.
Dr. Gaurav Khanna and his colleagues at the University of Massachusetts Dartmouth have put together a network of 16 PS3s to form what they call the "gravity grid". I was at a loss to figure out why they'd use game consoles instead of "proper" computers for the purpose until I read their explanation:
The Sony PlayStation 3 has a number of unique features that make it particularly suited for scientific computation. To start with, the PS3 is an open platform, which essentially means that one can run different system software on it, for example, PowerPC Linux. Next, it has a revolutionary processor called the Cell processor, which was developed by Sony, IBM and Toshiba. This processor has a main CPU, called the PPU, and several special compute engines (six for the PS3), called SPUs, available for raw computation. Moreover, each SPU performs vector operations, which means it can compute on multiple data in a single step. Finally, its incredibly low cost makes it very attractive as a scientific computing node that is part of a cluster. In fact, it's highly plausible that the raw computing power per dollar that the PS3 offers is significantly higher than anything else on the market today!
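For the curious, here's what "compute on multiple data in a single step" looks like in code. This is a minimal sketch using the C intrinsics from IBM's Cell SDK (the header and intrinsic names come from that SDK, and it builds only with the SDK's spu-gcc compiler, so take it as an illustration, not code from the gravity grid project):

    /* SIMD on a Cell SPU: four additions in one instruction.
       Sketch only - builds with spu-gcc from IBM's Cell SDK,
       not with an ordinary PC toolchain. */
    #include <stdio.h>
    #include <spu_intrinsics.h>

    int main(void)
    {
        /* A "vector float" is a 128-bit register holding four floats. */
        vector float a = {  1.0f,  2.0f,  3.0f,  4.0f };
        vector float b = { 10.0f, 20.0f, 30.0f, 40.0f };

        /* One spu_add operates on all four lanes at once; a scalar
           CPU would need four separate add instructions. */
        vector float sum = spu_add(a, b);

        printf("%f %f %f %f\n",
               spu_extract(sum, 0), spu_extract(sum, 1),
               spu_extract(sum, 2), spu_extract(sum, 3));
        return 0;
    }

Multiply that four-at-a-time trick by six SPUs per console and sixteen consoles, and you can see where the cluster's raw throughput comes from.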
Well, color me impressed! You can read more about the team's use of PS3s in their research into "Binary Black Hole Coalescence using Perturbation Theory" here (and no, I have no idea what that means!).
It seems that Dr. Khanna isn't the only one to discover this novel use for the PS3. Dr. Todd Martinez, Professor of Chemistry at the University of Illinois, runs a cluster of IBM machines built around the same Cell chip technology as the PS3, and links them to graphics processor chips from Nvidia. Dr. Martinez's research examines:
... how molecules behave when you shine light on them - which has wide ramifications for the fields of agriculture, solar energy and the study of human vision.
"We have done tests of algorithms and Nvidia cards are four to five times faster than the Cell chip, which is 20 times faster than an ordinary high end computer," he says.
Because both technologies can be classified as "stream processors", they are highly suited to churning through massive volumes of data - unlike the general-purpose processors in ordinary computers.
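To make the "stream processor" idea concrete, here's the basic pattern in plain C (my own illustrative names, not code from either project): one small kernel applied independently to every element of a large stream. Because no element depends on any other, a stream processor can run hundreds of these iterations at once, where a general-purpose CPU walks through them one at a time.

    #include <stddef.h>

    /* The per-element kernel - a stand-in for the real per-element
       math (a pixel shade, a molecular term, a grid-point update). */
    static float kernel(float x)
    {
        return 2.0f * x + 1.0f;
    }

    /* Apply the kernel across the whole stream. Every iteration is
       independent, which is exactly the property stream processors
       like the Cell's SPUs and Nvidia's GPUs are built to exploit. */
    void process_stream(const float *in, float *out, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            out[i] = kernel(in[i]);
    }

A general-purpose CPU spends much of its silicon on caches and branch prediction for unpredictable workloads; a stream processor trades that away for many simple lanes all running the same kernel.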
"Some people think it's just about having a faster computer. They don't realise how big a change it is to do computing at your desk after accessing a computer in a room somewhere where you have to wait around for results.
"Of course it does cost less, but what needs to be recognized is that it also changes the way people think about problems when they are given a hundred times more computer power."
More about Dr. Martinez's work (and Dr. Khanna's) here.
I'm from the "old school" of computers. I started working with them in the 1970s using Siemens and Telefunken military systems and IBM System/360 and System/370 mainframes. I went on to work with progressively larger IBM mainframes, DEC PDP and VAX minicomputers, the first PCs, and so on. I can still recall when, in 1976, the Shell oil company subsidiary in South Africa upgraded its mainframe computer from 1MB of core memory (little magnetized rings wrapped around a wire framework) to 2MB.

[Photo: core memory (the predecessor of today's memory chip), magnified fifty times]
There were screams of outrage from the bean-counters, along the lines of "What does an oil company need with two megabytes of memory? You'll never use that much!" Well, I'm typing this post on a desktop computer with two gigabytes of memory - a thousand times more than that mainframe upgrade! I guess that answers the memory-expansion critics from thirty-plus years ago . . .
I've never used a games console, but as a former computer systems engineer, I'm definitely impressed by this evidence of their raw power. I might just have to get hold of one and play around with it to see what it can do.
Peter
2 comments:
I attended a microcomputer users group meeting about 1975 and overheard someone say that if he had 4 kilobytes of memory he could do ANYTHING...
Things like this are always fascinating. I remember the scare stories when the PS2 first came out about how Saddam Hussein (as opposed to rabid nerds) was responsible for the shortages: supposedly he had a program set up to network them for missile defense, or some similar variant depending on who was spreading the rumor (probably Sony themselves, given their marketing department).
More interesting to me is how the market has really gotten on board with this notion. Just three years ago, something like what Nvidia has done with the Quadro Plex and Tesla (GPUs and the bare bones to tie them together in SLI - one of which is almost certainly the product Dr. Martinez is using) would've been a colossal failure. Now not only is it marketable, but they're planning on models scaling up to 32 GPUs for the Quadro, and 1024 for the Tesla (last I heard; this may have changed in the last two or three months). I think it's fascinating that a notion like SLI, which at its inception had only one viable market - the extreme-performance computer gamer - has taken a tiny niche willing to pay a steep premium and grown into something of much wider practical use, while still rewarding that niche with bigger and bigger performance gains.