This has been a popular subject on Artima, e.g. here & here. Most of the discussion has revolved around syntax, simplicity, speed, etc. IMO, this is interesting but ultimately not important.
For a parallel programming language to take off and grab mindshare, it must be well integrated with low-cost GPUs from ATI and NVIDIA. Ultimately, to take advantage of parallel programming, you either need a million-dollar compute cluster or a $100 graphics card. My bet is on cheap.
A pure, elegant language that doesn't readily talk to CUDA will lose in the marketplace to some hack language that does. It will be Beta vs. VHS, or the 68000 vs. the 8088, revisited. So Clojure, Scala, Fantom, and Groovy enthusiasts: write some GPU libraries!
Sunday, September 26, 2010
This is only true for apps that talk to the GPU frequently. In fact, it's only true for low-level layers of apps that talk to the GPU. Many apps (including all Web apps) don't talk to the GPU themselves at all; they let the client's Web browser do that for them.
Maybe that's true today, but I could imagine a future web server app using GPUs to, say, create simple HTML output.
Also, I'm not talking about doing actual graphics on the GPU, but about doing massively parallel calculations or algorithms.
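To make that concrete, here is a minimal sketch (not part of the original post or comments) of the kind of non-graphics, massively parallel calculation being described: a SAXPY computation (y = a*x + y) in CUDA C. The kernel name, array size, and launch parameters are illustrative assumptions.

```cuda
// Illustrative sketch: one GPU thread per array element, no graphics involved.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) y[i] = a * x[i] + y[i];              // each thread handles one element
}

int main() {
    const int n = 1 << 20;                 // ~1 million elements (assumed size)
    const size_t bytes = n * sizeof(float);

    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);    // launch ~1M parallel threads

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);                   // expect 4.0

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

This host-plus-kernel pattern is roughly what a GPU library for Clojure, Scala, or any other JVM language would have to drive through CUDA under the hood.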