This has been a popular subject on Artima, e.g. here & here. Most of the discussion has revolved around syntax, simplicity, speed, etc. IMO, that is interesting but, ultimately, not what matters.
In order for a parallel programming language to take off and grab mindshare, it must be well integrated with low-cost GPUs from ATI and NVIDIA, because to take advantage of parallel programming you either need a million-dollar compute cluster or a $100 graphics card. My bet is on cheap.
A pure, elegant language that doesn't readily talk to CUDA will lose in the marketplace to some hack language that does. It will be Beta vs. VHS, or 68000 vs. 8088 revisited. So, Clojure, Scala, Fantom, and Groovy enthusiasts: write some GPU libraries!
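To make concrete what "readily talks to CUDA" means, here is a rough sketch of the plumbing even a trivial vector add requires in plain CUDA C: device allocation, host-to-device copies, a kernel launch, and a copy back. This is the boilerplate a GPU library for Clojure, Scala, Fantom, or Groovy would need to hide; the kernel and buffer names here are just illustrative.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host buffers.
    float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes), *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers and copies to the card.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch: one thread per element.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back and spot-check it.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);  // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

A language whose libraries let you express just the kernel and leave the allocation, copying, and launch bookkeeping to the runtime is the kind of integration that would win the mindshare argued for above.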