tag:blogger.com,1999:blog-3722233.post92562984050841205..comments2023-03-27T17:26:46.668-05:00Comments on Computational Complexity: Parallel ThinkingLance Fortnowhttp://www.blogger.com/profile/06752030912874378610noreply@blogger.comBlogger7125tag:blogger.com,1999:blog-3722233.post-52537411895796432832011-05-27T00:45:05.886-05:002011-05-27T00:45:05.886-05:00"Clean tradeoff" is an oxymoron.
Using..."Clean tradeoff" is an oxymoron. <br /><br />Using multiple processors allows more problems to be solved in a "reasonable time" (and other ambiguities). However, there will always be problems which don't fit into how the "tradeoff" was made. <br /><br />To give somewhat of a concrete example, Chinook can play a "perfect" game of checkers in that it will never make a mistake and lose. However, the game of checkers is not completely solved. There are just too many possible checker games to spin through them all.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-3722233.post-51356131638275201162010-05-26T07:08:20.972-05:002010-05-26T07:08:20.972-05:00What's the best survey/book/resource on parall...What's the best survey/book/resource on parallel algorithms (from a TCS perspective)?Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-3722233.post-29055922165150332682010-05-25T14:04:51.540-05:002010-05-25T14:04:51.540-05:00Paul Beame sums up the issues nicely.Paul Beame sums up the issues nicely.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-3722233.post-55516391207550120912010-05-25T10:27:20.294-05:002010-05-25T10:27:20.294-05:00I don't see how the goal is at all in conflict...I don't see how the goal is at all in conflict with the PRAM model. A lot of the later PRAM work was in "work-optimal" (or nearly so) parallel algorithms. If you have a work-optimal fast parallel algorithm, then using a simple slow-down simulation you get an algorithm with the parameters you suggest. <br /><br />It isn't the PRAM that is the issue; it's the complexity class NC that's the red herring. It focuses on an optimization that is not so critical at this point. (For example, n^{1/3} is only roughly (log n)^2 for n around a billion.) It is not clear what the right complexity view is here. 
There are natural P-complete problems where one can get speed-ups down to sqrt{n} time in a fairly work-efficient way.Paul Beamenoreply@blogger.comtag:blogger.com,1999:blog-3722233.post-67615332924654791992010-05-25T09:27:53.109-05:002010-05-25T09:27:53.109-05:001. What's the quantification over epsilon? For...1. What's the quantification over epsilon? For all 0<epsilon<1 there exists an algorithm?<br /><br />2. s-t reachability in a directed graph is the special case of shortest paths where all weights are zero or infinity. The best currently known algorithm is in a paper by Ullman and Yannakakis in SPAA 1990 titled "High-probability parallel transitive closure algorithms". On sparse graphs it has runtime t=O~(m/sqrt(p)), where m is the input size and p is the number of processors. (This works for any 1<=p<=n^{4/3}.) This is within logarithms of optimal when p=1, but the work p*t grows substantially with the number of processors. So even this special case of shortest paths does not have a known algorithm satisfying Guy's wish list.Unknownhttps://www.blogger.com/profile/01106301822827737278noreply@blogger.comtag:blogger.com,1999:blog-3722233.post-13353592531685554472010-05-25T08:34:45.033-05:002010-05-25T08:34:45.033-05:00This topic links up nicely with several of Dick Li...This topic links up nicely with several of Dick Lipton's recent topics ("The Shadow Case Model of Complexity" ... "Natural, Natural, and Natural") on the general theme: What measures of complexity are <i>natural</i>? <br /><br />My bibtex files have many quotations to the effect that ideas of naturality in mathematics have historically developed "by the treatment of many specific problems". <br /><br />So let's look at some concrete problems that nowadays are receiving plenty of attention: (1) migrate LAPACK onto GPUs ... and while we're at it, (2) migrate our conjugate gradient methods onto GPUs too ... 
and heck (3) migrate our sparse solvers and compressive sampling algorithms onto GPUs too.<br /><br />All of the preceding are receiving huge investments of engineering effort nowadays, for the simple reason that *lots* of folks want to solve practical problems (in dynamics, thermodynamics, and optimization) by evolving trajectories on state-spaces of (say) 10^6 dimensions or more, that are pulled-back from state-spaces of much larger numbers of dimensions ... and they want to solve these practical trajectory-tracing problems quickly, cheaply, and generally.<br /><br />The result is a dynamic confluence of our various notions of naturality in relation to our evolving notions of computational complexity: the technological naturality of multicore processing, the engineering naturality of simulating trajectories, the physical naturality of dynamic compression onto low-dimensional state-spaces, the geometric naturality of pullback, the algebraic naturality of symmetries in the dynamics, the informatic naturality of preconditioners.<br /><br />To be useful, new notions of computational complexity somehow have to represent the confluence of *ALL* of the preceding notions of naturality ... technological, physical, geometric, algebraic, and informatic (and more).<br /><br />Dovetailing these various notions of naturality sure ain't easy ... but it has global practical consequences ... and that's why this post ... and Dick Lipton's columns too ... 
are making a very important contribution by posing these wonderful questions.<br /><br />The point is that posing new definitions of computational complexity isn't solely an abstract problem; it is also a very practical problem, one intimately linked to our most fundamental ideas of naturality, and one that not just mathematicians care about, but practicing engineers too.John Sidleshttp://www.mrfm.orgnoreply@blogger.comtag:blogger.com,1999:blog-3722233.post-20825411487078155742010-05-25T07:26:12.747-05:002010-05-25T07:26:12.747-05:00Parallelism could be introduced in a framework wit...Parallelism could be introduced in a framework without the pitfalls you mention. For example, if there simply <i>isn't</i> any shared state, then you don't need to synchronize data access. Here's an explanation of <a href="http://ghcmutterings.wordpress.com/2009/10/06/parallelism-concurrency/" rel="nofollow">parallelism!=concurrency</a>. <a href="http://supertech.csail.mit.edu/cilk/" rel="nofollow">Cilk</a> might look more familiar, but it still has some explicit synchronization.rgrighttps://www.blogger.com/profile/02991214367108471744noreply@blogger.com
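The no-shared-state point in the last comment can be made concrete with a minimal Python sketch (illustrative only, not from the thread; the names `work` and `parallel_map` are made up here). Because each task is a pure function of its own input and the workers share nothing, no locks or synchronization are needed, and the parallel result is identical to the sequential one:

```python
# Deterministic parallelism without shared state: each worker process
# gets its own copy of its inputs, so no locking is required and the
# result matches a sequential map exactly.
from multiprocessing import Pool


def work(x):
    # A pure function: reads only its argument, writes no shared state.
    # (Stand-in for any independent per-item computation.)
    return x * x


def parallel_map(data, processes=4):
    # Pool.map partitions `data` across worker processes and gathers
    # results back in input order.
    with Pool(processes) as pool:
        return pool.map(work, data)


if __name__ == "__main__":
    data = list(range(10))
    # Same answer as the sequential version, by construction.
    assert parallel_map(data) == [work(x) for x in data]
```

This is the sense in which parallelism differs from concurrency: the synchronization is implicit in the fork/join structure of the map, roughly what a Cilk parallel loop over independent iterations would express.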