Comments on Bad Concurrency: Troubles with Parallelism

----
Michael Barker (2011-04-05 21:35:45):

> If the sequential version of the algorithm that uses the fold right operation (O(1) concatenation operation) could be scaled in a linear fashion across the 16 cores I had available, it would still only have ~2/3 of the performance of the imperative version.

The sequential implementation ran at 73 ops/sec; if that scaled linearly across 16 cores it would run at 1168 ops/sec. The imperative version ran at 1886 ops/sec. 1168 / 1886 * 100% = 62%, which is fairly close to 2/3.

> You have a string of length N, and K cores. Split the string in K pieces, use the sequential, imperative algorithm on each in parallel: this takes O(N/K). Then sequentially perform K joins at the cost of O(1) each (joins can also be parallelised, but we don't need it). We get O(N/K + K), no? Which means for very large N and K we gain a lot with this algorithm compared to using the O(N) sequential one.

Maybe. Most algorithm performance is dominated by the number of cache misses that occur, so if the algorithm is able to balance the cache miss costs across the cores then it will probably work, but that's harder than it sounds. The code and test framework are up on GitHub (https://github.com/mikeb01/jpr11-dojo), so feel free to knock up an implementation and post the results.
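The split-in-K-pieces scheme quoted above can be sketched for the word-splitting task from Guy Steele's talk: cut the input into k chunks, scan each chunk sequentially and imperatively, then mend the word fragments that straddle chunk boundaries with cheap joins. This is illustrative only, not the benchmarked code, and all class and method names are invented:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ChunkedWordSplit {
    // Result of scanning one chunk: either a single space-free fragment
    // (whole != null), or the fragments touching each edge plus the
    // complete words strictly inside the chunk.
    static final class Part {
        String whole;                 // non-null iff the chunk had no spaces
        String left = "", right = ""; // fragments at the chunk edges
        List<String> words = new ArrayList<>();
    }

    // Sequential, imperative scan of one chunk: O(chunk length).
    static Part scan(String chunk) {
        Part p = new Part();
        int first = chunk.indexOf(' ');
        if (first < 0) { p.whole = chunk; return p; }
        int last = chunk.lastIndexOf(' ');
        p.left = chunk.substring(0, first);
        p.right = chunk.substring(last + 1);
        for (String w : chunk.substring(first, last).split(" ")) {
            if (!w.isEmpty()) p.words.add(w);
        }
        return p;
    }

    // Join of adjacent chunk results: only the boundary fragments can form a
    // new word (the interior word lists are carried over with addAll here
    // for clarity; a list splice would make this O(1)).
    static Part join(Part a, Part b) {
        if (a.whole != null && b.whole != null) { a.whole = a.whole + b.whole; return a; }
        if (a.whole != null) { b.left = a.whole + b.left; return b; }
        if (b.whole != null) { a.right = a.right + b.whole; return a; }
        String mid = a.right + b.left;
        if (!mid.isEmpty()) a.words.add(mid);
        a.words.addAll(b.words);
        a.right = b.right;
        return a;
    }

    // Split into k chunks, scan them in parallel, then join left to right.
    public static List<String> words(String s, int k) {
        List<String> out = new ArrayList<>();
        if (s.isEmpty()) return out;
        int size = Math.max(1, (s.length() + k - 1) / k);
        ExecutorService pool = Executors.newFixedThreadPool(k);
        try {
            List<Future<Part>> futures = new ArrayList<>();
            for (int i = 0; i < s.length(); i += size) {
                final String chunk = s.substring(i, Math.min(s.length(), i + size));
                futures.add(pool.submit(() -> scan(chunk)));
            }
            Part acc = futures.get(0).get();
            for (int i = 1; i < futures.size(); i++) acc = join(acc, futures.get(i).get());
            if (acc.whole != null) {
                if (!acc.whole.isEmpty()) out.add(acc.whole);
            } else {
                if (!acc.left.isEmpty()) out.add(acc.left);
                out.addAll(acc.words);
                if (!acc.right.isEmpty()) out.add(acc.right);
            }
            return out;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```

Whether this beats the sequential imperative scan in practice depends on exactly the cache behaviour discussed above, since each chunk still has to be streamed through a core's cache once.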
I have an algorithm in my head that should be able to do this reasonably efficiently for most cases, but it has a horrible O(n^2) worst (very rare) case.

----
Michael Barker (2011-04-05 21:35:18):

First off, I made a mistake in my calculation of the complexity. The Scala implementation is O(n log n) and the Fortress one is O(n). I'm writing a blog post that details the math, so I won't explain the working here.

Have you watched the Guy Steele presentation? It's fairly important, as it sets the context for the original post.

There was a strong proposition made that:

- Iterators and accumulators (common imperative constructs) are bad and shouldn't be used.
- A functional style using algebraic operations should be adopted, so that compiler and library implementations have "wiggle room" to run the code in parallel if required.

I have two arguments against this approach:

1. Algorithm complexity when using Fork/Join is often a hairy business, and simple operations, e.g. list concatenation, can increase complexity in unexpected ways.

2. Memory efficiency is incredibly important (perhaps even more important than being parallel), and not necessarily in terms of complexity. I need to spend more time explaining this (perhaps in another post).

> Approach: you use your *very poor* (in terms of algorithmic complexity) parallel and sequential functional implementations

I agree with the assertion that the parallel implementation is very poor complexity-wise. It was a straightforward implementation of the algorithm presented by Guy Steele, using the default collections supplied by my language implementation (the approach suggested, i.e.
let the implementation do the optimisation).

I disagree about the sequential implementation; it's O(n).

> First, much better functional implementations can be devised.

I agree. I'm working on one that uses mutable intermediate results and significantly reduces the memory allocation cost in a couple of other ways, but the results aren't encouraging. Will post more results soon.

> Second, parallel does not imply functional and vice versa - this algorithm can use an imperative join and still exploit parallelism to run faster on multicores.

True, but this was not the approach I'm arguing against. There's a common argument (one I keep hearing from very smart people) that we must go parallel and use functional constructs so that language implementations can do all of the hard work.

> Conclusions: from the fact that a quadratic algorithm runs slower than a linear one, you conclude that functional implementations are inferior because of constant-factor slowdowns, such as generating too much garbage. How valid is that conclusion?

I didn't really present the conclusion well. I saw an algorithm presented as an example of using functional constructs to allow a simple task to be parallelised. However, it was trivial to write an imperative single-threaded implementation that was faster and far more economical. I saw two reasons for this: hidden complexity (actually only in the Scala implementation, not in the Fortress one) and inefficient use of memory.

I've been hearing the argument for the necessity of using functional constructs and parallelism for some time, and this was the first example I've seen where the approach has been applied to a more general task.
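The O(n) sequential functional version mentioned here, a fold right whose per-step join is O(1), can be sketched as a right-to-left scan in which starting a new word is a constant-time prepend. A hypothetical illustration, not the code from the post:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class FoldRightSplit {
    // Scan right to left, the shape of a fold right. Each step either
    // starts a new word (O(1) addFirst) or extends the current first word
    // by one character (O(1) amortized append; the characters arrive in
    // reverse, so each word is reversed once at the end, O(word length)).
    // Total cost is O(n), with no list ever being copied.
    public static List<String> split(String s) {
        Deque<StringBuilder> acc = new ArrayDeque<>();
        acc.addFirst(new StringBuilder());
        for (int i = s.length() - 1; i >= 0; i--) {
            char c = s.charAt(i);
            if (c == ' ') acc.addFirst(new StringBuilder());
            else acc.peekFirst().append(c);
        }
        List<String> out = new ArrayList<>();
        for (StringBuilder sb : acc) {
            if (sb.length() > 0) out.add(sb.reverse().toString());
        }
        return out;
    }
}
```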
However, when I compared it to an imperative approach the argument didn't stand up.

----
Anonymous (2011-04-05 16:08):

@Mike, I strongly disagree with your approach and conclusions.

Approach: you use your *very poor* (in terms of algorithmic complexity) parallel and sequential functional implementations to benchmark against a good O(n) sequential imperative implementation. First, much better functional implementations can be devised. Second, parallel does not imply functional and vice versa - this algorithm can use an imperative join and still exploit parallelism to run faster on multicores.

Conclusions: from the fact that a quadratic algorithm runs slower than a linear one, you conclude that functional implementations are inferior because of constant-factor slowdowns, such as generating too much garbage. How valid is that conclusion?

And how do you justify this:

> If the sequential version of the algorithm that uses the fold right operation (O(1) concatenation operation) could be scaled in a linear fashion across the 16 cores I had available, it would still only have ~2/3 of the performance of the imperative version

You have a string of length N, and K cores. Split the string in K pieces and use the sequential, imperative algorithm on each in parallel: this takes O(N/K). Then sequentially perform K joins at the cost of O(1) each (joins can also be parallelised, but we don't need it). We get O(N/K + K), no?
Which means that for very large N and K we gain a lot with this algorithm compared to using the O(N) sequential one.

----
Anonymous (2011-04-05 15:52):
Correction:

Sorry for my complexity explanation being confusing and incorrect. To clarify, we are considering a simple list append operation similar to the following:

    append [] b = b
    append (x:xs) b = x : append xs b

The cost C of appending two lists a and b is proportional to the length L of the first list; roughly:

    C(a ++ b) = L(a)

Now consider:

    C(a ++ b) = L(a)
    C(a ++ b ++ c) = C(a ++ b) + L(a ++ b)
    C(a ++ b ++ c ++ d) = C(a ++ b ++ c) + L(a ++ b ++ c)

Then

    C(a ++ b ++ c ++ d)
    = L(a) + L(a ++ b) + L(a ++ b ++ c)
    = 3L(a) + 2L(b) + L(c)

Suppose that we have a list of length n partitioned into one-element lists that are joined with append. Then

    C = (n-1) + (n-2) + ... + 1 = n * (n - 1) / 2 ~ O(n^2)

----
Anonymous (2011-04-05 15:41):

On list concatenation: let us consider concatenating 4 lists.

    a ++ b ++ c ++ d

    a ++ b takes L(a) + L(b)
    (a ++ b) ++ c takes L(a) + L(b) + L(c)
    (a ++ b ++ c) ++ d takes L(a) + L(b) + L(c) + L(d)

Then the operation takes 3L(a) + 3L(b) + 2L(c) + L(d). We could have one-element lists; the worst-case bound is then O(n^2), where n is the sum of the lengths of all the lists.

That being said about regular lists, I am now reading Okasaki's _Purely Functional Data Structures_. It describes a CatenableList collection that supports head, tail, cons and append in O(1) amortized time.

Moreover, the join part of the algorithm in the article does not require persistence.
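The n(n-1)/2 count derived in the correction above can be sanity-checked with a throwaway sketch that counts the element copies performed by repeated left-appends (illustrative only, with an invented name):

```java
public class AppendCost {
    // Joining n one-element lists left to right, where a ++ b copies the
    // L(a) cells of the left list: the accumulated list grows by one element
    // per join, so the copies total 0 + 1 + ... + (n-1) = n(n-1)/2.
    public static long leftAppendCopies(long n) {
        long copies = 0, accLen = 0;
        for (long i = 0; i < n; i++) {
            copies += accLen; // this join copies the whole accumulated list
            accLen += 1;      // ...which then grows by the one new element
        }
        return copies;
    }
}
```

For the four one-element lists in the example above this gives 3 + 2 + 1 = 6 copies, matching 3L(a) + 2L(b) + L(c).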
You can easily devise and use a mutable implementation of catenable lists with O(1) worst-case bounds for all operations.

----
Michael Barker (2011-03-28 16:21):

Yeah, I didn't really explain my reasoning particularly well. I'm in the process of drafting another blog post that goes into more detail on how the complexity is calculated. Hopefully that will explain it better.

----
thermz (2011-03-28 09:34):

Sorry for the stupid question, but why does the Scala "concat" operator increase the complexity from N to N^2 on a parallel processor?

I really don't get it.

----
Michael Barker (2011-03-14 17:50):

> Are Fortress lists persistent or mutable? If persistent, how do they work exactly?

The "PureList" implementation in Fortress used in the example is immutable and uses a finger tree implementation. All mutation operations are O(log n), which makes the overall algorithm O(n log n).

> Is not the problem simply that modern programmers rush to use data structures before bothering to check their complexity guarantees?

I think the problem is worse than that. I feel that algorithmic complexity is being forgotten altogether, or sacrificed in the name of parallelism.

Another problem is the lack of appreciation for other costs, such as memory allocation, garbage, etc.
If the sequential version of the algorithm that uses the fold right operation (O(1) concatenation operation) could be scaled in a linear fashion across the 16 cores I had available, it would still only have ~2/3 of the performance of the imperative version. That seems an awfully high price to pay for a multi-threaded solution.

The final issue I have is that the argument being presented is that the functional / persistent-collection / implicit multi-threading approach is the only approach to take, and that imperative techniques are fundamentally broken. To quote the talk: "DO loops are so 1950s!", "Java(TM)-style iterators are so last millennium!". However, very few people are publishing real results from the application of these techniques that demonstrate the approach is better.

While I think that there is some value in the approach, probably for a good number of problems, I believe (like Fred Brooks) that there is no silver bullet. My argument is that developers should treat all the possible approaches, be they imperative, functional, or whatever, as tools that work well for certain jobs, and use empirical evidence when deciding which one to apply to a given problem.

----
Anonymous (2011-03-13 22:58):

Are Fortress lists persistent or mutable? If persistent, how do they work exactly?

There seems to be some folklore on persistent catenable (in better-than-linear-time) lists, for example:

http://www.eecs.usma.edu/webs/people/okasaki/focs95/index.html

I do not quite see how your conclusion follows. Is not the problem simply that modern programmers rush to use data structures before bothering to check their complexity guarantees?
And of course functional data structures are not automatically "better" than imperative ones: you get persistence, but you pay in complexity. For applications that do not need persistence it is a net loss.

The cost is not usually *linear* complexity; most of the time the trade-off is between constant time for an imperative data structure and logarithmic time for its functional counterpart.
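The constant-time imperative counterpart mentioned here can be sketched as a singly linked list that also tracks its last node, so that append is a pointer splice with an O(1) worst-case bound. A hypothetical illustration (names invented), not code from the thread:

```java
import java.util.ArrayList;
import java.util.List;

public class CatList<T> {
    private static final class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private Node<T> head, last;

    // O(1): link a new node after the tracked last node.
    public void addLast(T value) {
        Node<T> n = new Node<>(value);
        if (head == null) head = n; else last.next = n;
        last = n;
    }

    // O(1) splice of `other` onto the end of this list. `other` is emptied,
    // i.e. this append is destructive rather than persistent, which is fine
    // for a join phase where each partial result is consumed exactly once.
    public void appendAll(CatList<T> other) {
        if (other.head == null) return;
        if (head == null) head = other.head; else last.next = other.head;
        last = other.last;
        other.head = null;
        other.last = null;
    }

    // O(n) walk, for inspecting the result.
    public List<T> toList() {
        List<T> out = new ArrayList<>();
        for (Node<T> n = head; n != null; n = n.next) out.add(n.value);
        return out;
    }
}
```

The trade-off against a persistent catenable list is exactly the one described above: the splice destroys the right-hand operand, so this only works when no one else holds a reference to it.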