The fallacies of parallel computing
Notes from alt.net parallel session.
- Locality doesn't matter
- Locks / synchronization are cheap
- Higher parallelism equates to faster code
- All actors see the same state
- Parallel programming is easy
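The fallacy that locks are cheap is easy to demonstrate. Here is a minimal Python sketch (my own example, not from the session): the same counting loop, run with and without an uncontended lock acquired on every iteration.

```python
import threading
import time

def count_plain(n):
    # tight loop with no synchronization at all
    total = 0
    for _ in range(n):
        total += 1
    return total

def count_locked(n, lock):
    # same loop, but paying for an acquire/release on every iteration
    total = 0
    for _ in range(n):
        with lock:
            total += 1
    return total

n = 200_000
lock = threading.Lock()

t0 = time.perf_counter()
plain_result = count_plain(n)
plain_time = time.perf_counter() - t0

t0 = time.perf_counter()
locked_result = count_locked(n, lock)
locked_time = time.perf_counter() - t0

assert plain_result == locked_result == n
print(f"plain: {plain_time:.4f}s  locked: {locked_time:.4f}s")
```

The lock here is never even contended; under contention the gap only widens.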
Comments
Maybe some formatting would do this post some good?
fixed
Hmm? WTF? Seriously? Was that session live? Who was the speaker?
And if you believe this, I have a car you might like to buy...
Peter,
did you see the fallacies in there?
Either the author has screwed up... or there is some misunderstanding. Maybe the author was talking about his own, rather special case?
Well, that's why I asked for more info on that session... without context, it's wrong, and that's why I'm curious to understand why he said all that...
I understand points 2 and 5 (at least in the broadest sense).
With respect to point 3, are you referring to the overheads introduced by synchronisation and management of the parallel workstreams (meaning that parallelism is only faster if the workload meets certain criteria)?
I am unsure what you mean by locality, and would be curious to see that, and the point about all actors seeing the same state, discussed further.
Well, I'm not sure I understand 2. Locks (and I'm assuming we're talking about .NET locks) are cheap only when they don't end up waiting on a kernel object (which isn't guaranteed). When that happens, you'll incur kernel transitions, and those are expensive. Btw, in .NET, locking means that you will always spin-lock for some predefined time, and this might not be good (imagine you're on a plane on a laptop... is spinning a good option?).
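The spin-then-block behavior described above can be sketched in Python. This is a toy illustration only (the class and spin count are my own invention, not the actual .NET monitor implementation): try a bounded number of cheap non-blocking acquires first, and fall back to a blocking wait only if the lock is still held.

```python
import threading

class SpinThenBlockLock:
    """Toy hybrid lock: spin briefly in user space hoping the holder
    releases soon, then fall back to a blocking wait (the potentially
    expensive, kernel-assisted path). The spin count is arbitrary."""

    def __init__(self, spin_count=4000):
        self._inner = threading.Lock()
        self._spin_count = spin_count

    def acquire(self):
        for _ in range(self._spin_count):
            if self._inner.acquire(blocking=False):  # cheap, never sleeps
                return
        self._inner.acquire()  # blocking wait: may park the thread

    def release(self):
        self._inner.release()

    def __enter__(self):
        self.acquire()
        return self

    def __exit__(self, *exc):
        self.release()

# Mutual exclusion still holds: two threads bump a counter safely.
lock = SpinThenBlockLock()
counter = [0]

def worker():
    for _ in range(10_000):
        with lock:
            counter[0] += 1

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter[0] == 20_000
```

On a battery-powered machine, those spin iterations burn CPU (and power) even when the lock holder won't release any time soon, which is the concern raised above.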
5 couldn't be wrong... if things can go wrong in a sequential program, then you can be sure that there are many more things that can (and probably will) go wrong in a parallel one.
locality is an interesting and complex topic : http://en.wikipedia.org/wiki/Locality_of_reference
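To make the locality point concrete, here is a small Python sketch (my own example): the same N*N elements are summed twice, but in different traversal orders. Row-major order walks each inner list in sequence (adjacent reads); column-major order hops between lists on every access. In a low-level language the cache effect is dramatic; in CPython it is muted by pointer indirection, but the access-pattern idea is the same.

```python
N = 300
# values 0 .. N*N-1 laid out row by row
matrix = [[i * N + j for j in range(N)] for i in range(N)]

def sum_row_major(m):
    # inner index varies fastest: consecutive elements of each row
    return sum(m[i][j] for i in range(N) for j in range(N))

def sum_col_major(m):
    # outer index varies fastest: jumps to a different row every step
    return sum(m[i][j] for j in range(N) for i in range(N))

row_total = sum_row_major(matrix)
col_total = sum_col_major(matrix)
assert row_total == col_total  # same data, same answer, different memory walk
```

Same data, same answer; only the memory-access pattern differs, and that pattern is exactly what "locality doesn't matter" pretends is free.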
btw, in the previous entry, it should have been "5 couldn't be more wrong"
Why have all the links in your feed suddenly changed just now to links such as feedproxy.google.com/.../...arallel-computing.aspx which forward to your actual blog? Are you aware of that?
I think you meant 'Phalluses' or 'Fallacies'.
me,
Yes, sorry, blogging from the iPhone doesn't really work.
@me, I doubt he meant phalluses.
Of course, I could be wrong :)
I know this is the wrong post to ask you about this, but here goes. I am in a group about architecture best practices ( groups.google.com/.../b8a4caaf0ab44e53 ), and this week a discussion started about database primary keys, composite keys, and the difficulty of using them with ORMs. Several members prefer a single artificial key per table; others won't give up a well-done ER model. What do you think?
Use surrogate keys
Nearl,
With 3, I usually refer to some people approaching optimization with "let us parallelize that", which is not always (or often) a good idea.
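One concrete way "let us parallelize that" backfires, sketched in Python (a CPython-specific example of my own): splitting a CPU-bound loop across threads adds start/join overhead, and under the GIL the threads cannot actually run the loop concurrently, so there is no speedup to pay for it.

```python
import threading
import time

def partial_sum(lo, hi, out, idx):
    # CPU-bound work: sum the integers in [lo, hi)
    s = 0
    for i in range(lo, hi):
        s += i
    out[idx] = s

N = 400_000

# Sequential version.
t0 = time.perf_counter()
seq_out = [0]
partial_sum(0, N, seq_out, 0)
seq_time = time.perf_counter() - t0

# "Parallelized" version: four threads splitting the same range.
# Under CPython's GIL, only one thread executes Python bytecode at a
# time, so this pays thread start/join overhead for no speedup.
par_out = [0] * 4
step = N // 4
threads = [
    threading.Thread(target=partial_sum,
                     args=(i * step, (i + 1) * step, par_out, i))
    for i in range(4)
]
t0 = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
par_time = time.perf_counter() - t0

assert seq_out[0] == sum(par_out) == N * (N - 1) // 2
print(f"sequential: {seq_time:.4f}s  threaded: {par_time:.4f}s")
```

The answers match either way; the only question is whether the workload actually meets the criteria (enough work per task, real concurrency available) for the parallel version to come out ahead.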
I love it when people attempt to parallelize domains that are inherently not parallel. For example, a former boss of mine had an app with as many as 30 threads running at a time, only one of which was necessary. It was written in extremely old, terrible C code, and the compilers/debuggers they used there couldn't even handle the threading issues that came up (just too many threads).
I like some of Lanier's ideas. I wrote a short news article about one of his ideas a couple of years ago. I can't say I am too fond of Minsky's work. Minsky is from last century's school of symbolic artificial intelligence, which I believe to be complete crackpottery.