I'm in a Parallel Algorithms Theory class right now, and since I've already taken a few classes in parallel programming and had lots of experience, I thought it might be insightful. Unfortunately, I learned the truth about the class today and am still trying to find the missing link to reality.
It seems that there are three general steps to creating a "work-time optimal" algorithm. These are, in order:
- Assume an extreme number of processors, such that p >> n. This works best for doing something trivial, such as searching a sorted array for a given element. n can be any arbitrary number between one and the size of an integer (around 4 billion).
- Create a convoluted algorithm such that:
  - The number of parallel steps is very small (try getting close to O(log log n)).
  - The number of steps required to orchestrate parallelization and setup is significantly large (preferably close to n^2).
- Hand-wave and use several mathematical approximations with Big-Oh to show that your new algorithm is actually constant time, O(1).
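For the skeptical, here's roughly what that first step looks like in practice: a toy sketch (in Python, with the "processors" simulated, and all names my own invention) of searching an array with one processor per element. One parallel comparison step, so O(1) parallel time, conveniently ignoring the O(n) total work.

```python
def parallel_search(arr, target):
    """Simulate a PRAM-style search with p = n processors.

    Each 'processor' i performs a single comparison arr[i] == target;
    the comprehension stands in for the one parallel step. A real PRAM
    would have each matching processor write its index to shared memory
    in that same step.
    """
    hits = [i for i, x in enumerate(arr) if x == target]
    return hits[0] if hits else -1

print(parallel_search([1, 3, 5, 7, 9], 7))  # -> 3
print(parallel_search([1, 3, 5, 7, 9], 4))  # -> -1
```

Note that the array doesn't even need to be sorted for this to "work" — which tells you how much of the problem the extra processors are actually solving.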
Once this has been done, publish your results and teach a Parallel Algorithms course.