A key problem that complicates the automatic parallelization of irregular, object-oriented computations written in languages such as Java and C++ is the pervasive use of irregular linked data structures such as lists, trees and graphs. Our research attacks this problem using a new analysis technique called commutativity analysis.
Commutativity analysis exploits the structure of the object-oriented paradigm to view the computation as consisting of operations on objects. It then analyzes the computation at this granularity to determine whether operations commute (i.e., generate the same result regardless of the order in which they execute). If all of the operations in a given computation commute, the compiler can automatically generate parallel code. The generated code uses mutual exclusion locks to make operations in parallel phases execute atomically.
Our experimental results demonstrate the potential of this approach. We have built a prototype compiler that uses commutativity analysis as its primary analysis technique. This compiler accepts an unannotated program written in a subset of C++ and automatically generates a parallel C++ program that performs the same computation. In addition to the parallelization transformation, we also implemented a variety of synchronization optimizations. We used the compiler to automatically parallelize several complete computations.
Here are some papers on commutativity analysis:
Here are some papers on related optimization techniques:
For more information, see my list of papers.