Automatic Parallelization

In the last several years my research group has developed a range of techniques for automatically parallelizing serial programs. Our most recent research focuses on symbolic analysis techniques for divide and conquer programs. This class of programs is challenging in part because it uses recursion as the primary control construct and pointers and pointer arithmetic to access dynamically allocated arrays. We therefore use pointer analysis as a foundation for the remaining symbolic analyses. Because the pointer analysis works for multithreaded programs, we are able to extend our analysis techniques to solve related problems such as static data race detection and the elimination of array bounds checks.
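To make the class of programs concrete, the following is a small illustrative sketch (not taken from the research itself) of a divide and conquer computation in the style described above: recursion drives the control flow, and the subproblems are delimited by pointer arithmetic into an array rather than by integer indices.

```cpp
#include <cstddef>

// Divide and conquer sum over an array region [lo, hi).
// Recursion is the primary control construct, and the split point
// is computed with pointer arithmetic -- to parallelize the two
// recursive calls, an analysis must prove that [lo, mid) and
// [mid, hi) are disjoint regions of the (typically dynamically
// allocated) array.
long dsum(const int *lo, const int *hi) {
    std::ptrdiff_t n = hi - lo;
    if (n == 0) return 0;
    if (n == 1) return *lo;
    const int *mid = lo + n / 2;          // split via pointer arithmetic
    return dsum(lo, mid) + dsum(mid, hi); // independent subproblems
}
```

The key property a symbolic pointer analysis must establish is that the two recursive calls access disjoint memory regions, so that they can safely execute in parallel.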

We have also developed a new technique, commutativity analysis, for automatically parallelizing irregular, object-oriented programs. The basic principle behind this technique is to recognize and exploit commuting operations: operations that generate the same final result regardless of the order in which they execute. If all operations in a given phase of the computation commute, the compiler can generate parallel code for that phase. The generated code uses mutual exclusion locks to make the operations in parallel phases execute atomically. Unlike most parallelizing compiler techniques, which are designed to find independent tasks, commutativity analysis can parallelize computations whose parallel tasks interact by updating shared objects.