The goal of our pointer and escape analysis projects is to understand how programs access objects, and use that understanding to drive large-scale program transformations such as automatic distribution, communication generation, parallelization, and memory management optimizations such as the elimination of local and distributed garbage collection. We also hope to leverage the extracted information to help the programmer better understand the properties of the program and to provide correctness information such as static race detection. An overall theme of this research is the analysis of multithreaded programs.
One of our first results was an interprocedural, flow-sensitive, context-sensitive pointer analysis algorithm for multithreaded programs. The algorithm was implemented for the Cilk multithreaded language. Our PLDI '99 paper on this topic describes the algorithm and presents some experimental results from the implementation. The analysis has been used in the following projects:
We have also developed a combined pointer and escape analysis algorithm for Java programs, and used the analysis for synchronization elimination and stack allocation of objects. A key idea underlying the analysis is the explicit representation of interactions between analyzed and unanalyzed regions of the program. This approach delivers a flexible analysis capable of analyzing arbitrary parts of the program, with the analysis result becoming more precise as more of the program is analyzed. At every stage in the analysis, the algorithm can distinguish where it does and does not have complete information. Our OOPSLA '99 paper on this topic presents the algorithm and provides experimental results from an implementation in a dynamic compiler for Java.
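To illustrate the kind of optimization this enables, here is a small sketch (our own example, not taken from the paper): both Point objects are reachable only through local variables, so an escape analysis can prove they never escape the allocating method or thread. A compiler could then allocate them on the stack and elide the lock acquired by the synchronized method.

```java
// Hypothetical example illustrating escape-analysis-driven optimization.
class Point {
    private final double x, y;
    Point(double x, double y) { this.x = x; this.y = y; }
    // The lock on 'this' is unnecessary whenever the receiver
    // never escapes its allocating thread.
    synchronized double dot(Point other) { return x * other.x + y * other.y; }
}

public class EscapeDemo {
    static double dot(double x1, double y1, double x2, double y2) {
        // 'p' and 'q' are never stored into fields, returned, or passed to
        // another thread, so they are candidates for stack allocation and
        // the synchronization in dot() can be eliminated.
        Point p = new Point(x1, y1);
        Point q = new Point(x2, y2);
        return p.dot(q);
    }

    public static void main(String[] args) {
        System.out.println(dot(1, 2, 3, 4)); // prints 11.0
    }
}
```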
Our base analysis deals with multithreaded programs very conservatively: once an object becomes accessible to multiple threads, the analysis considers the object to permanently escape. We extended the analysis to analyze interactions between parallel threads, enabling it to capture objects that do not escape a given multithreaded computation. In addition to maintaining this escape information, the algorithm also extends the results of our earlier pointer analysis algorithm for multithreaded programs in that it handles the unstructured form of multithreading present in Java rather than the block-structured, nested form of parallelism present in Cilk. Our PPoPP 2001 paper on this topic presents the algorithm and provides experimental results from an implementation in the MIT Flex compiler. We use the results to verify the correct use of region-based allocation.
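The following sketch (our own example, not from the paper) shows the kind of object the extended analysis can capture. The accumulator is shared with a child thread, so the base analysis would mark it as permanently escaped; but the child is joined before the method returns and no other reference survives, so the object does not escape the multithreaded computation. Note also that Java's start/join pairs can appear anywhere, unlike Cilk's nested spawn/sync structure.

```java
// Hypothetical example of an object that is shared between threads
// but does not escape the enclosing multithreaded computation.
import java.util.concurrent.atomic.AtomicLong;

public class JoinDemo {
    static long parallelSum(long[] a) throws InterruptedException {
        AtomicLong sum = new AtomicLong(); // shared with the child thread below
        int mid = a.length / 2;
        Thread child = new Thread(() -> {
            for (int i = 0; i < mid; i++) sum.addAndGet(a[i]);
        });
        child.start();
        for (int i = mid; i < a.length; i++) sum.addAndGet(a[i]);
        child.join();     // the child's accesses to 'sum' end here
        return sum.get(); // 'sum' is thread-local again and never escapes
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(parallelSum(new long[]{1, 2, 3, 4})); // prints 10
    }
}
```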
We also extended our base analysis to operate in an incrementalized, demand-driven way. Instead of analyzing the whole program, the incrementalized algorithm analyzes only those parts of the program that may deliver useful results. An analysis policy monitors the analysis results to direct the incremental investment of analysis resources to those parts of the program that offer the highest expected optimization return. Our experimental results show that almost all of the objects are allocated at a small number of allocation sites and that an incremental analysis of a small region of the program surrounding each site can deliver almost all of the benefit of a whole-program analysis. Our analysis policy is usually able to deliver this benefit at a fraction of the whole-program analysis cost. Our PLDI 2001 paper on this topic presents the algorithm and provides experimental results from an implementation in the MIT Flex compiler. We use the results for stack allocation.
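A minimal sketch of such an analysis policy, under our own assumptions (the names and the count-based priority are ours, not the paper's): allocation sites are scheduled in descending order of profiled allocation count, so the analysis budget flows first to the sites with the highest expected optimization return.

```java
// Hypothetical sketch of a demand-driven analysis policy: prioritize
// allocation sites by how many objects they allocate at run time.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class AnalysisPolicy {
    // An allocation site with a profiled allocation count (our assumption).
    record Site(String id, long allocCount) {}

    // Returns the order in which the incremental analysis would visit sites:
    // hottest first, since those promise the largest optimization payoff.
    static List<String> schedule(List<Site> sites) {
        List<Site> sorted = new ArrayList<>(sites);
        sorted.sort(Comparator.comparingLong(Site::allocCount).reversed());
        List<String> order = new ArrayList<>();
        for (Site s : sorted) order.add(s.id());
        return order;
    }

    public static void main(String[] args) {
        List<Site> sites = List.of(
            new Site("A", 10), new Site("B", 9000), new Site("C", 120));
        System.out.println(schedule(sites)); // prints [B, C, A]
    }
}
```

In the actual system, each scheduled site would then trigger an incremental analysis of a small surrounding program region, stopping once the object is captured for stack allocation or the budget is exhausted.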
Here are some papers on these topics: