Julian Shun   信哲文

Assistant Professor
Douglas T. Ross Career Development Professor of Software Technology
Electrical Engineering and Computer Science
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology

Office: 32-G736
Email: jshun at mit.edu

I am an assistant professor at MIT in the EECS department and a principal investigator in CSAIL. Prior to that, I was a Miller Research Fellow at UC Berkeley working with Michael Mahoney. I obtained my Ph.D. from Carnegie Mellon University, and was advised by Guy Blelloch.

Research Interests

I am interested in the theory and practice of parallel computing, especially parallel graph processing frameworks, algorithms, data structures, and tools for deterministic parallel programming. Below is a description of my recent projects.

Large-scale Graph Processing

I am very interested in developing algorithms for large-scale graph processing. Graph algorithms have many applications, ranging from analyzing social networks to finding patterns in biological networks. I have developed Ligra, a lightweight graph processing framework for shared memory. The project was motivated by the fact that the largest publicly available real-world graphs all fit in shared memory. When graphs fit in shared memory, processing them using Ligra can give performance improvements of up to orders of magnitude compared to distributed-memory graph processing systems. I have also developed Ligra+, an extension of Ligra that uses graph compression techniques to process large graphs with less memory. Recently, I have used Ligra/Ligra+ to design and evaluate parallel algorithms for graph eccentricity estimation as well as local graph clustering. I have also developed practical algorithms with strong theoretical guarantees for many fundamental graph algorithms, such as connected components, minimum spanning forest, triangle computations, maximum flow, maximal independent set, and maximal matching. I have implemented algorithms for the shared memory multicore setting and also for external memory.

Parallel In-Place Algorithms

I have developed parallel algorithms that are in-place, in that they use space sublinear in the input size. In-place algorithms not only reduce memory usage, which is important for processing large data sets, but can also improve locality and performance by reducing data movement.

Parallel String/Text Algorithms and Data Structures

I have developed practical parallel algorithms with theoretical guarantees for several important problems and data structures in string/text processing. These have important applications in bioinformatics, data compression, and information retrieval, among many others.

Deterministic Parallel Programming

I am interested in developing tools that make it easier for others to do parallel programming. In particular, I have developed algorithms, data structures, and tools for deterministic parallel programming. Determinism is very important in parallel programming, as it eases debugging and reasoning about correctness and performance.

Write-efficient Algorithms

I am interested in designing efficient parallel algorithms for memories with asymmetric costs of reading and writing (e.g., NVRAM). I have developed models that account for read-write asymmetry. Using these models, I have designed write-efficient algorithms for a number of primitives, including sorting, filter, reduce, Fast Fourier transform, breadth-first search, list ranking, tree contraction, minimum spanning forest, and planar convex hull.

Parallel Semisorting

I have designed an efficient parallel algorithm for semisorting (where equal-valued keys are contiguous but different keys are not necessarily in sorted order). This is a useful primitive in many applications, for example in performing database joins and in performing the shuffle phase in the MapReduce paradigm.

Problem Based Benchmark Suite (PBBS)

I have developed a benchmark suite of fundamental problems along with parallel algorithms for solving them. The benchmarks are solely defined by the problem specification and input/output formats, without any reference to the algorithm, programming language or machine used. Please contribute your own implementations to the benchmark suite! The following paper contains descriptions of the benchmarks and experiments using them:

Ph.D. Thesis