Concurrency Workshop @ DISC17

The workshop focuses on recent developments in the theory and practice of concurrent data structures. All presentations at this first meeting will be invited talks, covering current topics in the area.

A particular goal of this meeting is to cover exciting new (preliminary) results, as well as benchmarking methods for concurrent data structures.

Workshop Details

  • Date: October 16th 2017
  • Location: Vienna, Austria, co-located with DISC 2017. Please see the DISC 2017 website for directions to the conference site.
  • Invited speakers: Trevor Brown (IST Austria & Technion), Aleksandar Dragojevic (MSR), Tim Harris (Oracle Labs), Maurice Herlihy (Brown University & Oracle), Nir Shavit (MIT & TAU), Robert E. Tarjan (Princeton & Intertrust Technologies), Vasileios Trigonakis (Oracle Labs)
  • Organizer: Dan Alistarh, IST Austria
  • Sponsoring: IST Austria

Preliminary Program

  1.  9:00 – 9:45: Robert E. Tarjan (Princeton University and Intertrust Technologies)
    Title: Concurrent Disjoint Set Union
    Abstract: The disjoint set union problem is a classical problem in data structures with a simple and efficient sequential solution that has a notoriously complicated analysis. One application is to find strongly connected components in huge, implicitly defined graphs arising in model checking. In this application, the use of multiprocessors has the potential to produce significant speedups. We explore this possibility. We devise and analyze concurrent versions of standard sequential algorithms that use single and double compare-and-swap primitives for synchronization, making them wait-free. We obtain work bounds that grow logarithmically with the number of processors, suggesting the possibility of significant speedup in practice. This is ongoing joint work with Siddhartha Jayanti.
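As a rough illustration of the kind of primitives the abstract mentions, here is a hedged sketch of a concurrent union-find in Java using single-word compare-and-swap with path halving. This is not the algorithm from the talk (which is wait-free and also uses double compare-and-swap); it is only lock-free, and all names are illustrative.

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

// Illustrative lock-free union-find: parent[i] == i means i is a root.
class ConcurrentDSU {
    private final AtomicIntegerArray parent;

    ConcurrentDSU(int n) {
        parent = new AtomicIntegerArray(n);
        for (int i = 0; i < n; i++) parent.set(i, i);
    }

    // Find the root of x, halving the path with CAS along the way.
    int find(int x) {
        while (true) {
            int p = parent.get(x);
            int gp = parent.get(p);
            if (p == gp) return p;           // parent[p] == p, so p is a root
            parent.compareAndSet(x, p, gp);  // path halving; a failed CAS is benign
            x = gp;
        }
    }

    // Link one root under the other; the CAS succeeds only if ry is still a root.
    boolean union(int x, int y) {
        while (true) {
            int rx = find(x), ry = find(y);
            if (rx == ry) return false;                   // already in the same set
            if (rx > ry) { int t = rx; rx = ry; ry = t; } // deterministic link order
            if (parent.compareAndSet(ry, ry, rx)) return true;
        }
    }

    // Retry until two root snapshots are consistent with each other.
    boolean sameSet(int x, int y) {
        while (true) {
            int rx = find(x), ry = find(y);
            if (rx == ry) return true;
            if (parent.get(rx) == rx) return false; // rx still a root: snapshot valid
        }
    }
}
```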
  2. 9:45 – 10:30: Tim Harris (Oracle Labs)
    Title: Benchmarking concurrent data structures (or “Do those numbers mean what you think they mean?”)
    Abstract: Evaluating shared-memory data structures is difficult: there are lots of possible metrics, lots of possible ways to run experiments, and lots of ways in which low-level details of the hardware can have unexpectedly large impacts on performance. It is hard to untangle algorithmic improvements from coding prowess. I am going to talk about some of the ways in which I have been bitten by these problems in the past, and some of the techniques I use to try to organize my own experimental work.
    10:30-11:00 – Coffee Break
  3. 11:00 – 11:45 – Maurice Herlihy (Brown University)
    Title: Concurrency Models for Smart Contracts
    Abstract: TBA
  4. 11:45 – 12:30 – Aleksandar Dragojevic (MSR)
    Title: FaRM: Concurrency Lessons Learned
    Abstract: FaRM is a distributed computing platform that exposes fast, strongly consistent, and highly available distributed transactions on data stored in main memory. FaRM can commit millions of transactions per second on a cluster of a few tens of machines and recover from machine failures in tens of milliseconds. This excellent performance is achieved by designing FaRM protocols to exploit modern data center hardware: fast networks with RDMA, non-volatile memory, and multi-core CPUs. This talk will focus on the synchronization between FaRM threads on a single machine, which plays a significant role in the good performance of the overall system but was designed to be simple to use in the common case. The talk will also cover some of the experience of using the distributed transactional interface to build a graph store on top of FaRM.
    12:30 – 14:00 – Lunch break 
  5. 14:00 – 14:45 – Panel Discussion 
  6. 14:45 – 15:30 – Nir Shavit (MIT and TAU)
    Title: DEF: A New Programming Language for Concurrent Data Structure Design
    15:30 – 16:00 – Coffee Break
  7. 16:00 – 16:45 – Vasileios Trigonakis (Oracle Labs)
    Title: Towards Scalable Synchronization on Multi-Cores
    Abstract: Synchronization is a major impediment to the scalability of concurrent software. In this talk, I will describe how to minimize the effects of synchronization on software scalability. I will show that scalability of synchronization is mainly a property of the underlying hardware (in terms of performance and energy efficiency), resulting in limited portability of concurrent software across hardware platforms. Nevertheless, I will describe how we can achieve portability without sacrificing performance, by creating design patterns (OPTIK) and abstractions (MCTOP) which implicitly leverage hardware details without exposing them to software developers. In particular, OPTIK is a practical design pattern for designing and implementing fast and scalable concurrent data structures. MCTOP is a multi-core topology abstraction which includes low-level information, such as memory bandwidths, and enables developers to accurately and portably define high-level optimization policies. Finally, I will conclude the talk by sharing my brief experience with designing concurrent solutions in an enterprise graph processing system.
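To make the optimistic-concurrency idea concrete, here is a hedged sketch of the core mechanism the OPTIK pattern is built around, as I understand it: a version counter that doubles as a lock, so that acquiring the lock simultaneously validates that the protected state is unchanged since the version was read. This is an illustrative Java reconstruction, not code from the talk; all names are made up.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative version-lock: even values mean unlocked, odd mean locked.
class OptikLock {
    private final AtomicLong version = new AtomicLong(0);

    // Snapshot the version before the optimistic (unsynchronized) work.
    long readVersion() {
        long v;
        do { v = version.get(); } while ((v & 1) != 0); // spin past a writer
        return v;
    }

    // Acquire the lock only if the state is unchanged since `observed`;
    // on failure the caller restarts its optimistic section.
    boolean tryLockIfUnchanged(long observed) {
        return version.compareAndSet(observed, observed + 1);
    }

    // Release, leaving a new (even) version so waiting readers revalidate.
    void unlock() { version.incrementAndGet(); }
}
```

A typical use reads the version, performs an unsynchronized search or traversal, and commits only via `tryLockIfUnchanged`; if that fails, some update intervened and the whole optimistic section is retried. The appeal is that validation and lock acquisition collapse into a single CAS.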
  8. 16:45 – 17:30 – Trevor Brown (IST Austria & Technion)
    Title: Good data structure experiments are RARE
    Abstract: Should you trust your experimental results? In this talk, I will describe a variety of common mistakes that I’ve encountered doing experimental work over the last decade. These problems affect many experiments in the literature, often rendering their conclusions invalid. I will also describe an iterative, evidence-driven methodology for (1) identifying and resolving problems in experimental results, and (2) determining whether performance differences between data structures are the result of algorithmic differences, or are simply the result of engineering differences, or interactions between hardware and software systems. By following this methodology, and harnessing tools and techniques from the systems community, we can obtain a much better understanding of the differences between concurrent data structures.
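Several of the talks above stress benchmarking pitfalls. As a minimal sketch of one basic precaution, never reporting a single run, here is a hypothetical Java micro-benchmark harness that warms up, repeats trials, and returns sorted timings so the caller can report the median and spread. The data structure and all parameters are illustrative, not from any of the talks.

```java
import java.util.Arrays;
import java.util.concurrent.ConcurrentSkipListSet;

// Illustrative harness: time one trial of ops adds/removes on a concurrent set.
class BenchSketch {
    static long trial(int ops) {
        ConcurrentSkipListSet<Integer> set = new ConcurrentSkipListSet<>();
        long start = System.nanoTime();
        for (int i = 0; i < ops; i++) { set.add(i); set.remove(i); }
        return System.nanoTime() - start;
    }

    // Discard warmup trials (JIT, caches), then collect and sort real trials.
    static long[] run(int warmups, int trials, int ops) {
        for (int i = 0; i < warmups; i++) trial(ops);
        long[] times = new long[trials];
        for (int i = 0; i < trials; i++) times[i] = trial(ops);
        Arrays.sort(times);
        return times; // report times[trials / 2] (median) and the range
    }
}
```

Real methodology goes much further (pinning threads, controlling NUMA placement, isolating allocator effects), which is exactly what these talks examine; this sketch only shows the report-a-distribution baseline.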

