LFMTP 2012

7th International Workshop on

Logical Frameworks and Meta-languages:

Theory and Practice

Copenhagen, Denmark - September 9, 2012

    Invited Talks

    Belugaμ: Programming proofs in context

    Brigitte Pientka

    We extend a general-purpose functional language with support for programming with binders and contexts by refining the type system of ML with a restricted form of dependent types where index objects are drawn from contextual LF. This allows programmers to specify formal systems and proofs within the logical framework LF using higher-order abstract syntax, avoiding the common and tricky routines for dealing with variables, such as capture-avoiding substitution and renaming. In addition, programmers can index ML types with contexts and contextual LF objects, allowing users to define relations between them. This leads to a powerful language, Belugaμ, which supports manipulating and analyzing contexts and contextual LF objects. To illustrate the effectiveness of our language, we give elegant and concise programs for closure conversion and normalization by evaluation.
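
    As a rough illustration of the higher-order abstract syntax idea the abstract relies on (a standalone Haskell sketch, not Beluga code; the names Tm and betaStep are ours): object-level binders are represented by meta-level functions, so a beta step needs no hand-written capture-avoiding substitution.

        -- Minimal HOAS sketch (illustrative only): object-level binders are
        -- Haskell functions, so substitution is plain function application.
        data Tm
          = App Tm Tm
          | Lam (Tm -> Tm)

        -- Reduce a top-level beta redex by applying the Haskell function.
        betaStep :: Tm -> Maybe Tm
        betaStep (App (Lam f) a) = Just (f a)
        betaStep _               = Nothing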

    From de Bruijn to nowadays: The evolution of libraries for binding representation

    Nicolas Pouillard

    Binding representation has evolved from the early days of de Bruijn indices to approaches that use polymorphism in various ways; I will try to present this step-by-step evolution leading to today's techniques. The support for binding structures has been studied both as a language design and as a technique hosted by a language. In this talk I will primarily focus on the latter, i.e. libraries, using a single and consistent presentation.
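
    For orientation on the starting point of this evolution, the following standalone Haskell sketch (illustrative only, not taken from the talk) shows the classic de Bruijn representation, in which a variable is the number of binders between its occurrence and its binder, together with the shifting and substitution machinery it requires.

        -- Nameless (de Bruijn) lambda terms.
        data Tm = Var Int | Lam Tm | App Tm Tm
          deriving Show

        -- Shift free variables at or above the cutoff c by d.
        shift :: Int -> Int -> Tm -> Tm
        shift d c (Var k)   = Var (if k >= c then k + d else k)
        shift d c (Lam t)   = Lam (shift d (c + 1) t)
        shift d c (App t u) = App (shift d c t) (shift d c u)

        -- Substitute s for variable j in t.
        subst :: Int -> Tm -> Tm -> Tm
        subst j s (Var k)   = if k == j then s else Var k
        subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
        subst j s (App t u) = App (subst j s t) (subst j s u)

        -- Contract a beta redex (Lam t) applied to u.
        beta :: Tm -> Tm -> Tm
        beta t u = shift (-1) 0 (subst 0 (shift 1 0 u) t)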

    A Walk in the Substructural Park

    Robert J. Simmons

    Substructural operational semantics (SSOS) specifications were introduced by Pfenning as a presentation form for the operational semantics of programming languages. SSOS specifications are informed by intuitionistic substructural logics and have the feel of abstract machine specifications. They are characterized by a high degree of modular extensibility - new features can be added without disturbing existing specifications. In particular, specifications can be easily written to be agnostic about the presence of language features that manipulate state, parallelism, and control flow.

    Early work on SSOS explored a number of idioms and styles, some of which differ in the degree of modular extensibility they support. Borrowing from Danvy et al.’s work on the functional correspondence, the logical correspondence connects these various styles by showing how they can be derived from one another by automatic and meaning-preserving transformations on specifications in a logical framework. These transformations set up a taxonomy of existing styles and make it possible to integrate multiple styles within a single specification.

    This talk presents joint work with Frank Pfenning and Ian Zerny.
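
    For a very rough sense of the abstract-machine flavour mentioned above, here is a standalone Haskell sketch of a small-step machine for call-by-value lambda terms. It is not an SSOS specification, only an illustration of the style of specification that SSOS rules resemble: each rule rewrites a configuration of a focused term and a control stack.

        -- A CK-style machine (HOAS terms, so substitution is Haskell
        -- application). Illustrative only; SSOS would express such rules
        -- as rewriting of resources in a substructural logic.
        data Tm    = Lam (Tm -> Tm) | App Tm Tm
        data Frame = AppL Tm   -- evaluating the function; argument pending
                   | AppR Tm   -- function value waiting for its argument
        type Stack = [Frame]

        step :: (Tm, Stack) -> Maybe (Tm, Stack)
        step (App t u, k)                  = Just (t, AppL u : k)  -- focus on the function
        step (v@(Lam _), AppL u : k)       = Just (u, AppR v : k)  -- then on the argument
        step (v@(Lam _), AppR (Lam f) : k) = Just (f v, k)         -- beta step
        step _                             = Nothing               -- final or stuck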

    A POPLmark retrospective: Using proof assistants in programming language research

    Stephanie Weirich

    More than seven years ago, a group of researchers at Penn and Cambridge issued the POPLmark challenge: a set of challenge problems designed to demonstrate the effectiveness of proof assistants for programming language research. Programming language papers are often accompanied by long technical reports containing the details of the proofs. These proofs exist merely to bolster confidence in the results; they are rarely read closely. Ideally, such proofs should be expressed in the formal logic of a proof assistant. Our vision was a world where all programming language conference papers (such as for POPL) were accompanied by machine-checked appendices.

    In this overview talk, I will revisit the motivation of the POPLmark challenge to reflect on the progress we and others with a similar vision have made since the challenge was issued. I will focus on the questions 'What is the role of a proof assistant in PL research?', 'Did we get where we wanted to go?', and 'Where do we go from here?'

    Regular Papers

    A supposedly fun thing I may have to do again: A HOAS encoding of Howe's method

    Alberto Momigliano

    We formally verify in Abella that similarity in the call-by-name lambda calculus is a pre-congruence, using Howe's method. This turns out to be a very challenging task for HOAS-based systems, as it entails a demanding combination of inductive and coinductive reasoning on open terms, for which no other existing HOAS-based system is equipped. We also offer a proof using a version of Abella supplemented with predicate quantification; this results in a more structured presentation that is largely independent of the operational semantics as well as of the chosen notion of (bi)similarity. While the end result is significantly more succinct and elegant than previous attempts, the exercise highlights some limitations of the two-level approach in general and of Abella in particular.

    Source code
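
    For orientation, the abstract above concerns applicative similarity in the call-by-name lambda calculus; the following standalone Haskell sketch (illustrative only, not part of the Abella development) shows the call-by-name evaluation over which such a similarity is defined.

        -- Call-by-name evaluation of closed terms to weak head normal form.
        -- With HOAS terms, substitution is just Haskell application.
        -- Applicative similarity compares terms by whether they converge to
        -- abstractions and, coinductively, how those abstractions behave on
        -- every argument.
        data Tm = Lam (Tm -> Tm) | App Tm Tm

        whnf :: Tm -> Tm
        whnf (App t u) = case whnf t of
                           Lam f -> whnf (f u)   -- argument u passed unevaluated
                           t'    -> App t' u     -- cannot happen for closed terms
        whnf t         = t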

    LF in LF: Mechanizing the Metatheory of LF in Twelf

    Chris Martens and Karl Crary

    We present a mechanized metatheoretic development of two presentations of LF: the Canonical presentation (in which only beta-short, eta-long terms are well-formed) and the original version based on definitional equivalence. We prove the standard metatheory, i.e. that type checking is decidable and canonical forms exist uniquely. We do so by translating structures from the original formulation to the canonical formulation, establishing that definitional equivalence corresponds to syntactic equivalence of canonical forms. This particular approach represents the first syntactic proof of the metatheory of LF, and thus the first that can be mechanized in Twelf. It is also the first proof formally addressing the correspondence between hereditary substitution in Canonical LF and definitional equivalence in the traditional version, justifying the body of recent work that takes Canonical LF as primary.
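
    To make the role of hereditary substitution concrete, here is a standalone Haskell sketch (illustrative only; named variables assumed distinct, eta-longness ignored) of the idea that substituting into canonical forms keeps reducing at the head, so the result is again canonical.

        -- Canonical (normal) and atomic (neutral) terms for a simply typed
        -- term language, in the spirit of Canonical LF's beta-short discipline.
        data Ty = Base | Arr Ty Ty
        data Nf = Lam String Nf | Ne Ne      -- canonical terms
        data Ne = Var String | App Ne Nf     -- atomic terms

        -- hsub a s x t: substitute canonical s (of type a) for x in t.
        hsub :: Ty -> Nf -> String -> Nf -> Nf
        hsub a s x (Lam y b)
          | y == x    = Lam y b              -- x is shadowed
          | otherwise = Lam y (hsub a s x b)
        hsub a s x (Ne r) = case hsubNe a s x r of
          Left (t, _) -> t                   -- head was x: result already reduced
          Right r'    -> Ne r'

        -- On atomic terms: either the head is not x (still atomic), or it is,
        -- and we return the typed canonical result, continuing to reduce
        -- whenever a substituted abstraction meets its argument.
        hsubNe :: Ty -> Nf -> String -> Ne -> Either (Nf, Ty) Ne
        hsubNe a s x (Var y)
          | y == x    = Left (s, a)
          | otherwise = Right (Var y)
        hsubNe a s x (App r u) =
          let u' = hsub a s x u in
          case hsubNe a s x r of
            Right r'                  -> Right (App r' u')
            Left (Lam y b, Arr a1 a2) -> Left (hsub a1 u' y b, a2)
            Left _                    -> error "ill-typed input"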

    LFP – A Logical Framework with External Predicates

    Furio Honsell, Marina Lenisa, Luigi Liquori, Petar Maksimovic and Ivan Scagnetto

    The LFP Framework is an extension of Harper, Honsell, and Plotkin's Edinburgh Logical Framework LF with external predicates. This is accomplished by defining lock type constructors, which are a sort of diamond-modality constructors, releasing their argument under the condition that a possibly external predicate is satisfied on an appropriate typed judgement. Lock types are defined using the standard pattern of constructive type theory, i.e. via introduction, elimination, and equality rules. Using LFP, one can factor out the complexity of encoding specific features of logical systems which would otherwise be awkwardly encoded in LF, e.g. side-conditions in the application of rules in Modal Logics, and substructural rules, as in non-commutative Linear Logic. The idea of LFP is that these conditions need only be specified, while their verification can be delegated to an external proof engine, in the style of the Poincaré Principle. We investigate and characterize the metatheoretical properties of the calculus underpinning LFP: strong normalization, confluence, and subject reduction. The latter property holds under the assumption that the predicates are well-behaved, i.e. closed under weakening, permutation, substitution, and reduction in the arguments.
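
    Schematically, and with notation that only approximates the paper's, the introduction and elimination (lock and unlock) rules for a lock type constructor have roughly the following shape, where P is the external predicate applied to the typed judgement about N and σ:

        \frac{\Gamma \vdash_\Sigma M : \rho \qquad \Gamma \vdash_\Sigma N : \sigma}
             {\Gamma \vdash_\Sigma \mathcal{L}^{P}_{N,\sigma}[M] : \mathcal{L}^{P}_{N,\sigma}[\rho]}
        \qquad
        \frac{\Gamma \vdash_\Sigma M : \mathcal{L}^{P}_{N,\sigma}[\rho] \qquad P(\Gamma \vdash_\Sigma N : \sigma)}
             {\Gamma \vdash_\Sigma \mathcal{U}^{P}_{N,\sigma}[M] : \rho}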

    Trace Matching in a Concurrent Logical Framework

    Iliano Cervesato, Frank Pfenning, Jorge Luis Sacchini, Carsten Schürmann and Robert J. Simmons

    Matching and unification play an important role in implementations of proof assistants, logical frameworks, and logic programming languages. In particular, matching is at the heart of many reasoning tasks and underlies the operational semantics of well-moded logic programs. In this paper, we study the problem of matching on concurrent traces in the CLF logical framework, an extension of LF that supports the specification of concurrent and distributed systems. A concurrent trace is a sequence of computations in which independent steps can be permuted. We give a sound and complete algorithm for matching traces with one variable standing for an unknown subtrace. Extending the result to general traces and to unification is left to future work.

    Project page
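
    As a much-simplified, purely illustrative Haskell sketch of the matching problem (the names Trace and matchHole are ours, and the permutability of independent steps that makes the CLF case interesting is ignored): treat a trace as a plain list of step labels, and a pattern as a known prefix and suffix around a single hole standing for an unknown subtrace.

        import Data.List (isPrefixOf, isSuffixOf)

        type Step  = String
        type Trace = [Step]

        -- Match a trace against the pattern  pre ++ HOLE ++ suf,
        -- returning the subtrace the hole must stand for, if any.
        -- (In CLF, independent steps may be permuted, so the real
        -- algorithm must also consider rearrangements of the trace.)
        matchHole :: Trace -> Trace -> Trace -> Maybe Trace
        matchHole pre suf tr
          | pre `isPrefixOf` tr, suf `isSuffixOf` rest =
              Just (take (length rest - length suf) rest)
          | otherwise = Nothing
          where rest = drop (length pre) tr

    For example, matchHole ["init"] ["done"] ["init", "step1", "step2", "done"] evaluates to Just ["step1", "step2"].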