School of Computing, University of Dundee, Tuesday the 12th of
May 2015, 12 PM - 6 PM

For ML and Haskell, accurate warnings when a function definition has redundant or missing patterns are mission critical. But today's compilers generate bogus warnings when the programmer uses guards (even simple ones), GADTs, pattern guards, or view patterns. We give the first algorithm that handles all these cases in a single, uniform framework, together with an implementation in GHC, and evidence of its utility in practice.

Joint work with Georgios Karachalias, Dimitrios Vytiniotis, and Simon Peyton Jones.
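The GADT case is easy to reproduce. Here is a minimal illustrative example (the type and function names are made up for this sketch, not taken from the talk) of the kind of spurious warning being addressed:

```haskell
{-# LANGUAGE GADTs #-}

-- A small GADT indexing values by their type.
data T a where
  TInt  :: T Int
  TBool :: T Bool

-- This match is exhaustive: TBool has type T Bool, so it can never
-- flow into an argument of type T Int. A coverage checker that
-- ignores the type indices nevertheless warns that TBool is unmatched.
f :: T Int -> Int
f TInt = 42
```

A checker that takes the type indices into account, as the algorithm described here does, recognises the TBool case as inaccessible and stays silent.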

The Hindley-Milner let-rule operates on the basis that if we keep on restricting our ambitions for the expressiveness of the types we want for our programs, we reach the point where types are stupid enough that even computers can guess them. What happens if we reverse the information flow? Ambition for type expressivity can become rewarding if it increases the computer's ability to guess the *program*. This possibility has not escaped notice, thus far, but neither has it taken hold as the key to a language design. Fortunately, Milner's idea to ensure necessary coincidences by unification works perfectly well when the information flow is turned around. So let us (and our languages) be mindful about what we already know and open to new possibilities. I'll try to put my finger on some blind spots and some faulty learning, the better to blunder more effectively.

Substructural type systems are useful for reasoning about state in functional programming languages; for example, they can be used to ensure the safety of code with type-changing updates, or to guarantee that communication protocols are followed (i.e., session typing). Unfortunately, existing approaches to substructural typing require significant duplication of code, particularly in the definition and use of higher-order functions. In this talk, I will describe a new approach to integrating substructural and Hindley-Milner typing in functional programming languages. My approach relies on using predicates to track the use of contraction and weakening, and avoids the duplication that appears in existing approaches. I will show that this approach enjoys principal typing and type inference by extending standard approaches for Haskell-like languages.

Since the work of Fiore, Plotkin and Turi, we have known that the syntax of typed and untyped lambda-calculi can be thought of in terms of algebraic theories. The typechecking process that takes us from untyped to typed terms is defined by structural recursion on the initial algebra of untyped terms. In this talk, I'll explore the idea that the typechecking process *itself* can be expressed as an algebraic theory. This leads naturally to the idea of the programs that the programmer enters being "active" in the sense that they can participate in the type checking process, blending features of LCF-style tactic scripts with more normal terms.

In constructive proof theory, proving that a proposition A has a proof in the context Gamma amounts to showing that the type A is inhabited by a proof p; that is, it amounts to constructing p. This principle underlies most modern interactive theorem provers.

First-order automated theorem proving, in contrast, has much weaker support, if any, for capturing the structural (constructive) properties of proofs. To prove that a proposition A follows from a logic program Gamma, the SLD-resolution algorithm derives a contradiction from the assumption that A is false. The proof structure remains irrelevant in this decision procedure, so long as the resolution algorithm succeeds.
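The success-or-failure character of SLD-resolution can be made concrete with a toy interpreter for propositional Horn clauses (a sketch with illustrative names, not code from the talk):

```haskell
type Atom = String

-- A definite clause: head :- body.
data Clause = Atom :- [Atom]

-- Propositional SLD-resolution: resolve the first goal against each
-- clause head, replacing it with the clause body; succeed when the
-- goal list is empty.
sld :: [Clause] -> [Atom] -> Bool
sld _    []     = True
sld prog (g:gs) = or [ sld prog (body ++ gs) | h :- body <- prog, h == g ]
```

Note that the result is a bare Bool: a query such as sld ["q" :- [], "p" :- ["q"]] ["p"] succeeds, but no proof object is constructed along the way. That discarded structure is exactly what the constructive approach retains.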

The resolution approach has advantages, chief among them efficient proof automation, which makes it suitable for automating type inference. However, neglecting proof structure comes at a price: it is extremely hard to analyse the structural properties of such derivations, whether they succeed finitely or run forever. For successful derivations, we still lack a construction of exact proof evidence; for potentially non-terminating derivations, we lack tools to analyse productivity.

Structural Resolution is an analogue of the constructive proof-theory approach, realised in the untyped first-order setting of logic programming. In particular, it allows us to formulate, for the first time, notions of proof evidence and of universal productivity for first-order resolution.

In this talk, I will give an outline of Structural Resolution and speculate on its possible uses in type inference.

Evidence construction for type classes typically requires that instance declarations terminate modulo cycles. In practice, there are cases where evidence construction diverges without forming any cycle, while the program is still meaningful. We provide a formalism that transforms instance declarations into a set of rewrite rules. This enables us to analyse the termination and non-termination behaviour of instance declarations.
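A standard illustrative example of divergence without a cycle uses an instance context that grows at every resolution step (the class and method names here are invented for the sketch):

```haskell
{-# LANGUAGE FlexibleContexts, UndecidableInstances #-}

class C a where
  size :: a -> Int

-- A harmless base instance.
instance C Int where
  size _ = 0

-- Resolving C [Int] demands C [[Int]], which demands C [[[Int]]],
-- and so on: every goal is strictly larger than the last, so the
-- search diverges even though no goal ever recurs -- no cycle forms.
instance C [[a]] => C [a] where
  size _ = 1
```

The declarations themselves are accepted; it is only a query against the list instance, such as solving C [Int], that sends evidence construction into the non-cyclic infinite descent described above.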

Idris is a purely functional programming language with dependent types. As well as supporting theorem proving, Idris is intended to be a general purpose programming language. As such, it provides high-level concepts such as implicit syntax, type classes and "do" notation. Many components of a dependently-typed programming language are by now well understood, for example the underlying type theory, type checking, unification and evaluation. How to combine these components into a realistic and usable high-level language is, however, folklore, discovered anew by successive language implementors. In this talk, I will give an overview of the implementation of Idris, showing how it brings these components together into a complete programming language. I will briefly describe the high-level language and the underlying type theory, and present a tactic-based method for elaborating concrete high-level syntax with implicit arguments and type classes into a fully explicit core type theory. Furthermore, I will show how this method facilitates the implementation of new high-level language constructs.

Typed functional programming and units of measure are a natural combination, as F# demonstrates. However, few other languages support type inference for units of measure, perhaps because doing so would require changes to their type inference algorithms. I'll show how a tiny new feature of GHC 7.10 enables domain-specific constraint solving in the typechecker, making it possible to implement units of measure and other type inference extensions without so much as recompiling GHC. Moreover, I'll explain how to infer units of measure in a setting with universally quantified type variables and type equality constraints.
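To see why special solver support is needed, consider a plain phantom-type encoding of units (a sketch with made-up names, not the library from the talk):

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

import GHC.TypeLits (Symbol)

-- A quantity tagged with a unit at the type level.
newtype Quantity (u :: Symbol) = Q Double
  deriving Show

-- Addition preserves the unit, and plain GHC checks this for free:
-- adding metres to seconds is rejected at compile time.
add :: Quantity u -> Quantity u -> Quantity u
add (Q x) (Q y) = Q (x + y)

-- Multiplication is where plain GHC gives up: the result unit should
-- be the product of the argument units, and deciding equalities such
-- as m/s * s ~ m requires abelian-group reasoning about units that the
-- built-in equality solver does not perform -- which is precisely the
-- gap a domain-specific constraint solver in the typechecker fills.
```

Here add (Q 1 :: Quantity "m") (Q 2) typechecks, while mixing "m" and "s" quantities is a type error; only the unit-arithmetic cases need the extension described in the talk.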
