Abstract
TBA
Abstract
What's a free monoidal category? Imagine planar circuits connecting a sequence of inputs to a sequence of outputs, built from components similarly taking a sequence of inputs to a sequence of outputs. They are categorical: you can wire the outputs of one circuit to the inputs of another if they match. They are monoidal: you can place them in parallel, concatenating input and output sequences. They have an equational theory which amounts to allowing arbitrary jiggling around in the plane, just as long as you keep everything connected in the same way and treat wires and components as deformable but impassable boundaries. Can we decide this equational theory? Yes, in principle, but how might we do it in practice? What would it take to give unique normal forms for such circuits as inductively defined data? Fred, Jules and I were staring at this puzzle one evening, looking over a notebook for a clue to plant upon the page, in a pub not so many minutes away known for the central stairway up to its mezzanine. We were staring at circuits and staring at the stairway, until we began to see the stairways in the circuits. And it made us wonder!
Abstract
Inductive data types are a foundational tool for representing data and knowledge in dependently typed programming languages. The user provides a collection of rules that determine positive evidence for membership in the type. Elimination of an inductive type corresponds to structural induction on its members.
But what if our data modelling requires negative information as well as positive? For example, representing the result of a backtracking parser requires evidence that a certain parsing attempt *didn't* work. Standard formulations of inductive types do not allow negative information mixed with positive.
Mixing positive and negative information has been studied in logic programming, resulting in concepts like Negation as Failure, stable models, and answer set programming. Incorporating negative information into systems of rules leads us into the realm of non-monotonic logics, where simply adding knowledge does not necessarily preserve existing conclusions.
In this talk, I'll describe a way to understand data types with negative information in type theory by combining ideas from 3-valued stable models in logic programming and constructive negation (or refutation) from linear logic. The hope is that by adding negation to data types we will be able to represent some reasoning tasks more easily in dependent types.
Abstract
The modal mu-calculus is a general modal fixpoint logic that allows us to reason about the ongoing, possibly infinite behaviour of a transition system. For example, the temporal logics from Sean's 101 talk a few weeks ago can be seen as fragments of the mu-calculus.
In this talk I am planning to give a brief introduction to the mu-calculus and then focus on a key result about the logic: every formula is equivalent to a so-called disjunctive formula. Constructing a disjunctive formula of "small" size is important e.g. for satisfiability checking. I am going to sketch a construction that yields a disjunctive formula that is single-exponential in the closure size of the input formula. The construction will highlight a few subtleties that arise when thinking about the size of a mu-calculus formula.
Based on joint work with Johannes Marti and Yde Venema.
Abstract
The relation between logic and algebra is often expressed by the existence of equational completeness theorems, stating that a certain logic L is complete with respect to a certain class of algebras K. Perhaps the most famous example is given by the class of Boolean algebras and classical propositional logic (CPC). However, CPC is also complete with respect to the class of Heyting algebras via a negative translation. In order to recover a univocal connection, we need to dig deeper into the theory of algebraisability [1]: an algebraisable logic admits a unique algebraic semantics.
In this talk we will introduce some notions and basic results from the theory of algebraisable logics. We will further propose an extension of the theory to accommodate logics with weaker forms of substitution (e.g. logics of dependence and independence, various epistemic logics).
This is joint work with Davide Quadrellaro.
[1] W. J. Blok and D. Pigozzi. Algebraizable logics, https://bookstore.ams.org/memo-77-396/
Abstract
Graph grammars and double-pushout rewriting are pretty old news at this point. But the abstract rewriting systems produced by (hyper)graph grammars actually carry an extremely rich compositional and algebraic structure, many aspects of which have remained woefully under-investigated. We will show how causality relations in non-deterministic hypergraph rewriting systems can effectively be captured by means of a strict monoidal 2-category (a “multiway system”), and how higher homotopy types may consequently be constructed by adjoining appropriate sets of additional rewriting rules, with the n → ∞ limit of the resulting n-fold category yielding an ∞-groupoid (the classifying space of which forms an (∞,1)-topos). We will discuss how this general framework has been applied pragmatically in the design of efficient automated theorem-proving and diagrammatic reasoning software, before concluding with a more speculative discussion of the potential applicability of the overall formalism to the foundations of physics (especially categorical quantum mechanics and discrete quantum gravity).
This is joint with Xerxes Arsiwalla.
Abstract
Optics can be constructed in at least three different ways: from Tambara theory (profunctor encoding), as open diagrams (existential optics) and as free 'categories with a counit' (free teleological categories). Each of these three constructions highlights a different point of view on optics, and their interplay is an elegant mathematical story and an (arguably) successful story for abstract nonsense. In this play talk, we look at all three constructions and contemplate their equivalence.
Abstract
Modal logic can be applied to a diverse range of use-cases by fixing different notions of model and modality. Our motivation today will be the formal verification technique of model checking, and with this in mind, I will focus on temporal logic. In temporal logics, the models represent systems which change over time, and the modalities allow us to reason about exactly how they proceed in time.
I will first introduce basic relational modal logic, and show its limitations in this domain. We will then go on a tour of LTL, CTL, CTL*, and the modal 𝜇-calculus. Key destinations on the tour will include the respective expressive power and model-checking complexity of each, particularly in comparison to the 𝜇-calculus. I will end the tour with a closer look at the model-checking problem of the 𝜇-calculus, and its relationship to parity games.
Abstract
Model theory in classical logic provides a number of tools useful for classifying structures described by sets of axioms. In computer science we are often interested in cases where these structures are finite, but many results in classical model theory fail to specialise to cases where we require all models to be finite. In this talk I will give an introduction to some of the alternative techniques used for studying expressibility of finite models and discuss some interesting connections with complexity theory.
Abstract
Data analysis has traditionally dealt with regression, clustering and dimensionality reduction. Persistent homology, a tool in Topological Data Analysis, can be thought of as the counterpart of clustering: instead of looking at clusters of data, it classifies holes in the data. And we mean 'holes' in the most visual way: given some point cloud in a metric space, it addresses how to characterise regions of the space where there is no data. This talk will introduce persistence modules, a central concept of the theory. We will overview their construction from data, how we can compare them, and their stability properties.
Abstract
The ménagerie of categorical models of dynamical systems is becoming a veritable zoo, but what makes all these animals tick? In this talk, I will introduce a new specimen: a symmetric monoidal category of continuous-time open Markov processes with general state spaces. I will explain how this category is obtained from a category of "continuous-time coalgebras" opindexed by polynomials, and describe how this recipe also gives categories of nondeterministic systems in arbitrary (continuous) time. These new specimens are motivated by the cybernetic question of how to model systems that are continuously performing approximate Bayesian inference. I will therefore sketch why their better-known cousins weren't quite up to the job, and show that our new SMC admits Bayesian inversion. Finally, I will attempt to make contact with the MSP branch of categorical cybernetics, asking what makes the shapes of our structures seem so similar-but-different, and how we might begin to understand systems nested within systems.
Abstract
Open games allow you to talk about some amount of game theory (and thus, some amount of economics) in terms of a lot of familiar ideas from category theory and compositionality. It isn’t possible to introduce open games adequately to anyone in this amount of time, but this talk will be the best I can manage in the circumstances. I’ll explain approximately how open games work, and how they relate to some other interests of MSP. I’ll also do a quick demo of the open game engine, a Haskell implementation that’s robust enough to do some Real Economics ™.
Abstract
Wouldn't it be nice if you could tell the computer what you mean, so that it could help you catch not only syntax errors, but also semantics errors? Even better, so it could help you and guide you towards the program that you want to write? So that knowing what you are doing could become a responsibility shared between human and machine, instead of being only your problem? Using dependent types, and some care, we can achieve this by encoding the precise meaning of the program in its type -- any implementation will consequently be correct *by construction*. I will give a demonstration of such dependently typed programming, based in part on a nice recent draft functional pearl by Wouter Swierstra: "A correct-by-construction conversion to combinators" [1]. I will not assume that you know much about correct-by-construction programming, or combinatory logic.
[1] https://webspace.science.uu.nl/~swier004/publications/2021-jfp-submission-2.pdf
Abstract
The free category on a graph may be understood as an operational semantics. The objects of this free category represent states and the morphisms represent possible sequences of events which may occur. How can this operational semantics be built from smaller components? We will see how gluing graphs together gives composition in a category of graphs, and how the operational semantics of graphs extends to a functor on this category. Blackboxing is a process which takes a system and focuses only on the relationship it induces between its inputs and outputs. In this talk we will explain how blackboxing is almost a functor, and how the categorical framework developed so far gives insight into computation.
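As an illustrative sketch of this idea (the node and event names are hypothetical, not from the talk), the free category on a graph can be rendered in Haskell with paths as morphisms and endpoint-checked concatenation as composition:

```haskell
-- A graph of states and labelled events: each edge is (source, event, target).
type Node = String
type Edge = (Node, String, Node)

-- A morphism of the free category: a path, i.e. a composable sequence of
-- events, tracking its endpoints so that composition can check they match.
data Path = Path { src :: Node, tgt :: Node, events :: [String] }
  deriving (Eq, Show)

-- identity morphism at a state: the empty path
idPath :: Node -> Path
idPath n = Path n n []

-- composition: wire the target of p into the source of q
compose :: Path -> Path -> Maybe Path
compose p q
  | tgt p == src q = Just (Path (src p) (tgt q) (events p ++ events q))
  | otherwise      = Nothing

-- a single edge as a one-step path
step :: Edge -> Path
step (a, e, b) = Path a b [e]
```

For example, `compose (step ("s0","coin","s1")) (step ("s1","coffee","s0"))` succeeds, while composing two paths with mismatched endpoints yields `Nothing`, reflecting that only matching wirings exist in the free category.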
Abstract
The aim of this presentation is to give an overview of various concepts within universal coalgebra. We will explore such topics as corecursion, bisimulation and coalgebraic (modal) logic. No prior knowledge is required, save for some elementary category theory.
Abstract
What are types? What types of types exist? Why do we bother, and what does this buy us? This introduction to type theory aims to motivate the study of type systems and demonstrate their power. No knowledge is required except for basic programming concepts, like functions and data structures. Our journey will take us from the Set theory slums, through the Curry-Howard plains, into the type-theory ivory tower, and end by looking at the stars where modal logic and QTT live.
Abstract
Least fixpoints, often written mu X. Blah(X), give us inductive types (for strictly positive Blah), e.g. finite lists of A given as mu X. 1 + A*X: all the lists of finite length. Greatest fixpoints, often written nu X. Blah(X), give us possibly infinite types (for strictly positive Blah), e.g. finite lists, but also wild infinite lists like [0,1,2,3,...] which never repeat.
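As a concrete (if imperfect) illustration, here is a Haskell sketch of the two fixpoints of the list functor F X = 1 + A*X. Haskell's laziness means Mu and Nu actually coincide, so the distinction below is one of discipline (fold-only versus unfold-only); all names are my own:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- The "list shape" functor  F X = 1 + A * X
data ListF a x = NilF | ConsF a x deriving Functor

-- mu X. 1 + A*X : inductive lists, consumed by folding with an algebra
newtype Mu f = In (f (Mu f))

fold :: Functor f => (f b -> b) -> Mu f -> b
fold alg (In t) = alg (fmap (fold alg) t)

toList :: Mu (ListF a) -> [a]
toList = fold alg
  where
    alg NilF         = []
    alg (ConsF a as) = a : as

fromList :: [a] -> Mu (ListF a)
fromList = foldr (\a as -> In (ConsF a as)) (In NilF)

-- nu X. 1 + A*X : coinductive lists, produced by unfolding from a coalgebra
newtype Nu f = Out (f (Nu f))

unfold :: Functor f => (s -> f s) -> s -> Nu f
unfold coalg s = Out (fmap (unfold coalg) (coalg s))

-- the wild infinite list [0,1,2,3,...]
nats :: Nu (ListF Integer)
nats = unfold (\n -> ConsF n (n + 1)) 0

-- observe a finite prefix of a possibly infinite list
takeNu :: Int -> Nu (ListF a) -> [a]
takeNu 0 _                  = []
takeNu _ (Out NilF)         = []
takeNu n (Out (ConsF a as)) = a : takeNu (n - 1) as
```

Here `toList . fromList` is the identity on finite lists, while `takeNu` lets us observe finite prefixes of inhabitants of the greatest fixpoint such as `nats`.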
There is no Greek letter between mu and nu. But there is at least one interesting fixpoint between the least and the greatest: things which can loop back on themselves, but not diverge into the wide blue yonder. That is the rational fixpoint.
What are we to do with it? I don't know but I intend to find out.
I shall give a deeply unsatisfactory talk on this topic which leaves more questions than answers, for the purpose of infecting people with this problem. There will be at least one solid clue.
Abstract
We show that open games cover extensive-form games with both perfect and imperfect information. Doing so forces us to address two current weaknesses in open games: the lack of a notion of player and their agency within open games, and the lack of choice operators. Using the former we construct the latter, and these choice operators subsume previously proposed operators for open games, thereby making progress towards a core, canonical and ergonomic calculus of game operators. Collectively these innovations increase the level of compositionality of open games, and demonstrate their expressiveness.
This is a practice talk for ACT'21.
Abstract
I will have a look into the expressivity of BCI-algebras, by implementing them in Coq and by looking at them from a categorical point of view. The first step will be to implement the λ* operator as shown in [1], which will allow for the use of linear lambda-calculus. From this point it will be much easier to define useful terms with BCI combinators, and I will use the encoding of booleans in linear lambda-calculus defined in [2] to create a type of duplicable and discardable booleans. In parallel, I will talk about the categorical aspects with the notion of realisability and category of assemblies [1], with the aim of showing that we can obtain a linear-non-linear model.
[1] https://www.kurims.kyoto-u.ac.jp/~naophiko/paper/realizability.pdf
[2] https://www.cs.brandeis.edu/~mairson/Papers/jfp02.pdf
Abstract
Martin-Löf type theory has provided us with a new programming paradigm: One where types and terms have shed their differences in order to live harmoniously in the same universe. Despite this successful reunion we have yet to communicate this story to the people building today’s software, for whom it is still a fairy-tale, rather than reality.
For this, I will demonstrate how to use dependent types for a purpose that is extremely common in commercial software: Web servers. Web servers are a great example because of how ubiquitous they are. Every service, company or product probably has a web server running behind it in some way (if only to serve web pages). Dependent types help the implementation of servers in small and big ways and the experience can be further enhanced by combining it with lenses in order to reach powerful new levels of abstraction.
Abstract
In constructive logic, real numbers are even more interesting than they are in the classical world. To demonstrate the differences, I'll talk about how we can construct a type of real numbers in Agda in terms of the Cauchy completion of the metric space of rational numbers. This yields an implementation of real numbers that is reasonably efficient and that we can do proofs about. The basic construction closely follows Russell O'Connor's "A monadic, functional implementation of real numbers" [1]. I'll also talk about using the completion of a metric space to implement quantitative equational theories over complete separable metric spaces in Agda. I'll try to work from the assumption that the audience knows nothing about metric spaces, completion, or construction of real numbers.
[1] https://arxiv.org/abs/cs/0605058
Abstract
I will tell a (very simplified) story about how the adaptive immune system works through the medium of rules. Rules are compositional creatures that can be looked at from the perspective of structured cospans, algebras or double pushout graph rewriting systems. I won't belabour the abstract interpretation; rather, I will concentrate on how rules can be used to good effect both to explain (normal mode) immune response to a pathogen like SARS-CoV-2 and to generate a model that can be simulated and reproduces some interesting heterogeneity that is observed in the world. Furthermore, compositionality means that this set of rules can be freely combined with sets of rules for transmission and diagnostic testing, and I will show this in action. Finally, I'll speculate about some other kinds of models that it might be interesting to incorporate.
Abstract
Polymorphism allows a single function to be instantiated with multiple types. It is parametric if all of the instances behave uniformly. Reynolds gave a rigorous formalization of this notion in his abstraction theorem for the polymorphic lambda calculus. The key insight is that types may also be interpreted as relations. In this talk, I will give an overview of relational parametricity and some of its consequences, focus on an approach to extending parametricity to dependent type theories using reflexive graphs (due to Atkey, Ghani and Johann), and finish by presenting a relationally parametric model of Quantitative Type Theory.
Abstract
As a notion to represent disjoint sets of memory locations, regions serve as the basis of various techniques for e.g. memory management and pointer analysis. They are closely interrelated with effects, and have been illustrated to be useful for improving the precision of analysis. We generalize the notion of region to represent properties of values, introduce a region type system for Featherweight Java (FJ) that is parametrized with a monad-like structure, and prove a uniform soundness theorem. Its instances include some type systems studied by Martin Hofmann et al. as well as a new one that performs more precise analysis of trace-based program properties. Our region type system is separate from the FJ type system, making it simpler and also easier to move to larger fragments of Java. The uniform framework helps to avoid redundant work on the meta-theory when extending the system to cover other language features such as exception handling. This is joint work with Ulrich Schöpp.
Abstract
Hardware design is commoditised, and it might be the case that several components of your design use encrypted bitstreams bought from third parties. We must have faith that the encrypted bitstreams do what they are supposed to. In the Border Patrol Project we are interested in being able to reason about the structure & behaviour of designs as a whole, regardless of whether we can inspect each module down to the individual gates.
Following on from my SPLS Nov '20 talk, I want to update everyone on my journey in capturing the physical structure of hardware design using lambda calculi. Specifically, I will re-introduce System-V, a typed lambda calculus that is based upon the well-known hardware description and verification language SystemVerilog. I will show how a System-V design can capture physical hardware design in the Verilog style, and how its type system enforces correct wiring. I will then show how we can formally look beyond System-V itself (and hardware design by extension) by leveraging known things from programming-language theory, and posit on where to go from here.
Abstract
The simply typed lambda calculus is a tried and true tool for experimentation and teaching; today we're going to use it as our guide through an introduction to category theory. Category theory without proper motivation or context can be a bit puzzling to get into, but using the simply typed lambda calculus as our framework, we will see what it takes to interpret it in a Cartesian closed category. This walkthrough should provide you with the tools to understand Cartesian closed categories, as well as some hands-on experience with proving your first results in category theory!
Abstract
The algebra of open games is a compelling language for modelling large and medium scale games, with many interacting players. Its most striking features are compositionality and an intuitive and expressive graphical calculus. In this talk I'll go through some recent developments in this area to sketch a general way to build open games, starting from an informal specification, a classical game, or a bunch of given other games. The most important novelty is the presence of a well-defined and correct notion of player, which was missing until now. The guiding principle will be that ‘games and players live in orthogonal planes’, as suggested by the graphical language of the para construction.
Abstract
This talk is about the algebras of the finite support probability monad on Set. They consist of a set equipped with a “mixture” operator satisfying some axioms, and are variously known as (abstract) convex sets, convex algebras, barycentric algebras, etc. Every actually-convex subset of a real vector space is a convex set, but there are less expected examples too: every join-semilattice can be seen as a convex set, and there are examples that combine aspects of both vector spaces and semilattices.
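A minimal Haskell sketch of the two kinds of example just mentioned (the class and instance names are mine; the axioms are only recorded as comments):

```haskell
-- A convex algebra: a mixture operation  mix p x y  for p in [0,1],
-- read as "x with weight p, y with weight 1-p", subject to axioms
-- such as idempotence: mix p x x == x.
class Convex a where
  mix :: Double -> a -> a -> a

-- The expected example: genuine convex combinations in the real line.
instance Convex Double where
  mix p u v = p * u + (1 - p) * v

-- A less expected example: a join-semilattice, where mixing takes the
-- join and ignores the weight entirely (the axioms still hold).
newtype MaxInt = MaxInt Int deriving (Eq, Show)

instance Convex MaxInt where
  mix _ (MaxInt a) (MaxInt b) = MaxInt (max a b)
```

Any genuinely convex subset of a vector space gives an instance like the first; the second shows how order-theoretic structure sneaks into the same algebraic theory.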
I will spend a lot of time on the basic theory and examples, which is mostly due to Marshall Stone. I’ll then transition into talking about work in progress with Paolo Perrone and Sharwin Rezagholi, in which we aim to prove a version of Brouwer’s fixpoint theorem for convex sets. Specifically, we aim to construct a “topological realisation” functor F: Conv -> Top and find sufficient convex properties on X such that F(X) satisfies the fixpoint property for continuous maps. Along the way we prove a classification theorem: every convex set can be decomposed as a family of convex subsets of vector spaces “fibred over” a semilattice (slightly improving a similar unpublished result by Tobias Fritz).
Abstract
This talk serves as my formal introduction to the MSP101 Group, and covers work from my Bachelor thesis. We will look at how dependent types can be used to enable a type-safe crossover operation to allow the generation of program examples that are guaranteed to be well typed. We'll also cover alternate approaches I could've taken, as well as further work in the area.
Abstract
I will provide an introduction to dynamic modal logics such as Propositional Dynamic Logic (PDL) and Game Logic (GL) and will then describe a (co)algebraic framework for these logics. This framework relates program/game constructs of these logics to monad structure. The axioms of these logics express compatibility requirements between the modal operators and the monad structure.
Abstract
Quantitative Type Theory combines linear types (where we keep track of how many times a variable is used) and dependent types (where terms can appear in types). This gives a logical system which is both expressive and precise with respect to the resource usage of programs and proofs, with a rich model theory. I will talk about work in progress investigating data types in this setting, in the form of "resource-aware" quantitative containers, and their initial algebra semantics.
This is joint work with Georgi Nakov.
Abstract
We develop a fixed-point extension of quantitative equational logic and give semantics in one-bounded complete quantitative algebras. Unlike previous related work about fixed points in metric spaces, we are working with the notion of approximate equality rather than exact equality. The result is a novel theory of fixed points which not only provides solutions to the traditional fixed-point equations, but also lets us define the rate of convergence to the fixed point. We show that such a theory is the quantitative analogue of a Conway theory and also of an iteration theory, and that it reflects the metric coinduction principle. We study the Bellman equation for a Markov decision process as an illustrative example.
This is a recent joint work with Gordon Plotkin and Prakash Panangaden.
Abstract
Circuit diagrams are commonly modelled by graphs embedded into some oriented surface (maps). When the circuit's topology is non-trivial (e.g. for quantum circuits), the maps live on higher genus surfaces. I will give an introduction to the relevant graph theory and discuss some ideas on how we might be able to program with these higher genus structures. Starting from the plane case, a multi-stack approach seems promising for approaching these more complex maps. Strategies for modelling structures with multiple stacks exist in various different contexts and any of your experiences are most welcome in the discussion!
Abstract
It is natural to formalise the notion of a model of type theory (especially the syntax = intended initial model) inside type theory itself. This is often done by writing the definition of a category with families (CwF) as a generalised algebraic theory. What would a success criterion from a HoTT point of view be? The typical first goal is that the initial model is an h-set or even has decidable equality. Our second goal is to make the "standard model" work, i.e. the universe U should be a CwF in a straightforward way (cf. Mike Shulman's 2014 question whether the n-th universe in HoTT models HoTT with n-1 universes). Unfortunately, it is hard to combine these two goals. If we include set-truncatedness explicitly in the definition of a CwF, then the "standard model" is not a CwF. If we don't, then the initial model is not an h-set. The root of the problem is that 1-categories are not well-behaved concepts in an untruncated setting. The natural approach is higher categories, which corresponds to "equipping the syntax with all coherences" instead of truncating. In this talk, I will explain one approach to this based on a type-theoretic formulation of Segal spaces, expressed in HTS/2LTT. I will discuss what works and what is still open. The talk will be based on arXiv:2009.01883.
Abstract
I will give an introduction to the categorical foundation of gradient-based learning algorithms. I'll define three abstract constructions and show how they can be put together to form general neural networks. The Para construction is used to compose neural networks while keeping track of their weights; lenses/optics take care of the forward-backward data flow; and lastly, reverse derivative categories are used to functorially construct the backward wires from the forward ones. In addition, we'll see that gradient descent, momentum, and a number of other optimizers are lenses too, and that this framework includes learning on boolean circuits, in addition to standard Euclidean spaces.
Abstract
I'll talk about how I go about writing, reading, and responding to reviews. Reviews are an essential part of academic publishing, but why do we do them (for free!), and why do we care what they say? I discuss how reviews are used by programme committees and chairs to decide what papers are selected for conferences (and journals), how I think you should go about doing reviews, and why you should write reviews. I'll also talk about reading reviews written about your own work, and how to go about the author response period effectively.
Abstract
The aim of this talk is to state and prove Nash's theorem, which says that every finite game with mixed strategies has a Nash equilibrium. The proof makes use of two other famous results: Sperner's lemma, a combinatorial result about coloring the vertices of a triangulation, and Brouwer's fixed-point theorem. Thus, my talk will be organised into three largely independent parts: first I will sketch the proof of Sperner's lemma; then I will use Sperner's lemma to prove Brouwer's theorem; and finally I will use Brouwer's theorem to prove the existence of Nash equilibria.
Abstract
I will give an introduction to generic metatheory in the style of Allais, Atkey, Chapman, McBride, and McKinna, leading into new work by me and Bob. Generic metatheory is about deriving proofs and operations for a whole class of programming languages, rather than the usual single language. The class we consider combines variable binding with usage-sensitivity, and contains many variants of linear natural deduction calculi (and, as a special case, intuitionistic and classical calculi). We build on the framework of Allais et al. and our recent use of linear algebra in linear metatheory.
Abstract
I'll give an introduction to algebraic effects and effect handlers as a general approach to programming and reasoning about effectful computation. I'll present the notion of a computation over an algebraic effect as a command-response tree over an effect signature quotiented by some equational theory. I'll consider how to interpret command-response trees and motivate effect handlers as the reification of such interpretations as an object language feature that provides a generic implementation strategy for algebraic effects. I'll give examples to show that it can be useful to interpret the same command-response tree using different interpretations which may not respect the same equational theory. Thus effect handlers can provide an expressive programming feature independently of any non-trivial algebraic theory.
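As a rough sketch of the command-response-tree view (using a hypothetical one-cell state signature rather than the talk's examples), computations are trees whose nodes issue commands and whose branches carry responses, and a handler is an interpretation of such trees:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- A signature for a single mutable Int cell: Get awaits an Int response,
-- Put issues an Int and awaits a trivial response.
data StateSig k = Get (Int -> k) | Put Int k deriving Functor

-- Command-response trees over a signature f: leaves return a value,
-- nodes issue a command and continue with the response.
data Tree f a = Ret a | Op (f (Tree f a)) deriving Functor

instance Functor f => Applicative (Tree f) where
  pure = Ret
  Ret g <*> t = fmap g t
  Op t  <*> u = Op (fmap (<*> u) t)

instance Functor f => Monad (Tree f) where
  Ret a >>= k = k a
  Op t  >>= k = Op (fmap (>>= k) t)

-- the generic operations of the signature
get' :: Tree StateSig Int
get' = Op (Get Ret)

put' :: Int -> Tree StateSig ()
put' n = Op (Put n (Ret ()))

-- A handler: one particular interpretation of the trees, here into
-- state-passing functions  Int -> (a, Int).
handler :: Tree StateSig a -> Int -> (a, Int)
handler (Ret a)          s = (a, s)
handler (Op (Get k))     s = handler (k s) s
handler (Op (Put s' k))  _ = handler k s'

-- an example computation: increment the cell and read it back
prog :: Tree StateSig Int
prog = do
  n <- get'
  put' (n + 1)
  get'
```

The same tree `prog` could equally be handled by a logging interpretation or one that discards writes; different handlers need not respect the same equational theory, which is the point made above.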
Abstract
This talk is a brief introduction to session types, a type formalism for structured communication between concurrent programs. Instead of typing programs, we will session type channel endpoints, to then ensure that the programs that make use of these endpoints do so in a principled manner, according to their session types. I will go over some of the key ideas that enable session-typed programming, and comment on the properties that session-typed programs exhibit. As an example, I will introduce (and comment on the oddities of) a type system that uses session types to type the pi calculus. I will close by mentioning some of the extensions to session types and more advanced type systems.
Abstract
This talk will (probably) be a tutorial on how metrics meet probability theory and how this provides us with the context for innovative paradigms in computer science. The novel challenges that machine learning, cyber-physical systems and statistical computational methods pose to computer science require a fundamental change in the semantics of computation. While in the past we were happy to know whether two programs/algorithms/machines behave the same or not, today it is obvious that this is not enough for our purposes. We need, more and more, to be able to reason about and measure the similarity of non-identical computational behaviours. Moreover, we need to speak about randomness in computational phenomena, especially when we formalize learning or behaviours in unknown contexts. Last but not least, we need to eventually replace the classic metatheory of computation with a probabilistically-based one, where questions can be answered probabilistically within controlled confidence boundaries.
I will not have the time to approach all these during the talk, but I will try to give an overview of the mathematical instruments one needs to approach such a complex challenge.
Abstract
I’ll introduce event structures, a model of computation in which behaviour is captured through partial orders of causal dependency between events. As an application I'll show their role in a theory of concurrent/distributed games and strategies, which has been useful in the semantics of computation, including for probabilistic and quantum programs. Though originally motivated by the limitations of traditional semantics, a form of structural game theory is emerging, through determinacy and value theorems and the preservation of winning/optimal strategies under composition. In this talk I'll concentrate on what I see as connection points with work here at Strathclyde: as a lead-in, relations with the stable domain theory of Berry and Girard; then, the view of strategies as profunctors, and hence how they connect to the traditional domain theory of Scott; concluding with how distributed strategies support certain dialectica categories and lenses, and through them open games. My hope is that these connections will be fruitful. On the one side, distributed games can offer a rich metalanguage which extends to probabilistic and quantum computation.
Abstract
How do we represent and reason about data types, in general? The theory of containers is one answer, which is locally quite popular in the MSP group (and beyond). A container is given by a collection of shapes, and for each shape, a collection of positions, where one is meant to plug in data. Containers can be used to analyse generic constructions on data types, without resorting to a messy induction over syntax.
Each container induces a functor from Set to Set, which maps an "element type" X to the set of structures given by a shape together with data drawn from X at each position. Furthermore, there is a natural notion of morphism of containers, with the remarkable property that it completely captures the natural transformations between the functors the containers represent: the interpretation functor from containers to the functor category from Set to Set is full and faithful. In general, the category of containers is extremely nice, e.g., containers are closed under almost all operations you can imagine (composition, products, coproducts, exponentials, initial algebras, final coalgebras, ...). I hope to give an introductory talk that will make the above more precise.
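To make the shapes-and-positions idea concrete, here is a small Haskell sketch (names like `Ext` and `Morph` are my own, and positions are crudely modelled as bounded integers rather than the dependent families of the actual theory):

```haskell
-- A container's extension: a shape, with data at every position.
-- Positions of a shape s are modelled here as integers 0 .. arity - 1.
data Ext s x = Ext { shape :: s, fill :: Int -> x }

-- The induced functorial action: act on the data, leave the shape alone.
mapExt :: (a -> b) -> Ext s a -> Ext s b
mapExt f (Ext s p) = Ext s (f . p)

-- Lists as a container: shapes are lengths, a shape n has n positions.
toList :: Ext Int x -> [x]
toList (Ext n p) = map p [0 .. n - 1]

fromList :: [x] -> Ext Int x
fromList xs = Ext (length xs) (xs !!)   -- partial beyond the length; a sketch

-- A container morphism: a map on shapes, and for each target position a
-- source position to fetch data from. It induces a natural transformation.
data Morph s t = Morph { onShape :: s -> t, onPos :: s -> Int -> Int }

apply :: Morph s t -> Ext s x -> Ext t x
apply (Morph u g) (Ext s p) = Ext (u s) (p . g s)

-- Example: "reverse" as a container morphism on the list container.
revM :: Morph Int Int
revM = Morph id (\n i -> n - 1 - i)
```

The `revM` morphism shows the full-and-faithfulness phenomenon in miniature: a natural transformation on lists (reversal) is captured purely by a shape map and a repositioning of data.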
Abstract
I’ll talk about the zoo of closely-related structures (with a lot of clashing terminology) known as “lenses” and “optics”. Lenses and optics are a way of managing inherently state-heavy applications (for example videogames) in purely functional languages, which turn out to have a lot of surprisingly deep category theory behind them. I will also talk about some of the places where lenses have recently appeared far outside their original domain, for example in game theory, machine learning and systems theory.
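As a taste, here is the simplest member of the zoo in Haskell: a get/set lens with composition (a folklore formulation, not any particular library's API):

```haskell
-- A lens from s to a: a "get" and a "set".
data Lens s a = Lens { view :: s -> a, update :: s -> a -> s }

-- Lenses compose, which is where the category theory begins.
compose :: Lens s a -> Lens a b -> Lens s b
compose outer inner = Lens
  { view   = view inner . view outer
  , update = \s b -> update outer s (update inner (view outer s) b)
  }

-- Example: a player with a position, in a hypothetical game state.
data Pos    = Pos { x :: Int, y :: Int }          deriving (Eq, Show)
data Player = Player { name :: String, pos :: Pos } deriving (Eq, Show)

posL :: Lens Player Pos
posL = Lens pos (\p q -> p { pos = q })

xL :: Lens Pos Int
xL = Lens x (\p n -> p { x = n })
```

Composing `posL` and `xL` focuses on the x coordinate buried inside a player, which is exactly the kind of nested-state access that makes lenses attractive for state-heavy applications.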
Abstract
Going back to Lawvere, people have tried to study probability theory in terms of monads, the idea being that for each "space" X of some type, there should be a space PX of probability distributions on X, with point distributions and integration of distributions giving the monad structure. The Kleisli category of this monad is of particular interest: it corresponds to "stochastic maps" between spaces.
It turns out that one can actually often do without the monad - the Kleisli category itself carries enough structure that one can do probability theory "internally", rederiving the notions of determinism, independence of measures, and many others. I will present this picture of probability, and give some examples of results that can be proven in this framework.
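For intuition, here is a toy finite-distribution monad in Haskell (ignoring all measure-theoretic subtleties; `point` is the unit and `bindD` the integration described above, with Kleisli arrows `a -> Dist b` playing the role of stochastic maps):

```haskell
-- Finite probability distributions: a list of outcomes with weights.
newtype Dist a = Dist { runDist :: [(a, Rational)] }

-- Dirac / point distribution: the monad's unit.
point :: a -> Dist a
point a = Dist [(a, 1)]

-- Integration of distributions: the monad's bind.
bindD :: Dist a -> (a -> Dist b) -> Dist b
bindD (Dist xs) k =
  Dist [ (b, p * q) | (a, p) <- xs, (b, q) <- runDist (k a) ]

-- A fair coin, as a distribution on Bool.
coin :: Dist Bool
coin = Dist [(True, 1 / 2), (False, 1 / 2)]

-- Probability that a predicate holds.
prob :: (a -> Bool) -> Dist a -> Rational
prob f (Dist xs) = sum [ p | (a, p) <- xs, f a ]
```

Independence of two coin flips, for instance, is visible in the bind: flipping twice and asking for both heads gives probability 1/4.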
Abstract
Sometimes, we think of the category of sets and functions as a simple model of programs in a functional programming language. But sets and functions are much too expressive -- there are plenty of set theoretic functions that aren't expressible in any implementable programming language. Realisability is a way to bring functions down to earth by requiring that they are computable in some model of computation. I'll cover the construction of categories of realizable functions, usually called the category of Assemblies, and its interesting sub-category, the category of Modest Sets or PERs. I'll also sketch how to interpret type theory in the category of assemblies. If there is time, I'll also cover linear realisability.
Abstract
Sheaves are among the oddest creatures roaming the world of mathematics. They come from the far land of algebraic geometry, they speak a tricky language, and they organize in unfathomably big herds ('topoi'). There's no reason to be afraid, though: sheaves can be the tamest beasts and provide many useful services to the mathematician who's willing to learn their ways. In this talk, I'll try to demystify sheaves by giving an elementary exposition of their basic features, trying to convey useful intuition about their behaviour. I will also try to give a taste of some of their most astounding tricks, such as sheaf cohomology and topos-theoretic forcing.
Abstract
To build upon last week's talk on Dependent types, we will develop the intuition around Quantitative types and their usage in modern programming languages. What does it mean to program when the type system is responsible for tracking resources rather than leaving this task to the programmer? What new patterns do we see? And what are the limitations we encounter, and how do we fix them? This talk aims to answer those questions in an approachable and interactive way.
Abstract
I'll give an introduction to the setup for dependent type theories, in the bidirectional style, again sketching why fundamental metatheoretic properties hold by not asking the wrong questions. In particular I'll develop the theory of dependent function types, then add lists. Let's conspire to make append associative, and if time permits, map a functorial action.
Abstract
A nice, chill introduction to the basic definitions and theorems on monoidal categories. We will be defining monoidal categories and symmetric monoidal categories, and talking about the coherence theorem and string diagrams.
Abstract
By way of kicking off this semester of MSP101, I'll give an introduction to our local dialect of bidirectional type systems, defined as mutually inductive systems of moded judgments. That is, we not only assign a syntactic category to each place in a judgment form, but also designate its mode as being "input", "subject", or "output". The syntactic categories of introduction and elimination forms are distinguished: the former have types as inputs, the latter types as outputs. I'll illustrate this using the Simply Typed Lambda-Calculus, and I'll rattle through as much metatheory thereof as I can before you bottle me off.
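As a rough illustration of the moded discipline, here is a hypothetical Haskell miniature (not our actual development): checkable terms carry the introduction forms and take their type as an input, while synthesizable terms carry the elimination forms and produce their type as an output.

```haskell
-- Simply typed lambda-calculus, bidirectionally.
data Ty = Base | Arr Ty Ty deriving (Eq, Show)

-- Checkable terms: introduction forms, plus an embedding of synthesizables.
data Chk = Lam String Chk | Emb Syn
-- Synthesizable terms: elimination forms, plus type-annotated checkables.
data Syn = Var String | App Syn Chk | Ann Chk Ty

type Ctx = [(String, Ty)]

-- Checking: the type is an input.
check :: Ctx -> Chk -> Ty -> Bool
check g (Lam x b) (Arr s t) = check ((x, s) : g) b t
check _ (Lam _ _) _         = False
check g (Emb e)   t         = synth g e == Just t

-- Synthesis: the type is an output.
synth :: Ctx -> Syn -> Maybe Ty
synth g (Var x)   = lookup x g
synth g (Ann m t) = if check g m t then Just t else Nothing
synth g (App f a) = case synth g f of
  Just (Arr s t) | check g a s -> Just t
  _                            -> Nothing
```

Note how the type annotation `Ann` is exactly where checking meets synthesis, so redexes always carry the type information needed to keep going.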
Abstract
The recent notion of Open Games makes it possible to study game theory in a compositional way: complex games can be obtained from smaller ones using various operators such as sequential and parallel composition. Thus, Open Games form a symmetric monoidal category. There are many flavors of open games, ranging from small variations on how equilibria are treated, to more radical changes such as introducing probabilistic behavior. Proving from scratch that each variant still forms a monoidal category is tedious.
I will present a compositional construction of the category of Open Games, which consists of three main steps. In each step, one can swap a component for a similarly-behaved one, without disturbing the rest of the structure. Thus, one can define many variants of Open Games with minimal effort. This compositional approach is based on the notion of Arrows, a concept first introduced in functional programming.
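For readers who haven't met Arrows: they package sequential and parallel composition behind one interface, so a pipeline written against the interface works for any instance. A toy sketch (merely an analogy for the modularity claimed above, not the paper's construction):

```haskell
import Control.Arrow

-- An arrow pipeline: (>>>) is sequential composition, while 'first' and
-- 'second' run an arrow on one component of a pair (parallel plumbing).
double :: Arrow a => a Int Int
double = arr (* 2)

dup :: Arrow a => a Int (Int, Int)
dup = arr (\n -> (n, n))

pipeline :: Arrow a => a Int (Int, Int)
pipeline = dup >>> first double >>> second (arr (+ 1))
```

Because `pipeline` only mentions the `Arrow` interface, one can instantiate it at plain functions, at Kleisli arrows of a monad, or at a custom arrow, swapping one component of the semantics without disturbing the rest, which is the spirit of the construction in the talk.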
This is a practice talk for ACT20 next week, and joint work with Bob, Bruno, Neil, Clemens and Fred.
Abstract
While many people use dependent types for theorem proving some of us like to use dependent types for building, and running, safer programs.
Idris' Effects library demonstrates how to embed resource dependent algebraic effect handlers into a dependently typed host language, providing run-time and compile-time based reasoning on type-level resources. Building upon this work, Resources is a framework for realising EDSLs with type systems that contain domain specific substructural properties. Differing from Effects, Resources allows a language's substructural properties to be encoded within type-level resources that are associated with language variables. Such an association allows for multiple effect instances to be reasoned about automatically and without explicit type-level declaration. We use type-level predicates as proof that a language's substructural properties hold.
Using Resources we have shown how to provide correctness-by-construction guarantees that substructural properties of written programs hold when working with =Files=, specifying Domain Specific =BiGraphs=, and Global Session Descriptions---=Sessions=. With this talk I want to discuss the how and why of Resources, and show how we can use the framework to build EDSLs with interesting type-systems.
This talk is part practice talk for ECOOP 2020, part tutorial on an important idiom for practical dependently-typed programming, and part chance to highlight how dependent types are useful when programming in the real world.
Abstract
In her PhD thesis, Tamara von Glehn showed that the Category of Containers (a.k.a. Category of Polynomials, Category of Dependent Lenses, Dialectica Category) supports an interpretation of dependent types. I'll present the basic constructions used, and, if I get to it, show that the model refutes the principle of functional extensionality.
Abstract
In the 1970s, the notion of 'grading' appeared in analytic philosophy and logic as a way of making reasoning more fine-grained, capturing 'degrees' of necessity or possibility. Somewhat independently, grading has become a topic of interest in type theory and programming language semantics, with graded structures again providing a means of more fine-grained reasoning. In this talk, I will give an overview of this idea (both its past and present) and explore two uses: (1) using graded modal types for program reasoning in the context of an experimental functional language with linear and indexed types, called Granule; (2) specialising program semantics to give correct-by-construction arguments about program analyses and transformations. I will also give a mathematical characterisation of grading which suggests a broad paradigm, and will briefly mention various ongoing works.
Abstract
Resource monoids and length spaces are a semantic framework inspired by realizability, which was introduced by Dal Lago and Hofmann in the context of implicit complexity. It has been used to define quantitative models for various programming languages (such as Elementary Affine Logic, LFPL), and deduce soundness properties of the form: "Every definable function lies in a given complexity class". In this talk, I will show how this framework can be used to measure quantitative properties of a language other than time complexity. Namely, I will present a model of simply-typed λ-calculus such that the interpretation of a λ-term contains an upper bound on the size of its normal form.
Abstract
Since its introduction, linear logic has been the cause of and solution to many problems in theoretical computer science. There are many examples in the literature of calculi which restrict usage of variables, allowing them to capture linearity, monotonicity, sensitivity analysis, privacy constraints, and other coeffects. However, the syntactic metatheory of these calculi is often difficult, and rarely in a form amenable to formalisation in a proof assistant.
In this talk, I will introduce the notion of kits and environments yielding generic traversals – a method developed by Conor for proving renaming and substitution lemmas. Then, I will discuss recent work by myself and Bob on adapting this method to semiring-graded calculi. The result is a pleasingly and surprisingly minor variation on the original, comprising the introduction of a linear map to mediate usage annotations. This solution gives us confidence that we can tackle all of quantitative metatheory in the familiar intuitionistic style.
Abstract
The talk will focus on the category of Markov processes, on Markovian logics and Aumann algebras. Markovian logics are modal logics designed to specify (approximated) properties of Markov processes and characterize their bisimilarity. Aumann algebras are the algebraic counterpart of Markovian logics, i.e., they are Boolean algebras with operators that encode probabilistic information. Markovian logics are not compact and for this reason the "classic" Stone duality fails. I will present a different version of Stone duality constructed on top of the Rasiowa-Sikorski lemma, a result that brings topological methods into model theory.
This talk summarizes results obtained in collaboration with Dexter Kozen and Prakash Panangaden.
Abstract
This is very much work in progress, jointly with Guillaume Allais (now of St Andrews) and Fredrik Nordvall Forsberg, with occasional involvement from others of the usual MSP suspects.
Mary is our attempt to build a virtual learning environment which offers effective support to our own classes, building from the experience (but thankfully not the codebase) of the tools I built for teaching and assessing my first year hardware class. Last time I talked about Mary, she was just a twinkle in my eye. Now, she is at least a toddler.
Mary is a variant on pandoc markdown which allows code fragments to be embedded in documents and document fragments to be embedded in code. This talk will focus mainly on the programming language embedded in the system which is currently called "Shonkier", as it is a variant on the "Shonky" language to which Frank compiles. It is untyped, for the time being, but that's an opportunity.
The key design choice in Shonkier is that the C-style function application notation, f(e1,..,en), is used for all sorts of contextualization, and there are many such sorts. "Functions" (which already generalize to effect handlers) are but one form of context. We have also added first class environments (computed by pattern matching expressions) and guarding contexts (computed by Boolean expressions). One recent tweak which pays dividends is that effect handlers can now reply to requests with effects as well as with values. We are now free to negotiate various forms of contingency in a refreshingly direct style.
In short, we have abandoned the naive delights of being context-free in favour of being context-negotiable.
Abstract
Mozilla is a non-profit whose mission is to expand access to, and protect privacy on, the Internet. Mozilla Research investigates emerging technologies, such as programming languages and new web platforms. This talk is an overview of research efforts around the Rust programming language, the Servo web engine, and Mixed Reality (VR and AR) on the web.
Abstract
Physical dimensions such as length, mass and time can be used to ensure that equations are physically meaningful -- it makes no sense to add 7 metres to 12 seconds, and in an equation such as PV = nRT, it had better be the case that both sides have the same physical dimension (in this case pressure times length^3). Even better, dimensions can be used not only to tell scientists off when they make mistakes, but also to deduce non-trivial formulas (the famous example being that of the period of a pendulum as a function of its mass and length, and the acceleration of gravity).
From an MSP perspective, there is a strong similarity between physical dimensions and types in programming languages, and indeed, this connection was explored in the 1990s by various researchers. A concrete outcome is Andrew Kennedy's implementation of units of measure in Microsoft's F# language, which makes it possible to write F# programs that are dimensionally-correct by construction.
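In Haskell one can approximate a fragment of this with phantom types. The sketch below (hypothetical names, and far weaker than Kennedy's F# units, which also track products and powers of units) is already enough to make "7 metres + 12 seconds" a type error:

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

-- Units, promoted to the type level via DataKinds.
data Unit = Metre | Second

-- A quantity tagged with its unit; the tag has no run-time cost.
newtype Quantity (u :: Unit) = Q Double deriving (Eq, Show)

-- Addition only makes sense at a single unit.
addQ :: Quantity u -> Quantity u -> Quantity u
addQ (Q a) (Q b) = Q (a + b)

distance :: Quantity 'Metre
distance = Q 7

time :: Quantity 'Second
time = Q 12

-- addQ distance time   -- rejected by the type checker
```

Multiplication, which should *combine* units (metres times metres giving square metres), is exactly what this naive encoding cannot express, and is where Kennedy's richer system, and the matrix-shaped extension the talk reports on, come in.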
However, a lot of numerical software is written manipulating not single numbers, but vectors and matrices, using the methods of linear algebra. I will report on work in progress with Conor on extending dimensional analysis to multiple dimensions in this second sense.
Abstract
This talk is intended to be introductory, building on Radu's earlier intro to Stone duality. In my presentation I will show how to use duality for proving completeness of basic (propositional) modal logic. To this aim I will first recall what completeness means and then discuss two different kinds of semantics for modal logic: its algebraic semantics, wrt which completeness of the logic is relatively straightforward, and Kripke semantics, which is the semantics one usually is interested in. Stone duality allows us to establish a connection between the two types of semantics, from which completeness of modal logic wrt Kripke semantics follows.
Abstract
There are several principles that are taken to be characteristic of non-constructive reasoning. These include the principles of excluded middle, double negation elimination, and choice.
On the other side, there are several formal systems that are well known to capture only constructive reasoning. These include dependent type theory (à la Martin-Löf) and linear logic.
However, each of the systems I just listed is able to derive at least one theorem that looks an awful lot like one of the non-constructive principles I listed earlier. What's going on here?
Abstract
The Dialectica Categories were introduced in de Paiva's eponymous work as an internalized version of Gödel's functional interpretation. The interpretation translates Heyting Arithmetic (HA) into System T (intended as an axiomatization of primitive recursive functionals of finite type) and was originally developed as a tool to prove the relative consistency of HA. Translating the contraction rule poses certain problems and as a solution, Gödel requires decidability of atomic formulae. Several variants exist that lift this restriction.
In this talk, I will present the categorical constructions from de Paiva's paper. We will investigate their structure and see how the different versions of the interpretation are accommodated in this setting. Finally, we will conclude that in the process we have obtained a model of Intuitionistic Linear Logic.
Abstract
Neural networks have become an increasingly popular tool for solving many real-world problems. They are a general framework for differentiable optimization which includes many other machine learning approaches as special cases. However, at the moment there is no comprehensive mathematical account of their behavior. I'm exploring the hypothesis that the language of category theory could be well suited to describe these systems in a precise manner. I will give a short tour of recent developments in this area, mostly based around the notion of lenses.
Abstract
We define Aumann algebras, an algebraic analog of probabilistic modal logic. An Aumann algebra consists of a Boolean algebra with operators modeling probabilistic transitions. We prove that countable Aumann algebras and countably-generated continuous-space Markov processes are dual in the sense of Stone. Our results subsume existing results on completeness of probabilistic modal logics for Markov processes.
This summarizes results obtained in collaboration with Dexter Kozen and Prakash Panangaden.
Abstract
I will talk about a system that combines Dependent Types and Linear Types. As an application of this system, I will show how to transport Martin Hofmann's LFPL and Amortised Resource analysis systems for resource constrained computing to full dependent types. This results in a theory where unconstrained computations are allowed at the type level, but only polynomial time computations at the term level. The combined system now allows one to explore the world of propositions whose proofs are not only constructive, but also of restricted complexity.
Abstract
Epistemic Logic is the modal logic of knowledge. It allows one to reason about a finite set of agents who can know facts about the world, and about what the other agents know. The traditional way to interpret epistemic logic formulas is by using Kripke models: that is, graphs whose vertices represent the possible worlds, and whose edges indicate the agents that cannot distinguish between two worlds. I will present an alternative kind of model for epistemic logic based on chromatic simplicial complexes. Simplicial models are equivalent to Kripke models; thus, this connection uncovers the higher-dimensional geometric nature of knowledge. Finally, I will show how to adapt these geometric models in order to interpret other epistemic notions, such as belief, distributed knowledge, and more.
Abstract
Quite a few colleagues in the MSP group have experienced both the pleasure and the pain of working with my Marx system in the course of delivering our classes. (I never meant to write a virtual learning environment, but somehow, I sort of did.) I propose to re-engineer it from scratch and do a rather better job (suitable for wider deployment) deliberately, and am keen to solicit assistance in this mission, lest it be yet another engineering project distinguished by my madness and dissolution that you are, even so, obliged to put up with. Inevitably, I will struggle and rapidly fail to exclude interesting computer science from the design of the system. There will be fun.
Mary will be a content management system with pages written in Markdown and stored in git repositories on the department's GitLab server. However, every such page is also a form, supporting interactive content. Pages will therefore need to embed code for processing the data in the form, certainly on the server, but preferably (unlike in Marx) on the client. So Mary will embed a programming language that we might have fun designing and implementing. I propose to base this language on Frank, an effects and handlers language that I cooked up a while back. Access to form fields can be seen as an effect: by remembering the resumption for each such access, we can model what to recompute in the client when fields change, after the fashion of spreadsheets, programmed in apparently direct style.
Mary will also need to maintain a database to achieve cross-session persistence of student work and staff configuration data. The Marx approach to analytics over this database amounted to "grep". In the meantime, however, Fred and I spent quite a while thinking about how to give an account of data models and analytics at a higher level of abstraction using carefully undermarketed ideas from dependent type theory. We should consider how to adapt these ideas to manage our own data.
Let's get excited and make things!
(Comrades who are not Strathclyders but who are interested in effects, types, or just having better tools to survive this business we call higher education, should very much feel invited to engage. I will happily export this project.)
Abstract
Stuart and I have been working on interpreters for predicative calculi which are "potato-powered" in that they work by structural recursion on inductive data which may look suspiciously like types, but are not statically checked to be the actual types of the programs in question. This works because the calculi are presented in a bidirectional style which, by design, causes every redex to carry some "potatoes", which hopefully contain enough inductive carbohydrate to keep you going for the whole of the computation to be done.
Naturally, it would be lovely if we could prove that the interpreter's output is genuinely a reduct of its input, and that well typed input yields normal form output (i.e., that an honest type gives the potatoes required to do the whole computation). How is it done? Potato-powered logical relations, of course! I'll give a crash course on cooking programs and proofs with potatoes.
Abstract
This talk is based on the paper with the same title that I have presented at LICS 2018. It is joint work with Dana Scott, Dexter Kozen, Prakash Panangaden, Robert Furber and Giorgio Bacci. We extend Dana's Boolean-valued set theory (introduced in the 1950s to demonstrate the independence of the continuum hypothesis) to get, initially, a denotational model of untyped lambda calculus, and eventually extend it to a model of stochastic lambda calculus. The model is constructed over a space of random variables, which inherits a natural continuous Boolean algebra structure.
Abstract
At this year's ICFP, Mokhov, Lukyanov, Marlow, and Dimino introduced "Selective Applicative Functors", a programming interface that is an intermediate stage between monads and applicative functors. I'll motivate what they're for, and describe what they are, and I'll talk about a more abstract way of thinking about them in terms of "right-skew monoidal categories".
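To give a flavour, here is the `Selective` interface with a standard `Maybe` instance in Haskell (the class follows the paper's shape; `branch` is one of its derived combinators):

```haskell
-- Between Applicative and Monad: 'select' applies the function only when
-- given a Left, but unlike monadic bind, both effects are statically visible.
class Applicative f => Selective f where
  select :: f (Either a b) -> f (a -> b) -> f b

-- Any monad is selective; this is the standard instance for Maybe.
instance Selective Maybe where
  select mx mf = mx >>= either (\a -> fmap ($ a) mf) pure

-- A derived combinator: choose between two continuations.
branch :: Selective f => f (Either a b) -> f (a -> c) -> f (b -> c) -> f c
branch x l r =
  (fmap (fmap Left) x `select` fmap (fmap Right) l) `select` r
```

The point of the interface is that, unlike with `>>=`, a static analysis can see *both* possible effects of a `select` before running anything, while still skipping the unneeded one at run time.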
Abstract
The classical Hennessy-Milner theorem says that two states of an image-finite transition system are bisimilar if and only if they satisfy the same formulas of a certain modal logic. I will place this type of result in a general context, moving from transition systems to coalgebras and from bisimilarity to coinductive predicates. Furthermore I will present a sufficient condition for a coalgebraic logic to be expressive, i.e., to fully characterise a coinductive predicate on coalgebras. Our approach will be illustrated with logics characterising similarity, divergence and a behavioural metric on automata.
This is joint work with Jurriaan Rot to be presented at CSL 2020.
Abstract
I try to understand the ways of linear algebraists from a programming languages perspective. In particular, I investigate the practice of specifying a linear map by giving only its action on basis elements. We can turn this practice into a nice programming language for linear maps. Furthermore, our reasoning about these maps is significantly simplified by only considering the basis elements. Finally, keeping track of basis elements amounts to a typing discipline, and we can expect to get some nice properties just by observation of the types.
Meanwhile, I will try to elucidate the similarity between, on one side, categories of spaces and linear maps and, on the other, Rel, the category of sets and relations. This forms an important part of our programming languages view.
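A sketch of that practice in Haskell (my own toy encoding): vectors are finitely supported maps from basis elements to scalars, and `linear` extends an action on basis elements to a linear map.

```haskell
import qualified Data.Map.Strict as Map

-- A vector over basis b: a finitely supported map from basis elements
-- to scalars.
type Vec b = Map.Map b Rational

-- Extend an action on basis elements, b -> Vec c, to a linear map.
linear :: Ord c => (b -> Vec c) -> Vec b -> Vec c
linear f v =
  Map.filter (/= 0) $
    Map.unionsWith (+) [ Map.map (s *) (f b) | (b, s) <- Map.toList v ]

-- Example: the "swap" map on a two-element basis, specified, as the
-- linear algebraists do, only on basis elements.
data B = E1 | E2 deriving (Eq, Ord, Show)

swapB :: B -> Vec B
swapB E1 = Map.singleton E2 1
swapB E2 = Map.singleton E1 1
```

Composing such basis actions is Kleisli composition for a free-vector-space monad; swapping the scalars for Booleans (with + and * becoming "or" and "and") turns the same code into composition of relations, which is one way to see the similarity with Rel.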
Abstract
I will give an introduction to the geometric semantics for concurrent programs developed by Goubault, Fajstrup, Raussen et al. since the 1990s. A concurrent program can be viewed as a topological space equipped with a notion of direction, modeling the passage of time. Thus, in such a space, the existence of a path from A to B does not guarantee the existence of a path from B to A. A path in a directed space corresponds to an execution of the program; and two such executions are equivalent when the corresponding paths are homotopic, that is, when they can be deformed continuously into one another. This idea motivated the development of directed algebraic topology, the analogue of algebraic topology for directed spaces.
Abstract
It should be noted that this is ongoing work. The talk that you will see is going to be very unpolished; this is stuff that I have been working on for the past while, and the material will not be fully formed yet. The things I say may not end up matching with reality. So with that minor caveat, let us continue.
A Hopf algebra is a monoid, a comonoid, and an extra endomorphism such that various commutation rules are obeyed, and Hopf monads are a strange generalisation of Hopf algebras in the direction of monads. I have previously talked about when a Hopf algebra is a Hopf-Frobenius algebra, and my goal now is to generalise this to the case of Hopf Monads. However, to keep things interesting, I am doing this in a weird way.
Let T be a Hopf monad in a monoidal category C. T is isomorphic to a Hopf algebra if there exists a natural transformation e: T -> 1 that respects the Hopf monad structure. There are many examples of Hopf monads which are not isomorphic to any Hopf algebras, so this morphism does not exist in C in general. However, what happens if I formally add this morphism, creating a new category C_e? In this category, T would presumably be isomorphic to a Hopf algebra, which raises the question: what can T in C_e tell us about T in C? By looking at the natural functor C -> C_e, can I use this to transfer my theorem about Hopf algebras to Hopf monads? I certainly believe so, but let's find out together!
Abstract
Ordinals and ordinal notation systems play an important role in program verification, since they can be used to prove termination of programs. We present three ordinal notation systems representing ordinals below epsilon_0 in type theory, using recent type-theoretical innovations such as mutual inductive-inductive definitions and higher inductive types. As case studies, we show how basic ordinal arithmetic can be developed for these systems, and how they admit a transfinite induction principle. We prove that all three notation systems are equivalent, so that we can transport results between them using the univalence principle. All our constructions have been implemented in cubical Agda.
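For a feel of what such a notation system looks like, here is an untyped Haskell shadow of the simplest kind of presentation (Cantor normal forms as lists of exponents; unlike the type-theoretic versions in the talk, the descending-exponents invariant is left unchecked here):

```haskell
-- An ordinal below epsilon_0 in Cantor normal form:
-- Omega [a1, .., an] represents omega^a1 + ... + omega^an,
-- where a1 >= a2 >= ... (an invariant this sketch does not enforce).
data Ord0 = Omega [Ord0] deriving (Eq, Show)

zeroO :: Ord0
zeroO = Omega []            -- the empty sum is 0

wpow :: Ord0 -> Ord0
wpow a = Omega [a]          -- omega^a

-- Ordering: lexicographic on the descending exponent lists.
leq :: Ord0 -> Ord0 -> Bool
leq (Omega xs) (Omega ys) = go xs ys
  where
    go [] _  = True
    go _  [] = False
    go (a : as) (b : bs)
      | a == b    = go as bs
      | otherwise = leq a b && a /= b

-- Addition: terms of the left summand below the right's leading
-- exponent are absorbed, so e.g. 1 + omega = omega but omega + 1 /= omega.
addO :: Ord0 -> Ord0 -> Ord0
addO x (Omega []) = x
addO (Omega xs) (Omega ys@(b : _)) =
  Omega ([ a | a <- xs, leq b a ] ++ ys)
```

The absorption in `addO` is where ordinal arithmetic departs from natural-number arithmetic, and proving such laws hold of the notation is part of the case studies mentioned above.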
Abstract
We develop a quantitative analogue of equational reasoning meant to provide metric semantics for stochastic/probabilistic/quantitative systems and programming languages. Quantitative algebras are algebras over metric spaces defined by these quantitative equational theories, and they implicitly define monads over the category of (extended) metric spaces. We have a few relevant examples of such algebras, where the induced free algebras correspond to well-known structures; among these are Hausdorff metrics from quantitative semilattices, p-Wasserstein metrics (hence also the Kantorovich metric) from barycentric algebras, the total variation metrics from a variant of barycentric algebras, and more.
The talk is based on a series of joint works with Prakash Panangaden and Gordon Plotkin. The main results have been presented at LICS'16, LICS'17 and LICS'18.
Abstract
I will present how we can formalise non-symmetric monoidal categories (i.e. string diagrams) in a combinatorial way. These kinds of diagrams will be represented by plane graphs with a distinguished boundary vertex. For encoding plane graphs it is sufficient to store the order of neighbours at each vertex, called a rotation system. I'll show how to define these sorts of graphs, how to compose them and — together with the right notion of rewriting — how they form an operad.
This is practice for a talk I will give at STRINGS in Birmingham next week.
Abstract
Given a deterministic finite automaton accepting L, another accepting the reverse language, and an integer n, does there exist a nondeterministic acceptor with n or fewer states? We've proved that this problem is in NP, i.e., that polynomial certificates exist. All previous algorithms constructing small nondeterministic automata for arbitrary regular languages are at best PSPACE-complete. We achieve this by explaining and substantially improving the Kameda-Weiner algorithm using coalgebraic methods. At the underlying level we use a categorical equivalence between finite join-semilattices and bipartitioned graphs.
Abstract
I've been hacking away on tools to do metatheory using "codebruijn" representation of syntax. The method, as with scoped de Bruijn representation, is to index terms by their scope, but the codebruijn method goes further, by systematically treating the support of terms, i.e., the particular subscope of variables which are actually relevant, paired with the thinning that embeds support into scope. The key codebruijn type constructor is "pair a thing with a thinning from some support", and it turns any old sort of thing into something which uniformly admits further thinning just by postcomposition, without affecting either the thing or its support.
Stuff-which-uniformly-admits-thinning is stuff with structure, so there is a forgetful functor back to plain old Stuff. The Free functor which adds uniform thinning, is its left adjoint. The codebruijn type constructor is the monad (Free;Stuff). The corresponding comonad, (Stuff;Free), is rather more familiar to semanticists as the abstraction over larger scopes found in Kripke models.
I've always been a bit shaky on adjunctions, presheaves and what have you, so this talk will not assume categorical confidence, but rather seek to build intuition, guided by a concrete example grounded in the practicalities of manipulating syntax.
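Here is a value-level Haskell sketch of the key gadget (the real development indexes terms and thinnings by scope, which this toy version forgets):

```haskell
-- A thinning embeds a support into a scope, read as a bit-list:
-- Keep = the next source variable is kept here,
-- Skip = the target scope has an extra variable the source skips.
data Th = Done | Keep Th | Skip Th deriving (Eq, Show)

-- Thinnings compose: th : sup -> mid, then ph : mid -> scope.
comp :: Th -> Th -> Th
comp th        (Skip ph) = Skip (comp th ph)
comp (Keep th) (Keep ph) = Keep (comp th ph)
comp (Skip th) (Keep ph) = Skip (comp th ph)
comp Done      Done      = Done
comp _         _         = error "source/target mismatch"

-- "Pair a thing with a thinning from its support": further thinning is
-- just post-composition, touching neither the thing nor its support.
data Thinned t = Thinned t Th deriving (Eq, Show)

thin :: Thinned t -> Th -> Thinned t
thin (Thinned t th) ph = Thinned t (comp th ph)
```

`thin` is the "uniformly admits further thinning" structure described above: the payload `t` and its support are never touched, only the embedding into an ever larger scope.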
Abstract
Isbell duality is an adjunction between a category of "generalised algebras" and a category of "generalised spaces", and is an abstraction of Stone duality. Broadly speaking, a Stone duality is an equivalence between some category of commutative algebras, and some category of spaces, for example: Boolean algebras and Stone spaces; or, commutative C*-algebras and Hausdorff spaces.
There is a Stone duality which applies in the setting of Newtonian mechanics, between the category of smooth manifolds and a category of commutative algebras, and this equivalence of categories admits an extremely elegant and intuitive interpretation from the perspective of physics.
I am going to discuss how to build an analogous interpretation for quantum theory using the machinery of Isbell duality, in a way that directly generalises Newtonian mechanics, with the ultimate aim of resolving some of the conceptual problems of quantum theory.
Abstract
Malin and James will practice giving their talks "Containers of Applications and Applications of Containers" and "Linear metatheory via linear algebra" respectively.
Abstract
We propose a new formalism for representing programming languages, based on a universal graph-rewriting abstract machine. The calculus itself only deals with the basic structural aspects of a programming language, namely variables, names (e.g. memory locations) and thunks (i.e. fragments with delayed execution). Everything else needs to be supplied as extrinsic operations, with associated rewrite rules. This basic calculus allows us to represent all known sequential programming language features in a uniform framework, reason abstractly about their cost, and also reason about challenging equivalences.
This is joint work with Koko Muroya and Todd Waugh Ambridge.
Abstract
In an attempt to con Agda into allowing alternation of least and greatest fixed points, I try to remember how to construct these things for indexed containers.
Abstract
I will explain how to write practical generic n-ary functions and combinators for n-ary relations.
Abstract
Futhark is a small programming language designed to be compiled to efficient parallel code. It is a statically typed, data-parallel, and purely functional array language, and comes with an optimising ahead-of-time compiler that generates GPU code via OpenCL and CUDA. Futhark is not designed for graphics programming, but instead uses the compute power of the GPU to accelerate data-parallel array computations ("GPGPU").
This talk presents the design of the Futhark language, as well as outlines several of the key compiler optimisations that have enabled performance comparable to hand-written GPU code. Through the use of a functional source language, we obtain strong invariants that simplify and empower the application of traditional compiler optimisation techniques. In particular, I will discuss (i) aggressively restructuring transformations such as fusion, (ii) loop interchange and distribution to extract flat parallel kernels from nested parallel source programs, (iii) multi-versioned code generation that exploits as much parallelism as necessary and efficiently sequentialises the rest, and (iv) data layout transformations to ensure coalesced memory access on GPUs.
Abstract
Bigraphs are a mathematical (hypergraph) model for representing the spatial and communication structures of networked entities [fn:1]. Bigraph Reactive Systems (BRS) build upon bigraphs by incorporating temporal changes to a model, as specified with reaction rules. There is a wealth of (category) theory that supports reasoning about bigraphs, and an emerging tool (BigraphER [fn:2]) for their modelling.
Bigraphs have an elegant algebraic structure that is general purpose and simply typed. However, it is common to restrict the shape of a bigraph with a system of sorts applied as side-conditions. The details, and application, of these sorts are presented apart from the bigraph definition itself. This makes the correctness of bigraph models harder to enforce as they are transformed within a BRS: we must resort to runtime checks to provide the relevant guarantees.
In this talk, I want to introduce you to the interesting world of bigraph specification, and my ongoing work on developing a dependent type system that embeds a system-of-sorts directly into the algebraic definition of bigraphs. This allows one to guarantee, by construction, that a given model is correct w.r.t. a given system-of-sorts.
[fn:1] Robin Milner. 2009. The Space and Motion of Communicating Agents. Cambridge University Press.
[fn:2] http://dcs.gla.ac.uk/~michele/bigrapher.html
Abstract
Bigraphical Reactive Systems (BRS) are a universal model of computation introduced by Robin Milner for the representation of interacting systems that evolve in both time and space. Bigraphs have been shown to form a symmetric partial monoidal category, and their dynamic theory is defined in terms of rewriting and transition. A limitation of bigraphs is that the underlying model of location is a forest, which means there is no straightforward representation of locations that can overlap or intersect. In this talk, I will present bigraphs with sharing, an extension of bigraphs which solves this problem by defining the model of location as a directed acyclic graph, thus allowing a natural representation of overlapping or intersecting locations. I will give a complete presentation of the extended theory, including a categorical semantics, algebraic properties, a normal form and several essential procedures for computation.
Abstract
The determinacy problem is to decide for queries v_{1},...,v_{n} and q whether for every database D knowing the answers to v_{1},...,v_{n} in D suffices to deduce the answer to q in D. It has been shown that this problem is undecidable if v_{1},...,v_{n} and q are arbitrary conjunctive queries.
In this talk I give a gentle introduction to some of the formal tools that are used to study the determinacy problem. In particular, I focus on the red-green chase, which is a neat construction that links the problem of determinacy with query answering relative to ontologies. Lastly, I might present some of our own, very modest, results about classes of conjunctive queries for which the determinacy problem is decidable.
Abstract
We extend the Open Games framework for compositional game theory to encompass also mixed strategies, making essential use of the discrete probability distribution monad on Set. We show how to compose the resulting probabilistic games in parallel and sequentially, and illustrate the framework on the well-known Matching Pennies game.
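As a toy illustration of the distribution-monad view (my own sketch, not the paper's open-games machinery; all names are assumptions), mixed strategies in Matching Pennies are finitely supported distributions, and under the uniform strategy the expected payoff is zero whatever the opponent does:

```python
def dist(**probs):
    # a finitely supported probability distribution
    # (an element of the discrete distribution monad on Set)
    assert abs(sum(probs.values()) - 1.0) < 1e-9
    return probs

def expected(u, d1, d2):
    # expected utility when the two players mix independently
    return sum(p1 * p2 * u(m1, m2)
               for m1, p1 in d1.items() for m2, p2 in d2.items())

def u1(m1, m2):
    # Matching Pennies: player 1 wins on a match, loses otherwise
    return 1 if m1 == m2 else -1

uniform = dist(H=0.5, T=0.5)
```

At the uniform/uniform profile neither player can improve by deviating unilaterally, which is the game's unique (mixed) Nash equilibrium.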
Abstract
Last time I talked, you heard about Interacting Frobenius algebras — this time, get ready for "Hopf-Frobenius algebras", as Ross likes to call them, or "Fropf algebras", as Ross doesn't like to call them. I am in the process of writing about them, so this is going to be quite similar to the last talk except we're coming at them from a new angle, and with new results.
Abstract
A key result in computational learning theory is Dana Angluin's L* algorithm that describes how to learn a deterministic finite automaton (DFA) using membership and equivalence queries. I will present a generalisation of this algorithm to the level of coalgebras. The approach relies on the use of logical formulas as tests, based on a dual adjunction between states and logical theories. This allows us to learn, e.g., labelled transition systems, using modal formulas as tests.
Joint work with Simone Barlocco and Jurriaan Rot.
Abstract
Proof-irrelevance can mean two similar but different things: on the one hand, irrelevant proofs can be discarded when extracting programs to execute (run-time erasability), and on the other hand, one might want to treat proofs as irrelevant during type checking (type-checking time erasability). I will give an overview of the subject, focusing on type-checking time erasability. I suspect that at least three people in the audience know more than I do, so my main function will be to keep things accessible. I plan to follow the recent paper "Degrees of Relatedness: A Unified Framework for Parametricity, Irrelevance, Ad Hoc Polymorphism, Intersections, Unions and Algebra in Dependent Type Theory" by Andreas Nuyts and Dominique Devriese (LICS 2018).
Abstract
I will talk about two different instances of indexed containers which — all together in the end — will be combined to construct an application that manages windows on a screen.
The first instance of containers is rectangular windows. Defining these in terms of finite indexed containers (finite in the number of positions where elements can be stored) lets us interpret indices as outer boundaries. I will show some constructions on these space partitioning structures, e.g. product operations and overlaying of windows (this will hopefully include some pretty pictures!).
Secondly, I will explain how we can encode applications as indexed containers (not necessarily finite, despite the title), defined in terms of commands and responses. To sum up, I will present the window-managing application where all the above things occur and get combined.
I am working on these topics together with Conor and we are currently collecting them in a paper.
Abstract
In this talk I'll introduce the syntax and semantics of a generic bidirectionally type-checked lambda calculus. I'll also show that the lumps of syntax that act as type annotations are in fact the fuel which powers structural evaluation.
Abstract
In the linear world, we are no longer allowed to freely discard or duplicate hypotheses and conclusions. This should mean that there are fewer possible strategies for proving a given proposition, making the task of proving things easier. However, we find that linear logic proof search is a surprisingly powerful model of computation. In this talk, I'll give an introduction to linear logic sequent calculus. I'll then go through a neat construction from [0] embedding quantified Boolean formulae into linear logic without bangs, and sketch the proof from the same paper that linear logic with bangs is Turing-complete. Some content from CS106 may come in handy.
[0]: Lincoln, Mitchell, Scedrov, Shankar, 1992, "Decision problems for propositional linear logic"
Abstract
I'll talk about Chu Spaces, a general construction of *-autonomous categories (a.k.a. models of classical linear logic). Chu spaces are interesting because they come with an inbuilt notion of duality, which has been interpreted as a kind of player/opponent duality.
Abstract
In previous talks, I've written down bidirectional versions of particular type systems, with type checking for introduction forms and type synthesis for elimination forms. But what are the metarules for writing down the rules? How do we show that following the metarules ensures good properties of the rules? I'll report some progress towards capturing the syntactic properties of "good" rules which might push us closer to the goal of metatheory by construction.
NB This talk has nothing to do with bidirectional transformations.
Abstract
The last few years have seen the development — largely in Canada and Australia — of an axiomatic approach to differential geometry based on tangent categories. Tangent categories incorporate the previous leading settings for differential geometry: finite dimensional manifolds, synthetic differential geometry, convenient manifolds, etc. In addition they widen the scope significantly as they also include combinatorial species, Goodwillie Functor calculi, and examples from computer science. The talk will give a survey of tangent categories and some of the developments so far.
Abstract
State-based transition systems are often studied relative to a specified initial state. System behaviour then only depends on those states that are "reachable" from the initial state. This has consequences for both the theory (e.g. by allowing one to prove non-definability results in modal logic) and the practice (by making seemingly large systems more manageable) of such systems.
Coalgebras provide a general model for transition systems. In this introductory talk I will discuss how to define the reachable part of a coalgebra via the notion of T-base for an endofunctor T from [1]. We will first discuss this notion and then provide a sufficient categorical condition for the existence of the T-base. We will then show how to characterise the reachable part of a coalgebra as the least fixpoint of a monotone operator.
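For the familiar special case of ordinary transition systems (a minimal sketch of mine, assuming a successor function `step`), the reachable part is exactly the least fixpoint of a monotone operator on subsets of states, computable by Kleene iteration from below:

```python
def reachable(initial, step):
    # least fixpoint of  S |-> {initial} ∪ ⋃{ step(s) | s in S },
    # reached by iterating from the bottom until nothing new appears
    states = {initial}
    while True:
        bigger = states | {t for s in states for t in step(s)}
        if bigger == states:
            return states
        states = bigger
```

For instance, in a system with transitions 0→1→2→0 and an unreachable component 3→4, iterating from 0 yields exactly {0, 1, 2}.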
This is based on joint work with Simone Barlocco and Jurriaan Rot.
========================
[1] Alwin Blok. Interaction, observation and denotation. Master's thesis, ILLC Amsterdam, 2012.
Abstract
What is a Frobenius algebra? What is a Hopf algebra? And why are they such good friends? In this talk, I will be talking about PROPs, what an interacting Frobenius algebra is, and some weird stuff that appears with them, and I shall be drawing lots of pretty pictures as well.
Abstract
Coalgebras provide an abstract framework to represent state-based transition systems. Modal logics provide a formal language to specify such systems. In our recently submitted work (joint work with Clemens Kupke and Jurriaan Rot) we devise a general algorithm to learn coalgebras. Modal formulas are used as tests to probe the behaviour of states.
In my introductory talk, I will discuss how to set up a general framework that connects coalgebras and their modal logics via a dual adjunction. Moreover, I will show a known result which guarantees that states that are indistinguishable with respect to modal formulas are behaviourally equivalent, a key fact entailing that — whenever possible — our algorithm learns a minimal representation of a coalgebra.
Abstract
Implicit Computational Complexity is the study of programming languages or logical systems that capture complexity classes. Roughly, every program that can be written in the language is in some complexity class. Many of the languages that have been proposed for capturing useful classes like PTIME are not much fun to program in. However, the work of the late Martin Hofmann includes languages like LFPL, which allows only polynomial-time computation but is also reasonably usable. I'll talk about LFPL, and how the proof of polynomial time bounds works.
Abstract
I will present a type theory whose judgements are indexed by a preorder of "worlds", representing for example nodes in a distributed computation, or a security level. This means a term may only typecheck at particular worlds, and will be mobile upwards along the preorder (for instance every low security value is also a high security one). To enable talking about the world structure without compromising mobility, the terms can talk about "shifts", which describe relative worlds.
I then give a semantic model based on the usual presheaf model for STLC where worlds form the base category, and shifts are endofunctors on the worlds. This semantics will show our programs are indeed oblivious to data they cannot "see". Examples will be given to demonstrate this framework in some concrete cases, and to motivate future work.
Abstract
In this talk, I will describe a hierarchy of program transformers in which the transformer at each level builds on top of the transformers at lower levels. The program transformer at the bottom of the hierarchy corresponds to positive supercompilation, and that at the next level corresponds to distillation. I will then try to characterise the improvements that can be made at each level, and show how the transformers can be used for program verification and theorem proving.
Abstract
By way of giving CS106 students better tools to tackle the concept of memory in circuits, I implemented a small programming language called (for reasons which are unlikely to become clear) Syrup. Syrup is suspiciously like a dialect of Haskell, except that the blessed monad allows bits of state. Marking homework done in Syrup necessitates checking whether two circuits have the same externally observable behaviour, which makes it a matter of bisimulation. I'll talk a bit about how much fun it was implementing the decision procedure to find either a bisimulation or a countermodel (for purposes of decent feedback).
Abstract
Near-term quantum computers have many limitations which make them difficult to use for stuff. I will outline some of the difficulties and handwave at some new ideas from compositional mathematics that might help us address these problems.
Abstract
Last week, I attended the ICFP conference and various associated workshops in St Louis. In this trip report, I will talk about selected talks and the people I met there. If time allows, I may also cover the adventures of Ioan, one of our summer interns and current undergraduate.
Abstract
I'll do a 101 today, filed under "stuff everybody should know" about first-order unification (the algorithm underlying Hindley-Milner type inference, Prolog, etc). But then I'll throw in the twist of considering syntax with binding. The way I cook it, this makes essential use of the structure of the category of thinnings.
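For reference, here is a minimal sketch of plain first-order unification, without the binding twist (my own illustrative code: variables are strings, applications are `(symbol, args)` pairs, and all names are assumptions):

```python
def is_var(t):
    return isinstance(t, str)

def walk(t, subst):
    # chase variable bindings to a representative
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    # occurs check: does variable v appear in term t (under subst)?
    t = walk(t, subst)
    if is_var(t):
        return t == v
    return any(occurs(v, a, subst) for a in t[1])

def unify(s, t, subst=None):
    # return a most general unifier extending subst, or None on failure
    subst = dict(subst or {})
    stack = [(s, t)]
    while stack:
        a, b = stack.pop()
        a, b = walk(a, subst), walk(b, subst)
        if a == b:
            continue
        if is_var(a):
            if occurs(a, b, subst):
                return None  # occurs-check failure
            subst[a] = b
        elif is_var(b):
            stack.append((b, a))
        elif a[0] == b[0] and len(a[1]) == len(b[1]):
            stack.extend(zip(a[1], b[1]))
        else:
            return None  # symbol clash
    return subst
```

For example, unifying f(X, Y) with f(a, X) binds both X and Y to the constant a, while unifying X with f(X) fails the occurs check.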
Abstract
The minimalist tradition in type systems makes for easy mathematics, but often leaves their mechanisms needlessly obscured.
I build a structure for Hindley-Milner checking problems in the tradition of Type Inference in Context. This structure is derived from typing rules in the style of my first talk and mirrors data structures used for elaboration problems in dependent type systems — offering a notation that can be used among designers and implementors of type systems and even in explaining their behaviour to users.
Abstract
The minimalist tradition in type systems makes for easy mathematics, but often leaves their mechanisms needlessly obscured.
One possible remedy is to track the behaviour of information in a system — its creation, its destruction and how it flows between constraints and source locations. I illustrate this with the Simply-Typed Lambda Calculus.
Abstract
It is a well-known fact (used e.g. in model checking) that, on finitely branching transition systems, finite trace equivalence coincides with infinite trace equivalence. I will show how to prove this coinductively, which is arguably nicer than the standard inductive proof.
Abstract
The paper Type-and-Scope Safe Programs and Their Proofs abstracts the common type-and-scope safe structure from computations on lambda-terms that deliver, e.g., renaming, substitution, evaluation, CPS-transformation, and printing with a name supply. By exposing this structure, we can prove generic simulation and fusion lemmas relating operations built this way. In this talk I will present this approach but in the simpler setting of Hutton's Razor. This reduces the mathematical structures involved from relative structures to their ordinary counterparts.
Abstract
This is the second half of Helle's talk on a (co)algebraic treatment of Markov Decision Processes. It focuses on a coinductive explanation of policy improvement using a new proof principle, based on Banach's Fixpoint Theorem, that we call contraction coinduction.
Abstract
Before getting lost in the realms of higher dimensions we should see whether we can interpret set-level HoTT. We know how to deal with functional extensionality and a static universe of propositions (see Observational Type Theory) but what about a dynamic universe of propositions, i.e. one reflecting HProps that also validates propositional extensionality. I will discuss the problems modelling this and a possible solution using globular setoids.
The dynamic prop corresponds to a subobject classifier in a topos (in particular we get unique choice), while the static universe corresponds to a quasitopos, I am told.
Abstract
In this talk, we study Markov decision processes (MDPs) with the discounted sums criterion from the perspective of coalgebra and algebra. Probabilistic systems, similar to MDPs but without rewards, have been extensively studied, also coalgebraically, from the perspective of program semantics. Here, we focus on the role of MDPs as models in optimal planning, where the reward structure is central. Our main contributions are: (i) a coinductive explanation of policy improvement using a new proof principle, based on Banach's Fixpoint Theorem, that we call contraction coinduction, and (ii) showing that the long-term value function of a policy can be obtained via a generalized notion of corecursive algebra, which takes boundedness into account.
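A concrete instance of the Banach-fixpoint argument (my own minimal sketch, not the paper's coalgebraic formulation; all names are assumptions): policy evaluation iterates the Bellman operator V ↦ R + γPV, which is a γ-contraction in the sup norm, so iteration converges to the unique long-term value function of the policy:

```python
def evaluate_policy(P, R, gamma, iters=500):
    # P[s] maps successor states to probabilities under the fixed policy,
    # R[s] is the immediate reward in state s, 0 <= gamma < 1.
    # The operator V |-> R + gamma*P*V is a gamma-contraction (Banach),
    # so iterating it converges to the unique fixed point.
    V = [0.0] * len(R)
    for _ in range(iters):
        V = [R[s] + gamma * sum(p * V[t] for t, p in P[s].items())
             for s in range(len(R))]
    return V
```

Sanity check: a single self-looping state with reward 1 and γ = 0.9 has discounted value 1/(1 − 0.9) = 10.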
This is joint work with Frank Feys (Delft) and Larry Moss (Indiana).
Abstract
Though Multi-Dimensional Arrays (MDAs) seem conceptually straightforward, it's not easy to come up with a mathematical theory of arrays that can be used within optimising compilers. We'd like to treat arrays as functions from indices to values with some domain restrictions. It is desirable that these domain restrictions are specified in a compact form, and are equipped with closed algebraic operations like intersection, union, etc. We are going to consider a few typical models like hyperrectangles, grids and polyhedra.
When typing array operations, ideally we'd like to find a balance between tracking all the shapes of arrays and allowing for generic array operations. This proves to be tricky, for reasons we'll explain.
We will propose, tentatively, an analysis of MDAs in terms of container functors. The aim is to supply concepts helpful when thinking about MDAs, e.g. when designing notations for coding with arrays. Some intriguing gadgetry shows up.
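A minimal sketch (mine, with assumed names) of the "array as a function from a restricted index domain" view: the shape picks out a hyperrectangular domain of positions, values live at positions, and rank-generic operations like map and transpose act on positions without ever inspecting values:

```python
class Array:
    """An MDA as a total function on a hyperrectangular index domain."""
    def __init__(self, shape, f):
        self.shape, self.f = shape, f

    def __getitem__(self, ix):
        # the domain restriction: indices must lie within the shape
        assert all(0 <= i < n for i, n in zip(ix, self.shape))
        return self.f(ix)

    def map(self, g):
        # shape-preserving, rank-generic: acts on values only
        return Array(self.shape, lambda ix: g(self.f(ix)))

    def transpose(self):
        # acts on positions only: reverse the index order
        return Array(self.shape[::-1], lambda ix: self.f(ix[::-1]))
```

Here an array is never materialised; composition of operations composes the index functions, which is one reason this view is attractive inside an optimising compiler.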
Abstract
A proof interpretation translates proofs of one logical system into proofs of another (example: the double-negation interpretation of classical logic into constructive logic). This often reveals some information about the original system (e.g. classical logic is equiconsistent with constructive logic). Gödel's Dialectica interpretation (named after the journal it was published in) translates Heyting arithmetic (the constructive theory of the natural numbers, including induction) into System T (the quantifier-free theory of the simply typed lambda calculus with natural numbers) — quantifier complexity is traded for higher type complexity. Combining this translation with (a refined) double negation translation, one can extract System T programs from a proof of a forall-exists statement, even if this proof is using non-constructive principles such as Markov's Principle, Excluded Middle, or the Quantifier-Free Axiom of Choice. I've always found the Dialectica translation mystifying, so I'll try to explain the intuition behind it.
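For orientation (my own recap of the standard presentation, not part of the talk's abstract): each formula $A$ is assigned a translation $A^D = \exists x\, \forall y\; A_D(x, y)$ with $A_D$ quantifier-free, and the characteristic clause is the one for implication, where quantifier complexity is traded for the higher-type witnesses $f$ and $F$:

```latex
(A \to B)^D \;=\; \exists f, F\; \forall x, v\;
  \bigl( A_D(x,\; F\,x\,v) \;\to\; B_D(f\,x,\; v) \bigr)
```

Intuitively, $f$ turns a witness for $A$ into a witness for $B$, while $F$ turns a challenge to $B$ back into a challenge to $A$.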
Abstract
Higher inductive types (HITs) in Homotopy Type Theory (HoTT) allow the definition of datatypes which have constructors for equalities over the defined type. HITs generalise quotient types and allow us to define types which are not sets in the sense of HoTT (i.e. do not satisfy uniqueness of equality proofs) such as spheres, suspensions and the torus. However, there are also interesting uses of HITs to define sets, such as the Cauchy reals, the partiality monad, and the internal, total syntax of type theory. In each of these examples we define several types that depend on each other mutually, i.e. they are inductive-inductive definitions. We call those HITs quotient inductive-inductive types (QIITs).
Although there has been recent progress on the general theory of HITs, there is not yet a theoretical foundation for the combination of equality constructors and induction-induction, despite its many interesting applications. In this paper we take a first step towards a semantic definition of QIITs. In particular, we give an initial-algebra semantics and show that it is equivalent to the section induction principle, which justifies the intuitively expected elimination rules.
Abstract
Let us revisit the definition of a category and define it in a way which has the advantage that we can generalize it to higher dimensions. Why am I interested in higher categories (or specifically (\infty,1)-categories)? I have a few problems in Homotopy Type Theory which I think can be solved using these beasts: the coherence problem for type theory in type theory (at the moment I cannot even define the standard model) and generalizing the Hungarian approach to Quotient Inductive Inductive Types (QIITs) to Higher Inductive Inductive Types (HIITs).
Abstract
The set-theoretic foundations of nominal techniques are usually taken to be Fraenkel-Mostowski set theory (which is ZFA + a finite support property). I will argue that in many situations, a new foundation which I call Equivariant ZFA (EZFA) may be a better choice, because you can do everything in EZFA that you can do in FM and furthermore EZFA with Choice (EZFAC) is consistent whereas FM with Choice is not.
Abstract
I will give an introduction to Domain Theory, focusing on motivation. I hope to cover recursive definitions, and solving domain equations using retraction pairs.
Abstract
Many program properties are relational, comparing the behavior of a program (or even two different programs) on two different inputs. While researchers have developed various techniques for verifying such properties for standard, deterministic programs, relational properties for probabilistic programs have been more challenging. In this talk, I will survey recent developments targeting a range of probabilistic relational properties, with motivations from privacy, cryptography, and machine learning. The key idea is to meld relational program logics with an idea from probability theory, called a probabilistic coupling. The logics allow a highly compositional and surprisingly general style of analysis, supporting clean proofs for a broad array of probabilistic relational properties.
Abstract
In preparation for Fred's talk about "Quotient Inductive-Inductive Types" next week I will introduce quotients and setoids in type theory and some of the issues surrounding them. The use of setoids is discouraged by many doctors and can lead to a contagious and incurable condition: relation preservation. Quotients on the other hand are dangerous if not correctly handled and can lead to unsightly things appearing where they shouldn't such as inhabitants of the excluded middle.
Abstract
Quantum mechanics is dope, so it makes sense that making a computer using the principles of quantum mechanics would also be pretty sick. However, the formalism that is used by physicists, called a Hilbert space, is not specialised for this purpose. In particular, it is
1) Difficult to prove properties about programs for quantum computers
2) Difficult to see what these programs are actually doing
Thankfully, category theory is very cool! Using ZX calculus, we can talk about quantum computing in a much clearer manner. I will be introducing some fundamental quantum mechanics and ZX calculus, and then using ZX calculus I will talk about Shor's algorithm.
Abstract
The protocols that describe the interactions between IP Cores on System-on-a-Chip architectures are well documented. These protocols describe not only the structural properties of the physical interfaces but also the behaviour of the emanating signals. However, there is a disconnect between the design of SoC architectures, their formal description, and the verification of their implementation in known hardware description languages.
Within the Border Patrol project we are investigating how to capture and reason about the structural and behavioural properties of SoC architectures using state-of-the-art advances in programming language research. Namely, we are investigating using dependent types and session types for the description of hardware communication.
In this talk I will discuss my work in designing a linked family of languages that captures and reasons about the topological structure of a System-on-a-Chip. These languages provide correct-by-construction guarantees over the interaction protocols themselves; the adherence of a component that connects using said protocols; and the validity of the specified connections. These guarantees are provided through abuse of indexed monads to reason about resource usage; and general (ab)use of dependent types as presented in Idris.
I will not cover all aspects of the languages but will concentrate my talk detailing the underlying theories that facilitate the correct-by-construction guarantees.
Abstract
Adjoint functors arise everywhere, but how do we find them? It is a fun exercise to prove that right adjoints preserve limits, and, dually, that left adjoints preserve colimits. An adjoint functor theorem is a statement that (under certain conditions) the converse holds: a functor which preserves limits is a right adjoint. I will discuss the General Adjoint Functor Theorem, and why Peter Johnstone considers it fundamentally useless.
Abstract
The coinductive proof method can be enhanced by several techniques, often referred to as up-to-techniques. I will talk about the basic theory of coinduction-up-to, and a little about the more recent notion of companion. This companion is the largest valid up-to technique for a given predicate, and gives a nice way of working with coinduction up-to.
Abstract
One monad to the tune of another.
Abstract
I will talk about three methods for measuring the size of formulas in the modal mu-calculus and explore how the choice between them influences the complexity of computations on formulas. In particular, I focus on the guarded transformation, which is a simple syntactic transformation on formulas that is commonly assumed to be polynomial but has recently been shown to require exponential time.
I will complain about the mess in the literature and present two of our (Clemens, Yde and me) own preliminary results:
1) There is a polynomial guarded transformation if we measure the input formula in the number of its subformulas and measure the output formula in the size of its closure.
2) If there is a polynomial guarded transformation where we measure the input formula in the size of its closure then there is a polynomial algorithm for solving parity games. Hence finding such a transformation is at least as hard as solving parity games, which is commonly believed to be quite hard.
We employ an automata-theoretic approach that relates the different measures for the size of a formula to different constraints on the transition structure of an automaton corresponding to the formula.
This is a very technical talk but there will be many pictures!
Abstract
Per Martin-Löf's 1971 Theory of Types is the ancestor of the type systems used today in Agda, Coq, Idris, NuPRL, and many other variations on the theme of dependent types. Its principal virtue is its simplicity: it has very few moving parts (but they move quite a lot). Its well known principal vice is its inconsistency: you can write a looping program inhabiting any type (thus 'proving' any proposition). I'll be talking about the design principles for constructing dependent type systems which are bidirectional — clearly split into a type checking part and a type synthesis part. By following these principles, it gets easier to establish good safety properties of these systems. In particular, I'll sketch how to keep type safety ("well typed programs don't go wrong") separate from normalization ("all computations terminate"). Martin-Löf's 1971, reformulated bidirectionally, makes a good example, because it's small and type-safe, but not normalizing.
Abstract
Building on last week's introduction to Martin-Löf 1971, we describe a toy type theory which not only accounts for what type terms have, but also where they live. This extra information can be interpreted as where (physically) data lives, at what phase (typechecking vs runtime) it exists, when it exists, or who has access to the data.
In return for caring about these "worlds" describing where data lives, we get applications to distributed computing and erasure for efficient code generation, with future work to consider productivity and security.
Abstract
In ghci, you can write something like
Abstract
Automata learning is a well-known technique to infer a finite state machine from a set of observations. One important algorithm for automata learning is the L* algorithm by Dana Angluin. In this 101 I will present a new perspective on L* using ideas from coalgebra and modal logic. After a brief recap of how L* works, I will describe a generalisation of the L* algorithm to the coalgebraic level. I will conclude my talk with two concrete instances of the general framework: the learning of Mealy machines and of bisimulation quotients of probabilistic transition systems. Joint work with Clemens Kupke.
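By way of a concrete recap, here is a minimal L*-style learner (my own sketch, not the talk's: a simplified variant that only closes the observation table and grows the column set with counterexample suffixes, with an approximate equivalence oracle that just compares the hypothesis against the teacher on short words):

```python
from itertools import product

ALPHABET = "ab"
target = lambda w: w.count("a") % 2 == 0  # assumed teacher: even number of a's

def lstar(member, max_len=6):
    S, E = [""], [""]          # rows (prefixes) and columns (suffixes)
    row = lambda s: tuple(member(s + e) for e in E)
    while True:
        # close the table: every one-letter extension's row must occur in S
        rows = {row(s) for s in S}
        defect = next((s + a for s in S for a in ALPHABET
                       if row(s + a) not in rows), None)
        if defect is not None:
            S.append(defect)
            continue
        # hypothesis DFA: states are the distinct rows, via representatives
        reps = {row(s): s for s in S}
        def accepts(w):
            state = ""
            for a in w:
                state = reps[row(state + a)]
            return member(state)  # acceptance = membership of representative
        # approximate equivalence query: compare on all words up to max_len
        cex = next(("".join(w)
                    for n in range(max_len + 1)
                    for w in product(ALPHABET, repeat=n)
                    if accepts("".join(w)) != member("".join(w))), None)
        if cex is None:
            return accepts
        # refine the table with all suffixes of the counterexample
        E.extend(cex[i:] for i in range(len(cex)) if cex[i:] not in E)
```

Run on the even-number-of-a's language, the learner converges to the two-state hypothesis and agrees with the teacher on all tested words.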
Abstract
I'll talk about a way of measuring the sizes of trees using weighted tree automata, in a compositional way that works well with pattern matching. This is based on some work by Georg Moser and Martin Hofmann.
Abstract
I'll explain the first concepts of Deep Learning. This is an advert for a reading group on the topic which will run over the next few weeks.
Abstract
My talk will be based on our recent paper "A Compositional Treatment of Iterated Open Games". In this paper we introduce a new operator on open games to capture the concept of subgame perfect equilibrium as well as providing final coalgebra semantics of infinite games.
Abstract
In this talk, I will present a simple yet powerful principle for coinductive reasoning, which we call "parameterized coinduction". More specifically, it is as simple as the Knaster-Tarski theorem without requiring any syntactic checking, yet as powerful as Coq's syntactic guarded coinduction supporting incremental reasoning. As an important consequence, parameterized coinduction can easily support complex nested induction-coinduction.
We have also implemented parameterized coinduction as a Coq library called "paco", which can be found at:
http://plv.mpi-sws.org/paco
This is joint work with Georg Neis, Derek Dreyer and Viktor Vafeiadis, and was presented at POPL'13.
Abstract
Cubical Type Theory (CTT) provides an extension of Martin-Löf Type Theory (MLTT) in which we can interpret the univalence axiom while preserving the canonicity property, i.e. every closed term actually computes to a value. The typing and equality rules of CTT come as a fairly well-behaved extension of those of MLTT, and the denotational model and prototype implementation help to clarify the system further.
Given the above it felt reasonable to introduce the features of CTT into a more mature proof assistant like Agda, and this talk reports the status of this endeavour. In short:
Abstract
In the categorical semantics of (e.g.) the simply typed lambda calculus substitution of a variable by a term is achieved by composing morphisms. What is the equivalent notion in diagrammatic languages? What even is a "variable" in this context? I'll sketch some (pretty) rough ideas for the beginnings of a “functional language” of diagrams including substitution, binding, and pattern matching. It turns out to all be about operads and co-operads.
Abstract
Enumeration of graphs on surfaces (or "maps") is an active topic of research in combinatorics, with links to wide-ranging domains such as algebraic geometry, knot theory, and mathematical physics. In the last few years, it has also been found that map enumeration is related to the combinatorics of lambda calculus, with various well-known families of maps in 1-to-1 correspondence with various natural families of linear lambda terms. In the talk I will begin by giving a brief survey of these enumerative connections, then use those to motivate a closer look at the surprisingly rich topological and algebraic properties of linear lambda calculus.
Abstract
This is an informal talk on the interesting properties I've found when playing with the unknotting problem (knot simplification moves that help to establish whether any given knot is a loop in complicated disguise, or something really knotted).
I'll discuss the syntax that I've used for annotating knots that leads to a(n almost) syntax based method for unknotting, but that hints further at unknotting in a more interesting way by using an unintentional property of the syntax.
I'll present some examples of the problems with representing knots and how the syntax and reduction rules help, in my opinion, to make unknotting more tangible.
Abstract
At last year's WadlerFest celebration, Conor presented a dependent type theory where variables are tagged with information about how they are used. Variable usage tagging has been developed in the non-dependent setting, starting with Girard's Linear Logic and culminating in recent work on contextual effects, coeffects, and quantitative type theories. The subtlety with dependent types lies in how to account for the difference between usage in types and in terms. Conor's system handles this by treating usage in types as a "zero" usage, so that it doesn't affect the usage in terms. This is a departure from previous "linear" type theories, which maintain a strict separation between usage-controlled information, which types cannot depend on, and unrestricted information, which types can depend on.
Conor presented a syntax and typing rules for the system, as well as an erasure property that exploits the difference between "not used" and "used", but doesn't do anything with the finer usage information available. I'll present a collection of models for the system that fully exploit the usage information. This will give interpretations of type theory in resource constrained computational models, Geometry of Interaction models, and imperative models. To maintain order, I will gather all these notions of model under a new concept of "Quantitative Category with Families", a generalisation of the standard "Category with Families" class of models of dependent types.
Abstract
I'll tell you about the most interesting talks, ideas and gossip that came out of the TYPES conference in Budapest last week.
Abstract
Social, biological and economic networks grow and decline with recurrent fragmentation and re-formation, often explained in terms of external perturbations. I will present a model of dynamical networks and evolutionary game theory that explains these phenomena as a consequence of imitation and endogenous conflicts between "cooperators" and "cheaters". Cooperators promote well-connected prosperous (but fragile) networks and cheaters cause the network to fragment and lose its prosperity. Once the network is fragmented it can be reconstructed by a new invasion of cooperators, leading to recurrent cycles of formation and fragmentation observed, for instance, in bacterial communities and socio-economic networks. In the last part of the talk, I will briefly introduce my current work on the role of individual decision-making in cooperative communities and the possibility of using synthetic biology to address these ideas in microbial communities.
References
M. Cavaliere, S. Sedwards, C.E. Tarnita, M.A. Nowak, A. Csikasz-Nagy. Prosperity is Associated with Instability in Dynamical Networks. Journal of Theoretical Biology, 299, 2012.
M. Cavaliere, J.F. Poyatos. Plasticity Facilitates Sustainable Growth in the Commons. Journal of the Royal Society Interface, 10, 2013.
C. Moreno-Fenoll, M. Cavaliere, E. Martinez-Garcia, J.F. Poyatos. Eco-Evolutionary Feedbacks can Rescue Cooperation in Microbial Populations. Scientific Reports, 7, 2017.
Abstract
The Kameda-Weiner algorithm takes a machine (nondeterministic finite automaton) as input, and provides an optimal machine (state-minimal nondeterministic finite automaton) as output. In this talk I will discuss work which provides a clear explanation of it, by translating the various syntactic constructs into more meaningful order-theoretic ones, and then composing them together to prove correctness.
Abstract
A key result in computational learning theory is Dana Angluin's L* algorithm that describes how to learn a regular language, or a deterministic finite automaton (DFA), using membership and equivalence queries. In my talk I will present a generalisation of this algorithm using ideas from coalgebra and modal logic — please note, however, that prior knowledge of these topics will not be required.
In the first part of my talk I will recall how the L* algorithm works and establish a link to the notion of a filtration from modal logic. Furthermore I will provide a brief introduction to coalgebraic modal logic. In the second part of my talk I will present a generalisation of Angluin's original algorithm from DFAs to coalgebras for an arbitrary finitary set functor T in the following sense: given a (possibly infinite) pointed T-coalgebra that we assume to be regular (i.e. having an equivalent finite representation) we can learn its finite representation by (i) asking "logical queries" (corresponding to membership queries) and (ii) making conjectures to which a teacher has to reply with a counterexample (equivalence queries). This covers (known variants of) the original L* algorithm and algorithms for learning Mealy and Moore machines. Other examples are infinite streams, trees and bisimulation quotients of various types of transition systems.
Joint work with Simone Barlocco.
Abstract
Quotient inductive-inductive types (QIITs) are set-truncated, mutually defined higher inductive types. I am going to discuss two applications of QIITs:
1. define an internal syntax of Type Theory without reference to untyped preterms;
2. define a version of the partiality monad that doesn't require countable choice.
On the one hand, I think these applications are interesting because they are applications of HoTT which have nothing to do with homotopy theory; on the other hand, they are clearly not very "higher" (in the sense of truncation levels), since they can be defined in the set-truncated fragment of HoTT. Hence my question: what are interesting applications of higher types which are not directly related to synthetic homotopy theory?
This talk is based on joint work with Paolo Capriotti, Nils Anders Danielsson, Gabe Dijkstra, Ambrus Kaposi and Nicolai Kraus.
Abstract
A traditional source of complaint from CS undergraduates (especially in the USA, but in other places, too) is that they are made to learn too much standard issue mathematics with little apparent relevance to computation. Differential calculus (with its usual presentational focus on physical systems) is often picked upon as the archetype. What we see in action is the fragile male ego: they are not so quick to complain about the unimportance of things they do not find difficult. All of which makes more delicious the irony that differential operators have a key role to play in understanding discrete structures, such as automata, datatypes, execution stacks, and plenty more.
The basic idea is as follows: to put your finger over any single K in the pair of words
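The abstract breaks off here, but the punchline it gestures at is the well-known "one-hole contexts" story: differentiating a datatype's signature describes the ways of putting your finger on one position in it. A minimal Haskell sketch for lists (the names are my own, not from the talk):

```haskell
-- One-hole contexts for lists: the "derivative" of the list functor.
-- A context is the (reversed) prefix before the hole and the suffix after it.
data ListCtx a = ListCtx [a] [a] deriving Show

-- Plug a value back into the hole to recover a whole list.
plug :: ListCtx a -> a -> [a]
plug (ListCtx before after) x = reverse before ++ [x] ++ after

-- All ways of putting your finger on one element of a list.
holes :: [a] -> [(a, ListCtx a)]
holes = go []
  where
    go _      []       = []
    go before (x : xs) = (x, ListCtx before xs) : go (x : before) xs
```

Plugging each element back into its own context recovers the original list, which is exactly the law that the "derivative" reading of zippers makes precise.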
Abstract
Blockchains, i.e. decentralised, distributed data structures which can also carry executable code for a non-standard execution environment, introduce new models of computation. Decentralised here means, informally speaking, "without central control": a currency without a (central) bank, for example, but much more. Blockchains support the recently introduced virtual currencies, a la Bitcoin, and a new class of decentralised applications, including smart contracts. In this talk we will introduce the main aspects of a blockchain, with particular reference to the Bitcoin blockchain as a paradigmatic case of such a new model of computation, and also touching on smart contracts. No previous knowledge of bitcoin/blockchain is required for this introductory talk.
Abstract
Statistical models in e.g. machine learning are traditionally expressed as some sort of flow chart. Writing sophisticated models succinctly is much easier in a fully fledged programming language. The programmer can then rely on generic inference algorithms instead of having to craft one for each model. Several such higher-order functional probabilistic programming languages exist, but their semantics, and hence correctness, are not clear. The problem is that the standard semantics of probability theory, given by measurable spaces, does not support function types. I will describe how to get around this.
Abstract
We compare the expressive power of three programming abstractions for user-defined computational effects: Bauer and Pretnar's effect handlers, Filinski's monadic reflection, and delimited control. This comparison allows a precise discussion about the relative merits of each programming abstraction.
We present three calculi, one per abstraction, extending Levy's call-by-push-value. These comprise syntax, operational semantics, a natural type-and-effect system, and, for handlers and reflection, a set-theoretic denotational semantics. We establish their basic meta-theoretic properties: adequacy, soundness, and strong normalisation. Using Felleisen's notion of a macro translation, we show that these abstractions can macro-express each other, and show which translations preserve typeability. We use the adequate finitary set-theoretic denotational semantics for the monadic calculus to show that effect handlers cannot be macro-expressed while preserving typeability either by monadic reflection or by delimited control. We supplement our development with a mechanised Abella formalisation.
Joint work with Yannick Forster, Sam Lindley, and Matija Pretnar.
Abstract
I'll report on my attempts to design a cubical type theory together with Dan Licata and Ed Morehouse during my visit to Wesleyan University, Middletown, Connecticut. We had something which seemed quite promising, but that falls apart just short of the finish line; I'll tell you about it in the hope of miraculous rescue from the audience. However, I'll start from basics so that everyone has a chance to join in in the fun. Mentions of Donald Trump will be kept to a minimum.
Abstract
PCF is the prototypical functional programming language, with two data types (naturals and booleans), lambda-abstraction and recursion. PCF was introduced by Gordon Plotkin in his seminal "LCF Considered as a Programming Language" paper from 1977. Despite PCF's simplicity, its semantics is theoretically interesting. I will introduce PCF, its operational semantics, the "standard" domain-theoretic denotational semantics and show that the two agree on closed programs. Finally, I will discuss observational equivalence for PCF and show that the denotational semantics fails to be "fully abstract".
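As a concrete warm-up, here is a toy Haskell rendering of a PCF fragment with a call-by-name evaluator. The HOAS representation and names are my own sketch, not Plotkin's concrete syntax, and booleans are omitted:

```haskell
-- A toy PCF fragment (naturals only) with a call-by-name evaluator.
-- Binding is borrowed from Haskell (HOAS); Fix is the source of looping.
data Term
  = Zero
  | Suc Term
  | Pred Term
  | IfZ Term Term Term        -- ifz n then t else e
  | Lam (Term -> Term)        -- object-level functions as meta-level ones
  | App Term Term
  | Fix (Term -> Term)        -- general recursion, hence possible divergence

eval :: Term -> Term
eval Zero        = Zero
eval (Suc t)     = Suc (eval t)
eval (Pred t)    = case eval t of { Zero -> Zero; Suc n -> n; _ -> error "stuck" }
eval (IfZ n t e) = case eval n of { Zero -> eval t; Suc _ -> eval e; _ -> error "stuck" }
eval (Lam f)     = Lam f
eval (App t u)   = case eval t of { Lam f -> eval (f u); _ -> error "stuck" }
eval (Fix f)     = eval (f (Fix f))

-- Addition, defined by recursion on the first argument.
add :: Term
add = Fix (\self -> Lam (\m -> Lam (\n ->
        IfZ m n (Suc (App (App self (Pred m)) n)))))

toInt :: Term -> Int
toInt Zero    = 0
toInt (Suc t) = 1 + toInt t
toInt _       = error "not a numeral"
```

The interesting point, which the denotational story has to account for, is that `eval` may simply fail to terminate on terms like `Fix (\x -> x)`.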
Abstract
Hoare Logic is a logic for proving properties of programs of the form: if the initial state satisfies a precondition, then the final state satisfies a postcondition. Hoare logic proofs are structured around the structure of the program itself, making the system a compositional one for reasoning about pieces of programs. I'll introduce Hoare Logic for a little imperative language with WHILE loops. I'll then motivate Separation Logic, which enriches Hoare Logic with a Frame Rule for local reasoning.
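For concreteness, two standard rules of the kind the talk introduces: the while rule of Hoare Logic, and Separation Logic's frame rule (these are textbook formulations, not specific to the talk):

```latex
% While rule: the invariant P is preserved by the loop body.
\frac{\{P \land b\}\; C\; \{P\}}
     {\{P\}\ \mathtt{while}\ b\ \mathtt{do}\ C\ \{P \land \lnot b\}}
\qquad
% Frame rule: local reasoning; C must not modify variables free in R.
\frac{\{P\}\; C\; \{Q\}}
     {\{P * R\}\; C\; \{Q * R\}}
```

The frame rule is what makes the reasoning local: a proof about `C`'s footprint lifts to any larger heap described by `R`.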
Abstract
I will try to follow on from yesterday's introduction by getting to the nitty-gritty of bitcoin/blockchain. I won't assume attendance of the seminar but will try not to repeat it!
Abstract
In this 101 I plan to discuss omega-automata, i.e., finite automata that operate on infinite words/streams. These automata form an important tool for the specification and verification of the ongoing, possibly infinite behaviour of a system. In my talk I will provide the standard definition(s) of omega-automata and highlight what makes omega-automata difficult from a coalgebraic perspective. Finally, I am going to discuss the work by Ciancia & Venema that provides a first coalgebraic representation of a particular type of omega-automata, so-called Muller automata.
Abstract
Automata learning is a well known technique to infer a finite state machine from a set of observations. One important algorithm for automata learning is the L* algorithm by Dana Angluin. In this 101, I will explain how the L* algorithm works via an example. Afterwards, I will discuss the ingredients of the algorithm both in the standard framework by Angluin and in a recently developed categorical/coalgebraic framework by Jacobs & Silva. Lastly, I plan to outline the proof of the minimality of the automaton that is built by the learning algorithm.
Abstract
In this 101 I outline the syntax and semantics of classical first order predicate logic. I will also try to mention some of the characteristic properties of first order logic, such as compactness, the Löwenheim-Skolem theorem, or locality properties in finite model theory.
Abstract
Our POPL attendees will tell us about their favourite talks, the latest research gossip and show us their most scenic photos from POPL in Paris.
Abstract
I explain how we can use automata and games to understand the behaviour of modal fixpoint logics.
Abstract
We explore the design and implementation of Frank, a strict functional programming language with a bidirectional effect type system designed from the ground up around a novel variant of Plotkin and Pretnar's effect handler abstraction.
Effect handlers provide an abstraction for modular effectful programming: a handler acts as an interpreter for a collection of commands whose interfaces are statically tracked by the type system. However, Frank eliminates the need for an additional effect handling construct by generalising the basic mechanism of functional abstraction itself. A function is simply the special case of a Frank operator that interprets no commands.
Moreover, Frank's operators can be multihandlers which simultaneously interpret commands from several sources at once, without disturbing the direct style of functional programming with values.
Effect typing in Frank employs a novel form of effect polymorphism which avoids mentioning effect variables in source code. This is achieved by propagating an ambient ability inwards, rather than accumulating unions of potential effects outwards.
I'll give a tour of Frank through a selection of concrete examples.
(Joint work with Conor McBride and Craig McLaughlin)
Abstract
The algebras of many-valued Łukasiewicz logics (MV algebras), as well as those of the theory of quantum measurement (effect algebras), have undergone considerable development in the 1980s and 1990s; they now constitute important research fields, with connections to several contemporary areas of mathematics, logic, and theoretical computer science.
Both subjects have recently attracted considerable interest among groups of researchers in categorical logic and foundations of quantum computing. I will give a leisurely introduction to MV algebras (and their associated logics), as well as the more general world of effect algebras. If time permits, we will also illustrate some new results (with Mark Lawson, Heriot-Watt) on coordinatization of some concrete MV-algebras using inverse semigroup theory.
Abstract
The techniques used by the generic programming community have taught us that we can greatly benefit from exposing the common internal structure of a family of objects. One can for instance derive once and for all a wealth of iterators from an abstract characterisation of recursive datatypes as fixpoints of functors.
Our previous work on type and scope preserving semantics and their properties has made us realise that numerous semantics of the lambda calculus can be presented as instances of the fundamental lemma associated to an abstract notion of 'Model'. This made it possible to avoid code duplication as well as prove these semantics' properties generically.
Putting these two ideas together, we give an abstract description of syntaxes with binding, making both their recursive and scoping structure explicit. The fundamental lemma associated to these syntaxes can be instantiated to provide users with proofs that their language is stable under renaming and substitution, as well as a way to define various evaluators easily.
Abstract
I will be discussing composition in a model of economic game theory and methods for representing the impact of choice on subsequent games. Time permitting I will also work through some games using this model.
Abstract
A filter P is a consistent, deductively closed set of predicates. A filter is prime when, whenever a disjunction belongs to P, at least one of its disjuncts belongs to P.
Abstract
We discuss a number of semantic properties pertaining to formulas of the modal mu-calculus. For each of these properties we provide a corresponding syntactic fragment, in the sense that a mu-calculus formula \phi has the given property iff it is equivalent to a formula \phi' in the corresponding fragment. Since this formula \phi' will always be effectively obtainable from \phi, as a corollary, for each of the properties under discussion, we prove that it is decidable in elementary time whether a given mu-calculus formula has the property or not.
The properties that we study have in common that they all concern the dependency of the truth of the formula at stake, on a single proposition letter p. In each case the semantic condition on \phi will be that \phi, if true at a certain state in a certain model, will remain true if we restrict the set of states where p holds, to a special subset of the state space. Important examples include the properties of complete additivity and (Scott) continuity, where the special subsets are the singletons and the finite sets, respectively.
Our proofs for these characterisation results will be automata-theoretic in nature; we will see that the effectively defined maps on formulas are in fact induced by rather simple transformations on modal automata.
Abstract
Modal logic provides a simple, yet surprisingly powerful, language for specifying properties of coalgebras. In this talk I introduce the basic modal logic that is interpreted on relational structures. My aim is to provide an idea how modal logic relates to other logics, such as first-order and intuitionistic logic, and to the duality between algebraic and coalgebraic structures.
If time permits, I might also give a very informal warm-up for the modal mu-calculus which is the topic of next week's talk.
Abstract
I'll give a standard overview of Simply Typed Lambda Calculus (STLC) (syntax, typing and computation rules) in a well-typed setting, and then modify it to show STLC in a bidirectional setting.
Afterwards I'll show Strong Confluence (Church-Rosser theorem) in the bidirectional setting.
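As a rough illustration of the bidirectional setting (my own toy sketch, not the notation used in the talk), here is STLC split into a synthesis mode and a checking mode:

```haskell
-- Bidirectional STLC: synthesisable terms (Syn) produce a type,
-- checkable terms (Chk) are tested against one. De Bruijn indices.
data Ty = Base | Arr Ty Ty deriving (Eq, Show)

data Syn = Var Int            -- variables synthesise from the context
         | App Syn Chk        -- application synthesises
         | Ann Chk Ty         -- annotation turns checkable into synthesisable

data Chk = Lam Chk            -- abstraction checks against an arrow type
         | Emb Syn            -- embedding checks by comparing synthesised type

synth :: [Ty] -> Syn -> Maybe Ty
synth ctx (Var i)
  | i >= 0 && i < length ctx = Just (ctx !! i)
  | otherwise                = Nothing
synth ctx (App f a) = case synth ctx f of
  Just (Arr s t) | check ctx a s -> Just t
  _                              -> Nothing
synth ctx (Ann t ty)
  | check ctx t ty = Just ty
  | otherwise      = Nothing

check :: [Ty] -> Chk -> Ty -> Bool
check ctx (Lam b) (Arr s t) = check (s : ctx) b t
check _   (Lam _) _         = False
check ctx (Emb e) ty        = synth ctx e == Just ty
```

The split keeps type information flowing one way at each node: annotations are the only place where checking feeds back into synthesis.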
Abstract
I shall give a brief introduction to System F.
I will then explain how to capture our intuition about polymorphic functions behaving uniformly by relational parametricity, and talk about ongoing work to find a notion of proof-relevant parametricity.
Abstract
I will give a basic introduction to data types and initial-algebra semantics. The meaning of a data type is given as the initial object in a category of types with the corresponding constructors. Initiality immediately allows the modelling of a non-dependent recursion principle. I'll show how this can be upgraded to full dependent elimination, also known as induction, by using the uniqueness of the mediating arrow; in fact, induction is equivalent to recursion plus uniqueness. All possibly unfamiliar terms in this abstract will also be explained.
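A minimal Haskell rendering of the initial-algebra story (standard folklore, not code from the talk): a datatype as the fixed point of its signature functor, with the fold as the mediating arrow:

```haskell
-- A datatype as the least fixed point of its signature functor,
-- and recursion (fold) as the unique map out of the initial algebra.
newtype Fix f = In (f (Fix f))

-- cata alg is the mediating arrow into the algebra alg.
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg (In t) = alg (fmap (cata alg) t)

-- Natural numbers: a signature functor with a zero and a successor.
data NatF x = ZeroF | SucF x
instance Functor NatF where
  fmap _ ZeroF    = ZeroF
  fmap f (SucF x) = SucF (f x)

type Nat = Fix NatF

toInt :: Nat -> Int
toInt = cata alg
  where alg ZeroF    = 0
        alg (SucF n) = n + 1

three :: Nat
three = In (SucF (In (SucF (In (SucF (In ZeroF))))))
```

Haskell can express the recursion principle but not the uniqueness of `cata`; it is exactly that uniqueness which, per the abstract, upgrades recursion to induction.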
Abstract
The core subject of Computer Science is "generated behaviour" (quiz: who said this?). Coalgebra provides the categorical formalisation of generated behaviour. I am planning to provide a first, very basic introduction to coalgebra. This will consist of two parts: i) coinduction & corecursion as means to define & reason about the (possibly) infinite behaviour of things; ii) modal logics for coalgebras.
Abstract
For a category C we consider the endomorphism category End(C) and the subcategory of automorphisms Aut(C) -> End(C). It has been observed that when C is the category of finite sets, finite dimensional vector spaces, or compact metric spaces, this inclusion functor admits a simultaneous left and right adjoint.
We give general criteria for the existence of such adjunctions for a broad class of categories which includes FinSet, FinVect and CompMet as special cases. This is done using the language of factorisation systems and by introducing a notion of eventual image functors which provide a general method for constructing adjunctions of this kind.
Abstract
Or: how someone else found the maths to justify my dogma
Abstract
Practice talk for Quantum Interactions.
Abstract
Practice talk for LICS.
Theories featuring the interaction between a Frobenius algebra and a Hopf algebra have recently appeared in several areas in computer science: concurrent programming, control theory, and quantum computing, among others. Bonchi, Sobocinski, and Zanasi have shown that, given a suitable distributive law, a pair of Hopf algebras forms two Frobenius algebras. Coming from the perspective of quantum theory, we take the opposite approach, and show that interacting Frobenius algebras form Hopf algebras. We generalise the system of Bonchi et al. (2014) by including non-trivial dynamics of the underlying object – the so-called phase group – and investigate the effects of finite dimensionality of the underlying model, and recover their system as a subtheory in the prime power dimensional case. We show that the presence of a non-trivial phase group means that the theory cannot be formalised as a distributive law.
Abstract
Practice talk for QPL.
Theories featuring the interaction between a Frobenius algebra and a Hopf algebra have recently appeared in several areas in computer science: concurrent programming, control theory, and quantum computing, among others. Bonchi, Sobocinski, and Zanasi (2014) have shown that, given a suitable distributive law, a pair of Hopf algebras forms two Frobenius algebras. Here we take the opposite approach, and show that interacting Frobenius algebras form Hopf algebras. We generalise (BSZ 2014) by including non-trivial dynamics of the underlying object – the so-called phase group – and investigate the effects of finite dimensionality of the underlying model. We recover the system of Bonchi et al as a subtheory in the prime power dimensional case, but the more general theory does not arise from a distributive law.
Abstract
I'll show how to generalise some results from algebra (think groups, rings, R-modules etc.) to a categorical setting using factorisation systems and an appropriate notion of finiteness on the objects of a category.
Abstract
Formal languages and automata are taught to every computer science student. However, the student will most likely not see the beautiful coalgebraic foundations.
In this talk, I recapitulate how infinite trees can represent formal languages (sets of strings). I explain Agda's coinduction mechanism based on copatterns and demonstrate that it allows an elegant representation of the usual language constructions like union, concatenation, and Kleene star, with the help of Brzozowski derivatives. We will also investigate how to reason about equality of languages using bisimulation and coinductive proofs.
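The constructions in the talk are in Agda, but the same coinductive representation can be sketched in Haskell, where laziness stands in for copatterns (the names here are mine):

```haskell
-- A language as an infinite trie: does it accept the empty word, and what
-- is its Brzozowski derivative by each letter? Laziness keeps this productive.
data Lang a = Lang { nu :: Bool, delta :: a -> Lang a }

none, eps :: Lang a
none = Lang False (const none)          -- the empty language
eps  = Lang True  (const none)          -- just the empty word

char :: Eq a => a -> Lang a
char c = Lang False (\x -> if x == c then eps else none)

union :: Lang a -> Lang a -> Lang a
union l m = Lang (nu l || nu m) (\x -> union (delta l x) (delta m x))

-- Concatenation: d(LM) = d(L)M, plus d(M) when L accepts the empty word.
cat :: Lang a -> Lang a -> Lang a
cat l m = Lang (nu l && nu m) $ \x ->
  if nu l then union (cat (delta l x) m) (delta m x)
          else cat (delta l x) m

-- Kleene star: d(L*) = d(L)L*.
star :: Lang a -> Lang a
star l = Lang True (\x -> cat (delta l x) (star l))

accepts :: Lang a -> [a] -> Bool
accepts l []       = nu l
accepts l (x : xs) = accepts (delta l x) xs

-- Example: (ab)* matches "", "ab", "abab", ...
ab :: Lang Char
ab = star (cat (char 'a') (char 'b'))
```

Membership testing is just walking the trie letter by letter; equality of languages is then naturally a bisimulation between tries, as in the talk.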
Abstract
ThreadSafe is a static analysis tool for finding bugs in concurrent Java code that has been used by companies across the world to analyse and find bugs in large mission-critical industrial applications. I will talk about how ThreadSafe works, and about our experiences in applying static analysis technology to the "real world".
ThreadSafe is available from http://www.contemplateltd.com/
Abstract
I will sketch an alternative approach to economic game theory based upon the computer science idea of compositionality: concretely we i) give a number of operators for building up complex and irregular games from smaller and simpler games; and ii) show how the Nash equilibrium of these complex games can be defined recursively from their simpler components. We apply compositional reasoning to sophisticated games where agents must reason about how their actions affect future games and how those future games affect the utility they receive. This forces us into a second innovation — we augment the usual lexicon of games with a dual notion to utility because, in order for games to accept utility, this utility must be generated by other games. Our third innovation is to represent our games as string diagrams so as to give a clear visual interface to manipulate them. Our fourth, and final, innovation is a categorical formalisation of these intuitive diagrams which ensures our reasoning about them is fully rigorous.
The talk will be general so as to appeal to as wide an audience as possible. In particular, no knowledge of category theory will be assumed!
Abstract
Motivated by the desire to understand the combinatorics of graph rewriting systems, it proved necessary to invent a formulation of graph rewriting itself that is not based on category theoretic structures as in the traditional framework, but on the concept of diagrammatic combinatorial Hopf algebras and reductions thereof. In this talk, I will present how the classical example of the Heisenberg-Weyl algebra of creation and annihilation of indistinguishable particles, which can alternatively be interpreted as the algebra of discrete graph rewriting, gave the initial clues for this novel framework. In hindsight, to pass from the special case of discrete graph rewriting to the case of general graph rewriting required every aspect of the framework of diagrammatic combinatorial Hopf algebras as a guideline for the construction, yet none of the traditional category theoretic ideas, whence one might indeed consider this reformulation as an independent formulation of graph rewriting.
The new framework results in a number of surprising results even directly from the formulation itself: besides the two main variants of graph rewriting known in the literature (DPO and SPO rewriting), there exist two more natural variants in the new framework. For all four variants, graph rewriting rules are encoded in so-called rule diagrams, with their composition captured in the form of diagrammatic compositions followed by one of four variants of reduction operations. Besides the general structure theory of the resulting algebras aka the rule algebras, one of the most important results to date of this framework in view of applications is the possibility to formulate stochastic graph rewriting systems in terms of the canonical representations of the rule algebras. Notably, this is closely analogous to the formulation of chemical reaction systems in terms of the canonical representation of the Heisenberg-Weyl algebra aka the bosonic Fock space. The presentation will not assume any prior knowledge of the audience on the particular mathematics required for this construction, and will be given on the whiteboard. The work presented is the result of a collaboration with Vincent Danos and Ilias Garnier (ENS Paris/LFCS University of Edinburgh), and (in an earlier phase) with Tobias Heindel (University of Copenhagen).
Abstract
In this talk, we explore the fundamental category-theoretic structure needed to model relational parametricity (i.e., the fact that polymorphic programs preserve all relations) for the polymorphic lambda calculus (a.k.a. System F). Taken separately, the notions of categorical model of impredicative polymorphism and relational parametricity are well-known (lambda2-fibrations and reflexive graph categories, respectively). Perhaps surprisingly, simply combining these two structures results in a notion that only enjoys the expected properties in case the underlying category is well-pointed. This rules out many categories of interest (e.g. functor categories) in the semantics of programming languages.
To circumvent this restriction, we modify the definition of fibrational model of impredicative polymorphism by adding one further ingredient to the structure: comprehension in the sense of Lawvere. Our main result is that such comprehensive models, once further endowed with reflexive-graph-category structure, enjoy the expected consequences of parametricity. This is proved using a type-theoretic presentation of the category-theoretic structure, within which the desired consequences of parametricity are derived. Working in this type theory requires new techniques, since equality relations are not available, so that standard arguments that exploit equality need to be reworked.
This is joint work with Neil Ghani and Alex Simpson, and a dry run for a talk in Cambridge the week after.
Abstract
String diagrams give a powerful graphical syntax for morphisms in symmetric monoidal categories (SMCs). They find many applications in computer science and are becoming increasingly relevant in other fields such as physics and control theory.
An important role in many such approaches is played by equational theories of diagrams, which can be oriented and used as rewrite systems. In this talk, I'll lay the foundations for this form of rewriting by interpreting diagrams combinatorially as typed hypergraphs and establish the precise correspondence between diagram rewriting modulo the laws of SMCs on the one hand and double pushout (DPO) rewriting of hypergraphs, subject to a soundness condition called convexity, on the other. This result rests on a more general characterisation theorem in which we show that typed hypergraph DPO rewriting amounts to diagram rewriting modulo the laws of SMCs with a chosen special Frobenius structure.
If there's time, I'll also discuss some of the results we have in developing the rewrite theory of hypergraphs for SMCs, namely termination proofs via graph metrics and strongly convex critical pair analysis.
Abstract
In this talk, we consider extending Lawvere theories to allow enrichment in a base category such as CMonoid, Poset or Cat. In doing so, we see that we need to alter the formulation in a fundamental way, using the notion of cotensor, a kind of limit that is hidden in the usual accounts of ordinary category theory but is fundamental to enriched category theory. If time permits, we will briefly consider the specific issues that arise when one has two-dimensional structure in the enriching category, as exemplified by Poset and Cat.
Abstract
In 1963, Bill Lawvere characterised universal algebra in category-theoretic terms. That his formulation was category-theoretic was not its central contribution: more fundamental was its presentation independence. Three years later, monads were proposed as another category-theoretic formulation of universal algebra. Overall, the former are technically superior, but the latter are more prominent, cf. Betamax vs VHS; the relationship between the two is nevertheless particularly fruitful. So we study Lawvere theories carefully in the setting of universal algebra and in relation to monads.
Abstract
I have recently begun to learn about the Cubical Type Theory of Coquand et al., as an effective computational basis for Voevodsky's Univalent Foundations, inspired by a model of type theory in cubical sets. It is in some ways compelling in its simplicity, but in other ways intimidating in its complexity. In order to get to grips with it, I have begun to develop my own much less subtle variation on the theme. If I am lucky, I shall get away with it. If I am unlucky, I shall have learned more about why Cubical Type Theory has to be as subtle as it is.
My design separates Coquand's all-powerful "compose" operator into smaller pieces, dedicated to more specific tasks, such as transitivity of paths. Each type path Q : S = T induces a notion of value path s {Q} t, where either s : S or s is •, "blob", and similarly either t : T or t is •. A "blob" at one end indicates that the value at that end of the path is not mandated by the type. This liberalisation in the formation of "equality" types allows us to specify the key computational use of paths between types, extrusion:
if Q : S = T and s : S, then s ⌢• Q : s {Q} •
That is, whenever we have a value s at one end of a type path Q : S = T, we can extrude that value across the type path, getting a value path which is s at the S end, but whose value at the T end is not specified in advance of explaining how to compute it. Extrusion gives us a notion of coercion-by-equality which is coherent by construction. It is defined by recursion on the structure of type paths. Univalence can be added to the system by allowing the formation of types interpolating two equivalent types, with extrusion demanding the existence of the corresponding interpolant values, computed on demand by means of the equivalence.
So far, there are disconcerting grounds for optimism, but the whole of the picture has not yet emerged: I may just have pushed the essential complexity into one corner, or the whole thing may be holed below the waterline. But if it does turn out to be nonsense, it will be nonsense for an interesting reason.
Abstract
I will present work in progress on a (co)algebraic framework that allows us to study dynamic modal logics, such as Propositional Dynamic Logic (PDL) and Game Logic (GL), in a uniform way. Underlying our framework is the basic observation that the program/game constructs of PDL/GL arise from monad structure, and that the axioms of these logics correspond to compatibility requirements between the modalities and this monad structure. So far we have a general soundness and completeness result for PDL-like logics with respect to T-coalgebras for a monad T. I will discuss our completeness theorem, its limitations and plans for extending our results. [For the latter we might require the help of koalas, wallabies and wombats.]
Abstract
Infinity-categories simultaneously generalise topological spaces and categories. Intuitively, a (weak) infinity-category should have objects, morphisms, 2-morphisms, 3-morphisms, ..., together with identity morphisms and composition that is suitably unital and associative up to higher (invertible) morphisms (the number 1 in "(infinity, 1)-category" means that k-morphisms for k > 1 are invertible). The trouble begins when one naively tries to make these coherence conditions precise; already the explicit definition of a 4-category famously requires 51 pages. Instead, one typically turns to certain "models" of infinity-categories that encode all this data implicitly, usually as some kind of simplicial object with additional properties. I will introduce two such models: quasicategories and complete Segal spaces. If time allows, I will also discuss hopes and dreams about internalising these notions in Type Theory, which should give a satisfactory treatment of category theory in Type Theory without assuming Uniqueness of Identity Proofs.
Abstract
I've been working with Jules Hedges on a compositional model of game theory. After briefly reminding you of the model, I'll discuss where we are at – namely the definition of morphisms between games, and the treatment of choice and iteration of games. I'm hoping you will be able to shed some light on this murky area. There is a draft paper if anyone is interested.
Abstract
The probabilistic calculus introduced in the eponymous paper by Robin Adams and Bart Jacobs draws its inspiration from quantum theory, by observing that conditional probabilities can be seen as side-effect-free measurements. A type-theoretic treatment of this semantic observation leads, once equipped with suitable computation rules, to the ability to do exact conditional inference.
I will present the type theory and the accompanying computation rules proposed in the paper and discuss some of the interesting open questions I will be working on in the near future.
Abstract
Coherence spaces are a simplification of Scott domains, introduced by Girard to give a semantics to the polymorphic lambda-calculus. While investigating the structure of coherence spaces, Girard noticed that the denotation of the function type in coherence spaces can be decomposed into two independent constructions: a linear ("use-once") function space, and a many-uses-into-one-use modality. And so Linear Logic was discovered.
Coherence spaces are interesting because they model computation at a low level in terms of interchange of atomic 'tokens' of information. This makes them a useful tool for understanding several different computational phenomena.
In this talk, I'll show how coherence spaces naturally model session types, via Wadler's interpretation of Classical Linear Logic as a session-typed pi-calculus, and how that interpretation extends to an interpretation of a dependently typed version of session types.
Abstract
I would like to speak about the categorical structure of the category of von Neumann algebras, whose morphisms are the normal, completely positive, unital linear maps. For some years my colleagues and I have worked on identifying basic structures in this category, and while surprisingly many things do not exist or do not work in this category (it's not a topos or even an extensive category, there's no epi-mono factorisation system, there is no dagger, colimits — if they exist at all — are horrendous...), we did find some structure (the products behave reasonably in some sense, there is a 'quotient', and 'comprehension', and we have a universal property for the minimal Stinespring dilation, and a universal property for M_2 — the qubit). There is no deep category theory involved by any standards, and I promise I will spare you the functional analysis, so it should be a light talk.
Abstract
Due to popular demand I am going to give a brief introduction to Marc Pauly's Coalition Logic, a propositional modal logic that allows us to reason about the power of coalitions in strategic games. I will provide motivation and basic definitions. Furthermore I am planning to discuss how the logic can be naturally viewed as a coalgebraic logic and what we gain from the coalgebraic perspective. Finally — if (preparation) time permits — I am going to say how the logic can be applied to the area of mechanism design.
Abstract
In societies the notion of a law is not given by nature. Instead, social dynamics are driven by the theories the citizens hold about the dynamics of the social system. Self-referential mathematical structures, developed in computer science, are therefore natural candidates for addressing this foundational issue in the social sciences. We will look at a prototypical game-theoretic problem where several computer-scientific tools can help to discuss these structures. It is a long-standing problem of economics, and of humankind and its scarce resources: should we go to a bar that is always so overcrowded?
Abstract
Research in the proof theory of dynamic logics has recently gained momentum. However, features which are essential to these logics prevent standard proof-theoretic methodologies from applying straightforwardly. In this talk, I will discuss the main properties proof systems should enjoy in order to serve as suitable environments for an inferential theory of meaning (proof-theoretic semantics). Then, I'll identify the main challenges posed to the inferential-semantics research agenda by the very features of dynamic logics which make them so appealing and useful in applications. Finally, I'll illustrate a methodology for generating multi-type display calculi, which has been successful on interesting case studies (dynamic epistemic logic, propositional dynamic logic, monotone modal logic).
References
1. S. Frittella, G. Greco, A. Kurz, A. Palmigiano, V. Sikimić, A Proof-Theoretic Semantic Analysis of Dynamic Epistemic Logic, Journal of Logic and Computation, Special issue on Substructural logic and information dynamics (2014), DOI:10.1093/logcom/exu063.
2. S. Frittella, G. Greco, A. Kurz, A. Palmigiano, V. Sikimić, Multi-type Display Calculus for Dynamic Epistemic Logic, Journal of Logic and Computation, Special issue on Substructural logic and information dynamics (2014), DOI:10.1093/logcom/exu068.
3. S. Frittella, G. Greco, A. Kurz, A. Palmigiano, Multi-type Display Calculus for Propositional Dynamic Logic, Journal of Logic and Computation, Special issue on Substructural logic and information dynamics (2014), DOI:10.1093/logcom/exu064.
4. S. Frittella, G. Greco, A. Kurz, A. Palmigiano, V. Sikimić, Multi-type Sequent Calculi, Proc. Trends in Logic XIII, A. Indrzejczak, J. Kaczmarek, M. Zawidski eds, p 81-93, 2014.
5. G. Greco, A. Kurz, A. Palmigiano, Dynamic Epistemic Logic Displayed, Proc. Fourth International Workshop on Logic, Rationality and Interaction (LORI 2013), Huaxin Huang, Davide Grossi, Olivier Roy eds, 2013.
Abstract
Escalation is the behavior of players who play forever in the same game. Such a situation is a typical field of application for coinduction, the tool conceived for reasoning about infinite mathematical structures. In particular, we use coinduction to study formally the game called the dollar auction, which is considered the paradigm of escalation. Contrary to what has been accepted since 1971, we show that, provided one assumes that the other agent will always stop, bidding is rational, because it results in a subgame perfect equilibrium. We show that this is not the only rational strategy profile (the only subgame perfect equilibrium): indeed, if an agent stops and will stop at every step while the other agent keeps bidding, we claim that he is rational as well, because this corresponds to another subgame perfect equilibrium. In the infinite dollar auction game, the behavior in which both agents stop at each step is not a Nash equilibrium, hence is not a subgame perfect equilibrium, hence is not rational. Fortunately, the notion of rationality based on coinduction fits with common sense and experience. Finally, the possibility of a rational escalation in an arbitrary game can be expressed as a predicate on games, and the rationality of escalation in the dollar auction game can be proved as a theorem, which we have verified in the proof assistant Coq.

In this talk we will recall the principles of infinite extensive games and use them to introduce coinduction and equilibria (Nash equilibrium, subgame perfect equilibrium). We will show how one can prove that the two strategy profiles presented above are equilibria, and how this leads to a "rational" escalation in the dollar auction. We will show that escalation may even happen in a much simpler game, called the 0,1-game.
Abstract
Commutative Frobenius algebras play an important role in both Topological Quantum Field Theory and Categorical Quantum Mechanics; in the first case they correspond to 2D TQFTs, while in the second they are non-degenerate observables. I will consider the case of "special" Frobenius algebras, and their associated group of phases. This gives rise to a free construction from the category of abelian groups to the PROP generated by this Frobenius algebra. Of course a theory with only one observable is not very interesting. I will consider how two such PROPs should be combined, and show that if the two algebras (i) jointly form a bialgebra and (ii) their units are "mutually real", then they jointly form a Hopf algebra. This gives a "free" model of a pair of strongly complementary observables. I will also consider which unitary maps must exist in such models.
Slides are here if you want a preview.
Abstract
We introduce a notion of type- and scope-preserving semantics generalising Goguen and McKinna's "Candidates for Substitution" approach to defining one traversal generic enough to be instantiated to renaming first and then substitution. Its careful distinction of environment and model values, as well as its variation on a structure typical of a Kripke semantics, make it capable of expressing renaming and substitution but also various forms of Normalisation by Evaluation as well as, perhaps more surprisingly, monadic computations such as a printing function.
We then demonstrate that expressing these algorithms in a common framework yields immediate benefits: we can deploy some logical relations generically over these instances and obtain for instance the fusion lemmas for renaming, substitution and normalisation by evaluation as simple corollaries of the appropriate fundamental lemma. All of this work has been formalised in Agda.
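As a rough illustration of the "one generic traversal, many instantiations" idea (my own Python sketch in the style of McBride-et-al. kits, not code from the paper; the names `trav`, `ren_kit` and `sub_kit` are invented), renaming and substitution for de Bruijn lambda terms can share a single recursion:

```python
# Untyped lambda terms with de Bruijn indices:
#   ('var', n) | ('app', f, a) | ('lam', body)

def trav(kit, env, t):
    """One generic traversal. `env` maps indices to 'stuff'; the kit says
    how to turn stuff into a term (tm), make variable-stuff (vr), and
    push stuff under a binder (wk)."""
    if t[0] == 'var':
        return kit['tm'](env(t[1]))
    if t[0] == 'app':
        return ('app', trav(kit, env, t[1]), trav(kit, env, t[2]))
    # 'lam': index 0 becomes the fresh bound variable; the rest is weakened
    new_env = lambda n: kit['vr'](0) if n == 0 else kit['wk'](env(n - 1))
    return ('lam', trav(kit, new_env, t[1]))

# Renaming: 'stuff' is indices.
ren_kit = {'vr': lambda n: n,
           'tm': lambda n: ('var', n),
           'wk': lambda n: n + 1}

def rename(rho, t):
    return trav(ren_kit, rho, t)

# Substitution: 'stuff' is terms; weakening reuses renaming.
sub_kit = {'vr': lambda n: ('var', n),
           'tm': lambda u: u,
           'wk': lambda u: rename(lambda n: n + 1, u)}

def subst(sigma, t):
    return trav(sub_kit, sigma, t)
```

The two instances differ only in their kits, which is exactly what makes fusion lemmas provable once and for all at the level of `trav`.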
Abstract
Social Choice Functions are procedures used to aggregate the preferences of individuals into a collective decision. We outline two recent abstract approaches to SCFs: a sheaf treatment of Arrow's Theorem by Abramsky, and a modal logic studied by Ulle Endriss and myself. We show how to relate the categorical modelling of Social Choice problems to said work in Modal Logic. This insight prompts a number of research questions, from the relevance of sheaf-like conditions to the modelling of properties of SCFs on varying electorates.
Abstract
The Polymorphic Blame Calculus (PBC) of Ahmed et al. (2011) combines polymorphism, as in System F, with type dynamic and runtime casts, as in the Blame Calculus. The PBC is carefully designed to ensure relational parametricity, that is, to ensure that type abstractions do not reveal their abstracted types. The operational semantics of PBC uses runtime sealing and syntactic barriers to enforce parametricity. However, it is an open question as to whether these mechanisms actually guarantee parametricity for the PBC. Furthermore, there is some question regarding what parametricity means in the context of the PBC, as we have examples that are morally parametric but not technically so. This talk will review the design of the PBC with respect to ensuring parametricity and hopefully start a discussion regarding what parametricity should mean for the PBC.
Abstract
This is a sequel to my last 101 where I spoke about describing cellular automata as algebras of a comonad on Set. I'll describe how we can make sense of "generalised cellular automata" (probabilistic/non-deterministic/quantum, for example) as comonads on other categories via distributive laws of monads and comonads.
Abstract
We give a technique to construct a final coalgebra out of modal logic. An element of the final coalgebra is a set of modal formulas. The technique works for both the finite and the countable powerset functors. Starting with a corecursive algebra, we coinductively obtain a suitable subalgebra. We see, first with an example and then in the general setting of modal logic on a dual adjunction, that modal theories form a corecursive algebra, so that this construction may be applied.
We generalize the framework to categories other than Set, and look at examples in Poset and in the opposite category of Set.
Abstract
Relational parametricity is a fundamental concept within theoretical computer science and the foundations of programming languages, introduced by John Reynolds. His fundamental insight was that types can be interpreted not just as functors on the category of sets, but also as equality preserving functors on the category of relations. This gives rise to a model where polymorphic functions are uniform in a suitable sense; this can be used to establish e.g. representation independence, equivalences between programs, or deriving useful theorems about programs from their type alone.
The relations Reynolds considered were proof-irrelevant, which from a type-theoretic perspective is a little limited. As a result, one might like to extend his work to deal with proof-relevant, i.e. set-valued, relations. However, naive attempts to do this fail: the fundamental property of equality preservation cannot be established. Our insight is that just as one uses parametricity to restrict definable elements of a type, one can use parametricity of proofs to ensure equality preservation. The idea of parametricity for proofs can be formalised using the idea of 2-dimensional logical relations. Interestingly, these 2-dimensional relations have clear higher-dimensional analogues where (n+1)-relations are fibred over an n-cube of n-relations. Thus the story of proof-relevant logical relations quickly expands into one of higher-dimensional structures similar to the cubical sets which arise in Homotopy Type Theory. Of course, there are also connections to Bernardy and Moulin's work on internal parametricity.
Abstract
There exist various methods to distribute seats proportionally between states (or parties) in a parliament. Hamilton's method (also known as the method of largest remainders) was abandoned in the USA because of some drawbacks, in particular the possibility of the Alabama paradox, but it is still in use in many other countries.
In recent work (joint with Svante Janson) we give, under certain assumptions, a closed formula for the probability that the Alabama paradox occurs given the vector p_1,...,p_m of relative sizes of the states.
From the theorem we deduce a number of consequences. For example it is shown that the expected number of states that will suffer from the Alabama paradox is asymptotically bounded above by 1/e. For random (uniformly distributed) relative sizes p_1,...,p_m the expected number of states to suffer from the Alabama paradox converges to slightly more than a third of this, or approximately 0.335/e=0.123, as m -> infinity.
I will assume no prior knowledge of electoral mathematics, but begin by giving a brief background to various methods suggested and used for the distribution of seats proportionally in a parliament (it's all in the rounding).
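To make the paradox concrete, here is a small Python sketch of Hamilton's method (my illustration, not code from the talk): with states of sizes 6, 6 and 2, enlarging the house from 10 to 11 seats costs the smallest state a seat.

```python
from math import floor

def hamilton(populations, house_size):
    """Hamilton's method (largest remainders): every state gets the floor
    of its exact quota; leftover seats go to the states with the largest
    fractional remainders."""
    total = sum(populations)
    quotas = [p * house_size / total for p in populations]
    seats = [floor(q) for q in quotas]
    leftover = house_size - sum(seats)
    by_remainder = sorted(range(len(populations)),
                          key=lambda i: quotas[i] - seats[i], reverse=True)
    for i in by_remainder[:leftover]:
        seats[i] += 1
    return seats

# The Alabama paradox: growing the house shrinks the smallest state's share.
print(hamilton([6, 6, 2], 10))  # [4, 4, 2]
print(hamilton([6, 6, 2], 11))  # [5, 5, 1]
```

The quotas for the small state are 1.43 and 1.57 respectively, but its remainder drops from largest to smallest, so it loses the tie-break despite the larger house.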
Abstract
We define a category whose morphisms are 'games relative to a continuation', designed to allow games to be built recursively from simple components. The resulting category has interesting structure similar to (but weaker than) compact closed, and comes with an associated string diagram language.
Abstract
Let's say you have a database of people's private information. For SCIENCE, or some other reason, you want to let third parties query your data to learn aggregate information about the people described in the database. However, you have a duty to the people whose information your database contains not to reveal any of their individual personal information. How do you determine which queries you will let third parties execute, and which you will not?
"Differential Privacy" defines a semantic condition on probabilistic queries that identifies queries that are safe to execute, up to some "privacy budget".
I'll present the definition of differential privacy, talk a bit about why it is better than some 'naive' alternatives (e.g., anonymisation), and also describe how the definition can be seen as an instance of relational parametricity.
A good place to read about the definition of differential privacy is the book "The Algorithmic Foundations of Differential Privacy" by Cynthia Dwork and Aaron Roth.
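As a toy illustration of the definition (a sketch of the standard Laplace mechanism, not code from the talk; the name `private_count` is invented), a counting query has sensitivity 1, so adding Laplace noise of scale 1/epsilon makes it epsilon-differentially private:

```python
import math
import random

def laplace_sample(scale):
    """Draw from Laplace(0, scale) by inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon):
    """An epsilon-differentially private counting query: a count has
    sensitivity 1 (adding or removing one person changes it by at most 1),
    so Laplace noise of scale 1/epsilon suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_sample(1.0 / epsilon)
```

A small privacy budget (small epsilon) means large noise; repeated queries consume the budget additively, which is why the definition speaks of a "budget" at all.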
Abstract
Classical computation, invertible computation, probabilistic computation, and quantum computation, form increasingly more sophisticated labelled transition systems. How can we approximate a transition system by less sophisticated ones? Considering all ways to get probabilistic information out of a quantum system leads to domain-theoretic ideas, that also apply in the accompanying Boolean logic. I will survey to what extent these domains characterise the system, leading with examples from quantum theory, in a way that is accessible to a broad audience of computer scientists, mathematicians, and logicians.
Abstract
In pure mathematics, cyclic homology is an invariant of associative algebras that is motivated by algebra, topology and even mathematical physics. However, when studied from an abstract point of view it turns out to be an invariant of a pair of a monad and a comonad that are related by a mixed distributive law, and I speculate that this could lead to some potential applications in computer science.
(based on joint work with Niels Kowalzig and Paul Slevin)
Abstract
Type : Set
Type = String -> Bool
Abstract
The topic comes from theory of infinitary proofs, and cut-elimination. In essence it is about nicely-behaved maps from higher "infinities" to lower ones, as the infinitary proofs are er, infinite, and can be thought of as glorified transfinite iterators. What might nice behaviour mean?
You can think of it as how to fit an uncountable amount of beer into a bladder whose capacity is merely countable. (Or maybe even finite.)
The most ubiquitous form of infinity is the regular cardinal, i.e. passing from a container F to F + (mu F -> _), where mu is the W-type operation. I'll "explain" regular collapsing as being all about diagonalisation.
2015-03-11 11:00:00 UTC
Abstract
I gave an SPLS talk, which was mostly propaganda, about why people should stop claiming that totality loses Turing completeness. There was some technical stuff, too, about representing a recursive definition as a construction in the free monad whose effect is calling out to an oracle for recursive calls: that tells you what it is to be recursive without prejudicing how to run it. I'm trying to write this up double-quick as a paper for the miraculously rubbery MPC deadline, with more explicit attention to the monad morphisms involved. So I'd be grateful if you would slap down the shtick and make me more morphic. The punchline is that the Bove-Capretta domain predicate construction is a (relative) monad morphism from the free monad with a recursion oracle to the (relative) monad of Dybjer-Setzer Induction-Recursion codes. But it's worth looking at other monad morphisms (especially to the Delay monad) along the way.
Abstract
What structure is required of a set so that computations in a given notion of computation can be run statefully with this set as the state space? Some answers: to be able to serve stateful computations, a set must carry the structure of a lens; for running interactive I/O computations statefully, a "responder-listener" structure is necessary; etc. I will observe that, in general, to be a runner of computations for an algebraic theory (defined as a set equipped with a monad morphism between the corresponding monad and the state monad for this set) is the same as to be a comodel of this theory, i.e. a coalgebra of the corresponding comonad. I will work out a number of instances. I will also compare runners to handlers.
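A minimal sketch of the first answer (my own toy Python, not from the talk): a lens, a get/put pair satisfying the usual laws, is exactly the structure needed to run get/put computations against a big store. Here the "computation" is simplified to a list of instructions rather than a free-monad term:

```python
# A lens from a big store S into a value V is a pair
#   get : S -> V,  put : S x V -> S
# satisfying the usual lens laws.

def run_state(prog, lens, store):
    """Interpret ('get',) / ('put', v) instructions against `store`,
    returning the final store and the values observed by the gets."""
    get, put = lens
    observed = []
    for instr in prog:
        if instr[0] == 'get':
            observed.append(get(store))
        else:
            store = put(store, instr[1])
    return store, observed

# A lens focusing on the 'x' field of a record:
x_lens = (lambda s: s['x'], lambda s, v: {**s, 'x': v})
```

The comodel perspective reads the same data coalgebraically: `get` and `put` are the co-operations of the state comonad for the store.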
Abstract
It would be a great shame if dependently-typed programming (DTP) restricted us to only writing very clever programs that were a priori structurally recursive and hence obviously terminating. Put another way, it is a lot to ask of the programmer to provide the program and its termination proof in one go; programmers should also be supported in working step-by-step. I will show a technique that lowers the barrier of entry, from showing termination to only showing productivity up front, and then later providing the opportunity to show termination (convergence). I will show an example of a normaliser for STLC represented in Agda as a potentially non-terminating but nonetheless productive corecursive function targeting the coinductive delay monad.
(Joint work with Andreas Abel)
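A rough Python sketch of separating productivity from termination (my illustration, not the Agda development): a Delay-style computation is built from `now`/`later` steps and is productive by construction, while convergence is only observed when we later choose to run it with some fuel:

```python
def now(v):
    return ('now', v)

def later(thunk):
    return ('later', thunk)

def run(d, fuel):
    """Observe a Delay computation for at most `fuel` steps: return its
    value if it converges in time, None otherwise."""
    for _ in range(fuel):
        tag, payload = d
        if tag == 'now':
            return payload
        d = payload()
    return None

def find_above(n, pred):
    """Search upwards for a number satisfying `pred`. Each recursive call
    is guarded by `later`, so the definition is productive even though
    it need not terminate."""
    if pred(n):
        return now(n)
    return later(lambda: find_above(n + 1, pred))
```

In Agda the guardedness is checked by the coinduction machinery; a separate convergence proof, supplied later, is what licenses extracting a total function.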
Abstract
I've managed to prove a theorem that I've been chasing for a while. The trouble, of course, was stating it. I'll revisit the motivation for extending type systems with an analysis of not just what things are but where, when, whose, etc. The idea is that typed constructions occur in one of a preordered set of worlds, with scoping restricted so that information flows only "upwards" from one world to another. Worlds might correspond to "at run time" and "during typechecking", or to computation in distinct cores, or in different stages, etc. What does the dependent function space mean in this setting? For a long time, I thought that each world had its own universal quantifier, for abstracting over stuff from that world. Failure to question this presumption is what led to failure to state a theorem I could prove. By separating quantifiers from worlds, I have fixed the problem. I'll show how to prove the key fact: if I can build something in one world and then move it to another, then it will also be a valid construction once it has arrived at its destination.
We will read the paper A Model of Type Theory in Cubical Sets by Marc Bezem, Thierry Coquand and Simon Huber. Thierry's Variation on cubical sets might also be useful. Administrative details: meet for lunch at 12am for those who want to, reading group starts at 2pm.
2014-12-03 14:00:00 UTC
Abstract
Neil and Clemens will report back from the Lorentz Center Workshop on Logics for Social Behaviour.
Abstract
Monoidal categories are essentially 2-dimensional things, so why on earth would we represent them using a linear string of symbols? I'll talk about how to use string diagrams for monoidal categories, graph rewriting for reasoning within them, and how the syntax can be extended to handle certain kinds of infinitary expressions with the infamous !-box. If there's time I'll finish with some half-baked (eh... basically still looking for the on switch of the oven...) ideas of how to generalise them.
Abstract
Kevin will be giving an informal talk about some of the stuff he has been learning about. He'll give the definition of a cellular automaton and then talk about how this definition can be phrased in terms of a comonad.
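As a taster for the comonadic phrasing (my own Python sketch, not Kevin's code), a universe is a tuple of cells with a focused position; `extract` reads the focus, and `extend` applies a focus-dependent local rule at every position, which is exactly one step of the automaton:

```python
def extract(u):
    """Counit of the comonad: read the focused cell."""
    cells, i = u
    return cells[i]

def extend(step, u):
    """Comonadic extend: run a focus-dependent local rule at every
    position, keeping the original focus."""
    cells, i = u
    return ([step((cells, j)) for j in range(len(cells))], i)

def rule110(u):
    """A local rule: the next state of the focused cell depends on its
    left neighbour, itself, and its right neighbour (wrapping around)."""
    cells, i = u
    n = len(cells)
    window = (cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
    return {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
            (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}[window]

u = ([0, 0, 0, 1, 0, 0, 0, 0], 3)
u = extend(rule110, u)   # one step of the automaton
```

The comonad law `extract(extend(f, u)) == f(u)` says that reading the focus after a global step agrees with applying the rule locally.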
Abstract
Logical relations are widely used to study various properties of typed lambda calculi. By extending them to the lambda calculus with monadic types, we can gain understanding of properties of functional programming languages with computational effects. Among various constructions of logical relations for monads, I will talk about a categorical TT-lifting, which is a semantic analogue of Lindley and Stark's leapfrog method.
After reviewing some fundamental properties of the categorical TT-lifting, we apply it to the problem of relating two monadic semantics of a call-by-value functional programming language with computational effects. This kind of problem has been considered in various forms: for example, the relationship between monadic-style and continuation-passing-style representations of call-by-value programs was studied around the '90s. We give a set of sufficient conditions to solve the problem of relating two monadic semantics affirmatively. These conditions are applicable to a wide range of such problems.
Abstract
All kinds of semantics are syntax directed: the semantics follows from the syntax. Some varieties of semantics are syntax and type directed. In this talk, I'll discuss syntax, type, and analysis directed semantics (analysis-directed semantics for short!), for analyses other than types. An analysis-directed semantics maps from terms coupled with derivations of a static program analysis into some semantic domain. For example, the simply-typed lambda calculus with an effect system maps to the category generated by a strong parametric effect monad (due to Katsumata), and a bounded-linear-logic-like analysis (described as a coeffect system) maps to a category generated by various structures related to monoidal comonads. I'll describe a general technique for building analysis-directed semantics where semantic objects and analysis objects have the same structure and are coupled by lax homomorphisms between them. This aids proving semantic properties: the proof tree of an equality for two program analyses implies the rules needed to prove equality of the programs' denotations.
Abstract
The topic is the unholy trinity of eta, zeta, and xi. I'll indicate how Curry managed to give a finite combinatorial axiomatisation of this nastiness, by anticipating (almost-but-not-quite) McBride et al's applicative functors.
Abstract
Neil will talk about partial progress made during the summer on higher dimensional parametricity and the cubical structures that seem to arise.
Details will be kept to a minimum and, of course, concepts stressed.
Abstract
Conor has offered to talk to us about what he has been thinking about recently. He says this includes models, views, and dependent types.
Abstract
Or: My summer with Steve
Or: How Christine and Frank were right, after all
Or: Inductive types for the price of function extensionality and impredicative Set
Christine Paulin-Mohring and Frank Pfenning suggested using impredicative encodings of inductive types in the Calculus of Constructions, but this was later abandoned, since it is "well-known" that induction principles, i.e. dependent elimination, cannot be derived for this encoding. It now seems that it is possible to give a variation of this encoding for which the induction principle is derivable after all. The trick is to use identity types to cut down the transformations of type (Pi X : Set) . (F(X) -> X) -> X to the ones that are internally strongly dinatural, making use of a formula for a "generalised Yoneda Lemma" by Uustalu and Vene.
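For concreteness, here is the shape of the refinement (my gloss, not notation from the talk; `StrDinat` is schematic for the strong-dinaturality condition carved out with identity types):

```latex
% Plain impredicative (Church-style) encoding; dependent elimination
% is not derivable for it:
\mu F \;:=\; \Pi X{:}\mathrm{Set}.\; (F(X) \to X) \to X

% Refined encoding: restrict to the internally strongly dinatural
% inhabitants, expressed with identity types:
\mu' F \;:=\; \Sigma\, \alpha : \bigl(\Pi X{:}\mathrm{Set}.\; (F(X) \to X) \to X\bigr).\; \mathrm{StrDinat}(\alpha)
```

The extra component rules out the "non-standard" inhabitants that block the derivation of dependent elimination from the plain encoding.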
Abstract
Ohad gave an informal overview of his current draft, with the following abstract:
Haskell incorporates computational effects modularly using sequences of monad transformers, termed monad stacks. The current practice is to find the appropriate stack for a given task using intractable brute force and heuristics. By restricting attention to algebraic stack combinations, we provide a linear-time algorithm for generating all the appropriate monad stacks, or deciding that no such stacks exist. Our approach is based on the algebraic analysis of monad transformers by Hyland, Plotkin, and Power, who propose a graph-theoretical solution to this problem. We extend their analysis with a straightforward connection to the modular decomposition of a graph and to cographs, a.k.a. series-parallel graphs.
We present an accessible and self-contained account of this monad-stack generation problem, and, more generally, of the decomposition of a combined algebraic theory into sums and tensors, and its algorithmic solution. We provide a web-tool implementing this algorithm intended for semantic investigations of effect combinations and for monad stack generation.
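The problem of choosing the right stack can be felt already with two effects: the two orderings of state and failure have different semantics, so a generator of "appropriate" stacks has real work to do. A small sketch using the transformers library (bumpThenFail1 and bumpThenFail2 are illustrative names, not from the draft):

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Maybe (MaybeT (..))
import Control.Monad.Trans.State (State, StateT, modify, runState, runStateT)

-- State over failure: a failure rolls back (discards) the state update.
bumpThenFail1 :: StateT Int Maybe ()
bumpThenFail1 = do
  modify (+ 1)   -- increment the counter
  lift Nothing   -- then fail: the whole run produces Nothing

-- Failure over state: the state update survives the failure.
bumpThenFail2 :: MaybeT (State Int) ()
bumpThenFail2 = do
  lift (modify (+ 1))    -- increment the counter
  MaybeT (pure Nothing)  -- then fail: the counter is still incremented
```

Running both from an initial counter of 0, `runStateT bumpThenFail1 0` is `Nothing`, while `runState (runMaybeT bumpThenFail2) 0` is `(Nothing, 1)`.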
Abstract
This 101 is intended to be a brainstorming session on possible links between the theory of coalgebras and the theory of databases. I will outline some ideas in this direction and I am looking forward to your feedback.
Abstract
In Intuitionistic Multiplicative Linear Logic, the right introduction rule for tensor involves picking a 2-partition of the set of assumptions and using each component to inhabit the corresponding subformula of the tensor. This makes a naive proof search algorithm intractable. Building a notion of resource availability into the context, and massaging the calculus into a more general one handling both resource consumption and the "leftovers" of a subproof, allows for a well-structured, well-typed-by-construction proof search mechanism.
Here is an Agda file implementing the proof search algorithm.
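The leftover idea can be sketched in a few lines of Haskell (a toy fragment, not the Agda development; Formula, prove, provable and splits are illustrative names): instead of guessing a 2-partition of the context up front when proving a tensor, prove the left subformula first and hand each possible leftover context to the right subformula.

```haskell
import Data.List (inits, tails)

-- A toy fragment of IMLL: atoms, tensor, and the unit.
data Formula = Atom String | Tensor Formula Formula | One

-- Proof search with leftovers: each result is a leftover context,
-- i.e. the resources a successful subproof did not consume.
prove :: Formula -> [String] -> [[String]]
prove One          ctx = [ctx]
prove (Atom a)     ctx =
  [ before ++ after | (before, x : after) <- splits ctx, x == a ]
prove (Tensor f g) ctx =
  [ ctx'' | ctx' <- prove f ctx, ctx'' <- prove g ctx' ]

-- All ways of cutting a list into a prefix and a suffix.
splits :: [a] -> [([a], [a])]
splits xs = zip (inits xs) (tails xs)

-- A sequent is provable when some search branch consumes every resource.
provable :: Formula -> [String] -> Bool
provable f ctx = [] `elem` prove f ctx
```

For example, `provable (Tensor (Atom "a") (Atom "b")) ["b","a"]` holds, while `provable (Atom "a") ["a","b"]` does not, since the leftover `["b"]` goes unused.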
Abstract
Guillaume presented parts of Hedges' paper Monad transformers for backtracking search (accepted to MSFP 2014). The paper extends Escardo and Oliva's work on the selection and continuation monads to the corresponding monad transformers, with applications to backtracking search and game theory.
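As background, here is a minimal Haskell sketch of Escardo and Oliva's selection monad, which the paper lifts to a transformer (the names Sel, epsilon and solve3 are illustrative, not from the paper). The monadic product of selection functions already performs a backtracking search.

```haskell
-- A selection function picks an element of a, given a judge (a -> r).
newtype Sel r a = Sel { runSel :: (a -> r) -> a }

instance Functor (Sel r) where
  fmap f (Sel e) = Sel (\p -> f (e (p . f)))

instance Applicative (Sel r) where
  pure x = Sel (const x)
  Sel ef <*> Sel ex = Sel (\p ->
    let f = ef (\g -> p (g (ex (p . g))))
    in  f (ex (p . f)))

instance Monad (Sel r) where
  Sel e >>= f = Sel (\p ->
    let g x = runSel (f x) p
    in  g (e (p . g)))

-- Choice for Bool: pick True if the judge accepts it, else fall back to False.
epsilon :: Sel Bool Bool
epsilon = Sel (\p -> p True)

-- The monadic product of selections searches for a satisfying
-- assignment of a three-variable formula, backtracking as needed.
solve3 :: ([Bool] -> Bool) -> [Bool]
solve3 = runSel (sequence [epsilon, epsilon, epsilon])
```

For instance, `solve3 phi` for `phi [a,b,c] = a && not b && c` returns the unique witness `[True, False, True]`.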
Abstract
Stuart spoke about Lagrange inversion, a species-theoretic approach to the existence of solutions of equations defining species.
Abstract
Neil spoke about how adding structured quotients to containers gives rise to a larger class of data types.
Abstract
Tim gave a brief introduction to Synthetic Differential Geometry. This is an attempt to treat smooth spaces categorically, so that we can extend the categorical methods used in the discrete world of computer science to the continuous world of physics.
Abstract
Conor talked about worlds (aka phases, aka times, ...): why one might bother, and how we might go about equipping type theory with a generic notion of permitted information flow.
Abstract
Miles has kindly agreed to come through and tell us about Operads, thus revisiting the topic of his PhD and the city in which he did it.
Abstract
In 1987, Felleisen showed how to add control operators (for things like exceptions and unconditional jumps) to the untyped lambda-calculus. In 1990, Griffin idly wondered what would happen if one did the same in a typed lambda calculus. The answer: the inhabited types turn out to be the theorems of classical logic.
I will present the lambda-mu calculus, one of the cleanest attempts to add control operators to a type theory. We'll cover the good news: the inhabited types are the tautologies of minimal classical logic, and Gödel's double-negation translation from classical to intuitionistic logic turns into the CPS translation.
And the bad news: control operators don't play well with other types. Add natural numbers (or some other inductive type), and you get inconsistency. Add Sigma-types, and you get degeneracy (any two objects of the same type are definitionally equal). It gets worse: add plus-types, and you break Subject Reduction.
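Griffin's observation can be made concrete in Haskell's continuation monad (not the lambda-mu calculus itself; firstNegativeOr0 is an illustrative name): the type of callCC is exactly Peirce's law, the axiom separating classical from intuitionistic logic, and operationally the captured continuation is an unconditional jump.

```haskell
import Control.Monad.Trans.Cont (Cont, callCC, runCont)

-- Under Curry-Howard, the type of callCC *is* Peirce's law:
-- ((A -> B) -> A) -> A.
peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
peirce = callCC

-- Operationally, invoking the captured continuation aborts the rest of
-- the computation: here the traversal jumps out at the first negative.
firstNegativeOr0 :: [Int] -> Int
firstNegativeOr0 xs =
  runCont
    (callCC (\exit -> do
      mapM_ (\x -> if x < 0 then exit x else pure ()) xs
      pure 0))
    id
```

So `firstNegativeOr0 [3, 4, -5, 6]` jumps out with `-5`, while `firstNegativeOr0 [1, 2]` falls through to `0`.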
Abstract
I chose to go through (parts of) Hatcliff and Danvy's paper "A Generic Account of Continuation-Passing Styles" (POPL 94) which gives a nice factorization of various CPS transforms in terms of: