Abstracts

Monday, December 5th, 2016

9:30 am – 9:55 am

No abstract available.

10:00 am – 10:35 am

We consider compositionality, adequacy, and full abstraction from an algebraic point of view. A denotational semantics d is compositional if the denotation of a phrase is a function of the denotations of its (immediate) parts. Modelling a language as an initial algebra over a finitary signature, compositionality then amounts to the requirement that the image of the denotation function d can be turned into an algebra. Thus d becomes a homomorphism, indeed an epimorphism, and so we can factor the denotation function d through a compositional denotation function h, i.e., an algebra homomorphism.

What can we do if the given denotation function d is not compositional? Say that a compositional denotational semantics h is adequate with respect to the given denotation function if d factors through h. Then is there a best available adequate semantics? In [Hod], Wilfrid Hodges gave a construction, which can be phrased as dividing the syntax out by the “full abstraction” equivalence (actually congruence) relation, where two phrases P and Q are equivalent in this sense if, and only if, they have the same denotation in all language contexts. This is the final adequate semantics, as long as we restrict ourselves to epimorphisms.
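
To make the construction concrete, here is a toy sketch in Python (the language, contexts, and denotation below are all invented for the example, not taken from [Hod]): phrases are identified exactly when they receive the same denotation in every context.

    # Hodges-style quotient on a toy language of arithmetic phrases.
    # The final adequate semantics h sends each phrase to its class.

    phrases = ["0+1", "1", "2-1", "1*1", "2"]

    # One-hole contexts C[-], each mapping a phrase to a closed phrase.
    contexts = [lambda p: p, lambda p: f"({p})+1", lambda p: f"({p})*0"]

    def d(phrase):
        """The given denotation function: here, plain evaluation."""
        return eval(phrase)  # fine for this tiny, trusted example

    def equivalent(p, q):
        """Full-abstraction congruence: equal denotations in every context."""
        return all(d(c(p)) == d(c(q)) for c in contexts)

    classes = []
    for p in phrases:
        for cls in classes:
            if equivalent(p, cls[0]):
                cls.append(p)
                break
        else:
            classes.append([p])

    print(classes)  # [['0+1', '1', '2-1', '1*1'], ['2']]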

In this talk we see that this pattern extends to other cases, but, perhaps surprisingly, not to all. In the first case we look at languages with variable-binding, following ideas of Peter Aczel for the appropriate notion of algebra. In the second, we look at the standard framework for the denotational semantics of programming languages, namely omega-cpos (i.e., partial orders with lubs of increasing omega-chains). In all these cases one can work with any algebra, not just free ones. However, there are cases where the pattern fails: for example, if we allow function symbols with countable arities there may be no final adequate semantics. An open problem emerges: to classify the categories of algebras where the final adequate denotation functions exist. In the case of the category of sets, we see that such T-algebras exist for any finitary monad T; it may even be that a converse holds.

[Hod] Wilfrid Hodges, Formal Features of Compositionality, Journal of Logic, Language and Information, 10(1):7–28, 2001.

11:00 am – 11:35 am

In this talk, I'll discuss how operads, algebras, and especially the maps between them, naturally express a notion of compositionality. I'll give several examples, both from pure math and from concrete applications.

11:40 am – 12:15 pm

Higher category theory is one of the most general approaches to compositionality, with broad and striking applications across computer science, mathematics and physics. We present a new, simple way to define higher categories, in which many important compositional properties emerge as theorems, rather than axioms. Our approach is amenable to computer implementation, and we present a new proof assistant we have developed, with a powerful graphical calculus. In particular, we will outline a substantial new proof we have developed in our setting.

2:00 pm – 2:35 pm

We address a fundamental issue of interfaces that arises in the context of cloud computing; namely, what it means for a replicated and distributed implementation of a data structure to satisfy its standard sequential specification. We describe an approach that liberalizes the linear and totally ordered time regime of linearizability to partial orders. We establish three key properties:

* Expressiveness: we account for a wide range of extant replicated implementations of distributed data structures.
* Composition: we show how to reason about composite data structures in terms of their components.
* Abstraction: we show how a client's view of the distributed data structure can be simplified to reasoning with the sequential specification.

This is joint work with James Riely.
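
For contrast with the liberalized regime, the classical, totally ordered notion can be checked by brute force on small histories; a hypothetical Python sketch (toy counter specification and invented history format) follows.

    # Brute-force linearizability check against a toy counter spec.
    # Illustrates the totally ordered notion that the talk relaxes to
    # partial orders; not code or notation from the talk itself.

    from itertools import permutations

    def sequentially_valid(seq):
        """Replay a candidate linearization against the counter spec."""
        value = 0
        for (op, result, _inv, _res) in seq:
            if op == "inc":
                value += 1
            elif op == "read" and result != value:
                return False
        return True

    def linearizable(history):
        """Is there a total order extending real-time order that meets the spec?"""
        for perm in permutations(history):
            respects_time = all(
                not (b[3] < a[2])   # b must not respond before a is invoked
                for i, a in enumerate(perm) for b in perm[i + 1:])
            if respects_time and sequentially_valid(perm):
                return True
        return False

    # Operations are (op, result, invoke_time, response_time).
    # An inc over [0, 2] overlaps a read over [1, 3] that returned 1:
    history = [("inc", None, 0.0, 2.0), ("read", 1, 1.0, 3.0)]
    print(linearizable(history))  # True: order the read after the inc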

2:40 pm – 3:15 pm

I shall talk about the difficulty of extending session types to be compositional in several contexts: multiparty session types, communicating automata, and Linear Logic. I will also talk about several solutions, some of which are motivated by practical applications of session types.

Tuesday, December 6th, 2016

9:30 am – 10:30 am

To describe systems composed of interacting parts, scientists and engineers draw diagrams of networks: flow charts, Petri nets, electrical circuit diagrams, signal-flow graphs, chemical reaction networks, Feynman diagrams and the like. In principle all these different diagrams fit into a common framework - the mathematics of string diagrams, based on category theory. This has been known for some time. However, the details are more challenging, and ultimately more rewarding, than this basic insight. Two complementary approaches are decorated cospan categories (which are more topological in flavor) and PROPs (which are more algebraic). Playing these off each other, while keeping practical examples in mind, we can start to unearth the mathematical structures implicit in networks.

11:00 am – 11:35 am

Distributional models of natural language use contextual co-occurrences to build vectors for words and treat these as semantic representations of the words. These vectors are used for natural language tasks such as entailment, classification, paraphrasing, indexing, and so on. Extending the word vectors to phrase and sentence vectors has been the Achilles heel of these models. Building raw vectors for larger language units goes against the distributional hypothesis and suffers from data sparsity. Composition is the way forward. In this talk I will show how category theory can help. I will present categorical models of language based on finite-dimensional vector spaces and on Lambek's categorial grammars (residuated monoids, pregroups), joint work with Balkir, Clark, Coecke, Grefenstette, Kartsaklis, and Milajevs. I will also present more recent work, joint with Muskens, on direct vector models of lambda calculus, following the lead of Montague, who applied these to natural language. The latter will enable me to relate this work to our preliminary joint work with Abramsky on sheaf-theoretic models of language.
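
A toy numerical sketch of such a composition (invented vectors and dimensions, using numpy; illustrative of the general pattern rather than of any specific model from the talk):

    import numpy as np

    rng = np.random.default_rng(0)
    N, S = 4, 3                    # noun-space and sentence-space dimensions

    subject = rng.random(N)        # e.g. a vector for "dogs"
    obj = rng.random(N)            # e.g. a vector for "cats"
    verb = rng.random((N, S, N))   # e.g. "chase": pregroup type n^r s n^l

    # The pregroup reduction n (n^r s n^l) n -> s as a tensor contraction:
    sentence = np.einsum("i,isj,j->s", subject, verb, obj)
    print(sentence)                # the composed sentence vector in S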

11:40 am – 12:00 pm

Hypergraph categories are monoidal categories in which every object is equipped with a special commutative Frobenius monoid. Morphisms in a hypergraph category can hence be represented by string diagrams in which strings can branch and split: diagrams that are reminiscent of electrical circuit diagrams. As such they provide a framework for formalising the syntax and semantics of circuit-type diagrammatic languages. In this talk I will introduce decorated corelations as a tool for building hypergraph categories and hypergraph functors, drawing examples from linear algebra and dynamical systems.
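
For reference, in equational form (the standard textbook axioms, not necessarily the talk's notation): a special commutative Frobenius monoid on X is a commutative monoid (μ : X ⊗ X → X, η : I → X) together with a cocommutative comonoid (δ : X → X ⊗ X, ε : X → I) satisfying

    (\mu \otimes \mathrm{id}) \circ (\mathrm{id} \otimes \delta)
      \;=\; \delta \circ \mu \;=\;
    (\mathrm{id} \otimes \mu) \circ (\delta \otimes \mathrm{id})
      \quad\text{(Frobenius)},
    \qquad
    \mu \circ \delta \;=\; \mathrm{id}_X \quad\text{(special)}.

It is exactly these equations that allow strings in the diagrams to branch and merge freely.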

12:05 pm – 12:25 pm

Compact closed categories have been exploited to provide compositional semantics for a variety of fields, including quantum computation and natural language processing. More recently, efforts have been made to apply these techniques to more unusual fields, such as the mathematical modelling of cognitive processes.

When applying these compositional techniques to a new field of investigation, the question often arises as to how to identify a suitable compact closed category capturing the phenomena of interest. The category of relations is a well known compact closed category and the notion of binary relation permits several different axes of variation. In this talk we will show how this freedom has been exploited in recent work to construct compact closed categories with a variety of desirable properties, and the general techniques that can be applied.
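
As a small concrete instance of these ideas (illustrative Python over a three-element set, not code from the talk), one can verify that the diagonal cup and cap make binary relations compact closed by checking the snake equation:

    X = {0, 1, 2}
    identity = {(x, x) for x in X}

    def compose(r, s):
        """Relational composition: first r, then s."""
        return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

    # id ⊗ cup : X -> X ⊗ (X ⊗ X), with cup the diagonal on X.
    id_cup = {(x, (x, (y, y))) for x in X for y in X}

    # Associator X ⊗ (X ⊗ X) -> (X ⊗ X) ⊗ X, a relational bijection.
    assoc = {((a, (b, c)), ((a, b), c)) for a in X for b in X for c in X}

    # cap ⊗ id : (X ⊗ X) ⊗ X -> X, with cap the diagonal on X.
    cap_id = {(((x, x), z), z) for x in X for z in X}

    snake = compose(compose(id_cup, assoc), cap_id)
    print(snake == identity)  # True: the snake equation holds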

2:00 pm – 2:35 pm

When analyzing a networked system from the outside as a "black box", it is generally a difficult problem to make any inferences about the internal structure of the network that the system consists of. I will present some general thoughts on how much one can expect to be able to infer, and outline a potential technique for making such inferences based on hypothetical rewirings of copies of subsystems. So far, the method has been worked out in detail only for the case of Bayesian networks, in https://arxiv.org/abs/1609.00672.

2:40 pm – 3:15 pm

The area of automata learning was pioneered by Angluin in the 70's. Her original algorithm, which applied to regular languages and deterministic automata, has been extended to various types of automata and used in software and hardware verification. In this talk, we will take an abstract perspective on automata learning. We show how the correctness of the original algorithm and of many extensions can be captured in one proof using coalgebraic techniques. We also show that a novel algorithm for nominal automata (which are automata over infinite alphabets) can be derived from the abstract framework. The new algorithm has applications to the verification of cryptographic protocols.

3:20 pm – 3:45 pm

Physical theories as usually formulated consist of a mathematical description and a set of (often implicit) guidelines/‘rules of thumb’ for their application. In the case of the thermodynamics of a single particle in a box, the simple equations for behaviour are generally accepted, but differences in their application lead to extreme disagreements; most notably, whether the Second Law can be violated. We introduce a programming language to describe these simple thermodynamic processes, and give an operational semantics and program logic as a basis for formal reasoning about such simple thermodynamic systems. By finding a computational invariant of the system, we prove that all possible combinations of basic operations must preserve the Second Law. Moreover, the same invariant determines a cost for information erasure: Landauer Erasure becomes a theorem of the formal system. This method of formalising both the mathematical and logical structures of physical processes will have applications to many different areas of physics.
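
The erasure cost in question is presumably the familiar Landauer bound: erasing one bit of information in contact with a heat bath at temperature T requires work

    W_{\mathrm{erase}} \;\ge\; k_B T \ln 2 ,

so deriving Landauer Erasure inside the program logic ties the formal system back to standard thermodynamics.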

Wednesday, December 7th, 2016

9:30 am – 10:30 am

Thermodynamics is one of the most effective paradigms in physics, with applications from the tiny scale of molecular engines to the grand scale of stars and black holes. The reason for such effectiveness can be traced back to the fact that the thermodynamic approach is, in large part, independent of the physical theory describing the system of interest: no matter whether the system is classical or quantum, relativistic or non-relativistic, the concepts of equilibrium, entropy, and free energy can be successfully employed. But why? Recently, different groups have started to investigate which requirements have to be satisfied in order for a given physical theory to support a sensible thermodynamics. Here I will present our proposal, arguing that the foundations of thermodynamics are to be found in the way physical systems compose with one another. Specifically, I will show that most of the familiar properties of equilibrium states and entropy can be retrieved from the requirement that every physical system can be modeled as part of a larger system in a pure state that evolves reversibly through time. I will argue that classical thermodynamics is no exception to this scheme: while classical probability distributions cannot be reduced to classical pure states, the key point is that classical systems can in principle be composed with other (non-classical) systems and that the resulting composite has the desired properties. This observation leads to the conjecture that the physical theories admitting a sensible thermodynamics are those and only those that admit, in principle, a pure and reversible extension.

11:00 am – 11:20 am

Suppose that a convex-operational theory has two properties: tomographic locality (states on composite systems are uniquely characterized by the statistics of local measurements) and transitivity (every two pure states are connected by a reversible transformation). Does it follow that this theory is automatically a subtheory of quantum mechanics? This conjecture was first formulated by Tony Short. In the talk, I give evidence for this conjecture, but also for a surprising way in which it could fail: namely, by going from pairs to triples of systems.
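
For orientation, tomographic locality is commonly recorded as a parameter-counting identity (a standard formulation in the literature on general probabilistic theories; the notation is ours, not the talk's): writing K_A for the number of real parameters needed to specify a state of system A,

    K_{AB} \;=\; K_A \, K_B .

Complex quantum theory satisfies this (K = d^2), whereas quantum theory over the real numbers, with K = d(d+1)/2, does not.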

11:25 am – 11:45 am

Contextuality is one particular phenomenon allowed by quantum theory that has no analogue in classical probability theory. A contextuality scenario (a.k.a. test space) refers to a specification of a collection of measurements which says how many possible outcomes each measurement has and which measurements have which outcomes in common. Given two independent systems, a natural question is how to represent such a situation as a single test space. In other words, how do two contextuality scenarios combine into a joint one? The answer to this was first proposed by Foulis and Randall [1], but does not encompass compositions among more than two systems. In the present work [2] we study how to extend their approach to describe scenarios with any number of independent components. We develop a family of extensions and show that, as long as classical, quantum or general probabilistic models are considered, all such extensions are “observationally equivalent”. How other sets of probabilistic models behave under these different compositions is still an open question.

[1] Foulis, D. J. and Randall, C. H. (1981), Empirical logic and tensor products, in H. Neumann (ed.), Interpretations and Foundations of Quantum Theory, Wissenschaftsverlag, Bibliographisches Institut 5, Mannheim/Wien/Zürich, pp. 9–20.
[2] Antonio Acín, Tobias Fritz, Anthony Leverrier, Ana Belén Sainz, A Combinatorial Approach to Nonlocality and Contextuality, Comm. Math. Phys. 334(2), 533–628 (2015).
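
As background, the bipartite Foulis–Randall product can be phrased as follows (our notation, following the presentation in [2]): given scenarios H_A = (V_A, E_A) and H_B = (V_B, E_B), the product has vertex set V_A × V_B, and its edges are those obtained by measuring one side first and choosing the other side's measurement as a function of the outcome:

    E_{A \to B} \;=\; \Bigl\{\, \bigcup_{a \in e} \{a\} \times f(a)
        \;:\; e \in E_A,\ f : e \to E_B \,\Bigr\},

together with the symmetrically defined E_{B \to A}. It is the extension of this idea to three or more components that admits the family of a priori different products studied in [2].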

Thursday, December 8th, 2016

9:30 am – 10:05 am

Contextuality is a non-classical property of quantum physics that is believed to provide a key resource for quantum computing. The topological approach to this property by Abramsky-Brandenburger 2011 etc. characterizes it as “global inconsistency” coupled with “local consistency”, revealing it to be a phenomenon also found in many other fields and topics, including, notably, logical paradoxes arising from circularity in the reference and composition of sentences. While this approach has yielded logical and algebraic methods of proving the “global inconsistency” part of contextuality, this talk targets the other, “local consistency” part. We characterize it in logical terms by providing a novel system of “local inference” that is sensitive to the topology of contexts and whose notion of consistency is local as opposed to global.

10:10 am – 10:30 am

Contextuality is one of the fundamental features of quantum mechanics that set it apart from classical physical theories, and recent work establishes its rôle as a resource providing a quantum advantage in certain information-processing and computational tasks. We discuss the contextual fraction as a quantitative measure of contextuality, defined for any empirical model (i.e., a table indicating the probabilities of outcomes of measurements in a given experimental scenario). The contextual fraction bears a precise relationship to maximum violations of generalised Bell inequalities, and it is shown to be monotone with respect to a range of operations which can be thought of as forming the “free” operations in a resource theory of contextuality. It is also closely related to quantifiable advantages in certain tasks, such as cooperative games and a certain form of measurement-based quantum computation.
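
Concretely, the definition can be phrased via convex decompositions (this is the standard linear-programming formulation of the measure): the noncontextual fraction of an empirical model e is the largest weight a noncontextual part can carry, and the contextual fraction is its complement,

    \mathrm{NCF}(e) \;=\; \max \bigl\{\, \lambda \in [0,1] \;:\;
        e = \lambda\, e^{\mathrm{NC}} + (1-\lambda)\, e' \,\bigr\},
    \qquad
    \mathrm{CF}(e) \;=\; 1 - \mathrm{NCF}(e),

where e^{NC} ranges over noncontextual empirical models and e' over arbitrary ones.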

11:00 am – 11:20 am

Compositional reasoning about (possibly large, complex) distributed systems requires tools for reasoning about the flow of information between their components. I describe some first ideas for modelling and reasoning about interfaces between system models, illustrating with examples from security policy modelling.

11:25 am – 11:45 am

We argue that compositionality is important when building meaningful methodologies and tools for cybersecurity. There are many cybersecurity mitigations, but how do they compose? For example, how do access controls and cryptography compose against a threat like data theft? Some cybersecurity controls may compose independently, others cooperatively; in other cases "the winner takes it all", etc. This compositionality presents challenges to linearity and so to performance. We will talk about recent work on the mathematics of this compositionality and its role in developing decision support systems for cybersecurity that are capable of efficiently analyzing billions of possible strategic behaviours.

11:50 am – 12:10 pm

According to Frege’s principle of compositionality, the meaning of a complex expression is determined by the meanings of its constituent expressions and the rules used to combine them. This talk will try to show how this principle acquires new, unexpected features in the context of complex systems – which are composed of many non-identical elements, entangled in loops of nonlinear interactions – due to the characteristic 'emergence' effects. Indeed, representing expression composition as a path algebra construct in data space (the space on which composition rules operate) and embedding the latter in a simplicial complex – a geometric and at the same time combinatorial notion coming from algebraic topology – provides us with a mathematical structure that naturally allows for a finer classification of compositional rules into equivalence classes not identifiable otherwise.

2:00 pm – 2:35 pm

Linearisability is a central notion for verifying concurrent libraries: a given library is proven safe if its operational history can be rearranged into a new sequential one which, in addition, satisfies a given specification. Linearisability has been examined for libraries in which method arguments and method results are of ground type, including libraries parameterised with such methods. In this talk we extend linearisability to the general higher-order setting: methods can be passed as arguments and returned as values. A library may also depend on abstract methods of any order. This generalised notion, along with its encapsulated variant, can be shown to be sound using techniques from operational game semantics.

This is joint work with Andrzej Murawski.

2:40 pm – 3:15 pm

Probabilistic Coherent Spaces form a fully abstract model of higher-order probabilistic computation, interpreting types as convex sets and programs as analytic functions. After comparing this quantitative semantics with the well-known probabilistic power domain approach, we will see why our call-by-name approach prevents us from encoding some probabilistic algorithms. Then, we will focus on a probabilistic extension of Call-By-Push-Value that is well-suited for encoding probabilistic algorithms and that also enjoys full abstraction.

3:50 pm – 4:10 pm

In 2010, Luis Caires and Frank Pfenning developed a propositions-as-types correspondence for concurrent computation, along the lines of the standard Curry-Howard correspondence between simply typed lambda-calculus and intuitionistic logic. Their correspondence connects a form of linear logic and the pi-calculus equipped with session types (type-theoretic formulations of communication protocols). As well as guaranteeing correct communication in the sense that messages follow the specified protocol, the type system guarantees deadlock-free execution of concurrent processes by imposing a rather severe prohibition on cyclic interconnections. Earlier work by Naoki Kobayashi developed a more liberal type system for deadlock-freedom in the pi-calculus, which allows the formation of cyclic interconnections if there is no actual cyclic dependency between communication operations. Kobayashi's system was developed from computational intuitions, but it is natural to wonder whether it can be given a logical foundation. In this talk we explore a reformulation of Kobayashi's deadlock-freedom proof as a cut-elimination result, and assess its consequences on the computational and logical sides of the Curry-Howard correspondence.
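
A minimal illustration of the gap, in pi-calculus notation (our example, not one from the talk; x() is input on x, \overline{x} output on x):

    P \;=\; x().\,\overline{y}, \quad Q \;=\; y().\,\overline{x}
        \;\;\Longrightarrow\;\; P \mid Q \ \text{deadlocks;}
    P' \;=\; \overline{x}.\,y(), \quad Q' \;=\; x().\,\overline{y}
        \;\;\Longrightarrow\;\; P' \mid Q' \ \text{runs to completion.}

Both compositions connect two processes on two channels, so the Caires-Pfenning cut rule, which composes processes along exactly one channel, excludes both; Kobayashi-style systems track an order on communication actions and so accept the second while rejecting the first.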

4:15 pm – 4:35 pm

Pebble games (based on Ehrenfeucht-Fraïssé games) are an important tool for analyzing the expressive power of logics. They are also related to efficient algorithms for approximating homomorphism and isomorphism on finite structures. In this short talk, I look at some questions arising from composing strategies in pebble games and some algorithmic consequences that may follow.

4:40 pm – 5:00 pm

The ZX-calculus, introduced by Coecke and Duncan, is a powerful diagrammatic language for quantum mechanics and quantum information processing. We will present a family of equations — including the so-called supplementarity and its generalizations — which cannot be derived in the ZX-calculus. Adding the generalized supplementarity to the axioms of the ZX-calculus is a step towards a complete and universal diagrammatic language for quantum mechanics.

Friday, December 9th, 2016

9:30 am – 10:05 am

Schema mappings are high-level specifications that describe the relationship between two database schemas. As such, they constitute the essential building blocks in formalizing the main data inter-operability tasks, including data exchange and data integration. Several different operators on schema mappings have been introduced and studied in considerable depth. Among these, the composition operator is arguably the most fundamental and prominent one. The aim of this talk is to present an overview of results about the composition of schema mappings with emphasis on the interplay between first-order logic and second-order logic in expressing the composition.
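
The interplay can be seen in the canonical example from this line of work (reconstructed here from memory, so take the details as indicative): two mappings given by first-order source-to-target dependencies whose composition is expressible only with a second-order dependency quantifying over functions.

    % M12, from schema {Emp} to {Mgr_1}:
    \Sigma_{12} : \forall e\, (\mathrm{Emp}(e) \rightarrow \exists m\, \mathrm{Mgr}_1(e,m))

    % M23, from {Mgr_1} to {Mgr, SelfMgr}:
    \Sigma_{23} : \forall e \forall m\, (\mathrm{Mgr}_1(e,m) \rightarrow \mathrm{Mgr}(e,m)), \qquad
                  \forall e\, (\mathrm{Mgr}_1(e,e) \rightarrow \mathrm{SelfMgr}(e))

    % The composition is not FO-expressible, but is captured by the SO tgd:
    \exists f\, \bigl[ \forall e\, (\mathrm{Emp}(e) \rightarrow \mathrm{Mgr}(e, f(e)))
        \;\wedge\; \forall e\, (\mathrm{Emp}(e) \wedge e = f(e) \rightarrow \mathrm{SelfMgr}(e)) \bigr]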

10:10 am – 10:45 am

No abstract available.

11:20 am – 11:55 am

Quipper is a high-level functional programming language for quantum computing that was designed and implemented at Dalhousie University and the University of Pennsylvania a few years ago. The language is practical and scalable: seven non-trivial algorithms from the quantum computing literature have already been implemented in it. But Quipper, which is implemented as an embedded language, lacks theoretical foundations and a model. In this talk, I will present a Quipper-like language with a solid theoretical foundation and a compositional denotational semantics. This is joint work with Francisco Rios.

12:00 pm – 12:35 pm

Recurrence relations have been of interest since ancient times. Perhaps the most famous is the Fibonacci numbers, where each additional term in the sequence is obtained as the sum of the previous two. I will show how we can use a graphical language of string diagrams, a “graphical linear algebra”, to reason about recurrence relations and, as a bonus, obtain efficient implementations. The application amounts to a compositional, string-diagrammatic treatment of signal flow graphs, a model of computation originally studied by Claude Shannon in the 1940s.
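
To gesture at where the efficient implementations can come from (ordinary code rather than the talk's diagrammatic derivation; illustrative only): a linear recurrence such as Fibonacci is computable in O(log n) arithmetic operations by repeatedly squaring its companion matrix.

    # n-th Fibonacci number via fast exponentiation of the companion matrix
    # [[1, 1], [1, 0]]; its n-th power is [[F(n+1), F(n)], [F(n), F(n-1)]].

    def mat_mul(a, b):
        """Multiply two 2x2 matrices."""
        return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
                [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

    def fib(n):
        """Compute F(n) in O(log n) matrix multiplications."""
        result = [[1, 0], [0, 1]]      # identity matrix
        base = [[1, 1], [1, 0]]        # companion matrix of F(n) = F(n-1) + F(n-2)
        while n > 0:
            if n & 1:
                result = mat_mul(result, base)
            base = mat_mul(base, base)
            n >>= 1
        return result[0][1]

    print([fib(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]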

TBD
3:00 pm – 3:40 pm

No abstract available.