Wednesday 22nd March 2017, noon - 5:30pm.
Room 1.33b, School of Computer Science, Jack Cole Building, University of St Andrews, KY16 9SX (map, directions).
| Time | Speaker | Affiliation | Title |
|-------|---------|-------------|-------|
| 13:00 | David Castro | University of St Andrews | Automatically Deriving Cost Models for Structured Parallel Processes Using Hylomorphisms |
| 13:30 | Natalia Chechina | University of Glasgow | Let it Crash and Scale |
| 14:30 | Chris Cummins | University of Edinburgh | Using deep learning to generate human-like code |
| 15:00 | Bob Atkey | University of Strathclyde | Compiling Parallel Functional Code with Data Parallel Idealised Algol |
| 15:30 | Daniel Hillerström | University of Edinburgh | An Abstract Machine Semantics for Handlers |
| 16:30 | Pepijn Kokke | University of Edinburgh | Give or Take: Non-Determinism, Linear Logic and Session Types |
| 17:00 | Matúš Tejiščák | University of St Andrews | Extending Dependent Types with Erasure |
Structured parallelism using nested algorithmic skeletons can greatly
ease the task of writing parallel software, since common, but
hard-to-debug problems such as race conditions are eliminated by
design. However, choosing the right combination of algorithmic
skeletons to yield good parallel speedups for a specific program on a
specific parallel architecture is still a difficult problem.
This work uses the notion of hylomorphisms, a general recursion pattern, to make it possible to reason about both functional correctness properties and extra-functional timing properties of structured parallel programs. We have previously used hylomorphisms to provide a denotational semantics for skeletons, and proved that a given parallel structure for a program satisfies functional correctness. This paper expands on this theme, providing a simple operational semantics for algorithmic skeletons, and a cost semantics that can be automatically derived from that operational semantics. We prove both semantics sound with respect to the previous denotational semantics. This means that we can now automatically and statically choose the provably optimal parallel structure for a given program with respect to a parallel architecture or a class of architectures. We are also able to predict speedups using an automatic amortised analysis.
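The recursion pattern at the heart of this abstract can be pictured concretely. The following is an illustrative Python sketch, not the paper's formalism: the coalgebra unfolds a seed one layer at a time, the algebra folds each layer back down, and factorial falls out as one instance.

```python
# A minimal sketch of a hylomorphism: unfold with a coalgebra, then
# fold with an algebra. The "functor" is represented concretely here:
# the coalgebra returns None (empty layer) or a (value, next_seed) pair.

def hylo(alg, coalg, seed):
    """alg folds one layer; coalg unfolds one layer from a seed."""
    layer = coalg(seed)
    if layer is None:            # base case: empty structure
        return alg(None)
    value, next_seed = layer     # recursive case: one element plus the rest
    return alg((value, hylo(alg, coalg, next_seed)))

# Example: factorial as a hylomorphism (unfold n, n-1, ..., 1; fold by *).
def coalg(n):
    return None if n == 0 else (n, n - 1)

def alg(layer):
    return 1 if layer is None else layer[0] * layer[1]

print(hylo(alg, coalg, 5))  # 120
```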
This talk will give an overview of the work we do at Glasgow on scaling distributed systems using the Erlang programming language. It covers three projects: scaling distributed Erlang itself (SD Erlang), comparing Erlang's performance with that of other languages (Go, and Scala with Akka), and scaling a distributed pack of robots (ROSIE).
Machine learning is an effective method for improving compiler
heuristics, but there is a shortage of benchmarks. Typical machine
learning experiments outside of the compilation field train over
thousands or millions of examples. In machine learning for compilers,
however, there are typically only a few dozen common benchmarks
available. What is needed is a way to generate an unbounded number of
programs which finely cover the program space.
I will talk about a new deep-learning-based approach to generating source code. By mining millions of lines of code from GitHub, we automatically construct models of how humans write programs. We use these models to generate thousands of runnable benchmarks, showing that they lead to improvements in machine learning for compilers. Ultimately, this is the story of how we solved a problem with machine learning using machine learning.
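As a toy illustration of "modelling how humans write programs", here is a character bigram model rather than the deep networks from the talk; the corpus and all names are invented for this sketch.

```python
# Toy illustration (NOT the deep-learning model from the talk): learn
# character-to-character transitions from a tiny "corpus" of code, then
# sample from them to generate new text in the corpus's style.
import random
from collections import defaultdict

corpus = "int add(int a, int b) { return a + b; }"

# "Training": record which characters follow each character.
model = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    model[a].append(b)

# "Generation": start from 'i' and repeatedly sample a successor.
random.seed(0)
out = ["i"]
for _ in range(20):
    successors = model.get(out[-1])
    if not successors:           # no recorded successor: stop
        break
    out.append(random.choice(successors))
print("".join(out))
```

A real model generalises across millions of programs; this one can only recombine its single training string, which is exactly the limitation deep models overcome.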
Graphics Processing Units (GPUs) and other parallel devices are widely
available and have the potential for accelerating a wide class of
algorithms. However, expert programming skills are required to achieve
maximum performance. These devices expose low-level hardware details
through imperative programming interfaces which inevitably results in
non-performance-portable programs highly tuned for a specific device.
Functional programming models have recently seen a renaissance in the systems community as they offer a solution to tackle the performance portability challenge. Recent work has shown that a functional representation can be effectively used to automatically derive and encode low-level hardware-specific parallelism strategies resulting in high performance across a range of target architectures. However, the translation of the functional representation to the actual imperative program expected by the hardware interface is typically ad hoc with no correctness guarantees.
We present a formalised strategy-preserving translation from high level functional code to low level data race free parallel imperative code. This translation is formulated and proved correct within a language we call Data Parallel Idealised Algol (DPIA), a dialect of Reynolds' Idealised Algol. Performance results on GPUs and a multicore CPU show that the formalised translation process leads to performance on a par with ad hoc approaches.
This is joint work with Michel Steuwer, Sam Lindley and Christophe Dubach.
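The gap this abstract describes can be pictured as the distance between a compositional functional pipeline and the explicit imperative loop a hardware interface expects. The following Python sketch is purely illustrative (it is not DPIA syntax, and all names are invented for the example).

```python
# Illustrative only: the same dot product written as a strategy-agnostic
# functional pipeline and as the explicit imperative loop a translation
# might emit for a device interface.
from functools import reduce

def high_level_dot(xs, ys):
    # Compositional form: map (pairwise multiply) then reduce (sum).
    return reduce(lambda a, b: a + b,
                  map(lambda p: p[0] * p[1], zip(xs, ys)),
                  0.0)

def low_level_dot(xs, ys):
    # Imperative form: explicit indexing and an accumulator.
    acc = 0.0
    for i in range(len(xs)):
        acc += xs[i] * ys[i]
    return acc

print(high_level_dot([1.0, 2.0], [3.0, 4.0]))  # 11.0
print(low_level_dot([1.0, 2.0], [3.0, 4.0]))   # 11.0
```

The point of the formalised translation is that the two forms are provably equivalent by construction, rather than by ad hoc testing as in this sketch.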
Handlers for algebraic effects provide a modular alternative to monads as a basis for effectful programming. They support user-defined effects, as in Haskell, in conjunction with direct-style effectful programming, as in ML. Recently, we have retrofitted the functional programming language Links, a single-source language for multi-tier web programming, with algebraic effects and handlers by leveraging the existing infrastructure. In this talk I will focus on the server-side implementation, which is based on a novel generalisation of the CEK machine. A notable feature of our CEK machine is that it supports both deep and shallow handlers simultaneously, allowing programmers to mix the two kinds of handlers seamlessly in the source language. I shall present the abstract machine semantics and show the minimal delta required for supporting handlers. (Joint work with Sam Lindley)
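A minimal sketch of the idea of a deep handler, rendered with Python generators rather than the Links/CEK implementation from the talk: the computation yields operations, and the handler stays installed, interpreting every operation until the computation returns.

```python
# Illustrative sketch (not Links): an effectful computation as a
# generator that yields operations, and a *deep* handler for a state
# effect that interprets get/put for the whole computation.

def comp():
    x = yield ("get",)       # request the current state
    yield ("put", x + 1)     # update the state
    y = yield ("get",)       # read it back
    return y

def run_state(gen, state):
    """Deep handler: remains installed until the computation finishes."""
    try:
        op = next(gen)
        while True:
            if op[0] == "get":
                op = gen.send(state)       # resume with the state
            elif op[0] == "put":
                state = op[1]              # update, resume with nothing
                op = gen.send(None)
    except StopIteration as stop:
        return stop.value, state           # final result and final state

print(run_state(comp(), 41))  # (42, 42)
```

A shallow handler, by contrast, would interpret only the first operation and then hand the rest of the computation to another (possibly different) handler; the talk's machine supports both styles at once.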
The π-calculus is great for modeling concurrent systems. However, as a basis for a programming language it has all sorts of pitfalls — e.g. non-termination, deadlocks. The system CP, by Phil Wadler (2012), gets rid of many of these pitfalls. Through a correspondence with linear logic, it guarantees, amongst other things, termination. However, it also excludes non-determinism, and with that, race conditions. This makes it a less than ideal tool for modeling concurrent systems. In this talk, I will introduce Nodcap, a variant of CP which allows non-deterministic programs while still keeping the ties with logic and with that the guarantees about session fidelity and termination.
In dependently typed languages, we often express algorithms in ways more
amenable to reasoning -- we program with views, add indices to type families.
However, this means that our programs compute with more data -- views, proofs,
indices -- and they may end up asymptotically slower if these extra structures
are too big.
In this talk, I will show a dependently typed calculus that has erasure built into its type system. This allows us to infer erasure annotations from a program, check consistency of these annotations, and then erase everything marked as runtime-irrelevant before execution, thus recovering the intended run-time behaviour. Experience with Idris indicates that this is important for practical programming with dependent types.
Compared to last year's talk, this one describes a different, type-based approach, which enables features such as erasure from higher-order functions, dependent erasure, and various forms of erasure polymorphism.
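The overhead the abstract describes can be made concrete with a small sketch, written here in Python rather than Idris and with invented names: a vector that carries its length as a unary index doubles the runtime structure, and an erasure pass keeps only the elements.

```python
# Illustrative sketch (not Idris): a "length-indexed" vector whose
# index is a unary, Peano-style structure. The index is as large as the
# data itself, so keeping it at run time costs extra space and time;
# erasure discards it, leaving only the runtime-relevant elements.

ZERO = ("Z",)

def succ(n):
    return ("S", n)

def indexed_vec(elems):
    """Pair the elements with their length as a unary index."""
    idx = ZERO
    for _ in elems:
        idx = succ(idx)
    return (idx, list(elems))

def erase(vec):
    """What an erasure pass keeps: only the elements."""
    _idx, elems = vec
    return elems

v = indexed_vec([10, 20, 30])
print(erase(v))  # [10, 20, 30]
```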
SPLS receives financial support from the Theory, Modelling and Computation theme of SICSA, the Scottish Informatics and Computer Science Alliance.