Department of Computer Science
Distinguished Lecture Series
Paul H. J. Kelly
Imperial College London
Compiler technology for solving PDEs with performance portability
We have become used to the idea that higher-level languages, supporting a higher level of abstraction, come with a price in performance. In contrast, we should expect that the more information the compiler has about the structure and properties of our code, the more scope it has for optimisation. This talk is about our experience in the Firedrake project of trying to make good on this idea. We are building software tools for solving PDEs on unstructured meshes, mainly using the finite element method. We support a concise, high-level programming model based on the FEniCS Project's Unified Form Language, while mapping onto a high-performance implementation entirely automatically via an intermediate representation for loops over the mesh, called PyOP2. Our compiler is built around three layers of domain-specific program representation, each supporting different optimisations. The resulting software tools, implemented in Python, achieve higher performance than established C++ and Fortran codes.
Professor Kelly is a world leader in the creation of compiler technology to support the development of simulation software for the most powerful parallel supercomputers.
Paul H. J. Kelly graduated in Computer Science from University College London in 1983, and moved to Westfield College, University of London, for his PhD. He came to Imperial College London in 1986, working on fault-tolerant wafer-scale multicore architectures and parallel functional programming. He was appointed as Lecturer in 1989, and became Professor of Software Technology at Imperial College London in 2009.
Professor Kelly leads Imperial's Software Performance Optimisation research group, and he is also co-Director of Imperial's Centre for Computational Methods in Science and Engineering.
His research contributions span single-address-space operating systems, scalable large shared-memory architectures, compilers (bounds checking and pointer analysis), graph algorithms, performance profiling, and custom floating-point arithmetic.
The main current focus is on engaging with applications specialists to develop software tools for multicore architectures, overcoming the limitations of conventional compilers through "active libraries" that exploit properties of a particular application domain to achieve high performance while maintaining a clean, abstract program structure.
His current work centres on compiler technology. Much of it aims to push the frontiers of compiler research by moving up the "food chain": exploiting properties and opportunities specific to particular classes of application. This has led him to engage deeply with collaborators in finite element methods and computer vision. He is a major contributor to the FEniCS Project (http://fenicsproject.org).
Host: Ridgway Scott
Argonne National Laboratory
The Chicago Center for the Theory of Computing and Allied Areas