Machine learning (ML) and artificial intelligence (AI) methodologies are now permeating many parts of science, technology, health and society. At their core, these methods rely on highly complex, high-dimensional mathematical and statistical models. Unfortunately, these models are generally hard to interpret, and their internal decision-making mechanisms are not transparent. The objective of the StatXAI project is to develop statistical tools to understand and help create explainable ML/AI methodologies and models.
Research in StatXAI will focus on four lines of work: i) to investigate advanced nonparametric and algorithmic models such as neural networks and ensemble approaches; ii) to explore diverse strategies for explainable ML/AI using statistical approaches such as LIME/Anchors [1,2], layer-wise relevance propagation (LRP) [3], and explainable embeddings [4]; iii) to develop corresponding effective algorithms and to implement them in open-source software (R and Python); and iv) to deploy and test explainable models in biological and medical settings.
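To give a flavour of strategy ii), the sketch below illustrates the core idea behind LIME-style local surrogate explanations: a single prediction of a nonlinear black-box model is explained by fitting a weighted linear model to perturbed samples around the instance of interest. This is a minimal, hypothetical illustration, not project code; the toy model, kernel width, and perturbation scale are all assumptions chosen for the example.

```python
# Hypothetical sketch of a LIME-style local surrogate explanation.
# The black-box function, kernel width and perturbation scale are
# illustrative choices, not part of the StatXAI project itself.
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # A nonlinear "model" standing in for a neural network or ensemble.
    return np.sin(X[:, 0]) + X[:, 1] ** 2

x0 = np.array([1.0, 2.0])  # the instance whose prediction we explain

# 1) Perturb the instance locally.
Z = x0 + 0.1 * rng.standard_normal((500, 2))
y = black_box(Z)

# 2) Weight perturbations by proximity to x0 (Gaussian kernel).
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.1 ** 2)

# 3) Fit a weighted linear surrogate; its coefficients are the explanation.
A = np.hstack([np.ones((len(Z), 1)), Z])  # intercept + two features
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)

# The surrogate coefficients approximate the local effect of each
# feature near x0 (analytically cos(1) and 2 * 2 for this toy model).
print("surrogate coefficients:", coef[1:])
```

The same idea underlies the LIME reference [1]: the explanation is local (valid only near x0) and model-agnostic, since only black-box predictions are used.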
The University of Manchester is a partner university of the Alan Turing Institute (ATI, the UK national institute for data science and artificial intelligence). PhD students will have the opportunity to interact and engage with the ATI.
[1] Ribeiro et al. 2016. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. https://arxiv.org/abs/1602.04938
[2] Ribeiro et al. 2018. Anchors: High-Precision Model-Agnostic Explanations. https://homes.cs.washington.edu/~marcotcr/aaai18.pdf
[3] Montavon et al. 2018. Methods for Interpreting and Understanding Deep Neural Networks. Digital Signal Processing, 73:1-15.
[4] Qi and Li. 2017. Learning Explainable Embeddings for Deep Networks.
Interest in modern machine learning methods and computational statistics, knowledge of multivariate statistics, and experience in programming in R and Python. Note that the focus of the PhD project lies on methods and algorithms rather than on pure theory.