Publications and Talks
Publications
* denotes equal contribution
Conferences
Speedy Performance Estimation for Neural Architecture Search. Robin Ru*, Clare Lyle*, Lisa Schut, Miroslav Fil, Mark van der Wilk, Yarin Gal. NeurIPS 2021 Spotlight.
Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties. Lisa Schut*, Oscar Key*, Rory McGrath, Luca Costabello, Bogdan Sacaleanu, Medb Corcoran, Yarin Gal. AISTATS 2021.
A Bayesian Perspective on Training Speed and Model Selection. Clare Lyle, Lisa Schut, Robin Ru, Yarin Gal, Mark van der Wilk. NeurIPS 2020.
[Conference Proceedings] [arXiv Paper] [Clare's Blogpost]
Workshops
Deep Ensemble Uncertainty Fails as Network Width Increases: Why, and How to Fix It. Lisa Schut, Edward Hu, Greg Yang, Yarin Gal. ICML Workshop on Uncertainty in Deep Learning (UDL) 2021. [Workshop Paper]
Uncertainty-Aware Counterfactual Explanations for Medical Diagnosis. Lisa Schut*, Oscar Key*, Rory McGrath, Luca Costabello, Bogdan Sacaleanu, Medb Corcoran, Yarin Gal. NeurIPS Workshop on Machine Learning for Health 2020.
Capsule Networks -- A Probabilistic Perspective. Lewis Smith, Lisa Schut, Yarin Gal, Mark van der Wilk.
Spotlight paper at the ICML Workshop on Object-Oriented Learning: Perception, Representation, and Reasoning, 2020.
[Workshop Paper] [Full paper] [Lewis' Blogpost]
Influence of Outliers on Cluster Correspondence Analysis. Michel van de Velden, Alfonso Iodice D'Enza, Lisa Schut. CLADAG 2019, Cassino, Italy.
Student Publications
Below are workshop papers and publications by MSc students I have (co-)supervised:
DeDUCE: Generating Counterfactual Explanations at Scale. Benedikt Höltgen, Lisa Schut, Jan M. Brauner, Yarin Gal. NeurIPS XAI4Debugging Workshop 2021.
Can Network Flatness Explain the Training Speed-Generalisation Connection? Albert Qiaochu Jiang, Clare Lyle, Lisa Schut, Yarin Gal. NeurIPS BDL Workshop 2021.
Invited Talks
Counterfactual Explanations: Making AI Decisions More Useful and Trustworthy. Accenture Turing Innovation Symposium 2020
Reviewing Experience
Conferences
Workshops
ICML Workshop in Human Interpretability 2020
NeurIPS Algorithmic Fairness through the Lens of Causality and Interpretability Workshop 2020
I generated the images on this website using artbreeder.com, which is built on BigGAN.