Publications and Talks

Publications

* denotes equal contribution


Conferences


Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties. Lisa Schut*, Oscar Key*, Rory McGrath, Luca Costabello, Bogdan Sacaleanu, Medb Corcoran, Yarin Gal. AISTATS 2021.

[Paper] [Code]


A Bayesian Perspective on Training Speed and Model Selection. Clare Lyle, Lisa Schut, Robin Ru, Yarin Gal, Mark van der Wilk. NeurIPS 2020.

[Conference proceedings] [arXiv Paper] [Clare's Blogpost]


Workshops


Deep Ensemble Uncertainty Fails as Network Width Increases: Why, and How to Fix It. Lisa Schut, Edward Hu, Greg Yang, Yarin Gal. ICML Workshop on Uncertainty in Deep Learning (UDL) 2021. [Workshop Paper]


Uncertainty-Aware Counterfactual Explanations for Medical Diagnosis. Lisa Schut*, Oscar Key*, Rory McGrath, Luca Costabello, Bogdan Sacaleanu, Medb Corcoran, Yarin Gal. NeurIPS Workshop on Machine Learning for Health 2020.

[Poster Link]


Capsule Networks -- A Probabilistic Perspective. Lewis Smith, Lisa Schut, Yarin Gal, Mark van der Wilk. Spotlight paper at the ICML Workshop on Object-Oriented Learning: Perception, Representation, and Reasoning, 2020.

[Workshop Paper] [Full paper] [Lewis' Blogpost]


Influence of Outliers on Cluster Correspondence Analysis. Michel van de Velden, Alfonso Iodice D'Enza, Lisa Schut. CLADAG 2019, Cassino, Italy.

[Conference proceedings]


Pre-prints


Revisiting the Train Loss: An Efficient Performance Estimator for Neural Architecture Search. Robin Ru*, Clare Lyle*, Lisa Schut, Mark van der Wilk, Yarin Gal. 2020.

[arXiv pre-print]


Invited Talks


  • Counterfactual Explanations: Making AI Decisions More Useful and Trustworthy. Accenture Turing Innovation Symposium 2020.

Reviewing Experience


Conferences


Workshops


I generated the images on this website using artbreeder.com, which uses BigGAN.