Project ideas for Part II / Part III / MPhil ACS

Machine Learning Explainability in Integrative Cancer Medicine – Part II (co-supervised with Marwa Mahmoud)

Explainability is becoming an inevitable part of machine-learning-enabled systems, particularly in light of the EU’s General Data Protection Regulation (GDPR). The need for explanation is even greater in safety-critical domains such as healthcare. The aim of this project is to add an explainability module to VIIDA, a visual integrative interface for data analysis recently developed by the CancerAI team in the lab as part of MFICM. The module will bring together explainability methods for feature importance [1], example-based explanation [2] and model extraction [3] in a plug-in to VIIDA; a minimal sketch of a feature-importance call is given after the reference list below. User studies will be conducted to evaluate the usability and extensibility of the module. This is a very exciting opportunity for students who want to work at the intersection of ML and HCI in the health domain.

The ideal candidate for this project will have strong Python programming skills as well as experience with API and web-based development. Experience with explainable AI is a great advantage.

[1] from libraries such as lime, SHAP, and iNNvestigate
[2] from the following repo: https://github.com/Timothy-Ye/example-based-explanation
[3] from the following repo: https://github.com/mateoespinosa/rem
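To give a flavour of what the feature-importance part of the plug-in might wrap, here is a minimal sketch using SHAP [1] with a scikit-learn model. The wrapper name, dataset and model are illustrative assumptions, not part of VIIDA; the actual module would sit behind the VIIDA API.

```python
# Minimal sketch (assumes the shap and scikit-learn packages): wrapping a
# feature-importance backend behind a single function that a VIIDA plug-in
# could call. The function name, dataset and model are illustrative only.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier


def explain_feature_importance(model, X_background, X_explain):
    """Return per-feature SHAP attributions for each row of X_explain."""
    # shap.Explainer dispatches to a suitable algorithm (e.g. TreeExplainer
    # for tree ensembles) given the model and background data.
    explainer = shap.Explainer(model, X_background)
    return explainer(X_explain)  # shap.Explanation with .values per feature


if __name__ == "__main__":
    data = load_breast_cancer()
    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(data.data, data.target)

    explanation = explain_feature_importance(model, data.data[:100], data.data[:20])
    print(explanation.values.shape)  # attributions for 20 samples
```

A similar thin wrapper could be written around the example-based [2] and model-extraction [3] repositories so that all three explanation types are exposed to the interface through one consistent entry point.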

Concept Learning for Human-Like Explanation in Machine Learning – Part III/MPhil

The majority of explainability methods focus on providing explanations at the input-feature level (i.e. they reveal the most important features for making a prediction). Recently there has been a shift towards concept-based explanation, where concepts refer to high-level intermediate representations of the input that are relevant to the downstream task and also meaningful to humans. As a result, in tasks such as image classification we can explain the behaviour of a classifier in terms of groups of pixels that signify a semantically meaningful entity (e.g. beak, feather, leg) rather than individual pixels that are, on their own, not very informative.
Concept-based explanation methods, however, mainly focus on preserving properties of the concepts with respect to the output (i.e. that the intermediate representation predicts the classification labels well when used instead of the input features). The aim of this project is to develop methods that also satisfy certain properties with respect to the input (e.g. that a concept captures only the part of the input that is necessary for that concept and nothing more). This is a great opportunity for students who want to work at the forefront of explainability in ML, which is becoming increasingly important for the full deployment of ML in society, in particular in light of the EU’s General Data Protection Regulation (GDPR). A few core papers relevant to this project are listed below.
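To make the "output-side" property concrete, the sketch below shows a minimal concept bottleneck architecture in the spirit of [1], where the label is predicted only from the predicted concepts. The class name, layer sizes and loss weighting are illustrative assumptions, not the method of any of the papers below; the project would go beyond this by adding constraints on how concepts relate to the input.

```python
# Minimal concept-bottleneck sketch (assumes PyTorch): x -> concepts -> label.
# Architecture, names and hyperparameters are illustrative only.
import torch
import torch.nn as nn


class ConceptBottleneckModel(nn.Module):
    """Predicts concepts from the input, then the label from the concepts only."""

    def __init__(self, n_features, n_concepts, n_classes):
        super().__init__()
        self.concept_net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_concepts))
        self.label_net = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        concept_logits = self.concept_net(x)
        # The bottleneck is the only path from input to output: labels are
        # predicted from the (sigmoid-activated) concept predictions alone.
        label_logits = self.label_net(torch.sigmoid(concept_logits))
        return concept_logits, label_logits


def joint_loss(concept_logits, label_logits, c_true, y_true, lam=1.0):
    # Joint training objective: label loss plus lam * concept loss (cf. [1]).
    return (nn.functional.cross_entropy(label_logits, y_true)
            + lam * nn.functional.binary_cross_entropy_with_logits(concept_logits, c_true))


if __name__ == "__main__":
    model = ConceptBottleneckModel(n_features=32, n_concepts=8, n_classes=4)
    x = torch.randn(16, 32)                       # toy inputs
    c = torch.randint(0, 2, (16, 8)).float()      # binary concept annotations
    y = torch.randint(0, 4, (16,))                # class labels
    c_logits, y_logits = model(x)
    loss = joint_loss(c_logits, y_logits, c, y, lam=0.5)
    loss.backward()
```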

The ideal candidate for this project will have a strong background in deep learning. Familiarity with explainable AI is a big plus.

[1] https://arxiv.org/abs/2007.04612
[2] https://arxiv.org/abs/1910.07969
[3] https://arxiv.org/abs/2105.04289
[4] https://arxiv.org/pdf/2106.13314