Research

Explainable Machine Learning [bioRxiv2021]

Deep learning models are increasingly used in clinical decision-making; however, their lack of interpretability and explainability impedes their deployment in day-to-day clinical practice. To support clinical decision-making, we propose REM-D, a rule extraction methodology that approximates a deep neural network (DNN) with ruleset models that are easy to simulate and comprehend.
Method. REM-D is a decompositional method: it decomposes the DNN into adjacent layers, extracts rules that describe the behaviour of each layer in terms of the preceding one, and merges these rules into a ruleset that maps inputs to outputs (a simplified sketch of this idea follows the project summary below). The extracted rules can be used to explain the network's predictions or to substitute for the underlying DNN.
Application area. We are working closely with clinicians and oncologists in MFICM to ensure that REM-D is relevant to the clinical community and has the potential to decrease their scepticism about ML systems. To showcase our approach, we have mainly experimented with breast cancer data so far. Publications are in progress.
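
A minimal sketch of the decompositional idea, under stated assumptions: it treats the network as a list of callables, fits one small decision tree per pair of adjacent layers on binarised activations, and chains the per-layer surrogates so that rules rather than the original DNN map inputs to outputs. The helper names, the binarisation threshold, and the use of scikit-learn trees are illustrative choices, not the REM-D implementation.

```python
from sklearn.tree import DecisionTreeClassifier

def layer_activations(network, x):
    """Collect the activations of every layer for inputs x.

    `network` is assumed to be a list of callables, one per layer;
    a real implementation would hook into the DNN framework instead.
    """
    acts = [x]
    for layer in network:
        acts.append(layer(acts[-1]))
    return acts

def extract_layerwise_rules(network, x):
    """Fit one interpretable surrogate per pair of adjacent layers."""
    acts = layer_activations(network, x)
    surrogates = []
    for h_in, h_out in zip(acts[:-1], acts[1:]):
        # Binarise the next layer's activations so they can serve as class
        # labels; the threshold (0) is an illustrative choice.
        target = (h_out > 0).astype(int)
        tree = DecisionTreeClassifier(max_depth=3).fit(h_in, target)
        surrogates.append(tree)
    return surrogates

def ruleset_predict(surrogates, x):
    """Chain the per-layer surrogates: the extracted rules, not the DNN, map input to output."""
    h = x
    for tree in surrogates:
        h = tree.predict(h)
    return h
```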

Accessible Reasoning Systems [Diagrams2018][CP2018][KR2018]

Domain experts (e.g., in medicine and finance) are increasingly expressing their knowledge in machine-readable formats in order to use the computational power of machines for reasoning about their data. However, current reasoning systems are designed for computer experts and are hence inaccessible to most domain experts. We are designing a logic, and implementing a reasoning system for it, that are accessible to non-experts as well as experts. To ensure that the logic and the reasoning system serve their intended purpose, we collaborate closely with cognitive scientists who verify their accessibility through user studies.
Method. Our approach to designing an inherently accessible logic is to use diagrams instead of mathematical symbols, exploiting their cognitive benefits over symbolic and textual notations.
Application area. The application area we are currently experimenting with is ontologies, owing to their popularity as a knowledge representation paradigm. Essentially, we are using our diagrammatic logic to design an ontology reasoner for the OWL 2 RL profile (the kind of inference involved is sketched below).
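
To give a flavour of the inferences such a reasoner performs, here is a small sketch of OWL 2 RL-style forward chaining over triples. The two rules (scm-sco, subclass transitivity, and cax-sco, type propagation to superclasses) are genuine OWL 2 RL rules; the tiny ontology and the naive fixed-point loop are illustrative only and say nothing about how our diagrammatic reasoner is implemented.

```python
from itertools import product

def owl_rl_closure(triples):
    """Naively apply two OWL 2 RL rules until no new triples are derived."""
    facts = set(triples)
    while True:
        new = set()
        sub = [(s, o) for s, p, o in facts if p == "subClassOf"]
        typ = [(s, o) for s, p, o in facts if p == "type"]
        # scm-sco: subClassOf is transitive.
        for (a, b), (c, d) in product(sub, sub):
            if b == c:
                new.add((a, "subClassOf", d))
        # cax-sco: members of a subclass are members of its superclasses.
        for (x, c1), (c2, d) in product(typ, sub):
            if c1 == c2:
                new.add((x, "type", d))
        if new <= facts:
            return facts
        facts |= new

# Invented example ontology.
ontology = {
    ("GradedDiagram", "subClassOf", "Diagram"),
    ("Diagram", "subClassOf", "Representation"),
    ("d1", "type", "GradedDiagram"),
}
closure = owl_rl_closure(ontology)
assert ("d1", "type", "Representation") in closure
```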


Regulation in Reasoning Systems [EAAI2017][TAAS2020]

Large-scale distributed systems have proven successful in engineering and controlling complex domains. A core issue in these systems is regulation and coordination, so that an overall desirable behaviour is guaranteed despite the distributed nature of the system. We propose a decision-making process in which actors in a distributed system simultaneously pursue social goals – those aiming at regulating the system’s overall behaviour – as well as their personal ones.
Method. We use the concept of norms to encode the social goals in terms of obligations and prohibitions that are imposed on actors in the system. To avoid restricting the autonomy of the actors, we design the norms as soft constraints that can be violated at a cost; the desire to avoid this cost creates an incentive to comply with the norms. The decision-making method we propose weighs the value of norm compliance or violation against personal goal satisfaction when social and personal goals conflict (a simplified sketch follows below).
Application area. We apply our decision-making process in a scenario where an autonomous software agent reasons about which actions to take in order to satisfy its personal goals while complying with the norms imposed on it.
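
The following sketch illustrates, in simplified form, the kind of trade-off the decision-making method resolves: a plan's utility is the value of the personal goals it achieves minus the penalties of the obligations and prohibitions it violates. The Norm class, the goal values, and the example plans are invented for illustration and are not the formal model from the papers.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Norm:
    kind: str        # "obligation" or "prohibition"
    action: str      # the action the norm refers to
    penalty: float   # cost incurred if the norm is violated

def plan_utility(plan, goal_values, norms):
    """Value of the goals a plan achieves minus the cost of the norms it violates."""
    value = sum(goal_values.get(action, 0.0) for action in plan)
    cost = 0.0
    for n in norms:
        violated = (n.kind == "obligation" and n.action not in plan) or \
                   (n.kind == "prohibition" and n.action in plan)
        if violated:
            cost += n.penalty
    return value - cost

# Invented example: the agent values publishing data, but a norm obliges
# anonymisation first and another prohibits publishing at all.
goal_values = {"publish_data": 5.0, "run_experiment": 3.0}
norms = [Norm("obligation", "anonymise_data", penalty=4.0),
         Norm("prohibition", "publish_data", penalty=2.0)]
plans = [("run_experiment", "publish_data"),
         ("run_experiment", "anonymise_data", "publish_data")]
best = max(plans, key=lambda p: plan_utility(p, goal_values, norms))
```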


Explainable Reasoning Systems [IJCAI2016][JELIA2016]

Complex reasoning systems suffer from opacity: it is unclear to users why, and how, the reasoning system arrives at a certain solution (e.g., a prediction). Consequently, debugging these systems and scrutinising their solutions can be very challenging, which in turn affects the trust that users place in these systems and their recommendations. Explaining the reasoning strategy a system took to arrive at a solution has proven to play an important role in increasing its trustworthiness. We propose using formal dialogues that allow users to question why a solution was reached.
Method. We use Argumentation Theory to explain the decision-making of an autonomous agent. Argumentation studies how conclusions are obtained from a potentially conflicting logical theory under specific semantics. We make use of dialogue games for preferred semantics to justify why an agent in a normative practical reasoning scenario decides to follow a certain sequence of actions out of all possible options (a small sketch of the underlying machinery follows below).
Application area. We use the same scenario as above, where an autonomous agent has to decide which actions to follow to best achieve its personal and social goals. The focus here, however, is on mapping goals and norms to arguments and using these arguments to explain why a certain course of action is the most preferred one.
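
As a concrete illustration of the argumentation machinery, the sketch below enumerates the preferred extensions (maximal admissible sets) of a tiny abstract argumentation framework by brute force. The example arguments about a plan and a norm are invented; the actual approach uses dialogue games over arguments built from goals and norms rather than this naive enumeration.

```python
from itertools import combinations

def conflict_free(S, attacks):
    return not any((a, b) in attacks for a in S for b in S)

def defends(S, a, attacks):
    # Every attacker of `a` is counter-attacked by some member of S.
    return all(any((d, b) in attacks for d in S)
               for (b, c) in attacks if c == a)

def admissible(S, attacks):
    return conflict_free(S, attacks) and all(defends(S, a, attacks) for a in S)

def preferred_extensions(args, attacks):
    sets = [frozenset(c) for r in range(len(args) + 1)
            for c in combinations(args, r)
            if admissible(frozenset(c), attacks)]
    # Preferred extensions are the maximal admissible sets.
    return [S for S in sets if not any(S < T for T in sets)]

# "p" argues for a plan, "n" argues a norm forbids it, "j" justifies overriding the norm.
args = {"p", "n", "j"}
attacks = {("n", "p"), ("j", "n")}
print(preferred_extensions(args, attacks))  # the single preferred extension {p, j}
```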