Learning from diverse data for Clinical Decision Support
This EngD project will focus on how to train machine learning algorithms on both imaging and non-imaging data to build clinical decision support (CDS) systems that make concrete contributions to better patient outcomes. Given the increasing amount of healthcare data available, the concurrent advances in the range and sophistication of healthcare treatments, and the continuing problem of limited resources, there is tremendous scope for artificial intelligence to assist in decision making. AI assistance may range from retrieving relevant information, to offering a differential diagnosis, to suggesting clinical courses of action, in addition to ancillary functions such as patient triage and the flagging of incidental findings.

However, the complexity of medical practice means that training an AI system fit for clinical use is not a straightforward task; for humans, gaining medical expertise requires many years of education and practical experience with thousands of patients. Some authors have previously looked at how to leverage text information for better interpretation of imaging data, e.g. [1, 2]. However, there is little research as yet into how best to integrate imaging and non-imaging data for solving clinical tasks, particularly in the medical domain, where there are challenges to do with volume of data; unstructured or non-standard data; data decentralisation (and the need to work in, e.g., safe haven environments to satisfy data governance requirements); missing data; data imbalance (there are few examples of patients with rare pathologies, hence the interest in one-shot or even zero-shot learning); and so on.

This EngD will yield approaches for integrating imaging and non-imaging data to solve CDS tasks. Approaches may consist of a single model or a hybrid of multiple models, built with machine learning algorithms such as neural networks, random forests, or SVMs.

References

[1] Frome et al. (Google, Inc.). DeViSE: A Deep Visual-Semantic Embedding Model. NIPS 2013.
[2] Zhang et al. (Univ. Florida). TandemNet: Distilling Knowledge from Medical Images Using Diagnostic Reports as Optional Semantic References. MICCAI 2017.
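As a minimal sketch of the kind of imaging/non-imaging integration the project describes, the snippet below shows one common baseline: late fusion, where a fixed-length image embedding (e.g. from a CNN) is concatenated with tabular clinical features. Missing tabular values, one of the data challenges noted above, are zero-imputed but flagged with a missingness mask so the downstream model can distinguish "absent" from "zero". All names, dimensions, and feature choices here are illustrative assumptions, not part of the project brief.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_features(img_feat, tab_feat, tab_mask):
    """Late fusion: concatenate an image embedding with tabular clinical
    features. Missing values (mask == False) are zero-imputed, and the
    mask itself is appended so missingness remains visible to the model."""
    tab_imputed = np.where(tab_mask, tab_feat, 0.0)
    return np.concatenate([img_feat, tab_imputed, tab_mask.astype(float)])

# Hypothetical patient: a 4-d image embedding (standing in for CNN output)
# and 3 tabular features (say age, a lab value, blood pressure), with the
# lab value missing for this patient.
img_feat = rng.normal(size=4)
tab_feat = np.array([67.0, np.nan, 128.0])
tab_mask = ~np.isnan(tab_feat)

fused = fuse_features(img_feat, tab_feat, tab_mask)
print(fused.shape)  # (10,): 4 image dims + 3 tabular + 3 mask flags

# A random linear scorer stands in for whatever classifier is eventually
# used (neural network, random forest, SVM) over the fused representation.
weights = rng.normal(size=fused.shape[0])
score = float(fused @ weights)
```

The same fused vector could instead feed a hybrid of models (one per modality, combined at the decision level), which is the other architecture family the project mentions.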