Multitask Deep Learning from Images for Clinical Decision Support
A wide range of imaging modalities – including X-ray, CT, MR, PET, SPECT and ultrasound – is used routinely in healthcare to provide diagnoses and support clinical decision making. Deep learning techniques (sometimes referred to as "AI"), based on neural networks, are revolutionising the field of medical imaging (radiology). These techniques learn from annotated examples to perform tasks such as automatically identifying regions of specific pathologies, segmenting anatomical structures, making repeatable measurements which can form the basis of biomarkers, or predicting long-term patient outcomes.

Most solutions developed to date are relatively limited in scope: they target a specific pathology, segmentation, or biomarker, work with a single imaging modality, and can be sensitive to the acquisition protocol or even the scanner manufacturer. This has been referred to as "narrow AI" – single-function solutions which often do not generalise well. The challenge is to train algorithms which work across several modalities, perform multiple functions, and are robust to variability in acquisition. In the deep learning literature this is referred to as "multi-task learning", and there have been some demonstrations in the medical imaging domain [1]. Better generalisation is being achieved by adversarial domain adaptation [2], but there remain many challenges to be addressed.

This project will research techniques for training algorithms to solve multiple problems in a unified approach, such that the solutions support each other, resulting in a whole which is better than the sum of independently trained parts. Thus, this is a step towards "general AI", in the medical imaging domain at least.

References:
[1] Moeskops et al., Deep Learning for Multi-task Medical Image Segmentation in Multiple Modalities, MICCAI 2016.
[2] Ganin et al., Domain-Adversarial Training of Neural Networks, Journal of Machine Learning Research, 2016.
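To make the two key ideas concrete, the sketch below shows the basic structure shared by multi-task and domain-adversarial models: a single encoder whose representation feeds several task-specific heads, trained with a weighted sum of per-task losses. This is a minimal illustrative numpy example under assumed toy dimensions, not the project's actual architecture; all names (`W_enc`, `multitask_loss`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative assumptions only).
n_samples, n_features, n_hidden, n_classes, n_domains = 64, 16, 32, 3, 2

# Shared encoder weights, reused by every task head.
W_enc = rng.normal(scale=0.1, size=(n_features, n_hidden))

# Task-specific heads: a segmentation/classification head, and a domain
# classifier that would be trained adversarially (Ganin et al. style).
W_seg = rng.normal(scale=0.1, size=(n_hidden, n_classes))
W_dom = rng.normal(scale=0.1, size=(n_hidden, n_domains))

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def forward(x):
    h = relu(x @ W_enc)  # one shared representation for all tasks
    return h, softmax(h @ W_seg), softmax(h @ W_dom)

def multitask_loss(p_seg, y_seg, p_dom, y_dom, w_dom=0.1):
    # Cross-entropy per task, combined as a weighted sum. In full
    # domain-adversarial training, the domain loss gradient is *reversed*
    # before reaching W_enc (a gradient-reversal layer), so the encoder
    # learns domain-invariant features while W_dom still learns to
    # discriminate domains. The reversal itself is omitted here.
    ce = lambda p, y: -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))
    return ce(p_seg, y_seg) + w_dom * ce(p_dom, y_dom)

x = rng.normal(size=(n_samples, n_features))
y_seg = rng.integers(0, n_classes, size=n_samples)
y_dom = rng.integers(0, n_domains, size=n_samples)

h, p_seg, p_dom = forward(x)
loss = multitask_loss(p_seg, y_seg, p_dom, y_dom)
```

The point of the shared encoder is exactly the "whole better than the sum of the parts" effect described above: gradients from every head update the same `W_enc`, so each task acts as a regulariser for the others.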