Topic Title: Development of interpretable deep learning models with limited data for medical applications

Technical Area:

Machine Learning

Background

Recent advances in Artificial Intelligence have enabled tremendous strides in many areas of life, from self-driving cars to decision support systems in healthcare. Medical science is among the most critical application areas for these advances. However, modern deep learning algorithms require large quantities of data to achieve competent performance, and for the majority of medical applications such volumes of data are either unavailable or prohibitively expensive to collect. To leverage the learning ability of advanced Deep Learning (DL) techniques, it is therefore paramount to solve the small-data problem. Meanwhile, the lack of interpretability makes deep learning algorithms difficult to adopt in clinical practice, which demands a high level of trust in the decision-making process. In such a process, a human expert not only makes a prediction but also rationalizes it through a series of logically consistent and understandable choices; this rationalization, in turn, lets the decision maker implicitly or explicitly attach a measure of confidence to the prediction, aiding the overall decision. Thus, an interpretable deep learning model would have great practical and theoretical value in medical applications.

Target

The aim of this research is to develop AI systems capable of achieving high accuracy from limited (small) data while remaining interpretable. The data include, but are not limited to, EEG recordings and medical images such as MRI scans. Deep learning models with visual interpretability are of particular interest.
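As one illustration of what visual interpretability can mean in practice, the sketch below computes a Grad-CAM-style saliency map highlighting the image regions that drive a classifier's prediction. It is a minimal sketch only, not a method mandated by this topic: the ResNet-18 backbone, the random tensor standing in for a preprocessed scan, and the choice of feature layer are all assumptions.

# Grad-CAM-style saliency sketch (assumptions: PyTorch and a torchvision
# ResNet-18 as a stand-in backbone; inputs and weights are placeholders).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None)  # placeholder; a medically trained model in practice
model.eval()

# Split the forward pass so the last convolutional feature maps stay accessible.
backbone = torch.nn.Sequential(
    model.conv1, model.bn1, model.relu, model.maxpool,
    model.layer1, model.layer2, model.layer3, model.layer4,
)

x = torch.randn(1, 3, 224, 224)   # placeholder for a preprocessed scan
feats = backbone(x)               # (1, 512, 7, 7) feature maps
feats.retain_grad()               # keep their gradient for the CAM weights

logits = model.fc(model.avgpool(feats).flatten(1))
logits[0, logits[0].argmax()].backward()  # gradient of the top-scoring class

# Weight each feature map by its average gradient, keep positive evidence,
# and upsample to image resolution as a normalized saliency heatmap.
weights = feats.grad.mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * feats.detach()).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

The resulting heatmap can be overlaid on the original MRI slice, allowing a clinician to check whether the model attends to anatomically plausible regions before trusting its prediction.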

Related Research Topic

Transfer Learning: Few-shot learning refers to training machine learning algorithms on a very small set of training data (e.g. a handful of images), as opposed to the very large sets that are more commonly used. The challenge in few-shot learning is that limited observations induce abrupt shifts in the model's behavior that cannot easily be extended smoothly to new classes; for deep learning, these shifts occur in an extremely large parameter space. Applying standard optimization techniques in a few-shot scenario therefore tends to overfit the data severely. To avoid this trap, the model must be forced to generalize well beyond the few available training instances, which is far from straightforward and requires a sophisticated strategy, such as the transfer-learning approach sketched below.
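A common strategy in this direction is to reuse representations learned on a large source dataset and fit only a small classification head on the few available examples, which sharply limits the number of free parameters the small dataset must constrain. The following is a minimal sketch, assuming PyTorch with an ImageNet-pretrained torchvision ResNet-18; the class count, the hypothetical few_shot_loader, and the hyperparameters are placeholders.

# Transfer-learning sketch for a few-shot setting (assumptions: PyTorch,
# torchvision, an ImageNet-pretrained ResNet-18; values are placeholders).
import torch
from torch import nn
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT)

# Freeze the pretrained backbone so the few examples cannot overfit it.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head with one sized for the target task;
# only these (newly created, trainable) parameters will be updated.
num_classes = 2  # placeholder, e.g. pathology present / absent
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def fine_tune(few_shot_loader, epochs=20):
    """Fit the head on a hypothetical DataLoader over the small labeled set."""
    model.train()
    for _ in range(epochs):
        for images, labels in few_shot_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

Freezing the backbone reduces the trainable parameters from millions to a few thousand, which is precisely the kind of constraint that counteracts the severe overfitting described above.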