
Risk stratification for language-associated glioma patients using multi-modal 3D brain MRI
scans based on self-supervised transferable deep learning methods

Key Points

  • Subject area: Neurosurgery, Informatics
  • Project type: Clinical Research
  • Timespan: Starting now, for about 2 years
  • Location: On-site
  • University: Charité – Universitätsmedizin Berlin
  • Supervisors: M.Sc. Boshra Shams, Prof. Dr. Thomas Picht

Summary of the project:

Background:

Gliomas residing in the widespread network related to language function can lead to preoperative language and cognitive deficits [Duffau, 2014; Meyer et al., 2017]. The presence of eloquent areas within or near the tumor often limits resection, as resection in these areas carries a high risk of neurological deterioration and thus affects the prognosis. To improve treatment efficiency and spare important functional hubs, we aim to develop a reliable decision-support system for neurosurgical planning. To this end, we will develop deep-learning models that automatically analyze clinical data and identify imaging biomarkers for risk stratification.

Over recent years, deep learning methods have gained great interest in clinical applications to facilitate individual diagnosis and prognosis. However, to achieve satisfactory performance, these methods need a large amount of labeled data, which is costly and time-consuming to obtain in the clinical domain. Recently, self-supervised learning, an unsupervised approach, has been introduced to overcome the need for labeled data. With this approach, the model learns a rich representation of unlabeled data and can subsequently be fine-tuned on a downstream task using labeled data.

In this project, we aim to stratify the risk for language-eloquent glioma patients using deep learning methods on brain MRI data. We will apply self-supervised learning to pre-train the model on publicly available unlabeled data [Tang et al., 2022]. In the downstream task of predicting the patient's outcome, we will use methods that employ an attention mechanism [Vaswani et al., 2017; Oktay et al., 2018] to capture critical features together with their attention weights on the prediction, so that the predictions generated by the neural network model are interpretable. To this end, we retrospectively and prospectively include the clinical MRI data (e.g., T1w, T2w, T1c, FLAIR, and dMRI) of 100–120 language-eloquent glioma patients. We develop our predictive model based on state-of-the-art self-supervised methods using BraTS and UK Biobank multimodal MRI data. Subsequently, the model will be fine-tuned to predict the patient's outcome using our clinical MRI data of language-eloquent glioma patients.
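To make the pre-train/fine-tune workflow concrete, the following is a minimal PyTorch sketch, not the project's implementation: the actual pre-training will follow the Swin-Transformer-based approach of Tang et al. (2022), whereas this sketch substitutes a small 3D CNN encoder, a SimCLR-style contrastive objective, and a simple attention-pooling head; all names (Encoder3D, AttentionPool, nt_xent_loss) and the random tensors are hypothetical placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder3D(nn.Module):
    """Small 3D CNN backbone mapping a multi-channel MRI volume to a spatial feature map."""
    def __init__(self, in_channels=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.conv(x)                                      # (B, 64, D', H', W')

class AttentionPool(nn.Module):
    """Attention-weighted pooling over spatial positions; the weights hint at which regions drive the prediction."""
    def __init__(self, channels=64):
        super().__init__()
        self.score = nn.Conv3d(channels, 1, kernel_size=1)

    def forward(self, feat):
        w = torch.softmax(self.score(feat).flatten(2), dim=-1)   # (B, 1, voxels)
        pooled = (feat.flatten(2) * w).sum(dim=-1)               # (B, channels)
        return pooled, w

def nt_xent_loss(z1, z2, temperature=0.1):
    """SimCLR-style contrastive loss between two augmented views of the same volumes."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                               # (2N, d)
    sim = z @ z.t() / temperature
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool, device=z.device), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Stage 1: self-supervised pre-training on unlabeled volumes (e.g., BraTS / UK Biobank).
encoder, pool = Encoder3D(), AttentionPool()
projector = nn.Linear(64, 128)                                   # projection head used only during pre-training
params = list(encoder.parameters()) + list(pool.parameters()) + list(projector.parameters())
opt = torch.optim.Adam(params, lr=1e-4)

view1 = torch.randn(2, 4, 64, 64, 64)    # placeholder: augmented view 1 of two 4-channel volumes
view2 = torch.randn(2, 4, 64, 64, 64)    # placeholder: augmented view 2 of the same volumes
z1, _ = pool(encoder(view1))
z2, _ = pool(encoder(view2))
loss = nt_xent_loss(projector(z1), projector(z2))
opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: supervised fine-tuning on labeled clinical volumes.
classifier = nn.Linear(64, 2)                                    # binary outcome, e.g., postoperative deficit yes/no
labels = torch.tensor([0, 1])                                    # placeholder outcome labels
pooled, attn = pool(encoder(view1))                              # attn can be reshaped to (D', H', W') for inspection
clf_loss = F.cross_entropy(classifier(pooled), labels)
```

The attention weights returned by the pooling layer can be reshaped into a coarse spatial saliency map, which illustrates the kind of interpretability the attention-based downstream model is intended to provide.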



The project tasks can be summarized as follows:

a. Data preparation on the BraTS and UK Biobank datasets

b. Developing a self-supervised deep-learning-based model

c. Two-stage pre-training on the available data (UK Biobank, BraTS)

d. Clinical data preparation (pull data, prepare all MRI modalities and the patients' clinical data; see the sketch after this list)

e. Training, validation, and testing of the model on clinical data to predict the patient's outcome

f. Identifying imaging biomarkers for risk stratification of language-associated glioma patients
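For task d, a typical preparation step is to stack the co-registered structural modalities into one multi-channel volume. The sketch below is a hedged illustration only: it assumes the modalities are already skull-stripped and registered, uses a hypothetical file layout and function name (load_multimodal_volume), and uses nibabel simply as a common NIfTI reader; the project's actual preprocessing pipeline is not specified here.

```python
import numpy as np
import nibabel as nib  # common NIfTI reader; assumed available

def load_multimodal_volume(case_dir):
    """Stack co-registered T1w, T2w, T1c, and FLAIR into a 4-channel array with per-channel z-scoring.

    Assumes skull-stripped, co-registered images following the hypothetical naming
    pattern <case_dir>/<modality>.nii.gz.
    """
    modalities = ["t1w", "t2w", "t1c", "flair"]
    channels = []
    for m in modalities:
        vol = nib.load(f"{case_dir}/{m}.nii.gz").get_fdata().astype(np.float32)
        mask = vol > 0                                 # crude brain mask: non-zero voxels
        vol[mask] = (vol[mask] - vol[mask].mean()) / (vol[mask].std() + 1e-8)
        channels.append(vol)
    return np.stack(channels, axis=0)                  # shape: (4, X, Y, Z)

# Example usage (hypothetical path):
# x = load_multimodal_volume("/data/glioma/case_001")
# print(x.shape)
```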



Questions:


Can we improve risk stratification for language-associated glioma patients using models based on self-supervised and attention-based deep learning methods? Can we identify generalizable imaging biomarkers for risk stratification?

