Multi-Modal MRI Fusion and Deep Learning-Based Segmentation with Feature Extraction for Enhanced Brain Tumor Diagnosis
DOI: https://doi.org/10.64149/J.Carcinog.24.6s.493-501

Keywords: Brain Tumor Segmentation, Federated Learning, Deep Learning, U-Net, MRI, Feature Extraction, Medical Imaging.

Abstract
Diagnosing brain tumors from medical imaging is critically important because it affects both the accuracy and the timeliness of intervention. Traditional centralized machine learning models require large datasets pooled across hospitals, which not only raises privacy and security concerns for personal health information but also favours data that may not capture the full diversity of patient biology. Federated Learning (FL), a decentralized paradigm in which models are trained without sharing patient data, offers an alternative to these centralized methods. This paper investigates a novel method for detecting and classifying brain tumors that combines a robust deep learning segmentation model with an FL approach. A multi-modal fusion approach was used, in which FLAIR and T1ce MRI images were harmonized into a single image and segmented with a U-Net architecture to achieve precise delineation for analysis. Shape and texture features, including Histogram of Oriented Gradients (HOG), were then extracted from the output segmentation masks to construct the dataset for classification. Applying this pipeline to the BraTS 2019 dataset validated the U-Net model's segmentation of the tumor region. A preliminary baseline classification accuracy of 55.88% was obtained with a standard classifier. This work presents a validated deep learning pipeline for brain tumor analysis in which privacy is protected through the federated regime, suggesting the plausibility of deploying advanced classifiers, such as the proposed SNet-PC classifier, within a federated system.
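As a rough illustration of the fusion and feature-extraction stages summarized above, the sketch below fuses two co-registered MRI modalities by simple per-voxel averaging (the paper's exact harmonization method is not specified here, so averaging is an assumption) and computes a minimal HOG-style descriptor over the fused image. The function names, cell size, and bin count are illustrative choices; a production pipeline would typically use a library implementation such as skimage.feature.hog.

```python
import numpy as np

def fuse_modalities(flair, t1ce):
    """Fuse two co-registered MRI slices into one image.

    Per-voxel averaging is an assumption for illustration; the paper's
    harmonization step may differ.
    """
    return 0.5 * np.asarray(flair, float) + 0.5 * np.asarray(t1ce, float)

def hog_features(image, cell=8, bins=9):
    """Minimal HOG-style descriptor: per-cell histograms of gradient
    orientation, weighted by gradient magnitude (no block normalization)."""
    image = np.asarray(image, float)
    gx = np.zeros_like(image)
    gy = np.zeros_like(image)
    gx[:, 1:-1] = image[:, 2:] - image[:, :-2]   # central differences along x
    gy[1:-1, :] = image[2:, :] - image[:-2, :]   # central differences along y
    mag = np.hypot(gx, gy)                        # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation [0, 180)
    feats = []
    h, w = image.shape
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0.0, 180.0), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

# Demo on synthetic 16x16 "slices": (16/8)^2 = 4 cells x 9 bins = 36 features.
rng = np.random.default_rng(0)
flair, t1ce = rng.random((16, 16)), rng.random((16, 16))
vec = hog_features(fuse_modalities(flair, t1ce))
print(vec.shape)  # (36,)
```

In the described pipeline, such a feature vector would be computed over the tumor region given by the U-Net segmentation mask rather than the whole slice, and then passed to the downstream classifier.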




