Medical Image Segmentation Based on Multi-Modal Convolutional Neural Network: Study on Image Fusion Schemes

We explore how multi-modality medical imaging can help advance the accuracy and robustness of computer-aided detection. To this end, we built multi-modal Convolutional Neural Networks (CNNs) that perform fusion across CT, MR, and PET images at various stages. For the task of detecting and segmenting soft tissue sarcomas, the multi-modal deep learning system shows substantially better performance than single-modality systems, even when using images of lowered quality. The proposed system's ability to maintain high segmentation accuracy on low-dose images by adding modalities offers a new perspective on medical image acquisition and analysis.


Illustration of the structures of (a) the Type-I fusion network, (b) the Type-II fusion network, and (c) the Type-III fusion network. The yellow arrows indicate the fusion locations.
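The three network types differ only in where modalities are merged. As a minimal sketch, assuming Type-I, Type-II, and Type-III correspond to input-level, feature-level, and output-level fusion respectively (an interpretation of the figure, not a statement from the paper), the fusion operations themselves can be illustrated with plain arrays; all names and shapes here are hypothetical:

```python
import numpy as np

# Hypothetical sketch of three fusion locations for multi-modal inputs
# (e.g. CT, MR, PET), each modality given as a 2-D image of the same size.
# The mapping to Type-I/II/III is an assumption for illustration only.

def input_fusion(ct, mr, pet):
    """Type-I-style fusion (assumed): stack modalities as input
    channels before any convolution is applied."""
    return np.stack([ct, mr, pet], axis=0)  # shape: (3, H, W)

def feature_fusion(feat_ct, feat_mr, feat_pet):
    """Type-II-style fusion (assumed): concatenate intermediate
    feature maps from per-modality branches along the channel axis."""
    return np.concatenate([feat_ct, feat_mr, feat_pet], axis=0)

def output_fusion(prob_ct, prob_mr, prob_pet):
    """Type-III-style fusion (assumed): average per-modality
    segmentation probability maps at the network output."""
    return (prob_ct + prob_mr + prob_pet) / 3.0

H, W = 4, 4
ct, mr, pet = (np.random.rand(H, W) for _ in range(3))
print(input_fusion(ct, mr, pet).shape)  # (3, 4, 4)
```

In a real CNN the concatenated channels would feed into subsequent convolutional layers; this sketch only shows the tensor operations at each candidate fusion point.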


(a) Ground truth shown as a yellow contour line overlaid on the T2 image. (b) Result from the Type-II fusion network based on PET+CT+T1. (c) Result from the single-modality network based on T2. (d-f) Results from single-modality networks based on PET, CT, and T1, respectively.
