Using Synthesized Medical Images to Bridge the Gap between Medical Imaging Machines

Paper: On the usability of synthetic data for improving the robustness of deep learning-based segmentation of cardiac magnetic resonance images (ScienceDirect)

If you’re old enough to remember ‘flip phones,’ then you might remember the first time phones had cameras. Fast forward 10 to 20 years: phone cameras now have front and back lenses with incredible resolution and the latest image processing technology. Now, imagine taking a picture of a dog with a flip phone from the early 2000s and with another phone released in 2023. The dog remains the same, but the images themselves differ vastly. This is analogous to what is known as domain shift in medical imaging: the same object is captured, but the equipment (and the person operating it) differs. In medicine specifically, hospitals use equipment of different brands and specifications from various vendors, often depending on their resources and budget.

Domain shift is an important problem in medical imaging because models rely on images being ‘similar enough’ to one another to perform well. Even small changes, such as different orientations of the same object, can cause a model to perform poorly. Creating a model that only works with one specific set of medical images is inefficient; instead, many researchers aim to create models that can generalise to other similar (but not identical) sources of data.

One of the general problems associated with generalisation is the lack of data available for analysis. This problem can be mitigated by generating synthetic medical images, which are created by learning what the real images look like, using the original images as inspiration. In their work, the authors investigate the use of synthetic data for medical image analysis tasks as a way to overcome domain shift. The specific task they investigate is image segmentation, where an image is ‘broken down’ into its different components. The authors use the heart as a use case, since it can be segmented into different parts, such as the ventricles and atria (a toy example of what this looks like to a computer is sketched below).
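
To make this a little more concrete, here is a toy sketch (not taken from the paper) of what a segmented image looks like to a computer: every pixel carries an integer label saying which structure it belongs to. The class names and codes here are purely illustrative.

```python
import numpy as np

# Toy illustration only: a 6x6 "image" where every pixel holds an integer class
# label. The class codes are hypothetical, not taken from the paper:
# 0 = background, 1 = left ventricle, 2 = right ventricle,
# 3 = left atrium, 4 = right atrium.
label_map = np.array([
    [0, 0, 0, 0, 0, 0],
    [0, 3, 3, 4, 4, 0],
    [0, 3, 3, 4, 4, 0],
    [0, 1, 1, 2, 2, 0],
    [0, 1, 1, 2, 2, 0],
    [0, 0, 0, 0, 0, 0],
])

# Segmentation means predicting such a label map from the raw image, i.e.
# deciding which structure (if any) each pixel belongs to.
print("pixels labelled as left ventricle:", int(np.sum(label_map == 1)))
```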

The authors’ workflow can be described in three steps: 

  1. First, they create synthetic images of the heart, including the corresponding segmentation labels (e.g. left ventricle, right ventricle). 
  2. Then, they identify the location of the heart in the image in order to overcome different positions and orientations between images. Background information that is not relevant for the segmentation task is removed. 
  3. Finally, they train a segmentation model on these synthetic, cropped images and investigate how well it is able to segment the heart.

Synthesizing Images

The purpose of creating synthetic images of the heart is two-fold. First, many more realistic images can be created without imaging real patients. This has important implications for data-sharing privacy agreements, where data collection sites (for example, hospitals) may not be allowed to share raw data with each other. Secondly, as stated by the authors, it could help alleviate some of the problems caused by domain shift. Simulated cardiac images taken from virtual subjects are used to generate the synthetic images of the heart. In addition, the corresponding labels (e.g., ventricle, atrium) are defined over the image as a whole, putting each segmented part into the context of the other parts. This is in contrast to traditional segmentation methods, which segment each part individually: just the atrium, or just the ventricle.
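
The summary above does not spell out which image-synthesis model the authors use, so the sketch below is only a toy stand-in: it turns a whole-heart label map (all structures labelled jointly) into a fake ‘image’ by giving each structure a typical intensity plus some noise. A real pipeline would swap `toy_synthesize` for a learned generative model, but the key idea carries over: because the image is generated from the labels, every synthetic image arrives with pixel-perfect labels for all structures at once.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# A toy whole-heart label map: 0 = background, 1 = left ventricle,
# 2 = right ventricle (class codes are illustrative, not from the paper).
labels = np.zeros((64, 64), dtype=int)
labels[20:44, 14:30] = 1
labels[20:44, 34:50] = 2

def toy_synthesize(label_map):
    """Toy stand-in for an image synthesiser: give each labelled structure a
    typical intensity, then add scanner-like noise. A real pipeline would use
    a learned generative model conditioned on the label map instead."""
    class_intensity = {0: 0.05, 1: 0.8, 2: 0.6}  # hypothetical mean intensities
    image = np.zeros(label_map.shape, dtype=float)
    for cls, intensity in class_intensity.items():
        image[label_map == cls] = intensity
    image += rng.normal(scale=0.02, size=label_map.shape)
    return np.clip(image, 0.0, 1.0)

synthetic_image = toy_synthesize(labels)
print(synthetic_image.shape)  # (64, 64): one fake scan, with its labels for free
```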

The Heart of the Image & Image Segmentation

The next step is crucial in helping alleviate domain shift difficulties. After the images have been synthesized and labelled, the position of the heart is located. Recall that even something as simple as the orientation of the image can affect how the model performs. By localising the heart and removing all information outside its boundaries, the model is trained on heart images that are all approximately in the same orientation and of the same size.
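
As a rough sketch of this localisation step (the authors’ exact method is not described here), the code below crops an image and its labels to a bounding box around the heart, with a small margin, so that every training example ends up centred on the heart at a similar scale.

```python
import numpy as np

def crop_to_heart(image, labels, margin=4):
    """Crop image and labels to a bounding box around all non-background
    (heart) pixels, plus a small margin. Illustrative sketch only; the paper's
    localisation method may differ (and a real pipeline would typically also
    resize the crop to a fixed size)."""
    rows, cols = np.nonzero(labels > 0)               # pixels of any heart structure
    r0, r1 = rows.min() - margin, rows.max() + 1 + margin
    c0, c1 = cols.min() - margin, cols.max() + 1 + margin
    r0, c0 = max(r0, 0), max(c0, 0)                   # stay inside the image
    r1, c1 = min(r1, image.shape[0]), min(c1, image.shape[1])
    return image[r0:r1, c0:c1], labels[r0:r1, c0:c1]

# Example with a toy 64x64 image whose "heart" occupies a central block.
labels = np.zeros((64, 64), dtype=int)
labels[20:44, 14:50] = 1
image = np.random.default_rng(0).normal(size=(64, 64))
cropped_image, cropped_labels = crop_to_heart(image, labels)
print(cropped_image.shape)  # the heart region plus the margin, background trimmed away
```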

Finally, to test the feasibility of their methods, the authors use the synthesized and localised heart images to train a heart segmentation model. The model then takes these images and identifies different components of the heart. 
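
To give a flavour of what ‘training a segmentation model’ involves, here is a minimal PyTorch sketch with a deliberately tiny fully-convolutional network and random stand-ins for the synthetic, heart-cropped images. The authors’ actual architecture, loss, and training setup are not described in this summary, so treat this purely as an illustration of the per-pixel classification idea.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 5  # hypothetical: background + four heart structures

# A tiny fully-convolutional network that outputs a class score per pixel.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, NUM_CLASSES, kernel_size=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # per-pixel multi-class loss

for step in range(5):
    # Random stand-ins for a batch of synthetic images and their label maps.
    images = torch.randn(8, 1, 64, 64)                   # batch x channels x H x W
    labels = torch.randint(0, NUM_CLASSES, (8, 64, 64))  # one class index per pixel
    logits = model(images)                               # batch x NUM_CLASSES x H x W
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss = {loss.item():.3f}")

# At test time, the predicted segmentation is the highest-scoring class per pixel.
predicted = model(torch.randn(1, 1, 64, 64)).argmax(dim=1)
```

In the paper, of course, the interesting question is not whether such a model trains, but how well it segments real scans from vendors it never saw during training.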

Their Resolution

The authors’ original hypothesis was validated by their experiments and findings: synthesized images can help alleviate the problems caused by domain shift. Synthesized images that focus on the object of interest (in this case, the heart) and remove background information before the model is built can also improve the performance of segmentation models that use data acquired from different vendors.

Generating models that are more generalisable has important implications for future data-sharing agreements involving hospital data. Synthetic data can augment, or even replace, real data as a step towards creating more generalisable models, and it is an efficient way of increasing the occurrence of rarer cases not usually seen in the original dataset. The authors’ work provides a stepping stone towards understanding whether synthetic data can help bridge the gap between different medical acquisition devices. Their concluding remarks introduce their next research ideas: assessing the quality of the generated synthetic images (for more information on image quality, read my previous article here: https://mathstatbites.org/when-physics-and-engineering-imaging-solutions-collide-in-mri-scans/) and extending their methods to 3D!