Gong Y, Zhang Y, Zhu H, Lv J, Cheng Q, Zhang H, He Y, Wang S.
IEEE Trans Med Imaging. 2020 Apr;39(4):1206-1222. doi: 10.1109/TMI.2019.2946059. Epub 2019 Oct 7.
PMID: 32162150
Fetal congenital heart disease (FHD) is a common and serious congenital malformation. In Asia, FHD birth defect rates have reached as high as 9.3%. For the early detection of birth defects and the reduction of mortality, echocardiography remains the most effective method for screening fetal heart malformations. However, standard echocardiograms of the fetal heart, especially four-chamber view images, are difficult to obtain. In addition, the pathophysiological changes in fetal hearts across different pregnancy periods produce ever-changing two-dimensional fetal heart structures and hemodynamics, so extensive professional knowledge is required to recognize and judge disease development. Thus, research on automatic screening for FHD is necessary. In this paper, we propose a new model named DGACNN that shows the best performance in recognizing FHD, achieving an accuracy of 85%. The motivation for this network is to address the problem of insufficient training data for learning a robust model. Many unlabeled video slices exist, but they are difficult and time-consuming to annotate. Thus, how to use these un-annotated video slices to improve the capability of DGACNN in recognizing FHD, in terms of both recognition accuracy and robustness, is very meaningful for FHD screening. The architecture of DGACNN comprises two parts: DANomaly and GACNN (WGAN-GP and CNN). DANomaly is similar to the ALOCC network but incorporates cycle adversarial learning to train an end-to-end one-class classification (OCC) network that is more robust and more accurate than ALOCC in screening video slices. For the GACNN architecture, we use FCH (four-chamber heart) video slices at around end-systole, as screened by DANomaly, to train a WGAN-GP in order to obtain ideal low-level features that robustly improve FHD recognition accuracy.
A few annotated video slices, as screened by DANomaly, can also be used for data augmentation to further improve FHD recognition. The experiments show that DGACNN outperforms other state-of-the-art networks by 1%-20% in recognizing FHD. A comparison experiment shows that the proposed network outperforms expert cardiologists in recognizing FHD, reaching 84% in a test. Thus, the proposed architecture has high potential for helping cardiologists complete early FHD screenings.
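The WGAN-GP component named in the abstract trains its critic with a gradient penalty that pushes the norm of the critic's input gradient toward 1 at points interpolated between real and generated samples. As a minimal numeric sketch (not the paper's implementation), the snippet below uses a hypothetical linear critic f(x) = w·x, whose input gradient is simply w, so the penalty can be computed without any autograd machinery; the function name `gradient_penalty`, the coefficient `lam`, and the toy data are all illustrative assumptions.

```python
import numpy as np

def gradient_penalty(w, real, fake, lam=10.0, rng=None):
    """WGAN-GP-style penalty lam * E[(||grad_xhat f(xhat)|| - 1)^2]
    for an illustrative linear critic f(x) = w @ x, whose gradient
    w.r.t. the input is w everywhere (so the norm is the same at
    every interpolate)."""
    rng = rng or np.random.default_rng(0)
    # Random interpolates between real and generated samples.
    eps = rng.uniform(size=(real.shape[0], 1))
    xhat = eps * real + (1 - eps) * fake
    # For a linear critic, d f / d xhat = w at every xhat.
    grad = np.tile(w, (xhat.shape[0], 1))
    norms = np.linalg.norm(grad, axis=1)
    return lam * np.mean((norms - 1.0) ** 2)

real = np.array([[1.0, 0.0], [0.0, 1.0]])
fake = np.array([[0.5, 0.5], [0.2, 0.8]])
print(gradient_penalty(np.array([0.6, 0.8]), real, fake))  # unit-norm w -> 0.0
print(gradient_penalty(np.array([2.0, 0.0]), real, fake))  # ||w||=2 -> 10.0
```

With a unit-norm weight vector the penalty vanishes, while a critic whose gradient norm drifts from 1 is penalized quadratically; in the full nonlinear setting the same quantity is obtained by differentiating the critic through autograd at each interpolate.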