Automated View Detection of Contrast and Non-Contrast Cardiac Views in Echocardiography - a Multi-centre, Multi-vendor Study

Sep 24, 2020


This abstract is published in the Journal of the American Society of Echocardiography (volume 33, number 6, 2020) as part of the proceedings of the American Society of Echocardiography Conference 2020.

Title: Automated View Detection of Contrast and Non-Contrast Cardiac Views in Echocardiography; A Multi-Centre, Multi-Vendor Study

Authors: Gao, Shan and Stojanovski, David and Parker, Andrew and Marques, Patricia and Heitner, Stephen and Yadava, Mrinal and Upton, Ross and Woodward, Gary and Lamata, Pablo and Beqiri, Arian and others

BACKGROUND: Correct identification of the views acquired in a 2D echocardiographic examination is paramount to the post-processing and quantification steps performed as part of most clinical workflows. Many exams use microbubble contrast, which greatly affects the appearance of the cardiac views. Here we present a fully automated convolutional neural network (CNN) that identifies apical 2-, 3-, and 4-chamber and short-axis (SAX) views, with and without contrast, in a completely independent external dataset.

METHODS: Data was taken from 1,014 subjects in a prospective multi-site, multi-vendor UK trial, with more than 17,500 frames per view. Prior to view-classification model training, images were processed using standard techniques to ensure homogeneous, normalised inputs. A CNN was built using the minimum number of convolutional layers required, with batch normalisation and dropout to reduce overfitting. Before processing, the data was split into 90% for model training (211,958 frames) and 10% for validation (23,946 frames), with all frames from a given subject assigned entirely to a single dataset. Further, a separate trial dataset of 240 studies acquired in the USA was used as an independent test dataset (39,401 frames).
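The abstract does not specify the framework or exact architecture. Purely as an illustration of the approach described above, the following Python/Keras sketch shows a compact CNN with batch normalisation and dropout, plus a subject-level split that keeps every frame from a given subject in a single partition; the layer counts, image size, and number of view classes are assumptions, not the authors' implementation.

    # Illustrative sketch only: a compact CNN view classifier with batch
    # normalisation and dropout. All hyperparameters (image size, filter
    # counts, dropout rate, number of classes) are assumptions.
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    NUM_VIEWS = 8  # assumed: A2C/A3C/A4C/SAX, each with and without contrast

    def build_view_classifier(input_shape=(128, 128, 1), num_classes=NUM_VIEWS):
        model = keras.Sequential([
            keras.Input(shape=input_shape),
            layers.Conv2D(16, 3, padding="same", activation="relu"),
            layers.BatchNormalization(),
            layers.MaxPooling2D(),
            layers.Conv2D(32, 3, padding="same", activation="relu"),
            layers.BatchNormalization(),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, padding="same", activation="relu"),
            layers.BatchNormalization(),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dropout(0.5),  # dropout to reduce overfitting
            layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    def subject_level_split(subject_ids, train_fraction=0.9, seed=0):
        """Assign all frames from a given subject to exactly one
        partition (train or validation), as described in the abstract."""
        rng = np.random.default_rng(seed)
        subjects = np.unique(np.asarray(subject_ids))
        rng.shuffle(subjects)
        n_train = int(train_fraction * len(subjects))
        train_set = set(subjects[:n_train])
        train_idx = [i for i, s in enumerate(subject_ids) if s in train_set]
        val_idx = [i for i, s in enumerate(subject_ids) if s not in train_set]
        return train_idx, val_idx

The subject-level split matters because consecutive frames from one subject are highly correlated; letting them straddle the train/validation boundary would inflate validation accuracy.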


Figure 1: Confusion matrices for the validation dataset (left) and the independent test dataset (right)

RESULTS: Figure 1 shows the confusion matrices for the validation data (left) and the independent test data (right), with overall accuracies of 96% and 95% for the validation and test datasets, respectively. The accuracy of >99% on the non-contrast data exceeds that reported in other work, such as Østvik et al., Ultrasound Med. Biol. 45, 374–384 (2019). The combined datasets included images acquired across ultrasound manufacturers and models from 12 clinical sites.
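The headline figures above are frame-level overall accuracies read off the confusion matrices in Figure 1. A minimal, self-contained sketch of that kind of evaluation, using scikit-learn with randomly generated labels standing in for real predictions, might look like this:

    # Sketch of the evaluation in Figure 1: overall accuracy plus a
    # confusion matrix over held-out frames. The random labels below are
    # placeholders, not the study's data.
    import numpy as np
    from sklearn.metrics import accuracy_score, confusion_matrix

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 8, size=1000)   # true view index per frame (dummy)
    y_pred = y_true.copy()
    flip = rng.random(1000) < 0.05           # corrupt ~5% of predictions
    y_pred[flip] = rng.integers(0, 8, size=int(flip.sum()))

    print("overall accuracy:", accuracy_score(y_true, y_pred))
    print(confusion_matrix(y_true, y_pred))  # rows: true view, columns: predicted view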

CONCLUSION: We have developed a CNN capable of accurately identifying all relevant cardiac views used in “real world” echo exams, including views acquired with contrast, which could improve the efficiency of the quantification steps performed after image acquisition in routine clinical practice. The model was tested on an independent dataset acquired in a different country from the training data and was found to perform similarly.
