Multi-level multi-type self-generated knowledge fusion for cardiac ultrasound segmentation

Chengjin Yu, Shuang Li, Dhanjoo Ghista, Zhifan Gao, Heye Zhang, Javier Del Ser*, Lin Xu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

21 Citations (Scopus)

Abstract

Most existing works on cardiac echocardiography segmentation require a large number of ground-truth labels to train a neural network properly; producing these labels, however, is time-consuming and laborious for physicians. Self-supervised learning is a potential solution to this challenge, as it deeply exploits the raw data. However, existing works mainly exploit a single type or level of pretext task. In this work, we propose fusing multi-level and multi-type self-generated knowledge. We obtain multi-level information about sub-anatomical structures in ultrasound images via a superpixel method. Subsequently, we fuse the various types of information generated through multiple types of pretext tasks. Finally, we transfer the learned knowledge to our downstream task. In our experimental studies, we demonstrate the effectiveness of this method on the cardiac ultrasound segmentation task. The results show that the performance of our proposed method for echocardiography segmentation matches that of fully supervised methods without requiring a large amount of labeled data.
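As an illustration of the multi-level step, the following minimal Python sketch shows how partitions of an echocardiogram at several granularities could be produced with the SLIC algorithm named in the keywords. This is not the authors' code: the granularity levels, compactness value, and grayscale input are assumptions for illustration only.

    # Minimal sketch (assumed settings, not the paper's implementation):
    # multi-level sub-anatomical partitions of an echo frame via SLIC.
    import numpy as np
    from skimage.segmentation import slic

    def multilevel_superpixels(image, levels=(50, 200, 800)):
        """Return one SLIC label map per granularity level.

        Coarser levels roughly follow large anatomical structures
        (e.g., chambers); finer levels capture sub-anatomical detail.
        """
        maps = []
        for n in levels:
            labels = slic(
                image,
                n_segments=n,       # coarse -> fine granularity
                compactness=0.1,    # low value: follow intensity edges
                channel_axis=None,  # grayscale ultrasound frame
                start_label=0,
            )
            maps.append(labels)
        return maps

    if __name__ == "__main__":
        # Random stand-in for a normalized echocardiogram frame.
        frame = np.random.rand(256, 256).astype(np.float32)
        for n, m in zip((50, 200, 800), multilevel_superpixels(frame)):
            print(f"requested {n} segments -> got {m.max() + 1}")

The resulting label maps could then serve as self-generated targets for the pretext tasks before transferring the learned representation to the downstream segmentation network.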

Original language: English
Pages (from-to): 1-12
Number of pages: 12
Journal: Information Fusion
Volume: 92
Publication status: Published - Apr 2023

Keywords

  • Anatomically constrained neural network (ACNN)
  • Deep Neural Networks (DNNs)
  • Dual Closed-loop Network (DCLNet)
  • Fully convolutional network (FCN)
  • Multi-level and Multi-type Self-Generated (MM-SG)
  • SLIC (Simple Linear Iterative Clustering) algorithm
