TY - JOUR
T1 - Multi-level multi-type self-generated knowledge fusion for cardiac ultrasound segmentation
AU - Yu, Chengjin
AU - Li, Shuang
AU - Ghista, Dhanjoo
AU - Gao, Zhifan
AU - Zhang, Heye
AU - Ser, Javier Del
AU - Xu, Lin
N1 - Publisher Copyright:
© 2022 Elsevier B.V.
PY - 2023/4
Y1 - 2023/4
N2 - Most existing works on cardiac echocardiography segmentation require a large number of ground-truth labels to appropriately train a neural network; this, however, is time consuming and laborious for physicians. Self-supervised learning is one of the potential solutions to address this challenge by deeply exploiting the raw data. However, existing works mainly exploit a single type/level of pretext task. In this work, we propose fusion of multi-level and multi-type self-generated knowledge. We obtain multi-level information on sub-anatomical structures in ultrasound images via a superpixel method. Subsequently, we fuse various types of information generated through multiple types of pretext tasks. In the end, we transfer the learned knowledge to our downstream task. In the experimental studies, we have demonstrated the effectiveness of this method on the cardiac ultrasound segmentation task. The results show that the performance of our proposed method for echocardiography segmentation matches that of fully supervised methods without requiring a large amount of labeled data.
AB - Most existing works on cardiac echocardiography segmentation require a large number of ground-truth labels to appropriately train a neural network; this, however, is time consuming and laborious for physicians. Self-supervised learning is one of the potential solutions to address this challenge by deeply exploiting the raw data. However, existing works mainly exploit a single type/level of pretext task. In this work, we propose fusion of multi-level and multi-type self-generated knowledge. We obtain multi-level information on sub-anatomical structures in ultrasound images via a superpixel method. Subsequently, we fuse various types of information generated through multiple types of pretext tasks. In the end, we transfer the learned knowledge to our downstream task. In the experimental studies, we have demonstrated the effectiveness of this method on the cardiac ultrasound segmentation task. The results show that the performance of our proposed method for echocardiography segmentation matches that of fully supervised methods without requiring a large amount of labeled data.
KW - Anatomically constrained neural network (ACNN)
KW - Deep Neural Networks (DNNs)
KW - Dual Closed-loop Network (DCLNet)
KW - Full convolution network (FCN)
KW - Multi-level and Multi-type Self-Generated (MM-SG)
KW - Simple Linear Iterative Clustering (SLIC) algorithm
UR - http://www.scopus.com/inward/record.url?scp=85142714700&partnerID=8YFLogxK
U2 - 10.1016/j.inffus.2022.11.004
DO - 10.1016/j.inffus.2022.11.004
M3 - Article
AN - SCOPUS:85142714700
SN - 1566-2535
VL - 92
SP - 1
EP - 12
JO - Information Fusion
JF - Information Fusion
ER -