3D convolutional neural networks initialized from pretrained 2D convolutional neural networks for classification of industrial parts

Research output: Contribution to journal › Article › peer-review

15 Citations (Scopus)
4 Downloads (Pure)

Abstract

Deep learning methods have been successfully applied to image processing, mainly using 2D vision sensors. Recently, the rise of depth cameras and other similar 3D sensors has opened the field for new perception techniques. Nevertheless, 3D convolutional neural networks perform slightly worse than other 3D deep learning methods, and even worse than their 2D counterparts. In this paper, we propose to improve 3D deep learning results by transferring the pretrained weights learned in 2D networks to their corresponding 3D version. Using an industrial object recognition context, we have analyzed different combinations of 3D convolutional networks (VGG16, ResNet, Inception ResNet, and EfficientNet), comparing the recognition accuracy. The highest accuracy, 0.9217, is obtained with EfficientNetB0 using extrusion, which is comparable to state-of-the-art methods. We also observed that the transfer approach improved the accuracy of the Inception ResNet 3D version by up to 18% with respect to the 3D approach alone.
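The transfer approach described above extrudes pretrained 2D convolutional kernels into 3D ones. A minimal sketch of one common way to do this (repeating the 2D kernel along the new depth axis and normalizing, in the style of I3D inflation; the function name, shapes, and normalization are illustrative assumptions and the paper's exact extrusion scheme may differ):

```python
import numpy as np

def extrude_2d_to_3d(w2d, depth):
    """Extrude a 2D conv kernel of shape (C_out, C_in, kH, kW) into a 3D
    kernel of shape (C_out, C_in, depth, kH, kW) by repeating it along the
    depth axis and dividing by depth, so a constant-along-depth input
    produces the same response as the original 2D filter (an assumption;
    other normalizations are possible)."""
    w3d = np.repeat(w2d[:, :, np.newaxis, :, :], depth, axis=2)
    return w3d / depth

# Hypothetical example: a 3x3 kernel bank from a pretrained 2D network
w2d = np.random.randn(64, 3, 3, 3)   # (C_out=64, C_in=3, kH=3, kW=3)
w3d = extrude_2d_to_3d(w2d, depth=3)

assert w3d.shape == (64, 3, 3, 3, 3)
# Summing the extruded kernel over depth recovers the 2D weights
assert np.allclose(w3d.sum(axis=2), w2d)
```

The extruded tensor can then be loaded as the initial weights of the matching 3D convolution layer before fine-tuning on 3D data.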

Original language: English
Article number: 1078
Pages (from-to): 1-18
Number of pages: 18
Journal: Sensors
Volume: 21
Issue number: 4
DOIs
Publication status: Published - 4 Feb 2021

Keywords

  • Computer vision
  • Deep learning
  • Object recognition
  • Transfer learning

Project and Funding Information

  • Funding Info
  • This paper has been supported by the project ELKARBOT under the Basque program ELKARTEK, grant agreement No. KK-2020/00092.
