Abstract
Deep learning methods have been successfully applied to image processing, mainly using 2D vision sensors. Recently, the rise of depth cameras and other similar 3D sensors has opened the field for new perception techniques. Nevertheless, 3D convolutional neural networks perform slightly worse than other 3D deep learning methods, and even worse than their 2D versions. In this paper, we propose to improve 3D deep learning results by transferring the pretrained weights learned in 2D networks to their corresponding 3D versions. Using an industrial object recognition context, we have analyzed different combinations of 3D convolutional networks (VGG16, ResNet, Inception ResNet, and EfficientNet), comparing their recognition accuracy. The highest accuracy, 0.9217, is obtained with EfficientNetB0 using extrusion, which is comparable to state-of-the-art methods. We also observed that the transfer approach improved the accuracy of the 3D Inception ResNet version by up to 18% with respect to the 3D approach alone.
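The paper's own implementation is not reproduced here, but the "extrusion" transfer it describes can be illustrated with a minimal sketch: each pretrained 2D convolution kernel is replicated along a new depth axis (in the spirit of I3D-style kernel inflation) and used to initialise the matching 3D convolution. The Keras layer name, the target kernel depth, and the scaling choice below are illustrative assumptions, not taken from the paper's code.

```python
# Minimal sketch of 2D-to-3D weight transfer by extrusion, assuming
# each 2D kernel is replicated along a new depth axis and rescaled
# so activations keep a magnitude similar to the 2D network's.
import numpy as np
import tensorflow as tf

def extrude_kernel(kernel_2d, depth):
    """Replicate an (H, W, C_in, C_out) 2D kernel along a new depth
    axis to obtain a (D, H, W, C_in, C_out) 3D kernel; dividing by
    the depth preserves the expected activation scale."""
    return np.repeat(kernel_2d[np.newaxis, ...], depth, axis=0) / depth

# Take a pretrained 2D convolution from EfficientNetB0 ...
net_2d = tf.keras.applications.EfficientNetB0(weights="imagenet")
kernel_2d = net_2d.get_layer("stem_conv").get_weights()[0]  # (3, 3, 3, 32)

# ... and initialise the corresponding 3D convolution with it.
conv_3d = tf.keras.layers.Conv3D(
    filters=kernel_2d.shape[-1],
    kernel_size=(3, 3, 3),
    strides=(2, 2, 2),
    padding="same",
    use_bias=False,
)
conv_3d.build(input_shape=(None, 64, 64, 64, kernel_2d.shape[-2]))
conv_3d.set_weights([extrude_kernel(kernel_2d, depth=3)])
```

In a full transfer, the same extrusion would be applied layer by layer across the whole 3D network before fine-tuning on the 3D recognition task.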
| Original language | English |
|---|---|
| Article number | 1078 |
| Pages (from-to) | 1-18 |
| Number of pages | 18 |
| Journal | Sensors |
| Volume | 21 |
| Issue number | 4 |
| DOI | |
| Publication status | Published - 4 Feb 2021 |
Project and Funding Information
- Funding Info
- This paper has been supported by the project ELKARBOT under the Basque program ELKARTEK, grant agreement No. KK-2020/00092.