SLRProp: A Back-Propagation Variant of Sparse Low Rank Method for DNNs Reduction

Asier Garmendia-Orbegozo, Jose David Nuñez-Gonzalez*, Miguel Angel Anton

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

The application of deep neural networks (DNNs) in edge computing has emerged from the need for real-time, distributed responses from many devices in a large number of scenarios. To this end, reducing these original structures is urgent because of the high number of parameters needed to represent them. Consequently, the most representative components of the different layers are kept so that the network’s accuracy remains as close as possible to that of the full network. To do so, two different approaches have been developed in this work. First, the Sparse Low Rank (SLR) method has been applied to two different fully connected (FC) layers to observe their effect on the final response, and the method has also been applied to the last of these layers as a duplicate. In contrast, SLRProp has been proposed as a variant in which the relevance of each component of the previous FC layer is weighted as the sum of the products of that neuron’s absolute value and the relevances of the neurons in the last FC layer connected to it. The relationship of relevances across layers is thus taken into account. Experiments have been carried out on well-known architectures to conclude whether relevances propagated across layers have less effect on the final response of the network than independent intra-layer relevances.
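The sketch below is one possible reading of the SLRProp relevance rule described in the abstract, not the authors' implementation: each previous-layer neuron's relevance is taken as the sum of products of its absolute value and the relevances of the connected last-layer neurons. All names (slrprop_relevance, prev_activations, fc_weights, last_relevance) and the use of a connectivity mask are illustrative assumptions.

```python
import numpy as np

def slrprop_relevance(prev_activations: np.ndarray,
                      fc_weights: np.ndarray,
                      last_relevance: np.ndarray) -> np.ndarray:
    """Hypothetical relevance for neurons of the previous FC layer.

    prev_activations : (n_prev,)       values of the previous layer's neurons
    fc_weights       : (n_prev, n_last) connections between the two FC layers
                       (used here only as a connectivity mask)
    last_relevance   : (n_last,)        relevances already computed for the last FC layer
    """
    connected = (fc_weights != 0).astype(float)   # 1 where a connection exists
    # Sum of products of |activation_i| and the relevances of connected last-layer neurons
    return np.abs(prev_activations) * (connected @ last_relevance)

# Example: retain the k most relevant neurons of the previous FC layer after pruning
prev_act = np.random.randn(8)
weights = np.random.randn(8, 4)
r_last = np.abs(np.random.randn(4))   # assumed already computed, e.g., via SLR
r_prev = slrprop_relevance(prev_act, weights, r_last)
k = 4
keep = np.argsort(r_prev)[-k:]        # indices of neurons kept in the reduced layer
```

Under this reading, the intra-layer SLR score for the previous layer would be replaced (or complemented) by r_prev, which is what the paper's experiments compare against.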

Original language: English
Article number: 2718
Journal: Sensors
Volume: 23
Issue number: 5
DOIs
Publication status: Published - Mar 2023

Keywords

  • deep learning
  • edge computing
  • pruning
