Reduction of Vision-Based Models for Fall Detection

Asier Garmendia-Orbegozo*, Miguel Angel Anton, Jose David Nuñez-Gonzalez

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Because of the harm that falls cause to people, early detection is essential to avoid further damage. Many applications rely on technologies such as wearable sensors, environmental sensors, or cameras to acquire accurate information from individuals, but these often demand high computational resources, delaying the response of the entire system. The complexity of the models used to process the input data and detect these activities makes them nearly impossible to run on resource-limited devices, which are precisely the devices that could offer an immediate response and avoid unnecessary communication between sensors and centralized computing centers. In this work, we reduce the size of models that detect falls from image data. We processed image sequences as video frames, using data from two open-source datasets, and applied the Sparse Low Rank Method to reduce certain layers of the Convolutional Neural Networks that form the backbone of the models. Additionally, we replaced a convolutional block with a Long Short-Term Memory block to take into account the most recent information in these data sequences. The results show that performance was maintained reasonably well while the parameter size of the resulting models was significantly reduced.
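
To illustrate the kind of layer reduction summarized in the abstract, the sketch below approximates a trained convolutional layer with a truncated low-rank factorisation, i.e. the low-rank half of a sparse low-rank reduction scheme. This is not the authors' implementation: the function name low_rank_conv, the layer sizes, and the chosen rank are illustrative assumptions.

    # Minimal sketch (not the authors' code): approximate a trained Conv2d
    # with two smaller convolutions via a truncated SVD of its weight matrix,
    # reducing the parameter count when the rank is small.
    import torch
    import torch.nn as nn

    def low_rank_conv(conv: nn.Conv2d, rank: int) -> nn.Sequential:
        out_c, in_c, kh, kw = conv.weight.shape
        # Flatten the kernel into a 2-D matrix and take a truncated SVD.
        w = conv.weight.detach().reshape(out_c, in_c * kh * kw)
        u, s, vh = torch.linalg.svd(w, full_matrices=False)
        u, s, vh = u[:, :rank], s[:rank], vh[:rank, :]

        # First factor: `rank` spatial filters over the original input channels.
        first = nn.Conv2d(in_c, rank, (kh, kw), stride=conv.stride,
                          padding=conv.padding, bias=False)
        first.weight.data = (torch.diag(s) @ vh).reshape(rank, in_c, kh, kw)

        # Second factor: 1x1 convolution mixing the rank components back to out_c.
        second = nn.Conv2d(rank, out_c, 1, bias=conv.bias is not None)
        second.weight.data = u.reshape(out_c, rank, 1, 1)
        if conv.bias is not None:
            second.bias.data = conv.bias.detach().clone()
        return nn.Sequential(first, second)

    # Usage: compress one backbone layer and compare parameter counts.
    layer = nn.Conv2d(256, 256, 3, padding=1)
    reduced = low_rank_conv(layer, rank=32)
    print(sum(p.numel() for p in layer.parameters()),
          sum(p.numel() for p in reduced.parameters()))

A similar wrapping step could hand the reduced CNN features to an LSTM over the frame sequence, as described in the abstract, but that part is omitted here.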

Original language: English
Article number: 7256
Journal: Sensors
Volume: 24
Issue number: 22
DOIs
Publication status: Published - Nov 2024

Keywords

  • CNN
  • fall detection
  • LSTM
  • pruning
