Safety-aware Active Learning with Perceptual Ambiguity and Severity Assessment

  • Prajit T. Rajendran*
  • Guillaume Ollier
  • Huascar Espinoza
  • Morayo Adedjouma
  • Agnes Delaborde
  • Chokri Mraidha

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

Deep Neural Networks (DNNs) used in self-driving cars require large data coverage and extensive labelling to manage all potential hazards in safety-critical scenarios. Active learning approaches use automated data selection and labelling to build diverse datasets at lower human cost and with higher accuracy. Traditional active learning methods consider the uncertainty of the model predictions and the diversity of the data points for query selection. However, they are suboptimal at capturing many critical data points that are potentially risky from a safety perspective. In this position paper, we propose a novel approach that uses human feedback on perceptual data ambiguity together with a criticality score linked to system-level safety assessment. The approach includes a continual learning model that learns to identify corner cases and blind spots with high potential-risk impact, and combines it with uncertainty-sampling and diversity-sampling models to form a safety-aware acquisition function for active learning.
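To make the combination concrete, the acquisition described above could be sketched as a weighted sum of three normalised terms: predictive entropy (uncertainty), distance to the nearest labelled point (diversity), and an externally supplied criticality score. The weights, the entropy/nearest-neighbour choices, and the min-max normalisation are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def safety_aware_acquisition(probs, features, labelled_features, criticality,
                             w_unc=1.0, w_div=1.0, w_crit=1.0):
    """Score unlabelled points; higher means more worth labelling.

    probs: (N, C) softmax outputs of the current model on unlabelled points.
    features: (N, D) embeddings of the unlabelled points.
    labelled_features: (M, D) embeddings of the already-labelled pool.
    criticality: (N,) safety criticality scores (hypothetical external signal).
    """
    # Uncertainty term: predictive entropy of the class probabilities.
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)

    # Diversity term: distance to the nearest already-labelled point.
    dists = np.linalg.norm(
        features[:, None, :] - labelled_features[None, :, :], axis=2)
    diversity = dists.min(axis=1)

    # Min-max normalise each term to [0, 1], then combine with weights.
    def norm(x):
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    return (w_unc * norm(entropy)
            + w_div * norm(diversity)
            + w_crit * norm(criticality))
```

A batch query would then simply label the top-k scoring points, e.g. `np.argsort(scores)[-k:]`. In this sketch, a point that is uncertain, far from the labelled pool, and highly safety-critical dominates the ranking, which is the behaviour the abstract's safety-aware acquisition function aims for.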

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 3215
DOIs
Publication status: Published - 2022
Externally published: Yes
Event: 2022 Workshop on Artificial Intelligence Safety, AISafety 2022 - Vienna, Austria
Duration: 24 Jul 2022 - 25 Jul 2022

Keywords

  • Active learning
  • Autonomous driving
  • Human-in-the-loop learning
  • Safety

