
A collaborative content moderation framework for toxicity detection based on multitask neural networks and conformal estimates of annotation disagreement

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Content moderation typically combines the efforts of human moderators and machine learning models. However, these systems often rely on data where significant disagreement occurs during moderation, reflecting the subjective nature of toxicity perception. Rather than dismissing this disagreement as noise, we interpret it as a valuable signal that highlights the inherent ambiguity of the content—an insight missed when only the majority label is considered. In this work, we introduce a novel content moderation framework that emphasizes the importance of capturing annotation disagreement. Our approach leverages multitask neural networks with transformer architectures as their backbone, where toxicity classification serves as the primary task and annotation disagreement is modelled as an auxiliary task. By framing disagreement as a predictive problem within the multitask learning architecture, our method effectively captures the nuanced ambiguity of content toxicity. Additionally, we leverage uncertainty estimation techniques, specifically Conformal Prediction, to account for the model's inherent uncertainty in predicting toxicity and annotation disagreement. The framework also allows moderators to adjust thresholds for annotation disagreement, offering flexibility in determining when ambiguity should trigger a review. We demonstrate that our joint approach enhances model performance, calibration, and uncertainty estimation, while offering greater parameter efficiency and improving the review process in comparison to single-task methods.
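To illustrate the uncertainty-estimation step the abstract names, the sketch below shows standard split Conformal Prediction for a classifier: a threshold is calibrated on held-out nonconformity scores, then labels are collected into a prediction set. The function names and the choice of nonconformity score (one minus the model's probability for the true label) are illustrative assumptions, not the paper's exact procedure.

```python
import math

def conformal_threshold(cal_scores, alpha):
    """Split conformal calibration (illustrative sketch).

    cal_scores[i] is a nonconformity score for calibration example i,
    e.g. 1 - p_model(true_label_i | x_i). Returns the finite-sample
    corrected (1 - alpha) quantile used as the inclusion threshold.
    """
    n = len(cal_scores)
    # ceil((n + 1) * (1 - alpha)) gives the corrected quantile rank
    rank = math.ceil((n + 1) * (1 - alpha))
    idx = min(rank - 1, n - 1)  # clip to the largest score
    return sorted(cal_scores)[idx]

def prediction_set(probs, qhat):
    """Include every label whose nonconformity score 1 - p is <= qhat."""
    return [k for k, p in enumerate(probs) if 1 - p <= qhat]
```

With the coverage level alpha exposed as a parameter, a moderation pipeline could route inputs whose prediction set contains more than one label (i.e. ambiguous cases) to human review, which matches the reviewer-triggering role the abstract describes.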

Original language: English
Article number: 130542
Journal: Neurocomputing
Volume: 647
DOI
Status: Published - 28 Sept 2025

