Abstract
The estimation of the uncertainty featured by predictive machine learning models has gained significant momentum in recent years. Uncertainty estimation provides the user with additional information about the model's confidence in its predicted outcome. Despite the value of this information for user trust, there is little consensus on the different types of uncertainty that can be gauged in machine learning models, or on the suitability of the different techniques available to quantify the uncertainty of a specific model. This subject has been largely overlooked in the traffic modeling domain, even though measuring the confidence associated with traffic forecasts can significantly improve their actionability in practical traffic management systems. This work aims to fill this gap by reviewing the different uncertainty estimation techniques and metrics available in the literature, and by critically discussing how confidence levels computed for traffic forecasting models can help researchers and practitioners working in this research area. To ground this critical discussion in empirical evidence, it is further informed by experimental results produced by different uncertainty estimation techniques on real traffic data collected in Madrid (Spain), providing a general overview of the benefits and caveats of each technique, how they can be compared to one another, and how the measured uncertainty decreases with the amount, quality and diversity of the data used to produce the forecasts.
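To illustrate what attaching confidence levels to a traffic forecast can look like in practice, the sketch below uses quantile regression with gradient boosting to produce prediction intervals on a synthetic traffic-flow series. This is a minimal, illustrative example only: the synthetic data, lag features, chosen quantiles, and model are assumptions for demonstration and do not reproduce the paper's experimental setup or the specific techniques it compares.

```python
# Minimal illustrative sketch (not the paper's setup): quantile regression
# with gradient boosting to attach prediction intervals to traffic forecasts.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic "traffic flow" series: daily seasonality plus heteroscedastic noise,
# standing in for real loop-detector counts (e.g., vehicles per hour).
hours = np.arange(24 * 60)  # 60 days of hourly readings
season = np.sin(2 * np.pi * hours / 24)
flow = 400 + 300 * season + rng.normal(0, 30 + 20 * np.abs(season), hours.size)

# Simple lag features: predict the next hour from the previous 24 hours.
lags = 24
X = np.stack([flow[i:i + lags] for i in range(flow.size - lags)])
y = flow[lags:]
split = int(0.8 * len(y))
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

# One model per quantile: lower bound, point forecast (median), upper bound.
quantiles = {"lo": 0.05, "med": 0.5, "hi": 0.95}
preds = {}
for name, q in quantiles.items():
    model = GradientBoostingRegressor(loss="quantile", alpha=q,
                                      n_estimators=200, max_depth=3)
    model.fit(X_tr, y_tr)
    preds[name] = model.predict(X_te)

# Empirical coverage of the 90% interval: how often the true flow falls inside.
inside = (y_te >= preds["lo"]) & (y_te <= preds["hi"])
print(f"Mean interval width: {np.mean(preds['hi'] - preds['lo']):.1f} veh/h")
print(f"Empirical coverage of the 90% interval: {inside.mean():.2%}")
```

Comparing the width of such intervals against their empirical coverage is one simple way in which different uncertainty estimation techniques can be contrasted, in the spirit of the comparison discussed in the paper.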
| Original language | English |
| --- | --- |
| Pages (from-to) | 11180-11199 |
| Number of pages | 20 |
| Journal | IEEE Transactions on Intelligent Transportation Systems |
| Volume | 25 |
| Issue number | 9 |
| DOIs | |
| Publication status | Published - 2024 |
Keywords
- confidence
- Data models
- Estimation
- Forecasting
- Machine learning
- Measurement uncertainty
- Predictive models
- traffic forecasting
- Uncertainty
- Uncertainty estimation