Traffic Transformer: Transformer-based framework for temporal traffic accident prediction

Mansoor G. Al-Thani, Ziyu Sheng, Yuting Cao, Yin Yang

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Reliable prediction of traffic accidents is crucial for identifying potential hazards in advance, formulating effective preventative measures, and reducing accident incidence. Existing neural network-based models generally suffer from a limited field of perception and poor long-term dependency capturing abilities, which severely restrict their performance. To address the inherent shortcomings of current traffic prediction models, we propose the Traffic Transformer for multidimensional, multi-step traffic accident prediction. Initially, raw datasets chronicling sporadic traffic accidents are transformed, through a temporal discretization process, into multivariate, regularly sampled sequences that are amenable to sequential modeling. Subsequently, the Traffic Transformer captures and learns the hidden relationships between any elements of the input sequence, constructing accurate predictions for multiple forthcoming intervals of traffic accidents. Our proposed Traffic Transformer employs the sophisticated multi-head attention mechanism in lieu of the widely used recurrent architecture. This significant shift enhances the model's ability to capture long-range dependencies within time series data. Moreover, it facilitates more flexible and comprehensive learning of diverse hidden patterns within the sequences. It also offers the versatility of convenient extension and transfer to other time series forecasting tasks, demonstrating robust potential for further development in this field. Extensive comparative experiments conducted on a real-world dataset from Qatar demonstrate that our proposed Traffic Transformer significantly outperforms existing mainstream time series forecasting models across all evaluation metrics and forecast horizons. Notably, its Mean Absolute Percentage Error reaches a minimal value of only 4.43%, substantially lower than the error rates observed in other models. This remarkable performance underscores the Traffic Transformer's state-of-the-art predictive accuracy.
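
To make the two-stage pipeline the abstract describes more concrete, the sketch below gives one plausible reading of it: sporadic accident records are first discretized into a regularly sampled count series, and an encoder-only Transformer built on multi-head self-attention (rather than a recurrent network) then maps a window of past intervals to several forthcoming intervals. All names and hyperparameters here (`discretize`, `TrafficTransformer`, `d_model`, the hourly frequency, the 24-step window, the 6-step horizon) are illustrative assumptions, not the authors' published implementation; the paper itself should be consulted for the actual architecture.

```python
import pandas as pd
import torch
import torch.nn as nn

def discretize(accidents: pd.DataFrame, freq: str = "1h") -> pd.Series:
    """Aggregate sporadic accident records (one row per accident, with a
    datetime 'timestamp' column) into a regularly sampled count series;
    intervals with no accidents receive a count of zero."""
    return accidents.set_index("timestamp").resample(freq).size()

class TrafficTransformer(nn.Module):
    """Encoder-only Transformer: multi-head self-attention in place of
    recurrence, mapping `window` past intervals to `horizon` future ones."""
    def __init__(self, n_features=1, d_model=64, n_heads=4,
                 n_layers=2, window=24, horizon=6):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)
        # learnable positional embeddings, one per input time step
        self.pos = nn.Parameter(torch.zeros(1, window, d_model))
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, horizon)   # multi-step output

    def forward(self, x):              # x: (batch, window, n_features)
        h = self.encoder(self.input_proj(x) + self.pos)
        return self.head(h[:, -1])     # (batch, horizon) predictions

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, the abstract's headline metric
    (assumes nonzero targets)."""
    return 100.0 * torch.mean(torch.abs((y_true - y_pred) / y_true))

model = TrafficTransformer()
dummy = torch.randn(8, 24, 1)          # 8 windows of 24 past intervals
print(model(dummy).shape)              # torch.Size([8, 6])
```

Because every position in the input window attends directly to every other position, no information has to survive a step-by-step recurrent state, which is the mechanism behind the long-range dependency advantage the abstract claims.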
Original language: English
Pages (from-to): 12610-12629
Number of pages: 20
Journal: AIMS Mathematics
Volume: 9
Issue number: 5
DOIs
Publication status: Published - 2024

Keywords

  • Attention mechanism
  • Deep learning
  • Neural network
  • Traffic accident prediction
  • Transformer
