Automated efficient traffic gesture recognition using Swin transformer-based multi-input deep network with radar images
Date
2025
Journal Title
Journal ISSN
Volume Title
Publisher
Springer London Ltd
Access Rights
info:eu-repo/semantics/closedAccess
Abstract
Radar-based artificial intelligence (AI) applications have gained significant attention recently, spanning from fall detection to gesture recognition. Growing interest in this field first drove a shift towards deep convolutional networks; transformers then emerged to address the limitations of convolutional neural network methods and have become increasingly popular in the AI community. In this paper, we present a novel hybrid approach for radar-based traffic hand gesture classification using transformers. Traffic hand gesture recognition (HGR) is important in AI applications, and our proposed three-phase approach addresses both the efficiency and the effectiveness of traffic HGR. In the first phase, feature vectors are extracted from the input radar images using a pre-trained DenseNet-121 model. These features are then concatenated to gather information from the diverse radar sensors, followed by a patch extraction operation. The concatenated features from all inputs are processed in a Swin transformer block to support further HGR. The classification stage applies global average pooling, a Dense layer, and a Softmax layer in sequence. To assess the effectiveness of our method on the Ulm University radar dataset, we employ several performance metrics, including accuracy, precision, recall, and F1-score, achieving an average accuracy of 90.54%. We compare this score with existing approaches to demonstrate the competitiveness of the proposed method.
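To make the pipeline in the abstract concrete, the following is a minimal sketch in Python/Keras of a multi-input architecture of the kind described: shared pre-trained DenseNet-121 feature extraction per radar input, channel-wise concatenation, tokenization of the feature grid, a transformer block, and a global-average-pooling/Dense/Softmax head. This is not the authors' implementation: the number of radar sensors, input shape, class count, and embedding sizes are placeholder assumptions, and the Swin transformer block is simplified here to a plain transformer encoder (no shifted windows) for brevity.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet121

NUM_SENSORS = 3          # assumed number of radar inputs
NUM_CLASSES = 8          # assumed number of traffic gesture classes
IMG_SHAPE = (224, 224, 3)  # assumed radar-image input shape

# Pre-trained DenseNet-121 used as a frozen, shared feature extractor.
backbone = DenseNet121(include_top=False, weights="imagenet", input_shape=IMG_SHAPE)
backbone.trainable = False

def transformer_block(x, num_heads=4, key_dim=64, mlp_dim=256):
    # Stand-in for the Swin transformer block: a standard transformer
    # encoder with self-attention and an MLP, each with a residual path.
    attn = layers.MultiHeadAttention(num_heads=num_heads, key_dim=key_dim)(x, x)
    x = layers.LayerNormalization()(x + attn)
    mlp = layers.Dense(mlp_dim, activation="gelu")(x)
    mlp = layers.Dense(x.shape[-1])(mlp)
    return layers.LayerNormalization()(x + mlp)

# One input branch per radar sensor, all sharing the same backbone weights.
inputs = [layers.Input(shape=IMG_SHAPE, name=f"radar_{i}") for i in range(NUM_SENSORS)]
features = [backbone(inp) for inp in inputs]        # each: (7, 7, 1024) feature map
merged = layers.Concatenate(axis=-1)(features)      # fuse information across sensors

# "Patch extraction" approximated by flattening the 7x7 grid into tokens
# and projecting each token to a common embedding dimension.
tokens = layers.Reshape((merged.shape[1] * merged.shape[2], merged.shape[3]))(merged)
tokens = layers.Dense(128)(tokens)

x = transformer_block(tokens)

# Classification head: global average pooling -> Dense -> Softmax.
x = layers.GlobalAveragePooling1D()(x)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```

In this sketch the backbone is frozen so only the fusion, transformer, and head layers train, which keeps the model small; whether the paper fine-tunes DenseNet-121 end to end is not stated in the abstract.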
Description
Keywords
Deep learning, Radar images, Swin transformers, Traffic hand gesture
Source
Signal Image and Video Processing
WoS Q Value
Q3
Scopus Q Value
Q2
Volume
19
Issue
1