Browse by Author "Şengür, Abdulkadir"
Now showing 1 - 3 of 3
Item: Automated efficient traffic gesture recognition using Swin transformer-based multi-input deep network with radar images (Springer London Ltd, 2025)
Authors: Fırat, Hüseyin; Üzen, Hüseyin; Atila, Orhan; Şengür, Abdulkadir
Abstract: Radar-based artificial intelligence (AI) applications have recently gained significant attention, spanning from fall detection to gesture recognition. Growing interest in this field has driven a shift towards deep convolutional networks, and transformers have emerged to address the limitations of convolutional neural network methods, becoming increasingly popular in the AI community. In this paper, we present a novel hybrid approach for radar-based traffic hand gesture classification using transformers. Traffic hand gesture recognition (HGR) is an important AI application, and our proposed three-phase approach addresses both the efficiency and the effectiveness of traffic HGR. In the initial phase, feature vectors are extracted from the input radar images using the pre-trained DenseNet-121 model. These features are then concatenated to gather information from the diverse radar sensors, followed by a patch extraction operation. The concatenated features from all inputs are processed in a Swin transformer block to facilitate HGR. The classification stage applies global average pooling, a dense layer, and a softmax layer in sequence. To assess the effectiveness of our method on the Ulm University radar dataset, we employ various performance metrics, including accuracy, precision, recall, and F1-score, achieving an average accuracy of 90.54%.
We compare this score with existing approaches to demonstrate the competitiveness of the proposed method.

Item: Central serous retinopathy classification with deep learning-based multilevel feature extraction from optical coherence tomography images (Elsevier Ltd, 2025)
Authors: Üzen, Hüseyin; Fırat, Hüseyin; Alperen Özçelik, Salih Taha; Yusufoğlu, Elif; Çiçek, İpek Balıkçı; Şengür, Abdulkadir
Abstract: Central Serous Chorioretinopathy (CSCR) is an ocular disease characterized by fluid accumulation under the retina, which can lead to permanent visual impairment if not diagnosed early. This study presents a deep learning-based Convolutional Neural Network (CNN) model designed to automatically diagnose acute and chronic CSCR from Optical Coherence Tomography (OCT) images through multi-level feature extraction. The proposed architecture consists of consecutive layers like a traditional CNN; however, it also extracts diverse features by creating feature maps at four different levels (F1, F2, F3, F4) for the final feature map. At each level, the model processes information using group-wise convolution and a Pointwise Convolution Block (PCB). In this way, each feature group is further processed to obtain more representative features, enabling more independent learning. The four PCB outputs are then vectorized and combined to form the final feature map. Finally, classification prediction scores are obtained by applying a fully connected layer and a softmax function to this feature map. The experimental study utilized two datasets obtained from the Elazığ Ophthalmology Polyclinic, comprising 3860 OCT images from 488 individuals, categorized into acute CSCR, chronic CSCR, wet AMD, dry AMD, and healthy controls. Our proposed method improves accuracy by 0.77%, attaining 96.40% compared to the previous best of 95.73% by ResNet101. Precision is enhanced by 0.95%, reaching 95.16% over ResNet101's 94.21%.
The sensitivity (recall) is improved by 0.90%, achieving 95.65% versus ResNet101's 94.75%. Additionally, the F1 score is increased by 0.93%, attaining 95.38% compared to ResNet101's 94.45%. These results illustrate the effectiveness of our method, offering more precise and reliable diagnostic capabilities in OCT image classification. In conclusion, this study demonstrates the potential of artificial intelligence-supported diagnostic tools in the analysis of OCT images and contributes significantly to the development of early diagnosis and treatment strategies. © 2025 Elsevier Ltd

Item: Epilepsy diagnosis from EEG signals using continuous wavelet transform-based depthwise convolutional neural network model (MDPI, 2025)
Authors: Dişli, Fırat; Gedikpınar, Mehmet; Fırat, Hüseyin; Şengür, Abdulkadir; Güldemir, Hanifi; Koundal, Deepika
Abstract: Background/Objectives: Epilepsy is a prevalent neurological disorder characterized by seizures that significantly impact individuals and their social environments. Given the unpredictable nature of epileptic seizures, developing automated epilepsy diagnosis systems is increasingly important. Epilepsy diagnosis traditionally relies on analyzing EEG signals, and deep learning methods have recently gained prominence due to their ability to bypass manual feature extraction. Methods: This study proposes a continuous wavelet transform-based depthwise convolutional neural network (DCNN) for epilepsy diagnosis. The 35-channel EEG signals were transformed into 35 images, one per channel, using the continuous wavelet transform. These images were then concatenated horizontally and vertically into a single image (seven rows by five columns) using Python's PIL library, which served as the input for training the DCNN model. Results: The proposed model achieved strong performance on unseen test data: 95.99% accuracy, 94.27% sensitivity, 97.29% specificity, and 96.34% precision.
Comparative analyses with previous studies and state-of-the-art models demonstrated the superior performance of the DCNN model and image concatenation technique. Conclusions: Unlike earlier works, this approach did not employ additional classifiers or feature selection algorithms. The developed model and image concatenation method offer a novel methodology for epilepsy diagnosis that can be extended to different datasets, potentially providing a valuable tool to support neurologists globally.
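The seven-rows-by-five-columns concatenation described in the Methods of the epilepsy paper can be sketched as follows. This is a minimal illustration assuming 35 equally sized per-channel scalogram images; the function name `concatenate_grid` and the placeholder image sizes are illustrative, not the authors' actual code.

```python
from PIL import Image

def concatenate_grid(images, rows=7, cols=5):
    """Paste equally sized channel images into a rows x cols grid.

    `images` is a list of rows * cols PIL images (one per EEG channel),
    ordered row by row. Identical image sizes are assumed.
    """
    assert len(images) == rows * cols
    w, h = images[0].size
    grid = Image.new(images[0].mode, (cols * w, rows * h))
    for i, img in enumerate(images):
        r, c = divmod(i, cols)          # row-major placement
        grid.paste(img, (c * w, r * h))
    return grid

# Usage with 35 placeholder 64x64 grayscale images standing in for
# the continuous-wavelet-transform scalograms of each EEG channel:
channels = [Image.new("L", (64, 64), color=i * 7) for i in range(35)]
combined = concatenate_grid(channels)
print(combined.size)  # (320, 448): 5 columns * 64 px wide, 7 rows * 64 px tall
```

The combined image can then be fed to the network as a single training sample, so the grid layout encodes which channel each region came from.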