Hybrid 3D/2D Complete Inception Module and Convolutional Neural Network for Hyperspectral Remote Sensing Image Classification

dc.contributor.authorFirat, Huseyin
dc.contributor.authorAsker, Mehmet Emin
dc.contributor.authorBayindir, Mehmet Ilyas
dc.contributor.authorHanbay, Davut
dc.date.accessioned2024-04-24T16:02:18Z
dc.date.available2024-04-24T16:02:18Z
dc.date.issued2023
dc.departmentDicle Üniversitesien_US
dc.description.abstractClassification of hyperspectral remote sensing images (HRSIs) is a challenging task in image analysis and one of the most popular research topics. In recent years, many methods have been proposed to solve the HRSI classification problem. Compared to traditional machine learning methods, deep learning, especially convolutional neural networks (CNNs), is widely used for HRSI classification. CNN-based deep learning methods show remarkable performance in HRSI classification and have greatly advanced classification technology. In this study, a method combining the Hybrid 3D/2D Complete Inception module with the Hybrid 3D/2D CNN is proposed to solve the HRSI classification problem. In the proposed method, multi-level feature extraction is performed by using multiple convolution layers within the Inception module, which improves network performance. Conventional CNN-based methods use 2D CNNs for feature extraction, but 2D CNNs extract only spatial features. 3D CNNs extract joint spatial-spectral features; however, they are computationally expensive. Therefore, the proposed method adopts a hybrid approach, applying 3D CNNs first and 2D CNNs afterwards, which reduces computational complexity while extracting richer spatial features. In addition, PCA is used as a preprocessing step for optimal spectral band extraction. The proposed method was tested on the Indian Pines, Salinas, University of Pavia, HyRANK-Loukia and Houston datasets, which are frequently used in HRSI classification studies. The overall accuracies of the proposed method on these five datasets are 99.83%, 100%, 100%, 90.47% and 98.93%, respectively. These results show that the proposed method achieves higher classification performance than state-of-the-art methods.en_US
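The abstract describes two preprocessing/architecture ideas that can be illustrated in code: PCA-based reduction of the spectral bands, and the hybrid step where a 3D-convolution feature map is folded into a 2D-convolution input. The sketch below is a hypothetical NumPy illustration of those two shape transformations, not the authors' implementation; all function names, sizes, and the number of retained components (30) are assumptions for demonstration.

```python
import numpy as np

def pca_bands(cube, k):
    """Reduce the spectral axis of an (H, W, B) hyperspectral cube to k
    principal components (illustrative PCA via eigen-decomposition)."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)
    flat = flat - flat.mean(axis=0)                  # center each band
    cov = np.cov(flat, rowvar=False)                 # (B, B) band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)           # ascending eigenvalues
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]  # top-k components
    return (flat @ top).reshape(h, w, k)

def merge_spectral_axis(feat):
    """Fold a 3D-conv feature map (C, D, H, W) into a 2D-conv input
    (C*D, H, W), as done when chaining 3D and 2D convolutions."""
    c, d, h, w = feat.shape
    return feat.reshape(c * d, h, w)

# Example: an Indian Pines-sized cube (145 x 145 pixels, 200 bands)
cube = np.random.rand(145, 145, 200)
reduced = pca_bands(cube, 30)            # keep 30 spectral components
print(reduced.shape)                     # (145, 145, 30)

# Mock output of a 3D-conv stack: 8 channels, 18 spectral slices, 23x23 spatial
feat3d = np.random.rand(8, 18, 23, 23)
feat2d = merge_spectral_axis(feat3d)
print(feat2d.shape)                      # (144, 23, 23)
```

The reshape is the crux of the hybrid design: after the 3D stage has mixed spectral and spatial information, the remaining spectral slices are treated as extra channels so that the cheaper 2D stage can refine spatial features.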
dc.identifier.doi10.1007/s11063-022-10929-z
dc.identifier.endpage1130en_US
dc.identifier.issn1370-4621
dc.identifier.issn1573-773X
dc.identifier.issue2en_US
dc.identifier.scopus2-s2.0-85133226513
dc.identifier.scopusqualityQ2
dc.identifier.startpage1087en_US
dc.identifier.urihttps://doi.org/10.1007/s11063-022-10929-z
dc.identifier.urihttps://hdl.handle.net/11468/14734
dc.identifier.volume55en_US
dc.identifier.wosWOS:000819884700002
dc.identifier.wosqualityQ3
dc.indekslendigikaynakWeb of Science
dc.indekslendigikaynakScopus
dc.language.isoenen_US
dc.publisherSpringeren_US
dc.relation.ispartofNeural Processing Letters
dc.relation.publicationcategoryMakale - Uluslararası Hakemli Dergi - Kurum Öğretim Elemanıen_US
dc.rightsinfo:eu-repo/semantics/closedAccessen_US
dc.subjectRemote Sensingen_US
dc.subjectHyperspectral Image Classificationen_US
dc.subjectInception Modelen_US
dc.subjectConvolutional Neural Networken_US
dc.titleHybrid 3D/2D Complete Inception Module and Convolutional Neural Network for Hyperspectral Remote Sensing Image Classificationen_US
dc.typeArticleen_US
