Please use this identifier to cite or link to this item: https://ir.swu.ac.th/jspui/handle/123456789/27899
Full metadata record
DC Field | Value | Language
dc.contributor.author | Wuttichai Boonpook | th_TH
dc.contributor.author | Yumin Tan | th_TH
dc.contributor.author | Kritanai Torsri | th_TH
dc.contributor.author | Patcharin Kamsing | th_TH
dc.contributor.author | Peerapong Torteeka | th_TH
dc.contributor.author | Attawut Nardkulpat | th_TH
dc.date.accessioned | 2023-02-15T01:27:52Z | -
dc.date.available | 2023-02-15T01:27:52Z | -
dc.date.issued | 2023 | -
dc.identifier.uri | https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9991822 | -
dc.identifier.uri | https://ir.swu.ac.th/jspui/handle/123456789/27899 | -
dc.description.abstract | Urban building segmentation from remotely sensed imagery is challenging because buildings usually exhibit a wide variety of features. Furthermore, very high spatial resolution imagery captures many details of urban buildings, such as styles, small gaps between buildings, and building shadows. Hence, achieving satisfactory accuracy in detecting and extracting urban features from such highly detailed images remains difficult. Deep learning semantic segmentation using baseline networks works well for building extraction; however, their ability to extract buildings in shadowed areas, buildings with unclear features, and buildings separated by narrow gaps in dense building zones is still limited. In this article, we propose a parallel cross-learning-based pixel transferred deconvolutional network (PCL–PTD net) and use it to segment urban buildings from aerial photographs. The proposed method is evaluated and compared with traditional baseline networks. PCL–PTD net is composed of a parallel network, cross-learning functions, residual units in the encoder, and PTD in the decoder. Its performance is assessed on three datasets (the Inria aerial dataset, the International Society for Photogrammetry and Remote Sensing Potsdam dataset, and a UAV building dataset) to evaluate accuracy and robustness. We found that PCL–PTD net can improve the capacity of the supervised learning model to differentiate buildings in dense areas and to extract buildings covered by shadows. Compared with the baselines, the proposed network shows superior performance to all eight networks (SegNet, U-net, pyramid scene parsing network, PixelDCL, DeeplabV3+, U-Net++, context feature enhancement network, and improved | th_TH
dc.language.iso | en | th_TH
dc.title | PCL–PTD Net: Parallel Cross-Learning-Based Pixel Transferred Deconvolutional Network for Building Extraction in Dense Building Areas With Shadow | th_TH
dc.type | Article | th_TH
dc.subject.keyword | Building extraction | th_TH
dc.subject.keyword | Building shadow | th_TH
dc.subject.keyword | Dense building | th_TH
dc.subject.keyword | PCL–PTD net | th_TH
dc.subject.keyword | Semantic segmentation | th_TH
dc.identifier.bibliograpycitation | IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 16, 2023 | th_TH
dc.identifier.doi | 10.1109/JSTARS.2022.3230149 | -
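The abstract mentions residual units in the encoder part of PCL–PTD net. These follow the standard identity-skip pattern (output = activation(conv(input)) + input); below is a minimal single-channel, pure-Python sketch of that pattern only, not the paper's actual multi-channel implementation — the names `conv3x3_same` and `residual_unit` are illustrative.

```python
def conv3x3_same(x, k):
    """Naive 'same'-padded 3x3 convolution on a 2-D list of floats (single channel)."""
    h, w = len(x), len(x[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            s = 0.0
            for di in range(-1, 2):
                for dj in range(-1, 2):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:  # zero padding at the border
                        s += x[ii][jj] * k[di + 1][dj + 1]
            out[i][j] = s
    return out

def residual_unit(x, k):
    """Residual unit: ReLU(conv(x)) + x, i.e. an identity skip connection."""
    conv = conv3x3_same(x, k)
    return [[max(conv[i][j], 0.0) + x[i][j] for j in range(len(x[0]))]
            for i in range(len(x))]

# With an all-zero kernel the convolution contributes nothing,
# so the unit reduces to the identity skip and returns x unchanged.
x = [[1.0] * 4 for _ in range(4)]
zero_k = [[0.0] * 3 for _ in range(3)]
assert residual_unit(x, zero_k) == x
```

The skip connection is what lets deeper encoders train stably: the unit can fall back to passing its input through untouched, so stacking more units cannot make the representation worse.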
Appears in Collections: Geo-Journal Articles

Files in This Item:
There are no files associated with this item.


Items in SWU repository are protected by copyright, with all rights reserved, unless otherwise indicated.