DSpace Repository

PCL-PTD Net: Parallel Cross-Learning-Based Pixel Transferred Deconvolutional Network for Building Extraction in Dense Building Areas With Shadow


dc.contributor.author Boonpook W.
dc.contributor.author Tan Y.
dc.contributor.author Torsri K.
dc.contributor.author Kamsing P.
dc.contributor.author Torteeka P.
dc.contributor.author Nardkulpat A.
dc.contributor.other Srinakharinwirot University
dc.date.accessioned 2023-11-15T02:08:56Z
dc.date.available 2023-11-15T02:08:56Z
dc.date.issued 2023
dc.identifier.uri https://www.scopus.com/inward/record.uri?eid=2-s2.0-85146240129&doi=10.1109%2fJSTARS.2022.3230149&partnerID=40&md5=84763a48d32130f1436c6fa3d9ab1e1f
dc.identifier.uri https://ir.swu.ac.th/jspui/handle/123456789/29530
dc.description.abstract Urban building segmentation from remote-sensed imagery is challenging because building features usually vary widely. Furthermore, very high spatial resolution imagery captures many details of urban buildings, such as roof styles, small gaps between buildings, and building shadows. Hence, achieving satisfactory accuracy in detecting and extracting urban features from highly detailed images remains difficult. Deep learning semantic segmentation with baseline networks works well for building extraction; however, its ability to extract buildings in shadowed areas, with unclear building features, or separated by narrow gaps in dense building zones is still limited. In this article, we propose the parallel cross-learning-based pixel transferred deconvolutional network (PCL-PTD net), which is then used to segment urban buildings from aerial photographs. The proposed method is evaluated and intercompared with traditional baseline networks. PCL-PTD net is composed of a parallel network, cross-learning functions, residual units in the encoder, and pixel transferred deconvolution (PTD) in the decoder. The network is applied to three datasets (the Inria aerial dataset, the International Society for Photogrammetry and Remote Sensing Potsdam dataset, and a UAV building dataset) to evaluate its accuracy and robustness. As a result, we found that PCL-PTD net can improve the learning capacity of supervised models in differentiating buildings in dense areas and extracting buildings covered by shadows. Compared with the baselines, the proposed network shows superior performance to all eight networks (SegNet, U-net, pyramid scene parsing network, PixelDCL, DeeplabV3+, U-Net++, context feature enhancement network, and improved ResU-Net). The experiments on the three datasets also demonstrate the capability of the proposed framework and its performance. © 2008-2012 IEEE.
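The PTD decoder described in the abstract builds on pixel deconvolution (PixelDCL), in which an upsampled map is assembled by interleaving several intermediate feature maps so that adjacent output pixels come from different maps. As a rough, self-contained illustration only — none of this code is from the article, and the real layer generates the intermediate maps sequentially with learned convolutions — the interleaving step for 2x upsampling can be sketched in plain Python:

```python
# Illustrative sketch (assumption: not the authors' implementation) of the
# pixel-interleaving idea behind PixelDCL-style pixel deconvolution.
# Four H x W feature maps are woven into one 2H x 2W map, so neighboring
# output pixels originate from different intermediate maps.

def interleave_2x(f_tl, f_tr, f_bl, f_br):
    """Interleave four H x W maps (lists of lists) into a 2H x 2W map."""
    h, w = len(f_tl), len(f_tl[0])
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for i in range(h):
        for j in range(w):
            out[2 * i][2 * j] = f_tl[i][j]          # top-left pixel of each 2x2 block
            out[2 * i][2 * j + 1] = f_tr[i][j]      # top-right pixel
            out[2 * i + 1][2 * j] = f_bl[i][j]      # bottom-left pixel
            out[2 * i + 1][2 * j + 1] = f_br[i][j]  # bottom-right pixel
    return out

# Four constant 2x2 maps make the interleaving pattern easy to see.
maps = [[[v] * 2 for _ in range(2)] for v in (1.0, 2.0, 3.0, 4.0)]
up = interleave_2x(*maps)
print(len(up), len(up[0]))  # 4 4
print(up[0])  # [1.0, 2.0, 1.0, 2.0]
```

In PixelDCL proper (and, per the abstract, in the PTD decoder that extends it), each later intermediate map is conditioned on the earlier ones, which reduces the checkerboard artifacts of standard transposed convolution.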
dc.publisher Institute of Electrical and Electronics Engineers Inc.
dc.subject Building extraction
dc.subject building shadow
dc.subject dense building
dc.subject PCL-PTD net
dc.subject semantic segmentation
dc.title PCL-PTD Net: Parallel Cross-Learning-Based Pixel Transferred Deconvolutional Network for Building Extraction in Dense Building Areas With Shadow
dc.type Article
dc.rights.holder Scopus
dc.identifier.bibliograpycitation IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. Vol. 16 (2023), p. 773-786
dc.identifier.doi 10.1109/JSTARS.2022.3230149


Files in this item


There are no files associated with this item.

