Performance Improvement of Deep Convolutional Networks for Aerial Imagery Segmentation of Natural Disaster-Affected Areas

Deny Nugraha - Hasanuddin University, Gowa, 92171, Indonesia
Amil Ilham - Hasanuddin University, Gowa, 92171, Indonesia
Andani Achmad - Hasanuddin University, Gowa, 92171, Indonesia
Ardiaty Arief - Hasanuddin University, Gowa, 92171, Indonesia


DOI: http://dx.doi.org/10.30630/joiv.7.4.01383

Abstract


This study proposes a framework for improving the performance of Deep Convolutional Networks (DCN) and explores their application, using the best parameters and criteria, to accurately produce semantic segmentation of aerial imagery of natural disaster-affected areas. The study uses two models: U-Net and the Pyramid Scene Parsing Network (PSPNet). Extensive experiments show that the Grid Search algorithm improves the performance of both models, whereas previous research on aerial imagery segmentation of natural disaster-affected areas has not applied Grid Search for this purpose. The Grid Search algorithm tunes the DCN parameters, the data augmentation criteria, and the dataset criteria used for pre-training. The best-performing DCN model is PSPNet(152) (bpc), trained with the best parameters and criteria, which achieves a mean Intersection over Union (mIoU) of 83.34%, a significant increase of 43.09% over the baseline that uses only the default parameters and criteria. Validating this model with the k-fold cross-validation method produced an average accuracy of 99.04%. PSPNet(152) (bpc) can detect and identify objects with irregular shapes and sizes, important objects affected by natural disasters such as flooded buildings and roads, and small objects such as vehicles and pools, which are among the most challenging targets for semantic segmentation models. This study also shows that increasing the number of network layers in the PSPNet (18, 34, 50, 101, 152) models trained with the best parameters and criteria improves performance. The results indicate that further research should use a dedicated dataset of aerial imagery captured by Unmanned Aerial Vehicles (UAVs) during the pre-training stage for transfer learning to further improve DCN performance.
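The abstract describes the tuning procedure only at a high level. As a rough illustration of the idea, the sketch below shows an exhaustive Grid Search over training hyperparameters and augmentation criteria, scored by mean Intersection over Union (mIoU). The parameter grid and the train_and_predict function are hypothetical placeholders, not taken from the paper; the actual search spaces, models, and training code used by the authors are not given in the abstract.

```python
# Minimal sketch (not the authors' code): exhaustive Grid Search over DCN training
# hyperparameters and augmentation criteria, scored by mean Intersection over Union.
# `train_and_predict` is a hypothetical placeholder that would train U-Net / PSPNet
# with the given settings and return predicted label maps for a validation set.
from itertools import product
import numpy as np

def mean_iou(preds, targets, num_classes):
    """Mean IoU over classes for integer label maps of identical shape."""
    ious = []
    for c in range(num_classes):
        pred_c = preds == c
        target_c = targets == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:                      # class absent in both: skip it
            continue
        inter = np.logical_and(pred_c, target_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0

# Assumed search space for illustration; the grids used in the paper are not stated here.
param_grid = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size": [4, 8, 16],
    "augmentation": ["none", "flip", "flip+rotate"],
}

def grid_search(train_and_predict, val_targets, num_classes):
    """Try every combination in param_grid and keep the one with the highest mIoU."""
    best_score, best_params = -1.0, None
    keys = list(param_grid)
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        val_preds = train_and_predict(**params)   # hypothetical training/inference call
        score = mean_iou(val_preds, val_targets, num_classes)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score
```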


Keywords


Semantic segmentation; aerial imagery; natural disaster; deep convolutional networks; best parameters and criteria; grid search algorithm; k-fold cross-validation.

