Automatic Detection of Wrecked Airplanes from UAV Images
Abstract
Searching for the crash site of a missing airplane is the first step a search and rescue team takes before rescuing the victims. However, the vast exploration area, lack of technology, absence of access roads, and rough terrain make the search process nontrivial and cause considerable delay in reaching the victims. This paper therefore develops an automatic wrecked airplane detection system that uses visual information from aerial images, such as those captured by a UAV camera. A new deep network is proposed to robustly detect wrecked airplanes, which exhibit large variations in pose, scale, and color and are highly deformable. The network leverages its last layers, which capture more abstract and semantic information, by appending extra layers at the end of the network. To reduce missed detections, which are critical in wrecked airplane search, each image is decomposed into five patches that are fed forward through the network in a convolutional manner. Experiments show that the proposed method reaches AP = 91.87%, and we believe it can help search and rescue teams accelerate the search for wrecked airplanes and thus reduce the number of victims.
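As a rough illustration of the five-patch idea described in the abstract, the sketch below decomposes an image into overlapping patches, runs a detector on each patch, and maps the resulting boxes back to full-image coordinates. The exact decomposition used in the paper is not specified here, so the four-corners-plus-center scheme, the 60% patch scale, and the generic `detector(patch)` callable are illustrative assumptions only.

```python
import numpy as np

def five_patch_crops(image, patch_scale=0.6):
    """Split an image into five overlapping patches: four corners plus the center.
    This corner-plus-center layout is an assumption, not the paper's exact scheme."""
    h, w = image.shape[:2]
    ph, pw = int(h * patch_scale), int(w * patch_scale)
    offsets = [
        (0, 0),                          # top-left
        (0, w - pw),                     # top-right
        (h - ph, 0),                     # bottom-left
        (h - ph, w - pw),                # bottom-right
        ((h - ph) // 2, (w - pw) // 2),  # center
    ]
    # Keep each patch's (x, y) offset so detections can be mapped back later.
    return [(image[y:y + ph, x:x + pw], (x, y)) for (y, x) in offsets]

def detect_on_patches(image, detector):
    """Run a detector on every patch and translate its boxes to full-image
    coordinates, so small objects missed at full resolution can still be found."""
    all_boxes = []
    for patch, (ox, oy) in five_patch_crops(image):
        for (x1, y1, x2, y2, score) in detector(patch):
            all_boxes.append((x1 + ox, y1 + oy, x2 + ox, y2 + oy, score))
    return all_boxes

if __name__ == "__main__":
    # Dummy detector returning one fixed box per patch, just to show the data flow.
    dummy = lambda img: [(10, 10, 50, 50, 0.9)]
    img = np.zeros((480, 640, 3), dtype=np.uint8)
    print(detect_on_patches(img, dummy))
```

In practice the per-patch detections would also be merged with non-maximum suppression before reporting results; that step is omitted here for brevity.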
Copyright (c) 2019 EMITTER International Journal of Engineering Technology
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
The copyright to this article is transferred to Politeknik Elektronika Negeri Surabaya (PENS) if and when the article is accepted for publication. The undersigned hereby transfers any and all rights in and to the paper, including without limitation all copyrights, to PENS. The undersigned hereby represents and warrants that the paper is original and that he/she is the author of the paper, except for material that is clearly identified as to its original source, with permission notices from the copyright owners where required. The undersigned represents that he/she has the power and authority to make and execute this assignment. The copyright transfer form can be downloaded here.
The corresponding author signs for and accepts responsibility for releasing this material on behalf of any and all co-authors. This agreement is to be signed by at least one of the authors who have obtained the assent of the co-author(s) where applicable. After submission of this agreement signed by the corresponding author, changes of authorship or in the order of the authors listed will not be accepted.
Retained Rights/Terms and Conditions
- Authors retain all proprietary rights in any process, procedure, or article of manufacture described in the Work.
- Authors may reproduce or authorize others to reproduce the Work or derivative works for the author’s personal use or company use, provided that the source and the copyright notice of the publisher, Politeknik Elektronika Negeri Surabaya (PENS), are indicated.
- Authors are allowed to use and reuse their articles under the same CC-BY-NC-SA license as third parties.
- Third parties are allowed to share and adapt the published work for all non-commercial purposes; if they remix, transform, or build upon the material, they must distribute their contributions under the same license as the original.
Plagiarism Check
To prevent plagiarism, the manuscript will be checked twice by the Editorial Board of the EMITTER International Journal of Engineering Technology (EMITTER Journal) using the iThenticate Plagiarism Checker and the CrossCheck plagiarism screening service. The similarity score of a manuscript should be less than 25%. A manuscript that plagiarizes another author’s work, or the author's own, will be rejected by EMITTER Journal.
Authors are expected to comply with EMITTER Journal's plagiarism rules by downloading and signing the plagiarism declaration form here and submitting the signed form, along with the copyright transfer form, via online submission.