FloYO-Net: Enhancing Small Floating Waste Detection in Natural Waters Using Atrous YOLOv5s
Abstract
Detecting small and partially occluded objects in rivers and other water bodies remains a major challenge for real-time waste detection systems. Such objects are often missed because of their small size, low contrast, and cluttered surroundings. The task is further complicated by the scarcity of dedicated datasets for small floating debris, which limits the development of more capable detection models. To bridge this gap, we built D_six, a custom dataset of 495 high-resolution images covering six classes of floating waste under real-world conditions. In this study, we improve the YOLOv5s object detection model by integrating atrous (dilated) convolutions at three key backbone layers: P1/2, P3/8, and P5/32. These layers correspond to different scales of the feature pyramid, and the strategic placement of atrous convolution at each level helps the model recognize small and occluded objects more effectively. With a dilation rate of 6, the model's receptive field is expanded without increasing its parameter count or slowing inference. When trained and evaluated on the D_six dataset, FloYO-Net (Floating Object YOLO Network) consistently outperformed the standard YOLOv5s, achieving a mean Average Precision (mAP@0.5) of 0.828 and mAP@0.5:0.95 of 0.509, compared with 0.787 and 0.498, respectively. Gains were especially notable for hard-to-detect items such as plastic bottles and plastic drink containers, with average precision improvements of 6.6% and 7.1%, respectively. These results demonstrate that carefully placed atrous convolution can significantly improve detection accuracy, making it a practical enhancement for real-time environmental cleanup systems.
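The receptive-field expansion described above follows from standard dilated-convolution arithmetic. The sketch below is illustrative only (the 3×3 kernel size is an assumption based on typical YOLOv5s backbone convolutions, not a detail stated in the abstract); it shows why a dilation rate of 6 enlarges the receptive field without adding parameters:

```python
def effective_kernel(k: int, d: int) -> int:
    """Effective spatial extent of a k x k convolution with dilation d."""
    return k + (k - 1) * (d - 1)

def same_padding(k: int, d: int) -> int:
    """Padding that preserves spatial size for a stride-1 dilated convolution."""
    return (effective_kernel(k, d) - 1) // 2

# A 3x3 kernel with the paper's dilation rate of 6 samples the same
# area as a 13x13 kernel while still using only 9 weights,
# which is why model size and speed are unaffected.
print(effective_kernel(3, 6))  # -> 13
print(same_padding(3, 6))      # -> 6
```

In frameworks such as PyTorch this corresponds to setting the `dilation` (and matching `padding`) arguments of a standard 2-D convolution layer.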
Copyright (c) 2025 EMITTER International Journal of Engineering Technology

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.