Abstract
In the future, the rescue of lost, ill or injured persons will increasingly be carried out by autonomous drones. However, discovering humans in densely forested terrain is challenging because of occlusion, and robust detection mechanisms are required. We show that automated person detection under occlusion conditions can be notably improved by combining multi-perspective images before classification. Here, we employ image integration by airborne optical sectioning (AOS)—a synthetic aperture imaging technique that uses camera drones to capture unstructured thermal light fields—to achieve this with a precision and recall of 96% and 93%, respectively. Finding lost or injured people in dense forests is not generally feasible with conventional single thermal recordings, but becomes practical with AOS integral images. Our findings lay the foundation for effective future search-and-rescue technologies that can be applied in combination with autonomous or manned aircraft. They can also benefit other fields that currently suffer from inaccurate classification of partially occluded people, animals or objects.
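To illustrate the integration step described above, the following is a minimal sketch of how multi-perspective aerial frames might be combined into an AOS-style integral image. It is not the authors' implementation: the function name, the assumption of precomputed per-image homographies and the simple averaging are illustrative choices.

```python
import numpy as np
import cv2  # OpenCV, used here for perspective warping

def aos_integral_image(images, homographies, out_shape):
    """Combine multi-perspective thermal frames into one integral image
    focused on a common synthetic focal plane (e.g. the forest floor).

    images: iterable of single-channel frames as 2D NumPy arrays
    homographies: 3x3 matrices mapping each frame onto the focal plane;
        in practice these would follow from drone poses and intrinsics
    out_shape: (height, width) of the integral image
    """
    acc = np.zeros(out_shape, dtype=np.float64)
    weight = np.zeros(out_shape, dtype=np.float64)
    for img, H in zip(images, homographies):
        # cv2.warpPerspective expects the size as (width, height)
        warped = cv2.warpPerspective(img.astype(np.float64), H,
                                     out_shape[::-1])
        mask = cv2.warpPerspective(np.ones(img.shape), H,
                                   out_shape[::-1])
        acc += warped
        weight += mask
    # Averaging suppresses occluders (branches, foliage), which are
    # misaligned across views, while targets on the focal plane stay
    # registered and therefore remain sharp.
    return acc / np.maximum(weight, 1e-6)
```

The resulting integral image, rather than the individual frames, would then be passed to the person classifier.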
Data availability
The data collected in experiments with users can be downloaded from https://doi.org/10.5281/zenodo.3894773 (ref. 58) and include labels and augmented images for training, validation and testing, configuration files, trained network weights and results.
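As a convenience, the record can also be fetched programmatically. The sketch below uses the public Zenodo REST API (record ID 3894773, taken from the DOI above); the response fields shown follow that API at the time of writing and may change.

```python
import json
import urllib.request

# Record ID taken from the DOI 10.5281/zenodo.3894773 cited above.
RECORD_URL = "https://zenodo.org/api/records/3894773"

with urllib.request.urlopen(RECORD_URL) as resp:
    record = json.load(resp)

# Each entry lists a filename ("key") and a direct download link.
for entry in record["files"]:
    name = entry["key"]
    url = entry["links"]["self"]
    print(f"downloading {name} ...")
    urllib.request.urlretrieve(url, name)
```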
References
Burke, C. et al. Requirements and limitations of thermal drones for effective search and rescue in marine and coastal areas. Drones 3, 78 (2019).
Lygouras, E. et al. Unsupervised human detection with an embedded vision system on a fully autonomous UAV for search and rescue operations. Sensors 19, 3542 (2019).
Brunetti, A., Buongiorno, D., Trotta, G. F. & Bevilacqua, V. Computer vision and deep learning techniques for pedestrian detection and tracking: a survey. Neurocomputing 300, 17–33 (2018).
Yurtsever, E., Lambert, J., Carballo, A. & Takeda, K. A survey of autonomous driving: common practices and emerging technologies. IEEE Access 8, 58443–58469 (2020).
Moreira, A. et al. A tutorial on synthetic aperture radar. IEEE Geosci. Remote Sensing Mag. 1, 6–43 (2013).
Li, C. J. & Ling, H. Synthetic aperture radar imaging using a small consumer drone. In 2015 IEEE International Symposium on Antennas and Propagation & USNC/URSI National Radio Science Meeting 685–686 (IEEE, 2015); https://doi.org/10.1109/APS.2015.7304729
Rosen, P. A. et al. Synthetic aperture radar interferometry. Proc. IEEE 88, 333–382 (2000).
Levanda, R. & Leshem, A. Synthetic aperture radio telescopes. IEEE Signal Process. Mag. 27, 14–29 (2010).
Dravins, D., Lagadec, T. & Nuñez, P. D. Optical aperture synthesis with electronically connected telescopes. Nat. Commun. 6, 6852 (2015).
Ralston, T. S., Marks, D. L., Carney, P. S. & Boppart, S. A. Interferometric synthetic aperture microscopy. Nat. Phys. 3, 129–134 (2007).
Hayes, M. P. & Gough, P. T. Synthetic aperture sonar: a review of current status. IEEE J. Oceanic Eng. 34, 207–224 (2009).
Hansen, R. E. in Sonar Systems (ed. Kolev, N.) (InTech, 2011); https://www.intechopen.com/books/sonar-systems/introduction-to-synthetic-aperture-sonar
Jensen, J. A., Nikolov, S. I., Gammelmark, K. L. & Pedersen, M. H. Synthetic aperture ultrasound imaging. Ultrasonics 44, e5–e15 (2006).
Zhang, H. K. et al. Synthetic tracked aperture ultrasound imaging: design, simulation and experimental evaluation. J. Med. Imaging 3, 027001 (2016).
Barber, Z. W. & Dahl, J. R. Synthetic aperture ladar imaging demonstrations and information at very low return levels. Appl. Opt. 53, 5531–5537 (2014).
Turbide, S., Marchese, L., Terroux, M. & Bergeron, A. Synthetic aperture lidar as a future tool for earth observation. Proc. SPIE 10563, 105633V (2017).
Vaish, V., Wilburn, B., Joshi, N. & Levoy, M. Using plane + parallax for calibrating dense camera arrays. In Proc. 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004) Vol. 1, I–I (IEEE, 2004); https://doi.org/10.1109/CVPR.2004.1315006
Vaish, V., Levoy, M., Szeliski, R., Zitnick, C. L. & Kang, S. B. Reconstructing occluded surfaces using synthetic apertures: stereo, focus and robust measures. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06) Vol. 2, 2331–2338 (IEEE, 2006); https://doi.org/10.1109/CVPR.2006.244
Zhang, H., Jin, X. & Dai, Q. Synthetic aperture based on plenoptic camera for seeing through occlusions. In Proc. Advances in Multimedia Information Processing (PCM 2018) (eds Hong, R., Cheng, W.-H., Yamasaki, T., Wang, M. & Ngo, C.-W.) 158–167 (Springer, 2018).
Yang, T. et al. Kinect based real-time synthetic aperture imaging through occlusion. Multimedia Tools Appl. 75, 6925–6943 (2016).
Joshi, N., Avidan, S., Matusik, W. & Kriegman, D. J. Synthetic aperture tracking: tracking through occlusions. In 2007 IEEE 11th International Conference on Computer Vision 1–8 (IEEE, 2007); https://doi.org/10.1109/ICCV.2007.4409032
Pei, Z. et al. Occluded-object 3D reconstruction using camera array synthetic aperture imaging. Sensors 19, 607 (2019).
Yang, T. et al. All-in-focus synthetic aperture imaging. In Computer Vision—ECCV 2014, Lecture Notes in Computer Science Vol. 8694 (eds Fleet, D., Pajdla, T., Schiele, B. & Tuytelaars, T.) 1–15 (Springer, 2014); https://doi.org/10.1007/978-3-319-10599-4_1
Pei, Z., Zhang, Y., Chen, X. & Yang, Y.-H. Synthetic aperture imaging using pixel labeling via energy minimization. Pattern Recognit. 46, 174–187 (2013).
Kurmi, I., Schedl, D. C. & Bimber, O. Airborne optical sectioning. J. Imaging 4, 102 (2018).
Bimber, O., Kurmi, I., Schedl, D. C. & Potel, M. Synthetic aperture imaging with drones. IEEE Comput. Graph. Appl. 39, 8–15 (2019).
Kurmi, I., Schedl, D. C. & Bimber, O. Thermal airborne optical sectioning. Remote Sensing 11, 1668 (2019).
Kurmi, I., Schedl, D. C. & Bimber, O. A statistical view on synthetic aperture imaging for occlusion removal. IEEE Sensors J. 19 (2019); https://doi.org/10.1109/JSEN.2019.2922731
Schedl, D. C., Kurmi, I. & Bimber, O. Airborne optical sectioning for nesting observation. Sci. Rep. 10, 7254 (2020).
Hwang, S., Park, J., Kim, N., Choi, Y. & Kweon, I. S. Multispectral pedestrian detection: benchmark dataset and baselines. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2015); https://doi.org/10.1109/CVPR.2015.7298706
Xu, Z., Zhuang, J., Liu, Q., Zhou, J. & Peng, S. Benchmarking a large-scale FIR dataset for on-road pedestrian detection. Infrared Phys. Technol. 96, 199–208 (2019).
Kurmi, I., Schedl, D. C. & Bimber, O. Fast automatic visibility optimization for thermal synthetic aperture visualization. IEEE Geosci. Remote Sensing Lett. (2020); https://doi.org/10.1109/LGRS.2020.2987471
Redmon, J., Divvala, S., Girshick, R. & Farhadi, A. You only look once: unified, real-time object detection. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 779–788 (IEEE, 2016); https://doi.org/10.1109/CVPR.2016.91
Redmon, J. & Farhadi, A. YOLO9000: better, faster, stronger. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 7263–7271 (IEEE, 2017); https://doi.org/10.1109/CVPR.2017.690
Redmon, J. & Farhadi, A. YOLOv3: an incremental improvement. Preprint at https://arxiv.org/pdf/1804.02767.pdf (2018).
Shafiee, M. J., Chywl, B., Li, F. & Wong, A. Fast YOLO: a fast you only look once system for real-time embedded object detection in video. Preprint at https://arxiv.org/pdf/1709.05943.pdf (2017).
Vandersteegen, M., Vanbeeck, K. & Goedemé, T. Super accurate low latency object detection on a surveillance UAV. Preprint at https://arxiv.org/pdf/1904.02024.pdf (2019).
Yang, Y., Guo, B., Li, C. & Zhi, Y. in Genetic and Evolutionary Computing (eds Pan, J.-S., Lin, J. C.-W., Liang, Y. & Chu, S.-C.) 253–261 (Springer, 2020).
Vandersteegen, M., Van Beeck, K. & Goedemé, T. in Image Analysis and Recognition, Lecture Notes in Computer Science Vol. 10882 (eds Campilho, A., Karray, F. & ter Haar Romeny, B.) 419–426 (Springer, 2018); https://doi.org/10.1007/978-3-319-93000-8_47
Ivašić-Kos, M., Krišto, M. & Pobar, M. Human detection in thermal imaging using YOLO. In Proc. 2019 5th International Conference on Computer and Technology Applications 20–24 (ACM, 2019); https://doi.org/10.1145/3323933.3324076
Zheng, Y., Izzat, I. H. & Ziaee, S. GFD-SSD: gated fusion double SSD for multispectral pedestrian detection. Preprint at https://arxiv.org/pdf/1903.06999.pdf (2019).
Everingham, M., Van Gool, L., Williams, C. K. I., Winn, J. & Zisserman, A. The Pascal Visual Object Classes (VOC) Challenge. Int. J. Comput. Vis. 88, 303–338 (2010).
Finn, R. L. & Wright, D. Unmanned aircraft systems: surveillance, ethics and privacy in civil applications. Comput. Law Security Rev. 28, 184–194 (2012).
Rao, B., Gopi, A. G. & Maione, R. The societal impact of commercial drones. Technol. Soc. 45, 83–90 (2016).
Shakhatreh, H. et al. Unmanned aerial vehicles (UAVs): a survey on civil applications and key research challenges. IEEE Access 7, 48572–48634 (2019).
Lu, H., Wang, H., Zhang, Q., Yoon, S. W. & Won, D. A 3D convolutional neural network for volumetric image semantic segmentation. Procedia Manuf. 39, 422–428 (2019).
Tan, M., Pang, R. & Le, Q. V. EfficientDet: scalable and efficient object detection. Preprint at https://arxiv.org/pdf/1911.09070.pdf (2020).
Zhang, S., Chi, C., Yao, Y., Lei, Z. & Li, S. Z. Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2020); https://doi.org/10.1109/CVPR42600.2020.00978
Liu, S., Huang, D. & Wang, Y. Learning spatial fusion for single-shot object detection. Preprint at https://arxiv.org/pdf/1911.09516.pdf (2019).
Lee, Y. & Park, J. CenterMask: real-time anchor-free instance segmentation. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2020); https://doi.org/10.1109/CVPR42600.2020.01392
Schönberger, J. L. & Frahm, J.-M. Structure-from-motion revisited. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 4104–4113 (IEEE, 2016); https://doi.org/10.1109/CVPR.2016.445
Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 22, 1330–1334 (2000).
Bochkovskiy, A. et al. GitHub: YOLO v3 (2020); https://doi.org/10.5281/zenodo.3693999
Bochkovskiy, A., Wang, C.-Y. & Liao, H.-Y. M. YOLOv4: optimal speed and accuracy of object detection. Preprint at https://arxiv.org/pdf/2004.10934.pdf (2020).
He, K., Zhang, X., Ren, S. & Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 37, 1904–1916 (2015).
Huang, Z. et al. DC-SPP-YOLO: dense connection and spatial pyramid pooling based YOLO for object detection. Inf. Sci. 522, 241–258 (2020).
Russakovsky, O. et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115, 211–252 (2015).
Schedl, D. C., Kurmi, I. & Bimber, O. Data: search and rescue with airborne optical sectioning (2020); https://doi.org/10.5281/zenodo.3894773
Acknowledgements
This research was funded by the Austrian Science Fund (FWF) under grant no. P 32185-NBL and by the State of Upper Austria and the Austrian Federal Ministry of Education, Science and Research via the LIT (Linz Institute of Technology) under grant no. LIT-2019-8-SEE-114.
Author information
Contributions
D.C.S. and O.B. conceived and designed the experiments. D.C.S. and I.K. performed the experiments. D.C.S. and O.B. analysed the data. D.C.S. and I.K. contributed materials/analysis tools. D.C.S. and O.B. wrote the paper.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Peer review information Nature Machine Intelligence thanks Professor Hong Hua, Professor Daisuke Iwai and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Supplementary Information
Supplementary Sections 1–6, Figs. 1–6 and Tables 1 and 2.
Supplementary Video
Video showing our technique and our results.
Rights and permissions
About this article
Cite this article
Schedl, D. C., Kurmi, I. & Bimber, O. Search and rescue with airborne optical sectioning. Nat. Mach. Intell. 2, 783–790 (2020). https://doi.org/10.1038/s42256-020-00261-3
This article is cited by
-
Drone swarm strategy for the detection and tracking of occluded targets in complex environments
Communications Engineering (2023)
-
Combined person classification with airborne optical sectioning
Scientific Reports (2022)