
Search and rescue with airborne optical sectioning

A preprint version of the article is available at arXiv.

Abstract

In the future, rescuing lost, ill or injured persons will increasingly be carried out by autonomous drones. However, discovering humans in densely forested terrain is challenging because of occlusion, and robust detection mechanisms are required. We show that automated person detection under occlusion conditions can be notably improved by combining multi-perspective images before classification. Here, we employ image integration by airborne optical sectioning (AOS)—a synthetic aperture imaging technique that uses camera drones to capture unstructured thermal light fields—to achieve this with a precision and recall of 96% and 93%, respectively. Finding lost or injured people in dense forests is not generally feasible with thermal recordings, but becomes practical with the use of AOS integral images. Our findings lay the foundation for effective future search-and-rescue technologies that can be applied in combination with autonomous or manned aircraft. They can also be beneficial for other fields that currently suffer from inaccurate classification of partially occluded people, animals or objects.
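
To make the integration step concrete, the following is a minimal sketch of how multi-perspective thermal images can be combined into an AOS-style integral image before detection. It assumes each single image has already been registered to a common synthetic focal plane (for example, via a homography derived from the drone's pose); the function and variable names are illustrative and this is not the authors' released implementation (see Code availability below).

```python
# Minimal AOS-style image integration sketch (illustrative, not the
# authors' code): average perspective-registered thermal images on a
# synthetic focal plane so that occluders blur out while on-plane
# signals (for example, a person on the forest floor) reinforce.
import numpy as np
import cv2


def integrate_aos(images, homographies, out_shape):
    """Combine registered thermal images into one integral image.

    images:       single-channel thermal images (H x W arrays)
    homographies: 3x3 arrays mapping each image onto the focal plane
    out_shape:    (height, width) of the resulting integral image
    """
    h, w = out_shape
    acc = np.zeros((h, w), dtype=np.float32)     # sum of warped images
    weight = np.zeros((h, w), dtype=np.float32)  # per-pixel sample count
    for img, H in zip(images, homographies):
        img32 = img.astype(np.float32)
        acc += cv2.warpPerspective(img32, H, (w, h))
        weight += cv2.warpPerspective(np.ones_like(img32), H, (w, h))
    # Per-pixel average; pixels never covered by any image stay zero.
    return acc / np.maximum(weight, 1e-6)
```

The integral image produced this way can then be passed to a standard person detector (the references below include several YOLO variants); the point of the approach is that classification runs on the integrated image rather than on the individual, heavily occluded recordings.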

Fig. 1: Airborne optical sectioning.
Fig. 2: Results of an initial field experiment.
Fig. 3: AOS without and with occlusion.
Fig. 4: AOS person detection results for the 10 test scenes.
Fig. 5: Single-image person detection results for the 10 test scenes.

Data availability

The data collected in our experiments can be downloaded from https://doi.org/10.5281/zenodo.3894773 (ref. 58) and include labels and augmented images for training, validation and testing, configuration files, trained network weights and results.

Code availability

Code to compute Tables 2 and 3 is provided with the dataset (ref. 58). Further code that supports the findings of this study is available from the corresponding author upon reasonable request.

References

  1. Burke, C. et al. Requirements and limitations of thermal drones for effective search and rescue in marine and coastal areas. Drones 3, 78 (2019).

  2. Lygouras, E. et al. Unsupervised human detection with an embedded vision system on a fully autonomous UAV for search and rescue operations. Sensors 19, 3542 (2019).

  3. Brunetti, A., Buongiorno, D., Trotta, G. F. & Bevilacqua, V. Computer vision and deep learning techniques for pedestrian detection and tracking: a survey. Neurocomputing 300, 17–33 (2018).

  4. Yurtsever, E., Lambert, J., Carballo, A. & Takeda, K. A survey of autonomous driving: common practices and emerging technologies. IEEE Access 8, 58443–58469 (2020).

  5. Moreira, A. et al. A tutorial on synthetic aperture radar. IEEE Geosci. Remote Sensing Mag. 1, 6–43 (2013).

  6. Li, C. J. & Ling, H. Synthetic aperture radar imaging using a small consumer drone. In 2015 IEEE International Symposium on Antennas and Propagation USNC/URSI National Radio Science Meeting 685–686 (IEEE, 2015); https://doi.org/10.1109/APS.2015.7304729

  7. Rosen, P. A. et al. Synthetic aperture radar interferometry. Proc. IEEE 88, 333–382 (2000).

  8. Levanda, R. & Leshem, A. Synthetic aperture radio telescopes. IEEE Signal Process. Mag. 27, 14–29 (2010).

  9. Dravins, D., Lagadec, T. & Nuñez, P. D. Optical aperture synthesis with electronically connected telescopes. Nat. Commun. 6, 6852 (2015).

  10. Ralston, T. S., Marks, D. L., Carney, P. S. & Boppart, S. A. Interferometric synthetic aperture microscopy. Nat. Phys. 3, 129–134 (2007).

  11. Hayes, M. P. & Gough, P. T. Synthetic aperture sonar: a review of current status. IEEE J. Oceanic Eng. 34, 207–224 (2009).

  12. Hansen, R. E. in Sonar Systems (ed. Kolve, N.) (InTech, 2011); https://www.intechopen.com/books/sonar-systems/introduction-to-synthetic-aperture-sonar

  13. Jensen, J. A., Nikolov, S. I., Gammelmark, K. L. & Pedersen, M. H. Synthetic aperture ultrasound imaging. Ultrasonics 44, e5–e15 (2006).

  14. Zhang, H. K. et al. Synthetic tracked aperture ultrasound imaging: design, simulation and experimental evaluation. J. Med. Imaging 3, 027001 (2016).

  15. Barber, Z. W. & Dahl, J. R. Synthetic aperture ladar imaging demonstrations and information at very low return levels. Appl. Opt. 53, 5531–5537 (2014).

  16. Turbide, S., Marchese, L., Terroux, M. & Bergeron, A. Synthetic aperture lidar as a future tool for earth observation. Proc. SPIE 10563, 105633V (2017).

  17. Vaish, V., Wilburn, B., Joshi, N. & Levoy, M. Using plane + parallax for calibrating dense camera arrays. In Proc. 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004) Vol. 1, I–I (IEEE, 2004); https://doi.org/10.1109/CVPR.2004.1315006

  18. Vaish, V., Levoy, M., Szeliski, R., Zitnick, C. L. & Kang, S. B. Reconstructing occluded surfaces using synthetic apertures: stereo, focus and robust measures. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR06) Vol. 2, 2331–2338 (IEEE, 2006); https://doi.org/10.1109/CVPR.2006.244

  19. Zhang, H., Jin, X. & Dai, Q. Synthetic aperture based on plenoptic camera for seeing through occlusions. In Proc. Advances in Multimedia Information Processing (PCM 2018) (eds Hong, R., Cheng, W.-H., Yamasaki, T., Wang, M. & Ngo, C.-W.) 158–167 (Springer, 2018).

  20. Yang, T. et al. Kinect based real-time synthetic aperture imaging through occlusion. Multimedia Tools Appl. 75, 6925–6943 (2016).

  21. Joshi, N., Avidan, S., Matusik, W. & Kriegman, D. J. Synthetic aperture tracking: tracking through occlusions. In 2007 IEEE 11th International Conference on Computer Vision 1–8 (IEEE, 2007); https://doi.org/10.1109/ICCV.2007.4409032

  22. Pei, Z. et al. Occluded-object 3D reconstruction using camera array synthetic aperture imaging. Sensors 19, 607 (2019).

  23. Yang, T. et al. All-in-focus synthetic aperture imaging. In Computer Vision—ECCV 2014, Lecture Notes in Computer Science Vol. 8694 (eds Fleet, D., Pajdla, T., Schiele, B. & Tuytelaars, T.) 1–15 (Springer, 2014); https://doi.org/10.1007/978-3-319-10599-4_1

  24. Pei, Z., Zhang, Y., Chen, X. & Yang, Y.-H. Synthetic aperture imaging using pixel labeling via energy minimization. Pattern Recognit. 46, 174–187 (2013).

  25. Kurmi, I., Schedl, D. C. & Bimber, O. Airborne optical sectioning. J. Imaging 4, 102 (2018).

  26. Bimber, O., Kurmi, I., Schedl, D. C. & Potel, M. Synthetic aperture imaging with drones. IEEE Comput. Graph. Appl. 39, 8–15 (2019).

  27. Kurmi, I., Schedl, D. C. & Bimber, O. Thermal airborne optical sectioning. Remote Sensing 11, 1668 (2019).

  28. Kurmi, I., Schedl, D. C. & Bimber, O. A statistical view on synthetic aperture imaging for occlusion removal. IEEE Sensors J. 19 (2019); https://doi.org/10.1109/JSEN.2019.2922731

  29. Schedl, D. C., Kurmi, I. & Bimber, O. Airborne optical sectioning for nesting observation. Sci. Rep. 10, 7254 (2020).

  30. Hwang, S., Park, J., Kim, N., Choi, Y. & Kweon, I. S. Multispectral pedestrian detection: benchmark dataset and baselines. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2015); https://doi.org/10.1109/CVPR.2015.7298706

  31. Xu, Z., Zhuang, J., Liu, Q., Zhou, J. & Peng, S. Benchmarking a large-scale FIR dataset for on-road pedestrian detection. Infrared Phys. Technol. 96, 199–208 (2019).

  32. Kurmi, I., Schedl, D. C. & Bimber, O. Fast automatic visibility optimization for thermal synthetic aperture visualization. IEEE Geosci. Remote Sensing Lett. (2020); https://doi.org/10.1109/LGRS.2020.2987471

  33. Redmon, J., Divvala, S., Girshick, R. & Farhadi, A. You only look once: unified, real-time object detection. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 779–788 (IEEE, 2016); https://doi.org/10.1109/CVPR.2016.91

  34. Redmon, J. & Farhadi, A. YOLO9000: better, faster, stronger. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 7263–7271 (IEEE, 2017); https://doi.org/10.1109/CVPR.2017.690

  35. Redmon, J. & Farhadi, A. YOLOv3: an incremental improvement. Preprint at https://arxiv.org/pdf/1804.02767.pdf (2018).

  36. Shafiee, M. J., Chywl, B., Li, F. & Wong, A. Fast YOLO: a fast you only look once system for real-time embedded object detection in video. Preprint at https://arxiv.org/pdf/1709.05943.pdf (2017).

  37. Vandersteegen, M., Vanbeeck, K. & Goedemé, T. Super accurate low latency object detection on a surveillance UAV. Preprint at https://arxiv.org/pdf/1904.02024.pdf (2019).

  38. Yang, Y., Guo, B., Li, C. & Zhi, Y. in Genetic and Evolutionary Computing (eds Pan, J.-S., Lin, J. C.-W., Liang, Y. & Chu, S.-C.) 253–261 (Springer, 2020).

  39. Vandersteegen, M., Van Beeck, K. & Goedemé, T. in Image Analysis and Recognition, Lecture Notes in Computer Science Vol. 10882 (eds Campilho, A., Karray, F. & ter Haar Romeny, B.) 419–426 (Springer, 2018); https://doi.org/10.1007/978-3-319-93000-8_47

  40. Ivašić-Kos, M., Krišto, M. & Pobar, M. Human detection in thermal imaging using YOLO. In Proc. 2019 5th International Conference on Computer and Technology Applications 20–24 (ACM, 2019); https://doi.org/10.1145/3323933.3324076

  41. Zheng, Y., Izzat, I. H. & Ziaee, S. GFD-SSD: gated fusion double SSD for multispectral pedestrian detection. Preprint at https://arxiv.org/pdf/1903.06999.pdf (2019).

  42. Everingham, M., Van Gool, L., Williams, C. K. I., Winn, J. & Zisserman, A. The Pascal Visual Object Classes (VOC) Challenge. Int. J. Comput. Vis. 88, 303–338 (2010).

  43. Finn, R. L. & Wright, D. Unmanned aircraft systems: surveillance, ethics and privacy in civil applications. Comput. Law Security Rev. 28, 184–194 (2012).

  44. Rao, B., Gopi, A. G. & Maione, R. The societal impact of commercial drones. Technol. Soc. 45, 83–90 (2016).

  45. Shakhatreh, H. et al. Unmanned aerial vehicles (UAVs): a survey on civil applications and key research challenges. IEEE Access 7, 48572–48634 (2019).

  46. Lu, H., Wang, H., Zhang, Q., Yoon, S. W. & Won, D. A 3D convolutional neural network for volumetric image semantic segmentation. Procedia Manuf. 39, 422–428 (2019).

  47. Tan, M., Pang, R. & Le, Q. V. EfficientDet: scalable and efficient object detection. Preprint at https://arxiv.org/pdf/1911.09070.pdf (2020).

  48. Zhang, S., Chi, C., Yao, Y., Lei, Z. & Li, S. Z. Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2020); https://doi.org/10.1109/CVPR42600.2020.00978

  49. Liu, S., Huang, D. & Wang, Y. Learning spatial fusion for single-shot object detection. Preprint at https://arxiv.org/pdf/1911.09516.pdf (2019).

  50. Lee, Y. & Park, J. CenterMask: real-time anchor-free instance segmentation. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2020); https://doi.org/10.1109/CVPR42600.2020.01392

  51. Schönberger, J. L. & Frahm, J.-M. Structure-from-motion revisited. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 4104–4113 (IEEE, 2016); https://doi.org/10.1109/CVPR.2016.445

  52. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 22, 1330–1334 (2000).

  53. Bochkovskiy, A. et al. GitHub: YOLO v3 (2020); https://doi.org/10.5281/zenodo.3693999

  54. Bochkovskiy, A., Wang, C.-Y. & Liao, H.-Y. M. YOLOv4: optimal speed and accuracy of object detection. Preprint at https://arxiv.org/pdf/2004.10934.pdf (2020).

  55. He, K., Zhang, X., Ren, S. & Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 37, 1904–1916 (2015).

  56. Huang, Z. et al. DC-SPP-YOLO: dense connection and spatial pyramid pooling based YOLO for object detection. Inf. Sci. 522, 241–258 (2020).

  57. Russakovsky, O. et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115, 211–252 (2015).

  58. Schedl, D. C., Kurmi, I. & Bimber, O. Data: search and rescue with airborne optical sectioning (2020); https://doi.org/10.5281/zenodo.3894773

Acknowledgements

This research was funded by the Austrian Science Fund (FWF) under grant no. P 32185-NBL and by the State of Upper Austria and the Austrian Federal Ministry of Education, Science and Research via the LIT (Linz Institute of Technology) under grant no. LIT-2019-8-SEE-114.

Author information

Contributions

D.C.S. and O.B. conceived and designed the experiments. D.C.S. and I.K. performed the experiments. D.C.S. and O.B. analysed the data. D.C.S. and I.K. contributed materials/analysis tools. D.C.S. and O.B. wrote the paper.

Corresponding author

Correspondence to Oliver Bimber.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information Nature Machine Intelligence thanks Professor Hong Hua, Professor Daisuke Iwai and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Supplementary Sections 1–6, Figs. 1–6 and Tables 1 and 2.

Supplementary Video

Video showing our technique and our results.

About this article

Cite this article

Schedl, D.C., Kurmi, I. & Bimber, O. Search and rescue with airborne optical sectioning. Nat Mach Intell 2, 783–790 (2020). https://doi.org/10.1038/s42256-020-00261-3
