Abstract
Insect-computer hybrid robots are receiving increasing attention as a potential alternative to small artificial robots because of their superior locomotion capabilities and low manufacturing cost. However, steering an insect-computer hybrid robot through terrain littered with complex obstacles of various shapes and sizes remains challenging. Insects can inherently deal with certain obstacles by using their antennae to detect and avoid them, but this ability is limited and can be interfered with by control signals during navigation tasks, ultimately leaving the robot trapped in a specific place with no way to escape. Hybrid robots therefore need additional sensors that provide accurate perception and early warning of the external environment, so that obstacles are avoided before the robot gets trapped and navigation proceeds smoothly over rough terrain. However, because of insects’ tiny size and limited load capacity, hybrid robots can carry only a very restricted set of sensors. A monocular camera is well suited to insect-computer hybrid robots thanks to its small size, low power consumption, and robust information acquisition capability. This paper proposes a navigation algorithm with an integrated obstacle avoidance module that uses a monocular camera on the insect-computer hybrid robot. The monocular camera, combined with a deep-learning-based monocular depth estimation algorithm, produces depth maps of environmental obstacles, and the navigation algorithm generates control commands that drive the hybrid robot away from obstacles according to the distribution of obstacle distances in the depth map. To ensure the performance of the monocular depth estimation model in insect-computer hybrid robot scenarios, we collected the first dataset captured from the viewpoint of a small robot for model training. In addition, we propose a simple but effective depth map processing method that derives obstacle avoidance commands with a weighted-sum scheme. With the obstacle avoidance module, the success rate of the navigation experiment improves significantly, from 6.7% to 73.3%. Experimental results show that our navigation algorithm detects obstacles in advance and guides the hybrid robot around them before it gets trapped.
Introduction
There are high expectations for the application of small robots in exploration missions in confined spaces and in future search-and-rescue scenarios under rubble1,2,3,4. However, deploying small artificial robots in such scenarios remains challenging owing to difficulties in design and manufacturing and to their limited operating time. Insect-computer hybrid robots, consisting of an insect and a microcontroller2,5,6, have emerged as an alternative to small artificial robots (Fig. 1a). The living insect serves as the delivery platform, converting bioenergy into kinetic energy to move forward. The microcontroller generates control decisions by fusing information from the various sensors on the hybrid robot platform and stimulates the corresponding sensory organs of the insect to induce the desired movements. Insect-computer hybrid robots with different capabilities can be obtained by using different species of insects as delivery platforms. For example, insects such as beetles or dragonflies can be employed to create miniaturized insect-computer hybrid robots that can fly7,8,9. However, these aerial robots are unsuitable for scenes with complex terrain, such as under ruins. Conversely, insects with excellent climbing abilities, such as cockroaches, yield small hybrid robots that can adapt to complex terrain10,11. These insect-computer hybrid robots have demonstrated superior locomotor performance and range compared with small artificial robots7,12.
One of the most critical tasks in developing a hybrid robot is insect motion control. In recent years, there has been an influx of work investigating control protocols for insect movement13,14,15,16,17,18,19. Ma et al. demonstrated that locust jumping movements can be induced by stimulating their leg muscles19. Choo et al. initiated flight behavior by applying electrical stimulation to the beetle’s dorsal longitudinal muscles20. Yu et al. studied the optimal electrical stimulation characteristics of bees’ unilateral optic lobes to induce their turning behavior21. Stimulating cockroach cerci and unilateral antennae can generate acceleration and turning behaviors11. Building on this foundation, some researchers have further developed insect navigation algorithms based on these locomotion control protocols10,22. These navigation algorithms, however, are still at a very preliminary stage, because the terrain and environmental conditions faced by insect-computer hybrid robots in future practical applications will be much more complex. Obstacle sensing is crucial for robot navigation in rugged terrain. Studies have shown that insects can use their antennae to sense and avoid obstacles23, but the detection range of the antennae is limited: obstacles are often detected only when the insect touches them, at which point the robot is already too close for reliable avoidance. Collisions between the robot and the obstacle also disturb the insect’s motion. In addition, many studies steer insects by stimulating the antennae, which impairs or even completely disables the insects’ own obstacle avoidance ability11. Moreover, when the insects are under control stimulation, their reactions upon detecting an obstacle are unpredictable and unreliable for navigation; they may fall into deeper traps while avoiding the obstacle and be unable to escape. Therefore, relying solely on the insects’ innate obstacle avoidance to achieve the established navigation goal is unrealistic.
To expand the adaptability of insect-computer hybrid robots to complex terrains, we need to enhance the robot platform’s ability to perceive its surroundings. Additional sensors can significantly improve the robot’s obstacle-avoidance capability. The sensors that currently provide reliable obstacle detection are LiDAR and RGB cameras. Insect-computer hybrid robots have small dimensions and hence limited load capacity; they can only carry small sensors with low power consumption. LiDAR sensors are therefore unsuitable for insect-computer hybrid robot platforms. In contrast, the monocular RGB camera is well suited to small hybrid robots owing to its small size, low power consumption, and robust information acquisition capabilities24,25. A hybrid robot with an integrated onboard RGB camera, CameraRoach, was presented by Rasakatla et al.25. They demonstrated several applications, such as navigating a cockroach robot by recognizing arrows that indicate direction. However, they did not further process the images taken by the camera to explore additional applications of monocular cameras.
Obstacle identification and avoidance can be achieved by using the monocular camera to predict the distance between the camera and objects in the environment26,27,28,29. In recent years, depth estimation from a monocular camera, i.e., monocular depth estimation, has developed at an unprecedented pace thanks to the growth of deep learning. Depending on the training data, monocular depth estimation can be categorized into supervised30,31,32,33,34 and unsupervised35,36,37,38,39,40 learning approaches. Since unsupervised methods do not require images labeled with ground-truth depth for training, they dramatically lower the barrier to acquiring training data and have therefore gained increasing attention. Many reports have shown monocular cameras guiding drones or unmanned vehicles in obstacle avoidance41,42,43. However, applying monocular cameras to the navigation tasks of insect-computer hybrid robots remains challenging because of their particular application scenarios. The generalization ability of deep learning models across different scenes is a critical factor limiting their applicability. Enriching and expanding the diversity of datasets helps address this issue, but collecting new datasets is costly and time-consuming44. Currently, no monocular depth estimation model can provide sufficiently accurate predictions for insect-computer hybrid robot scenarios because no dataset has been captured from the perspective of these robots. Existing public datasets, such as KITTI45 and Cityscapes46, mainly serve autonomous driving. From an insect’s perspective, the world differs significantly from the views captured in these datasets, so models trained on them struggle to provide usable predictions for the navigation tasks of small robots, which limits the application of monocular cameras to insect-computer hybrid robots. Another unresolved problem is how to generate obstacle avoidance control commands for insect-computer hybrid robots from the depth maps produced by monocular depth estimation models. To overcome these challenges, we propose a navigation algorithm with obstacle avoidance functions for insect-computer hybrid robots using a monocular camera. Specifically, our contributions can be summarized as follows:
1. To enhance the obstacle avoidance capabilities of insect-computer hybrid robots, we developed the first navigation algorithm with an integrated obstacle avoidance function using a monocular camera.
2. An unsupervised monocular depth estimation model is used to process images from the monocular camera and obtain the depth information of obstacles. We collected the first dataset captured from the viewpoint of insects, the SmallRobot Dataset, and used it to train a monocular depth estimation model that provides accurate depth predictions for the insect-computer hybrid robot.
3. We propose a simple but effective method for processing the depth maps to generate obstacle avoidance commands for insect-computer hybrid robots.
Results and discussion
To evaluate the effectiveness of the obstacle avoidance module, we conducted point-to-point navigation experiments with insect-computer hybrid robots guided by the navigation algorithm with and without the obstacle avoidance feature. The navigation algorithm drove the insect-computer hybrid robot from the start point to the destination with an obstacle in between, as illustrated in Fig. 2. The obstacle has a corner with three closed sides, which can trap the navigating hybrid robot in a deadlock. If the robot successfully navigated around the obstacle and reached the destination, we counted the attempt as successful; if it failed to get around the obstacle and became stuck in the dead corner, we counted the attempt as failed. We conducted the navigation experiments in two setups, with and without the Obstacle Avoidance Module, and each setup was repeated for 15 trials using three insect-computer hybrid robots. Figure 2a shows the motion trajectories of hybrid robots guided by the navigation algorithm without the Obstacle Avoidance Module, and Fig. 2b shows the trajectories under guidance with the Obstacle Avoidance Module. The comparison shows that the Obstacle Avoidance Module grants the hybrid robot markedly better obstacle avoidance capabilities: after integrating the module into the navigation algorithm, the success rate of the navigation task soared from 6.7% to 73.3% (Fig. 2c).
A more in-depth study found that the navigation task tended to fail when the hybrid robot entered a specific range close to the obstacle, which we call the risk zone (Fig. 2d). When the robot was inside the risk zone, collisions between the robot and the obstacle hindered the robot’s motion, and little space was left for posture correction. With the navigation algorithm lacking the Obstacle Avoidance Module, the robot entered the risk zone in up to 93.3% of the attempts; with the integrated Obstacle Avoidance Module, this figure was only 40%. Meanwhile, among these 40% of attempts, 33.3% of the robots were directed back out of the risk zone by the navigation algorithm, whereas for the algorithm without obstacle avoidance this number was 0. This indicates that the Obstacle Avoidance Module can anticipate the presence of obstacles and act to avoid them, allowing the robot to correct its direction early and stay out of the risk zone.
Another cause of navigation failure is the conflict between navigation commands and obstacles. Since the navigation algorithm without the Obstacle Avoidance Module cannot detect the presence of obstacles, its General Navigation Module forcefully corrects the robot’s orientation to face the destination while disregarding the shape of the obstacle. This leads to conflicts between the navigation commands and the obstacle, which can leave the robot trapped against it (Fig. 2e). In contrast, the algorithm integrated with the Obstacle Avoidance Module prioritizes obstacle avoidance operations, ensuring that the robot avoids the obstacle before approaching the destination (Fig. 2f).
Applying monocular depth estimation to robots can empower them with superior obstacle-avoidance capabilities. However, the generalization ability of deep-learning-based monocular depth estimation is a major challenge that limits its deployment. Because of their tiny size and unique camera viewpoint, insect-computer hybrid robots place special requirements on the training data, and none of the existing publicly available datasets for monocular depth estimation meets the deployment requirements of insect-computer hybrid robot applications. Consequently, models trained on these datasets cannot produce reliable predictions when applied to insect-computer hybrid robots. To overcome this issue, we collected a training dataset for monocular depth estimation models that suits small robots such as insect-computer hybrid robots. In Fig. 3, the first column shows images taken from an insect’s point of view, the second column shows the predictions of the model trained with the KITTI dataset, and the third column shows the predictions of the model trained with our collected dataset, the SmallRobot Dataset. The model trained on KITTI clearly cannot generate reasonable, reliable depth maps for images taken from an insect’s perspective, whereas the model trained on the SmallRobot Dataset produces high-quality depth maps with sharp edges.
Another challenge is converting the depth map into obstacle avoidance control commands. Artificial devices such as drones have higher control accuracy and faster movement speeds, so their obstacle avoidance algorithms need to determine the contour boundaries of obstacles to avoid collisions41,42, which may require additional features such as object detection and thus more computation. The insect-computer hybrid robot’s biological body, however, makes it more tolerant of collisions and not easily damaged in a crash. We can therefore drop the need for obstacle edge detection and instead generate obstacle avoidance commands based only on the trend of the depth distribution. Figure 4 shows examples of generating an obstacle avoidance command from an RGB image. The first column contains the input RGB images, which the monocular depth estimation model processes to produce depth maps. The weighted sums (Turn Left, Turn Right, and Go Forward) are then computed according to the proposed obstacle avoidance algorithm, and finally these values are fed into a SoftMax to obtain the obstacle avoidance command. For the first and third images, the obstacle avoidance algorithm generates steering commands that drive the robot away from the obstacles according to their shape; for the second image, the algorithm maintains the robot’s moving direction. These commands are reasonable and consistent with human judgment, validating the effectiveness of the proposed obstacle avoidance algorithm.
This paper demonstrates the first successful automatic navigation algorithm with an obstacle avoidance function using a monocular camera for an insect-computer hybrid robot, and experiments prove its effectiveness. The ease of fabrication of insect-computer hybrid robots makes them suitable for a wide range of potential applications, and our algorithm helps them overcome their deficiencies in obstacle recognition and avoidance. However, the microcontroller used to deploy the algorithm is still unwieldy; in the future, we will develop a microcontroller with integrated image acquisition and stimulation modules. In addition, limited by the current storage and processing capabilities of microcontrollers, we send images back to the workstation via WiFi for processing, which increases energy consumption. We are developing an ultra-lightweight depth model that can be deployed on the insect side so that the model and its inference run onboard, reducing the amount of information transmitted. Another essential follow-up issue is ensuring that the camera is mounted horizontally; we are also developing structures that simplify camera installation and maintain a horizontal position.
Methods
Insect platform
We used the Madagascar hissing cockroach, which has excellent climbing abilities, as the platform for building the insect-computer hybrid robot (Fig. 1a). Their large size (5.7 ± 0.6 cm)10 gives them a robust carrying capacity25, and there are many mature control protocols and demonstrations for controlling Madagascar cockroaches1,10,11. These traits make them well suited as platforms for insect-computer hybrid robots. We kept the cockroaches in a laboratory environment with suitable temperature and humidity and regularly provided them with water and food. The experimental procedures of this study are in accordance with the literature11,18,47,48; we followed similar steps for insect anesthesia and electrode implantation and adhered to high ethical standards throughout the experiments.
Microcontroller and implantation methods
The controller used to control insect movement comprises two parts: an ESP32-CAM with an OV2640 camera for image acquisition and a custom-designed stimulation module for outputting control signals (Fig. 1b). The ESP32-CAM from Ai-Thinker Technology is built around the ESP32-S MCU module, which supports WiFi and Bluetooth communication. The ESP32-CAM captures RGB images at a resolution of 320 × 240 pixels and passes them to the workstation via WiFi. The camera was carefully inspected to ensure it was mounted horizontally. The stimulation module employs an MSP432P4011 as the central controller; its components are shown in Supplementary Fig. 2. A CC1352 serves as the Bluetooth module and an AD5504 as the signal generator. The stimulation module has four output channels that generate voltages from 0 to 12 V, although our navigation experiments only use stimulus signals below 3 V to control insect movements. The stimulation module receives commands from the workstation via Bluetooth and creates the stimulus signals that control insect movement. A Li-Po battery with a nominal voltage of 3.7 V and a capacity of 180 mAh powers the two modules simultaneously. The controller can be effortlessly detached from a hybrid robot and reused on another insect to build a fully functional hybrid robot.
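To make the command pathway concrete, the following sketch shows how a workstation-side script could issue a movement command to the stimulation module over BLE using the Python bleak library. The device address, characteristic UUID, and one-byte command encoding are hypothetical placeholders; the paper does not specify the module’s BLE interface.

```python
# A hypothetical sketch of issuing a stimulation command over BLE with the Python
# "bleak" library. The device address, characteristic UUID, and command encoding
# below are illustrative placeholders, not the module's actual interface.
import asyncio
from bleak import BleakClient

STIM_MODULE_ADDRESS = "AA:BB:CC:DD:EE:FF"                    # hypothetical BLE address
COMMAND_CHAR_UUID = "0000ffe1-0000-1000-8000-00805f9b34fb"   # hypothetical characteristic

COMMANDS = {"Go Forward": 0x01, "Turn Left": 0x02, "Turn Right": 0x03}  # assumed encoding

async def send_command(command: str) -> None:
    """Write a one-byte command to the stimulation module's command characteristic."""
    async with BleakClient(STIM_MODULE_ADDRESS) as client:
        await client.write_gatt_char(COMMAND_CHAR_UUID, bytes([COMMANDS[command]]))

if __name__ == "__main__":
    asyncio.run(send_command("Go Forward"))
```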
We adopted the same electrode implantation method as in Erickson et al.’s work11 to control the motion of the cockroaches. Electrodes implanted into the cerci induce forward movement, and electrodes implanted into the antennae induce turning movements. An additional electrode implanted into the cockroach’s abdomen serves as the common ground. The four electrodes were fixed on the cockroach’s body with melted beeswax and connected by wires to the four channels of the controller’s stimulation module. Preparing and assembling an insect-computer hybrid robot takes approximately 15 minutes.
Monocular depth estimation model
We followed an unsupervised approach that uses image sequences from a monocular camera as training data to train the monocular depth estimation model (Fig. 5). The depth estimation model was trained jointly with a pose estimation model38,39. In the training phase, the target image is first fed into the depth estimation model to generate a predicted depth map. At the same time, the image pair consisting of the target image and an adjacent frame (the reference image) is sent to the pose estimation network to calculate the camera’s pose change between the two frames. The correspondence between pixels in the target image and the reference image is then computed from the camera’s pose change and the depth map, and the target image is synthesized by sampling the corresponding pixels from the reference image. Training optimizes the models by minimizing the difference between the synthesized and real target images. We used the same network as in Godard et al.’s work38 for depth estimation and a pose estimation network with a ResNet-50 backbone, and we followed the loss function and auto-masking technique proposed by Godard et al.38. To keep the depth maps on a consistent scale, we added a scale consistency loss49. Training was conducted on an NVIDIA GeForce RTX 3090 GPU with 24 GB of memory.
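For concreteness, the following is a minimal PyTorch sketch of the photometric reconstruction loss commonly used in this family of methods38, a weighted combination of SSIM and L1 between the real and synthesized target images; the constant α = 0.85 and the 3 × 3 average-pooled SSIM follow the common formulation, and the function names are ours.

```python
# A minimal sketch (PyTorch assumed) of the photometric reconstruction loss used in
# self-supervised monocular depth training: a weighted sum of SSIM and L1 between
# the target image and the view synthesized from the reference frame.
import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM dissimilarity computed with 3x3 average pooling."""
    mu_x = F.avg_pool2d(x, 3, 1, 1)
    mu_y = F.avg_pool2d(y, 3, 1, 1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return torch.clamp((1 - num / den) / 2, 0, 1)

def photometric_loss(target, synthesized, alpha=0.85):
    """Per-pixel reconstruction error between the real target image and the image
    synthesized from the reference frame via the predicted depth and pose."""
    l1 = (target - synthesized).abs().mean(1, keepdim=True)
    ssim_err = ssim(target, synthesized).mean(1, keepdim=True)
    return alpha * ssim_err + (1 - alpha) * l1
```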
Dataset captured from small robot’s view
The performance of a depth estimation network in its application scenario depends heavily on the training dataset, and no publicly available dataset suits the application scenarios of insect-computer hybrid robots. Monocular depth estimation models trained on existing datasets therefore struggle to extend to the navigation tasks of insect-computer hybrid robots. To overcome this issue, we collected the SmallRobot Dataset, a dataset captured from the viewpoint of tiny robots and suitable for the vision models of small robots.
We employed the ESP32-CAM, mounted on a tray and powered by a power bank, to capture the dataset. The tray had a lever with a handle that allowed the operator to move the ESP32-CAM while capturing images. The ESP32-CAM transmitted the images to a laptop via WiFi, where a Python image capture program collected images at intervals of 0.01 s. The resolution of the images was 320 × 240 pixels.
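A minimal sketch of such a capture loop is given below, assuming the ESP32-CAM firmware exposes an MJPEG stream readable by OpenCV; the stream URL and output folder are hypothetical.

```python
# A minimal sketch of the laptop-side capture loop, assuming the ESP32-CAM exposes
# an MJPEG stream readable by OpenCV. The URL and output directory are placeholders.
import os
import time
import cv2

SAVE_DIR = "smallrobot_dataset"               # hypothetical output folder
STREAM_URL = "http://192.168.4.1:81/stream"   # hypothetical address of the ESP32-CAM

os.makedirs(SAVE_DIR, exist_ok=True)
cap = cv2.VideoCapture(STREAM_URL)
frame_id = 0
try:
    while True:
        ok, frame = cap.read()                # 320 x 240 frame from the onboard camera
        if not ok:
            continue
        cv2.imwrite(os.path.join(SAVE_DIR, f"{frame_id:06d}.png"), frame)
        frame_id += 1
        time.sleep(0.01)                      # 0.01 s capture interval, as in the text
finally:
    cap.release()
```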
Obstacle avoidance module
The obstacle avoidance module works by processing the depth map generated by the monocular depth estimation model. It provides two functions: deciding when to switch obstacle avoidance on and off, and generating the control commands that guide the robot around obstacles.
As shown in Fig. 6, we select a 40 × 40 pixel area in the center of the depth map. Then, the minimum depth value of this region is regarded as the distance from the obstacle to the robot. The obstacle avoidance function will be triggered when the minimum depth is smaller than the threshold.
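A minimal sketch of this trigger check is shown below, assuming the depth map is a 240 × 320 NumPy array; the threshold value is illustrative, not the one used in the experiments.

```python
# A minimal sketch of the obstacle-avoidance trigger, assuming a depth map of shape
# (H, W) = (240, 320). The threshold value is illustrative only.
import numpy as np

def obstacle_detected(depth_map: np.ndarray, threshold: float = 0.25) -> bool:
    """Return True when the closest point in the central 40 x 40 patch is nearer
    than the threshold, i.e. obstacle avoidance should take over."""
    h, w = depth_map.shape
    cy, cx = h // 2, w // 2
    patch = depth_map[cy - 20:cy + 20, cx - 20:cx + 20]   # central 40 x 40 region
    return float(patch.min()) < threshold
```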
In addition, we section off a 40 × 320 pixel band of the depth map along the height direction, spanning the full image width. The control command (Left Turn, Right Turn, or Go Forward) is generated from this band. Specifically, each pixel is given a weight that varies along the width direction and is assigned differently for each command: for the Left Turn command, the maximum weights sit at the left side of the depth map and decrease towards the right; this is reversed for the Right Turn command; and for the Go Forward command, the maximum weights sit in the middle and decrease towards both sides. The weighted sums for Left Turn, Right Turn, and Go Forward are then calculated and passed to the SoftMax function to decide the control command. As shown in Supplementary Fig. 1, Left Turn means the insect’s right antenna is stimulated so that it turns to the left, and vice versa for Right Turn; Go Forward means the insect’s cerci are stimulated so that it moves forward.
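The following sketch illustrates this weighted-sum scheme. The linear and triangular weight profiles and the vertical position of the 40 × 320 band are our assumptions for illustration, not the exact profiles used in the experiments.

```python
# A minimal sketch of the weighted-sum command selection. The weight profiles and
# the band position are assumptions; larger weighted depth means more free space
# in that direction, so the corresponding command wins the SoftMax.
import numpy as np

def avoidance_command(depth_map: np.ndarray) -> str:
    h, w = depth_map.shape                        # e.g. (240, 320)
    band = depth_map[h // 2 - 20:h // 2 + 20, :]  # assumed 40 x 320 horizontal band

    x = np.linspace(0.0, 1.0, w)
    w_left = 1.0 - x                              # largest weight on the left columns
    w_right = x                                   # largest weight on the right columns
    w_forward = 1.0 - np.abs(2.0 * x - 1.0)       # largest weight in the middle columns

    scores = np.array([
        (band * w_left).sum(),
        (band * w_right).sum(),
        (band * w_forward).sum(),
    ])
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                          # SoftMax over the three weighted sums
    return ["Turn Left", "Turn Right", "Go Forward"][int(np.argmax(probs))]
```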
Navigation experiment and algorithm
The setup of the navigation experiment is shown in Fig. 7a: the insect-computer hybrid robot is navigated from the start point to the destination. The navigation algorithm consists of two modules, the Obstacle Avoidance Module and the General Navigation Module (Fig. 7b). A monocular depth estimation model deployed on the workstation processes the images transmitted from the ESP32-CAM via WiFi to obtain the predicted depth map. Meanwhile, the workstation receives and processes the robot’s location data from a 3D motion capture system to generate suitable control commands, which are issued to the insect-computer hybrid robot via BLE. The navigation algorithm first calculates the distance from the robot to the destination to determine whether the robot has arrived. If not, the Obstacle Avoidance Module checks the distance to the obstacle to decide whether obstacle avoidance should be triggered. If obstacle avoidance is not required, the General Navigation Module guides the robot towards the destination in two steps: the robot’s direction of movement is checked first, and the Go Forward command is released directly if the robot is moving towards the destination; otherwise, a steering command is released first to adjust the moving direction before the Go Forward command is output.
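A minimal sketch of this decision loop, reusing the obstacle_detected and avoidance_command helpers sketched above, is given below; the pose format, reach radius, and heading tolerance are illustrative assumptions rather than the values used in the experiments.

```python
# A minimal sketch of the per-step navigation decision, reusing obstacle_detected()
# and avoidance_command() from the sketches above. Poses are assumed to be
# (x, y, heading_deg) tuples from the motion capture system; thresholds are illustrative.
import math

def navigation_step(robot_pose, destination, depth_map,
                    reach_radius=0.05, heading_tolerance=20.0):
    x, y, heading_deg = robot_pose
    dx, dy = destination[0] - x, destination[1] - y
    if math.hypot(dx, dy) < reach_radius:
        return "Stop"                                    # destination reached
    if obstacle_detected(depth_map):                     # obstacle avoidance has priority
        return avoidance_command(depth_map)
    bearing = math.degrees(math.atan2(dy, dx))           # direction towards the destination
    err = (bearing - heading_deg + 180.0) % 360.0 - 180.0  # signed heading error in (-180, 180]
    if abs(err) > heading_tolerance:                     # correct the moving direction first
        # Assumes a counter-clockwise-positive heading convention.
        return "Turn Left" if err > 0 else "Turn Right"
    return "Go Forward"
```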
Data Availability
The datasets used and analyzed during the current study are available from the corresponding author upon reasonable request.
Code availability
The underlying code for this study and the training/validation datasets are not publicly available but may be made available to qualified researchers upon reasonable request from the corresponding author.
References
Lin, Q. et al. Resilient conductive membrane synthesized by in-situ polymerisation for wearable non-invasive electronics on moving appendages of cyborg insect. npj Flex. Electron 7, 42 (2023).
Nguyen, H. D., Tan, P. Z., Sato, H. & Vo-Doan, T. T. Sideways Walking Control of a Cyborg Beetle. IEEE Trans. Med. Robot. Bionics 2, 331–337 (2020).
Chukewad, Y. M., James, J., Singh, A. & Fuller, S. RoboFly: An Insect-Sized Robot With Simplified Fabrication That Is Capable of Flight, Ground, and Water Surface Locomotion. IEEE Trans. Robot. 37, 2025–2040 (2021).
Rubio, F., Valero, F. & Llopis-Albert, C. A review of mobile robots: Concepts, methods, theoretical framework, and applications. Int. J. Adv. Robot. Syst. 16, 172988141983959 (2019).
Siljak, H., Nardelli, P. H. J. & Moioli, R. C. Cyborg Insects: Bug or a Feature? IEEE Access 10, 49398–49411 (2022).
Yang, X., Jiang, X.-L., Su, Z.-L. & Wang, B. Cyborg Moth Flight Control Based on Fuzzy Deep Learning. Micromachines 13, 611 (2022).
Sato, H. & Maharbiz, M. M. Recent Developments in the Remote Radio Control of Insect Flight. Front. Neurosci. 4, 199 (2010).
Li, Y., Wu, J. & Sato, H. Feedback Control-Based Navigation of a Flying Insect-Machine Hybrid Robot. Soft Robot. 5, 365–374 (2018).
Bao, L. et al. Flight control of tethered honeybees using neural electrical stimulation. In 2011 5th International IEEE/EMBS Conference on Neural Engineering 558–561 (IEEE, Cancun, 2011). https://doi.org/10.1109/NER.2011.5910609.
Tran-Ngoc, P. T. et al. Intelligent Insect–Computer Hybrid Robot: Installing Innate Obstacle Negotiation and Onboard Human Detection onto Cyborg Insect. Advanced Intelligent Systems 2200319 https://doi.org/10.1002/aisy.202200319 (2023).
Erickson, J. C., Herrera, M., Bustamante, M., Shingiro, A. & Bowen, T. Effective Stimulus Parameters for Directed Locomotion in Madagascar Hissing Cockroach Biobot. PLoS ONE 10, e0134348 (2015).
Li, R., Lin, Q., Kai, K., Nguyen, H. D. & Sato, H. A Navigation Algorithm to Enable Sustainable Control of Insect-Computer Hybrid Robot with Stimulus Signal Regulator and Habituation-Breaking Function. Soft Robotics soro.2023.0064 https://doi.org/10.1089/soro.2023.0064 (2023).
Tadepalli, S. et al. Remote-Controlled Insect Navigation Using Plasmonic Nanotattoos. http://biorxiv.org/lookup/doi/10.1101/2020.02.10.942540 (2020) https://doi.org/10.1101/2020.02.10.942540.
Tsang, W. M. et al. Remote control of a cyborg moth using carbon nanotube-enhanced flexible neuroprosthetic probe. In 2010 IEEE 23rd International Conference on Micro Electro Mechanical Systems (MEMS) 39–42 (IEEE, Wanchai, Hong Kong, China, 2010). https://doi.org/10.1109/MEMSYS.2010.5442570.
Holzer, R. & Shimoyama, I. Locomotion control of a bio-robotic system via electric stimulation. In Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. IROS ’97 vol. 3 1514–1519 (IEEE, Grenoble, France, 1997).
Cao, F., Zhang, C., Choo, H. Y. & Sato, H. Insect–computer hybrid legged robot with user-adjustable speed, step length and walking gait. J. R. Soc. Interface 13, 20160060 (2016).
Giampalmo, S. L. et al. Generation of complex motor patterns in american grasshopper via current-controlled thoracic electrical interfacing. In 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society 1275–1278 (IEEE, Boston, MA, 2011). https://doi.org/10.1109/IEMBS.2011.6090300.
Visvanathan, K., Gupta, N. K., Maharbiz, M. M. & Gianchandani, Y. B. Control of locomotion in ambulatory and airborne insects using implanted thermal microstimulators. In TRANSDUCERS 2009 - 2009 International Solid-State Sensors, Actuators and Microsystems Conference 1987–1990 https://doi.org/10.1109/SENSOR.2009.5285681 (2009).
Ma, S., Liu, P., Liu, S., Li, Y. & Li, B. Launching of a Cyborg Locust via Co-Contraction Control of Hindleg Muscles. IEEE Trans. Robot. 38, 2208–2219 (2022).
Choo, H. Y., Li, Y., Cao, F. & Sato, H. Electrical Stimulation of Coleopteran Muscle for Initiating Flight. PLoS ONE 11, e0151808 (2016).
Yu, L. et al. Experimental Verification on Steering Flight of Honeybee by Electrical Stimulation. Cyborg Bionic Syst. 2022, 2022/9895837 (2022).
Nguyen, H. D., Dung, V. T., Sato, H. & Vo-Doan, T. T. Efficient autonomous navigation for terrestrial insect-machine hybrid systems. Sens. Actuators B: Chem. 376, 132988 (2023).
Baba, Y., Tsukada, A. & Comer, C. M. Collision avoidance by running insects: antennal guidance in cockroaches. J. Exp. Biol. 213, 2294–2302 (2010).
Iyer, V., Najafi, A., James, J., Fuller, S. & Gollakota, S. Wireless steerable vision for live insects and insect-scale robots. Sci. Robot. 5, eabb0839 (2020).
Rasakatla, S. et al. CameraRoach: A WiFi- and Camera-Enabled Cyborg Cockroach for Search and Rescue. JRM 34, 149–158 (2022).
Dong, X., Garratt, M. A., Anavatti, S. G. & Abbass, H. A. Towards Real-Time Monocular Depth Estimation for Robotics: A Survey. IEEE Transactions on Intelligent Transportation Systems 1–22 https://doi.org/10.1109/TITS.2022.3160741 (2022).
Vyas, P., Saxena, C., Badapanda, A. & Goswami, A. Outdoor Monocular Depth Estimation: A Research Review. Preprint at http://arxiv.org/abs/2205.01399 (2022).
Zhao, C., Sun, Q., Zhang, C., Tang, Y. & Qian, F. Monocular depth estimation based on deep learning: An overview. Sci. China Technol. Sci. 63, 1612–1627 (2020).
Hu, J. et al. Deep Depth Completion from Extremely Sparse Data: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 1–20 https://doi.org/10.1109/TPAMI.2022.3229090 (2023).
Jung, G. & Yoon, S. M. Monocular depth estimation with multi-view attention autoencoder. Multimed Tools Appl 1–12 https://doi.org/10.1007/s11042-022-12301-8 (2022).
Laina, I., Rupprecht, C., Belagiannis, V., Tombari, F. & Navab, N. Deeper Depth Prediction with Fully Convolutional Residual Networks. In 2016 Fourth International Conference on 3D Vision (3DV) 239–248 (IEEE, Stanford, CA, 2016). https://doi.org/10.1109/3DV.2016.32.
Fu, H., Gong, M., Wang, C., Batmanghelich, K. & Tao, D. Deep Ordinal Regression Network for Monocular Depth Estimation. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition 2002–2011 (IEEE, Salt Lake City, UT, 2018) https://doi.org/10.1109/CVPR.2018.00214.
Song, M. & Kim, W. Decomposition and replacement: Spatial knowledge distillation for monocular depth estimation. Journal of Visual Communication and Image Representation 103523 https://doi.org/10.1016/j.jvcir.2022.103523 (2022).
Farooq Bhat, S., Alhashim, I. & Wonka, P. AdaBins: Depth Estimation Using Adaptive Bins. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 4008–4017 (IEEE, Nashville, TN, USA, 2021). https://doi.org/10.1109/CVPR46437.2021.00400.
Godard, C., Aodha, O. M. & Brostow, G. J. Unsupervised Monocular Depth Estimation with Left-Right Consistency. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 6602–6611 (IEEE, Honolulu, HI, 2017). https://doi.org/10.1109/CVPR.2017.699.
Poggi, M., Aleotti, F., Tosi, F. & Mattoccia, S. Towards Real-Time Unsupervised Monocular Depth Estimation on CPU. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 5848–5854 (IEEE, Madrid, 2018). https://doi.org/10.1109/IROS.2018.8593814.
Li, R. & Sato, H. A Fully Convolutional Network of Self-Supervised Monocular Depth Estimation with Global Receptive Field and Unreasonable Matching Penalty. https://www.techrxiv.org/articles/preprint/A_Fully_Convolutional_Network_of_Self-supervised_Monocular_Depth_Estimation_with_Global_Receptive_Field_and_Unreasonable_Matching_Penalty/21723518/1 (2022) https://doi.org/10.36227/techrxiv.21723518.v1.
Godard, C., Aodha, O. M., Firman, M. & Brostow, G. Digging Into Self-Supervised Monocular Depth Estimation. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV) 3827–3837 (IEEE, Seoul, Korea (South), 2019). https://doi.org/10.1109/ICCV.2019.00393.
Liu, J., Li, Q., Cao, R., Tang, W. & Qiu, G. MiniNet: An extremely lightweight convolutional neural network for real-time unsupervised monocular depth estimation. ISPRS J. Photogramm. Remote Sens. 166, 255–267 (2020).
Zhang, Y. et al. Self-Supervised Monocular Depth Estimation With Multiscale Perception. IEEE Trans. Image Process. 31, 3251–3266 (2022).
Wang, D., Li, W., Liu, X., Li, N. & Zhang, C. UAV environmental perception and autonomous obstacle avoidance: A deep learning and depth camera combined solution. Computers Electron. Agriculture 175, 105523 (2020).
Zhang, Z., Xiong, M. & Xiong, H. Monocular Depth Estimation for UAV Obstacle Avoidance. In 2019 4th International Conference on Cloud Computing and Internet of Things (CCIOT) 43–47 (2019). https://doi.org/10.1109/CCIOT48581.2019.8980350.
Ding, J. et al. Monocular Camera-Based Complex Obstacle Avoidance via Efficient Deep Reinforcement Learning. IEEE Trans. Circuits Syst. Video Technol. 33, 756–770 (2023).
Ming, Y., Meng, X., Fan, C. & Yu, H. Deep learning for monocular depth estimation: A review. Neurocomputing 438, 14–33 (2021).
Geiger, A., Lenz, P., Stiller, C. & Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 32, 1231–1237 (2013).
Cordts, M. et al. The Cityscapes Dataset for Semantic Urban Scene Understanding. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 3213–3223 (IEEE, Las Vegas, NV, USA, 2016). https://doi.org/10.1109/CVPR.2016.350.
Whitmire, E., Latif, T. & Bozkurt, A. Kinect-based system for automated control of terrestrial insect biobots. In 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) 1470–1473 (IEEE, Osaka, 2013). https://doi.org/10.1109/EMBC.2013.6609789.
Li, G. & Zhang, D. Brain-Computer Interface Controlled Cyborg: Establishing a Functional Information Transfer Pathway from Human Brain to Cockroach Brain. PLoS ONE 11, e0150667 (2016).
Bian, J.-W. et al. Unsupervised Scale-consistent Depth Learning from Video. Int J. Comput Vis. 129, 2548–2564 (2021).
Acknowledgements
The authors thank Mr. Bing Sheng Chong for his helpful advice, Ms. Tan Yue Ling for illustration drawing, and Ms. Kerh Geok Hong, Wendy for her support and help.
Author information
Contributions
H.S. and R.L. conceived the problem. R.L. developed the navigation algorithm. R.L. collected the dataset and trained the model. R.L., Q.F.L., and P.T.T.N. designed and conducted the navigation experiment. R.L. and D.L.L. designed the data flow of the navigation experiment. R.L. conducted the data analysis. R.L., D.L.L., Q.F.L., P.T.T.N., and H.S. wrote and edited the manuscript. H.S. supervised the research. All authors read and edited the paper.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.