Robots can operate autonomously in extreme environments that might be hazardous for humans. For example, they can inspect oil and gas equipment, monitor offshore wind turbines, survey subsea power networks and maintain nuclear reactors. We suggest that these robots should be required to self-certify that they can operate safely under such circumstances.
Robots can learn to adapt the way they perform tasks in changing and unexpected environments. However, if a robotic system learns a flawed model of the environment or a risky behaviour, it could undermine its own operation and the integrity of the asset that it is inspecting or repairing — with potentially catastrophic consequences. To protect against this, the robot should self-certify its correct operation by collecting data as it executes its task. It would then check the data against its mission plan, with minimal input from human operators (see D. M. Lane et al. IFAC Proc. Vol. 45, 268–273; 2012).
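The self-certification loop described above — log data during execution, then check it against the mission plan with minimal human input — can be sketched in a few lines. This is an illustrative toy, not the method of Lane et al.: the waypoint plan, tolerance values and the `self_certify` function are all hypothetical stand-ins for whatever plan representation and conformance checks a real system would use.

```python
from dataclasses import dataclass

@dataclass
class Waypoint:
    # Planned position (metres) and the allowed deviation at this step.
    x: float
    y: float
    tolerance: float

def self_certify(plan, telemetry):
    """Compare logged positions against the mission plan.

    Returns (certified, deviations). The mission is certified only when
    every logged position lies within its waypoint's tolerance, so the
    robot vouches for its own execution without a human reviewing the log.
    """
    deviations = []
    for wp, (x, y) in zip(plan, telemetry):
        # Euclidean distance between where the robot was and where
        # the plan said it should be.
        d = ((x - wp.x) ** 2 + (y - wp.y) ** 2) ** 0.5
        deviations.append(d)
    certified = all(d <= wp.tolerance
                    for wp, d in zip(plan, deviations))
    return certified, deviations

# Hypothetical two-waypoint inspection leg with logged positions.
plan = [Waypoint(0.0, 0.0, 0.5), Waypoint(10.0, 0.0, 0.5)]
telemetry = [(0.1, 0.0), (9.8, 0.3)]
ok, devs = self_certify(plan, telemetry)
print(ok)  # True: both logged positions fall within tolerance
```

A production system would check far richer data (sensor health, energy use, learned-model confidence), but the principle is the same: the certificate is computed from the execution log, not asserted by an operator.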
For autonomous systems to be trusted, developments in robotics and artificial intelligence need to be accompanied by advances in certification techniques. Certification bodies such as Lloyd's Register have certification standards for industrial equipment and are beginning to explore the challenges of certifying self-learning systems (see go.nature.com/2cxxjcx). And several research teams funded by Research Councils UK, including the Offshore Robotics for Certification of Assets hub (https://orcahub.org), are investigating this crucial area.
Nature 553, 281 (2018)