For cars to be safe, full control must be allocated to the driver — be it human or computer, argues John Baruch.
As the popularity of self-driving cars increases, so do the concerns. Last month, China banned tests of autonomous vehicles on public roads. And investigations continue into the death of Joshua Brown, who was killed when his Tesla car on autopilot ploughed into the side of an articulated truck in Florida. His car used visible-light cameras to image the road and computers to evaluate the situation. But according to Tesla, the white truck merged with the bright Florida sky and was not recognized.
Self-driving vehicles promise to extend and enrich travel. But they raise profound questions for society about our relationship with machines. How will people cope, and what business models will develop?
Will car manufacturers emulate jet-engine manufacturers such as Rolls-Royce and Pratt & Whitney and start to sell travel distance and continuously record the operation of every engine they sell? Will car owners use the Uber model to rent out their vehicles as taxis instead of leaving them in a car park? Will automation revolutionize rural transport and give the rural poor, young, old and disabled the low-cost travel they are entitled to?
The challenges for the architects of our city centres, housing, streets, schools and workplaces are immense. And the issues around control of information such as big data and privacy must be confronted. Car manufacturers may find that the information they glean from tracking the lifestyle of their customers is worth much more than their vehicles.
The scientific community and society in general need to engage with these questions, and together decide what kind of future we want and how autonomous vehicles fit in. If we don't, the future is likely to be mapped out by companies that merely want to make money from the technology. The questions are complex, but they can be boiled down to one: should self-driving cars have a steering wheel?
The big Internet companies such as Google, Apple and Baidu — those generating the real pressure for self-driving vehicles — do not think that they should. These companies are keen to maximize time online for those who are rich enough to afford a car. Google's business model can use daily commute time that is no longer spent driving a car to increase the value of its advertising. It would therefore not want self-driving vehicles that allow the driver to take over: in this model, the vehicle is totally autonomous.
A number of driverless vehicles with no steering wheel and no opportunity for people to take control are on trial — including a parking transit system at London Heathrow Airport and buses in the Netherlands, Italy and China. China is taking the opportunity most seriously: there, Internet companies are working with vehicle manufacturers.
But this model poses a problem for Tesla and for many other car manufacturers, especially for the more expensive brands. The appeal to customers of luxury car brands tends to be the driving experience. And if the car has no steering wheel — and the 'driver' is a mere passenger — that appeal evaporates. That's why Tesla, Jaguar Land Rover and others use the technology in existing self-driving cars only to provide support, with the driver officially remaining in charge. Formally, Brown was in charge of the vehicle he died in.
Car and component companies are working hard to generate a business model for more-autonomous vehicles that have steering wheels. It is clear that they remain keen to provide the driver with assistance, and that means there is a real need for research: on both the technical and social-science aspects. How will drivers react when the car tells them to take over? How can the handover be made safe? How can the vehicle be brought to a safe halt if the driver does not take charge?
I operate an autonomous robotic telescope in the Canary Islands that is 3,000 kilometres from its base in the United Kingdom. Autonomous telescopes do not pose the same dangers to the public as self-driving vehicles do, but there is a lot that can be learnt from our experiences. We have removed nearly all the single-point failure modes by quadruplexing all the crucial information flows (when one channel fails, you can still poll the others and isolate the failure) and have instituted an artificial-intelligence reconfiguration process that isolates a failure until it is repaired. With quadruplexed systems, Brown might not have died. The car would have slowed down, if only because its sensor systems were reporting conflicting information.
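The quadruplexing idea described above can be sketched in a few lines. This is a minimal, hypothetical illustration — the channel names, the disagreement threshold and the median-voting rule are my assumptions, not details of the telescope's actual system: a reading that strays far from its peers is isolated as failed, and when too few channels agree the caller is told to slow the vehicle.

```python
from statistics import median

# Illustrative threshold (metres): a channel this far from the median of
# its peers is treated as failed. Not a value from the article.
DISAGREEMENT = 5.0

def fuse(readings):
    """Fuse four redundant distance readings into one trusted value.

    `readings` maps channel id -> measured distance in metres, or None
    if the channel returned nothing. Returns (fused_value, healthy_channels);
    a fused value of None means the vehicle should slow to a safe halt.
    """
    live = {ch: r for ch, r in readings.items() if r is not None}
    if len(live) < 2:
        return None, set()          # cannot cross-check: treat as total failure
    mid = median(live.values())
    # Isolate any channel that disagrees sharply with the consensus.
    healthy = {ch for ch, r in live.items() if abs(r - mid) <= DISAGREEMENT}
    if len(healthy) < 3:            # too much disagreement within the quadruplex
        return None, healthy
    return median(readings[ch] for ch in healthy), healthy

# One channel ('c') reports a wildly different distance - say, a white
# truck lost against a bright sky - and is isolated; the fused value
# comes from the three agreeing channels.
value, ok = fuse({'a': 41.8, 'b': 42.1, 'c': 120.0, 'd': 42.0})
```

In this sketch the failed channel is simply excluded for the current cycle; the article's reconfiguration process would additionally keep it isolated until repaired.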
The driver-support philosophy is a flawed approach. Some aircraft already have technology that can take the plane from runway to runway, with the pilot just taxiing the plane to and from the stand. It is generally not used, because, with no role in the flight, pilots can become bored and do other things. They are then totally unprepared to take over if needed. Incremental support is not a safe compromise. People must either drive a car or be driven by it.
The efforts of the luxury car brands are reminiscent of the gas companies that tried to improve their lamps' mantles after electric lighting appeared. For self-driving cars to rule the road, the steering wheel must go the way of the Model T Ford.
Baruch, J. Steer driverless cars towards full automation. Nature 536, 127 (2016). https://doi.org/10.1038/536127a