Self-driving vehicles are often hailed as the safer alternative to error-prone and easily distracted human drivers. Google wrote in 2010 that self-driving technology could potentially halve the total number of traffic fatalities, which exceed a million each year worldwide, clearly a goal worth pursuing. By 2016, a multitude of technology companies and carmakers had made optimistic announcements about self-driving cars and taxis being ready for the market within years. But reality hit soon after: self-driving cars still have many blind spots in unanticipated situations, in extreme weather conditions or with other unusual sensor input. A human driver’s constant attention is therefore still required, as they need to be ready to take over the wheel at any moment. This is more than a technical detail: the practical, human and legal challenges underlying the takeover process, in which driver and car must interact at short notice for a swift and safe handover of control, are daunting.

Almost all major car manufacturers now include some form of automated driving technology in their flagship models. However, most of these advanced driver-assistance systems require the user to keep their hands firmly on the steering wheel. Some recent models are advertised as ‘level 2’ autonomous, meaning that the driving functions themselves are automated and the driver only needs to monitor the system’s behaviour and take back control in challenging situations. An example is Cadillac’s ‘Super Cruise’, advertised as the first hands-free driver-assistance feature for highways in the United States and Canada. A few models are even advertised as ‘level 3’, based on the UNECE (United Nations Economic Commission for Europe) standard for Automated Lane Keeping Systems, which can in principle allow drivers to divert their attention in certain situations. For instance, Mercedes-Benz’s ‘Drive Pilot’ is certified for ‘level 3’ automation in highway traffic jams in Germany and is restricted to 60 km/h. Although, depending on the jurisdiction, drivers may now legally be able to take their hands off the steering wheel, their readiness to take back control must be continuously verified by onboard cameras.

Due to marketing and over-promises by car manufacturers, driver perceptions and expectations may not match the actual capabilities of a car. Tesla introduced a feature called ‘Autopilot’ early on and now offers a so-called ‘Full Self-Driving’ capability, but according to Tesla’s own safety instructions drivers are still required to pay full attention in either mode, with their hands on the wheel at all times. In a recent court case, the driver of a Tesla Model S that was involved in a fatal crash after running a red light while in ‘Autopilot’ mode was charged with manslaughter.

Last month, coincidentally only days after those charges were announced, the Law Commission of England and Wales and the Scottish Law Commission released a joint report on the legal definitions and implications of self-driving cars. The report recommends removing liability from the person in the driver’s seat when the car has been legally characterized as ‘self-driving’, meaning that the car does not require the user’s input unless it prompts for it. In the case of an accident or traffic infraction, the user in charge would not be liable, except when they had been alerted by the system to take over and failed to do so, or failed to mitigate the risk as could reasonably be expected of a safe driver. Crucially, the report recommends making it a criminal offence for marketing to falsely imply that a car is self-driving.

A challenge for regulatory bodies will be to define the tests necessary to determine whether the autonomous capabilities of a specific vehicle are adequate to pass a legal threshold for ‘self-driving’. There is also the problem of over-trust in self-driving technology: every trip completed without incident increases user confidence, whereas failure conditions are hard to predict. An underlying question is whether a sufficiently fast takeover of control is realistically possible at all times, especially as users become accustomed to their vehicle’s growing independence.

It may be a safer route to abandon commercialization of level 2 and level 3 features in favour of basic driving-assistance features until fully driverless (level 4 and level 5) vehicles are possible. A level 4 approach would limit driverless operation to specific areas and conditions, as in the case of Alphabet’s Waymo, which offers a driverless taxi service in Phoenix, Arizona, within a well-mapped 50-square-mile area with wide roads and few pedestrians.

The increased awareness that cars gain from additional sensors, and their growing ability to react quickly in specific situations, can improve safety, especially when a fully engaged driver is assisted in a critical situation. But when semi-automated driving is offered to drivers, it must be clear what the driver’s responsibilities are and whether ‘eyes off’ levels of disengagement are allowed. It may be unrealistic to expect that drivers will always be able to turn their attention back to traffic and take over control of the vehicle quickly enough. Assigning responsibility is likely to remain a challenging issue, even once legal verification processes are established.