Introduction

Advanced driver assistance systems (ADAS) used in partial automation are intended to reduce the driver workload without causing disengagement. Level-2 automation splits the responsibility for the real-time operational and tactical functions required to operate a vehicle safely in on-road traffic1: the driver is responsible for the ‘object and event detection and response (OEDR)’, while the vehicle performs the sustained lateral and longitudinal motion control2. A combination of two ADAS is used to comply with this definition. Active cruise control regulates the vehicle to a predefined speed and slows down to maintain a preset distance from any slower moving vehicle ahead. Lane centering assistance (LCA) operates the steering system to track the trajectory computed by the automation (or AD trajectory), which is typically the center of the lane in which the vehicle is traveling. Moreover, lane change assistance is available in some vehicles: the automated lane change (ALC) function provides guidance to support the driver when the traffic conditions are safe. Level-0 ADAS functions, such as automatic emergency braking for the longitudinal displacement or lane keeping assistance (LKA) for the lateral deviation, complete the active safety envelope.

Providing an interactive environment with the steering system, where manual and automated inputs can coexist, alleviates the risk of disengagement. Hence, lateral control of the vehicle is often shared so that manual steering over the guidance torque of the automation is possible without deactivation. Here, shared control is defined following3: ‘human(s) and robot(s) are interacting congruently in a perception-action cycle to perform a dynamic task that either the human or the robot could execute individually under ideal circumstances. This definition excludes full automation (where there is no human) or manual control (where there is no automation)’. The concept of haptic shared control (HSC) has received significant attention due to the anticipated safety benefits for partial and conditional automation levels4,5,6,7,8. Haptic communication through the steering interface is suggested to be the most practical channel to bond driver and vehicle because of its bilateral and dynamic characteristics4,6,9.

Most partially automated vehicles use a blended control scheme for HSC (Fig. 1a). It finds its origin in robotic force control under the name of ‘parallel force/position control’10,11 and is based on the idea that the driver and the automation can apply a torque command independently to the same actuator. In terms of control, the automation is a feedback loop on the steering displacement, in which a manual torque input is seen as an external disturbance to be rejected. Conceptually, blended control consists in modulating the impedance of the angle controller to enable driver intervention. The ADAS functions are realized through conditional operation of the blended control scheme. Typically, the gain Gt and, in some cases, the angle controller gains are programmed to satisfy the operating condition of each ADAS function. For example, the assistance provided by the LCA function is obtained by operating the steering system in shared control mode with 0 < Gt < 1. The reaction torque to the driver is proportional to the tracking error and its derivative. This angular error is caused by manual intervention or by variation of the tracking reference. Therefore, the reaction torque of the LCA represents haptic guidance directed towards the AD trajectory, which is intended to reduce the driver workload.
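
The conditional operation described above can be illustrated with a short control-step sketch. The example below is a hypothetical Python illustration only: the gains, the override threshold, and the function name are assumptions and do not correspond to any production ADAS calibration.

```python
# Minimal sketch of a blended-control step (Fig. 1a); all gains, thresholds,
# and names are illustrative assumptions, not a production tuning.
def blended_control_step(theta_a, theta_p, theta_p_dot, integ, T_d,
                         kp=2.0, kd=0.1, ki=0.5, dt=0.01,
                         override_torque=3.0, Gt_shared=0.5):
    """One control step: returns the EPS motor torque command, the updated
    integrator state, and the applied attenuation gain Gt."""
    error = theta_a - theta_p                    # tracking error of the automation
    # Override: above the driver-torque threshold the ADAS is deactivated (Gt = 0).
    Gt = Gt_shared if abs(T_d) < override_torque else 0.0
    # The integrator is switched off on manual input to avoid windup.
    if abs(T_d) < 1e-3:
        integ += error * dt
    T_a = kp * error - kd * theta_p_dot + ki * integ   # PID-type angle-controller effort
    return Gt * T_a, integ, Gt
```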

Fig. 1: Two configurations of steering HSC between human driver and automation and overview of an automated driving control framework including the proposed collaborative steering control.
figure 1

The dashed lines represent the human driver control. a Blended control: The driver torque Td and the automation torque Ta track their target angles θd and θa with the feedback of the measured angle θp. The respective tracking efforts are superposed to form an electric power steering (EPS) motor torque command. The gain Gt is used for attenuating the automation effort and enabling driver intervention in shared mode. Gt is set to zero when operating the EPS in manual mode in the event of an override. b Admittance control: A virtual plant is used to estimate the manual deviation θm from the measured driver torque input Ttb. The angle control attempts to enforce the superposition of θa and θm by applying the command torque Tmot to the EPS. The reaction torque perceived by the driver is designed with the virtual plant and its load Ta. Similarly to blended control, the gain Gt is set to zero for manual steering. c The black blocks with the vision loops illustrate the typical structure of an automated vehicle control system. The proposed control is represented with the turquoise blocks. Arbitration allocates the control authority of the automation based on the interaction type. The driver and the automation interact through the virtual EPS. The resulting manual deviation is input into the steering angle control so as to enforce the superposition of the driver intent onto the AD trajectory. Additionally, this deviation is propagated to the inclusion block to assimilate the driver intent into the trajectory planning.

If the surrounding traffic situation allows for a safe lane change, ALC is activated upon confirmation that the driver holds the steering wheel and activation of the turn indicator. The ALC consists in the application of a predefined trajectory change toward the adjacent lane center. When the lane change is completed, LCA is again activated to maintain the vehicle centered in the new lane.

LKA functions are often realized through brake activation to prevent lane departure. Torque vectoring is used to generate a vehicle yaw motion toward the lane center by applying asymmetrical commands to the individual brakes. This is a conservative approach that corrects the vehicle heading while reducing speed. In vehicles that employ the steering system for LKA, correction of the vehicle heading is achieved by adding a torque overlay12.

As reported in the review of shared control for automated vehicles8, there are more than 100 contributions focusing on shared steering control, thus revealing the wide range of applications and implementations of the HSC concept. Nevertheless, most contributions focus on particular issues, such as how to prioritize the driver versus the automation and how to manage conflict, therefore providing only limited answers toward a unified and holistic approach.

Further, state-of-the-art blended control has major disadvantages due to the dual role of tracking and regulation of the angle controller, which is typically a PID13. Ideally, perfect tracking is expected in the absence of driver input, while low rejection performance is required to enable manual intervention. Modulation of the control gain as a function of the driver activity is technically challenging because no available sensor is sufficiently reliable for this application (Supplementary Note 1).

Current practice is to consider each ADAS independently. This results in a discontinuous operation, which makes the driving experience uncomfortable. Consequently, drivers tend to display a low acceptance rate of ADAS technology14:

  • LCA uses proportional and derivative gains independent of the driver input, while only the integrator is switched on to ensure zero steady-state error in the absence of driver intervention and switched off to avoid windup on manual input. Furthermore, the proportional and derivative gains are set to relatively low values to enable manual input, which lowers the tracking performance. Consequently, most partially automated vehicles have limited capability in tracking the lane in the case of road curvature. Moreover, this centering torque is bounded by the driver input: shared control is available only below a preset driver torque threshold. Input above this threshold results in an override that deactivates the ADAS by returning the steering mode to manual (Gt = 0). When the driver torque decreases below the threshold, the ADAS is reactivated automatically by switching the steering mode back to HSC. Therefore, shared control for LCA is only available over a limited driver torque range, resulting in discontinuous operation of the ADAS function15.

  • Although ALC provides comfort during regular operation, the steering operation is switched to manual in the case of driver intervention. The assistance interruption is uncomfortable and may require manual reactivation by the driver16,17.

  • A torque overlay or offset is applied for LKA, which, in the worst case, results in the vehicle bouncing between the left and right lane markings. Shared control is not used because the low tracking performance of blended control does not guarantee lane tracking (as explained for LCA above) and therefore is not reliable for lane-keeping support in all road conditions (e.g., curves). While LCA and LKA share the control objective of centering the vehicle in the lane, they are not combined, increasing the risk of driver confusion.

Although technical limitations of mass-produced cars justify some of these design choices, partially automated vehicles are characterized by limited functional integration of shared control and discontinuous operation of the ADAS18. Therefore, a generic control framework for collaborative steering, consistent across tactical and operational vehicle controls and across all levels of automation, is required to address these issues.

In order to go beyond the classical form of driver-automation interaction, this paper proposes a collaborative steering control framework, within the limitations of mass-produced steering hardware, that is based on the following functions:

  • Interaction consists in providing the capability of haptic shared control to the steering system. Admittance control (Fig. 1b) is applied to enable the driver to deviate the vehicle from the AD trajectory without impairing the tracking performance of the angle control.

  • Arbitration refers to the allocation of roles among the driver and the automation when attempting to share the lateral control of the vehicle. There are four types of interaction: cooperation, co-activity, collaboration and competition (see “Methods” section for their definitions). Based on a preselected type of interaction, an arbitration rule is used to set the reaction torque of the automation according to the motor control of the driver. The parameters of the driver motor control, namely the goal (or target angle) and the impedance, have to be estimated with the sensors available in mass-produced vehicles.

  • Inclusion consists in adapting the AD trajectory to the driver intervention. If the manual deviation is sufficiently large and persistent in time, the automation assimilates this correction in the trajectory planning.

Similarly to human-human collaboration19,20, the above three functions are essential for the realization of collaborative steering. An interactive control environment is a prerequisite for haptic communication with the driver21. Arbitration provides the capability of interacting in different manners according to the road, traffic, and driver conditions. Then, inclusion assimilates the deviation resulting from the interaction into the AD trajectory. While the frequency bandwidth of the interaction has to be compatible with that of the driver torque, inclusion occurs at the lower bandwidth of the vehicle motion. There have been various attempts to provide human-robot collaboration, but none of them have combined the three functions of interaction, arbitration and inclusion. For example, the literature22,23,24,25,26 addresses the interaction and arbitration problem following different approaches, but omits inclusion. Collaboration cannot be achieved because the robot does not assimilate the human intent into its trajectory. As soon as the human stops interacting, the robot returns to its predefined trajectory. Conversely, the literature27,28,29,30 proposes adaptation of the trajectory based on manual intervention (human force or torque) without arbitration. While driver-triggered re-routing of the AD trajectory becomes available, it is performed at the relatively slow dynamics of the vehicle, which is inappropriate for haptic interaction. Indeed, fine-tuning of a vehicle steering behavior is a subjective process that defines the vehicle performance. Degraded steering feel caused by low-frequency interaction control is unacceptable.

The main contribution of this work is the integration of the interaction, arbitration, and inclusion functions into a generic multi-level control framework that is applicable to mass-produced vehicles within the limitation of the available hardware. The control framework features the following advantages:

  • Compatible with all levels of automation 0–4, where the human can still take part in the driving.

  • Integration of the ADAS functions and continuous operation in shared control mode (override-free ADAS).

  • ADAS functions that satisfy multi-objective requirements related to vehicle motion and driver intent to effectively contribute to better traffic safety.

Figure 1c gives an overview of the proposed control. The black blocks and the bottom half of the figure (gray background) represent the plant (detailed in “System dynamics” section) and the basic controls for automated driving. These controls are based on state feedback (positions and their derivatives) and rely on vision sensors (camera, radar, lidar, etc.). The turquoise blocks and the top half of the figure (light turquoise background) show the proposed control framework. The closed loop made with the arbitration and interaction blocks corresponds to the torque feedback of the admittance control illustrated in Fig. 1b and detailed in “Interactive steering control” section. The arbitration block allocates the control authority of the automation based on the estimation of the motor control of the driver (“Estimation of driver motor control” section) and on a preset type of interaction (“Arbitration” section). Propagation of the manual deviation resulting from the interaction to the trajectory planning is realized with the inclusion block (“Inclusion of driver intent into the trajectory adaptation” section), which closes the haptic loop.

Following the explanation of the experimental configurations, the paper continues with the performance and experimental validation of the proposed control in the “Results” section. The “Discussion” section presents the contributions and limitations of the proposed control before the paper concludes.

Results

Experimental configurations

Four experiments have been conducted on different setups to test and validate the proposed control framework: virtual driver, human driver, driving simulator, and test vehicle (Fig. 2). These are summarized as follows and the symbols and parameters used for these experiments are listed in Supplementary Table 1 and Table 2.

  • Virtual driver configuration. The first experimental configuration consists of a column-type EPS and an electric motor to replicate the driver input (Fig. 2a). Instead of the driver, an impedance-controlled motor is used for the validation of the estimation performance of the driver motor control (“Estimation of the driver motor control” section). The reference target angle and impedance of the virtual driver can be compared to their estimated values.

  • Human driver configuration. The second setup uses the same equipment, but the impedance-controlled motor is replaced by a human driver (Fig. 2b). The human driver is required to execute a sine-shaped maneuver through the different preset types of interactions. The estimation of the driver impedance validated in the first test configuration is confirmed in the case of manual steering and used to verify the arbitration rules (Eq. (12)).

  • Driving simulator configuration. The trajectory adaptation algorithm is validated on a static driving simulator (Fig. 2c). The control environment includes trajectory planning, tracking control, and the shared control framework. The vehicle motion is simulated and displayed on a screen for visual immersion. The driving scenario is a double lane change on a three-lane 1.5 km straight course. The nominal trajectory of the automation lies in the center of the middle lane and the vehicle is controlled to track this nominal trajectory at 60 km h−1 using the Stanley trajectory tracking model31. The driver is required to operate the steering wheel only and is free to change lanes.

  • Test vehicle configuration. This configuration concerns the implementation of the previously validated admittance control, arbitration rule, and trajectory adaptation in an actual test vehicle (Fig. 2d). The vehicle tracks a predefined trajectory (nominal AD trajectory) at 60 km h−1 using cruise control on the same driving scenario and algorithm as that of the driving simulator configuration and position feedback from a high-precision global navigation satellite system (GNSS). The driver is free to intervene and to deviate the vehicle away from the nominal AD trajectory. For the quantitative study (“Driver quantitative study” section), the driving scenario is the double lane change with 100 m intervals, as illustrated in Fig. 2d.

Fig. 2: Test equipment.
figure 2

a Virtual driver configuration. An impedance-controlled motor is used instead of the driver for the validation of the estimation of the driver motor control (driver goal and impedance). b Human driver configuration for the validation of the actual driver motor control and of the arbitration rules. c Driving simulator configuration for the validation of the trajectory adaptation. d Test vehicle configuration used for the proof of concept and the quantitative evaluation, and driving scenario for the quantitative evaluation. The following abbreviations are used: EPS for electric power steering, GNSS for global navigation satellite system, and AD for automated driving.

Performance of the driver motor control estimation

The performance of the estimation of the driver target angle (Eq. (15)) and impedance (Eq. (18) and Eq. (19)) was measured individually on the experimental configuration shown in Fig. 2a. For the impedance estimation, the goal of the virtual driver was set to a sine wave and that of the automation to zero. For the goal estimation, the impedance was set randomly, as shown in Fig. 3a. The estimation results are plotted in the same figure. The accuracy of the approximated driver goal varies as the driver impedance changes. When using the target angle of the virtual driver as input, the estimations of the driver stiffness and damping converge toward an oscillatory behavior about the set value. These oscillations stem from the driver impedance (Eq. (5)), which is undefined when either the driver input or the tracking error goes to zero32.

Fig. 3: Independent and combined estimations of the driver motor control measured on the test bench (virtual driver configuration).
figure 3

a The estimation of the driver target angle \({\hat{\theta }}_{d}\) is computed from the actual driver impedance Zd,1 and Zd,2, while the estimated driver impedance \({\hat{Z}}_{d,1}\) and \({\hat{Z}}_{d,2}\) are calculated with the actual driver target angle θd. b The estimation of the driver target angle \({\hat{\theta }}_{d}\) is computed from Eq. (13) and the estimated driver impedance \({\hat{Z}}_{d,1}\) and \({\hat{Z}}_{d,2}\) are calculated with the estimated driver target angle.

Figure 3b shows the combined estimations under the same test conditions when the approximated driver goal is used as input for the estimation of the impedance. While the performance of the combined estimation is impaired, the impedance variations can still be extracted. Two major errors can be observed. The first is the overestimation of the driver impedance in steady-state conditions, which is a consequence of the underestimation of the driver goal (Eq. (5)). In practice, the control is tuned for safe operation. Indeed, an overestimated impedance would amplify the role allocation from the arbitration rule (Eq. (12)). The second error is the amplification of the oscillatory behavior. This is caused by the mismatch between the model used in the extended Kalman filter (EKF) and that used for the approximation of the driver goal. This class of oscillatory problems caused by modeling error is well-known33.

Verification of the arbitration rule

This experiment verifies the setting of the automation effort with the arbitration rule according to a preset type of interaction (Eq. (12)). The human driver configuration shown in Fig. 2b is used so that the human can take part in the experiment. The driver was asked to perform a steady slalom maneuver while the automation had the objective of driving in a straight line (Fig. 4). The type of interaction was changed every fifteen seconds in the following order: (i) co-activity, (ii) collaboration, and (iii) competition. Furthermore, the driver was asked to take his hands off the steering wheel during the last five seconds of each interaction type. The automation impedance is set to be constant (κ = 0) for co-activity during the first 15 s. The measurements show that the pinion angle tracks the average angle between those of the driver and the automation in this particular case, where the driver accommodates the automation. Since the automation impedance is constant, the driver torque is simply proportional to the angular deviation from the automation target. During the next fifteen seconds, κ = 1, which corresponds to collaboration. The automation impedance is adapted based on the estimated driver impedance: the larger the driver impedance, the smaller the automation impedance. During the manual intervention, the driver perceives resistance from the higher authority of the automation at first. Then, as the automation detects driver engagement, the control authority is gradually transferred to the driver. Conversely, when the automation detects that the manual intervention fades, the automation impedance is recovered and the automation target angle is tracked. This demonstrates how the automation backs up the human in the driving task with a continuous estimation of the driver motor control. From 30 s onward, κ = −1, which sets the competition type of interaction. The automation impedance increases according to that of the driver in order to oppose manual intervention. The driver has to apply higher torque to accomplish the same maneuver. Smaller values of κ enable stronger resistance and virtually full rejection of the driver intervention.

Fig. 4: Interaction performance for different types of interaction measured on the human driver configuration.
figure 4

The estimation of the target angle \({\hat{\theta }}_{d}\) is computed from the road information and the measured driver torque Ttb with Eq. (13), and the estimated driver impedance \({\hat{Z}}_{d,1}\) and \({\hat{Z}}_{d,2}\) is calculated with the estimated driver target angle \({\hat{\theta }}_{d}\). Based on the estimated driver impedance \({\hat{Z}}_{d,1}\) and \({\hat{Z}}_{d,2}\) and a preselected type of interaction, an arbitration rule (Eq. (12)) is used to modulate the automation impedance Za,1 and Za,2, which finally generates the automation torque Ta to track its target angle θa with the feedback of the measured angle θp. The type of interaction is set to co-activity (κ = 0) from 0 to 15 s, to collaboration (κ = 1) from 15 to 30 s, and to competition (κ = −1) from 30 s onward. The sections with the gray background indicate time periods where the driver has his/her hands off the steering wheel.

During the three time periods in which the driver is not holding the steering wheel (hands-off), the control authority is naturally returned to the automation (nominal impedance). This highlights that the automation works as a backup to the driver but also that sustained effort is required for any deviation away from the AD trajectory. Automation backup is suitable for automated driving level 3 or more but not at level 2, where the driver is required to be engaged in the driving task, as it may increase the risk of misuse.

Performance of the trajectory adaptation

This section summarizes the results obtained for the trajectory adaptation on the static driving simulator, shown in Fig. 2c. Co-activity role allocation (κ = 0) is chosen to focus on the trajectory adaptation without the estimation of the driver motor control. The measured torque, the AD trajectory, and the actual vehicle trajectory are compared when the adaptation is deactivated and activated for a double lane change maneuver (Fig. 5). When deactivated, the AD trajectory is fixed on the initial lane and the driver has to continuously apply torque to deviate the vehicle away from the AD trajectory (Fig. 5a). The double lane change maneuver is performed at the expense of sustained effort as the automation continuously pulls the driver back to the AD trajectory. This interaction is of interest because it provides guidance to the driver. While large deviation may result in high interaction torque, it is fundamental as a haptic cue during local deviation.

Fig. 5: Comparison of driver inputs over a double lane change maneuver with the trajectory adaptation inactive and active measured on the driving simulator (driving simulator configuration).
figure 5

The start of each lane change corresponds to the point where the measured driver torque Ttb rises. a The automation (AD) trajectory yr,opt is not adapted, and the driver and the automation track the actual vehicle trajectory Δyv. b The AD trajectory yr,opt is adapted according to the driver intervention, and the driver and the automation track the actual vehicle trajectory Δyv.

As the trajectory shifts towards the next lane with the adaptation algorithm activated, the interaction torque relaxes and the vehicle is centered on that new lane (Fig. 5b). The driver applies torque to initiate a local deviation, which triggers an adaptation of the trajectory if sufficiently large. The bounded interaction torque and the adaptive guidance constitute the relevant haptic cues for collaborative steering.

Proof of concept on the test vehicle

The previously validated arbitration and inclusion control algorithms were implemented on the test vehicle shown in Fig. 2d. The driving scenario is the same double lane change as that in the previous section, and both arbitration and inclusion controls were active. Two responses are provided for the arbitration rule set to collaboration (Fig. 6a) and to competition (Fig. 6b). The practical verification of the proposed multi-level haptic control demonstrates a consistent response of the test vehicle. First, the driver interacts with the automation under the control allocation set with the arbitration rule. Because the driver and the automation impedances are complementary in collaboration mode, the observed torque peak is lower than that during co-activity (Fig. 5b). Second, the lateral deviation induced by the interaction is propagated to the trajectory planning via the inclusion control. Consequently, the vehicle tracks the left and right lanes without sustained manual torque.

Fig. 6: Performance of the complete multi-level haptic control measured on the test vehicle (test vehicle configuration).
figure 6

All symbols are consistent with those in Figs. 3–5. a The type of interaction is set to collaboration (κ = 1). b The type of interaction is set to competition (κ = −2).

In competition mode, the automation impedance varies together with that of the driver. Because of the relatively low value of κ, the driver cannot apply a torque high enough to deviate the vehicle from the AD trajectory without reaching the maximum capacity of the steering system. It is verified that no assimilation of the manual intervention occurs, and the vehicle tracks the trajectory developed without haptic contribution.

In summary, the haptic cue communicated to the driver is twofold. First, the driver feels the set role allocation and may be able to generate a lateral deviation of the vehicle. Second, if this deviation is large enough, inclusion occurs by means of haptic adaptation of the trajectory. As a consequence, the interaction torque remains bounded, indicating to the driver that his/her intervention has been assimilated. These haptic cues contribute to the realization of intuitive collaborative steering.

Driver quantitative study

This section presents an evaluation with several participants to verify the usefulness of the proposed multi-level haptic control framework. Five participants, aged 29 to 44 years (average 35), took part in the driving assessment on the test vehicle shown in Fig. 2d. All participants were experienced drivers and reported an average annual travel distance of 5800 km. The participants were required to execute double lane change maneuvers at 60 km h−1, as illustrated in Fig. 2d, with the four different control modes (Table 1) set in random order. All participants drove twice under each control mode and the averages of these trials are used for this study.

Table 1 Four control modes for the quantitative study

Two criteria are proposed for the assessment of collaborative steering: driver effort (DrE) and steering entropy (StE). DrE quantifies the driver steering torque effort over the test duration tsc28,34, while StE is a criterion representing the smoothness of the evolution of the steering angle that is commonly used to quantify maneuverability35,36. Their respective formulations are given in the “List of KPIs” section. Control modes with lower DrE and StE allow for a smoother operation with less effort for the driver. Statistical differences between the control modes were analyzed using one-way analysis of variance (ANOVA), and multiple comparisons between specific control modes were executed via paired-samples t-tests.
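
The exact DrE and StE formulations of this study are those given in the “List of KPIs” section. Purely as an illustration of the two criteria, the sketch below computes a driver-effort integral and the steering entropy in the commonly used Boer formulation35,36; both forms are assumptions and may differ in detail from the KPIs defined in that section.

```python
import numpy as np

def driver_effort(T_tb, dt):
    """Illustrative driver effort: time integral of the absolute driver torque over
    the test duration (an assumed form; see the 'List of KPIs' for the exact DrE)."""
    return np.sum(np.abs(np.asarray(T_tb))) * dt

def steering_entropy(theta, alpha):
    """Steering entropy in the common Boer formulation (assumed here, not necessarily
    the formulation of this study). theta: steering-angle samples (~150 ms spacing),
    alpha: 90th percentile of the prediction error from a baseline run."""
    theta = np.asarray(theta, dtype=float)
    # Second-order Taylor prediction from the three preceding samples.
    pred = theta[2:-1] + (theta[2:-1] - theta[1:-2]) \
           + 0.5 * ((theta[2:-1] - theta[1:-2]) - (theta[1:-2] - theta[:-3]))
    err = theta[3:] - pred
    edges = np.array([-5, -2.5, -1, -0.5, 0.5, 1, 2.5, 5]) * alpha
    idx = np.digitize(err, edges)                    # 9 bins, outer bins open-ended
    p = np.bincount(idx, minlength=9) / err.size
    p = p[p > 0]
    return -np.sum(p * np.log(p) / np.log(9))        # entropy with base-9 logarithm
```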

The results obtained with each control mode are summarized in Fig. 7. Figure 7a shows that the DrE significantly decreases in the order of modes 1–4 (p < 0.001) according to the ANOVA results. In particular, there is a large gap in DrE under modes 3 and 4 compared to modes 1 and 2. This is interpreted as the result of the implementation of inclusion, where the AD trajectory was adapted to match the driver intention so that sustained manual torque was no longer required during the lane change maneuver.

Fig. 7: Quantitative evaluation by driver effort (DrE) and steering entropy (StE) measured on the test vehicle (test vehicle configuration).
figure 7

DrE and StE are normalized with the min–max normalization method where the maximum value is the mean of mode 1 and the minimum value is zero, i.e. the lower bound of the physical range. The error bars represent the standard deviations between participants. a DrE evaluation between control modes. b StE evaluation between control modes. c DrE evaluation between control modes and participants. d StE evaluation between control modes and participants.

Further, Fig. 7b suggests that the application of arbitration reduces the average StE both with and without inclusion, i.e. the maneuver was executed more smoothly. In particular, the lowest StE is obtained under mode 4, compared to mode 1 (p < 0.04) and mode 3 (p < 0.02), according to the t-test results. The StE results in Fig. 7b exhibit higher variability between participants than the DrE results (Fig. 7a), especially under modes 1 and 2. In addition, the participants can be classified into two groups: group 1 includes participants 1 and 2, and group 2 includes the others. The StE is smaller and the DrE larger in group 1 compared to group 2 (Fig. 7c, d). From these trends, it can be inferred that the participants of group 1 applied more effort to achieve a smooth maneuver, while the participants of group 2 operated with a smaller torque input at the expense of smoothness. These variations in driver behavior suggest that control modes 1 and 2 may lead to a low rate of acceptance, whereas control modes 3 and 4, which yield a smaller variability, are likely to be accepted by a wide range of drivers.

The comprehensive analysis of these two criteria (DrE and StE) suggests that the proposed control framework based on arbitration and inclusion has the potential to achieve smooth maneuvering with less effort for a wide variety of drivers.

Discussion

The proposed control framework enables collaborative steering through haptic control integration at the operational and tactical levels of an automated driving vehicle. A broad spectrum of interactions between the driver and the automation is made available through the arbitration rules, and manually induced deviations are consistently assimilated through updates of the trajectory planning.

Compared to the literature22,23,24,25,26, which consider interaction and arbitration only, the proposed control framework prevents the vehicle from returning to the nominal AD trajectory after a driver intervention. Inclusion assimilates intervention as an additional factor alongside vision information for the rerouting of the AD trajectory (Fig. 5b and Fig. 6a). This enables maintaining continuous shared steering operation in the event of a manually induced lane or route change. For example, by tuning the inclusion parameters and selecting the appropriate interaction type, LCA, ALC, and LKA can be integrated consistently in partially automated vehicles.

Compared to control algorithms that consider solely interaction and inclusion29,30, the proposed framework permits full rejection of the driver input by setting the interaction type to competition (Fig. 6b). This means that it can accommodate any level of automation and the development of multi-objective ADAS functions. It is the arbitration that provides this capability (Fig. 7). For partially automated vehicles, the LKA function can be enhanced with a high temporary reaction torque to prevent a collision. This also applies to highly automated vehicles, where the automation can take responsibility for the OEDR. The automation will have the authority to collaborate or compete with the driver depending on the road and traffic situation.

Compared to control schemes that merely rely on inclusion27,28, the proposed control framework enables independent optimizations of the driver and vehicle responses. Driver reaction torque is tuned with the interaction and arbitration functions, while the vehicle motion is adapted with the inclusion function. This alleviates the tuning trade-off and results in higher overall performance.

The admittance control resolves the trade-off between the acceptance of the driver input and the tracking accuracy that is found in blended control, and significantly widens the tuning range.

The implemented framework requires the interaction type between the driver and the automation to be set by a higher-level controller based on endogenous (driver state) and exogenous (road and traffic conditions) information. Since the selection of the interaction type is out of scope for this work, the appropriate interaction type setting according to the driving situation remains to be addressed in a future task.

The adaptation of the automation impedance based on the preset type of interaction is simplified with Eq. (12) in comparison to the optimization method37 (Fig. 4). This approach allows the validation of the comprehensive concept of arbitration while satisfying the implementation requirement on mass-production hardware. However, to faithfully realize the interaction types originally defined in the literature37, a control method is required that minimizes, for each type of interaction, a cost function consisting of the angle tracking error and the efforts of the driver and the automation. In a future task, this could be achieved by using non-linear model predictive control (MPC)25,26.

Although the accuracy of the driver goal approximation used for arbitration is limited, it proves to be sufficiently rich to extract the dynamics of the driver impedance (Fig. 4). Moreover, the implemented approximation method, which merely relies on an admittance model, requires relatively low computational power. Nevertheless, the driver goal and impedance are abstract concepts, and it has not been verified whether the values estimated by the current algorithm match the actual driver motor control. However, this is a conceptual proposal to roughly approximate how strongly the driver is steering the vehicle, in order to implement the arbitration strategy. A further limitation is that the EKF tuning for the driver impedance estimation is based on the assumption that the stiffness and damping of the driver change simultaneously. This means that the EKF cannot capture situations where the driver operation causes an extreme change in stiffness or damping alone. This limitation could be addressed by comparison with driver operation data captured via electroencephalography (EEG) or electromyography (EMG) sensors.

Using the manual angle as input to the vehicle model to estimate the yaw rate is more robust to modeling errors than using the driver torque as suggested in the literature27,28. Furthermore, as the manual deviation is related to the type of interaction, propagation of this deviation to the trajectory planning enhances haptic consistency. Vehicle tests demonstrated the capability of interacting under the role allocated by the arbitration, as well as the inclusion of manual intervention. However, the timing of that propagation from the initial manual intervention to the trajectory adaptation should be carefully adjusted to guarantee an acceptable steering feel. Hence, further fine-tuning and customization of the proposed control strategy is essential for intuitive haptic communication and driver acceptance.

The analysis of DrE and StE suggests that the proposed control framework (Mode 4 in Table 1) can achieve smoother maneuvers with less effort for a wide variety of drivers compared to controls that use arbitration only (Mode 2 in Table 1)22,23,24,25,26 or that consider solely inclusion (Mode 3 in Table 1)27,28,29,30. However, since the sample size is relatively small, it would be worthwhile to validate the proposed control with a larger number of participants to obtain a quantitative evaluation of greater statistical relevance.

Conclusion

A driver-centered automation control has been proposed to address the concept of collaborative steering in automated driving without alteration of the hardware available in mass-produced vehicles. According to a preset type of interaction, the driver steering intention is reflected in the automation impedance and the trajectory planning. Because manual intervention affects both the operational and tactical levels of the automated driving control, intuitive haptic communication is made available to the driver and consistent integration across all vehicle actuators is supported.

The originality of the proposed implementation is summarized as follows:

  • The proposed multi-level control framework enables consistent integration of the ADAS functions while continuously operating in shared control mode. Furthermore, the high-performance angle control, combined with the large spectrum of interaction, makes this framework compatible with all automation levels where the driver can still be part of the driving.

  • The admittance control has been applied to a steering system to enable interaction between driver and automation. The interactive nature of admittance control alleviates the trade-off found in blended control while ensuring superior tracking performance of both AD trajectory and driver intervention. Furthermore, the interactions taking place in the virtual plant are isolated from hardware limitations resulting in robust performance.

  • A large spectrum of interaction between independent agents has been made available with the proposed rules of arbitration.

  • Consideration of the context of collaborative steering enables the assumption of independent interacting agents. The observability issue of the combined estimation of driver goal and impedance is avoided by considering the agent goals as boundary conditions and, consequently, impedance modulation can be achieved.

  • Practical reconsideration of classical two-level steering control within the context of collaborative steering resulted in the development of a simple approximation of the driver goal.

  • The manual deviation from the AD trajectory resulting from the interaction is consistently propagated to the trajectory planning by using the manual angle as input.

  • Through quantitative evaluation with five participants, the proposed multi-level haptic control has been validated in a vehicle. The assessment suggests a significant potential to provide smooth collaborative steering with less effort for a wide range of drivers.

While the proof of concept on the test vehicle demonstrates the capability of this multi-level collaborative steering control, fine-tuning and customization is required to render the steering feel comfortable and consistent for a safer and more reliable shared driving experience.

Finally, the application of the proposed control framework for the development of ADAS functions can be considered with the objective of encouraging driver engagement at partial automation or providing continuous automation backup to the driver at higher automation levels.

Methods

System dynamics

The system enabling collaborative steering is composed of the driver, the automation, and an electric power steering (EPS) system, which represents the mechatronic interface. The EPS is composed of a steering wheel, a motor, gears, and angle and torque sensors, as shown in Fig. 8. The dynamics of the EPS system can be described as:

$${T}_{d}+{i}_{s}{T}_{mot}+\epsilon ={{\Psi }}$$
(1)

where Tmot is the motor torque command, Td is the driver input torque, is is the ratio of the reduction gear, ϵ is white noise in the driver and the automation torque, and Ψ is the dynamics of the EPS. Assuming that the components from the lower side of the torque sensor to the front wheel are stiff, Ψ can be simplified to a two-inertia system38:

$${J}_{sw}{\ddot{\theta }}_{sw}={T}_{d}-{T}_{tb}$$
(2)
$${T}_{tb}={K}_{tb}({\theta }_{sw}-{\theta }_{p})$$
(3)
$${J}_{p}{\ddot{\theta }}_{p}={T}_{tb}+{i}_{s}{T}_{mot}+{T}_{ld}$$
(4)

where Jsw and Jp are the inertias of the steering wheel and of the part below the torque sensor, respectively, θsw and θp are the steering wheel and the measured pinion shaft angles, Ttb and Ktb are the torque sensor output and its stiffness, and Tld is a disturbance consisting of internal nonlinearities (friction, backlash, etc.) and the road load.
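
For illustration, Eqs. (2)–(4) can be integrated numerically as a simple two-inertia simulation. The sketch below is an explicit-Euler discretization with placeholder parameter values; it is not the test-bench or vehicle parametrization.

```python
import numpy as np

# Placeholder parameters (illustrative only, not the test-bench values).
J_sw, J_p, K_tb, i_s = 0.04, 0.1, 150.0, 20.0   # [kg m^2], [kg m^2], [N m rad^-1], [-]

def eps_step(x, T_d, T_mot, T_ld, dt=1e-3):
    """One explicit-Euler step of the two-inertia EPS model, Eqs. (2)-(4).
    State x = [theta_sw, theta_sw_dot, theta_p, theta_p_dot]."""
    theta_sw, theta_sw_dot, theta_p, theta_p_dot = x
    T_tb = K_tb * (theta_sw - theta_p)                    # Eq. (3): torque sensor output
    theta_sw_ddot = (T_d - T_tb) / J_sw                   # Eq. (2): steering wheel side
    theta_p_ddot = (T_tb + i_s * T_mot + T_ld) / J_p      # Eq. (4): pinion side
    x_next = np.array([theta_sw + theta_sw_dot * dt,
                       theta_sw_dot + theta_sw_ddot * dt,
                       theta_p + theta_p_dot * dt,
                       theta_p_dot + theta_p_ddot * dt])
    return x_next, T_tb
```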

Fig. 8: Structure of a dual pinion type electric power steering.
figure 8

In manual operation, the motor is controlled so that less effort is required from the driver when turning the wheels. For collaborative steering, the automation, whose input is computed in the motor control unit (MCU), is controlled so as to appropriately support the driver.

Both agents, the driver and the automation, are assumed to track their own trajectories based on individual impedance control loops. The motor control of the driver holding the steering wheel is formulated as follows:

$${T}_{d}=-{Z}_{d}^{{\prime} }{\xi }_{d},\,{Z}_{d}=\left[\begin{array}{c}{Z}_{d,1}\\ {Z}_{d,2}\end{array}\right],\,{\xi }_{d}=\left[\begin{array}{c}{\theta }_{sw}-{\theta }_{d}\\ {\dot{\theta }}_{sw}-{\dot{\theta }}_{d}\end{array}\right]$$
(5)

where Zd is the driver impedance (\({Z}_{d}\in {{\mathbb{R}}}_{\ge 0}^{2}\)), ξd is the tracking error of the driver, and θd is the target angle or goal of the driver. \({}^{{\prime} }\) denotes the transpose.

The effort Ta is the torque input of the automation in Fig. 1b. It constitutes one of the components of the EPS motor torque Tmot (see the next section):

$${T}_{a}=-{Z}_{a}^{{\prime} }{\xi }_{a},\,{Z}_{a}=\left[\begin{array}{c}{Z}_{a,1}\\ {Z}_{a,2}\end{array}\right],\,{\xi }_{a}=\left[\begin{array}{c}{\theta }_{p}-{\theta }_{a}\\ {\dot{\theta }}_{p}-{\dot{\theta }}_{a}\end{array}\right]$$
(6)

where Za is the automation impedance (\({Z}_{a}\in {{\mathbb{R}}}_{\ge 0}^{2}\)), ξa is the tracking error of the automation, and θa is the target angle or goal of the automation.

To represent how the driver and the automation impedances may evolve over time, the following dynamic models are introduced39.

$${\dot{Z}}_{d}(t)=-{T}_{z,d}^{-1}{Z}_{d}(t)+{T}_{z,d}^{-1}{Z}_{d}(t-1)$$
(7)
$${\dot{Z}}_{a}(t)=-{T}_{z,a}^{-1}{Z}_{a}(t)+{T}_{z,a}^{-1}{Z}_{a}(t-1)$$
(8)

where Tz,d and Tz,a are time-constant parameters for modulating the driver and the automation impedances.
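
A minimal sketch of the two agent models, Eqs. (5)–(8), is given below: each agent applies a torque proportional to its tracking error and error rate, and its impedance follows first-order dynamics toward a commanded value, here interpreting Z(t − 1) in Eqs. (7) and (8) as the commanded impedance. The names and the discretization are illustrative assumptions.

```python
import numpy as np

def agent_torque(Z, theta, theta_dot, theta_goal, theta_goal_dot):
    """Impedance-type effort of one agent, Eqs. (5) and (6): T = -Z' xi."""
    xi = np.array([theta - theta_goal, theta_dot - theta_goal_dot])
    return -np.asarray(Z) @ xi

def impedance_step(Z, Z_cmd, T_z, dt):
    """Discrete step of the impedance dynamics, Eqs. (7) and (8): first-order
    convergence of Z toward the commanded value Z_cmd with time constant T_z."""
    return np.asarray(Z) + (dt / T_z) * (np.asarray(Z_cmd) - np.asarray(Z))
```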

Interactive steering control

The interpretation of “physical human-robot interaction” (pHRI) has evolved significantly over the past decades. While safety was originally the main concern in the case of physical contact with a robot, pHRI has come to be considered an implicit means of communicating the human intention to a robot with the objective of jointly completing a task40. The literature41 groups control strategies for pHRI into two categories: “indirect force control” and “direct force control”. The former controls the force through motion feedback, with typical applications of impedance and admittance controls. The latter has the objective of controlling the interaction force to the desired value based on the feedback from the actual force measurement. The objectives of the interactive control of the steering actuator are twofold:

  • High-angle tracking performance

  • Enabling manual deviation from the AD trajectory without impairing the angle-tracking performance

An admittance control framework (Fig. 9a) is proposed for the interactive steering control to overcome the limitations of blended control. Although admittance control is not commonly used for haptic interaction42, it is appropriate for the application of automated steering because of the high performance of position tracking and the availability of the measurement of the driver torque. Assuming that a lower-level controller linearizes and decouples the plant dynamics, a linear two-degree-of-freedom controller (feedback and feedforward) with a single set of gains is sufficient to guarantee constant position tracking performance under any operating condition. One of the advantages of admittance control is that the inner angle control loop is purposefully made stiff so as to ensure high tracking performance. Consequently, the AD trajectory is tracked accurately in the absence of interaction. Conversely, the outer torque loop is naturally closed in the presence of interaction43. The reference position of the automation θa is corrected with an estimated manual deviation θm computed from the dynamics of the virtual plant.

$${J}_{vp}{\ddot{\theta }}_{m}={T}_{tb}+{T}_{a}$$
(9)

The angle reference of the inner loop θcmd is defined as the superposition of commands from the automation and the driver:

$${\theta }_{cmd}={\theta }_{a}+{\theta }_{m}$$
(10)

Hence, the automation angle control attempts to enforce the angle superposition of θa and θm by applying the motor torque command Tmot.

Fig. 9: Detailed representation of the admittance control structure for haptic shared control and equivalent interaction dynamics of an admittance-controlled electric power steering (EPS).
figure 9

a The dashed lines represent the driver control. The inner loop is an angular position control, which is purposefully made stiff. The outer loop is activated only when the driver inputs torque. The virtual EPS computes an estimation of the manual deviation, which reflects the driver intent under the preset type of interaction. The manual deviation is superposed to the angular command of the automation (AD trajectory) to form the command of the inner loop. b The equivalent interaction dynamics is a two-inertia system coupled with the torque sensor, whose stiffness is Ktb. The steering wheel (inertia Jsw) represents the interface with the driver motor control (goal θd and impedance Zd), while the reaction from the automation (goal θa and impedance Za) is applied to the virtual inertia Jvp.

The closed-loop system dynamics are obtained by substituting Eq. (2), Eq. (5), and Eq. (6) into Eq. (9) and assuming perfect tracking (θcmd ≈ θp).

$$\begin{array}{l}-{J}_{sw}{\ddot{\theta }}_{sw}-{Z}_{d}^{{\prime} }{\xi }_{d}={J}_{vp}{\ddot{\theta }}_{m}+{Z}_{a}^{{\prime} }{\xi }_{a}\end{array}$$
(11)

This equivalent two-inertia system is illustrated in Fig. 9b. It shows that the torque felt by the driver (\({T}_{d}={Z}_{d}^{{\prime} }{\xi }_{d}\)) when interacting with the automation can be controlled by tuning the virtual plant and the automation effort (\({T}_{a}={Z}_{a}^{{\prime} }{\xi }_{a}\)). For stability reasons, the bandwidth of the outer torque control loop should be set lower than that of the inner angle control loop42,43,44. In practice, the inertia of the virtual plant is set to a value higher than that of the actual plant (Jvp > Jp). Hence, it is the automation effort that is modulated to render the interaction. As shown in the next section, an arbitration rule is used to allocate the automation control authority according to a preset type of interaction.

In consequence, the admittance control framework enables a manual deviation θm of the vehicle from the AD trajectory θa. When the manual intervention ends (θm = 0), the steering returns to the AD trajectory (θcmd = θa).
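
A minimal sketch of the outer admittance loop, Eqs. (6), (9), and (10), is given below: the virtual plant integrates the measured driver torque and the automation reaction to obtain the manual deviation, which is then superposed on the AD reference for the stiff inner angle loop. The explicit-Euler integration, the use of the perfect-tracking assumption (θp ≈ θcmd), and all numerical values are simplifications for illustration.

```python
import numpy as np

J_vp = 0.5   # virtual-plant inertia (placeholder value, chosen larger than J_p)

def virtual_plant_step(theta_m, theta_m_dot, T_tb, Z_a, dt=1e-3):
    """One explicit-Euler step of the virtual EPS, Eqs. (6) and (9). Under perfect
    inner-loop tracking (theta_p ~ theta_a + theta_m), the automation tracking error
    of Eq. (6) reduces to the manual deviation [theta_m, theta_m_dot]."""
    T_a = -np.asarray(Z_a) @ np.array([theta_m, theta_m_dot])   # Eq. (6)
    theta_m_ddot = (T_tb + T_a) / J_vp                          # Eq. (9)
    return theta_m + theta_m_dot * dt, theta_m_dot + theta_m_ddot * dt, T_a

def angle_command(theta_a, theta_m):
    """Eq. (10): reference of the stiff inner angle loop."""
    return theta_a + theta_m
```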

Arbitration

Arbitration in pHRI is required to regulate the control authority of the robot when attempting to accomplish a common task according to a preset type of interaction. The literature37 proposes a taxonomy of the types of interactions based on neuroscience and game theory:

  • Assistance is an extreme case of cooperation where, typically, the robot (slave) is used to amplify the physical capability of the human (master).

  • Cooperation takes place when the two agents work towards a common end and need each other to reach the goal. Part of cooperation is the education role arbitration, which is critical for gaining new capability by ensuring a certain degree of engagement.

  • Co-Activity occurs when the two interacting agents, without knowledge of each other’s actions, incidentally succeed in a common task.

  • Collaboration features no fixed role distribution but rather adapts the distribution to accommodate the other agent while still considering its own perspective.

  • Competition, similarly to collaboration, is a symmetric arbitration where the role distribution opposes the other while considering its own perspective.

The review40 cites numerous contributions for each interaction type. The type of interaction is likely to vary dynamically over the completion of a joint task. Endogenous (driver state) and exogenous (road and traffic conditions) information is used in a higher-level controller to set the interaction type, which is, however, outside the scope of this work.

The objective of the arbitration is to define how the automation has to react to the driver intent based on the preset type of interaction. From Eq. (11) and assuming a constant virtual plant inertia, two variables, the automation angle θa and its impedance Za, are available for adjusting the reaction torque as a function of the driver goal θd and impedance Zd. Two approaches have been proposed in the literature. The literature24 focuses solely on the interaction and avoids consideration of the boundary conditions by opting for constant human and automation impedances. In this way, an arbitration rule was established based on the human goal, resulting in a large spectrum of interaction but with limited dynamic performance. Conversely, the literature23 addresses the application of robotic rehabilitation, which relies on cooperation, with the human goal assumed to be equivalent to that of the robot. Under this assumption, impedance modulation was developed for this specific type of interaction. Here, the proposed arbitration considers that the driver and the automation are two independent agents. Therefore, their respective goals are boundary conditions that need to be identified separately. Then, the driver impedance can be estimated with the EKF once the driver goal is known (detailed in the next section).

Here, the following arbitration rule is proposed for the adaptation of the automation impedance:

$${Z}_{a}={Z}_{a,0}-\kappa {\hat{Z}}_{d}$$
(12)

where Za,0 is the nominal automation impedance, \({\hat{Z}}_{d}\) is the estimated driver impedance, and \(\kappa \in {\mathbb{R}}\) is a parameter used to set the type of interaction. For κ = 0, the automation impedance is constant, which corresponds to co-activity. This is the natural type of interaction obtained from the admittance control. For κ > 0, the automation adapts and supports the driver. This is the collaboration type of interaction. The opposite behavior, or competition, is obtained for κ < 0. Here the automation impedance increases with that of the driver, resulting in a rejection of the manual intervention. With this approach, the range of interactions from competition to co-activity and collaboration is made available. However, cooperation (including assistance) is not applicable because of the assumption of independent goals of the driver and the automation. Note, nevertheless, that cooperation-type interaction is already used in the EPS control for manual operation: the EPS (automation) amplifies the manual torque so as to assist the driver in reaching their goal.
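
A minimal sketch of the arbitration rule, Eq. (12), is given below. The nominal impedance values are placeholders, and the clamping to nonnegative values is an added safeguard consistent with \({Z}_{a}\in {{\mathbb{R}}}_{\ge 0}^{2}\), not part of the rule itself.

```python
import numpy as np

Z_a0 = np.array([5.0, 0.3])   # nominal automation impedance [stiffness, damping] (placeholder)

def arbitrate(Z_d_hat, kappa):
    """Arbitration rule, Eq. (12): Z_a = Z_a0 - kappa * Z_d_hat.
    kappa = 0: co-activity, kappa > 0: collaboration, kappa < 0: competition.
    The result is clamped to nonnegative values as a safeguard."""
    return np.maximum(Z_a0 - kappa * np.asarray(Z_d_hat, dtype=float), 0.0)
```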

Estimation of the driver motor control

Realization of the arbitration relies on the availability of the driver goal and impedance. The observability issue of a combined estimation of these two variables with the interaction dynamics32 (Eq. (11)) is avoided with independent estimations. Indeed, it is assumed that the contextual nature of the joint task of driving defines the boundary conditions (driver and automation goals) of the interaction dynamics. Then, the driver and the automation vary their impedances as they interact under the constraints of their respective goals.

The driver goal and impedance are abstract representations of how the driver interacts with the automation. Although numerous driver models have been proposed, their objectives are to represent the driver under particular conditions. These objectives range from vehicle tracking of a given trajectory with a virtual driver model to more elaborate driver models, which include trajectory planning with optimization preferences (time, acceleration, braking, rpm, etc.)45. However, there is no practical and generic approach available that could predict where a driver intends to go in any situation. Similarly, various attempts to describe and identify the driver impedance have been proposed, but they rely either on additional sensors (e.g., EMG, grip force, driver torque) or on laboratory-based setups with limited practical relevance46,47. The literature48 proposes the identification of the driver impedance while driving under the assumption of a constant driver goal. Unfortunately, these approaches are not suitable for the estimation of the driver impedance while interacting with the automation.

Considering the economic constraints of mass-produced vehicles with a limited number of sensors available, the driver goal and impedance can, at best, be approximated only roughly. Here, an approximation of the driver goal is computed first. The sensors available in mass-produced vehicles are limited, so the two-level model of steering49 is applied in the context of collaborative steering7. The driver anticipatory visual open-loop control is assumed to track the center of the lane as an inherent environmental constraint. Any deviation from it is considered as originating from a driver intent in the compensatory closed-loop control. Hence, the estimate of the driver goal \({\hat{\theta }}_{d}\) is composed of an environmental constraint θenv and of the driver intent θint.

$${\hat{\theta }}_{d}={\theta }_{{{{{\rm{env}}}}}}+{\theta }_{{{{{\rm{int}}}}}}$$
(13)

A steady-state model of the vehicle motion with longitudinal speed vx and road curvature ρ as inputs is used for the computation of the environmental constraint:

$${\theta }_{env}=\left(1-\frac{{M}_{v}{v}_{x}^{2}}{2{({l}_{f}+{l}_{r})}^{2}}\frac{{l}_{f}{C}_{f}-{l}_{r}{C}_{r}}{{C}_{f}{C}_{r}}\right)({l}_{f}+{l}_{r}){i}_{o}\rho$$
(14)

where Mv is the mass of the vehicle, lf and lr are the distances from the center of gravity to the front and rear axles, respectively, Cf and Cr are the front and rear cornering stiffnesses, and io is the overall gear ratio from the steering angle to the tire angle.

A driver intent estimator is introduced to generate an approximation of the manual deviation away from the environmental constraint. It is assumed that a simple admittance model, which converts the driver torque into a desired future angle propagated over some time interval, provides a rough approximation of the driver intent50:

$${\theta }_{int}={\iint}_{t}^{t+{t}_{i}}\frac{{T}_{tb}(\tau )}{{J}_{sw}+{J}_{d}}\,d{\tau }^{2}$$
(15)

where Jd is the driver inertia and ti is the propagation time. Both parameters can be tuned.
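A minimal sketch of Eqs. (13) and (15) is given below. It assumes that the current torque Ttb is held constant over the propagation horizon ti, so that the double integration reduces to a closed form; Jd, ti, and the default values are placeholders.

```python
def theta_int(T_tb: float, t_i: float, J_sw: float, J_d: float) -> float:
    """Eq. (15): driver intent as the double time integral of T_tb/(J_sw + J_d).

    Holding the current torque constant over the horizon (an assumption of this
    sketch) gives the closed form 0.5 * a * t_i**2 with a = T_tb / (J_sw + J_d).
    """
    return 0.5 * (T_tb / (J_sw + J_d)) * t_i ** 2

def driver_goal(theta_env_value: float, T_tb: float,
                t_i: float = 0.5, J_sw: float = 0.04, J_d: float = 0.06) -> float:
    """Eq. (13): environmental constraint plus driver intent."""
    return theta_env_value + theta_int(T_tb, t_i, J_sw, J_d)
```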

With the available measurements of the torque Ttb and pinion angle θp as well as the estimate of the driver goal \({\hat{\theta }}_{d}\), the EKF51 is developed for estimating the driver impedance52. The measurement of the pinion angle allows the decoupling of the steering wheel inertia from the dynamics of the pinion53. Consequently, Eq. (2), Eq. (3), Eq. (5), and Eq. (7) are discretized at time interval Δt to form the plant model for the estimation.

$${x}_{t+1}={f}_{t}({x}_{t})+{w}_{t}$$
(16)
$${y}_{t}={h}_{t}({x}_{t})+{v}_{t}$$
(17)

where,

$${f}_{t} =\left[\begin{array}{c}{\theta }_{sw,t}+{\dot{\theta }}_{sw,t}\Delta t\\ {\dot{\theta }}_{sw,t}+{J}_{sw}^{-1}({T}_{d,t}-{T}_{tb,t})\Delta t\\ {\hat{\theta }}_{d,t}+{\dot{\hat{\theta }}}_{d,t}\Delta t\\ {\dot{\hat{\theta }}}_{d,t}\\ {Z}_{d,1,t}+{T}_{z,d}^{-1}(-{Z}_{d,1,t}+{Z}_{d,1,t-1})\Delta t\\ {Z}_{d,2,t}+{T}_{z,d}^{-1}(-{Z}_{d,2,t}+{Z}_{d,2,t-1})\Delta t\end{array}\right]\\ {h}_{t} =\left[\begin{array}{c}{T}_{tb,t}\\ {\hat{\theta }}_{d,t}\\ {\dot{\hat{\theta }}}_{d,t}\end{array}\right]\\ {x}_{t} ={\left[\begin{array}{cccccc}{\theta }_{sw,t}&{\dot{\theta }}_{sw,t}&{\hat{\theta }}_{d,t}&{\dot{\hat{\theta }}}_{d,t}&{Z}_{d,1,t}&{Z}_{d,2,t}\end{array}\right]}^{{\prime} }$$

The following observer is formulated for the estimation of the driver impedance.

$${\hat{x}}_{t+1/t}={f}_{t}({\hat{x}}_{t/t})$$
(18)
$${\hat{x}}_{t/t}={\hat{x}}_{t/t-1}+{K}_{t}({ \, y}_{t}-{h}_{t}({\hat{x}}_{t/t-1}))$$
(19)

The EKF gain is calculated as:

$${K}_{t}={P}_{t/t-1}{\hat{H}}_{t}^{{\prime} }{({\hat{H}}_{t}{P}_{t/t-1}{\hat{H}}_{t}^{{\prime} }+R)}^{-1}$$

where P can be obtained by solving the Riccati equations:

$${P}_{t+1/t} ={\hat{F}}_{t}{P}_{t/t}{\hat{F}}_{t}^{{\prime} }+Q\\ {P}_{t/t} ={P}_{t/t-1}-{P}_{t/t-1}{\hat{H}}_{t}^{{\prime} }{({\hat{H}}_{t}{P}_{t/t-1}{\hat{H}}_{t}^{{\prime} }+R)}^{-1}{\hat{H}}_{t}{P}_{t/t-1}$$

where \(\hat{x}\) is the state estimated by the EKF and \(\hat{F}\) and \(\hat{H}\) are Jacobian matrices, defined as follows.

$${\hat{F}}_{t}={\left(\frac{\partial {f}_{t}({x}_{t})}{\partial {x}_{t}}\right)}_{{x}_{t} = \hat{{x}_{t}}},\quad{\hat{H}}_{t}={\left(\frac{\partial {h}_{t}({x}_{t})}{\partial {x}_{t}}\right)}_{{x}_{t} = \hat{{x}_{t}}}$$

where Q and R are the covariance matrices of the process noise w and observation noise v respectively, which have to be tuned based on the modeling error and the noise level of the target system. Through computation of the prediction and correction33 with Eq. (18) and Eq. (19), the last two components of \(\hat{x}\) are estimated as the driver impedance Zd.
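The prediction/correction cycle of Eqs. (16)–(19) can be sketched as follows. The plant and measurement functions stand in for the discretized dynamics of Eqs. (2), (3), (5), and (7), and the Jacobians are obtained numerically rather than analytically; only the structure of the estimator is illustrated, not the tuning used in this study.

```python
import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    """Finite-difference Jacobian of func (returning an array) evaluated at x."""
    y0 = np.asarray(func(x))
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.asarray(func(x + dx)) - y0) / eps
    return J

def ekf_step(f, h, x_prior, P_prior, y, Q, R):
    """One EKF cycle: correction (Eq. (19)) followed by prediction (Eq. (18))."""
    # Correction: gain K_t and covariance measurement update
    H = numerical_jacobian(h, x_prior)
    S = H @ P_prior @ H.T + R
    K = P_prior @ H.T @ np.linalg.inv(S)
    x_corr = x_prior + K @ (y - np.asarray(h(x_prior)))
    P_corr = P_prior - K @ H @ P_prior
    # Prediction: propagate the corrected state and covariance
    F = numerical_jacobian(f, x_corr)
    x_pred = np.asarray(f(x_corr))
    P_pred = F @ P_corr @ F.T + Q
    return x_pred, P_pred, x_corr

# State layout assumed from the text:
# x = [theta_sw, dtheta_sw, theta_d_hat, dtheta_d_hat, Z_d1, Z_d2];
# the estimated driver impedance is read from the last two entries of x_corr.
```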

Inclusion of driver intent into the trajectory planning

The arbitration rule allocates the control authority of the automation according to the preselected type of interaction. Manual intervention causes a deviation from the AD trajectory. Sustained input from the driver results in a steady interaction torque, and once the input is released, the vehicle returns to the AD trajectory. This section presents the inclusion of driver intervention into trajectory planning to realize collaborative steering. For example, during a manually triggered lane change maneuver, it is necessary to reflect the driver intent in the trajectory planning. Hence, the reaction torque remains bounded along the maneuver and the driver does not have to apply a sustained torque to keep the vehicle in the new lane. These effects on the reaction torque represent haptic cues that communicate to the driver how the automated steering collaborates during the maneuver.

The proposed approach is inspired by the literature27,28 for the integration of the driver intent into trajectory planning. However, whereas the literature27,28 uses the driver torque for the trajectory planning (owing to the absence of interactive steering control), the proposed approach uses the angular deviation resulting from the interaction. Consequently, collaborative steering is available only when the type of interaction enables a manual deviation from the AD trajectory, such as co-activity and collaboration. In the following, only the differences from the literature27,28 are presented.

Inclusion consists of adding a term that represents the manual intent to the trajectory planning. First, the yaw rate γm of the vehicle caused by the manual intervention is computed from a single-track vehicle model54 with the driver angle θm as input (Fig. 10a):

$${\dot{x}}_{v} ={A}_{v}{x}_{v}+{B}_{v}{u}_{v}\\ {x}_{v} =\left[\begin{array}{l}\beta \\ {\gamma }_{m}\\ \end{array}\right]\,{u}_{v}={\delta }_{m}=\frac{{\theta }_{m}}{{i}_{o}}\\ {A}_{v} =\left[\begin{array}{cc}{a}_{11}&{a}_{12}\\ {a}_{21}&{a}_{22}\\ \end{array}\right],\,{B}_{v}=\left[\begin{array}{c}{b}_{11}\\ {b}_{21}\\ \end{array}\right]\\ {a}_{11} =\frac{-2({C}_{r}+{C}_{f})}{{M}_{v}{v}_{x}},\,{a}_{12}=\frac{2({l}_{r}{C}_{r}-{l}_{f}{C}_{f})}{{M}_{v}{v}_{x}^{2}}-1,\\ {a}_{21} =\frac{2({l}_{r}{C}_{r}-{l}_{f}{C}_{f})}{{I}_{z}},\,{a}_{22}=\frac{-2({l}_{r}^{2}{C}_{r}+{l}_{f}^{2}{C}_{f})}{{I}_{z}{v}_{x}},\\ {b}_{11} =\frac{2{C}_{f}}{{M}_{v}{v}_{x}},\,{b}_{21}=\frac{2{l}_{f}{C}_{f}}{{I}_{z}}$$
(20)

where β is the side slip angle, vx is the longitudinal velocity, and Iz is the yaw moment of inertia of the vehicle. Second, a constant turn rate and velocity (CTRV) model is used to convert the calculated yaw rate into a driver-desired lateral deviation. The CTRV model enables the computation of the lateral deviation Δyd when the vehicle moves forward during a time horizon ts in stationary conditions, with constant longitudinal velocity vx and yaw rate γm. The kinematics are given as follows55:

$$\Delta {y}_{d}=\Delta {y}_{v}+\frac{{v}_{x}}{{\gamma }_{m}}(1-\cos ({t}_{s}{\gamma }_{m}))$$
(21)

where Δyd and Δyv represent the lateral error between the driver-desired lateral position and the AD trajectory and that between the current vehicle position and the AD trajectory, respectively, as illustrated in Fig. 10b. The inclusion of the driver intent uses this estimate of the lateral deviation as a new corrective term in the trajectory planning. The cost function used to select the optimal lateral trajectory yr,opt from a predefined set of candidates yr(i, k) is augmented with this corrective term.

$${C}_{y}(i,k)= \,{k}_{j}{J}_{y}(i,k)+{k}_{t}{t}_{f}(k)\\ +{k}_{a}{({ \, y}_{rf}(i))}^{2}+{k}_{m}{({ \, y}_{rf}(i)-\Delta {y}_{d})}^{2}$$
(22)

where \({k}_{j},{k}_{t},{k}_{a},{k}_{m}\in {\mathbb{R}}\) are the weights of the cost function components. kjJy(i, k) is the jerk-related term to account for driving comfort. kttf(k) is the time-related term. \({k}_{a}{({y}_{rf}(i))}^{2}\) and \({k}_{m}{({y}_{rf}(i)-\Delta {y}_{d})}^{2}\) account for the deviation errors from both agents. The final lateral position yrf(i) is used with the completion time tf(k) for the computation of the trajectory candidates yr(i, k).
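A compact sketch of Eqs. (21) and (22) is given below: the driver-induced yaw rate γm (obtained from the single-track model of Eq. (20)) is converted into a desired lateral deviation and used to score a set of candidate final lateral positions. The candidate set, the jerk term, and the weights are simplified placeholders.

```python
import math

def lateral_deviation_ctrv(delta_y_v: float, v_x: float, gamma_m: float, t_s: float) -> float:
    """Eq. (21): driver-desired lateral deviation from the AD trajectory."""
    if abs(gamma_m) < 1e-6:            # straight-line limit of the CTRV model
        return delta_y_v
    return delta_y_v + (v_x / gamma_m) * (1.0 - math.cos(t_s * gamma_m))

def augmented_cost(y_rf: float, t_f: float, J_y: float, delta_y_d: float,
                   k_j: float = 1.0, k_t: float = 0.1,
                   k_a: float = 1.0, k_m: float = 1.0) -> float:
    """Eq. (22): cost of one candidate (final lateral position y_rf, completion time t_f)."""
    return k_j * J_y + k_t * t_f + k_a * y_rf ** 2 + k_m * (y_rf - delta_y_d) ** 2

# Example: a driver nudge producing gamma_m = 0.05 rad/s at 25 m/s pulls the
# selected candidate away from the lane center toward the driver-desired position.
delta_y_d = lateral_deviation_ctrv(delta_y_v=0.2, v_x=25.0, gamma_m=0.05, t_s=2.0)
candidates = [(y / 2.0, 3.0, 0.1) for y in range(-2, 8)]   # (y_rf, t_f, J_y)
best = min(candidates, key=lambda c: augmented_cost(*c, delta_y_d=delta_y_d))
print(f"delta_y_d = {delta_y_d:.2f} m, selected y_rf = {best[0]:.2f} m")
```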

Fig. 10: Vehicle and constant turn rate and velocity (CTRV) model.

a Representation of the single-track vehicle model used for the calculation of the yaw rate γm from the manual deviation θm. β is the side slip angle, vx is the longitudinal velocity, and io is the overall gear ratio from the steering angle to the tire angle δm. lf and lr are the distances from the gravity center to the front and rear axles, respectively. b CTRV model for the calculation of the driver-desired lateral deviation Δyd when the vehicle moves with the constant yaw rate γm and longitudinal velocity vx during a time horizon ts. The lateral deviation caused by the manual intervention is represented in Frenet coordinates: the AD trajectory lies on the s-axis and any deviation from it corresponds to a relative displacement Δyv along the d-axis.

Notice that the selected optimal lateral trajectory is tracked only when there is no driver intervention. During manual intervention, the optimal lateral trajectory is recomputed continuously, at intervals shorter than the completion time tf.

The driver intent is included in the trajectory planning through the driver lateral position error term in the cost function (Eq. (22)). Consequently, the deviation caused by the driver intervention is propagated to the trajectory planning, thus preventing the occurrence of excessive and sustained interaction torque. Moreover, this assimilation transfers the manual correction of the AD trajectory consistently to the other actuators of the vehicle, such as the brakes and the accelerator (Fig. 1c).

List of KPIs

The KPIs used for the quantitative driver study are listed below; an illustrative code sketch of both KPIs follows the list:

  • Driver effort (DrE)

Driver steering effort, computed as the time integral of the squared torque Ttb over the maneuver duration tsc:

$$DrE=\int\nolimits_{0}^{{t}_{sc}}{T}_{tb}^{2}dt$$
(23)
  • Steering entropy (StE)

Algorithm to calculate the entropy:

    1. Obtain the time-series steering angle data at each sampling interval dt (dt was set to 150 ms in this study, following the literature56).

    2. Predict the future steering angle by quadratic Taylor expansion from the past three samples of the steering angle, and compute the prediction error between the predicted and the actual steering angle.

    3. Determine the 90th percentile value α of the prediction error distribution centered at 0 degrees (α was set to 0.25 from the average prediction error distribution of all participants when driving in conventional manual mode).

    4. Divide the frequency distribution of the prediction error into nine bins with boundaries (−5α, −2.5α, −α, −0.5α, 0.5α, α, 2.5α, 5α).

    5. Calculate StE from the proportion Pi of each bin using the following formula:

$$StE=-\mathop{\sum }\limits_{i=1}^{9}{P}_{i}{\log }_{9}{P}_{i}$$
(24)
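A minimal sketch of both KPIs is given below. The rectangle-rule integration and the signal handling are simplifications assumed for this example; the entropy computation follows the nine-bin procedure listed above.

```python
import numpy as np

def driver_effort(T_tb: np.ndarray, dt: float) -> float:
    """Eq. (23): integral of the squared torque over the maneuver (rectangle rule)."""
    return float(np.sum(np.square(T_tb)) * dt)

def steering_entropy(theta: np.ndarray, alpha: float) -> float:
    """Eq. (24): nine-bin steering entropy of steering-angle samples (one per 150 ms)."""
    theta = np.asarray(theta, dtype=float)
    # Step 2: quadratic (second-order Taylor) extrapolation from the past three samples.
    d1 = theta[2:-1] - theta[1:-2]
    d2 = d1 - (theta[1:-2] - theta[:-3])
    prediction = theta[2:-1] + d1 + 0.5 * d2
    error = theta[3:] - prediction
    # Step 4: nine bins with boundaries at (-5, -2.5, -1, -0.5, 0.5, 1, 2.5, 5) * alpha.
    edges = np.array([-5.0, -2.5, -1.0, -0.5, 0.5, 1.0, 2.5, 5.0]) * alpha
    counts = np.bincount(np.digitize(error, edges), minlength=9)
    p = counts / counts.sum()
    p = p[p > 0]                       # empty bins contribute zero to the entropy
    # Step 5: entropy with a base-9 logarithm.
    return float(-np.sum(p * np.log(p) / np.log(9)))
```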