CanDrive
Challenge
SAE International (the Society of Automotive Engineers) defines six levels of driving automation, from L0 (no automation) to L5 (full automation). Most automated driving systems on the road today are L2, where the driver must continuously supervise the system and immediately intervene if it makes a mistake.
How does exposure to automated driving affect attitudes toward safety? Do people accurately understand what the car is doing? As they become more comfortable with the vehicle, do they become complacent?
Many perspectives on automated driving focus on maximizing “driver trust”. However, trust is dangerous when drivers falsely believe the vehicle to be more reliable than it is. Instead, people should hold an accurate, “calibrated” view of the system: knowing where it performs well and where it is likely to struggle.
A Tesla Model S with Autopilot™ (SAE Level 2 automation) was equipped with a Seeing Machines FOVIO eye-tracking suite, alongside multiple cameras, a time-of-flight sensor, a speaker system for administering cognitive load tests (n-back), an LED-based peripheral detection task (PDT), and a Mobileye vehicle informatics unit.
This instrumentation told us not only how the vehicle was performing (following distance, number of surrounding vehicles, speed, etc.) but also how the driver was performing: we could continuously see what they were looking at, how sensitive their peripheral vision was, and how heavy their mental workload was.
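To make those data streams concrete, below is a minimal sketch of the kind of synchronized per-frame record such an instrumented vehicle could log, merging vehicle telemetry with driver state. The field names and types are illustrative assumptions, not the actual CanDrive schema.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical per-frame record merging vehicle and driver streams.
# All field names are illustrative assumptions, not the CanDrive schema.
@dataclass
class DriveFrame:
    timestamp_s: float          # shared clock across all sensors
    # Vehicle state (e.g., from the informatics unit / CAN bus)
    speed_mps: float
    follow_distance_m: float
    surrounding_vehicles: int
    autopilot_engaged: bool
    # Driver state (e.g., from the eye tracker and task hardware)
    gaze_yaw_deg: float
    gaze_pitch_deg: float
    head_yaw_deg: float
    pdt_reaction_time_ms: Optional[float]  # peripheral detection task, if probed
    nback_active: bool                     # cognitive-load task in progress

frame = DriveFrame(timestamp_s=12.4, speed_mps=29.1, follow_distance_m=45.0,
                   surrounding_vehicles=3, autopilot_engaged=True,
                   gaze_yaw_deg=-2.1, gaze_pitch_deg=0.4, head_yaw_deg=-0.5,
                   pdt_reaction_time_ms=612.0, nback_active=True)
```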
The Method
This mixed-methods study consisted of two phases: first, we distracted drivers on a test track; second, we tested their cognitive workload and peripheral vision on a public highway. Afterwards, we interviewed participants about their attitudes toward, and understanding of, driving automation to see what had changed.
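To give a flavor of the cognitive-load probe, here is a hedged sketch of how an auditory n-back task (the test mentioned in the apparatus above) can be generated and scored. The letter set, sequence length, and n = 2 are illustrative choices, not the study’s exact protocol.

```python
import random

# Illustrative n-back task: the driver responds whenever the current
# letter matches the one presented n items earlier. Parameters are
# assumptions, not the study's actual protocol.
def generate_nback_sequence(length=30, alphabet="ACDEHKLR"):
    return [random.choice(alphabet) for _ in range(length)]

def score_nback(sequence, responses, n=2):
    """responses[i] is True if the driver signaled a match at item i."""
    hits = misses = false_alarms = 0
    for i, responded in enumerate(responses):
        is_target = i >= n and sequence[i] == sequence[i - n]
        if is_target and responded:
            hits += 1
        elif is_target:
            misses += 1
        elif responded:
            false_alarms += 1
    return {"hits": hits, "misses": misses, "false_alarms": false_alarms}

seq = generate_nback_sequence()
perfect = [i >= 2 and seq[i] == seq[i - 2] for i in range(len(seq))]  # ideal responder
print(score_nback(seq, perfect))  # all targets hit, no false alarms
```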
The Results
- On the highway, drivers can reliably detect objects through near-peripheral vision, but this ability degrades with increasing visual angle, low-speed vehicle following, cognitive load, and age.
- In manual driving, cognitive distraction can be measured from how the vehicle is performing, but not in L2 mode, where the automation, not the driver, is controlling the vehicle.
- Drivers engaging with their cellphones are more likely to rely on “lizard-like” glances, moving their eyes while keeping their heads still. This means head-tracking alone will not catch texting; eye-tracking is also needed.
- Gaze features, such as percent road center, are effective indicators of cognitive distraction, but less reliable in L2 driving (a sketch of this feature follows this list).
- Interacting with L2 automation increases drivers’ acceptance and trust of driving automation. Even after experiencing multiple system failures, drivers tend to focus on what the system did well and accept more of the blame themselves.
- While Tesla uses a torque sensor to check whether drivers have their hands on the wheel, this cannot tell whether drivers are paying attention, and it also creates mode confusion: drivers accidentally disengage the assistance without realizing it, and then expect the car to react when they are actually driving unassisted.
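For the gaze feature referenced above, here is a minimal sketch of “percent road center”: the share of gaze samples falling within a fixed angular radius of a point ahead on the road. The 8° radius and variable names are common conventions assumed here, not values taken from the papers.

```python
import numpy as np

ROAD_CENTER_RADIUS_DEG = 8.0  # assumed radius; a common choice in the literature

def percent_road_center(gaze_yaw_deg, gaze_pitch_deg,
                        center_yaw_deg=0.0, center_pitch_deg=0.0):
    """Fraction of gaze samples within the road-center region
    (small-angle approximation of angular distance)."""
    yaw = np.asarray(gaze_yaw_deg) - center_yaw_deg
    pitch = np.asarray(gaze_pitch_deg) - center_pitch_deg
    return float(np.mean(np.hypot(yaw, pitch) <= ROAD_CENTER_RADIUS_DEG))

# Toy comparison: gaze concentrates on the road center under cognitive load,
# so the narrower gaze distribution yields a higher percent road center.
rng = np.random.default_rng(0)
loaded = percent_road_center(rng.normal(0, 3, 1000), rng.normal(0, 2, 1000))
relaxed = percent_road_center(rng.normal(0, 8, 1000), rng.normal(0, 5, 1000))
print(f"under load: {loaded:.2f}, relaxed: {relaxed:.2f}")
```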
My Contribution
I contributed to Phase 2 of the CanDrive project through experiment design, data collection, and manuscript review.
Papers:
- Yang, S., Wilson, K., Roady, T., Kuo, J., & Lenné, M. G. (2022). Beyond gaze fixation: Modeling peripheral vision in relation to speed, Tesla Autopilot, cognitive load, and age in highway driving. Accident Analysis & Prevention.
- Yang, S., Shiferaw, B., Roady, T., Kuo, J., & Lenné, M. G. (2021). Drivers Glance Like Lizards during Cell Phone Distraction in Assisted Driving. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 65(1), 1410–1414. https://doi.org/10.1177/1071181321651147
- Wilson, K. M., Yang, S., Roady, T., Kuo, J., & Lenné, M. G. (2020). Driver trust & mode confusion in an on-road study of Level-2 automated vehicle technology. Safety Science, 130, 104845.
- Yang, S., Wilson, K. M., Roady, T., Kuo, J., & Lenné, M. G. (2020). Evaluating Driver Features for Cognitive Distraction Detection and Validation in Manual and Level 2 Automated Driving. Human Factors. https://doi.org/10.1177/0018720820964149