
The analysis of drivers' gazes using eye-tracker data and image processing techniques

Seminar
Day 2 (7 Sep 2023), Session 4, Data Analysis, 10:00 - 11:30

Status
Accepted, documents submitted

Submitted by / Abstract owner
Furkan Aydin

Authors
Aydin Furkan, Mussone Lorenzo and Caruso Giandomenico

Short abstract
The paper presents a method to analyse 2D images collected by a world camera in order to track drivers' gazes on the road, overcoming the lack of 3D data. Experiments are conducted in a driving simulator using eye-tracking equipment.

Abstract
Driver behaviour is one of the major contributing causes of traffic crashes and has therefore been the focus of numerous studies over the years. Thanks to driving simulators, this research can now be carried out in a completely safe environment while reproducing a variety of potentially dangerous situations. In any case, since numerous studies show that the driving context may influence drivers' gaze behaviour, it is crucial to examine drivers' eye movements as an indicator of their attention and readiness to operate a motor vehicle.
To track drivers' behaviour while they are driving, this article examines where the gaze falls in the simulated road scenario. When experiments are conducted in a driving simulator with an eye-tracker, at least two problems arise: the driver looks at points on the screens, which lie on a flat surface, so no information about image depth is available; and the screen image changes and moves with the simulation, so the same point on the screen may represent different elements of the scene, especially when the vehicle is negotiating a curved stretch.
We propose a way to resolve these problems using an image processing technique. By identifying the gaze position in relation to the road, it allows us to determine whether each glance falls inside or outside the road segment. To accomplish this, the RGB images (frames) from the eye-tracker's world camera are converted into black-and-white edge images using the Canny filter, which determines object contours by detecting intensity changes across the image.
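As a concrete illustration, the following Python sketch shows how such an edge image could be obtained with OpenCV's Canny detector; the file name, blur size, and thresholds are our illustrative assumptions, not values reported in the paper.

    import cv2

    # Load one RGB frame from the eye-tracker's world camera
    # ("world_camera_frame.png" is a hypothetical file name).
    frame = cv2.imread("world_camera_frame.png")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # Canny operates on a single channel
    gray = cv2.GaussianBlur(gray, (5, 5), 0)        # smooth to suppress texture noise
    # Binary (black-and-white) edge map of the scene contours;
    # the thresholds 50/150 are assumed, not taken from the paper.
    edges = cv2.Canny(gray, 50, 150)
    cv2.imwrite("edges.png", edges)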
The information about the gaze position in the scene is then extracted from these edge images by applying a window around the gaze point. When not only the road but also buildings are present alongside the road in the simulation, the lack of 3D information is overcome by projecting the gaze over their facades onto the plane of the road. It is worth noting that the proposed technique can also be applied to driving images collected in the real world.
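One plausible way to realise the windowing step is sketched below: a horizontal window is placed around the gaze point in the edge image, and the gaze is classified as inside the road if contour pixels flank it on both sides. Both the window size and this classification rule are assumptions made for illustration; the abstract does not spell out the exact criterion.

    import numpy as np

    def gaze_inside_road(edges: np.ndarray, gaze_x: int, gaze_y: int,
                         half_width: int = 200) -> bool:
        """Return True if edge pixels flank the gaze point on both sides
        within a horizontal window, suggesting the gaze lies between the
        road edges (assumed rule; half_width is an arbitrary choice)."""
        h, w = edges.shape
        y = int(np.clip(gaze_y, 0, h - 1))
        lo = max(gaze_x - half_width, 0)
        hi = min(gaze_x + half_width, w - 1)
        row = edges[y]
        left = row[lo:gaze_x].any()           # contour pixel left of the gaze
        right = row[gaze_x + 1:hi + 1].any()  # contour pixel right of the gaze
        return bool(left and right)

Applied frame by frame, a test of this kind yields the share of frames in which each driver's gaze falls inside the road, which is the quantity the results below compare across curved and rectilinear segments.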
The driving simulator used in the experiment consists of a custom-built fixed seating buck, a set of vehicle controls (a force-feedback steering wheel, a gear shifter, and pedals), and three 32" full-HD monitors providing a 175-degree field of view. The road scenarios are simulated with the IPG Automotive CarMaker software.
The method was tested on a sample of four drivers. The findings show wide variation among drivers, as was to be expected, but they also reveal a clear difference between driving in curved and rectilinear segments: in curved sections the gaze typically remains within the road edges, whereas in rectilinear sections it frequently falls outside them. The level of detail of the simulated scenarios also affects the gaze in rectilinear segments: the higher the detail level, the greater the number of frames in which the driver looks outside the road. We request that our paper undergo peer review.

Programme committee
Young Researchers' and Practitioners' Forum