New ADAS Technology Can Detect a Driver’s Comfort Level

Category: AI Insights

Published date: 30.08.2023

Read time: 6 min

As researchers stride toward Level 5 automation (fully autonomous vehicles), today's AI-powered vehicles still rely on a combination of autonomous driving and human driver assistance. Because of this, researchers are keenly interested in what makes a person uncomfortable enough while riding in an autonomous vehicle that they decide to take over the steering wheel. New ADAS technology could help answer that question by utilizing computer vision cameras. In this article, we will take a look at this new technology and the data annotation required to create it.

What are the Various Levels of Automation?

The Society of Automotive Engineers (SAE) defines six levels of driving automation, ranging from Level 0 (fully manual) to Level 5 (fully autonomous); these levels have been adopted by the U.S. Department of Transportation. Levels 1 and 2 are referred to as assisted driving technology: the system assists the human in the driving process, but a driver is still required to make judgments and operate the vehicle. Levels 3-5 are referred to as automated driving technology, in which the vehicle itself can intervene, make decisions, and control operations.

Earlier this year, Mercedes-Benz became the first automaker authorized to sell vehicles equipped with SAE Level 3 autonomous technology in the United States. The system, currently authorized for use only in Nevada, will be offered on 2024 Mercedes-Benz EQS and S-Class cars. To comprehend its surroundings, it uses a number of sensors, including LiDAR, cameras, microphones, and road-wetness detectors. So what would it take to get from Level 3 to Level 4 automation, where the vehicle drives you from one point to another without any assistance from the driver?

To answer this question, researchers first need to know in which situations human drivers feel comfortable letting the car do all the work, and when they would rather take over. This is where the new technology comes in.

Detecting Drivers’ Comfort Levels Behind the Wheel

The new system is built to anticipate the moment when something about the ride tempts you to take the wheel. That moment can come when another driver cuts you off, when the self-driving car follows a little too closely, when the vehicle is moving faster than you would like, when the weather is bad, when the streets are full of pedestrians, or in a variety of other situations. GM has filed a patent application for a system it calls “gaze” technology in an effort to reduce the number of takeover incidents during Level 3 or higher autonomous driving. According to GM, the system can tell whether a person is uncomfortable by using interior cameras to count the number of times they look in the mirror or over their shoulder.

If the car can detect when someone is uneasy, the autonomous technology can adjust the way it drives so the driver doesn't feel the need to take control. For example, if a self-driving car traveling in the rain notices that the driver is looking around more frequently than usual, it can learn to slow down on wet roads going forward. The gaze technology would operate on three levels, with the third level corresponding to the point at which most people would take over driving the car.
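GM's filing does not disclose implementation details, but the core idea, counting gaze shifts over a time window and mapping that rate to one of three levels, can be sketched in a few lines of Python. Everything below (the window length, the thresholds, and the class and method names) is a hypothetical illustration, not GM's actual system:

```python
from collections import deque
import time

# Hypothetical sketch of a three-level discomfort estimator based on how
# often the driver glances at the mirrors or over their shoulder.
# The window length and thresholds are illustrative guesses, not values
# taken from GM's patent filing.

class GazeComfortEstimator:
    WINDOW_SECONDS = 60          # consider glances over the last minute
    LEVEL_THRESHOLDS = (4, 8)    # glances/min separating levels 1/2 and 2/3

    def __init__(self):
        self._glances = deque()  # timestamps of detected glance events

    def record_glance(self, timestamp: float) -> None:
        """Call whenever the in-cabin camera detects a mirror/shoulder glance."""
        self._glances.append(timestamp)

    def comfort_level(self, now: float) -> int:
        """Return 1 (relaxed), 2 (uneasy), or 3 (likely to take over)."""
        # Drop glances that have fallen out of the rolling window.
        while self._glances and now - self._glances[0] > self.WINDOW_SECONDS:
            self._glances.popleft()
        rate = len(self._glances)
        if rate <= self.LEVEL_THRESHOLDS[0]:
            return 1
        if rate <= self.LEVEL_THRESHOLDS[1]:
            return 2
        return 3

# Usage: a driving policy could slow down or increase following distance
# whenever comfort_level() reaches 3.
estimator = GazeComfortEstimator()
estimator.record_glance(time.time())
print(estimator.comfort_level(time.time()))
```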

According to GM, beyond learning to increase the driver's comfort, the technology could also be used to understand specifics about individual drivers. Although the automaker avoided going into great detail about what its autonomous vehicles would do with these learned behaviors, everything is intended to “enhance the time spent in the vehicle.”

What Types of Data Annotation are Required to Train This Technology?

Since the computer vision cameras inside the car need to constantly monitor the driver’s eye and head movements, keypoint annotation will be necessary. This is where specific landmarks on objects in images or videos are labeled to identify their position, shape, orientation, or movement. Multiple key points can be connected to form larger structures known as keypoint skeletons.
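To make this concrete, here is what a single annotated frame might look like in the widely used COCO keypoint convention, where each keypoint is an (x, y, visibility) triplet and a skeleton lists which keypoints are connected. The specific landmark set for a driver-monitoring task is our assumption for illustration:

```python
# One annotated frame in COCO-style keypoint format (illustrative).
# Each keypoint is an (x, y, v) triplet: pixel coordinates plus a visibility
# flag (0 = not labeled, 1 = labeled but occluded, 2 = labeled and visible).
annotation = {
    "image_id": 1024,
    "category_id": 1,  # "driver_face" (hypothetical category for this task)
    "num_keypoints": 5,
    "keypoints": [
        412, 230, 2,   # left eye
        470, 228, 2,   # right eye
        441, 260, 2,   # nose tip
        405, 295, 2,   # left mouth corner
        478, 293, 1,   # right mouth corner (occluded)
    ],
}

# The keypoint names and skeleton edges live in the category definition;
# the connected pairs of keypoints form the "keypoint skeleton".
category = {
    "id": 1,
    "name": "driver_face",
    "keypoints": ["left_eye", "right_eye", "nose", "mouth_left", "mouth_right"],
    "skeleton": [[1, 3], [2, 3], [3, 4], [3, 5]],  # 1-indexed keypoint pairs
}
```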

Depending on the application, keypoints can represent a variety of parts of the image, including corners, edges, or specific features. For instance, they can identify the eyes, mouth, and nose in facial recognition, while in human pose estimation, keypoints can stand in for the body's joints.
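As a sketch of how such facial keypoints are extracted in practice, the snippet below uses the open-source MediaPipe Face Mesh solution, one common choice for this kind of task (the article does not prescribe a specific library), to pull eye-region landmarks from a single frame. The input file name is a placeholder:

```python
import cv2
import mediapipe as mp

# Sketch: extract facial keypoints with MediaPipe Face Mesh (one common
# open-source option; the article does not name a specific library).
image = cv2.imread("driver_frame.jpg")         # hypothetical input frame
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)   # MediaPipe expects RGB input

with mp.solutions.face_mesh.FaceMesh(
    static_image_mode=True,
    max_num_faces=1,
    refine_landmarks=True,  # adds iris landmarks, useful for gaze cues
) as face_mesh:
    results = face_mesh.process(rgb)

if results.multi_face_landmarks:
    landmarks = results.multi_face_landmarks[0].landmark
    h, w = image.shape[:2]
    # Landmark coordinates are normalized; scale them to pixels.
    # With refine_landmarks=True, the iris centers are indices 468 and 473.
    for name, idx in (("left_iris", 468), ("right_iris", 473)):
        lm = landmarks[idx]
        print(name, int(lm.x * w), int(lm.y * h))
```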

Trust Mindy Support With All of Your Data Annotation Needs

Mindy Support is a global provider of data annotation services and is trusted by Fortune 500 and GAFAM companies. With more than ten years of experience under our belt and offices and representatives in Cyprus, Poland, Romania, the Netherlands, India, the UAE, and Ukraine, Mindy Support's team now stands strong with 2,000+ professionals helping companies with their most advanced data annotation challenges.
