
How Machine Learning in Automotive Makes Self-Driving Cars a Reality

Automotive artificial intelligence is rapidly taking over tasks from human drivers by enabling self-driving cars that use sensors to gather data about their surroundings. But how do self-driving cars interpret that data? This is the biggest use case for machine learning in the automotive industry.

How self-driving cars make decisions

Driverless cars identify objects, interpret situations, and make decisions using object detection and object classification algorithms: they first detect an object, then classify it, and then decide how to respond to it.

Diversity and redundancy

In a self-driving car, machine learning is accomplished through a fusion of many overlapping algorithms, so that no single failure compromises safety. These algorithms interpret road signs, identify lanes, and recognize crossroads.

How does a self-driving car see?

The three major sensors used by self-driving cars work together much like human eyes and the brain. These sensors are cameras, radar, and lidar. Together, they give the car a clear view of its environment and help it identify the location, speed, and 3D shape of nearby objects. In addition, self-driving cars are now being built with inertial measurement units that track acceleration and help estimate the vehicle's position.

Reliable cameras

Self-driving cars carry cameras at every angle to maintain a full view of their surroundings. Some cameras have a wide field of view of roughly 120 degrees, while others use a narrower field of view for long-distance vision. Fish-eye cameras provide the extremely wide view needed for parking.

Radar detectors

Radar detectors augment the cameras at night or whenever visibility is poor. They emit pulses of radio waves, and the reflections that come back reveal the speed and location of surrounding objects.

Laser focus

Lidar sensors measure distance with pulsed lasers, giving driverless cars a 3D picture of their surroundings with richer information about shape and depth.

Together, these sensors provide the car with the data it needs for a clear view of its surroundings.
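To make the idea of combining these sensor streams concrete, here is a minimal sketch in Python. The `Detection` class, its fields, and the naive `fuse` function are all illustrative assumptions, not how any particular production stack fuses data; real systems use far more sophisticated tracking and filtering.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A single object estimate from one sensor (all fields are illustrative)."""
    sensor: str          # "camera", "radar", or "lidar"
    label: str           # e.g. "vehicle", "pedestrian", or "unknown"
    distance_m: float    # range to the object in metres
    speed_mps: float     # relative speed (radar is the usual source)

def fuse(detections: list[Detection]) -> dict:
    """Naively merge per-sensor detections of one object into a single view."""
    return {
        # cameras are best at telling what the object is
        "label": next((d.label for d in detections if d.sensor == "camera"), "unknown"),
        # take the closest range estimate across sensors
        "distance_m": min(d.distance_m for d in detections),
        # radar gives the most reliable relative speed
        "speed_mps": next((d.speed_mps for d in detections if d.sensor == "radar"), 0.0),
    }

print(fuse([
    Detection("camera", "vehicle", 32.0, 0.0),
    Detection("radar",  "unknown", 31.4, -3.2),
    Detection("lidar",  "unknown", 31.6, 0.0),
]))
```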

How automotive Artificial Intelligence algorithms are used for self-driving cars

To enable self-driving cars to make decisions, machine learning algorithms are trained on real-life datasets.

Supervised vs unsupervised learning

Machine learning has two main learning models: supervised and unsupervised. With unsupervised learning, an algorithm receives unlabeled data and no instructions on how to process it, so it has to find structure in the data on its own.

With the supervised model, an algorithm is trained on data that comes with the correct answers attached. This is the preferred approach for self-driving cars: the algorithm learns from a fully labeled dataset, which makes supervised learning especially useful where classification is concerned.
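As a minimal sketch of supervised classification, the snippet below trains a classifier on a synthetic labeled dataset using scikit-learn. The data is a placeholder; a real automotive pipeline would train on millions of annotated sensor frames rather than `make_classification` output.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a labeled dataset of sensor-derived features.
X, y = make_classification(n_samples=1000, n_features=10, n_classes=3,
                           n_informative=5, random_state=0)

# Hold out part of the labeled data to check how well the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```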

Machine learning algorithms used by self-driving cars

SIFT (scale-invariant feature transform) for feature extraction

SIFT detects distinctive keypoints in an image and describes them in a way that stays stable under changes in scale and rotation. For a triangular road sign, for example, the three corners of the sign become keypoint features, and the car can then recognize the sign from those points even at different distances or angles.
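A minimal sketch of SIFT feature extraction with OpenCV is shown below; the image path is a placeholder.

```python
import cv2

# Load a road-scene image in greyscale (path is illustrative).
image = cv2.imread("road_sign.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints (corners of a sign, for example) and compute descriptors
# that stay recognisable when scale and rotation change.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(image, None)

print(f"{len(keypoints)} keypoints, descriptor shape: {descriptors.shape}")
```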

AdaBoost for data classification

This algorithm classifies data to boost the learning process and performance of the vehicle. It combines many weak, low-performing classifiers into a single high-performing classifier for better decision-making.
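The scikit-learn sketch below shows the idea on synthetic data: by default, AdaBoost boosts many shallow decision trees (the weak classifiers) into one strong classifier. The dataset is a placeholder for real extracted image features.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

# Synthetic stand-in for extracted image features with class labels.
X, y = make_classification(n_samples=500, n_features=20, random_state=1)

# 100 boosting rounds: each round re-weights the samples the previous
# weak classifiers got wrong, then combines all of them into one model.
clf = AdaBoostClassifier(n_estimators=100, random_state=1).fit(X, y)

print("training accuracy:", clf.score(X, y))
```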

TextonBoost for object recognition

The TextonBoost algorithm does a similar boosting job to AdaBoost, but it combines cues from shape, context, and appearance and learns with textons (micro-structures in images), grouping visual data that shares common features.
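This is not the full TextonBoost algorithm, but the sketch below illustrates what a texton is under simple assumptions: per-pixel responses to a tiny filter bank are clustered, the cluster centres act as textons, and each pixel is mapped to its nearest texton.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import KMeans

# Toy greyscale image; a real pipeline would use camera frames.
rng = np.random.default_rng(0)
image = rng.random((64, 64))

# A tiny filter bank: the image blurred at several scales.
responses = np.stack([gaussian_filter(image, sigma=s) for s in (1, 2, 4)], axis=-1)

# Cluster per-pixel filter responses; the cluster centres are the "textons",
# and each pixel is assigned the index of its nearest texton.
pixels = responses.reshape(-1, responses.shape[-1])
texton_map = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(pixels)
print(texton_map.reshape(image.shape)[:4, :4])
```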

Histogram of oriented gradients (HOG)

HOG divides an image into small regions called cells and summarizes the edge (gradient) directions inside each one, producing a description of an object's shape that a classifier can use to recognize it.
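A minimal sketch using scikit-image follows; the random array stands in for a camera crop of a candidate object.

```python
import numpy as np
from skimage.feature import hog

# Placeholder image; a real detector would use a camera crop of a candidate object.
image = np.random.rand(128, 64)

# Split the image into 8x8-pixel cells, build a 9-bin histogram of gradient
# directions per cell, and concatenate the histograms into one feature vector.
features = hog(image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))
print("HOG feature vector length:", len(features))
```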

YOLO (You Only Look Once)

This algorithm detects and classifies objects such as people, trees, and vehicles in a single pass over the image, which is what makes it fast enough for driving. Each detected object is assigned to a class along with a bounding box, helping the car easily identify and track what is around it.
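As a sketch, one popular open-source implementation is the `ultralytics` package; the model file and image path below are placeholders, and the pretrained weights are downloaded on first use.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")           # small pretrained model (placeholder choice)
results = model("street_scene.jpg")  # a single forward pass over the whole image

# Each detection comes with a class and a confidence score.
for box in results[0].boxes:
    print(model.names[int(box.cls)], float(box.conf))
```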

Wrap-up

Machine learning algorithms make it possible for self-driving cars to exist. They allow a car to collect data about its surroundings from cameras and other sensors, interpret it, and decide what actions to take. Machine learning even allows cars to learn to perform these tasks as well as (or even better than) humans.

This leads to the reasonable conclusion that machine learning algorithms and autonomous vehicles are the future of transportation.

At Mindy Support, we agree. Over the years, we’ve built teams to annotate data for automotive AI solutions. If you’re creating an automotive AI system, send a message to [email protected] or click the build me a team button.
