Advanced Autonomous Vehicles Require Extensive Data Annotation

Category: AI insights

Published date: 16.02.2023

Read time: 5 min

The future of driving is not only about developing electric vehicles, but also about developing autonomous ones. Imagine hopping into your driverless car in the morning: since it’s a weekday, it knows you want to go to the office unless you instruct it otherwise. When it drops you off, the vehicle drives back home instead of you paying for parking. While a vehicle with such capabilities is still some way off, we are already starting to see what industry professionals call Level 4 automation.

In this article, we will look at the various levels of automation, an example of a Level 4 autonomous vehicle that may soon be on the road, and the types of data annotation required to train advanced autonomous vehicles.

What Are the Different Levels of Vehicle Automation, and What Do They Mean?

To set agreed-upon standards early in the transition to autonomous vehicles, the Society of Automotive Engineers (SAE) developed a classification system that defines the degree of driving automation a car and its equipment may offer. Ranging from levels zero to five, the driving automation spectrum begins with vehicles without this technology and ends with entirely self-driving vehicles.

In short, the SAE defined six levels of automation, from 0 to 5:

  • Level 0 – No Driving Automation
  • Level 1 – Driver Assistance
  • Level 2 – Partial Driving Automation
  • Level 3 – Conditional Driving Automation
  • Level 4 – High Driving Automation
  • Level 5 – Full Driving Automation

As we can see, automated driving is far from an all-or-nothing proposition. Automakers are incrementally adding active safety and self-driving features to their vehicles. These features generally fall into the logical groups listed above, based on how they combine acceleration and braking (called longitudinal control) and steering (called lateral control).
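To make the grouping concrete, here is a minimal, illustrative sketch in Python of how the levels relate to the two control axes. The enum and capability table below are our own simplification for illustration, not an official SAE artifact.

    from enum import IntEnum

    class SAELevel(IntEnum):
        """SAE driving automation levels (0-5)."""
        NO_AUTOMATION = 0
        DRIVER_ASSISTANCE = 1
        PARTIAL_AUTOMATION = 2
        CONDITIONAL_AUTOMATION = 3
        HIGH_AUTOMATION = 4
        FULL_AUTOMATION = 5

    # Rough, illustrative mapping of each level to the control axes the system
    # can handle (longitudinal = accelerate/brake, lateral = steer) and whether
    # a human driver must remain ready to take over.
    LEVEL_CAPABILITIES = {
        SAELevel.NO_AUTOMATION:          {"longitudinal": False, "lateral": False, "driver_required": True},
        SAELevel.DRIVER_ASSISTANCE:      {"longitudinal": True,  "lateral": False, "driver_required": True},   # or lateral only
        SAELevel.PARTIAL_AUTOMATION:     {"longitudinal": True,  "lateral": True,  "driver_required": True},
        SAELevel.CONDITIONAL_AUTOMATION: {"longitudinal": True,  "lateral": True,  "driver_required": True},   # fallback-ready driver
        SAELevel.HIGH_AUTOMATION:        {"longitudinal": True,  "lateral": True,  "driver_required": False},  # within a defined domain
        SAELevel.FULL_AUTOMATION:        {"longitudinal": True,  "lateral": True,  "driver_required": False},
    }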

Although many of the technological components for an artificially intelligent car exist today, regulations and legal battles mean Level 5 vehicles are probably still many years away. That said, we may already start seeing Level 4 automation. Let’s take a look at an interesting example of a Level 4 autonomous vehicle we might see on the road soon.

Level 4 Vehicle Tested in Japan

The central Japan town of Eiheiji, known for its ancient Zen Buddhist temple in Fukui Prefecture, is set to be the first place in the country to adopt a new type of self-driving vehicle, hoping to pioneer investment in what could become an important future technology. According to Japan’s Transport Ministry, the autonomous vehicles are slated to begin operating in the town from fiscal 2023, which starts in April, the first time authorities in Japan will give the go-ahead for such a project.

In April, the central government will permit Level 4 autonomous driving under certain conditions, such as designated routes and remote monitoring. Because the vehicles will operate themselves, the cameras and communication equipment they use need to be further enhanced to ensure safety. The government has set a target of launching Level 4 automated transit services in more than 40 areas nationwide by fiscal 2025.

What Types of Data Annotation Are Needed to Develop Level 4 Automation?

One of the main types of data annotation needed for advanced autonomous vehicle development is 3D point cloud annotation. LiDAR sensors mounted on the vehicle send out pulses of light that bounce off objects and return to the sensor. The longer a pulse takes to return, the farther away the object is. Together, these measurements create a 3D point cloud, a digital representation of how the AI sees the physical world.
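To illustrate the time-of-flight principle, here is a minimal Python sketch. The function names and simplified sensor geometry are our own assumptions for illustration, not any particular LiDAR vendor’s API.

    import math

    SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

    def distance_from_time_of_flight(round_trip_time_s: float) -> float:
        """Convert a LiDAR pulse's round-trip time into a one-way distance.

        The pulse travels to the object and back, so the one-way range is
        half of the total distance covered at the speed of light.
        """
        return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2.0

    def point_from_return(round_trip_time_s: float,
                          azimuth_rad: float,
                          elevation_rad: float) -> tuple[float, float, float]:
        """Turn one LiDAR return (time plus beam angles) into an (x, y, z) point.

        Collecting millions of such points per second produces the 3D point
        cloud that annotators later label.
        """
        r = distance_from_time_of_flight(round_trip_time_s)
        x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
        y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
        z = r * math.sin(elevation_rad)
        return (x, y, z)

    # Example: a return that takes about 200 nanoseconds corresponds to an
    # object roughly 30 metres away.
    print(distance_from_time_of_flight(200e-9))  # ~29.98 m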

This 3D point cloud is then annotated with techniques such as polylines, which trace low-intensity ground-truth features like curbs, road pavement, and lane marks. Semantic segmentation is also needed: objects are classified and given additional attributes that provide deeper insight, such as distinguishing smaller lanes for cars. Depending on the specifications of the project, other types of image and video annotation may also be required, including tagging, 2D/3D bounding boxes, and polygon annotation.
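As a rough illustration of what such labels can look like in practice, here is a hypothetical Python schema. The class and field names are our own assumptions for illustration, not the format of any specific annotation tool.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Polyline3D:
        """An ordered run of (x, y, z) vertices tracing a curb, lane mark, etc."""
        label: str                                            # e.g. "curb", "lane_mark"
        points: list[tuple[float, float, float]] = field(default_factory=list)

    @dataclass
    class BoundingBox3D:
        """An oriented cuboid around an object in the point cloud."""
        label: str                                            # e.g. "car", "pedestrian"
        center: tuple[float, float, float]
        size: tuple[float, float, float]                      # length, width, height in metres
        yaw: float                                            # rotation around the vertical axis, radians

    @dataclass
    class SemanticSegmentation:
        """Per-point class labels: class_ids[i] classifies the i-th cloud point."""
        class_names: list[str]                                # index -> human-readable class
        class_ids: list[int]                                  # one entry per point in the cloud

    @dataclass
    class FrameAnnotation:
        """All labels attached to a single LiDAR sweep or camera frame."""
        frame_id: str
        polylines: list[Polyline3D] = field(default_factory=list)
        boxes: list[BoundingBox3D] = field(default_factory=list)
        segmentation: Optional[SemanticSegmentation] = None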

Trust Mindy Support With All of Your Data Annotation Needs

Mindy Support is a global provider of data annotation services, trusted by Fortune 500 and GAFAM companies, as well as innovative startups. With 10 years of experience under our belt and offices and representatives in Cyprus, Poland, Romania, The Netherlands, India, UAE, and Ukraine, Mindy Support’s team now stands strong with 2000+ professionals helping companies with their most advanced data annotation challenges. 
