Training an AI System to Categorize Game Reviews

Services provided: Text Annotation

Published date: 20.03.2023

Read time: 2 min

Client Profile

Industry: Media & Entertainment
Location: Canada
Size: 1,001-5,000

Company Bio

Our partner’s client, operating in more than 30 countries, is on a mission to enrich players’ lives with original and memorable gaming experiences. They develop iconic franchises that captivate millions around the world.

Project Overview

We worked with the client indirectly, through our partner. The partner’s customer had built an AI model to analyze feedback on their game products. To improve gamer satisfaction, they wanted the reviews categorized into four groups: positive, neutral, negative, and unclear. This was the request our partner brought to Mindy Support to achieve the client’s goals.

Business Problem

The partner’s client had an existing AI system in place that could monitor game reviews from its users. However, the system needed further training to extract deeper insights from the feedback. They had a large training dataset that needed to be annotated, consisting of 19 packs of 900 files each (17,100 files in total). All of the reviews needed to be grouped into four categories: positive, negative, neutral, and unclear. The annotations had to be of high quality so the system could categorize reviews more accurately.
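The case study does not describe the review files themselves. As a minimal sketch, assuming a JSON-lines layout with one review per record (the field names, ID scheme, and sample text below are hypothetical), an annotated record using the four labels might look like this:

```python
import json
from dataclasses import asdict, dataclass
from enum import Enum


class ReviewLabel(Enum):
    """The four categories used in this project."""
    POSITIVE = "positive"
    NEGATIVE = "negative"
    NEUTRAL = "neutral"
    UNCLEAR = "unclear"


@dataclass
class AnnotatedReview:
    review_id: str  # hypothetical ID scheme: pack number + file number
    text: str
    label: ReviewLabel


# Hypothetical example record, serialized as one JSON line.
sample = AnnotatedReview(
    review_id="pack01-0001",
    text="Great level design, but matchmaking takes forever.",
    label=ReviewLabel.UNCLEAR,  # mixed signals: neither clearly positive nor negative
)
record = asdict(sample)
record["label"] = sample.label.value
print(json.dumps(record))
# {"review_id": "pack01-0001", "text": "Great level design, ...", "label": "unclear"}
```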

Why Mindy Support

Mindy Support has a long-standing relationship with this partner. Because we had proven ourselves to be a reliable data annotation provider, the partner did not hesitate to introduce us to the client and involve us in this project.

Solutions Delivered to the Client

Mindy Support hired a team of three people to generate sample game reviews and annotate the resulting data, using methods such as entity annotation, entity linking, and sentiment analysis, among others. As mentioned earlier, the volume of data was quite high: each team member annotated 17,100 files. Thanks to the experience and professionalism of our team, we were able to complete the annotation work in a couple of months. In total, we completed more than 900 hours of data annotation and reached a consensus score of 90%.
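The case study reports a 90% consensus score but does not define the metric. Since the per-annotator file count matches the dataset size (19 × 900 = 17,100), it appears all three team members labeled the same files, which is what makes a consensus measure possible. A minimal sketch, assuming consensus means per-file majority agreement (the annotator names and data below are hypothetical):

```python
from collections import Counter


def consensus_score(files: list[dict[str, str]]) -> float:
    """Share of files where a strict majority of annotators agree on a label.

    Each dict maps an annotator to the label they assigned to one file.
    Note: this metric definition is an assumption; the project's actual
    consensus formula is not described in the case study.
    """
    if not files:
        return 0.0
    agreed = 0
    for labels in files:
        most_common_count = Counter(labels.values()).most_common(1)[0][1]
        if most_common_count > len(labels) / 2:  # e.g. at least 2 of 3 agree
            agreed += 1
    return agreed / len(files)


# Hypothetical labels from three annotators on two files:
sample = [
    {"annotator_1": "positive", "annotator_2": "positive", "annotator_3": "neutral"},
    {"annotator_1": "negative", "annotator_2": "unclear", "annotator_3": "neutral"},
]
print(consensus_score(sample))  # 0.5: majority agreement on 1 of 2 files
```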

Key Results

  • 17,100 files annotated by each team member
  • More than 900 hours of data annotation completed in total
  • Consensus score of 90%
