Stamping Out Bias at Every Stage of AI Development

Category: AI Insights

Published date: 23.12.2020

Read time: 6 min

As AI plays an ever-bigger role in our everyday lives, many people are starting to worry about bias creeping into the AI development process. This usually happens because the datasets used to train these systems are biased themselves. Researchers are aware of this problem and have been working for quite a while to resolve it. Since the pace of AI development will only continue to increase, it is important to shine a spotlight on the methods researchers can use to prevent bias in the early stages of development, before flaws in the final product become too expensive or even impossible to fix. Let's take a look at how researchers can mitigate bias at various stages, starting with pre-processing.

Rooting Out Bias During Pre-Processing 

The first thing we need to do is take a close look at the data that will be used to train the system. This is one of the easiest ways bias can come into play, because it is very easy to unintentionally overlook a certain group of people. When the model is later applied to that group, you are bound to get inaccurate results. The training data also needs to be labeled by human data annotators to teach the system to accurately identify people or objects. This is another human factor that can introduce bias into the system, which is why you should make sure the annotators are a diverse group of individuals. This will prevent unconscious biases from creeping in. A quick representation audit of the dataset, as sketched below, is a good place to start.
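Here is a minimal sketch of such an audit in Python, assuming a pandas DataFrame with hypothetical "gender" and "age_group" columns standing in for whichever demographic attributes matter in your project. A group with a near-zero share is a warning sign that the model may perform poorly on it.

    import pandas as pd

    # Hypothetical annotation dataset; the columns are stand-ins for
    # the demographic attributes relevant to your use case.
    df = pd.DataFrame({
        "gender":    ["F", "M", "M", "F", "M", "M"],
        "age_group": ["18-25", "26-40", "26-40", "41-65", "18-25", "26-40"],
    })

    # Share of each group in the data.
    for col in ["gender", "age_group"]:
        print(df[col].value_counts(normalize=True).round(2))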

One of the techniques used to root out bias at this stage is called counterfactual fairness. Here, researchers try to ensure fairness in an algorithm's decision process by checking that its outcomes are the same in the actual world and in a counterfactual one where individuals belong to different demographic groups. For example, suppose two people, John and Ron, are competing for admission to a university. They both have similar GPAs and test scores. If we were to swap their demographic information, would the model process the data in the same way? Since this is a potentially life-changing decision, it is important to ensure that there are no inherent biases in the system.
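As a rough illustration, the sketch below trains a toy admissions classifier on synthetic data and flips a hypothetical binary demographic column to count how many decisions change. Strictly speaking, counterfactual fairness is defined with respect to a causal model of how demographics influence the other features; simply swapping the attribute, as done here, is a common but simplified approximation.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic applicant data: GPA, test score, and a binary
    # demographic attribute (all hypothetical stand-ins).
    n = 1000
    gpa = rng.uniform(2.0, 4.0, n)
    test = rng.uniform(400, 1600, n)
    demo = rng.integers(0, 2, n)
    X = np.column_stack([gpa, test, demo])
    y = (gpa + test / 400 > 5.5).astype(int)  # synthetic admission label

    clf = LogisticRegression(max_iter=1000).fit(X, y)

    # Flip the demographic column and check whether decisions change.
    X_cf = X.copy()
    X_cf[:, 2] = 1 - X_cf[:, 2]
    flipped = (clf.predict(X) != clf.predict(X_cf)).mean()
    print(f"Decisions that change when demographics are swapped: {flipped:.1%}")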

Getting Rid of Bias During In-Processing 

In-processing also offers some interesting techniques for removing bias. One of them is called adversarial training. With this method, you build two models. The first is geared towards producing your target outcome based on the training data you prepared during the pre-processing stage. The second is the adversarial model, which tries to predict the sensitive attribute from the first model's output. This is very useful because it guides researchers to the specific modifications that need to be made to the first model until the adversary's predictive power is weakened to the point where it can no longer recover the sensitive attribute.
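The sketch below shows a minimal version of this setup in PyTorch on synthetic data. The network sizes, the fairness weight alpha, and the training schedule are hypothetical choices made for illustration: the predictor is penalized whenever the adversary can recover the sensitive attribute from its output.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Synthetic stand-in data: 10 features, a binary target y, and a
    # binary sensitive attribute s correlated with one feature.
    X = torch.randn(512, 10)
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).float().unsqueeze(1)
    s = (X[:, 1] > 0).float().unsqueeze(1)

    predictor = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
    adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

    pred_opt = torch.optim.Adam(predictor.parameters(), lr=1e-2)
    adv_opt = torch.optim.Adam(adversary.parameters(), lr=1e-2)
    bce = nn.BCEWithLogitsLoss()
    alpha = 1.0  # weight of the fairness penalty (hypothetical)

    for step in range(500):
        # 1) Train the adversary to recover the sensitive attribute
        #    from the predictor's output.
        adv_opt.zero_grad()
        adv_loss = bce(adversary(predictor(X).detach()), s)
        adv_loss.backward()
        adv_opt.step()

        # 2) Train the predictor to fit the target while fooling the
        #    adversary, i.e. maximizing the adversary's loss.
        pred_opt.zero_grad()
        y_logit = predictor(X)
        loss = bce(y_logit, y) - alpha * bce(adversary(y_logit), s)
        loss.backward()
        pred_opt.step()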

Stamping Out Bias in Post-Processing 

Finally, we come to the post-processing stage, which is the researchers' last chance to get rid of any remaining bias. One of the most popular techniques used at this stage is called reject-option-based classification. In this approach, researchers assume that most discrimination occurs where the model is least certain of its prediction; in other words, the decision is close to 50/50 and too close to call. The technique exploits this low-confidence region of the classifier: by rejecting or overriding its predictions there, you can reduce the bias in the model's decisions and avoid potentially problematic outcomes.
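A minimal sketch of the idea on synthetic data, using scikit-learn, is shown below. The threshold theta and the binary sensitive attribute are hypothetical; inside the critical region below the threshold, the raw predictions are overridden so the favorable label goes to the unprivileged group.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Synthetic data plus a hypothetical binary sensitive attribute
    # (0 = unprivileged group, 1 = privileged group).
    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
    sensitive = np.random.default_rng(0).integers(0, 2, len(y))

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    proba = clf.predict_proba(X)[:, 1]

    # Confidence of the favored prediction; values near 0.5 fall
    # into the "too close to call" critical region.
    confidence = np.maximum(proba, 1 - proba)
    theta = 0.65  # rejection threshold (hypothetical)
    critical = confidence < theta

    # Keep confident decisions; inside the critical region, give the
    # favorable label to the unprivileged group and the unfavorable
    # one to the privileged group.
    labels = (proba >= 0.5).astype(int)
    labels[critical & (sensitive == 0)] = 1
    labels[critical & (sensitive == 1)] = 0
    print(f"{critical.mean():.1%} of predictions fell in the critical region")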

Mindy Support Is Assisting With the Development of New and Fair AI Products

Since the pre-processing stage requires human data annotators to label what the systems need to identify, you will need a diverse group of annotators. It can be difficult and expensive to assemble such a diverse group in your local area, which is why it is easier to outsource this job to Mindy Support. We have a successful track record of delivering data annotation projects for various industries and companies of all sizes. Regardless of the size or complexity of your project, we will be able to assemble a team that fits your needs. We have rigorous QA processes in place to make sure that all of the annotation work is done on time and right the first time, eliminating the need to redo any of the work and preventing costly delays.
