Ethical Issues in AI

Category: AI insights

Published date: 14.09.2020

Read time: 6 min

Artificial intelligence is already delivering significant benefits in healthcare, automotive, agriculture, and many other industries, but as the technology advances we also have to think about the ethical problems it presents. A lot of attention goes to concerns like job losses caused by automation or robots going rogue, while other pressing issues fall through the cracks. In this article, we will explore some of the ethical issues presented by AI that do not get the attention they deserve but will have to be addressed in the near future.


AI Generating Photos of Fake People

Companies are always looking to increase diversity in their advertising, and AI now offers them an interesting shortcut. Machines can generate photos of fake people that look so realistic that an average person cannot tell they are synthetic. Companies that produce such images approach advertisers and offer them a way to tick the diversity box: the advertiser simply specifies the race, age, gender, and any other characteristics they want the people in the advertisement to have, and the vendor delivers images of fake people matching that description. Of course, this is not real diversity.

Such AI systems are trained on thousands upon thousands of images of real people prepared with landmark annotation. A human annotator places data points around the facial features the machine needs to learn in each photo, and the annotated images are then fed into the system. The model learns all the ways human eyes, cheeks, mouths, and other features can look, and uses that knowledge to generate lifelike images of people who do not exist.
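To make the annotation step more concrete, here is a minimal sketch of what a single landmark record might look like. The field names, point counts, and coordinates are hypothetical and purely illustrative of the kind of data a human annotator produces, not any particular vendor's schema.

```python
# Hypothetical facial landmark annotation record (illustrative only).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LandmarkAnnotation:
    image_id: str
    # (x, y) pixel coordinates placed by a human annotator
    left_eye: List[Tuple[int, int]]
    right_eye: List[Tuple[int, int]]
    mouth: List[Tuple[int, int]]
    jawline: List[Tuple[int, int]]

annotation = LandmarkAnnotation(
    image_id="face_000123.jpg",
    left_eye=[(210, 180), (225, 175), (240, 180)],
    right_eye=[(300, 180), (315, 175), (330, 180)],
    mouth=[(250, 300), (270, 310), (290, 300)],
    jawline=[(180, 200), (200, 320), (270, 360)],
)

# Thousands of records like this form the training set from which a
# generative model learns how real facial features vary.
print(annotation.image_id, len(annotation.jawline), "jawline points")
```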

Lethal Autonomous Weapon Systems

The military conflicts going on overseas typically involve heavy urban warfare that causes a great deal of collateral damage. One of the biggest reasons for this is that it is difficult for a human soldier to identify an enemy target given all of the nuances and factors that must be considered in a war zone. Lethal autonomous weapon systems, or LAWS, could potentially change that. An algorithm would have to be developed that tells the machine what to look for and in which situations it may use lethal force.

For example, suppose a robot needs to be programmed to shoot at somebody holding a particular weapon. Since weapons come in various shapes and sizes, human data annotators would need to label all of the weapon types the computer must be able to identify, and the algorithm would rely on those labels to determine whether it is allowed to fire at a person wielding such a weapon. We are a long way from seeing this technology on the battlefield, but many countries are already developing such robots, and there are no rules from the United Nations or other world bodies governing the use of these weapons.
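As an illustration of the annotation side of such a project, here is a minimal sketch of what a labeled training frame might look like. The class names, file names, and JSON layout are assumptions invented for this example; they do not represent any real military dataset or specification.

```python
# Hypothetical bounding-box annotation for a weapon-detection dataset
# (illustrative assumption, not a real specification).
import json

labeled_frame = {
    "image_id": "patrol_cam_0042.jpg",
    "objects": [
        {
            "label": "rifle",            # class assigned by a human annotator
            "bbox": [412, 238, 96, 40],  # x, y, width, height in pixels
        }
    ],
}

print(json.dumps(labeled_frame, indent=2))
```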

AI Making Decisions

A lot of companies are using AI to make better business decisions, since machines can process more information than humans and surface deeper insights. However, this also poses ethical issues that need to be addressed. For example, Airbnb has patented an AI product that assesses potential guests based on their social media profiles and criminal records. The idea is to collect all of the information about a person available online and stitch it into a complete picture. Even though that online portrait may be incomplete, the computer would still make a judgment call about whether the person poses a risk.
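Here is a deliberately simplified sketch of how scattered online signals might be combined into a single risk score. The feature names and weights are invented for illustration; Airbnb's patented system is not public, and this is not its actual method.

```python
# Toy risk-scoring sketch: weighted sum of hypothetical signals.
from typing import Dict

# Illustrative weights for each signal (assumption)
WEIGHTS: Dict[str, float] = {
    "criminal_record_hits": 0.6,
    "negative_social_media_signals": 0.3,
    "inconsistent_identity_data": 0.1,
}

def risk_score(signals: Dict[str, float]) -> float:
    """Weighted sum of signals in [0, 1], clipped to [0, 1]."""
    score = sum(WEIGHTS.get(name, 0.0) * value for name, value in signals.items())
    return min(max(score, 0.0), 1.0)

guest = {
    "criminal_record_hits": 0.0,
    "negative_social_media_signals": 0.4,
    "inconsistent_identity_data": 1.0,
}
print(f"risk score: {risk_score(guest):.2f}")  # prints 0.22
```

Even a toy example like this shows the ethical problem: the score looks precise, but it is only as good as the incomplete and possibly inaccurate data behind it.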

Mindy Support is Assisting the Development of Next-Gen AI Technology

Even though data annotation is a tedious and time-consuming task, it is a very important part of the overall project. Mindy Support relieves companies of the burden of performing such tasks in-house, allowing them to focus more attention on developing their product. We are one of the largest BPO providers in Eastern Europe, with more than 2,000 employees across six locations in Ukraine. Our size and locations allow us to source and recruit candidates quickly, and thanks to our rigorous QA processes, we can scale your project without sacrificing quality.

Posted by Il’ya Dudkin
