A Human Wrote This Article, Do Not Be Alarmed

You might have heard that last week, The Guardian published an article written by an AI language model called GPT-3, which caused quite a stir among its readers and even in the scientific community. The article describes how robots will interact with humans in the future and argues that robots do not want to replace humans or take over the planet. Instead, it depicts a vision in which robots and humans coexist, claiming that even if the AI were given an assignment to eliminate humans, it would surely ignore it. You can read the original article and decide for yourself whether such text can compete with one created by a human. Now let’s take a closer look at GPT-3 and how it was trained to become a writer. 

How Was GPT-3 Trained?

GPT-3 is an AI language model that is capable of producing text similar to that of a human. The neural network that powers it contains 175 billion parameters, more than a hundred times as many as the previous generation model, GPT-2. Still, such a network needs to be trained to write correctly and effectively, which is why it was trained on 450 gigabytes of text, a dataset roughly ten times larger than the one used to train its predecessor. Judging from the work produced by GPT-3, increasing the size of the neural network along with the training dataset increases the model’s overall performance.
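To get a feel for what 175 billion parameters means in practice, here is a back-of-the-envelope calculation (our own illustration, not a figure from the article or from OpenAI) of how much storage the raw parameters alone would occupy at common numeric precisions:

```python
# Rough storage footprint of GPT-3's parameters (illustrative arithmetic only).
PARAMS = 175_000_000_000  # GPT-3's reported parameter count

def model_size_gb(num_params, bytes_per_param):
    """Return the storage footprint in gigabytes (10^9 bytes)."""
    return num_params * bytes_per_param / 1e9

fp32 = model_size_gb(PARAMS, 4)  # 32-bit floats: 4 bytes per parameter
fp16 = model_size_gb(PARAMS, 2)  # 16-bit floats: 2 bytes per parameter

print(f"fp32: {fp32:.0f} GB")  # 700 GB
print(f"fp16: {fp16:.0f} GB")  # 350 GB
```

Even before any training data is considered, the model itself is far too large to fit on a single consumer graphics card, which hints at why only a handful of organizations can train systems at this scale.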

Keep in mind that GPT-3 itself was not taught to write from hand-labeled examples. It learned by repeatedly predicting the next word in raw text, a self-supervised process in which the deep learning algorithm hones the parameters of the neural network to capture recurring patterns in the data. Human-annotated datasets remain essential elsewhere in AI, and for many supervised systems researchers feed labeled training data into the model in much the same way. After the training has been completed, the system is tested on text it has not seen before to check how well it learned. Based on the article written by GPT-3, it still has a way to go before it can compete with humans, and we will explore this in the next section.
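The train-then-test pattern described above can be sketched with a deliberately tiny toy: instead of a 175-billion-parameter transformer, the “model” below is just a table of bigram counts built from a made-up corpus (all names and data are our own invention), but the workflow is the same — fit the model on training text, then ask it what it learned:

```python
# A minimal sketch of fitting a model on text and probing what it learned.
# This is a toy bigram counter, NOT GPT-3's architecture; it only illustrates
# the idea of learning recurring word patterns from raw, unlabeled text.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word tends to follow which in the training text."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent continuation seen during training."""
    if word not in model:
        return None  # the model never saw this word at all
    return model[word].most_common(1)[0][0]

# Toy training corpus (invented for illustration).
corpus = "robots serve humans . robots serve machines . robots help humans"
model = train_bigrams(corpus)

print(predict_next(model, "robots"))   # "serve" — seen twice vs. once
print(predict_next(model, "aliens"))   # None — never seen in training
```

Note what the toy model shares with GPT-3’s weakness described below: it knows which words tend to follow which, but it has no idea what “serving humans” actually means.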

Critiquing GPT-3’s Performance

The article produced by GPT-3 was pretty good, but even its creators admit that the text is merely “human-like”: the model can produce a few paragraphs of coherent text, but then it loses focus and goes off-topic. In all fairness, its predecessor could only produce a couple of sentences before getting lost in incoherent references and starting to discuss something else, so we are seeing significant improvement in this regard. If we take a closer look at the article, we see that GPT-3 does not understand exactly what it is writing. It knows the correlations between words, but not much more. 

In its article, GPT-3 talks a lot about “serving humans”, “good”, “evil” and many other concepts that require a rich understanding of human life and our society. To overcome such a barrier, an AI model would need to be trained on far more than annotated text, since it would need a cognitive understanding of abstract concepts. For now, GPT-3 can be viewed, at best, as a good word spinner, i.e. it can retell what it learned from previously seen text in a rather entertaining way. This new development is a significant stride for AI, but it still falls short of truly understanding human language. 

Putting Together the Article 

At the end of the article, The Guardian’s staff tell us that they tasked GPT-3 with generating a 500-word op-ed. However, this task was given to the machine eight times, and human editors took the best parts from each of the eight outputs to create a unified version. The Guardian’s staff claim that this procedure is no different from editing an op-ed written by a human, but it seems unlikely that a human writer would be asked to produce eight times more text than was needed so that editors could stitch the best parts into the final version. 

From this standpoint, the work produced by GPT-3 is a bit questionable, but it still shows what machines are capable of today and how promising such technology is. As time goes on, perhaps we will have robot screenwriters and novelists, but as of today, such AI roles remain in the distant future. 

Mindy Support is Facilitating the Development of Next-Gen AI 

Creating advanced AI models requires a great deal of data annotation. Mindy Support understands that this is a very tedious and time-consuming process, which is why we take this burden off the shoulders of researchers. We have more than 2,000 employees in six locations all over Ukraine, and we can put together a team for you within a short timeframe. You will have complete control over the size of your team, and you will be able to scale up or down whenever you need. 

October 14th, 2020

Mindy News Blog