Fundamental flaws of GPT-3 & why language technology is still useful for authors

The German tech blog heise.de writes about GPT-3 under the title “Automated, but Mediocre Articles” (“Automatisch mittelmäßige Artikel”); the original article is from TechnologyReview.com. While GPT-2/3 really is a breakthrough as a deep-learning showcase for language technology, articles like this miss the fundamental point of language: getting information across.

Unsolvable Technical Debt of the End-to-End Approach

Ehud Reiter, whom most would consider one of the inventors of NLG as a scientific field and who founded Arria NLG, has written about this in very accessible blog posts for the layman.

Let's look at three of the underlying problems:

  1. Communication Goal: “But it is not useful if the goal is to accurately communicate real-world insights about data.” (http://blog.arria.com/openai-gpt-system-what-does-it-do)

  2. Hallucination: “like all end-to-end neural NLG systems it hallucinates and says things about the data which are not true.”  (https://ehudreiter.com/2020/08/10/is-gpt3-useful-for-nlg/).

  3. Bias, racism, etc.: Since end-to-end neural systems like GPT-2/3 are trained on existing content, they absorb its sentiments and biases and propagate them into every article they create. For example, “woman” gets expanded into “prostitute” while “man” gets expanded into “car salesman”. Others have already written about the danger of this, so I won't go further into it here. (See Nanyun Peng et al., https://arxiv.org/pdf/1909.01326.pdf)

All three of these problems are fundamentally anchored in the end-to-end approach of GPT, so they will not go away in the next iterations through more training data or some additional “magic AI” features.

Linking to data, and annotating benefits and context, are the fundamentals of information transfer. An article can be perfectly worded and still neither link the underlying data to its textual output nor transfer that information to the reader.
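To make the contrast concrete, here is a minimal, hypothetical data-to-text sketch (illustrative only, not AX Semantics' or Arria's actual engine): unlike an end-to-end model, every statement in the output is traceable to a field in the input data, so the system cannot hallucinate facts it was not given.

```python
# Minimal sketch of data-grounded text generation, in contrast to an
# end-to-end neural model: every sentence in the output is backed by a
# field in the input data. Names and structure are illustrative.

def generate_weather_report(data: dict) -> str:
    """Render a short report; each sentence maps to a data field."""
    parts = [f"In {data['city']}, the temperature is {data['temp_c']}°C."]
    # Optional fields produce optional sentences; missing data produces
    # no claim at all, rather than a hallucinated one.
    if data.get("rain_prob") is not None:
        parts.append(f"The chance of rain is {data['rain_prob']}%.")
    return " ".join(parts)

report = generate_weather_report({"city": "Stuttgart", "temp_c": 21, "rain_prob": 10})
print(report)
```

The design point is the traceability: if a number appears in the text, you can point at the exact data field it came from, which is precisely what an end-to-end generator cannot guarantee.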

What's in it for you?

That said, language technology like this is actually really useful. It helps authors during the various stages of writing, e.g. by supporting the creative process with suggestions or variations, taking care of grammatical features like inflection, generating flawless syntax, and similar tasks.
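As one small example of the “grammatical features” kind of help, here is a hypothetical inflection helper of the sort NLG tooling applies automatically inside templates (a naive English-only sketch, not a real product API):

```python
# Illustrative sketch: a tiny pluralization helper of the kind that
# NLG tooling applies automatically so authors never write "1 items".
def pluralize(noun: str, count: int) -> str:
    if count == 1:
        return f"1 {noun}"
    # Naive English rule; real systems use full morphology lexicons
    # covering irregular forms and other languages.
    suffix = "es" if noun.endswith(("s", "sh", "ch", "x")) else "s"
    return f"{count} {noun}{suffix}"

print(pluralize("article", 1))
print(pluralize("match", 3))
```

In a real system this is handled by a morphology component per language, so the author writes one template and the machine gets the grammar right in every rendered variant.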

So what we are looking at is a future of co-creation between human and machine, buzzworded as “Augmented Human”. Just don't get distracted by machines producing shiny articles that are really only a Potemkin village of readability.

So the Heise author misses the point: the articles look mediocre, but in reality they are just unfounded arguments, random fiction, and racist bias, combined into fluent phrasing.


Would you like to learn more about software that helps you produce meaningful texts faster?

Try it out! Right now! For free!
