
GPT-3, the algorithm that writes like a human being

Artificial Intelligence has been a hot topic in recent years. For some time now, thanks to the growing capabilities of the algorithms that govern our online lives, many have been wondering whether the day is near when a robot will think like a human being. You could say that the concept of artificial intelligence was invented in science fiction, but today this is a reality with GPT-3: according to a recent article in the New York Times, computer scientists have developed a new type of AI that has the ability to write like a human being, imitating natural language.

This short introduction was not written by me, but by an online tool that composed the text from a few words of input. It is all based on the latest frontier of Artificial Intelligence (AI): GPT-3, an algorithm developed by OpenAI in 2020 that can write like a human being and that could revolutionise the world of online writing. But how does this seemingly sci-fi technology work? What are its uses and limitations? And should those working in marketing start to worry?

The algorithm

GPT-3 stands for Generative Pre-trained Transformer 3 and is a deep learning-based algorithm capable of generating text and code after being trained on a large amount of data. OpenAI, the AI research company co-founded by Sam Altman and Elon Musk, 'trained' the algorithm on texts from the most varied digital sources, from Wikipedia to the New York Times and Reddit, and scaled it up to the exorbitant figure of 175 billion parameters (against the 'only' 1.5 billion of its predecessor, GPT-2, created just a year earlier), bringing this new version of the algorithm to a level of performance clearly superior to the past.

In this sense, GPT-3 can be considered a great step forward in Natural Language Processing, the branch of computer science and linguistics that studies the interactions between humans and computers and aims to program devices to recognise, analyse and reuse data written in natural language (i.e. our own). The algorithm was exposed to an immense amount of text written by humans so that it could absorb notions, concepts and syntactic rules, as well as the subtler elements that make writing 'human'. Its immense computing power means that, from an input of even a few words, GPT-3 generates texts that are very fluent and thus easily confused with those produced by real people.
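To make the idea of 'a few words of input' concrete, here is a minimal sketch of how a developer might query GPT-3 through OpenAI's API. It assumes the legacy openai Python SDK (versions before 1.0) and a valid API key; the model name, prompt and sampling parameters are illustrative choices, not the setup used to produce the introduction above.

```python
import openai

openai.api_key = "sk-..."  # your OpenAI API key (placeholder)

# Ask a GPT-3 model to continue a short prompt.
response = openai.Completion.create(
    engine="davinci",            # one of the original GPT-3 models
    prompt="Artificial Intelligence is a hot topic in recent years.",
    max_tokens=120,              # length of the continuation
    temperature=0.7,             # higher = more creative, less predictable
)

print(response.choices[0].text.strip())
```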

The world of copywriting

The creation of marketing copy is one of the first large-scale use cases of this technology: companies such as ContentEdge and Jasper have adapted GPT-3 to produce content such as blog posts, headlines and press releases with SEO in mind, so that it can rank among the first Google search results.
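In practice, such tools typically wrap GPT-3 in prompt templates tuned for a given content format. The sketch below is purely hypothetical (the actual prompts and pipelines used by ContentEdge or Jasper are not public) and again assumes the legacy openai Python SDK.

```python
import openai

openai.api_key = "sk-..."  # your OpenAI API key (placeholder)

def seo_headline(topic: str, keyword: str) -> str:
    """Hypothetical copywriting helper: wraps GPT-3 in a fixed prompt template."""
    prompt = (
        f"Write a catchy, SEO-friendly blog headline about {topic}. "
        f"It must include the keyword '{keyword}' and stay under 70 characters.\n\n"
        "Headline:"
    )
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=20,
        temperature=0.8,
        stop=["\n"],      # stop at the end of the headline line
    )
    return response.choices[0].text.strip()

print(seo_headline("AI-assisted copywriting", "GPT-3"))
```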

Automatically generated texts do not enjoy a good reputation on the net, because the most rudimentary versions of this technology have often been linked to spammers, bots and attempts to manipulate Google's search ranking. However, as Peter Welinder, vice-president of OpenAI, argues, more sophisticated tools should not harm ranking if used constructively, i.e. to smooth out flaws and falsehoods in a text. Moreover, recent updates are said to make the algorithm more truthful.

But we are still far from being able to consider jobs such as copywriting obsolete. At least for the time being, and given the limitations the technology still presents, this tool is a complement rather than a substitute for the human worker, who can delegate much of the work to the algorithm and then refine and enrich the resulting text.

Limitations

Despite the enormous growth in the amount of data they can absorb, GPT-3 and GPT-2 share problems related to reasoning and complexity.

Tests reported by the MIT Technology Review have in fact shown that, although the algorithm has no problem producing 'grammatically correct linguistic segments with complete meaning', it has semantic difficulties, struggling to follow the nuances of more complex situations. A well-known example is GPT-3 being asked how to fit a table through a door that is too narrow and answering that it is enough to 'remove the door'.

The underlying problem is that the algorithm has only analysed texts, not the psychology or cultural references of writers and readers, which leaves it with a merely superficial understanding of everyday situations. According to experts, however, this is only a temporary limitation: the technology may soon become indistinguishable from human work, with important implications for our society.

The consequences

Perhaps the most immediate consequence will be the proliferation of content. With an algorithm that can produce good texts en masse in a few moments, authors will have substantial help in creating a text, while users will have to get used to not knowing how much of what they read was written by a machine.

The first concern of experts is that once these tools become available on a large scale, the spread of AI-generated texts will increase exponentially, with an immense amount of content published everywhere and a worsening of the infodemic. If already today the 'circulation of an excessive amount of information [...] makes it difficult to find one's way around a given topic due to the difficulty of identifying reliable sources'[1], it is clear that the possibility of pressing a button and obtaining a text tailored to one's needs will only exacerbate this problem.

The other major fear concerns the content of the texts created by the algorithm. GPT-3 could in fact reproduce the toxic language contained in the texts provided to it for training. And although OpenAI claims to have improved its conditions of use and its filtering system, once the technology is in common use the volume of texts to be monitored could become unmanageable. Moreover, OpenAI itself is concerned about the mass dissemination of politically polarising content, and the emergence of competitors means that it will become increasingly easy to circumvent the rules of conduct in the future. With a possible increase in fake news and misinformation at every level, the automated production of texts could generate even more polarisation of opinion and the spread of echo chambers, as algorithms become increasingly good at creating texts tailored to specific target demographics.
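What such filtering might look like from a developer's point of view can be sketched as follows. This assumes OpenAI's moderation endpoint as exposed by the legacy openai Python SDK; it illustrates the kind of automated screening referred to above, not the exact system OpenAI uses internally.

```python
import openai

openai.api_key = "sk-..."  # your OpenAI API key (placeholder)

def is_safe(text: str) -> bool:
    """Return False if the moderation endpoint flags the text
    (hate, harassment, violence, sexual content, etc.)."""
    result = openai.Moderation.create(input=text)
    return not result["results"][0]["flagged"]

# Generate a draft, then screen it before publishing.
draft = openai.Completion.create(
    engine="davinci",
    prompt="Write a short post about a controversial political topic.",
    max_tokens=80,
).choices[0].text

print(draft if is_safe(draft) else "[draft blocked by the content filter]")
```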

It is therefore clear that, like all revolutionary technologies, the effect of GPT-3 will depend on how we approach it. It will take joint work by individuals and institutions to create, on the one hand, citizens who are better informed about and more critical of digital content and, on the other, new rules and accountability systems for the production of automated texts.

Translated by Margherita Folci



  • The Author

    Davide Bertot

    Davide Bertot, born in Turin in 2000, is a young man with a strong interest in international relations, politics and current affairs. Currently an undergraduate student of International Relations and Diplomatic Affairs at the University of Bologna, he works with Mondo Internazionale as Chief Editor for the Technology and Innovation section, with a particular focus on economic matters, contributes as a writer and editor for other associations, and volunteers at Volt Torino. Resourceful, pragmatic, curious and a fast learner, he hopes one day to work in the European institutions and do his part to improve our society. He studies and works with politics and current affairs because he believes in people's capacity to have an impact and in the need to acknowledge problems and work together to fix them.

