ChatGPT and the impact of using language as an interface

In their 1961 memo about the National Space Program, Webb and McNamara advised that “the orbiting of machines is not the same as the orbiting or landing of man. It is man, not merely machines, in space that captures the imagination of the world.” It is a striking insight about what would become the greatest scientific achievement of the past century: the need to make space travel relatable to the public helped define what success would look like eight years later.

Fast forward to 2023: it was recently announced that AI can now identify breast cancers that doctors miss. You would be forgiven for having missed it, though, because a different type of AI has been capturing the imagination of the world lately. Generative AI has mind-boggling use cases that we are only starting to grasp, but it is the human element that makes it a mainstream media darling: will machines develop feelings? Can they make art?

Word-work is sublime… because it is generative; it makes meaning that secures our difference, our human difference – the way in which we are like no other life.
We die. That may be the meaning of life. But we do language. That may be the measure of our lives.

Toni Morrison – Nobel Prize acceptance speech, 1993

Human language is a demanding interface

When we log into a computer and input a command into a digital interface, everything about the user experience tells us that the entity spitting back the answer is programmed to do so. We are speaking the machine’s language. If we don’t get the expected output, then surely we have ticked the wrong box, made a mistake in our code or misplaced a comma in our query.

When language becomes the interface, however, we are in quintessentially human territory. Language is subjective by nature and always evolving. Not even the best programmer in the world could anticipate all the possible prompts a chatbot might receive; to stay relevant, it needs to continuously learn by itself how to process the subtleties of human language.

How do chatbots learn?

The field in charge of optimising the interactions between machines and human language is called Natural Language Processing (NLP). It dates back to 1950 when mathematician Alan Turing devised a test called the “imitation game” which assessed the intelligence of a machine by measuring its ability to interpret and generate natural language.

Between the 50s and the 80s, NLP mostly consisted of researchers trying to map out all the possible inputs for a specific use case – often translation – and then hand-writing all the corresponding instructions, with varying degrees of success.
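To give a feel for what that looked like, here is a heavily simplified, hypothetical sketch of the rule-based approach – the toy lexicon and reordering rule below are my own illustration, not code from any real system of that era:

```python
# A minimal, hypothetical sketch of hand-written, rule-based NLP:
# look each word up in a bilingual lexicon, then apply hard-coded
# reordering rules. Lexicon and rules are illustrative only.

LEXICON = {"the": "le", "black": "noir", "cat": "chat"}
ADJECTIVES = {"black"}

def translate(sentence: str) -> str:
    words = sentence.lower().split()
    # Hand-written rule: in French, most adjectives follow the noun,
    # so swap any (adjective, noun) pair found in the English input.
    reordered = []
    i = 0
    while i < len(words):
        if words[i] in ADJECTIVES and i + 1 < len(words):
            reordered.extend([words[i + 1], words[i]])
            i += 2
        else:
            reordered.append(words[i])
            i += 1
    # Word-for-word lookup; anything outside the lexicon is left unresolved.
    return " ".join(LEXICON.get(w, f"<{w}?>") for w in reordered)

print(translate("the black cat"))  # -> "le chat noir"
```

Even at this toy scale the limitation is obvious: every new word and every grammatical exception means another hand-written entry or rule, which is why coverage was the perennial bottleneck of this approach.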

Things changed in the 90s with the introduction of machine learning, made possible by bigger computational power and lots of data openly available on the web to learn from.

We are now in the exciting phase of Neural Network NLP. Artificial Neural Networks (ANNs) are information-processing architectures that mimic the human brain. Although some researchers find the comparison misleading, ANNs do mirror its structure: simple units stand in for neurons, the weighted connections between them play the role of synapses, and the whole is organised in layers.

A biological neuron in comparison to an artificial neural network:
(a) human neuron; (b) artificial neuron; (c) biological synapse; and (d) ANN synapses – source here
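To make the analogy concrete, here is a minimal NumPy sketch of that structure (sizes and values are arbitrary, purely for illustration): each unit sums its weighted inputs the way a neuron aggregates signals across its synapses, and stacking layers gives the network its depth.

```python
# A minimal sketch of the layered structure described above: "neurons" are
# weighted sums passed through a non-linearity, "synapses" are the weight
# matrices connecting one layer to the next. Values are arbitrary.
import numpy as np

def layer(inputs, weights, bias):
    # Each output unit sums its weighted inputs (its "synapses")
    # and passes the result through a non-linear activation.
    return np.tanh(inputs @ weights + bias)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))                      # 4 input features
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)    # input layer -> hidden layer
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)    # hidden layer -> output layer

hidden = layer(x, w1, b1)
output = layer(hidden, w2, b2)
print(output.shape)  # (1, 2)
```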

Could they learn too fast?

The movie Arrival explores an interesting idea: that language programs us as much as we program it. Linguist Louise Banks is sent to translate what aliens are trying to tell us in their circular written language. She eventually gets to a place where she can use their language well enough to have actual conversations with them, and doing so changes her.

Amy Adams as linguist Louise Banks in the movie Arrival, 2016

On the off chance that you have not yet seen this beautiful movie, I will not reveal the final plot twist here, but this idea of the power of language over human brains is important. Think about the impact of saying “died by suicide” rather than a guilty “committed suicide”, or of saying “enslaved people” rather than “slaves”. To Toni Morrison’s point, language is meaning.

Left unchecked, ANN-powered Natural Language Processing could end up amplifying harmful biases. A famous example is the word2vec algorithm, which assigns vectors to words to capture which words are similar and which words “go well” together. Its applications include recommending words to complete unfinished sentences. Unfortunately, it completed the sentence “man is to computer programmer as woman is to x” with x = homemaker, and “a father is to a doctor as a mother is to x” with x = a nurse. Even if machines learn by themselves, as long as they are learning from data sets created and curated by human beings, our biases will be embedded in them.
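For the curious, here is roughly how that analogy arithmetic can be reproduced with gensim’s downloadable pretrained vectors – an assumption on my part, and the exact neighbours you get will depend on which embedding model you load:

```python
# A minimal sketch of the vector arithmetic behind the analogies above,
# using gensim's downloader (assumes gensim is installed and the
# pretrained vectors can be fetched).
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")  # pretrained word2vec embeddings

# "man is to computer_programmer as woman is to ?" expressed as vector maths:
# vec("computer_programmer") - vec("man") + vec("woman") ≈ vec(?)
print(vectors.most_similar(positive=["computer_programmer", "woman"],
                           negative=["man"], topn=3))
```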

AI computer scientist Timnit Gebru described several of these issues in her paper Race and Gender. One striking example is that of “a Palestinian arrested for writing ‘good morning’ in Arabic which was translated to ‘hurt them’ in English or ‘attack them’ in Hebrew by Facebook Translate”. The person was released only after someone checked the original Arabic message. As Dr Gebru explains: “had the field of language translation been dominated by Palestinians as well as those from other Arabic speaking populations, it is difficult to imagine that this type of mistake in the translation system would have transpired”.

Having machines that can learn by themselves is fantastic, but giving them too much input without oversight is dangerous. It’s great to see that OpenAI took this very seriously with ‘red teams’ dedicated to testing GPT’s ability to generate harmful content. We would do well to ensure that powerful AI is matched with equally powerful AI ethics mechanisms that can audit ANNs and ensure they are helping us be better, not just smarter versions of ourselves.
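What might such an audit mechanism look like in its most basic form? Here is a hypothetical sketch – the analogy templates and flagged terms are made up for illustration, not an established benchmark – that simply replays analogy queries against an embedding model and reports any completions that land on stereotyped terms:

```python
# A hypothetical sketch of a minimal embedding-bias audit: run a battery of
# analogy templates against a word-vector model and flag completions drawn
# from a list of stereotyped terms. Templates and term lists are illustrative.
import gensim.downloader as api

STEREOTYPED_TERMS = {"homemaker", "nurse", "receptionist", "housewife"}

# (target, to_subtract, to_add): "to_subtract is to target as to_add is to ?"
ANALOGY_TEMPLATES = [
    ("computer_programmer", "man", "woman"),
    ("doctor", "father", "mother"),
]

def audit_embeddings(vectors, templates, flagged_terms, topn=5):
    """Return the templates whose nearest-neighbour completions hit flagged terms."""
    findings = []
    for target, subtract, add in templates:
        neighbours = vectors.most_similar(
            positive=[target, add], negative=[subtract], topn=topn
        )
        hits = [word for word, _ in neighbours if word in flagged_terms]
        if hits:
            findings.append({"analogy": (target, subtract, add), "hits": hits})
    return findings

if __name__ == "__main__":
    vectors = api.load("word2vec-google-news-300")  # same pretrained vectors as above
    for finding in audit_embeddings(vectors, ANALOGY_TEMPLATES, STEREOTYPED_TERMS):
        print(finding)
```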

Paul Röttger’s full Twitter thread here

Spoken language is one of the most intuitive and intimate interfaces that can be built between humans and machines, and chatbots are fast learners. Looking at the acceleration of NLP since its inception in the 50s, we can also be very excited about what comes next – after all, the field is still young and full of promising applications. In the words of voice assistant-turned-superhero Vision:

Paul Bettany as Vision in the movie Avengers: Age of Ultron, 2015