AI Has All Of The Words But None Of The Wonder

In this opinion spot, The Tall Planner’s Kate Smither (pictured) laments ChatGPT’s lack of inventiveness, originality or understanding of context and maintains that these deficiencies make it no better than a bad one-night stand.

It was 1968 when Philip K. Dick asked “Do Androids Dream of Electric Sheep?”, and ever since, debate has circled around how human machines can be, and where human behaviour and machines cross over.

Now, with the mainstreaming of ChatGPT, the debate has risen to a new level as this model of machine learning takes on context, or the lack thereof.

ChatGPT claims it does have context, but really it only has the prompt as context, filtered through its language models. It doesn’t have retained, experience-enriched context. The models don’t really understand; they only know how to mimic and interpret language patterns. But no amount of mimicking or language prediction will ever give AI a memory in the way that memory allows humans to contextualise things.

“Each time a model generates a response, it can take into account only a limited amount of text, known as the model’s context window. ChatGPT has a context window of roughly 4,000 words—long enough that the average person messing around with it might never notice but short enough to render all sorts of complex tasks impossible,” wrote Jacob Stern in The Atlantic.

Whilst GPT-4 is getting better at context than ChatGPT, it is still limited by how widely the context window can be opened rather than how well it can retain meaning.

It is still about the words, not the wonder.

That inherent short-termism makes ChatGPT and GPT-4 the cultural equivalent of a one-night stand.

Studies back in 2015 claimed that human beings now had attention spans shorter than the average goldfish. The stats have since been debunked, but the point stands: even at that low point, humankind is miles ahead of ChatGPT. That’s because humans live in a world of attention, and that attention creates layers of context and memory that help us understand the world around us.

Writing for Forbes in January, Shane Snow made the point that “there are different types of attention—and some activities sustain the human brain’s attention better.”

“Sustains” being the key word.

Some activities, according to Snow, can actually “induce sustained attention” and one of the most reliable ways of inducing that “sustained attention” is storytelling. That’s because storytelling imprints on our memory faster and in a far more enduring way. Storytelling is sustained.

At this moment, there is no storytelling in ChatGPT.

You could run the “monkey typing Shakespeare” argument, but it would mean accepting that the Shakespeare ChatGPT typed carried no creative foundation. It was not created deliberately or even understood. That doesn’t make the storytelling repeatable; it makes it an interesting one-off.

The greatest storytelling of all time has been born of context. Whether in art, history or even advertising, context creates the meaning, and the meaning makes even the made-up make sense.

When you take away the context and the memory, it is easy to just cancel things outright. To smooth out the wrinkles of culture. It becomes a very binary decision.

It’s a wrinkle-free approach that leads to people saying “Of course Roald Dahl should be rewritten… it’s offensive” or “Agatha Christie has some troubling parts… let’s remove them”, and even to questioning why we celebrate the art of Picasso. After all, he was, by all accounts, “not the nicest human…”

But how, and whom, does that kind of cultural short-termism help? Culture doesn’t just reset when we want it to; like meaning and memory, it accumulates. That is what gives understanding, creates empathy and keeps us all learning and evolving.

So without a memory, ChatGPT is not a creator or a context or a culture; it is a highly skilled (and ever more skilled) mime. It learns and can be trained. One thing that does mean is that there is no real cancel culture here: ChatGPT doesn’t really leave anything to be cancelled. In many ways, it is self-cancelling. It leaves nothing enduring behind, no layers of meaning or context.

Technology doesn’t really change what we do—but it changes how we do the things we do. In that way, ChatGPT and AI more broadly are just more great pieces of technology. They are the facilitators of our imagination but they are not the imagination itself.

Think of AI like a functioning blind spot. When the eye can’t see, the mind steps in and fills in the blanks. It basically guesses, based on the context of experience, what the eye should see. Look at the recent NotCo work: it plays right into the fact that you can’t, or don’t, see old animals, yet it makes your mind fill in the blanks of the ‘unseen’ in an eerily realistic way. It shows you what would most likely be in the blind spot based on what we know about ageing in people. The AI translates that to animals based on the compilation of references we load into it. But it can’t work independently of those references. It can’t imagine for itself.

As fun as ChatGPT and the other AI tools coming along are right now, they are only extending humans’ ability to explore their own potential: giving us a new tool to play with and to express ourselves with. But they are not in competition with our capacity… at least not within the short-termism they are creating.

But maybe that’s the point. Short-termism is all they’re here for. They might be the equivalent of a good one-night stand, or even a great one, but they won’t call you afterwards because they won’t remember to. They won’t even know why they should.
