Do LLMs lose "the scent of people"?

Legislators and ethicists are not alone in asking whether 'watermarking' and other means can make the provenance of AI outputs recognisable. Jaron Lanier, an AI expert with broad experience in the field, writes about 'the scent of people.' Where might this intriguing phrase lead us when trying to work with AI in a productive way?

AI creatives like Lanier hold beliefs worthy of a deeper dive at a time when there is a push, even a willingness, for LLM immersion in the day-to-day workflows of people in all walks of life. For context, some of us produce outputs that are strictly technical and process driven; others are more creative in the traditional 'hand made' sense of the word. Acknowledging the messy middle forces a reality check: even though it may seem eclectic, creativity also needs process.

The focus of legislators and others appears to sit somewhere on a spectrum. At one end, AI is seen as a destructive force that needs to be reined in. At the other end, AI is pure wizardry that will save the world. AI detection appears to be treated as a means of arbitration. The reasoning seems to be that if AI is detectable, real humans can at least examine inputs, outputs and outcomes, and look for value for people and planet. A follow-on assumption is that what is detectable and measurable (not the same thing) is then controllable. That is a hugely grand claim. Starting with the end in mind, let's go back to that term 'detectable'.

One of my favourite pieces written by Lanier is his recent essay "There is No AI." Such confidence. In this piece he discusses the concept of 'digital dignity', where the smaller and quieter human voices are not drowned out by bigger and noisier tech equivalents. People with a big 'P' are asked to stand up for what humans need and what they are good at. Small 'p' people are the ones lost in the noise; Lanier seems to frame dignity as a matter of speaking up and speaking out, or of risking losing out in the long term.

And yet if we go back a little into his musings, there is a strong thread defining the differences between humans and AI forms of creativity. This is where the phrase 'scent' comes in. Humans are sense driven beings.

Human emotions come from human feelings, which rely on human senses. Humans can have emotions in a way that is orders of magnitude different from the way machine learning processes represent emotions. There is an important, related idea that it is often the emotion of a single human, in an act of emotional expression not enacted by a collective, that gives us great works like those of Bach. Lanier writes about the differences between individual and collective works in his 2010 book You Are Not a Gadget.

Yes, Large Language Models can harvest emotion words and string them together. But that word string can't be grounded in the way that each human who created the LLM's source text was grounded: in a real place, at a real time, in context, when the original and multiple experiences driving that text occurred.

This idea that human creativity has a scent is also found in a recent blogpost by John Loewen titled "As a heavy user, my mood towards GPT has changed: here’s why."

As a heavy ChatGPT user, Loewen finds the LLM wonderful for helping with processes, yet not at all productive when moved to the centre of his writing efforts. He describes LLM writing as 'hollow': "when using GPT to write for you, you are making the decision to be a story-writer rather than a story-teller. Your GPT-generated story is just a script — something pretends, rather than an appeal for any sort of emotional response."

When context and text are separated, when the situated-ness, the senses and the 'place-ness' of words are stripped away as LLMs aggregate and reassemble words sourced from multiple contexts into sentences, some form of meaning is lost.

Perhaps Lanier's concept of the 'scent' of humans, paired with Loewen's lament that AI writing is 'hollow', suggests a way to look at AI detection. Subtle, perhaps not entirely attainable, and worth a longer look. What is the trace of human emotion that filters into human writing that we might look for in AI detection? And what does it mean if this 'scent' is gradually being lost?
