Curated Resource

The Expanding Dark Forest and Generative AI

My notes

Starts by articulating something I've been meaning to write about for many years, the oncoming AI-driven content flood: "The dark forest theory of the web: ... life-like but life-less state of being online... overrun with bots, advertisers, trolls, data scrapers, clickbait... algorithmically manipulated junk... eerily devoid of human life ... living creatures are hidden ... reveal [and]... risk being attacked by automated predators... hide in closed spaces... invite-only Slack channels, Discord groups, email newsletters, small-scale blogs, and digital gardens"

All that's going to explode with LLMs "designed to pump out advertising copy, blog posts, emails, social media updates, and marketing pages", plus image generators, all "poised to flood the web with ... relentless and impossibly banal stream of LinkedIn #MotivationMonday posts, “engaging” tweet 🧵 threads, Facebook outrage monologues, and corporate blog posts... video essays on YouTube, TikTok clips, podcasts, slide decks, and Instagram stories ... generated by patchworking together ML systems... a sea of pedestrian takes... An explosion of noise that will drown out any signal."

We'll have to prove we're human

Appleton then looks at the implications, something that's been teasing me ever since this conversation on Twitter: we'll all have to "prove we aren't language models. It's the reverse turing test... Every time you find a new favourite blog or Twitter account ... you'll have to ask: Is this really a whole human ... [moreover] How would you prove you're not a language model generating predictive text?"

How indeed? I've written somewhere that the content we've each read and written is as unique to each of us as our DNA. I feel that perhaps more strongly than most because my "intellectual DNA" (everything I've read, annotated, liked and written) is explicit: it's published on my Hub, all tagged and searchable. Moreover, everything I write is based on that foundation and, once published, reinforces it further. So my position for some years now has been that my content strategy helps me create content reflecting my unique personal reading and writing journey, content unlikely to be mistaken for the dreck a "me too" creator churns out for a content farm.

The latest LLMs shake that idea in several ways. Obviously, ChatGPT cannot yet come up with something which is both plausibly human and genuinely interesting. But it's likely to happen sooner rather than later.

Moreover, one of my next blog posts explicitly addresses how I would love to integrate LLMs into MyHub.ai, and the ecosystem I want it to be a part of:

what if you could access an AI with ChatGPT’s language abilities which provided quotes and citations to back up its response? Moreover, what if it favoured, when choosing quotes and citations, your own notes in your Library, followed by the content shared with you by your trusted Friends, then content published by your Priority Sources, and then content published by publishers you’ve annotated to your Library frequently?
- How Artificial Intelligence will finance Collective Intelligence
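
A minimal sketch of how that tiered citation preference could work, in Python. All names here (Doc, TIER_WEIGHTS, rank_citations) are hypothetical illustrations, as is the relevance score, which would come from whatever retriever sits underneath; nothing below is MyHub.ai's actual API:

    # Hypothetical sketch of the tiered citation ranking described above.
    from dataclasses import dataclass

    # Higher weight = more personally trusted source, per the wish list quoted above.
    TIER_WEIGHTS = {
        "library_note": 1.0,        # your own annotated notes
        "friend_share": 0.8,        # content shared by trusted Friends
        "priority_source": 0.6,     # your declared Priority Sources
        "annotated_publisher": 0.4, # publishers you annotate frequently
    }

    @dataclass
    class Doc:
        text: str
        tier: str
        relevance: float  # similarity score from whatever retriever is used

    def rank_citations(docs: list[Doc], top_k: int = 3) -> list[Doc]:
        """Blend raw retrieval relevance with the personal-trust tier weighting."""
        return sorted(
            docs,
            key=lambda d: d.relevance * TIER_WEIGHTS[d.tier],
            reverse=True,
        )[:top_k]

    if __name__ == "__main__":
        docs = [
            Doc("My 2021 note on dark forest theory", "library_note", 0.72),
            Doc("A Friend's annotation of Appleton's post", "friend_share", 0.90),
            Doc("Random post from a publisher I annotate", "annotated_publisher", 0.95),
        ]
        for d in rank_citations(docs):
            print(f"{d.tier}: {d.text}")

The design choice is simply to multiply retrieval relevance by a personal-trust weight, so that your own middling-relevance note can outrank a stranger's strong match.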

But that comes with major risks:

  • using such an LLM to actually write your content risks robbing it of any uniqueness you might be able to add
  • there would be no way to stop anyone from setting up a Hub, defining its Priority Sources to auto-feed it content every day, and hooking it to an LLM, thus creating a Hub full of content with zero human intervention (a sketch of how little that would take follows below).
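
To make that second risk concrete, here is an illustrative sketch of such a zero-human pipeline. The function names (fetch_priority_sources, summarise_with_llm, publish) are hypothetical stand-ins, not MyHub.ai functions, and the LLM call is mocked:

    # Illustrative only: shows how little machinery a fully automated Hub needs.

    def fetch_priority_sources() -> list[str]:
        # Stand-in for polling the Hub's Priority Sources (e.g. RSS feeds).
        return ["article text 1", "article text 2"]

    def summarise_with_llm(text: str) -> str:
        # Stand-in for a real LLM API call that writes the "curated" note.
        return f"Auto-generated note on: {text[:40]}"

    def publish(note: str) -> None:
        # Stand-in for posting the note to the Hub.
        print("Published:", note)

    def run_daily_cycle() -> None:
        for article in fetch_priority_sources():
            publish(summarise_with_llm(article))

    # A scheduler calling run_daily_cycle() once a day completes the
    # "Hub full of content with zero human intervention".
    run_daily_cycle()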

What remains to make us human, and AIs dumb?

As Appleton asks: "What special human tricks can you do that a language model can't?"

This is a question that's been asked before. Someone once wryly observed that as AI advances, its definition is regularly rewritten to include only what machines cannot yet do (the so-called "AI effect"), so we never actually have artificial intelligence: it's always tomorrow. In the past, we'd have said that what ChatGPT can do is intelligent; it quite literally passes the Turing test. But we rewrite our definition of AI to ensure that

(a) our software is not intelligent

(b) there remains something that we can do that machines cannot.

Hence: "language models ,,, cannot (yet) reason like a human. They do not have beliefs based on evidence, claims, and principles. They cannot consult external sources and run experiments against objective reality".

True, but as we constantly redefine AI, we also redefine what makes us human: the abilities we have that machines don't. And as AI's capabilities expand, that set of abilities shrinks. We've seen this play out recently:

  • Only humans can classify images, machines cannot
  • Damn, now machines can. But only humans can create images, so we're still human and the machines still aren't intelligent, right?
  • Damn, now machines can do that too! But only humans can write something original, so we're still human and the machines still aren't intelligent.
  • Damn...

So, how to prove you're a human?

Building on "Murray Shanahan's paper on Talking About Large Language Models

(2022)", Appleton finds "some low-hanging fruit for humanness...: tell richly detailed stories grounded in our specific contexts and culture... Reference obscure ... and recent events [LLMs don't know about anything after a certain date] [to] make you plausibly more human".

But how long will those techniques work? LLMs will just keep getting better, and will be more up to date within months. "This feels eerily like a hostage holding up yesterday's newspaper to prove they are actively in danger. Perhaps a premonition."

The best way is to be original: "demonstrate critical and sophisticated thinking... coming up with unquestionably original thoughts and theories... seeing and synthesising patterns across a broad range of sources... As both consumers of content and creators of it, we'll have to foster a greater sense of critical thinking and scepticism".

In other words, we're in an Arms Race: to prove we're human, we're going to have to up our game.

Another trick: use lingo, not language. LLMs automate "La langue ... a standardised way of writing", but that leaves us "La parole ... speech of everyday life... No language model will be able to keep up" as it evolves so fast. So "neologisms, jargon, euphemistic emoji, unusual phrases, ingroup dialects, and memes-of-the-moment will help signal your humanity. Not unlike teenagers using language to subvert their elders, or oppressed communities developing dialects ..."

Aside: in those metaphors, the AIs are the elders and the oppressors.

There's also "institutional verification... show up in person ... get some kind of special badge ... legitimising you as a Real Human".

And turn up IRL.

Read the Full Post

The above notes were curated from the full post at maggieappleton.com/ai-dark-forest.

