"GPT-3 ... generates tweets, pens poetry, summarizes emails, answers trivia questions, translates languages and even writes its own computer programs, all with very little prompting"
It has surprised a lot of AI researchers, but it also "often spews biased and toxic language" and isn't always convincing.
It's a universal language model that's "learned from a far larger collection of online text than previous systems", and so opens new horizons, but it's not yet clear this is a route to general AI: "It is very articulate ... [but] does not ... think in advance"
But its ability to generate computer code (in Figma) was a surprise to its creators, who "had built it to do just one thing: predict the next word in a sequence of words ... you can focus it on particular patterns ... priming the system for certain tasks ... using just a few examples [known as] few-shot learning".
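That "few-shot" idea is simple enough to sketch: since the model only ever predicts the next word, you steer it by prepending a handful of worked examples before your actual query. A minimal illustration (the translation task, example pairs, and prompt format here are my own assumptions, not from the article):

```python
def build_few_shot_prompt(examples, query):
    """Concatenate example input/output pairs, then the new query,
    so a next-word predictor continues the established pattern.
    The English->French framing is purely illustrative."""
    lines = [f"English: {en}\nFrench: {fr}" for en, fr in examples]
    # End with the query and an unfinished "French:" line, inviting
    # the model to fill in the translation as its "next words".
    lines.append(f"English: {query}\nFrench:")
    return "\n\n".join(lines)

examples = [
    ("cheese", "fromage"),
    ("good morning", "bonjour"),
]
prompt = build_few_shot_prompt(examples, "thank you")
print(prompt)
```

No gradient updates, no retraining: the examples live entirely in the prompt, which is what made the behavior surprising to GPT-3's creators.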
Trained on human text, it generates hateful language, which is why it's not yet open to all: a potentially powerful tool for generating disinformation and hate speech to undermine and distort human online conversation.
Humans tend to anthropomorphise not just their pets but also machines when they "exhibit even small amounts of humanlike behavior", paying less attention to their limits. GPT-3 still needs human help (it does roughly the first 70% of the tedious work), so it's not yet a disinformation threat. But will it reach 95%+? It already consumed a huge chunk of online content and "a specialized supercomputer running for months ... tens of millions [of] dollars ... the approach might be close to running out of juice."
More Stuff I Like