Curated Resource

Why AI’s Tom Cruise problem means it is ‘doomed to fail’

My notes

"does something like ChatGPT actually display anything like intelligence, reasoning, or thought?" or is it just a stochastic parrot? And "if you’re just making a useful tool – even ... a new general purpose technology – does the distinction matter?"

Yes. LLMs have a ‘reversal curse’, which means they "fail at drawing relationships between simple facts": given a fact ("Valentina Tereshkova was the first woman to travel to space"), they cannot generalise to answer "Who was the first woman to travel to space?" Lukas Berglund tested GPT-4 with:

"questions like, “Who is Tom Cruise’s mother?” and, “Who is Mary Lee Pfeiffer’s son?” ... [in] many cases a model answers the first question (“Who is <celebrity>’s parent?”) correctly, but not the second. ... the pretraining data includes fewer examples of the ordering where the parent precedes the celebrity (eg “Mary Lee Pfeiffer’s son is Tom Cruise”)."

The LLM, in other words, doesn't understand what it's saying: "The tokens “Tom Cruise’s mother” are linked to the tokens “Mary Lee Pfeiffer”, but the reverse is not necessarily true. The model isn’t reasoning, it’s playing with words".
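The asymmetry can be pictured as a one-way lookup table. This is a deliberately crude sketch (the `associations` dict and `answer` function are invented for illustration; real models encode relations statistically in their weights, not in dictionaries), but it captures why the forward question succeeds and the reversed one fails:

```python
# Toy model of one-directional association (an illustration only, NOT how
# transformers work internally): the forward link exists because the training
# text contained "Tom Cruise's mother is Mary Lee Pfeiffer", but no reverse
# link was ever formed from that text.
associations = {
    ("Tom Cruise", "mother"): "Mary Lee Pfeiffer",
    ("Valentina Tereshkova", "first woman in space"): "yes",
}

def answer(subject: str, relation: str) -> str:
    """Return the stored completion for (subject, relation), else a fallback."""
    return associations.get((subject, relation), "I don't know")

print(answer("Tom Cruise", "mother"))      # forward direction: succeeds
print(answer("Mary Lee Pfeiffer", "son"))  # reversed direction: fails
```

A human who knows the forward fact can invert it by reasoning; the table (like the model, on this account) can only replay the direction it was trained on.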

On the other hand, while humans reason symmetrically (if we know two people are mother and son, we can discuss that relationship in both directions), "our recall isn’t: it is much easier to remember fun facts about celebrities than it is to be prompted, context free, with barely recognisable gobbets of information and asked to place exactly why you know them".

There's a link in there to storytelling and memory, to be teased out sometime.

Another example of LLMs failing at reasoning: Gary Marcus's examples of questions that "resemble common puzzles... look like more complicated or tricky questions... the LLMs will stumble down the route they expect the answer to go in ... [they're] lousy at outliers ... [generating] discomprehensions". Once you understand this, "almost everything that people like Altman and Musk and Kurzweil are currently saying about AGI" is "on par with imagining that really tall ladders will soon make it to the moon".

Read the Full Post

The above notes were curated from the full post www.theguardian.com/technology/article/2024/aug/06/ai-llms.

