At an AI conference, Jeff Jarvis "knew I was in the right place when I heard AGI brought up and quickly dismissed... I call bullshit... large language models might prove to be a parlor trick". The rest of the conference focused on "frameworks for discussion of responsible use of AI".
Benefits: for some, AI can...
Problems: "AI would..."
Not that it will be neat: "what is wondrous for one can be dreadful for another".
He then tours a few guidelines for responsible AI under development, concluding "Rather than principles ... chiseled on tablets... we need ongoing discussion to react to rapid development and changing impact; to consider unintended consequences ... and to make use of ... research. That is what WEF’s AI Governance Alliance says it will do".
Drawing parallels with print, "the full effect of a new technology can take generations to be realized... [and] there is no sure way to guarantee safety" because "a machine that is trained to imitate human linguistic behavior is fundamentally unsafe. See: print."
In this it resembles the debates over gun law or Section 230: who's responsible, and hence liable? Holding the AI models' manufacturers "responsible for everything anyone does with them, as is being discussed in Europe... is unrealistic", while blaming users "could be unfair".
Should LLMs be open-sourced? There are good reasons for doing so, although "others fear bad actors will take open-source models... and detour around guardrails" - but are these guardrails any use anyway? Let's at least hope we avoid a "knowledge ecosystem when books, newspapers, and art are locked up by copyright ... [and] machines learn only from the crap that is free... [we need] a commons of mutual benefit and control."