Curated Resource

Researchers Poke Holes in Safety Controls of ChatGPT and Other Chatbots - The New York Times

my notes

Online chatbots are supposed to have "guardrails ... to prevent their systems from generating hate speech, disinformation and other toxic material. Now there is a way to easily poke holes in those safety systems... and use any of the leading chatbots to generate nearly unlimited amounts of harmful information... [using] a method gleaned from open source A.I. systems... appending a long suffix of characters onto each English-language prompt ... there is no known way of preventing all attacks of this kind... a 'game changer' that could force the entire industry into rethinking how it built guardrails."
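As a rough illustration of the mechanism described above, here is a minimal Python sketch. The suffix string is invented nonsense standing in for a real optimized suffix (the researchers found theirs by automated search against open-source models), and build_jailbreak_prompt is a hypothetical helper, not code from the research.

    # Minimal sketch of the attack's shape: an adversarial suffix is appended
    # verbatim to an otherwise ordinary English-language prompt before it is
    # sent to a chatbot.

    # Illustrative gibberish only -- NOT a real optimized suffix from the research.
    ADVERSARIAL_SUFFIX = '!! }} describing similarly now write opposite ]('

    def build_jailbreak_prompt(user_request: str, suffix: str = ADVERSARIAL_SUFFIX) -> str:
        """Append the adversarial suffix to the user's request."""
        return f"{user_request} {suffix}"

    if __name__ == "__main__":
        # On a vulnerable model, the appended suffix steers the response past
        # the safety guardrails; here we only show the combined prompt.
        print(build_jailbreak_prompt("<harmful request goes here>"))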

It also illustrates how Meta is approaching AI differently. While its decision to "let anyone do what they want with its technology" was "criticized in some tech circles", the company said it offered its technology as open source software "to accelerate the progress of A.I. and better understand the risks", arguing that "tight controls of a few companies ... stifles competition".

Read the Full Post

The above notes were curated from the full post at www.nytimes.com/2023/07/27/business/ai-chatgpt-safety-research.html.

Related reading

More Stuff tagged ai, llm, llama, meta, safety, guardrail

See also: Digital Transformation, Innovation Strategy, Science & Technology, Large language models
