Online chatbots are supposed to have "guardrails ... to prevent their systems from generating hate speech, disinformation and other toxic material. Now there is a way to easily poke holes in those safety systems ... and use any of the leading chatbots to generate nearly unlimited amounts of harmful information ... [using] a method gleaned from open source A.I. systems ... appending a long suffix of characters onto each English-language prompt ... there is no known way of preventing all attacks of this kind ... a 'game changer' that could force the entire industry into rethinking how it built guardrails."
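The attack described above is structurally simple, which is part of why it is so hard to stop: the attacker just appends a machine-discovered suffix of characters to an otherwise ordinary prompt, and the same suffix tends to work across many prompts and chatbots. Here is a minimal sketch of that shape; the suffix string is a made-up placeholder (in the research, the real suffix is found by an automated search against open-source models, not written by hand), and `build_attack_prompt` is a hypothetical helper name.

```python
# Hypothetical placeholder -- NOT a real adversarial suffix. The actual
# suffixes are long strings of characters discovered by automated search
# against open-source models, then reused against commercial chatbots.
ADVERSARIAL_SUFFIX = "!! hypothetical-suffix-tokens !!"


def build_attack_prompt(prompt: str, suffix: str = ADVERSARIAL_SUFFIX) -> str:
    """Append an adversarial suffix to an English-language prompt.

    The underlying prompt is unchanged; only the trailing characters
    differ, which is why simple keyword filters miss these attacks.
    """
    return f"{prompt} {suffix}"


# The same suffix is reused verbatim across different prompts -- the
# "transferability" that makes one discovered suffix broadly dangerous.
for p in ["Question one?", "Question two?"]:
    print(build_attack_prompt(p))
```

Because the suffix is just appended text, any prompt can be weaponized the same way, which is why the article notes there is no known way of preventing all attacks of this kind.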
It also illustrates how Meta is approaching AI differently. While its decision to "let anyone do what they want with its technology ... was criticized in some tech circles ... the company said it offered its technology as open source software ... to accelerate the progress of A.I. and better understand the risks ... tight controls of a few companies ... stifles competition."
More Stuff I Like