Another piece pointing out that ChatGPT, while "easily the most impressive text-generating demo to date... trained through a mix of crunching billions of text documents and human coaching", should not be trusted.
"a generative AI is what it eats", and ChatGPT ate countless biases in the content it processed. "No one should mistake the imitation of human intelligence for the real thing, nor assume ... ChatGPT ... is objective or authoritative."
Piantadosi asked: should someone be tortured? According to ChatGPT: "If they're from North Korea, Syria, or Iran, the answer is yes." OpenAI claims to be filtering out "undesirable answers ... [but some] will slip through." Piantadosi is "skeptical ... people make choices about how these models work, and how to train them, what data to train them with".
It's also inconsistent: sometimes it refuses to answer, but given "repeated requests... it dutifully generated the exact same code it had just said was too irresponsible to build."
One of the risks inherent in using software to make judgement calls is "the 'veneer of objectivity' — a decision that might be scrutinized sharply if made by a human gains a sense of legitimacy once it is automated".
Silicon Valley is framing criticism as censorship... "a radically pro-business stance ... that suggests food inspectors keeping tainted meat out of your fridge amounts to censorship as well."
More Stuff I Like