The claims about ChatGPT made by "fascinated evangelists", that such models "contain 'humanity's scientific knowledge,' are approaching artificial general intelligence ... even consciousness", are simply "a distraction from the actual harm perpetuated by these systems. People get hurt from the very practical ways such models fall short in deployment."
This excellent piece provides plenty of examples, some explanations and a plea for better engineering.
The underlying problems are well-known, and have been documented since at least the release, and swift withdrawal, of Microsoft's Tay chatbot. However, "as models get larger ... [it] becomes increasingly difficult to document the details of the data involved and justify their environmental cost."
Moreover, these models' achievements are presented as "independent of the design and implementation choices of its engineers", which disconnects these problems from human accountability and shifts the blame onto "society at large or supposedly 'naturally occurring' datasets, factors the companies developing these models claim they have little control over." This is not true: "none of the models we are seeing now are inevitable."
But it's easier to "dismiss criticism as baseless and vilify it as 'negativism,' 'anti-progress,' and 'anti-innovation'"... and "for some reason it seems to be the job of the marginalized to 'fix'" these models, with engineers and CEOs appealing to users to help them improve their flawed, dangerous software.