The claims made about ChatGPT by "fascinated evangelists" (that such models contain “humanity’s scientific knowledge,” are approaching artificial general intelligence, "even consciousness") are simply "a distraction from the actual harm perpetuated by these systems. People get hurt from the very practical ways such models fall short in deployment."
This excellent piece provides plenty of examples, some explanations and a plea for better engineering.
Examples:
Underlying problems include:
These problems are well known, and have been documented since at least the release and rapid withdrawal of Microsoft's Tay chatbot. However, "as models get larger ... [it] becomes increasingly difficult to document the details of the data involved and justify their environmental cost."
Moreover, these models' achievements are presented as "independent of the design and implementation choices of its engineers", which disconnects the problems from human accountability and shifts the blame onto "society at large or supposedly “naturally occurring” datasets, factors the companies developing these models claim they have little control over." This is not true: "none of the models we are seeing now are inevitable."
But it's easier to "dismiss criticism as baseless and vilify it as “negativism,” “anti-progress,” and “anti-innovation”". And for some reason "it seems to be the job of the marginalized to “fix”" these models, with engineers and CEOs appealing to users to help them improve their flawed, dangerous software.