"The most common AI-based functionalities in participation tools are toxicity screening, analysis of inputs and translation. The first two in particular are meant to lighten the workload" of community managers.
"Toxicity screening ... is used to flag hateful or inappropriate inputs." Text is usually flagged and post-moderated, but images are blocked before a human can unblock it, as "text can be harmful, it is rarely illegal, unlike images and videos, which can more easily cross legal lines.”
"AI-based analysis of participants’ inputs", like the EC's DORIS tool, "sort inputs into different categories... within each category are then grouped with similar ideas."
None of this is new - it's basic NLP. But LLMs now "also provide written summaries ... or allow users to interact with them via chatbot", although these outputs are less reliable.
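For illustration only, a summary of a cluster of inputs might be generated along these lines; the cited tools' actual models and prompts aren't public, and this assumes the openai Python client with an API key in the environment:

```python
# Hypothetical example: summarising one cluster of proposals with an LLM.
from openai import OpenAI

client = OpenAI()

def summarise_cluster(texts: list[str]) -> str:
    prompt = "Summarise the following citizen proposals in three sentences:\n" + "\n".join(texts)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, swap for whatever is available
        messages=[{"role": "user", "content": prompt}],
    )
    # LLM summaries can be unreliable, so keep a human review step before publishing.
    return response.choices[0].message.content
```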
"make.org platform uses AI-based anomaly detection to identify potential trolls" and "uses an algorithm to ensure that all proposals are shown to the same number of participants" to prevent early ideas rolling over everything else.
Most players in this space seem pretty wary of "delegating decision-making to algorithms ... [which are] unstable in terms of regulatory and ethical frameworks". AIs must be trustworthy and bias-free.