I think I'll call these 'centaur papers' - scientific papers describing how best to combine human and AI.
Apparently theory and practice aren't matching up: "While frameworks on augmentation theorize how to best divide work between humans and AI, the empirical literature ... [shows] inconclusive findings. Interaction challenges... call into question how theorized augmentation benefits can be realized."
So, building on "cognitive learning theory, [they] develop a conceptual model" for designing AI outputs that offer "reflection-provoking feedback... not prescribe any actions... [so requiring] humans to expend cognitive effort".
"Reciprocal algorithmic output provides open-ended, reflection-provoking feedback... integrates a user’s input [so] ... the user must expend effort to arrive at an answer." It does this by providing "evaluative feedback and critique... [rather than] explicit, outcome-focused recommendations, and [by focusing] on improving a user’s understanding" of the area being explored, rather than focusing their education on how to use AI.
This enables "three crucial augmentation outcomes...: task performance, human agency, and human learning", rather than reducing people to accepting/rejecting AI output as their skills degrade over time.
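To make the distinction concrete, here is a minimal sketch (not from the paper, and entirely hypothetical) of how a "reciprocal" output style might be operationalised as a prompt template, contrasted with the conventional prescriptive style the authors critique:

```python
# Hypothetical illustration of "reciprocal" vs. prescriptive AI output styles.
# These templates are my own sketch of the concept, not the authors' code.

def prescriptive_prompt(user_draft: str) -> str:
    """Conventional style: ask the AI for an explicit, outcome-focused answer
    the user can simply accept or reject."""
    return (
        "Review the draft below and state the single best revision to make.\n\n"
        f"Draft:\n{user_draft}"
    )

def reciprocal_prompt(user_draft: str) -> str:
    """Reciprocal style: open-ended, reflection-provoking critique that
    integrates the user's input but prescribes no action, so the user
    must expend cognitive effort to arrive at an answer themselves."""
    return (
        "Review the draft below. Do NOT suggest or prescribe any changes.\n"
        "Instead, critique its strongest and weakest claims, and pose two\n"
        "open questions the author should reflect on before revising.\n\n"
        f"Draft:\n{user_draft}"
    )

draft = "AI augmentation always improves task performance."
print(reciprocal_prompt(draft))
```

The design intent is that the second template feeds the user's own work back as critique and questions, preserving agency and supporting learning, where the first reduces the human to approving a recommendation.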
Image via Ross Dawson on LinkedIn, whose analysis, identifying the paper's key concepts, is also worth reading.