Curated Resource

AI tools as science policy advisers? The potential and the pitfalls

my notes

Asks: could LLMs be used to "create tools that sift and summarize scientific evidence for policymaking... [for] knowledge brokers providing presidents, prime ministers, civil servants and politicians with up-to-date information on how science and technology intersects with societal issues... [who must] nimbly navigate ... millions of scientific papers ... reports from advocacy organizations, industry and scientific academies... must work fast ... [but] Producing policy summaries in weeks, days or sometimes hours is a daunting task".

There are obvious quality concerns, which led the "US House of Representatives to impose limits on chatbot use in June" 2023, so a lot of work needs to go into guidelines and guardrails. The article explores two promising tasks, "synthesizing evidence and drafting briefing papers", and highlights "areas needing closer attention".

Faster evidence synthesis

"AI-based platforms should be able to ... free subject-matter experts to focus on more complex analytical aspects" by supporting two types of synthesis:

  • "Systematic reviews ... identify a question ... systematically locate and analyse all relevant studies to find the best answer...
  • subject-wide evidence syntheses... reading the literature at scale ... eg 70 people ... 50 person-years reading more than 1.5 million conservation papers ... summarized all 3,689 tested interventions ... -> expert panel... (https://conservationevidence.com)

Early-stage processes automatable by AI include "search, screening and data-extraction... especially useful in making sense of emerging domains [and] to detect emerging ‘clusters’ of research... Nonetheless, assessing data quality and drawing conclusions ... typically require human judgement."
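
To make the screening step concrete, here is a minimal sketch (mine, not the article's) of an LLM-assisted include/exclude pass over abstracts; the `ask_llm` helper, the criteria and the sample record are all illustrative assumptions.

```python
# Sketch of LLM-assisted screening for a systematic review.
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    abstract: str

CRITERIA = ("Include only studies that test an intervention, report a "
            "measured outcome and state the sample size.")

def ask_llm(prompt: str) -> str:
    """Hypothetical model call; swap in an institution-approved endpoint.
    A crude rule (looks for a reported 'n=') keeps the sketch runnable."""
    return "INCLUDE" if "n=" in prompt.lower() else "EXCLUDE"

def screen(paper: Paper) -> bool:
    """First-pass include/exclude decision; humans still audit a sample,
    since the article stresses quality judgements remain with people."""
    prompt = (f"Criteria: {CRITERIA}\nTitle: {paper.title}\n"
              f"Abstract: {paper.abstract}\nAnswer INCLUDE or EXCLUDE.")
    return ask_llm(prompt).strip().upper().startswith("INCLUDE")

print(screen(Paper("Hedgerows and pollinators",
                   "Field trial across n=40 farms; effects on bee abundance.")))
# -> True
```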

These processes could also help "decision-making... creating possible options in a process known as solution scanning ... Advisers can then collate and synthesize the relevant evidence". AI can also help advisers overcome linguistic barriers.
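
As a sketch of how solution scanning might be prompted (again my illustration, with a canned `ask_llm` stub standing in for a real model call):

```python
# Sketch of "solution scanning": asking a model to enumerate candidate
# options for advisers to evidence-check afterwards.
def ask_llm(prompt: str) -> str:
    # Canned reply so the sketch runs; replace with a real model call.
    return "1. Hedgerow restoration\n2. Pesticide buffer zones\n3. Wildflower margins"

def scan_solutions(question: str, n: int = 10) -> list[str]:
    """Return a de-duplicated list of candidate interventions."""
    prompt = (f"Policy question: {question}\nList up to {n} distinct candidate "
              "interventions, one per line, without advocating for any.")
    options = [line.lstrip("0123456789.-• ").strip()
               for line in ask_llm(prompt).splitlines()]
    return [o for o in dict.fromkeys(options) if o]

print(scan_solutions("How can farmland pollinator decline be reversed?"))
# -> ['Hedgerow restoration', 'Pesticide buffer zones', 'Wildflower margins']
```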

Issues to consider

  • Consistency: source materials are not standardised, so it's "difficult to develop fully automated methods to identify specific findings and study criteria"; e.g. period of effect and sample size are often buried in the text. Standard formats exist but are not used globally;
  • Credibility: objective metrics like "impact factors and citation counts" are found to be "poor measures of research quality". It's difficult for an AI to balance the many credibility factors advisers actually use (plausibility, based on the advisers’ knowledge and evaluation of the research; the reputations of authors and their institutions; the views of others in the field, colleagues and peers), particularly as the factors' relative importance will vary by policy question and context; a context-dependent weighting is sketched after this list. We need agreed standards for research quality.
  • Database selection and access: "Access and interoperability of databases, and government collaboration... essential foundations for large-scale automated evidence synthesis."
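
A toy illustration of the credibility point above: the factor names, contexts and weights are my assumptions, chosen only to show why a single fixed metric falls short.

```python
# Context-dependent weighting of the credibility factors advisers use.
FACTORS = ("plausibility", "author_reputation", "peer_views")

CONTEXT_WEIGHTS = {
    # a rapid brief may lean on reputation; a long review on peer views
    "rapid_response":   {"plausibility": 0.5, "author_reputation": 0.4, "peer_views": 0.1},
    "long_term_review": {"plausibility": 0.3, "author_reputation": 0.2, "peer_views": 0.5},
}

def credibility(scores: dict[str, float], context: str) -> float:
    """Weighted sum of 0-1 factor scores; the weights shift with context."""
    weights = CONTEXT_WEIGHTS[context]
    return sum(weights[f] * scores[f] for f in FACTORS)

same_paper = {"plausibility": 0.8, "author_reputation": 0.6, "peer_views": 0.9}
print(credibility(same_paper, "rapid_response"))    # ≈ 0.73
print(credibility(same_paper, "long_term_review"))  # ≈ 0.81, same paper
```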

Drafting policy briefs

Today, AI "could be used to provide first drafts of discrete sections... plain-language summaries of technical information or complex legislation", although Elsevier's early experiment produced bland text pitched at the same (high) reading level as the papers it sourced, which is not that useful.

But tomorrow they might provide advice tailored to each policymaker, factoring in an MP's "political affiliation, voting record, educational ... background" and constituency (demographics and socio-economic information), and even "present content on science-informed issues in the voice of the policymaker" by "leverag[ing] the policymakers’ previous work as training data... The level of technical detail might be dialled up or down by the reader themselves."
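
A hypothetical sketch of how such tailoring might be assembled from the profile fields the article lists; the template, field names and the reader-controlled detail dial are my assumptions, not a described system.

```python
# Assembling a reader-tailored brief request from a policymaker profile.
from dataclasses import dataclass

@dataclass
class ReaderProfile:
    affiliation: str   # political affiliation
    background: str    # educational/professional background
    constituency: str  # demographic and socio-economic notes

def brief_prompt(topic: str, reader: ReaderProfile, detail: int) -> str:
    """detail runs 1 (plain language) to 5 (full technical depth),
    dialled up or down by the reader themselves."""
    return (f"Summarise the evidence on {topic} for a policymaker.\n"
            f"Affiliation: {reader.affiliation}. Background: {reader.background}.\n"
            f"Constituency: {reader.constituency}.\n"
            f"Write at technical detail level {detail}/5.")

print(brief_prompt("flood-resilient housing",
                   ReaderProfile("centrist", "law degree", "coastal, ageing"), 2))
```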

Also:

  • LLMs can "have distinct political leanings ... [so] cannot be black boxes — they will require transparency and participatory design processes", so institutions will need "robust governance, broad participation, public accountability and transparency."
  • They must be made invulnerable to being "data-poisoned" by disinformation, probably AI-created.
  • To avoid "disclosing restricted information" (see go.nature.com/3rrhm67), institutions will need "guidelines about what documents and information can be fed into external LLMs and, ideally, [to] develop their own internal models"; a minimal pre-flight gate is sketched after this list.
  • The brokers will require training (duh), particularly to "avoid inappropriate over-reliance on AI".
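
And the promised pre-flight gate for the restricted-information point, with assumed classification labels rather than any real institutional policy:

```python
# Minimal gate enforcing guidelines on what may go to an external LLM.
ALLOWED_FOR_EXTERNAL = {"public", "published"}

def check_before_upload(doc_classification: str) -> None:
    """Refuse to forward anything not cleared for external services."""
    if doc_classification.lower() not in ALLOWED_FOR_EXTERNAL:
        raise PermissionError(
            f"'{doc_classification}' material must stay on internal models")

check_before_upload("public")         # passes silently
# check_before_upload("restricted")   # would raise PermissionError
```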

Developing this field properly will require a partnership: "technical know-how is likely to come from academia and technology companies, whereas demands for robust governance, transparency and accountability can only be met by governments... We still need old-school intelligence to make the most of the artificial kind."

Read the Full Post

The above notes were curated from the full post www.nature.com/articles/d41586-023-02999-3.

