Avoiding the AI “off switch”: Make AI work for clinicians, to unlock potential for patients

A groundbreaking White Paper has been released, urging the government, AI developers, and regulators to prioritize clinician needs in the development of artificial intelligence for healthcare. The report, supported by the NIHR ARC Yorkshire & Humber Improvement Academy, warns that the vast potential of AI to improve patient care could be lost if these technologies fail to serve the professionals who will use them.

The healthcare sector is a major target for global AI investment, with nations, including the UK, actively pursuing strategies to integrate AI for more efficient and responsive healthcare. However, the newly published White Paper, a collaboration between the Centre for Assuring Autonomy at the University of York, the MPS Foundation, and the Improvement Academy hosted at the Bradford Institute for Health Research, identifies a critical risk: the “off-switch.”

This “off-switch” represents the potential for frontline clinicians to reject AI tools they find burdensome, impractical, or a threat to their professional judgment and patient safety. The report highlights a key concern: the risk of clinicians becoming “liability sinks,” bearing full legal responsibility for AI-influenced decisions, even when the AI system is flawed.

The White Paper’s findings are rooted in the Shared CAIRE (Shared Care AI Role Evaluation) research project, a collaboration with the Centre for Assuring Autonomy. This research examined the impact of AI decision-support tools on clinicians, drawing on expertise from safety, medicine, AI, human-computer interaction, ethics, and law.

The research team evaluated various AI tool applications, from simple information provision to direct patient interaction. This evaluation led to seven key recommendations. These include calls for:

  • Reform of product liability for AI tools, given the significant difficulties in applying the current product liability regime to these technologies
  • AI tools to provide clinicians with information only, rather than recommendations, reducing the potential risk to both clinicians and patients until product liability is reformed
  • Clinicians to be fully involved in the design and development of the AI tools they will use

The White Paper authors stress the urgency of these recommendations, urging the government, AI developers, and regulators to act swiftly to ensure AI’s successful integration into healthcare.
