Nottingham University Business School
[Image: a radiographer operating an MRI scanner]

Governing artificial intelligence safety in healthcare: developing a learning architecture for regulation, investigation and improvement

This project aims to analyse how artificial intelligence (AI) safety should be regulated and governed in healthcare, and how those systems of governance can adapt over time to accommodate future developments.

Awarded: 2018

Funder: Wellcome Trust

CHILL investigators:

Professor Carl Macrae

Research summary

Background:

Artificial intelligence (AI) has enormous potential to improve the safety of the care that patients receive, for example by improving diagnostic accuracy and treatment planning. But AI also has the potential to introduce a range of new risks to patient safety, such as making decisions that are difficult to understand or check. To maximise the benefits of AI in healthcare, it is critical that the patient safety risks it poses are robustly regulated and governed.

Design and methods:

I will analyse the types of patient safety risks posed by AI, along with the key regulatory functions needed to govern those risks effectively, and examine how systems of governance can adapt over time to accommodate future developments.

A series of workshops will bring together key stakeholders to help develop the proposed governance framework and to plan future research.
