Interdisciplinary group of experts examined how Canada can maximize potential benefits of AI in health care while minimizing dangers
An interdisciplinary group of experts examining the use of artificial intelligence in Canadian health care has found that liability, algorithmic bias, and protecting privacy without stifling innovation are key concerns to address before integrating such technology into the health system.
With support from the Canadian Institute for Advanced Research (CIFAR) and the University of Ottawa’s AI + Society Initiative, uOttawa Professors Colleen M. Flood (Professor of Law, Common Law Section, and Research Chair in Health Law & Policy) and Céline Castets-Renard (Full Professor, Civil Law Section, Faculty of Law, and University Research Chair on Accountable Artificial Intelligence in a Global World) joined forces with McGill University’s Joelle Pineau to assemble over two dozen international experts in artificial intelligence (AI), law, ethics, policy, and medicine. Their aim? To discuss the core regulatory challenges these complexities raise and to answer a central question: how can Canada maximize the potential benefits of AI, including improved safety, quality, and efficiency in health care, while minimizing potential dangers such as discrimination?
Several panels examined whether existing laws are sufficient to protect patients from harm and to maximize the benefits that AI technologies, properly employed in health care, can deliver, while also exploring what form reforms might take.
A summary of core takeaways from the “AI & Health Care: A Fusion of Law & Science, An Introduction to the Issues” talks can be found here.
“The potential of AI to improve the quality, safety and efficiency of health care is simply enormous,” said Dr. Flood. “However, as we innovate in science we need to innovate in regulation. We want to make sure we have the right incentives to support terrific innovation in the marketplace but protect patients and other users from harm from, for example, algorithmic bias, which can result not only in discrimination but also in unsafe products.”
For enquiries or interviews, media can reach out directly to experts by consulting this Need an Expert page, or contact:
Media Relations Agent