Dykeman examines artificial intelligence, possibility, and law in health care
By AdvocateDaily.com Staff
The intersection of artificial intelligence (AI) and patient data opens new frontiers in health care but also comes with privacy concerns, Toronto health lawyer Mary Jane Dykeman tells AdvocateDaily.com.
Dykeman, partner with DDO Health Law, has been advocating for clients on a broad range of health law matters for more than two decades. She says Big Data and AI are at a point where exciting opportunities exist in health care and beyond.
She says that while there are privacy concerns, they need not be an absolute barrier to the advancement of medicine and science.
Earlier this year, Dykeman addressed that delicate balance during a Canadian Association for Health Services and Policy Research (CAHSPR) forum in Toronto.
While acknowledging the necessity of protecting privacy, Dykeman says it is “self-defeating” to assume the only approach is more legislation and regulation.
Dykeman says she was also asked during the forum about possible liability should a machine using AI generate a diagnosis that results in a poor outcome.
“The issue has really been addressed with the advent of other technology and devices,” she says. “I don’t think we need to worry about that at this time as the more pressing issues are about privacy, how to manage the volume of data, and make sense of it, and where consent and engagement of patients fit.”
Still, Dykeman notes that diligence is always prudent. She cites a Forbes story detailing the travails of a software company developing an AI chatbot designed to dispense medical advice. The project held promise until concerns were raised that some of the information it provided was inaccurate.
She says it must also be determined at what point data should be considered “identifying,” that is, whether it could reasonably identify a person.
“For example, technology has evolved over time to permit us to know that it is a single person having different clinical encounters in the health system, but without needing to know who that person is. In other instances, such as personalized medicine, it is all about the individual patient.”
Patients also need to be part of the process, Dykeman says. That means engaging them in the conversation about what data is collected and how it is used in diagnosis, treatment, and related areas, so that society as a whole can benefit, much as it does through the collection of blood or organ donations, she says.
“They must either consent, or in some cases, there may be a public good in permitting the use or disclosure of their identifying information. However, this is not always well done,” Dykeman says.
“Too often it’s don’t ask, don’t tell. We need to define the narrative and ask, what is the story we need to share. And it’s simple — the truth,” she says.
The success stories of AI and Big Data are legion, she says, pointing to ongoing work at the Hospital for Sick Children, where a chair in Biomedical Informatics and Artificial Intelligence has been created, and where research involving trillions of data points stretching back to 2007 is being analyzed in real time to predict cardiac arrest in infants and prepare for interventions.
“These are stories which should be part of the narrative because ultimately, everyone benefits from advances driven by such research,” Dykeman observes, reflecting on the existing frameworks for health data governance.
“This is not entirely novel. As new as AI is, and recognizing its enormous potential, we already have oversight mechanisms such as research ethics reviews to draw on. Let’s not reinvent the wheel — even as the wheel is in many ways being reinvented by AI,” says Dykeman.