AI useful for detecting breaches, but training is vital
By AdvocateDaily.com Staff
Following a successful pilot project, more health-care organizations may be looking to artificial intelligence (AI) to help minimize data breaches — but institutions must have checks and balances in place and implement robust training when rolling out this technology, says Toronto health and corporate lawyer Shanon Grauer.
While the Personal Health Information Protection Act (PHIPA) sets out the rules for how health custodians and their agents may collect, use and disclose personal health information, and prohibits unauthorized access to patients’ health information, Ontario’s Information and Privacy Commissioner (IPC) Brian Beamish noted in his 2018 annual report, released in 2019, that unauthorized access remains “a pervasive privacy issue” in health-care settings.
For example, there were 506 self-reported privacy breaches in the health-care sector last year: 120 involved ‘snooping,’ 15 were cyberattacks and more than 370 resulted from other types of unauthorized collection, according to the report.
Looking to technology to help detect and deter unauthorized access, the IPC last year participated in a steering committee that resulted in one health organization procuring a smart auditing tool.
The organization conducted a six-month pilot of the tool, which used big data analytics and AI to distinguish explained accesses to a patient’s health information, where an intelligent connection could be made between the patient and the staff who accessed the record, from unexplained accesses, which were flagged for follow-up.
While the IPC detected numerous breaches in the initial stages of the pilot, it says the numbers decreased significantly as the solution was refined and information such as staff roles and schedules was incorporated into the tool. Final results showed that the majority of record accesses were appropriate, while approximately two per cent were unexplained.
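The screening logic described above can be pictured as a simple rule check: an access is “explained” when a plausible connection links the staff member to the patient, and anything else is flagged for human review. The sketch below is purely illustrative, using hypothetical data structures and rules; the actual tool’s models and logic are not public.

```python
# Illustrative sketch only: a toy version of the kind of rule an
# access-auditing tool might apply. All field names, rules and data
# here are hypothetical, not drawn from the actual product.

def is_explained(access, care_teams, schedules):
    """Treat an access as explained if the staff member is on the
    patient's care team, or was scheduled in the patient's unit
    on the day of the access."""
    staff, patient = access["staff_id"], access["patient_id"]
    if staff in care_teams.get(patient, set()):
        return True
    shift_unit = schedules.get((staff, access["date"]))
    return shift_unit is not None and shift_unit == access["unit"]

def flag_unexplained(accesses, care_teams, schedules):
    """Return only the accesses that need manual follow-up."""
    return [a for a in accesses if not is_explained(a, care_teams, schedules)]

# Hypothetical audit-log entries.
accesses = [
    {"staff_id": "n1", "patient_id": "p1", "date": "2019-06-01", "unit": "ICU"},
    {"staff_id": "n2", "patient_id": "p1", "date": "2019-06-01", "unit": "ICU"},
]
care_teams = {"p1": {"n1"}}               # n1 is on p1's care team
schedules = {("n2", "2019-06-01"): "ER"}  # n2 was scheduled in the ER that day

print(flag_unexplained(accesses, care_teams, schedules))
# Only the second access is flagged: n2 has no connection to p1.
```

A real system would add machine-learned scoring on top of rules like these, which is consistent with the pilot’s finding that accuracy improved as staff roles and schedules were fed into the tool.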
Following the results of the pilot, Beamish says he “supports the rollout of this innovative and proactive solution across Ontario’s health sector to help custodians better detect and minimize the risk of unauthorized access and improve patient privacy.”
Grauer, counsel with DDO Health Law and INQ Data Law, headed by Carole Piovesan, which, between its two divisions, focuses on health privacy as well as AI and data management, says the use of AI as a risk-management tool to both control unauthorized access and prevent cybersecurity attacks is a positive move for health-care organizations.
She says detecting cybersecurity threats early via AI would be especially valuable for hospitals, because when malware takes systems offline, as in the recent ransomware attack on three Ontario hospitals, the result can be “a nightmare.”
“I was really happy to see that a hospital had such a pilot project because that means that 1) the technology exists and 2) the review seems to say it was quite successful,” Grauer tells AdvocateDaily.com.
At the same time, she says the concept of AI in health care remains a challenge, as some people within an organization may not be sure what it involves, and as such, education is essential when rolling out this technology.
“AI is a buzzword right now, but I think, as with cybersecurity attacks and privacy issues, there has to be really robust training to have the institution and its employees understand what this tool is, what its limitations are and what its benefits are and not to just do a one-time training but to have regular training because, in part, I think people learn by repetition,” Grauer says.
“It’s also a question of keeping up to date in technological advances because what might work in 2019 will undoubtedly be out of date in 2020, and so it’s that kind of dynamic interface that needs to have a really good AI program that will also supply updates too,” she adds.
Another challenge to widespread use of AI to detect security breaches in hospital records, says Grauer, is the cost of implementing the technology.
“You’re going to be dealing with largely third-party service providers, and so you’ve got the added costs of using it. Organizations will have to do some sort of cost-benefit analysis to see if the cost of breaches on privacy is offset by the benefits of AI and its expense.”
One solution, she says, may be to start small by deploying the AI technology to minimize the organization’s larger risks first.
Also, Grauer says, hospital buying groups may wish to work together to acquire and share a licence among their member hospitals, rather than each hospital entering into contracts one-on-one with AI suppliers. Hospital foundations may also be part of the funding solution, she adds.
As these technologies will likely exceed the $100,000 procurement threshold, requiring a Request for Proposals and compliance with mandatory procurement rules, someone within the hospital will need to understand what suppliers offer and look for the technologies best suited to the health sector, Grauer says.
“I think part of this puzzle is having people within each institution knowledgeable of exactly what the pros and cons are of different offerings in the AI technology market,” she says. “So it’s not just all on the supplier side — you have to have personnel in the institution who speak the right language.”
Although there is a tendency for silos to develop when implementing technology such as AI — where IT people only communicate with other IT people during the rollout — Grauer says it is essential for lawyers to be part of the process.
“You’ve got to include us in the discussion — not because we’re trying to interfere, but our job is risk management from the legal perspective, and privacy breaches are a legal issue; there’s mandatory privacy reporting in Ontario for breaches of PHIPA.
“It has to be almost like a shared jurisdiction between the legal team and the technology team so that each gets comfortable understanding the other’s perspective,” she says.
Ultimately, says Grauer, as institutions become more familiar with AI and more comfortable deploying it, its use will likely become more widespread.
“But like anything, I think it’s probably a good idea to roll it out slowly, have some checks and balances and evaluate how it works for specific uses,” she adds.