Jeffery: Immigration Department's use of AI to make decisions 'unfair'
By Kirsten McMahon, Associate Editor
The federal government’s interest in using artificial intelligence (AI) for automated decision-making on immigration applications raises issues of fairness and unlawful fettering of discretion, Toronto immigration lawyer Matthew Jeffery tells AdvocateDaily.com.
Earlier this year, federal officials launched two pilot projects where automated systems sort through temporary resident visa applications. A spokesperson for Immigration Minister Ahmed Hussen told Global News the analytics program helps officers triage online visa applications to “process routine cases more efficiently.”
However, research conducted by the University of Toronto’s Citizen Lab outlines the negative impacts of the government’s use of AI and predictive analytics to automate certain activities involving immigrant and visitor applications.
Jeffery, who operates the immigration-focused Matthew Jeffery Barrister & Solicitor office in Toronto, says there is an inherent fairness issue in having a computer program determine the fate of immigration applications.
“In the context of any discretionary decision, an algorithm that decides an outcome based on preset factors would engage in an unlawful fettering of discretion,” he says.
While an Immigration Department spokesperson told Global the technology is being used exclusively as a “sorting mechanism” to help officers deal with a large volume of visitor visa applications, the department is developing other machine-learning projects for humanitarian and compassionate applications as well as pre-removal risk assessments.
“It appears that the Immigration Department is using AI to make what are really discretionary decisions and I think that's where the line is crossed,” Jeffery says. “There are concepts in constitutional and administrative law which require a decision maker to be open-minded in forming a discretionary decision.”
The duty of fairness requires that decision makers not fetter their discretion and remain open to persuasion, weighing factors that may not normally be taken into account as well as the context of the decision they are making, he says.
“If a computer is programmed to consider a certain set of criteria in making a determination about whether a marriage is genuine or a person is at risk, then it’s not open to persuasion,” Jeffery says. “That's why we need human decision makers because they're open to hearing new aspects for consideration and they are capable of empathizing.”
He says he understands the temptation to use AI for routine matters such as visitor visa applications, of which the Immigration Department receives millions in any given year.
“Sure, you would save so much time and effort if a computer was making those determinations, but unfortunately, it’s not fair because all these people who want to come to Canada have their own unique situations and sets of circumstances,” Jeffery says. “It would really be doing a disservice to apply an algorithm to decide who gets to visit.”
If the Immigration Department extends its pilot projects to having AI assess risk and refugee determinations, Jeffery says, Charter challenges will likely abound.
“If a computer is deciding if somebody is at risk based on a certain set of factors, that would obviously invoke s. 7 Charter considerations,” Jeffery says. “Is a computer competent enough to decide whether somebody is at risk of persecution in their home country? I don't think so.”
The government tells Global that officials are only interested in developing or acquiring a tool to help manage litigation and develop legal advice in immigration law — with the intent to support decision-makers in their work rather than replace them.
Meanwhile, the authors of the Citizen Lab report issued a list of seven recommendations calling for greater transparency, public reporting and oversight of the government’s use of AI and predictive analytics to automate certain activities involving immigrant and visitor applications.
“We know that the government is experimenting with the use of these technologies … but it’s clear that without appropriate safeguards and oversight mechanisms, using AI in immigration and refugee determinations is very risky because the impact on people’s lives is quite real,” one of the authors of the report told Global.