The Challenges of Accountability in AI for Immigration: The IRCC and Canadian AI Governance
The use of AI at Immigration, Refugees and Citizenship Canada (IRCC) continues to be a topic of debate, particularly regarding ethical concerns and the potential for harm. While AI has been used to streamline application processing, several of its uses by the IRCC have raised concerns.
IRCC applicants are characterized chiefly by location, age, and gender, and each attribute raises its own set of issues. Even when attempting to use debiasing tools, developers risk “removing characteristics that could be important for decisions like refugee determinations.” This is especially concerning when AI is deployed in departments or agencies subject to high public scrutiny, such as the IRCC. Another troubling precedent is the use of facial recognition technology (FRT) findings as evidence in immigration and refugee hearings, where the burden of proof is critical.
This case study examines the governance dilemma of whether to continue with systems that “get the job done” but are opaque and pose a risk to institutional integrity and obligations, or to integrate more transparent processes and human rights safeguards that could make implementing some AI tools more difficult.
Case Study #12
Download Includes: Case Study, Teaching Note
ISSN 2819-0475 • doi:10.51644/BCS012