AI being used to add fake details in immigration, asylum applications, federal officials say
Artificial intelligence is being used to bolster immigration and asylum cases in Canada by generating fake narratives, including references to fabricated court decisions.
Both the federal department, Immigration, Refugees and Citizenship Canada (IRCC), and the Immigration and Refugee Board (IRB), an independent tribunal that rules on asylum applications, say they have detected the use of AI in applications containing fake or inaccurate information.
The IRB said that the use of AI in applications to stay in Canada as a refugee is creating a fresh challenge for its employees.
“Recently, we have observed that memoranda of appeal are becoming lengthier, yet this increase in volume does not necessarily translate to stronger arguments. In fact, occasionally these documents include references to case law that do not exist or cite legal precedents for propositions they do not actually support,” the IRB said in a statement. “This adds unnecessary complexity and time to our work.”
If misrepresentation, use of faked documents or other types of fraud are confirmed, foreign nationals can face a five-year ban from entering Canada.
The Canada Border Services Agency, IRCC and the RCMP investigate immigration fraud.
The IRB said if it “identifies potential integrity concerns through its regular review of files conducted by employees” it would alert partner organizations.
The immigration department said it has detected people using AI to fake their paperwork but would not disclose examples to avoid helping fraudsters find ways to evade being caught.
“We have observed instances where AI has been used to help generate fraudulent applications,” said IRCC spokesperson Isabelle Dubois. “As we work to detect and prevent fraud, publicly sharing these specific examples could inadvertently help fraudulent claimants identify alternative methods to circumvent detection.”
IRCC has recently faced calls to bolster its investigation of fraud on immigration files. Last month, it was sharply criticized by the Auditor-General for failing to investigate more than 149,000 international students flagged as not complying with the terms of their study permits.
The report by Karen Hogan into the international student program run by IRCC concluded there were “critical weaknesses” in the department’s anti-fraud controls.
Toronto immigration lawyer Max Berger said he fears “that in asylum cases, AI will become the new ghost [immigration] consultant.”
“Currently ghost consultants who make up stories for some claimants are the scourge of the refugee determination process. Instead of paying ghost consultants, the minority of refugee claimants trying to game the system can now ask AI to make up a history of persecution for them at no cost,” he said in an e-mail.
Thousands of refugee claims are decided by the IRB without an oral hearing and based only on paperwork. But Mr. Berger said refugee hearings allow the IRB to question claimants, including about fake narratives.
“The antidote is in holding an oral refugee hearing where credibility is tested by the IRB board member,” he added.
In 2024, the Federal Court issued a policy directive instructing lawyers and litigants to disclose the use of AI in submissions to the Court, including in immigration cases.
IRCC’s AI strategy, published earlier this year, said the department is experimenting with a number of AI tools, many of which focus on fraud prevention. It said artificial intelligence is helping the department detect false narratives. Machine-learning tools are also being used to detect anomalies in applications and irregular travel patterns, which could signal that a refugee or immigrant came from a country or region other than the one claimed.
AI systems have been trained to detect fraudulent manipulation of documents, such as academic records and bank statements, as well as artificially “morphed” photographs that could be used in an attempt to commit identity fraud or to mislead an immigration officer about a person’s age.
Both IRCC and the IRB say they are also using AI and other technology tools to boost efficiency, but not for decision-making, for example on whether someone should be allowed to remain in Canada.
The IRB in its departmental plan for 2026-27 said it plans to introduce tools to “support faster preparation of files.”
It said it “is examining the use of AI to increase productivity and optimize operations across the Board but will not adopt the use of the technology for adjudicative decision‑making.”
The tribunal already uses speech-to-text tools to transcribe refugee hearings, with the transcripts then checked for accuracy. Its legal team also uses AI to prepare draft summaries of Federal Court decisions, which are reviewed by either paralegals or lawyers before being finalized, the IRB said in a statement.
The 2026-27 departmental plan said the tribunal plans to “advance decision-making tools to accelerate file preparation and support decision‑drafting.”
It said such tools will “help decision‑makers generate their reasons in a format that is concise, focused and accessible. These tools do not replace decision‑makers or limit decisions, rather, they aim to streamline decision‑writing.”
The IRB has already established mandatory AI training for its employees. Its departmental plan says it aims to use AI to reallocate effort from repetitive tasks and to improve overall efficiency.
“This includes introducing tools to enhance triage capacity, to optimize scheduling and to support faster preparation of files.”