An immigration barrister was found by a judge to have used AI to prepare his work for a tribunal hearing after citing cases that were “entirely fictitious” or “wholly irrelevant”.
Chowdhury Rahman was found to have used ChatGPT-like software to conduct his legal research, a tribunal heard. Rahman had not only used AI to prepare his work but had also “failed thereafter to undertake any proper checks on the accuracy”.
The upper tribunal judge Mark Blundell said Rahman had even tried to hide the fact he had used AI and “wasted” the tribunal’s time. Blundell said he was considering reporting Rahman to the Bar Standards Board. The Guardian has contacted Rahman’s firm for comment.
The matter came to light in the case of two Honduran sisters who claimed asylum on the basis that they were being targeted by a criminal gang in their home country. Rahman represented the sisters, aged 29 and 35. The case reached the upper tribunal.
Blundell rejected Rahman’s arguments, adding that “nothing said by Mr Rahman orally or in writing establishes an error of law on the part of the judge and the appeal must be dismissed”.
Then, in a rare ruling, Blundell went on to say in a postscript that there were “significant problems” within the grounds of appeal put before him.
He said Rahman had cited 12 authorities in the paperwork, but when he came to read the grounds he noticed that “some of those authorities did not exist and that others did not support the propositions of law for which they were cited in the grounds”.
In his judgment, he listed 10 of these cases and set out “what was said by Mr Rahman about those actual or fictitious cases”.
Blundell said: “Mr Rahman appeared to know nothing about any of the authorities he had cited in the grounds of appeal he had supposedly settled in July this year. He had apparently not intended to take me to any of those decisions in his submissions.
“Some of the decisions did not exist. Not one decision supported the proposition of law set out in the grounds.”
Blundell said the submissions made by Rahman – who said he had used “various websites” to conduct his research – were therefore misleading.
Blundell said: “The most obvious explanation is … that the grounds of appeal were drafted in whole or in part by generative artificial intelligence such as ChatGPT.
“I am bound to observe that one of the cases cited in Mr Rahman’s grounds … has recently been wrongly deployed by ChatGPT in support of similar arguments.”
Rahman told the judge that the inaccuracies in the grounds were “as a result of his drafting style” and he accepted there might have been some “confusion and vagueness” in his submissions.
Blundell said: “The problems which I have detailed above are not matters of drafting style. The authorities which were cited in the grounds either did not exist or did not support the grounds which were advanced.”
He added: “It is overwhelmingly likely, in my judgment, that Mr Rahman used generative artificial intelligence to formulate the grounds of appeal in this case, and that he attempted to hide that fact from me during the hearing.
“Even if Mr Rahman thought, for whatever reason, that these cases did somehow support the arguments he wished to make, he cannot explain the entirely fictitious citations.
“In my judgment, the only realistic possibility is that Mr Rahman relied significantly on Gen AI to formulate the grounds and sought to disguise that fact when the difficulties were explored with him at the hearing.”
The judge’s ruling was made in September and published on Tuesday.