This post is a reminder to anyone using artificial intelligence large language models to conduct legal research that these systems are prone to "hallucination": they make things up.
There was a well-publicised case in the States a few months ago in which a lawyer had used an AI model to help prepare submissions to the court. The model cited case law, but the cases did not exist. They were fabrications. Now this has happened in an English tribunal. Read paragraphs 1 to 24 of Harber v Commissioners of HMRC; it won't take long.
If you must use an AI model, check everything that it produces for you.