29th August 2025
Posted in Articles by Josh Barlow
A recent First-tier Tribunal decision has shone a light on the growing risks of taxpayers turning to artificial intelligence for self-help. In HMRC v Gunnarson, an unrepresented taxpayer sought to appeal HMRC’s decision to claw back a £13,000 Self-Employment Income Support Scheme (SEISS) grant. In preparing their case, the taxpayer leaned heavily on arguments drafted by an AI chatbot, which appeared to provide useful precedent in the form of supporting case law.
Unfortunately for the taxpayer, the tribunal quickly discovered that the sources cited simply did not exist. The chatbot had “hallucinated” – generating plausible-sounding but entirely fictitious cases to bolster the taxpayer’s position. The judge dismissed the appeal, criticising the individual for trying to take shortcuts. Not only did the taxpayer lose their £13,000 claim, but they also faced the embarrassment of having presented an argument based on fiction.
Following a familiar pattern
This case is not an isolated one. Similar episodes in recent tax cases have seen litigants, and even lawyers, present AI-generated submissions only to discover that the references were invented. The technology is impressive in its fluency, but its tendency to produce convincing-sounding yet baseless content has now been repeatedly exposed.
In the tax world, this is particularly dangerous. AI models have been known to cite outdated tax rates, fabricate passages from HMRC manuals, or mix snippets of commentary into something that looks like a direct quote when in reality it is nothing of the sort. For an untrained taxpayer, it can be virtually impossible to tell the difference between accurate analysis and confident-sounding nonsense.
Forbes Dawson view
This case underlines why AI should not be relied upon as a “DIY tax adviser”. In our own testing, we find that chatbots give inaccurate or misleading tax answers in at least half of cases. Sometimes the numbers are wrong, sometimes the legislation is out of date, and sometimes the model simply makes things up. All too often, the output is presented in a way that looks authoritative — which makes it even more dangerous when it is wrong.
The real risk is that clients may be lulled into a false sense of security, relying on technology that can generate quick answers but cannot provide the nuanced judgement required in practice. Tax advice is rarely black and white, and it requires both technical knowledge and professional scepticism. AI, for now, has neither.
The lesson from this tribunal is clear: while AI may have a role in research, it cannot replace human expertise. At best, it should be treated as a starting point that always needs rigorous review by a qualified adviser. For taxpayers tempted to cut corners, the message could not be stronger: fake cases can lead to very real losses.