We are sharing this update from ACCA, our professional body, for the benefit of clients and contacts. The content is © ACCA.

Tribunal orders HMRC to reveal use of AI in R&D tax credit decisions

A landmark ruling has compelled the UK’s tax authority to disclose whether artificial intelligence (AI) played a role in rejecting research and development tax relief claims, raising broader questions about transparency and automation in government.

A First-tier Tribunal has ordered HMRC to confirm whether it has used generative AI tools – including ChatGPT, Gemini, and other large language models (LLMs) – in the process of assessing applications for research and development (R&D) tax credits. The decision is being viewed as a significant victory for advocates of transparency and scrutiny in the public sector.

The case was brought by Thomas Elsbury, co-founder of the R&D tax consultancy and software platform Novel, who filed a Freedom of Information (FOI) request in December 2023 seeking information about HMRC’s use of AI by its R&D tax compliance team.

Elsbury raised the alarm after identifying recurring patterns in rejection letters issued by HMRC, suggesting potential AI involvement. He requested information on what AI systems were being used, what selection criteria were followed and what safeguards were in place to protect taxpayer data.

He also expressed concern about the possible use of publicly available AI models, such as ChatGPT, in processing sensitive tax information. Some R&D tax claims relate to national defence or advanced technologies developed for the Ministry of Defence, and the improper handling of such data via external AI platforms could pose a risk to national security.

Initially, HMRC refused to disclose the information, citing section 31 of the Freedom of Information Act 2000, arguing that disclosure could prejudice the assessment or collection of tax. In November 2024, this position was upheld by the Information Commissioner’s Office (ICO), which agreed that confirming or denying the use of AI could provide insights that might benefit fraudulent claimants.

However, on 2 August 2025, the First-tier Tribunal (General Regulatory Chamber) overturned this decision. In her ruling, Judge Alexandra Marks said Elsbury’s arguments were ‘compelling’ and emphasised that transparency was in the public interest – particularly given the growing use of AI in government.

The tribunal found that the ICO had given undue weight to ‘unsubstantiated and unevidenced’ risks of fraud and had failed to adequately consider the public benefits of disclosure.

The tribunal criticised HMRC for initially confirming it held the requested information, only to later reverse course and adopt a ‘neither confirm nor deny’ position. This shift was described as ‘untenable’ and ‘like trying to force the genie back in its bottle’.

The tribunal also found that a lack of transparency could further erode trust in the tax system, especially if AI is being used without clear governance or public accountability.

The ICO has confirmed it will not appeal the ruling. HMRC must comply with the order by 18 September 2025, though it is reportedly still reviewing its position and ‘considering next steps’.

This ruling comes as government departments increasingly explore the use of AI to streamline services and boost productivity. The UK’s public sector is trialling AI in areas such as passport applications, NHS diagnostics, and education.

HMRC is no exception. Its Transformation Roadmap includes plans to embed generative AI into core operations. However, professional bodies have raised concerns over the lack of clarity in its use and called for a formal HMRC AI Charter. Recent updates to HMRC’s privacy notice now acknowledge the use of AI and machine learning ‘where the law allows’.

Yet the question of what the law actually allows remains open. UK tax law was written long before the advent of AI. A 2020 update to tax administration legislation confirmed HMRC may use ‘any means’ to carry out its duties – including computers – but how this applies to complex AI systems remains unsettled.

This case has brought attention to the growing intersection of AI, public administration and taxpayer rights. Around 70% of global tax authorities already use some form of AI, according to the OECD (Organisation for Economic Co-operation and Development). As the technology becomes more deeply embedded, so too does the demand for accountability.

For tax authorities, AI offers a way to manage rising data volumes and combat fraud. But for taxpayers, it introduces concerns about fairness, accuracy and human oversight. In areas like R&D tax credits, where both the stakes and complexity are high, these issues become even more critical.

This ruling signals that AI in government cannot operate in the shadows. Whether assisting human officers or automating decisions, its presence must be acknowledged, and its use scrutinised. For businesses navigating the evolving tax landscape – and for the wider public – this could mark a turning point in the push for transparency.

Whitefield Tax - Isle of Wight Accountants - IR35 specialists