Responsible AI in tax administration: Who or what should be responsible?
Responsible AI
Innovation in the field of artificial intelligence and its use in government and administrative services is advancing quickly.1 At the same time, there has been growing attention to the risks of AI: its impact on society, end users and jobs, and its reverse influence on the processes into which it is integrated.2 Recent contributions in the field of AI and tax administration devote attention to examples such as Robodebt and the Toeslagenaffaire, and the potential negative impacts on taxpayers’ rights.3 This note does not intend to explain the source of the problem in these isolated incidents. Instead, it seeks to highlight the differences between the impact-based, inherent and process risks that can arise from the inclusion of AI tools in the public tax service value chain.
An understanding of these different types of risk is central to the design of an organizational AI governance framework. They tend to be overlooked in descriptions of responsible AI in the public services domain, where the primary focus is the impact on the public end user. Conceptualizing the risks that arise from the inclusion of AI tools can be used to organize responsibility amongst human actors and thereby enhance public trust in governmental use of AI tools. For the purposes of this note, the term ‘users’ refers to tax officials and government employees who develop and utilize AI in their work, rather than taxpayers who are on the receiving end of the final impacts of a service process.
Contributions that explore the risks of AI in tax administration refer largely to two types of risk. The first is the risk of negative impact on the taxpayer and their rights. The categorization of AI tools as coercive (e.g. risk assessment of taxpayers for audit procedures) or non-coercive (e.g. chatbots) is based on the potential for negative impact on taxpayers’ rights.4 The EU AI Act’s classification by levels of AI risk and the US Blueprint for an AI Bill of Rights are likewise based on the impacts on the public end user and society.5 Impact-based risks culminate at the end of a process value chain and answer the question: ‘What are the potential negative consequences for the public end user?’ The second type is the inherent risk of AI tools. These risks vary across types of AI tool and include biases embedded in input data, accuracy, robustness, explainability and security.6 Inherent risks are associated with the question: ‘How reliable is the AI tool at accomplishing its main intended function?’7
Compared with impact-based and inherent AI risks, there is little research available on the incremental process risks that arise from the use of AI tools in tax administrations.8 Traceability, communication and human oversight9 are examples of incremental risks posed to a tax administration process when AI tools are included. An impact-based definition of risk moves quickly over the lengthy tax process value chain and looks straight to the changes in the public end user’s position. The service processes of taxpayer risk assessment, tax audits and administrative decision-making were already high-impact processes before AI tools came into the picture. It is therefore important to ask: ‘What are the incremental risks that AI tools bring to the process?’ The distinction between these three types of risk (impact-based risks, inherent AI risks and incremental process risks) is essential to assigning responsibilities in the value chain of the process and to designing governance frameworks for the use of AI.
Who is bearing the weight of the recent national, regional and organization-level policies to govern the use of AI?10 In many cases the use of an AI tool in a tax administration would fall under both national regulation and organizational policy. Complying with one or more of these governance requirements is neither simple nor a low-effort matter.11 The first key consideration is: who can mitigate or control the risk being examined? The responsibility for impact-based and incremental process risks lies predominantly with the participating tax official, whereas the responsibility for inherent risks lies with the data scientists, designers, developers and employee users of the AI tool.
Another key consideration is the scope of practical compliance. A variety of AI tools and techniques are part of different tax processes. Given the limited workforce in government service, the responsible tax official with domain expertise will now have to evaluate the relevance and accuracy of the AI output, be knowledgeable about the algorithmic design, understand its inherent limitations and document compliance with governance standards. Is this too high an expectation of a select few actors in the tax administration if incremental process risks are to be controlled? This question should challenge the design of national and organizational approaches to AI governance. The practicality and feasibility of AI governance approaches in tax administration should be considered alongside the risks and benefits of AI tools. Given the simultaneous rise in AI innovation and governance standards, this will soon become a large-scale challenge that is not limited to any one tax administration.12
One medium-term approach to addressing these challenges is to start out with AI tools in processes that are mainly bearers of good news or no news for public end users. Tax administrations should be encouraged to seek out low-risk, higher-benefit uses of AI. Examples of good news delivery to taxpayers are reassuring verification processes and customer service assistance: activities that are less likely to result in immediate automated liabilities or costs to the public end user.13 Examples of ‘no news’ processes are analytical functions, the detection of patterns and relationships amongst economic actors in a dataset, and information collation used to advance knowledge of economic representations without direct impacts on the taxpayer.14
International collaboration can help tax administrations learn faster by bringing together tax officials’ practical understanding of the different types of risk (impact-based, inherent and incremental process risks) to inform the design of AI governance frameworks. The OECD Forum on Tax Administration’s Tax Administration 3.0 (TA 3.0) project fosters such international collaboration to tackle the common challenges of responsible AI in tax administration processes.15 The governance responsibilities that come with the use of AI need to account for the relevant actors across the development and deployment of AI tools, within and beyond the tax administration.16 To gauge and address incremental process risks, it is important to share best practice across countries and with other users who engage with similar tools for similar process goals. Clarifying the differences between types of risk and responsibility within the tax process value chain will help the design of AI governance frameworks incorporate the practicality of compliance obligations.
By Benita Mathew
Benita Mathew is a lecturer in AI and Fintech at the Surrey Institute for People Centred AI and the Department of Finance and Accounting at the University of Surrey. Her research interests are cross-jurisdictional tax co-operation, trustworthy AI frameworks in tax administration and the use of digital tools to inform tax policy decision making. Benita’s PhD at the Surrey School of Law rethinks the role of the digitalising economy in international business tax reform. She holds an MSc in Accounting and Taxation from the University of Exeter and is an ACCA and ACGP Affiliate from PwC Academy.
Footnotes:
Forum on Tax Administration, OECD, ‘Inventory of Tax Technology Initiatives’
Faúndez-Ugalde, Mellado-Silva and Aldunate-Lizana (2020) ‘Use of artificial intelligence by tax administrations: An analysis regarding taxpayers’ rights in Latin American countries’, Computer Law & Security Review, 38
Hadwick (2023) ‘Error 404 – Match not found: tax enforcement and law enforcement in the EU Artificial Intelligence Act’, Eucrim: the European Criminal Law Associations' forum; Wardell-Johnson (2023) ‘Future of AI in the Tax World Prompts Both Excitement and Fear’, Bloomberg Tax
Hadwick (2023) ‘Error 404 – Match not found: tax enforcement and law enforcement in the EU Artificial Intelligence Act’, Eucrim: the European Criminal Law Associations' forum
European Parliament (2023) ‘EU AI Act: first regulation on artificial intelligence’; Brookings Institution (2022) ‘Unpacking the White House blueprint for an AI Bill of Rights’
Hamon, R., Junklewitz, H. and Sanchez Martin, J. (2020) ‘Robustness and Explainability of Artificial Intelligence’, doi:10.2760/57493
European Commission Report (2020) ‘Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment’
OECD (2020) ‘Tax Administration 3.0: The Digital Transformation of Tax Administration’; Berryhill, J., et al. (2019), ‘Hello, World: Artificial intelligence and its use in the public sector’, OECD Working Papers on Public Governance, No. 36
European Commission Report (2020) ‘Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment’; OECD AI Policy Observatory, ‘Robustness, security and safety (Principle 1.4)’
Taeihagh, A. (2021) ‘Governance of artificial intelligence’, Policy and Society, 40(2)
Dafoe, A. (2018) ‘AI Governance: A Research Agenda’
IBM, ‘AI governance services’
Ebbers, W.E., Pieterson, W.J. and Noordman, H.N. (2008) ‘Electronic government: Rethinking channel management strategies’, Government Information Quarterly, 25(2), doi:10.1016/j.giq.2006.11.003
Heaven, W. (2020) ‘An AI can simulate an economy millions of times to create fairer tax policy’, MIT Technology Review; Zheng, S. et al. (2022) ‘The AI Economist: Taxation policy design via two-level deep multiagent reinforcement learning’, Science Advances, 8(18), doi:10.1126/sciadv.abk2607
OECD FTA, Tax Administration 3.0 projects
OECD (2020), ‘Tax Administration 3.0: The Digital Transformation of Tax Administration’