Why explainability matters for accountancy and finance professionals

Explainable AI (XAI) emphasizes the role of the algorithm not just in providing an output, but also in sharing with the user supporting information on how it reached that conclusion. XAI approaches aim to shine a light on the algorithm’s inner workings and/or to reveal what factors influenced its output. The idea is for this information to be available in a human-readable form, rather than being hidden within code.

The purpose of this report is to address explainability from the perspective of accountancy and finance practitioners.

A survey of ACCA members conducted in November 2019 revealed that more than half of respondents were not aware of explainability as a focus of attention within the AI industry (Figure 1.1). Increasing awareness can improve the ability of accountancy and finance professionals to ask the right questions about AI products in the market and those in use within their organizations. All the factors involving the public interest and the wider case for explainability apply, but it is worth additionally reflecting on why explainability matters for accountancy and finance professionals in particular.

Adoption – Engaging with AI

Professional accountants frequently refer to the concept of ‘skepticism’ as a pole star guiding their ability to serve their organizations. Skepticism involves the ability to ask the right questions, to interrogate the responses, to delve deeper into particular areas where needed, and to apply judgement in deciding whether one is satisfied with the information as presented. More than twice as many survey respondents agreed than disagreed that explainability is relevant to exercising skepticism as a professional accountant (Figure 1.2).

XAI can provide a record, evidence or illustration of the basis on which the algorithm has operated. For AI to be auditable, it needs to incorporate principles of explainability. This provides an important foundation for adoption: an opaque system in which the technology cannot be interrogated limits the ability to use model outputs, and treating the model as an unquestioned black box is no longer a realistic position to take. Moreover, establishing the return on investment (ROI) of adoption will be an important consideration for any organization. Better explainability drives these returns because users no longer simply wait to see what the model says, but have a more precise understanding of how the model can be used to drive specific business outcomes.

Impact – Use at scale

The mathematics underlying AI models is theoretically well tested and has been understood for decades, if not longer, and converting it into production-ready models is a core task of data scientists. For accountancy and finance professionals, having an appreciation of the model they are using is essential, but their particular interest is in scaling up its use to enterprise level, because this is the point at which the theory becomes reality. Scaling up presents challenges for deriving value from the model, owing to the volume and variety of additional data, and the accompanying ‘noise’, to which the algorithm is exposed.

Greater explainability could help finance professionals understand where a model might struggle when production is scaled up. A recognized risk with AI algorithms is that of ‘over-fitting’. This means that the model works very well with the training data, i.e. the historical data set chosen to train the algorithm, but then struggles to generalize when applied to wider data sets.

This defeats the purpose of the model. Over-fitting usually happens when the model takes an overly literal view of the historical data: instead of using the data as a guide to learn from, it effectively ‘memorizes’ the data and all its characteristics verbatim.
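
As a minimal sketch of this effect (assuming Python with scikit-learn and a small synthetic data set, neither of which is specified in the report), an unconstrained decision tree can score almost perfectly on the data it was trained on while performing noticeably worse on data it has not seen:

```python
# Minimal over-fitting sketch: an unconstrained decision tree "memorizes"
# the training data and generalizes poorly to unseen data.
# Assumes scikit-learn; the data here is synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Noisy synthetic "transactions" with 20 features, only a few of them informative
X, y = make_classification(n_samples=500, n_features=20, n_informative=4,
                           flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No depth limit: the tree keeps splitting until it fits the training data exactly
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print("training accuracy:", accuracy_score(y_train, model.predict(X_train)))  # close to 1.0
print("test accuracy:    ", accuracy_score(y_test, model.predict(X_test)))    # noticeably lower
```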

Consider a simplified example of a machine learning model for identifying suspicious transactions that need further investigation. During the training phase, the model observed that a high proportion of transactions that turned out to be suspicious occurred outside normal office hours. It therefore attached a high weight to this feature, the timestamp of the transaction, as a predictor for suspicious activity. When the model was applied more widely across all the organization’s transactions, however, the accuracy rate was poor. It identified a large number of ‘out-of-hours’ transactions as suspicious but most of these turned out to be perfectly legitimate, resulting in a considerable waste of time and resources, as the follow-on investigation of these flagged transactions had to be done manually.

A closer look revealed that the training data comprised transactions involving the core full-time staff of the organization, but when rolled out across the organization, the data comprised transactions involving all staff. This included the company’s large pool of shift workers, who often worked outside the regular office hours as part of their agreed contracts.

Using the raw values of the timestamp feature from the historical training data set caused the model to misinterpret the link with suspicious transactions. An obvious and better correlation would have been with the transaction’s timestamp relative to the contractual hours of the individual entering it. As discussed later, this improves model accuracy but increases complexity.
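
To make the distinction concrete, the sketch below (assuming pandas, with hypothetical column names and contract data; the report does not prescribe any implementation) contrasts a raw ‘outside office hours’ flag with a feature defined relative to each individual’s contractual hours:

```python
# Sketch of the feature choice described above (hypothetical column names and data).
# Instead of flagging any transaction outside fixed office hours, compare the
# timestamp with the contractual hours of the person entering the transaction.
import pandas as pd

transactions = pd.DataFrame({
    "entered_by": ["A", "B", "C"],
    "timestamp":  pd.to_datetime(["2024-03-01 22:15",
                                  "2024-03-01 23:40",
                                  "2024-03-01 10:05"]),
})
# Contractual working hours per member of staff (shift worker B works nights)
contract_hours = {"A": (9, 17), "B": (22, 6), "C": (9, 17)}

def outside_contract(row):
    start, end = contract_hours[row["entered_by"]]
    hour = row["timestamp"].hour
    if start < end:                               # a normal daytime shift
        return not (start <= hour < end)
    return not (hour >= start or hour < end)      # a shift spanning midnight

# Raw feature: outside fixed 9-to-5 office hours (what the original model learnt from)
transactions["out_of_office_hours"] = ~transactions["timestamp"].dt.hour.between(9, 16)
# Better feature: outside the individual's own contractual hours
transactions["out_of_contract_hours"] = transactions.apply(outside_contract, axis=1)
print(transactions)
```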

Rather than merely telling the user which transactions appear suspicious, an explainable approach would illuminate the components affecting the prediction most often or to the greatest extent. This could help the user spot when outlier values, such as the timestamp, were overrepresented in the flagged transactions. In other words, an XAI approach could efficiently identify the highest-impact features the algorithm uses to power its deductions, and the level of importance it attaches to each feature when deciding whether to flag a transaction as suspicious.
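
One way such an explanation could be surfaced (a sketch only, using scikit-learn’s permutation importance on an illustrative model; the report does not name any particular XAI technique) is to rank the features by how much the model’s accuracy depends on each of them. A timestamp-style feature dominating this ranking would be exactly the kind of warning sign described above.

```python
# Sketch: ranking features by how much the trained model relies on them.
# Permutation importance is one of several XAI techniques that could be used;
# the model, feature names and data below are purely illustrative.
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.datasets import make_classification

feature_names = ["out_of_office_hours", "amount", "round_sum", "new_supplier"]
X, y = make_classification(n_samples=1000, n_features=4, n_informative=2,
                           random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# How much does accuracy drop when each feature is shuffled?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>20}: {score:.3f}")
```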

While this is a highly simplified illustration, the wider point is that when scaling a model with hundreds of features in a production environment with considerable noise, volume and complexity of inputs, details get lost or misinterpreted, and finding the reasons can feel like looking for a needle in a haystack. Ultimately, this raises the cost of adoption.

XAI helps to test the model’s decisions against finance professionals’ domain knowledge and understanding of the process and business model.

Trust – Ethics and compliance

As AI enters the mainstream through scaling up, having the necessary governance, risk and control mechanisms becomes extremely important. Greater explainability can help to ensure that one is checking for the right things. Human responsibility doesn’t go away, but explainability tools will be the support mechanism to augment the ability of professional accountants to act ethically.

Use at large scale highlights the role of explainability for model effectiveness, while ethics and compliance issues relate to the role of explainability for model trustworthiness. This includes ensuring that the model is fair, and is designed to allow for the rights of users in areas such as data privacy.

In the EU, for example, the General Data Protection Regulation (GDPR) requires various factors to be taken into consideration when using AI-based systems. For instance, is there sufficient transparency so that the user can understand the purpose of data use for making automated decisions? Have users given meaningful consent and is there a way for them to withdraw this if they wish? And is there sufficient explanation of how the algorithm works in general, and potentially how specific decisions for a particular user were arrived at?

Given the large volumes of data involved in machine learning and the ability of AI systems to arrive at decisions automatically using those data, these questions quickly become difficult to resolve.

This is an evolving area and some of the questions related to a legal right to explainability may well be tested in court to establish the answers. While the precise legal boundary lines may be up for discussion, explainability is broadly accepted as a principle for long-term sustainable adoption.

Source: ACCA
