Challenges and Ethical Considerations of AI

From virtual assistants answering our questions to AI-powered drones and vehicles taking over tasks that humans perform, it is inevitable that AI will profoundly transform society. In the previous chapters, we discussed many applications of AI and machine learning. AI innovation across the accounting profession (accounting, audit, tax, and business advisory) will continue to accelerate, and firms must be prepared to keep up.

At the same time, this transformation comes with enormous challenges. Accounting professionals need to develop controls that mitigate risks such as algorithmic bias, security and privacy exposures, and poorly managed change. They must also rise to meet other challenges, such as the lack of standards and the relative immaturity and lack of transparency of specific technologies.

In this chapter, we focus on the challenges AI poses for the accounting profession, specifically the currently debated risks of algorithmic bias, privacy, security, and change management. We then summarize the current state of regulation governing the use of AI technology and conclude with an overview of the AI considerations most relevant to the profession and the practice of accounting in general.

Algorithmic Bias

Definition of Algorithmic Bias

Algorithmic bias is among the most notable challenges facing AI and ML systems. Several definitions of algorithmic bias exist in the literature. Among them, we like the straightforward definition proposed by Gartner in a recent research note, where the IT research firm stated that algorithmic bias occurs when an algorithm reflects the implicit bias of the individuals who wrote it or of the data that trained it (Jones, 2018).

To further our understanding of the algorithmic bias phenomenon, Professor Joni Jackson of Chicago State University (2018) analyzed it in more detail. He pointed out that, although algorithms are assumed to be neutral, our biases are often deeply embedded in them. For him, an important question is whether the models used by the algorithms predict in a way that perpetuates existing biases. Jackson urges caution in interpreting the outcomes and decisions that result from these models, and he suggests that, as the models become more sophisticated and learn, their predictive accuracy should continue to be tested. He also suggests bringing diverse voices into the design of algorithms (as the people who build them are often not diverse) and examining algorithmic decisions more closely to uncover potential adverse outcomes for specific populations due to hidden biases.
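
Jackson's suggestion to keep testing predictive accuracy can be made concrete. The minimal Python sketch below measures a model's accuracy separately for each population group, so a widening gap between groups can be spotted as the model learns; the data layout and function names are hypothetical assumptions for illustration, not drawn from any specific system.

    # Minimal sketch: re-testing a model's predictive accuracy by subgroup.
    # The (features, group, actual) record layout is a hypothetical
    # assumption for illustration.
    from collections import defaultdict

    def accuracy_by_group(records, predict):
        """Return prediction accuracy computed separately per group.

        records: iterable of (features, group, actual) tuples
        predict: callable mapping features -> predicted outcome
        """
        correct, total = defaultdict(int), defaultdict(int)
        for features, group, actual in records:
            total[group] += 1
            if predict(features) == actual:
                correct[group] += 1
        return {g: correct[g] / total[g] for g in total}

A persistent accuracy gap between groups does not prove bias by itself, but it flags where the training data and decision rules deserve the closer examination Jackson calls for.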

Numerous examples of algorithmic bias have been observed, or identified as potential risks, by researchers in recent years. Those found in the literature include:

  • Ad or search engine algorithms that discriminate against specific categories of the population by showing them unfairly different content based on biased patterns or user profile characteristics;
  • Recruitment systems that discriminate against specific categories of applicants because of biased training data (such as data sets that lack diversity);
  • Facial recognition systems that, relying on data sets composed primarily of white faces, fail to recognize people with non-white faces;1
  • Loan underwriting systems that discriminate against borrowers because of biased training data, resulting in lower acceptance rates or higher interest rates for specific categories of borrowers (see the sketch after this list);
  • Law enforcement systems that make unfair crime risk predictions about African Americans based on biased historical data.
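
To make the loan underwriting example concrete, the sketch below applies the "four-fifths" rule of thumb from US disparate-impact analysis to approval rates by group. The choice of that rule, the 0.8 threshold, and the sample data are illustrative assumptions; the examples above do not prescribe a particular test.

    # Illustrative sketch of a disparate-impact check on loan approvals.
    # The 0.8 threshold follows the "four-fifths" rule of thumb; the
    # sample data are invented for illustration.
    def approval_rates(decisions):
        """decisions: dict mapping group -> list of booleans (True = approved)."""
        return {g: sum(d) / len(d) for g, d in decisions.items()}

    def disparate_impact(decisions, threshold=0.8):
        """Return groups whose approval rate falls below `threshold`
        times the highest group's rate, with their rate ratios."""
        rates = approval_rates(decisions)
        best = max(rates.values())
        return {g: r / best for g, r in rates.items() if r / best < threshold}

    # Invented example: group_b's 55% approval rate is 0.6875 of
    # group_a's 80%, below the 0.8 ratio, so it is flagged for review.
    decisions = {
        "group_a": [True] * 80 + [False] * 20,
        "group_b": [True] * 55 + [False] * 45,
    }
    print(disparate_impact(decisions))  # {'group_b': 0.6875}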

In the context of accounting, algorithmic bias might occur when accountants or auditors are using:

  • Biased input data: pre-existing bias in the historical data selected for analysis, for example the underrepresentation of certain types of transactions because a recent shift in the business is not reflected in the chosen historical data (see the sketch after this list); or
  • A system trained on biased data: for example, when the engineers who trained the system introduced a bias, intentionally or unintentionally, by failing to train it for a particular customer segment or profile type. In such a situation, the algorithm would return an inaccurate computation or answer because it refers to a model that is not representative of the population; or
  • A system that is biased in its design: for example, a system that embeds biased rules or logic due to errors or omissions in the coding of the algorithm itself. Software errors (bugs) are frequent in the software development life cycle, which is why extensive quality assurance (QA) and user testing are required regardless of the type of software. Unfortunately, critical software errors are sometimes detected only after the system is rolled out, which is why it is vital for end users to have internal controls in place to mitigate this type of risk.
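
As a concrete illustration of the first risk, the sketch below compares the mix of transaction types in the historical data chosen for analysis with the mix in a recent period; a large divergence suggests the historical sample no longer represents the business. The category labels and the 0.05 tolerance are illustrative assumptions.

    # Minimal sketch: detecting a shift in the mix of transaction types
    # between the historical data chosen for analysis and a recent period.
    # Category labels and the 0.05 tolerance are illustrative assumptions.
    from collections import Counter

    def type_shares(transactions):
        """transactions: list of transaction-type labels."""
        counts = Counter(transactions)
        n = len(transactions)
        return {t: c / n for t, c in counts.items()}

    def flag_shift(historical, recent, tolerance=0.05):
        """Return types whose share of the data changed by more than tolerance."""
        hist, curr = type_shares(historical), type_shares(recent)
        return {
            t: curr.get(t, 0.0) - hist.get(t, 0.0)
            for t in set(hist) | set(curr)
            if abs(curr.get(t, 0.0) - hist.get(t, 0.0)) > tolerance
        }

A transaction type that appears in the recent period but is absent from the historical sample would surface here with a large positive share difference, signaling that the data should be refreshed before relying on the system's output.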
 