
Regulations Related to AI

Several federal, state, and international data privacy and security laws currently regulate information technologies and related services, and these laws apply to AI systems today. Examples include the Gramm-Leach-Bliley Act (GLBA), industry-specific laws such as the Health Insurance Portability and Accountability Act (HIPAA), and international laws such as the European Union’s General Data Protection Regulation (GDPR). In the US, state-level privacy and security laws may require individuals and organizations to have reasonable safeguards in place to protect their customers’ personal information, to comply with specific breach notification requirements in case of a data breach, or to take specific remediation steps based on state law. In response to more stringent international laws (such as the GDPR) and growing public concern, some states are advancing their regulations to protect consumer privacy and security. For example, the California Consumer Privacy Act (“CCPA”) and the New York Stop Hacks and Improve Electronic Data Security Act (“SHIELD Act”) both took effect in 2020 and apply outside their respective states (Lazzarotti, 2019).

Little regulation has been written to address the risks posed by the use of AI in the new world of hyper-automation, big data analytics, and data-driven decision systems. However, more regulation is expected in the coming years, in light of recent scandals (e.g., Facebook with Cambridge Analytica) and the public’s growing concerns related to the mass data collection and use practices of tech giants (such as Facebook, Google, and Amazon).

Among current AI-related regulation efforts, on April 10, 2019, Senators Cory Booker (D-NJ) and Ron Wyden (D-OR) proposed the Algorithmic Accountability Act of 2019, with Rep. Yvette Clarke (D-NY) sponsoring an equivalent bill in the House. The Act specifically addresses concerns about algorithmic bias in AI systems and is the first federal legislative effort to regulate AI systems across industries in the US (Tait et al., 2019). The Act would lead to new regulations requiring individuals and organizations who use, store, or share consumer personal information to conduct periodic impact assessments on high-risk systems and their training data, and to commit to addressing any identified biases or security issues in a timely manner.

Another proposed bill, introduced in the US Senate on May 21, 2019, is the Artificial Intelligence Initiative Act, which would significantly increase funding to accelerate AI research and development, education, and standards development in the US.

The need for AI regulation is also emerging as a critical global issue. For example, the Organization for Economic Cooperation and Development (OECD) issued a set of AI recommendations (the OECD AI Principles), approved by OECD member countries in May 2019. In June 2019, the G20 countries adopted the G20 AI Principles, drawn from the OECD AI Principles. In parallel with these efforts, there has been a proliferation of AI guidelines and principles issued by a variety of institutions and organizations, such as:

  • Governments: for example, the EU (Ethics Guidelines for Trustworthy AI, by the High-Level Expert Group on Artificial Intelligence, an independent expert group set up by the European Commission in June 2018) and country-level initiatives (e.g., US, Canada, China, France, UK);
  • Companies: for example, Google, IBM, Microsoft; and
  • Industry associations and advocacy groups: for example, the Information Technology Industry Council, the Partnership on AI, the IEEE (IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems), Amnesty International, and Access Now (e.g., the Toronto Declaration).

The Cyberlaw Clinic of the Berkman Klein Center for Internet & Society at Harvard Law School (Harvard University) recently launched the Principled Artificial Intelligence Project to map all existing AI principles and guidelines and create a data visualization tool to summarize their assumptions, methodology, and key findings (Cyberlaw Clinic, 2019). The current dataset comprises 32 AI principles documents with their characteristics (i.e., the actor behind the document, date of publication, intended audience, geographical scope, and data on the principles themselves).

From the findings of Harvard Law School’s Cyberlaw Clinic (2019), we summarize the key themes of these AI principles as follows:

  • Accountability and professional responsibility,
  • Fairness and non-discrimination,
  • Human control of technology,
  • Privacy and security,
  • Transparency and explainability, and
  • Promotion of human values.

In the following subsection, we discuss these themes in more detail and elaborate on their related ethical considerations from the perspective of accounting professionals.

Ethical Considerations
