New media ethics in the age of AI

Introduction

New digital technologies, including AI, big data, and algorithms, have raised numerous concerns, as several Western societies and global mega-platform giants like Google, Facebook, and Netflix deploy AI and algorithms to make profits, resulting in socio-economic disparities between platform owners and platform users. These digital platforms have begun to utilize AI as a major tool for cultural production, and therefore several socio-cultural concerns have been raised, as “AI may be misused or behave in unpredicted and potentially harmful ways” (Cath, 2018, 2). As Elliott (2019, 49) points out, new digital technologies, including AI, produce “stunning opportunities and dangerous risks” at the same time. Several ethical issues in AI, such as “interface with social and cultural issues,” “the manipulation of data,” and “complex systems and responsibility,” as well as “privacy invasion,” are shared with other rapidly developing digital technologies (Boddington, 2017).

Concerns about the massive circulation of fake news, the manipulation of algorithms for financial profit, and alleged privacy breaches and the misuse of personal data through social media platforms have become features of contemporary society worldwide (Flew, 2018b). AI has also created several new socio-economic dilemmas, including the potential replacement of humans in workplaces. Therefore, questions about the role of law, relevant policy, and ethics in governing AI systems are more relevant than ever before. In other words, as AI systems are developed, it is crucial to assess their social and ethical standards and implications (Hancock et al., 2020).

Governments and corporations around the globe seem to be advancing mechanisms to secure socio-economic fairness; however, due to AI’s rapid and recent growth, these measures are often neither practical nor transparent. Many governments have enacted legislation to present a vision for the intelligent information society, while attempting to establish human-centered ethics to govern data collection processes and AI algorithms. By emphasizing social security, transparency, and accountability, these standards examine whether AI- and big data-driven industrial policies contain biases or produce reliable results within society’s ethical frameworks. Of course, governments proposed these legal and ethical standards ultimately to develop best practices for AI and big data, creating policy mechanisms through which the government establishes guidelines (Chadwick, 2018; Copeland, 2018; Christians, 2019). Alongside government initiatives, platform and cultural firms develop their own corporate ethics. As digital platforms and cultural corporations face challenges due to their inappropriate use of AI and algorithms in many cases, such as fake news and privacy infringement, several platform firms, including Google, Facebook, and Kakao, design and actualize their own ethical codes. It is undeniable that governments and corporations take different approaches to AI and big data.

This chapter examines new media ethics in the realms of AI and big data. It especially discusses whether governments and corporations have advanced reliable ethical codes to secure socio-economic equality in the AI era. As UNESCO (2020) emphasized, the term “AI for social good” (6) is increasingly used by tech firms and several civil organizations; however, there is much less discussion about what comprises social good. Although this chapter does not attempt to develop mechanisms to advance society, it addresses several socio-cultural and economic issues stemming from the intensive use of AI and digital platforms from a critical political-economy perspective, which emphasizes not only power relationships between politics and the economy but also socio-economic justice and equality.
