
Critical interpretation of media ethics in the AI-driven cultural industry

Governments and corporations around the globe have gradually advanced several standards to govern AI and its underlying big data while developing supporting mechanisms for the growth of the AI and big data-driven society and industrial system. These standards are “concerned with ensuring transparent practices and establishing accountable methods” for the intelligent information society (McKelvey and MacDonald, 2019, 44; Government of the Republic of Korea Interdepartmental Exercise, 2016). As AI, big data, platforms, and cultural corporations confront problems arising from the misuse of these new technologies, such as fake news and privacy infringement, governments and corporations have progressively developed relevant policies and ethical codes.

Several platforms, both nationally and globally, have developed significant strategies to deal with ethics. In the Global North, in January 2018, Microsoft published its ethical principles for AI, starting with “fairness.” In May of the same year, Facebook released its “commitment to the ethical development and deployment of AI” and a tool to “search for bias” called “Fairness Flow,” while Google announced its “responsible practices” for AI research and development in June 2018 (Ochigame, 2019). As Ochigame (2019) reported in The Intercept, to characterize the corporate agenda it is helpful to distinguish between three kinds of regulatory possibilities for a given technology: (1) no legal regulation at all, leaving “ethical principles” and “responsible practices” as merely voluntary; (2) moderate legal regulation encouraging or requiring technical adjustments that do not conflict significantly with profits; or (3) restrictive legal regulation curbing or banning deployment of the technology. Unsurprisingly, the tech industry tends to support the first two and oppose the last. The corporate-sponsored discourse of “ethical AI” enables precisely this position.

Although we cannot disparage digital platforms’ efforts to develop their own ethical codes, it is doubtful that, in the absence of legal regulation, they actually control the ethical issues occurring on their own platforms, because their ethical discussions are closely tied to these corporations’ strategy of avoiding any forced legal regulation. If digital platform firms are to avoid government regulation, they must actualize ethical codes that are not only practical but also reliable in protecting human rights and fairness.

In the Global South, the Korean government and cultural industries corporations have developed two major normative structures: legal and ethical frameworks. The government has enacted legislation to present a vision aimed at the intelligent information society, while it attempts to establish human-centered ethics to govern data-collection processes and AI algorithms (Government of the Republic of Korea Interdepartmental Exercise, 2016, 56-57). On the one hand, the government aimed to prepare the legal basis for improving social security in consideration of increasing job losses and transitions, income polarization, and the aging of the population. It also amended legal provisions to recognize the rights involved in creative AI products, including products in the areas of literature, music, and design.

On the other hand, the government wanted to establish a charter of ethics for intelligent information technology (IT) to minimize any potential abuse or misuse of advanced technology by presenting a clear ethical guide for developers and users alike. Because new intelligent IT systems feature advanced algorithms for data-based self-learning, they may cause or exacerbate a wide range of social problems if left without an ethical guide or means of intervention (e.g., socio-economic polarization, and biases and discrimination against minorities). Research and development protocols exist with which developers must comply when collecting data and developing algorithms to ensure that the resulting algorithms do not reflect or perpetuate social prejudices (Government of the Republic of Korea Interdepartmental Exercise, 2016, 56-57).

By emphasizing social security, transparency, and accountability, these standards test whether AI and big data-driven industrial policies harbor biases and whether they produce reliable results and ethical frameworks for society. As previously discussed, the Korean government proposed these legal and ethical standards ultimately to develop best practices for AI and big data, creating policy mechanisms through which the government can establish guidelines (Chadwick, 2018; Copeland, 2018; Christians, 2019).

Alongside government initiatives, platform firms and cultural corporations also develop their own ethics. For example, Kakao, one of the leading platform firms, designs and actualizes its own ethical codes. For the first time among leading platforms in Korea, Kakao (2018) developed its own Algorithm Ethics charter, committing itself to enhancing the quality of life of its service users and creating a better society through the development of ethical algorithms. Throughout the algorithm development and management process, Kakao attempts to remain in line with the ethical principles of society.

Under this basic principle, Kakao emphasizes three major norms: avoidance of all biases, management of data for algorithm learning, and independence of algorithms. First, Kakao attempts to ensure that algorithms do not generate biased results and that they encourage a diverse society. Second, Kakao plans to collect and manage data for algorithm learning in accordance with social ethical norms. In other words, Kakao carries out the entire process of algorithm development, performance enhancement, and the collection, management, and utilization of data to maintain service quality within the scope of the ethics of society (Kakao, 2018).

Finally, Kakao aims to ensure that algorithms are not manipulated internally or externally; that is, Kakao works to prevent algorithms from being destroyed or misused through certain intentional influences. As such, both the government and corporations attempt to make transparent and accountable standards. It is undeniable that the government and corporations show a different approach to AI and big data compared with older cultural policies, which did not include these legal and ethical standards (Kakao, 2018).

While acknowledging the significance of these efforts by Kakao, what we have to understand critically is that digital platforms have continued to try to shift the major focus of their responsibility from legal standards to “voluntary ethical principles,” “responsible practices,” and technical adjustments or “safeguards” framed in terms of “bias” and “fairness” (e.g., requiring or encouraging police to adopt “unbiased” or “fair” facial recognition) (Ochigame, 2019).

Regardless of these governmental and corporate attempts, the overall practices are not promising. The use of AI-related technologies does not resolve socio-economic inequality; instead, it intensifies inequality in contemporary capitalism, where “societal problems tied to inequality are very much connected with emerging technologies” (West, 2018, 132). New digital technologies, such as AI and platforms, have created tremendous wealth for the few who own and appropriate these technologies. “Indeed, most of the large fortunes created by those under the age of forty have involved digital technology. Moreover, with innovation accelerating, the money tied to technology is likely to make inequality even more problematic in the future” (West, 2018, 132).

As discussed, the Korean government prioritizes the growth of the digital economy through new technologies like AI, big data, and investment in R&D, even as it develops new regulatory and ethical provisions. The primary goal of AI-focused policy is “strongly oriented to promoting economic growth and competitiveness” (Mansell, 2017, 4288). In other words,

The dominant orientation of digital economy policy is toward stimulating economic competitiveness based on the premise that, if a country does not achieve a leadership position in emerging fields of technological innovation such as machine learning, big data analytics, artificial intelligence, and their applications, some other country will achieve this.

(Mansell, 2017, 4288)

When President Moon Jae-in spoke at the DEVIEW 2019 conference held in Seoul, Korea, in October 2019, he emphasized only that the government would put forward a brand-new “artificial intelligence national strategy” within the year in a bid to become an AI powerhouse, riding on the country’s prowess in the ICT field. President Moon vowed full-scale efforts to create conditions for relevant firms to make aggressive investments and quick profits, instead of promising transparency and social equality (Yonhap, 2019b). The governance of AI and big data in conjunction with accountability has not been well prepared. The regulation of privacy will remain critical for Koreans, given its minimal protection measures (Shin, 2019a).

Mega platform and cultural firms are also busy acquiring venture capital and emerging as new forces seeking to benefit from their technologies, know-how, and manpower. The Korean government does not have any practical measures to prevent them from integrating into some of the largest digital platform and cultural giants. As discussed in Chapter 4, major entertainment firms have partnered with AI-driven telecommunications firms or venture capital as well. Consequently, customers and audiences do not seem to enjoy newly advanced technologies because only a few corporations control the vicious chain of cultural production and, therefore, cultural consumption. AI is not only about the convergence of human beings and technologies but also about who benefits from this convergence; without feasible measures to enhance social equality and diverse voices, AI will give rise to post-AI technologies that emphasize moral philosophy and social justice.

AI and algorithms, again, do not bring social equality and transparency, as the majority of people cannot see these changes happening and may have heard nothing about them, as The Guardian reports (Pilkington, 2019). The AI revolution is “being planned by engineers and coders behind closed doors, in secure government locations far from public view. Only mathematicians and computer scientists fully understand the sea change, powered as it is by AI, predictive algorithms, and risk modeling” (Pilkington, 2019). At the end of this radical reshaping of the industrial structure, people passively receive the shift and are therefore vulnerable. Under these circumstances, securing practical legal and ethical mechanisms is crucial for AI users.

The global organization of media, culture, and informational systems has transformed contemporary societies and, in particular, “generated new power imbalances and social inequalities” (Elliott, 2019, 46). As Feenberg (1991) argued, technological innovation has always functioned to divide the members of capitalist industrial societies into two groups: one made up of intellectually skilled managers and technical experts, such as AI-equipped computer scientists and skilled workers, and the other containing much larger numbers of de-skilled and less-valued workers. AI, algorithms, and big data in the ICT and cultural industries have certainly widened the gap between these two groups, as these new technologies demand higher skills, education, and capital, none of which the general population secures easily. What is implicitly foreclosed is the notion that

humanity, as a collective subject, has the capacity to somehow limit impersonal and anonymous socio-historical development, to steer it in a desired direction. Today, such a notion is quickly dismissed as ideological and/or totalitarian; the social progress is again perceived as dominated by an anonymous Fate beyond social control.

(Zizek, 2008, cited in Andrejevic, 2013, 145)

It is certain that AI can help to empower numerous cultural creators, make the cultural industries more efficient, and increase the number of artworks, which is in the interest of the public. However, there are still very few artists and companies that know how to use tools such as machine learning (Kulesz, 2018a). The commercial logic of AI and mega platforms certainly leads to an increasing concentration of supply, data, and income, resulting in the impoverishment of cultural expressions. “The lack of inclusion of culture in national AI strategies—in both the North and South—could mean that countries no longer have any cultural expressions of their own, which would end up damaging the social fabric” (Kulesz, 2018a, 2).

It is crucial to develop fair and equal opportunities for the many cultural creators who lack the necessary skills and access. In contemporary digital capitalism, which is deeply tied to digital platforms and AI, only a handful of countries and owners control these infrastructural assets. The dominance of Western-based AI and digital platform corporations has thus deepened the gap between the West and the East, worsening the existing disparity in economy and culture. In this regard, Christians (2019, 337) argues, “In this new technological era, media ethics has become a universal need in all aspects of communication.” As Kulesz (2018a, 2) also points out, it is critical to advance “strategies that go beyond a merely abstract code of ethics and design public policies to ensure that AI systems—and the actors that exploit them—are auditable and accountable.” In other words, far from settling for a limited role regarding AI and algorithms, the cultural sector in Korea, both public and private, must claim its stance with a plausible mechanism.

In sum, artificial intelligence is not capable of taking over the world now, and it will not do so in the very near future, but that should not prevent us from seeing potential problems (Filibeli, 2019). Likewise, given the major influence that AI can have on our society and the need to build solid trust, it is vital that “AI is grounded in our values and fundamental rights such as human dignity and privacy protection” (European Commission, 2020, 2). In other words, “The impact of AI systems should be considered not only from an individual perspective, but also from the perspective of society as a whole,” as the use of AI systems can play a significant role not only in achieving the digital economy but also in supporting the democratic process and social rights (European Commission, 2020, 2). As Christians (2019, 186) aptly puts it, “Human dignity demands that media ethics be based on cultural diversity rather than on the individualist morality of rights.” This implies that the ethical principle of human dignity emphasizes respect “for the many varieties of humanity” and a refusal to rank and order human beings; within this human dignity framework, media ethics works toward ethnic diversity in cinema and entertainment programming (186). In other words, human dignity in cultural production means cultural diversity, which AI-driven cultural production must develop. While countries understandably aim to benefit economically from AI technology, nation-states have to position themselves as major actors in curtailing the harms associated with dominant AI and digital platforms and their owners.
