
The (almost) unacknowledged revolution of AI in the media and creative industries

ELISA GIOMI says that paid-for news outlets don’t have a monopoly on accurate information and shouldn’t be allowed to amplify the threat of artificial intelligence at the expense of the benefits

The use of generative AI in the media and creative industries is radically changing the way content is produced and distributed. To be more precise, AI has already changed it. So to speak of AI as a ‘revolution’ is, for me, to place it in a line of continuity with other media and other technological innovations that were declared ‘revolutions’ only once they had already largely taken place. And as with every revolution since Gutenberg’s movable-type printing press, with AI too the front is divided between enthusiasts and sceptics, or between the ‘apocalyptic’ and the ‘integrated’, as Umberto Eco described the opposing attitudes of intellectuals towards mass culture in the 1960s.1

The value of comparison

The ‘unacknowledged’ in the title also refers to the printing press, whose revolution Elizabeth L Eisenstein described as ‘unacknowledged’ in her fascinating book The Printing Press as an Agent of Change: Communications and Cultural Transformations in Early-Modern Europe (1979). According to Eisenstein, the revolutionary effect of the printing press was underestimated because historians focused on its role in disseminating ‘new’ ideas. The real novelty, however, was that readers could for the first time see several texts side by side and compare them, accumulating information and elaborating it in new ways. This produced a radical transformation of what we would today call the ‘brainframe’, with repercussions for western society at large.

The application of AI in the media sector is having effects as disruptive, and as ‘unacknowledged’, as those of the printing press: when it comes to AI, the media and creative industries remain somewhat under the radar of public debate. This is surely because, according to IBM’s recent Global AI Adoption Index, these industries are not among the leading adoption sectors: companies in financial services lead the way (about half of IT professionals in that sector report that their company has actively implemented artificial intelligence), followed by telecommunications at 37 per cent.2

Moral panic

As far as Europe is concerned, the AI Act (based on a risk assessment of new products to be carried out before they are put on the market) does not include the media sector among the eight categories of ‘high-risk’ AI systems, defined as those affecting fundamental rights and security.3 I agree with the criterion behind this classification, but as a media scholar I cannot help thinking of the very delicate role that the media play in society, owing to the peculiar nature of their products. Unlike other industries, the media produce ‘symbolic’ or ‘meaning-making’ goods which shape our perception of the world and orient the choices we make in it, from whom to vote for to what to buy or how to educate our children. We seem to be reminded of this peculiarity of the media only in the face of unexpected events that fuel dystopian imaginaries and, often, emergency responses by policymakers.

And here is the ‘almost’ referred to in the title. There have been recent occasions when the impact of AI on the media has burst into the public sphere, producing moral panics. These were all episodes that demonstrated the unprecedented possibilities for falsifying reality that generative AI enables.

For instance, on 22 May 2023 a fake photo of an explosion at the Pentagon, shared by a verified Twitter profile called ‘Bloomberg Feed’, went viral before being debunked and caused the Dow Jones index to lose 85 points. More recently, a robocall that used ElevenLabs’ voice-cloning technology to imitate President Biden dissuaded people from voting in the New Hampshire primary and prompted a government ‘crackdown’ on AI.4

These reactions are understandable, especially considering that in 2024 half of the world’s population will be called to the polls. And yet, for every negative use of AI there are more positive ones. It is also thanks to AI that deepfakes can be detected. One example is NewsGuard’s ‘Misinformation Fingerprints’, which isolate the most widespread misinformation narratives and provide data in a format readable both by artificial intelligence systems and by analysts. An application developed at the University of Michigan can correctly identify false news in up to 76 per cent of cases. Such applications have been in development for years.5
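Purely by way of illustration, here is a minimal sketch of how text-based fake news detectors of this kind broadly work: articles are turned into numerical features and a classifier is trained on labelled examples. The approach below (TF-IDF features plus a linear support-vector machine) follows the research tradition the Michigan study belongs to; the dataset file and its column names are hypothetical.

```python
# Minimal sketch of a text-based fake news classifier.
# Illustrative only: "news_articles.csv" and its columns are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# Hypothetical dataset: one article per row, labelled 0 (real) or 1 (fake).
df = pd.read_csv("news_articles.csv")  # columns: "text", "label"
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42
)

# Represent each article by TF-IDF weights over word unigrams and bigrams.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), max_features=50_000)
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

# A linear SVM is a standard baseline for this task; studies in this line
# of work report accuracies in the region of 70 to 80 per cent.
clf = LinearSVC()
clf.fit(X_train_vec, y_train)
print(f"accuracy: {accuracy_score(y_test, clf.predict(X_test_vec)):.2f}")
```

Real systems add richer linguistic features and, increasingly, large language models, but the basic pipeline of feature extraction plus supervised classification is the same.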

Synthesised voice generation also lends itself to many uses in audiovisual production and dubbing, making it possible to meet the streaming platforms’ need for programming tailored to demanding niches. Applications such as Comcast’s Hume AI enable the recognition of as many as 60 culturally encoded ‘non-verbal emotions’ (i.e. emotions expressed through voice and sound); until now, research had recognised only six universal emotions.

Symbolic vs material

I have been involved in media and ICT for long enough to know that ‘squinting’, in which distorted uses loom larger in perception than the more numerous beneficial applications, has accompanied every technological innovation. It is, however, worth demystifying certain narratives that contribute to the demonisation of AI and risk obstructing the path to balanced regulation. Above all, this means showing that behind ethical principles, philanthropic claims and social concerns there are often economic interests. These interests are entirely legitimate, but if we want a clear-eyed view, it is important to distinguish the symbolic from the material.

I will take as a case study a much debated topic where AI and the cultural industry meet: copyright. As is well known, the affair came to collective attention with The New York Times’ complaint against OpenAI and Microsoft.6 Many news media have since followed suit, preventing ChatGPT’s ‘crawlers’ – the bots that collect data from the internet – from accessing their content.7 Some platforms have done the same, including X, Pinterest and Amazon.
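In practice, a publisher blocks a crawler by adding a few lines to the robots.txt file at the root of its website. A minimal example that excludes GPTBot, the user agent OpenAI documents for its web crawler, while leaving all other bots unaffected:

```text
# robots.txt: deny OpenAI's GPTBot access to the whole site
User-agent: GPTBot
Disallow: /

# All other crawlers remain allowed
User-agent: *
Allow: /
```

It is worth noting that robots.txt is a convention honoured by well-behaved crawlers, not a technical barrier; it works only insofar as the bot operator chooses to respect it.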

When news spread that right-wing newspapers and sites in the US, such as Fox News, The Daily Caller and Breitbart, had not blocked crawlers, a question arose: is this simply a technical delay, or a deliberate strategy to politicise the algorithms’ responses by training them on the content of right-wing newspapers, exploiting the blocks imposed by their liberal counterparts?8 It would not be the first case of the politicisation of generative AI. For instance, ERNIE Bot, the Chinese counterpart of ChatGPT, was designed to avoid discussions of Taiwan or Xi Jinping.

Another issue compounds this question. If publishers of quality information, whether of the left or the right, only allow paid access, crawler bots will end up feeding on the ‘junk’ free content of the web.

‘Free’ is not inferior to ‘paid’

This fear, in my opinion, rests on two fallacious assumptions. The first is the belief that generative AI is fuelled solely or predominantly by newspaper articles. On the contrary, answers are produced using the entire ‘encyclopaedia’ of the web, consisting not only of texts but also of images, videos, data and sound, a breadth that greatly increases the likelihood of reliable output. The second is the informational equation ‘free = poor’ and ‘paid = reliable’, which reveals a short circuit: free business models have always been used in every sector of the media industry, first and foremost by newspapers financed through advertising. Yet this has never prevented press publishers from considering themselves producers of quality information.

The idea that generative AI poses a threat to the newsmaking industry’s value chain is widely shared. In October 2023, for instance, the News/Media Alliance (a non-profit trade association representing approximately 2,000 newspapers in the United States and Canada) published a white paper highlighting the negative impact of AI and of the unauthorised use of copyrighted content on the sustainability and availability of high-quality original content.9

There are similarities here with the European Copyright Directive, which provides for online platforms to pay fair remuneration to press publishers for the online use of their publications.10 Recital 55 of the Directive justifies this remuneration by the need to ‘ensure the sustainability of the publishing industry and thereby foster the availability of reliable information’. It is no coincidence that a fair division of the proceeds from the use of copyrighted content is also the solution identified by Danielle Coffey, CEO and President of the News/Media Alliance, to the feared depredation of such content by crawlers: ‘similar to how Netflix or Spotify compensate content creators for their IP’, Coffey argues.11

This points to a further set of issues that need clarifying. The first is the insistence that reliable, verified information is the prerogative of the journalistic sector, and that the time and resources required to produce it must therefore be monetised through ‘fair compensation’. One response is that publishing scientific articles in peer-reviewed journals takes far longer, and in terms of reliability they are certainly not inferior to newspaper articles; yet no rights holder of scientific publications has ever demanded fair compensation for their online exploitation. Second, when we ask one market sector (Big Tech companies yesterday, generative AI companies today) to support another (publishers) through fair remuneration, we are assigning that sector a subsidiary function which more properly belongs to public authorities. Third, by subordinating copyright to the sustainability of ‘reliable’ or ‘quality’ information, we distort its rationale, which aims at precisely the opposite: to protect intellectual works, including press publications, so as to promote their abundance, regardless of judgments of merit.

A conflict of economic interests

Alongside the theoretical issues, there is also a conflict of economic interests. The answers provided by generative AI systems such as ChatGPT could lead users to bypass the source (the newspaper article), which they would otherwise have reached via a search engine, news aggregator or other online platform services. Publishers would thereby be deprived of the opportunity to monetise the use of their products.

Bearing this in mind, let us look at the solution introduced by the European AI Act, which states in recital 60i: ‘Any use of copyright protected content requires the authorization of the rightholder concerned unless relevant copyright exceptions and limitations apply.’12 This is certainly a reasonable solution. However, it is worth playing devil’s advocate in order to identify, and in future eliminate, any possible contradictions. The first is this: why does a mere newspaper reader not require the authorisation of the rightholder concerned to use this content? Readers use it to learn, to know, and to give themselves and others answers about the world, exactly as generative algorithms do. And if, through reading newspapers, these readers were to become real experts, they would certainly be able to provide reliable answers to questions about issues and facts. They could thereby become dangerous competitors of the newspapers, which users might then be induced to ‘bypass’. They would be acting in exactly the same way as generative AI.

Learning from copyrighted information

One could object that when generative AI algorithms train on newspaper articles or other copyright-protected content, they do so to learn, and thus to perfect the answers they give to users for profit. To this objection I would reply that I too have always learned, and still learn, from books and newspaper articles for profit. I obtained a degree, completed a PhD and passed public competitions to become a university professor, thereby monetising the expertise I have acquired throughout my life from copyright-protected content. Today, unfortunately, hardly anyone studies pro bono.

I will conclude this deliberately paradoxical argument here; its sole purpose has been to show that artificial intelligence and human intelligence follow not dissimilar logics, and that we should therefore not regulate them by a double standard. I believe that balanced regulation of AI, on the copyright infringement front as on many others, cannot do without two preliminary actions. The first is a rigorous analysis of the interplay of the different actors and of the real economic flows: who earns more from whose services and copyrighted content? The second is a precise diagnosis. We are supposed to regulate when there is a problem to be solved: pathologies must first be identified, and only then can we decide on the most effective therapies.


Elisa Giomi

Elisa Giomi is an associate professor at Roma Tre University, Department of Philosophy, Communication and Performing Arts, and a commissioner of AGCOM, the Italian communications authority. The views expressed in this article are her own.

1 Umberto Eco (1964). Apocalittici e integrati. Milan: Bompiani. Published in English as Apocalypse Postponed: Essays (1994).

2 China (85%), India (74%) and the United Arab Emirates (72%) are the markets most likely to accelerate AI adoption, while companies in the UK (40%), Australia (38%) and Canada (35%) are the least likely to speed up its introduction. See IBM Global AI Adoption Index 2023. prn.to/432Y5jt

3 The only exception is the video game industry, where AI systems may be harmful to the safety and development of minors.

4 Murphy M, Metz R and Bergen M (2024). AI Startup ElevenLabs Bans Account Blamed for Biden Audio Deepfake. Bloomberg UK, 26 January. bloom.bg/3T0M4pS

5 Pérez-Rosas V, Kleinberg B, Lefevre A and Mihalcea R (2017). Automatic Detection of Fake News. Cornell University arXiv, 23 August. bit.ly/3Px53rx

6 Grynbaum MM and Mac R (2023). The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work. The New York Times, 27 December. nyti.ms/3IoxmUU

7 Including the Guardian, USA Today, The Washington Post, CNN, CBS and NBC.

8 Knibbs K (2024). Most Top News Sites Block AI Bots. Right-Wing Media Welcomes Them. Wired, 24 January. bit.ly/48OcQrn

9 News/Media Alliance (2023). White Paper: How the pervasive copying of expressive works to train and fuel generative artificial intelligence systems is copyright infringement and not a fair use. bit.ly/49Ylkxl

10 See Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC. bit.ly/45y9N6v

11 News/Media Alliance (2024). Alliance CEO Q&A: How The Generative AI Boom Proves We Need Journalism, 31 January. bit.ly/3TqtNUk

12 Council of the European Union (2024). Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, 26 January. bit.ly/3v5QrrO