AI and Political Communication

 

Claes de Vreese and Fabio Votta,

University of Amsterdam

http://dx.doi.org/10.17169/refubium-39047

 

How will AI impact the field of political communication? This seems to be one of the most pertinent questions among political communication scholars today. In this Political Communication Report, we highlight some of the substantive areas that AI might affect, reflect on new methodological opportunities, and discuss critical ethical questions. We do not attempt to give a comprehensive overview of the challenges and opportunities ahead, but we do hope this entry can be a starting point for a broader dialogue.

 

  1. How does AI impact political communication research? 

AI offers a range of new opportunities to re-evaluate existing research questions and opens avenues to articulate new ones. Here are five examples:

  1. Political campaigning and microtargeting: political campaigns increasingly make use of online ad campaigns, and the main platforms they use for this purpose, namely Meta and Google, rely heavily on AI to determine the pricing and delivery of political ads through black-box algorithms. Ad delivery algorithms on these platforms steer political ads to the most ‘relevant’ audiences, leading to a form of ‘algorithmic’ microtargeting that goes beyond advertiser intentions. A further proliferation of these practices raises new questions about transparency and auditability.
    Political campaigns can also rely on auto-generated content from generative AI applications; according to an interview with Meta’s CTO Andrew Bosworth, tools that generate images tailored to different audiences can be expected this year. At the time of writing (April 2023), a first attack ad against the Biden 2024 presidential bid has been launched from the Republican side, making use of (according to the disclaimer in the ad description) “AI-generated images.”
  2. Political journalism can deploy AI tools in the research phase (idea generation, data mining, finding story angles, fact-checking) and in the text development phase. Generative AI models can already provide text drafts, which can then be further refined and edited. There is a lot of emerging scholarship on this topic, and Nick Diakopoulos keeps a neat site on the various ways generative AI affects the newsroom.
  3. Questions about authenticity and what is ‘real’ or not will need to be re-centered. This year’s ICA conference theme of “reclaiming authenticity through communication” is incredibly timely. Last fall (2022), only a small fraction of the world population had heard of ChatGPT or generative AI. Today, ChatGPT has grown faster than any existing social media platform, reaching over 100 million users worldwide. With the rapid rollout of accessible generative AI tools for text, audio, and images, there will be a proliferation of augmented, altered, and inauthentic content. In a world where any actor can easily create convincing but cheap deepfakes using generative AI models that are free and accessible to everyone, how will this affect citizens’ trust in politics, journalistic content, and any other type of information?
  4. Questions around media literacy have already been cast in terms of digital skills and competencies in recent years (see, e.g., the Digital Literacy across the Lifespan project). AI literacy in general, and specifically the question of how to understand the worlds of media and politics, will take on new and urgent importance. How can AI literacy help people discern between fake and real? It seems imperative to develop and promote literacy interventions that teach individuals how to critically evaluate sources, identify biases, and spot AI-generated content. Such endeavors should go hand in hand with efforts to understand and map public attitudes toward AI developments in media and politics.
  5. As a final example, political communication and journalism scholars need to revisit the news coverage of AI developments and their interactions with social media and online discourses. New technologies have always given rise to dystopian and utopian coverage, and the current AI cycle is no different. From ‘pause all AI developments for six months’ to ‘the biggest transformation since the industrial revolution,’ such claims are all part of the news and public discourse on this topic. How journalists and news organizations cover these developments – and how equipped they are to do so – matters, because citizens rely on them for factual and grounded reporting that goes beyond the “bloom or doom” rhetoric espoused in public discourse.

 

  2. What does AI mean for methodological applications in political communication?

AI not only offers new avenues for substantive research questions; new methods and design approaches are also on the table. Here are five examples:

  1. Comparing the quality of texts: using generative AI tools, scholars can study how AI-generated or AI-assisted news articles differ from human-written ones in terms of content, tone, and biases. Scholars can study the effects of AI-generated content on political discourse and how AI-powered text editors may shape the language used in political communication. This will be especially important as AI-generated content is likely to permeate all channels of political communication, whether at the level of citizens, journalists, or politicians (a first sketch of such a comparison follows this list).
  2. Scholars can use algorithm audit studies to investigate the impact of AI-driven recommender systems on the diversity and quality of the political content users encounter. Recommender systems utilize AI algorithms to suggest and personalize content based on users’ preferences and behavior, and they will only grow in importance as more and more content is served algorithmically. By studying how these systems operate and influence the information users receive, researchers can gain insights into potential biases and their implications for democratic discourse.
  3. Using large language models (LLMs), scholars can develop new measures of the latent ideology of lawmakers. Initial experimentation with these tools suggests that their measurement performance is on par with expert ratings and other metrics of ideology; for early examples, see this interesting study. The second sketch after this list illustrates the underlying pairwise-comparison logic.
  4. Rather than training a custom machine learning model with code, scholars can prompt chatbots like ChatGPT with labeled examples of the concepts they are interested in. For example, they could show it a sample of texts labeled “toxic” and “not toxic,” and then ask it to identify patterns and classify new texts (see the third sketch after this list). Chatbots instructed in this way may also be employed to analyze responses to open-ended questions in surveys.
  5. Scholars can deploy ‘chatbot studies’ to understand how people interact with political chatbots and to study their impact. Can chatbots persuade citizens to vote for specific candidates? Chatbots could also act as sparring partners for debating political opinions, or moderate (online) discussions when tasked with bringing up points that might otherwise be left unaddressed or with providing the perspective of voices that would otherwise be excluded.
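
To make the first example more concrete, here is a minimal, illustrative Python sketch that compares AI-generated and human-written articles on two crude style features. The toy corpora and the chosen features are our own placeholders; a real study would use matched samples and validated measures of content, tone, and bias.

```python
# Illustrative sketch: compare simple style features of AI-generated and
# human-written articles. Corpora and features are placeholders.
import re
from statistics import mean

def style_features(text: str) -> dict[str, float]:
    """Crude descriptors: average sentence length and lexical variety."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "avg_sentence_len": mean(len(re.findall(r"[A-Za-z']+", s)) for s in sentences),
        "type_token_ratio": len(set(words)) / len(words),  # lexical diversity
    }

# Placeholder corpora; real studies would compare matched article samples.
human_articles = ["The council met on Tuesday. Members debated the housing plan at length."]
ai_articles = ["The council convened. It discussed housing. A vote is expected soon."]

for label, corpus in [("human", human_articles), ("ai", ai_articles)]:
    feats = [style_features(t) for t in corpus]
    print(label, {k: round(mean(f[k] for f in feats), 2) for k in feats[0]})
```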
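For the third example, the sketch below shows the pairwise-comparison logic behind LLM-based ideology scaling: ask a model which of two lawmakers is more liberal, repeat across all pairs, and aggregate the verdicts into a score. The function llm_more_liberal is a hypothetical stub (randomized here so the code runs on its own); in an actual study it would send the pairwise prompt to a model and parse the answer, and the lawmaker names are placeholders.

```python
# Illustrative sketch: scale lawmakers' latent ideology from pairwise
# "who is more liberal?" judgments. The LLM call is stubbed out.
from collections import Counter
from itertools import combinations
import random

def llm_more_liberal(a: str, b: str) -> str:
    """Hypothetical placeholder for an LLM query; replace with a real call."""
    return random.choice([a, b])

def ideology_scores(lawmakers: list[str], rounds: int = 5) -> dict[str, float]:
    """Score each lawmaker by their share of 'more liberal' verdicts."""
    wins, trials = Counter(), Counter()
    for _ in range(rounds):                      # repeat pairs to smooth noise
        for a, b in combinations(lawmakers, 2):  # every unordered pair
            wins[llm_more_liberal(a, b)] += 1
            trials[a] += 1
            trials[b] += 1
    # Win share in [0, 1]: higher means judged 'more liberal' more often.
    return {name: wins[name] / trials[name] for name in lawmakers}

print(ideology_scores(["Lawmaker A", "Lawmaker B", "Lawmaker C"]))
```

More principled aggregation (e.g., a Bradley-Terry model) and validation against expert ratings would be obvious next steps.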
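For the fourth example, here is a minimal sketch of few-shot labeling via OpenAI’s chat completions HTTP endpoint, assuming an API key is available in the OPENAI_API_KEY environment variable. The model name, the two labeled example texts, and the prompt wording are illustrative choices, not a validated coding instrument.

```python
# Illustrative sketch: few-shot toxicity labeling through OpenAI's chat
# completions endpoint. Examples, model, and prompt are placeholders.
import os
import requests

FEW_SHOT_EXAMPLES = [
    ("You people are a disgrace and should be silenced.", "toxic"),
    ("I disagree with the senator's budget proposal.", "not toxic"),
]

def classify_toxicity(text: str) -> str:
    """Ask the model to label `text` as 'toxic' or 'not toxic'."""
    shots = "\n".join(f"Text: {t}\nLabel: {l}" for t, l in FEW_SHOT_EXAMPLES)
    prompt = (
        "Label the following text as 'toxic' or 'not toxic'. "
        "Answer with the label only.\n\n"
        f"{shots}\nText: {text}\nLabel:"
    )
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",  # illustrative model choice
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0,  # deterministic output suits coding tasks
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"].strip().lower()

print(classify_toxicity("Anyone who votes for them is an idiot."))
```

As with any automated content analysis, such labels should be validated against human coders before entering substantive analyses.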

 

  3. What should we do next?

The proliferation of AI tools in communication, journalism, education, training, politics, and advertising is already transforming our field. This is tied to many substantive questions – both old and new. In answering these, we believe that political communication scholars are well advised to collaborate with experts beyond our field. These include, for example, experts on the ethical implications of using AI chatbots in political campaigns or debates, as well as scholars of law and governance, who can situate the discussion within broader developments around the regulation of AI, such as the European Union’s AI Act.

In closing, we of course also asked ChatGPT the central question of this piece: “How will AI affect the field of political communication?” This was the answer (personal communication, April 26, 2023); our entry was completed before consulting this resource.

 

Artificial Intelligence (AI) has the potential to revolutionize political communication in several ways. Here are some possible ways that AI could affect political communication:

  1. Automated political messaging: AI can help political campaigns to generate targeted and personalized messages for individual voters. Campaigns can use machine learning algorithms to analyze voter data, such as demographics, voting history, and online behavior, to create messaging that resonates with specific voter groups.
  2. Social media monitoring and analysis: AI tools can help political campaigns to monitor social media activity and sentiment in real-time. By analyzing social media conversations, campaigns can identify trending topics and issues that are important to voters and adjust their messaging accordingly.
  3. Deepfake videos: With the advancements in deep learning, AI can create realistic deepfake videos that can be used to spread misinformation and propaganda. Political campaigns can use deepfake videos to damage the reputation of opposing candidates or to manipulate public opinion.
  4. Predictive analytics: AI can help political campaigns to predict the behavior of voters based on their past behavior and other data. For example, campaigns can use machine learning algorithms to predict the likelihood of a voter to turn out to vote and target those voters with specific messaging and get-out-the-vote efforts.
  5. Political chatbots: AI-powered chatbots can be used to provide voters with personalized information about political candidates and issues. Voters can interact with chatbots via text or voice to get answers to their questions, receive personalized recommendations, and engage in political conversations.

Overall, AI has the potential to transform political communication by enabling campaigns to generate more targeted and personalized messaging, analyze voter data more effectively, and engage voters in more interactive ways. However, there are also concerns about the ethical use of AI in politics, particularly in the areas of deepfake videos and voter manipulation.

 

We conclude that ChatGPT offers a good perspective on transformations in political communication. However, it does not refer to the role of political communication research or scholarship. It is up to us to change that.

Claes de Vreese is Distinguished University Professor of AI & Society, with a special emphasis on media and democracy, at the University of Amsterdam. He co-directs the research program AlgoSoc and the AI, Media and Democracy Lab, and he is the director of the Digital Democracy Centre, SDU.

 

Fabio Votta is a Ph.D. candidate in Political Communication at the University of Amsterdam who studies (online) political microtargeting and its usage around the world. He is passionate about reproducible open science, data visualization, and communicating insights from data to a broader audience.