Back to Basics: Studying Digital Campaigning While Our Objects of Analysis Are in Flux[1]

 

David Karpf, George Washington University

http://dx.doi.org/10.17169/refubium-43528

It seems to me that 2024 will be a year when we, as a research community, get back to basics. Three interlocking trends are undermining the foundations of empirical research. There is a lot that we simply do not know, and there are a number of once-stable assumptions that we may now need to revisit.

We have been through similar moments before. In fact, it calls to mind the bygone years when my career was just getting started.

This was back in the mid-to-late aughts, the peak of “Web 2.0” enthusiasm. Not a lot of people studied digital campaigning back then. It was not yet clear that digital tools mattered to any of the behaviors or outcomes that we, as a research community, tended to study.

But what made it an exciting time was the sheer rate of change in the digital media environment. One could make a strong argument that digital campaigning circa 2000 had little-to-no impact. (In fact, Bruce Bimber and Richard Davis did make such an argument in their 2003 book Campaigning Online, and it was strongly supported by empirical evidence.) But the Internet of 2004 and 2008 was not the Internet of 2000. We had to keep returning to descriptive questions – “what are these tools?” and “how are campaigns, journalists, and the public even using them?”

I still recall one of my first paper presentations. It was at a special conference on “YouTube and the 2008 election.” I asked my assembled peers why no one in the room had conducted any studies of YouTube in the 2004 election. An awkward silence descended for a moment before I delivered the punchline: YouTube did not even exist in 2004. We were dealing, in a nontrivial sense, with an N of 1.

The point I was trying to make, back then, was that this was the new normal. Platforms that had not existed in the previous election cycle could prove critical to the current cycle… with no guarantee that they’d even be around for the next one. We researchers were operating at the mercy of “internet time” (Karpf 2012). The ceteris paribus assumption that undergirds our primary research paradigm was routinely being violated. Campaign organizations, news organizations, and the mass public were all in a state of flux.

I innocently suspected it would forever be thus.

Instead, what followed was a decade of relative stability. We transitioned from the Web 2.0 era to the platform era. Google/YouTube, Facebook/Instagram/WhatsApp, and Twitter all solidified and defended their niches in the information ecosystem. Campaigns got used to relying on them. Digital campaign professionals stopped behaving like a strange avant-garde community and started to seem like… well, like an industry. And we researchers began to develop sophisticated tools for gathering data on how the campaigns, the news orgs, and the mass public used these platforms.

There were still plenty of interesting changes. Disinformation in the 2012 election, for instance, was not much of a topic of study because there simply wasn’t enough of it for researchers to notice. But the critical shift was that the platforms and structures became stable enough that we could develop and rely upon sophisticated empirical methods for studying and analyzing those changes. We knew enough about the platforms and behaviors to know what we did not know.

Once again, innocently, I suspected as recently as four years ago that it would forever be thus. The digital platforms were trillion-dollar companies. They were barely-regulated quasi-monopolies. It certainly did not seem as though they were going anywhere.

And times, once again, seem to be changing.

There are, as I see it, three significant changes underway right now: (1) declining platform researcher access, (2) the decreasing centrality of digital platforms to news and campaigning, and (3) the frontier moment with generative AI.

Platform Researcher Access

Empirical study of the platforms was on shaky ground even during the best of times. I used to remark that we were generating mountains of Twitter research and molehills of Facebook research, not because of their relative importance, but because Twitter data was much more accessible. This has been a fracture point between how we studied 20th-century broadcast media and how we study algorithmic social media: CBS cannot prevent you from performing a content analysis of the nightly news.

The trouble with studying platformized political communications is that the platforms are private companies, with lots of lawyers. It is not in their proximate corporate interest for independent scholarly researchers to gather data and potentially generate findings that negatively impact the company’s reputation.

And yet, for much of the past decade, these companies at least had an interest in appearing transparent. They were in the business of recruiting social scientists. They partnered with members of the research community on specific projects. This was particularly true in the early ‘10s, before the “techlash” years when tech journalism and public opinion took a hard critical turn against the platform companies (Weiss-Blatt 2021). Back when the platforms enjoyed broad popularity, they reasonably expected academic research would help make the case for their (positive) social impact.

Likewise, at the peak of the techlash, the platforms were at least reluctant to shut down researcher access, since that could lead to a slew of uncomfortable questions when their CEOs were next summoned before Congress to testify about the latest controversies.

In recent years, the platforms have collectively become more hostile to researchers. There are still some high-level partnerships, but unless you work at a major research lab and spend years developing a relationship with Google or Meta, it is quite likely that the data you might have gathered 5-10 years ago is simply no longer available. This is to say nothing of Elon Musk’s attack on the research community, or the members of Congress who have launched a partisan assault against disinformation researchers.

Alongside those high-profile, direct attacks, we are also seeing a ton of indirect erosion of research tools. Meta is set to shut down CrowdTangle before the 2024 U.S. election, replacing it with a new Meta Content Library that will be less publicly accessible. All of the platforms have reduced the size of their Trust and Safety teams, and are devoting fewer resources to engagement with the research community.

European researchers are in a better situation than their U.S.-based peers, thanks to provisions in the E.U.’s Digital Services Act. (My kingdom for a well-functioning regulatory state!) But the platforms are all U.S. companies, and the degradation of data access here is a strong signal of their behavior worldwide.

The research community is collectively stuck trying to cobble together novel solutions and workarounds just to keep pace with the sort of data access we had in the 2010s.

The status quo was bad-but-pretty-stable. The trendlines are much, much worse.

The Secular Decline of Social Media

We have reduced visibility into online political behavior on social media platforms. But, simultaneously, those platforms are becoming less central to political communication than they used to be.

X/Twitter stands out as the most obvious example. It is bad that many of our empirical tools for studying Twitter have been shut down. It is outrageous that Elon Musk has launched nuisance lawsuits aimed at silencing members of the research community. But also, X/Twitter is a platform in steep decline – not yet completely irrelevant, but obviously trending in that direction.

It’s not just Elon Musk being terrible at running a social media platform, though. Meta’s various properties – Facebook, Instagram, and Threads – have all decided to algorithmically de-emphasize political news and conversation. Contentious politics, it appears, is no longer good for business.

The research community spent the past decade developing our collective understanding of these platforms that intermediate public discourse and political campaigning. And, today, it certainly appears as though these platforms are becoming less central than they once were. It is not yet clear what, if anything, will replace them. Perhaps this is an interregnum, and these or other centralized platforms will again become the main intermediaries between electoral campaigns, news outlets, and the voters they seek to reach. Perhaps not.

But as I look ahead to the still-forming 2024 election, here in the United States, what stands out is that Facebook and Twitter are likely to be less central to campaign communication strategies than in recent elections.

Think of this as a blessing in disguise. We have less access to the platforms. But it’s like being denied entry to a club that no one goes to anymore. All the most interesting stuff is probably happening somewhere else anyway.

Generative Artificial Intelligence

The hype surrounding Generative Artificial Intelligence (hereafter, “AI”) has not yet peaked. This is clear, here in the U.S., from the number of political campaigns that have successfully grabbed cheap news cycles by introducing some AI gimmick into their campaign communications.

I personally suspect that it is a bit early for AI to have a significant, direct effect on campaign behavior or election outcomes. But it is poised to exacerbate the two previously mentioned phenomena. It will be harder to monitor online political behavior, because of an ocean of synthetic sludge. And the platforms – which, collectively, seem to be treating AI less as a problem-to-be-mitigated than as a race-to-be-won – will become less central to actual interpersonal political communication among human beings as they gallop toward AI as the next big thing.

It may, in other words, have substantial indirect effects. I personally worry it will exacerbate the “Liar’s Dividend” (Chesney and Citron 2019) by increasing the baseline mistrust of all political information and political institutions. Early empirical research (Weikmann et al. 2024) certainly points in this direction – even if synthetic media lacks direct persuasive impact, its spread undercuts institutional trust.

It will, absolutely, be an area of testing and experimentation. Both legitimate electoral actors and less-legitimate external campaign organizations will see what they can do with it. It will accelerate some of the good, some of the bad, some of the necessary, some of the nonsense. We will see Cambridge Analytica-level overhyping from campaign consultants. We will not know until long after the election which of their claims were based in any semblance of reality.

AI, in other words, looks an awful lot to me like a destabilizing force. It will, most likely, eventually reshape how electoral campaigns function. But not right away, and not all at once. It is new enough that we don’t quite know what to make of it yet – and neither do elected officials, or campaigns, or journalists, or voters.

Which, in turn, reminds me an awful lot of the state of digital campaign research in the mid-aughts.

Back to Basics

I received some career-defining advice early in the dissertation process. I was interested in the political blogosphere, and was trying my best to fashion a dissertation proposal that would be empirically rigorous, methodologically sophisticated, and deeply enmeshed in the (still nascent) literature.

It wasn’t working. I was trying my best. I was not pulling it off.

After a long meeting, my (extraordinarily patient) dissertation advisor said to me, “Well, it sounds like you want to study the blogosphere. So why don’t you study them?”

He was inviting me, in essence, to go back to basics. There was a lot that we didn’t understand about the internet and political campaigning. This thing was new, and we didn’t have a handle on how it worked. So the task was to go observe political practitioners, figure some things out, and gather data that would help explain to the rest of the field what I had learned.

That advice led me to become a descriptivist, qualitative scholar. Scholarship of this sort is particularly valuable to the research community during times of chaos and change. When the underlying phenomena that we are used to measuring are in flux, when our standard measures of political behavior could use a bit of reevaluation, it is useful to the research community to go out and collect observations.

I suspect, as Gagrčin and Butkowski argued in a previous issue of Political Communication Report (2023), that we are reentering a period where this sort of descriptive research will be deeply useful. All three factors that I have discussed in this essay point in such a direction. We have less access to platform data. The platforms are declining in relevance – people are going elsewhere. I, for one, don’t know where that is, or what they are doing there. And Generative AI is being incorporated into a lot of communication activities, unevenly and with unclear impacts (de Vreese and Votta 2023).  The stability of the past decade seems to be coming apart.

So my recommendation to my fellow scholars is this: Take a look at some of the research from the late aughts (much of it published in the early teens, due to the glacial pace of scholarly publishing). (Re)read Rasmus Kleis Nielsen’s Ground Wars and Daniel Kreiss’s Taking Our Country Back and Jenny Stromer-Galley’s Presidential Campaigning in the Internet Age and Jessica Baldwin-Philippi’s Using Technology, Building Democracy. Take a look at their methods, but also take a look at the theories they produced and see whether there is anything that deserves to be revisited.

These are interesting times to study digital campaigns and elections. There is, once again, so much that we do not know.

 

References

Baldwin-Philippi, J. (2015). Using Technology, Building Democracy: Digital Campaigning and the Construction of Citizenship. New York: Oxford University Press.

Bimber, B., & Davis, R. (2003). Campaigning Online: The Internet in U.S. Elections. New York: Oxford University Press.

Chesney, B., & Citron, D. (2019). Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. California Law Review, 107, 1755–1819.

de Vreese, C., & Votta, F. (2023). AI and Political Communication. Political Communication Report, 2023(27). https://doi.org/10.17169/refubium-39047

Gagrčin, E., & Butkowski, C. (2023). Out of Sight, Out of Mind? Qualitative Methods in Political Communication Research. Political Communication Report. http://dx.doi.org/10.17169/refubium-39042

Karpf, D. (2012). Social science research methods in Internet time. Information, Communication & Society, 15(5), 636–661.

Kreiss, D. (2012). Taking Our Country Back: The Crafting of Networked Politics from Howard Dean to Barack Obama. New York: Oxford University Press.

Nielsen, R. K. (2012). Ground Wars: Personalized Communication in Political Campaigns. Princeton, NJ: Princeton University Press.

Stromer-Galley, J. (2014). Presidential Campaigning in the Internet Age. New York, NY: Oxford University Press.

Weikmann, T., Greber, H., & Nikolaou, A. (2024). After Deception: How Falling for a Deepfake Affects the Way We See, Hear, and Experience Media. The International Journal of Press/Politics, 0(0). https://doi.org/10.1177/19401612241233539

Weiss-Blatt, N. (2021). The Techlash and Tech Crisis Communications. New York, NY: Emerald Publishing.

 

 

David Karpf is an Associate Professor in the George Washington University School of Media & Public Affairs. He teaches and conducts research on political campaigning in the digital age.

 

[1] Copyright © 2024 (David Karpf). Licensed under the Creative Commons Attribution Non-commercial No Derivatives (by-nc-nd). Available at https://politicalcommunication.org.

 


 
