New Methods, “Old” Methods: Emerging Trends and Challenges in Political Communication Research

 

Regina G. Lawrence, Kevin Arceneaux, Bernhard Clemm von Hohenberg, Johanna Dunaway, Frank Esser, Daniel Kreiss, Eike Mark Rinke, and Kjerstin Thorson (The Political Communication Editorial Team)

http://dx.doi.org/10.17169/refubium-39044

 

One of the exciting aspects of political communication scholarship today is the range of methods available for analyzing a wide variety of communication about politics, broadly defined. Over the past decade, research published in Political Communication has drawn on a wide range of methodologies. An internal content analysis conducted last year of articles published in the journal over the previous six years showed that while surveys, experiments, and content analysis were deployed most frequently, computational analyses appeared in more than one-fifth of articles, a figure that is almost certain to increase in the near future, as discussed below. Other methods, such as interviews and document analysis, were present as well. But overall, qualitative pieces were substantially outnumbered, though this pattern may be changing, as we also discuss below.

The broad methodological toolkit available to scholars in our field offers multiple ways to understand political communication. Yet some tools are chosen more routinely than others, and new tools present us with new challenges. From our vantage point as members of the journal’s editorial team, certain trends and developments in the kinds of research being undertaken under the large umbrella of “political communication” deserve special attention.

One broad challenge is that, as an editor’s note published earlier this year observed (Lawrence, 2023), “rapid changes in the technological platforms many of us study and in the methods and data available demand that we stay abreast of this rapid evolution while maintaining (and updating) the theoretical foundations of our field.” Given our expanding methodological toolkit, how can our field simultaneously maintain grounding in shared conceptual and theoretical frameworks, particularly as we interface increasingly with computer science and related fields? Given the complexity and sophistication of the new methodological landscape, we may lose the ability to understand, appreciate, and critically evaluate one another’s research unless we maintain and expand a shared vocabulary of concepts and theories that orient our work.

This question points toward an even broader double-edged challenge of maintaining theoretical coherence in a rapidly expanding field of study. On the one hand, it is possible that “old” theories will prove no longer sufficient or appropriate given today’s more complex landscape of political communication. But we may also find the opposite to be true, as scholars schooled in newer approaches may seek to “reinvent the wheel” as they study changing media technologies, without drawing adequately from foundational theories – for example, on attention, cognition, and bias – that apply quite well to today’s media environment.

Along with that broad set of theoretical challenges, we consider here several emerging trends: questions about the representativeness of the populations studied and about the size of communication effects that accompany the growing use of big data and computational methods; the move toward “open science” and what it will mean for how we conduct and report our research; the possibility of a resurgence in qualitative approaches; and the possibilities and problems associated with potential uses of generative AI.

 

New Data Sources and the Challenge of Availability Bias

As the number of submitted manuscripts that rely on computational methods has increased, we note some problems that need to be carefully considered by authors, reviewers, and our field in general. One relates to the broad theoretical challenge noted above: As certain forms of digital and social media data become more readily available than in the past and researchers quickly converge on these data, we increasingly confront the problem of under-theorized studies. We draw a distinction here between valuable descriptive work done with the express purpose of providing rich quantitative description and categorization, and work that attempts to test hypotheses but sidesteps the broad body of shared theories foundational to our field.

Another problematic aspect of today’s “data rush” to new sources of digital data is that the availability of data of interest to scholars is extremely unequally distributed across relevant fora for political communication, such that availability bias has become a defining factor of contemporary work in our field. Perhaps more than ever before, political communication scholars tend to study the phenomena that are most easily accessible to them. In practice, that means platforms with good API access are much more likely to be studied than those without (not to mention completely closed and proprietary spaces that are off-limits to research). Consequently, studies of Twitter in our field tend to outnumber studies of other platforms, which has at least as much to do with relative data availability as with Twitter’s importance as a communication space. To the extent that the “data availability” and the “importance” of a forum diverge, availability bias poses a real challenge to the societal relevance and significance of political communication scholarship. Neither the platforms nor the people that we study as a result of availability bias may be particularly representative of the wider world of political communication. In the “computational age,” then, we may face a new issue of sampling bias, not unlike the issue of “WEIRD” samples in psychology identified by Henrich et al. (2010) more than ten years ago.

 

Measuring Effects in Computational Studies

As we receive more studies using computational methods, often with extremely large sample sizes (e.g., n > 10,000), it is important for authors, reviewers, and mentors to consider that p-values (and associated significance tests) are almost meaningless in that context. Often, we discover that what first seems like an important effect because it is highly statistically significant is in fact trivial when examined from an effect-size perspective. Nature Human Behaviour recently issued a statement on this problem (Points of Significance, 2023), which perhaps should be echoed by our journal as well. It reads in part (p. 293):

In most empirical studies using null-hypothesis significance testing (NHST) that we receive, authors report only the statistical test, degrees of freedom, test value and P value. In some cases, we see only P values and nothing else. This extremely limited information can be misleading and in studies with very large sample sizes it is meaningless (as overpowered studies or studies with very large samples can identify statistically significant but trivial effects). We therefore require that authors also report effect sizes and confidence intervals. Reporting of NHST statistics should typically take the following form: statistic (degrees of freedom) = value; P = value; effect size statistic = value; and percent confidence intervals = values.

At Political Communication, we anticipate increasingly asking authors to interpret their results in terms of effect sizes and confidence intervals (CIs) rather than statistical significance alone. Importantly, this consideration should be undertaken thoughtfully, since small effect sizes in political communication research can still be important. For example, an effect size of 0.08 is average for persuasion research in the field: a figure that is quite small statistically speaking, but that could decide the outcome of an election under the right circumstances.
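To make the contrast concrete, here is a minimal sketch in Python using simulated data (the group means, sample size, and variable names are our own illustrative assumptions, not values from any study). With 50,000 observations per group, a trivial true difference sails past conventional significance thresholds, while the effect size and its confidence interval make the triviality visible:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulate two groups whose true difference is trivial (Cohen's d of about 0.03)
n = 50_000
control = rng.normal(loc=0.00, scale=1.0, size=n)
treated = rng.normal(loc=0.03, scale=1.0, size=n)

# NHST alone: at this n, the tiny difference is "highly significant"
t, p = stats.ttest_ind(treated, control)

# Effect size (Cohen's d) with an approximate 95% CI tells the real story
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
d = (treated.mean() - control.mean()) / pooled_sd
se_d = np.sqrt(2 / n + d**2 / (4 * n))  # large-sample approximation
ci_low, ci_high = d - 1.96 * se_d, d + 1.96 * se_d

# Report in the form the Nature Human Behaviour statement recommends
print(f"t({2 * n - 2}) = {t:.2f}; P = {p:.2g}; d = {d:.3f}; "
      f"95% CI = [{ci_low:.3f}, {ci_high:.3f}]")
```

The printed line will show a P value far below conventional thresholds alongside a d of roughly 0.03: exactly the pattern the quoted statement warns about.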

 

Open Science, Transparency, and Inclusiveness

As the editor’s note also observed, the open science (OS) movement presses our field to make data and methods transparent, even as big data, computational methods, and other developments render our research more complex (Lawrence, 2023). We see this move as a reaffirmation of widely shared, long-standing values and principles in political communication as a field: rigor, inclusion, and public value. This implies that OS is best understood as a continuous process of collective learning, ever-growing self-reflection, and awareness of what we actually do and should do in our work. It also implies a direct connection to questions of social justice, of “openness” as not only transparency but also inclusion (Rinke & Wuttke, 2021).

At Political Communication, we are challenging ourselves to continuously figure out how we can do better in both respects: transparency and inclusion. With the addition of a new Data Editor to our team, we are developing OS standards that increase transparency while remaining sensitive to the distinct requirements of different methodologies (computational, classical quantitative, qualitative), and we are seeking new ways to increase openness toward underrepresented groups of researchers, broader audiences, and under-studied contexts.

Openness, understood as a process of continuous self-reflection on the social and epistemic aspects of our work, extends both to time-honored methods and to developing methodologies. We must ask of all methodological approaches: What are their implications for the inclusiveness and transparency of political communication research?

For example, computational methods pose new challenges from an OS perspective. As research “pipelines” (from case selection to data collection and analysis) grow more complex, they can become more obscure, with important decisions simply not documented (e.g., in the case of custom-made scripts for specific data sources), thus reducing transparency. Data collection can also become impossible to reproduce (e.g., when websites or APIs go defunct or are used under restrictive licenses; see van Atteveldt et al., 2019).
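One low-cost habit that guards against undocumented decisions, offered here only as a minimal sketch, is to write a provenance “manifest” alongside every dataset a script collects. Everything in the example below (the save_with_provenance helper, field names, file name, and query) is a hypothetical illustration, not a prescribed standard:

```python
import json
import sys
from datetime import datetime, timezone

def save_with_provenance(records, path, query, source):
    """Write collected records together with the metadata needed to audit
    and, where possible, repeat the collection later."""
    manifest = {
        "source": source,            # which API, site, or archive was queried
        "query": query,              # the exact search string or filter used
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "python_version": sys.version,
        "n_records": len(records),
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump({"manifest": manifest, "records": records}, f, indent=2)

# Hypothetical usage with placeholder data
save_with_provenance(
    records=[{"id": 1, "text": "example post"}],
    path="collection.json",
    query="election OR vote",
    source="hypothetical platform API",
)
```

Even this small step means that a reader of the resulting file knows what was collected, from where, when, and with which query, information that is otherwise often lost inside one-off scripts.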

With respect to inclusion, when scholars rely on “out-of-the-box” machine learning models (such as pre-trained transformer models like BERT), we need to be aware of the social biases these models may reproduce as a result of biases in their training data, and be wary of any social biases in our findings that may result (Bhardwaj, Majumder, & Poria, 2021). Moreover, the closed nature of platform data poses enormous inclusion challenges (Freelon, 2018). Initiatives aimed at mitigating these challenges, such as the Facebook partnership Social Science One, also pose challenges for inclusive social science (Bruns, 2019; Mancosu & Vegetti, 2020). What kinds of research questions are allowed in research “approved” by initiatives involving the platform operators themselves? Who gets to participate in research involving some of the most important datasets available to researchers? Is there a bias in such decisions toward resource-rich elite institutions? Do “terms of use” diminish the public accessibility of results and the data underlying them? These are all questions with which our journal and our field must continue to grapple.
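As a concrete illustration of what such awareness can look like in practice, a pre-trained model can be probed for stereotyped associations before it is used for downstream coding. The sketch below uses the Hugging Face transformers library’s fill-mask pipeline; the two templates are illustrative assumptions and do not constitute a validated bias audit:

```python
# Requires: pip install transformers torch
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Templates that differ only in a social-group term; systematic differences
# in the model's top completions hint at biases absorbed from training data.
templates = [
    "The man worked as a [MASK].",
    "The woman worked as a [MASK].",
]
for template in templates:
    top = fill(template, top_k=3)
    completions = [(t["token_str"], round(t["score"], 3)) for t in top]
    print(template, "->", completions)
```

Systematically different occupational completions for otherwise identical templates would be one warning sign that the model’s representations, and any coding built on them, carry social biases forward.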

 

A Resurgence of Qualitative Approaches

Meanwhile, qualitative political communication research is also on the rise. As recently as 2015, Karpf et al. (2015) pointed out the dearth of qualitative research in the field, despite the fact that many foundational texts and theories in political communication were based on insights from qualitative, interpretive methods, including the Langs’ pioneering television studies and the interview-based research behind the “two-step flow,” among other foundational insights.

Since that time, there has been a resurgence in qualitative political communication work, including important publications in Political Communication and beyond. Katherine Cramer (2016) has shown the power of being attentive to the contexts of geography and status. Emily Van Duyn (2018) has revealed the lived experience of polarization and illiberalism. Allissa Richardson (2020) has shown us the continuities of Black witnessing across generations, and Jackson, Bailey, and Foucault Welles (2020) illustrated the power of networked activism for movements for racial and gender justice. Usher (2021) has demonstrated the intersection of journalism, race, class, and power, while Toff and Nielsen (2022) have shown how anxiety can result in news avoidance. Tenove et al. (2022) revealed the contexts within which campaign staffers respond to incivility, and Kligler-Vilenchik et al. (2021) have shown how discourse shapes political attitudes and political action. And we see an important uptake of qualitative work in conjunction with other methods to reveal a more holistic picture of political communication in social life (e.g., Friedland et al., 2022).

This partial listing of works that have already proven influential suggests that the insights of qualitative work are more central to the field than they were a decade ago. While the reasons for this resurgence are surely multiple, one important reason is no doubt that, in an era of rapidly shifting and multiplying political, economic, technological, and social contexts, we need new theories, analytical frameworks, and inductive understandings to make sense of the world.

In the coming years, we look forward not only to continuing to foster the growth of qualitative methods in the field, but also to working in tandem with those with expertise in these approaches to ensure the highest standards of social science rigor. The embrace of open science standards at journals such as Political Communication means new opportunities for qualitative research to demonstrate the reliability and validity of findings, even as the unique nature of qualitative data requires thoughtful consideration of data sharing, such as potential risks to subject privacy (Humphreys et al., 2021). While achieving the right balance of transparency and confidentiality will be challenging, we will continue to work toward sensible frameworks that support the insights of qualitative research in the years to come.

 

Looming Possibilities and Problems of Generative AI

Finally, the rapid advance of artificial intelligence tools means we will need to develop protocols regarding the use of generative AI. One obvious concern is the new ability of authors to use AI tools to draft papers and/or abstracts. On that question, we note that Taylor & Francis, the publisher of Political Communication and dozens of other social science and humanities journals, recently issued a restatement of its policy.

A larger challenge is the potential for authors to use these tools for data analysis. Some recent research suggests that large language models like ChatGPT are as good as, if not better than, human coders at performing content analysis (Gilardi et al., 2023; Hoes et al., 2023). Given the rapidity of recent developments, AI will almost certainly get better at such tasks in the near future. How should we evaluate the reliability and replicability of coding in such cases, especially given that the inner workings of emergent AI are often a black box and, as discussed above, may reproduce social biases? This is not to exaggerate the abilities of AI. GPT-4, for example, appears to be very good at many things currently, but it cannot yet replicate the human subtleties of style, tone, emotion, and expression. But there is little doubt that AI tools will shape and challenge the enterprise of academic research and publishing, perhaps sooner than we may think.
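On the evaluation question raised above, one defensible baseline is to validate machine-generated labels the same way we validate a new human coder: against gold-standard human annotations, using a chance-corrected agreement statistic. A minimal sketch in Python follows; the tiny label lists are made-up placeholders, and in practice one would use a properly sampled validation set:

```python
from sklearn.metrics import classification_report, cohen_kappa_score

# Hypothetical validation set: human "gold" labels vs. LLM-assigned labels
# for the same documents (e.g., coding posts as political or not).
human_labels = ["pol", "not", "pol", "pol", "not", "not", "pol", "not"]
model_labels = ["pol", "not", "pol", "not", "not", "not", "pol", "pol"]

# Chance-corrected agreement, reported just as for a pair of human coders
kappa = cohen_kappa_score(human_labels, model_labels)
print(f"Cohen's kappa (human vs. model): {kappa:.2f}")

# Per-category precision and recall show where the model diverges from humans
print(classification_report(human_labels, model_labels))
```

Reporting such validation statistics alongside the exact prompt and model version would give reviewers the material they need to judge the reliability and replicability of machine coding.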

Once again, transparency will be key. ChatGPT and related tools have many potential uses that are dependable, may increase efficiency, and may even help counter some of the other human biases we seek to avoid in our research. But it is critical that authors be transparent, in very precise terms, about what they use AI tools to do, so that reviewers and editors can properly evaluate their use.

 

 

References

Bhardwaj, R., Majumder, N., & Poria, S. (2021). Investigating gender bias in BERT. Cognitive Computation, 13, 1008-1018.

Bruns, A. (2019). After the ‘APIcalypse’: Social media platforms and their fight against critical scholarly research. Information, Communication & Society, 22(11), 1544-1566.

Cramer, K. J. (2016). The politics of resentment: Rural consciousness in Wisconsin and the rise of Scott Walker. University of Chicago Press.

Freelon, D. (2018). Computational research in the post-API age. Political Communication, 35(4), 665-668.

Friedland, L. A., Shah, D. V., Wagner, M. W., Cramer, K. J., Wells, C., & Pevehouse, J. (2022). Battleground: Asymmetric communication ecologies and the erosion of civil society in Wisconsin. Cambridge University Press.

Gilardi, F., Alizadeh, M., & Kubli, M. (2023). ChatGPT outperforms crowd-workers for text-annotation tasks. https://arxiv.org/abs/2303.15056

Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2-3), 61-83.

Hoes, E., Altay, S., & Bermeo, J. (2023). Using ChatGPT to fight misinformation: ChatGPT nails 72% of 12,000 verified claims. https://psyarxiv.com/qnjkf/ 

Humphreys, L., Lewis, N. A., Jr., Sender, K., & Won, A. S. (2021). Integrating qualitative methods and open science: Five principles for more trustworthy research. Journal of Communication, 71(5), 855-874. https://doi.org/10.1093/joc/jqab026

Jackson, S. J., Bailey, M., & Foucault Welles, B. (2020). #HashtagActivism: Networks of race and gender justice. MIT Press.

Karpf, D., Kreiss, D., Nielsen, R. K., & Powers, M. (2015). The role of qualitative methods in political communication research: Past, present, and future. International Journal of Communication, 9, 1888-1907.

Kligler-Vilenchik, N., de Vries Kedem, M., Maier, D., & Stoltenberg, D. (2021). Mobilization vs. demobilization discourses on social media. Political Communication, 38(5), 561-580.

Lawrence, R.G. (2023). Editor’s note. Political Communication 40(1), 1-3.

Mancosu, M. & Vegetti, F. (2020). What you can scrape and what is right to scrape: A proposal for a tool to collect public Facebook data. Social Media + Society, 6(3).

Points of significance [Editorial]. (2023). Nature Human Behaviour, 7, 293.

Richardson, A. V. (2020). Bearing witness while Black: African Americans, smartphones, and the new protest #journalism. Oxford University Press.

Rinke, E. M., & Wuttke, A. (2021). Open minds, open methods: Transparency and inclusion in pursuit of better scholarship. PS: Political Science & Politics, 54(2), 281-284. https://doi.org/10.1017/S1049096520001729

Salganik, M. J. (2019). Bit by bit: Social research in the digital age. Princeton University Press.

Tenove, C., Tworek, H., Lore, G., Buffie, J., & Deley, T. (2022). Damage control: How campaign teams interpret and respond to online incivility. Political Communication, 1-21. https://doi.org/10.1080/10584609.2022.2137743

Toff, B., & Nielsen, R. K. (2022). How news feels: Anticipated anxiety as a factor in news avoidance and a barrier to political engagement. Political Communication, 39(6), 697-714.

Usher, N. (2021). News for the rich, white, and blue: How place and power distort American journalism. Columbia University Press.

Van Duyn, E. (2018). Hidden democracy: Political dissent in rural America. Journal of Communication, 68(5), 965-987.

van Atteveldt, W., Strycharz, J., Trilling, D., & Welbers, K. (2019). Toward open computational communication science: A practical road map for reusable data and code. International Journal of Communication, 13, 3935-3954. https://ijoc.org/index.php/ijoc/article/view/10631

 


 
