Wednesday, November 30, 2022

Monitoring Social Media Platforms: How Intertemporal Dynamics Affect Radicalization Research

A longitudinal perspective takes into account the changing size and topics of echo chambers over time and recognizes that radicalization is a process.

Social media platforms like Twitter have seen a continuous increase in active users in recent years (Pereira-Kohatsu et al. 2019). An average of 500 million tweets per day, combined with a low threshold for participation, leads to a high diversity of opinions (Koehler 2015). Platforms such as Facebook, YouTube and Instagram record even more activity, with growth rates increasing over time (Dixon 2022a, Dixon 2022b). Furthermore, Twitter, like other social media platforms, is not to be understood as one singular social network but as a collection of social sub-networks that enable users to exchange information with each other. Some of these sub-networks are so-called echo chambers (Bright 2017). Echo chambers can arise through an accumulation of thematically related comments, replies, likes and followers on social media platforms. Users usually participate in echo chambers that correspond with their own opinion, and the so-called echo arises. Since most social media platforms allow their users to switch quickly and easily from one social sub-network to another (Prior 2005), it can be assumed that echo chambers shape the inherent and intertemporal dynamics of social media platforms, whereby the topics and the intensity of communication about these topics change over time. Within an echo chamber one’s own opinion is confirmed, and this confirmation bias can distort the perception of social phenomena outside of a social media platform (Cinelli et al. 2021; Jacobs & Spierings 2018). It has already been shown that these confirmation biases within echo chambers – especially those with political agendas – can lead to a gradual escalation from radical to extreme to anti-constitutional opinions (O’Hara & Stevens 2015).

However, according to Neumann (2013), extreme and anti-constitutional opinions are context-specific and must be assessed against the accepted socio-political realities of the observed society. Extremism as a phenomenon emerges from the process of radicalization over time and can be divided into cognitive and violent extremism, which can ultimately endanger the life, freedom and rights of others (Wiktorowicz 2005; Neumann et al. 2018). The process of radicalization is particularly favored by the fact that echo chambers enable the continuing defamation of dissenters, and in some cases these defamation strategies also aim at political influence (Glaser & Pfeiffer 2017). Some specific forms of negative communication are called hate speech and aim at the exclusion of individuals or groups because of their ethnicity, sexual orientation, gender identity, disability, religion or political views (Pereira-Kohatsu et al. 2019; Warner & Hirschberg 2012). According to Kay (2011) and Sunstein (2006), extremist networks show a low tolerance toward individuals and groups who think differently and are generally less cosmopolitan. Consequently, hate speech as well as radicalizing elements appear in increasing numbers on social media platforms (Reichelmann et al. 2020; Barberá et al. 2015), and social media platforms are often accused of providing a stage for polarizing, racist, antisemitic or anti-constitutional content (Awan 2017; Gerstenfeld et al. 2003). This content is usually freely accessible to children and young people, and it seems that a small minority of extremists is able to shape and exploit the intertemporal dynamics of social media platforms in order to spread their point of view beyond their echo chamber (Machackova et al. 2020). One could also say that these users seem to have mastered the rules of social media platforms.

Longitudinal analyses as a methodological approach

Social media elements such as comments, replies, likes and followers are used in radicalization research to investigate communication patterns within social networks and social sub-networks. They allow researchers to focus on the role of individual users and on the influence that the content of their social media-based behavior might have on the underlying structures of a social network or social sub-network (Klinkhammer 2020, Wienigk & Klinkhammer 2021). Some research methodologists have assumed, not only in the context of radicalization research, that social media platforms could become a sensor of the real world and provide important information for criminological investigations and predictions (Scanlon & Gerber 2015; Sui et al. 2014). Corresponding research has been published by the German Police University (Hamachers et al. 2020), and five studies represent the scientific efforts regarding the identification of hate speech and extremism on social media platforms (Charitidis et al. 2020; Mandl et al. 2019; Wiegand et al. 2018; Bretschneider & Peters 2017; Ross et al. 2017). Some of these papers rely on mathematical and statistical methods to identify hate speech and extremism. So far, regression and classification models are most commonly used in machine learning-based approaches, a few approaches are based on simple neural networks (Schmidt & Wiegand 2017), and more sophisticated approaches make use of convolutional neural networks (Hamachers et al. 2020).
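To illustrate the kind of classification models referenced above, a minimal bag-of-words classifier can be sketched in a few lines. The training comments, labels and word statistics below are invented purely for illustration and bear no relation to the data or methods of the cited studies:

```python
# Minimal sketch of a bag-of-words text classifier (Naive Bayes with
# Laplace smoothing), the simplest of the model families surveyed above.
# All comments and labels are hypothetical toy data.
import math
from collections import Counter, defaultdict

TRAIN = [
    ("we must exclude them from our country", "hateful"),
    ("they do not belong here and should leave", "hateful"),
    ("great discussion thanks for sharing", "neutral"),
    ("interesting article I learned a lot", "neutral"),
]

def train(samples):
    """Count per-class word frequencies and class priors."""
    counts = defaultdict(Counter)
    priors = Counter()
    for text, label in samples:
        priors[label] += 1
        counts[label].update(text.split())
    vocab = {w for c in counts.values() for w in c}
    return counts, priors, vocab

def classify(text, counts, priors, vocab):
    """Return the most probable class for a comment (log-space to avoid underflow)."""
    total = sum(priors.values())
    best, best_lp = None, -math.inf
    for label in priors:
        n = sum(counts[label].values())
        lp = math.log(priors[label] / total)
        for w in text.split():
            lp += math.log((counts[label][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = train(TRAIN)
print(classify("they should leave our country", *model))  # → hateful
```

Real systems in the cited work use far richer features and models; the point here is only the shared structure of such classifiers: counted features, a fitted model, a per-comment decision.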

While it is methodologically feasible to count the number of hate speech-associated comments and radicalizing elements and, for example, to study the impact of anti-hate laws on social media platforms by using semi-automated and merely descriptive approaches, automated identification without human supervision has proven to be error-prone. For example, means and variances used as reference values in many of these cross-sectional approaches lead to a correct identification only in the short term (Klinkhammer 2020). The same approaches can produce false positive or false negative results when conducted again at a later point in time. For example, at a certain point in time a user writes an above-average number of comments, where the average is derived from the patterns of communication within the user’s echo chamber. At a later point in time this average may have changed, so that the same user can no longer be considered above average. In this example the quantitative indicators have changed, but not the attitude the user expressed in a comment. This poses a challenge for the monitoring of social media platforms and shall be further illustrated with an application example: With regard to the identitarian movement in Germany, it could be shown that the blocking of accounts of extremist users, as provided for by a newly drafted anti-hate law, led to a temporary reduction in hate speech and extremist content. However, this only applied immediately after the accounts had been blocked. Only a short time later, followers who had not been blocked mobilized, increased their social media activity and switched to different sub-networks and echo chambers. As a result, there were more hate speech and extremist comments than before the blocking (Wienigk & Klinkhammer 2021). These intertemporal dynamics could only be discovered by using a longitudinal approach.
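The pitfall of shifting reference values can be sketched as follows; all comment counts are hypothetical and chosen only to show how an unchanged user can be flagged at one point in time but not at another:

```python
# Sketch of the cross-sectional pitfall described above: a user is flagged
# relative to the echo chamber's mean at time t1, but the identical count
# falls back within the normal range once the chamber's average has risen.
from statistics import mean, stdev

def is_above_average(user_count, chamber_counts, threshold=1.0):
    """Flag a user whose activity exceeds mean + threshold * sd of the chamber."""
    m, s = mean(chamber_counts), stdev(chamber_counts)
    return user_count > m + threshold * s

user = 40  # comments written by the observed user (unchanged over time)

chamber_t1 = [10, 12, 15, 9, 14, 11]   # chamber activity at time t1
chamber_t2 = [35, 42, 38, 45, 40, 37]  # same chamber later: overall activity rose

print(is_above_average(user, chamber_t1))  # → True  (flagged at t1)
print(is_above_average(user, chamber_t2))  # → False (same behavior, no flag at t2)
```

The user's attitude and output are identical at both points in time; only the reference distribution has moved, which is exactly the false-negative mechanism the paragraph describes.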

Furthermore, in a cross-sectional approach, the fact that someone writes more comments, gives and receives many replies and likes, and has a large number of followers does not necessarily indicate that a radicalization process has started or is ongoing, even if the content is primarily polarizing, racist, antisemitic or anti-constitutional. This might be due to the fact that narratives and counter narratives tend to clash on social media platforms, especially in the course of interventions such as the application of anti-hate laws. A longitudinal perspective, for example, reveals an increased scattering within the patterns of communication as a reaction to counter narratives. As a result, the amplitude of the intertemporal dynamics is influenced as well. Although this effect does not seem to be permanent, it tends to disguise relevant actors within the increased scattering (Figure 1).
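The temporary increase in scattering described above can be made visible with a simple rolling measure of dispersion; the daily comment counts below are invented for illustration:

```python
# Sketch of the longitudinal view: a rolling standard deviation over daily
# comment counts exposes a temporary scattering peak (all counts hypothetical).
from statistics import stdev

daily_counts = [12, 14, 13, 15, 12,   # stable phase
                30, 5, 28, 3, 33,     # clash of narratives: high scattering
                14, 13, 15, 12, 14]   # scattering subsides again

def rolling_sd(xs, window=5):
    """Standard deviation over a sliding window of daily counts."""
    return [stdev(xs[i:i + window]) for i in range(len(xs) - window + 1)]

dispersion = rolling_sd(daily_counts)
print(max(dispersion) > 3 * dispersion[0])  # → True: the peak dwarfs the stable phase
```

A single cross-sectional snapshot taken during the middle phase would misread the inflated variance as a lasting property of the echo chamber.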

[Figure 1: Increased scattering within the patterns of communication as a reaction to counter narratives]

Therefore, cross-sectional analyses, as they are commonly conducted in radicalization research today, may lack reliability as a scientific research criterion. Again, the decisive factor could be the intertemporal dynamics of social media platforms (Klinkhammer 2022; Grogan 2020). Accordingly, given the changing size and topics of echo chambers over time, and considering that radicalization is a process, a longitudinal perspective seems advisable (Greipl et al. 2022).

Intertemporal dynamics: Light and shadow for radicalization research

Taking into account the permeability of echo chambers on social media platforms and the resulting intertemporal dynamics, a longitudinal approach seems necessary in order to depict process-based phenomena in the context of radicalization research. A longitudinal analysis of tweets collected on Jan. 6, 2021, the day the U.S. Capitol in Washington was stormed, was able to depict these intertemporal dynamics. The aim was to answer the question of whether Trumpists, Republicans and Democrats can be identified over the course of the day based on their social media behavior. Available retrospective data made it possible to reconstruct the course of the day on social media platforms precisely, but supporters and opponents of this political event turned out to be more similar in their patterns of communication than expected (Klinkhammer 2022). In fact, they turned out to be so similar that it was almost impossible to differentiate them solely on the basis of their quantitative characteristics. In detail, the social media-based behavior of supporters and opponents seems to vary only within the same range, or as statisticians would say: over time it varies within the inherent confidence interval of a social media platform. This is because the intertemporal dynamics are affected by political events and, vice versa, by the corresponding social media comments, replies, likes and followers. As a result, in this example, the quantitative characteristics of Trumpists, Republicans and Democrats turned out to be quite similar regarding the storming of the U.S. Capitol in Washington.
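The overlap described above can be sketched with a simple confidence interval comparison; the hourly tweet counts for both camps below are hypothetical numbers, not the study's data:

```python
# Sketch of "varying within the same range": 95% confidence intervals for
# the mean hourly activity of two camps overlap, so the groups cannot be
# separated on quantitative activity alone. All counts are invented.
from math import sqrt
from statistics import mean, stdev

def ci95(xs):
    """Approximate 95% confidence interval for the mean (normal approximation)."""
    m = mean(xs)
    half = 1.96 * stdev(xs) / sqrt(len(xs))
    return m - half, m + half

supporters = [52, 61, 58, 49, 63, 57, 55, 60]  # hypothetical hourly counts
opponents  = [50, 59, 62, 54, 57, 61, 53, 56]

lo_s, hi_s = ci95(supporters)
lo_o, hi_o = ci95(opponents)
print(max(lo_s, lo_o) < min(hi_s, hi_o))  # → True: the intervals overlap
```

With overlapping intervals, any threshold drawn on activity alone would misclassify members of both camps, which is why the analysis had to fall back on qualitative inspection.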

This leads to the assumption that if a political event elicits increased activity on social media platforms from one side, it appears to do the same for the other side. Accordingly, the intertemporal dynamics create synchronous highs and lows regarding that political event and its representation on social media platforms. This influence is not exclusively due to political or similar events: topics with different patterns of communication, like sexual content, can significantly influence the intertemporal dynamics as well, as they not only affect one echo chamber but can spread throughout the social media platform as a whole. As a result, the permeability of social media platforms like Twitter and the interaction between different echo chambers affect the intertemporal dynamics not only globally (Cinelli et al. 2021) but also partially within the echo chambers. Thereby, phenomena relevant to radicalization research risk being overshadowed by other political events, topics and patterns of communication. The assumption that users who support such events can be identified by above-average quantitative characteristics would therefore be wrong. Furthermore, it would be wrong to use means and variances – the values most commonly used in social media-based radicalization research – without considering the intertemporal dynamics framed by the context. This could result in false-positive identifications in the context of radicalization research.

As a result, longitudinal analyses based solely on quantitative characteristics seem less suitable for the targeted identification of individual users on social media platforms, but more suitable for depicting a development over time within echo chambers and on social media platforms as a whole. This is in accordance with the findings of Grogan (2020) as well as the suggestion by Greipl et al. (2022) to conduct longitudinal analyses in radicalization research, albeit cautiously and prudently. Intertemporal dynamics and ongoing developments can already be mapped almost in real time via longitudinal analyses, which offers the possibility for qualitative inspections of social media comments – and such inspections seem necessary. Accordingly, the importance of qualitative perspectives was appropriately emphasized in the anthology of Hamachers et al. (2020), yet many contributions turn out to be cross-sectional and exclusively quantitative. Finally, the question arises of whether the similarities found between supporters and opponents of the storming of the U.S. Capitol in Washington are not merely a result of the predefined structures of social media platforms, which specify the same input format for all users and may thus have contributed to this challenge all along. Accordingly, profound social media monitoring should always address the question of whether repeated measurements would enable similar insights and comparable conclusions. The current state of research raises doubts.

 

Sources
Awan, I. (2017): “Cyber-Extremism: Isis and the Power of Social Media.” Society, 54 (3). Online: https://link.springer.com/article/10.1007/s12115-017-0114-0
Barberá, P.; Jost, J.; Nagler, J.; Tucker, J.; & R. Bonneau (2015): “Tweeting From Left to Right: Is Online Political Communication More Than an Echo Chamber?” Psychological Science, 26 (10). Online: https://doi.org/10.1177/0956797615594620
Bretschneider, U. & R. Peters (2017): “Detecting Offensive Statements towards Foreigners in Social Media.” International Conference on System Sciences: http://dx.doi.org/10.24251/HICSS.2017.268
Bright, J. (2017): “Explaining the emergence of echo chambers on social media: the role of ideology and extremism.” Online: https://arxiv.org/abs/1609.05003
Charitidis, P.; Doropoulos, S.; Vologiannidis, S.; Papastergiou, I., & S. Karakeva (2020): “Towards countering hate speech against journalists on social media.” Online Social Networks and Media, 17. Online: https://arxiv.org/abs/1912.04106
Cinelli, M.; Morales, G. D. F.; Galeazzi, A.; Quattrociocchi, W. & M. Starnini (2021): “The echo chamber effect on social media.” Online: https://doi.org/10.1073/pnas.2023301118
Dixon, S. (2022a): “Number of global social network users 2018-2027.” Statista. Online: https://www.statista.com/statistics/278414/number-of-worldwide-social-network-users/
Dixon, S. (2022b): “Most popular social networks worldwide as of January 2022, ranked by number of monthly active users.” Statista. Online: https://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/
Gerstenfeld, P.; Grant, D. & C.-P. Chiang (2003): “Hate Online: A Content Analysis of Extremist Internet Sites.” Analyses of Social Issues and Public Policy, 1. Online: https://doi.org/10.1111/j.1530-2415.2003.00013.x
Glaser, S. & T. Pfeiffer (2017): “Erlebniswelt Rechtsextremismus: modern – subversiv – hasserfüllt. Hintergründe und Methoden für die Praxis der Prävention.” 5. Auflage. Wochenschau. Frankfurt am Main.
Greipl, S.; Hohner, J.; Schulze, H. & D. Rieger (2022): “Radikalisierung im Internet: Ansätze zur Differenzierung, empirische Befunde und Perspektiven zu Online-Gruppendynamiken.” In: MOTRA-Monitor 2021. Bundeskriminalamt. Wiesbaden.
Grogan, M. (2020): “NLP from a time series perspective. How time series analysis can complement NLP.” Towards Data Science. Online: https://towardsdatascience.com/nlp-from-a-time-series-perspective-39c37bc18156
Hamachers, A.; Weber, K. & S. Jarolimek (2020): “Extremistische Dynamiken im Social Web.” Verlag für Polizeiwissenschaft. Frankfurt am Main.
Jacobs, K. & N. Spierings (2018): “A populist paradise? Examining populists’ Twitter adoption and use.” Information, Communication & Society, 22 (12). Online: https://doi.org/10.1080/1369118X.2018.1449883
Kay, J. (2011): “Among the Truthers: A Journey Through America’s Growing Conspiracist Underground.” HarperCollins. New York.
Klinkhammer, D. (2020): “Analysing Social Media Network Data with R: Semi-Automated Screening of Users, Comments and Communication Patterns.” Online: https://arxiv.org/abs/2011.13327
Klinkhammer, D. (2022): “Longitudinal Sentiment Analyses for Radicalization Research: Intertemporal Dynamics on Social Media Platforms and their Implications.” Online: https://arxiv.org/abs/2210.00339
Koehler, D. (2015): “The Radical Online. Individual Radicalization Processes and the Role of the Internet.” Journal for Deradicalization, 15 (1), 116 – 134.
Machackova, H.; Blaya, C.; Bedrosova, M.; Smahel, D. & E. Staksrud (2020): “Children’s experiences with cyberhate.” Online: https://www.lse.ac.uk/media-and-communications/assets/documents/research/eu-kids-online/reports/eukocyberhate-22-4-final.pdf
Mandl, T.; Modha, S.; Majumder, P.; Patel, D.; Dave, M.; Mandlia, C. & A. Patel (2019): “Overview of the HASOC track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages.” 11th Forum for Information Retrieval Evaluation. Online: https://doi.org/10.1145/3368567.3368584
Neumann, P. (2013): “The Trouble with Radicalization.” International Affairs, 89 (4). Online: https://doi.org/10.1111/1468-2346.12049
Neumann, P.; Winter, C.; Meleagrou-Hitchens, A.; Ranstorp, M. & L. Vidino (2018): “Die Rolle des Internets und sozialer Medien für Radikalisierung und Deradikalisierung.” PRIF Report, 9.
O’Hara, K., & D. Stevens (2015): “Echo Chambers and Online Radicalism: Assessing the Internet’s Complicity in Violent Extremism.” Policy & Internet, 7 (4). Online: https://doi.org/10.1002/poi3.88
Pereira-Kohatsu, J. C.; Quijano-Sánchez, L; Liberatore, F. & M. Camacho-Collados (2019): “Detecting and Monitoring Hate Speech in Twitter.” Sensors, 19 (21).
Prior, M. (2005): “News vs. Entertainment: How Increasing Media Choice Widens Gaps in Political Knowledge and Turnout.” American Journal of Political Science, 49 (3), 577 – 592.
Reichelmann, A.; Hawdon, J.; Costello, M.; Ryan, J.; Blaya, C.; Llorent, V.; Oksanen, A.; Räsänen, P. & I. Zych (2020): “Hate Knows No Boundaries: Online Hate in Six Nations.” Online: https://doi.org/10.1080/01639625.2020.1722337
Ross, B.; Rist, M.; Carbonell, G.; Cabrera, B.; Kurowsky, N. & M. Wojatzki (2017): “Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis.” University of Duisburg-Essen Press. Duisburg-Essen.
Scanlon, J. & M. S. Gerber (2015): “Forecasting violent extremist cyber recruitment.” IEEE Transactions on Information Forensics and Security, 10 (11). Online: http://dx.doi.org/10.1109/TIFS.2015.2464775
Schmidt, A. & M. Wiegand (2017): “A Survey on Hate Speech Detection using Natural Language Processing.” 5th International Workshop on Natural Language Processing for Social Media. Online: http://dx.doi.org/10.18653/v1/W17-1101
Sui, X.; Chen, Z.; Wu, K.; Ren, P.; Ma, J. & F. Zhou (2014): “Social media as sensor in real world: Geolocate user with microblog.” Communications in Computer and Information Science, 496. Online: http://dx.doi.org/10.1007/978-3-662-45924-9_21
Sunstein, C. (2006): “Infotopia: How Many Minds Produce Knowledge.” Oxford University Press. Oxford.
Warner, W. & J. Hirschberg (2012): “Detecting Hate Speech on the World Wide Web.” Proceedings of the Second Workshop on Language in Social Media. Online: https://aclanthology.org/W12-2103/
Wiegand, M.; Siegel, M. & J. Ruppenhofer (2018): “Overview of the GermEval 2018 Shared Task on the Identification of Offensive Language.” Saarbrücken: University of Saarland Press. Saarbrücken.
Wienigk, R. & D. Klinkhammer (2021): “Online-Aktivitäten der Identitären Bewegung auf Twitter – Warum Kontensperrungen die Anzahl an Hassnachrichten nicht reduzieren.” Forum Kriminalprävention. Online: https://www.forum-kriminalpraevention.de/online-aktivitaeten-der-identitaeren-bewegung.html
Wiktorowicz, Q. (2005): “Radical Islam Rising: Muslim Extremism in the West.” Rowman & Littlefield. London.
Dennis Klinkhammer
Dennis Klinkhammer is a Social Data Scientist and Professor for Empirical Research at the FOM University of Applied Sciences. He also teaches Data Science at the University of Cologne and offers workshops for doctoral candidates at RWTH Aachen as well as for the Network Terrorism Research in Germany. In addition to his academic teaching, he advises public and governmental organizations on the application of multivariate statistics and the limitations of artificial intelligence by providing introductions to Python and R.
