
Conservatives and Twitter Bots

Michael W. Kearney

Whether social media has an effect on public opinion and political elections is no longer in question. Research suggests exposure to and use of social media can affect where people stand on certain issues (Messing & Westwood, 2012; Holt, Shehata, Stromback, & Ljungberg, 2013), their favorability toward political figures (Barnidge, Gil de Zuniga, & Diehl, 2017), and their likelihood of voting (Boulianne, 2015; Gil de Zuniga, Jung, & Valenzuela, 2012; Gil de Zuniga, Molyneux, & Zheng, 2014; Kim, Hsu, & Gil de Zuniga, 2013). Social media’s influence is not entirely surprising given 69% of adults in the United States have at least one social media account and 66% report getting some of their news from social media (Shearer & Gottfried, 2017). But the widespread use of social media has yet to translate into a representative snapshot of public opinion (Mellon & Prosser, 2017).

Although social media use among Americans has become the norm, opinions expressed on social media are still not representative of opinions found in the general public. For instance, we know social media discussions are disproportionately composed of people who are highly educated, male, and white (Hargittai, 2018). We also know that much of social media activity—at any given time or on any given topic—is disproportionately driven by a handful of highly active or highly influential accounts (Dang-Xuan, Stieglitz, Wladarsch, & Neuberger, 2013; Weeks, Ardevol-Abreu, & Gil de Zuniga, 2017). Despite its representative shortcomings, however, social media activity often gets interpreted and used as a barometer of public opinion (DiGrazia, McKelvey, Bollen, & Rojas, 2013; Gleason, 2010).

The combination of the growing role of social media in the dissemination of news (Vis, 2013) and the rise of automated accounts on social media platforms (Ferrara, Varol, Davis, Menczer, & Flammini, 2016) has naturally given rise to concerns about the manipulation of our political landscape via inauthentic, automated accounts on social media (Ehrenberg, 2012). To date, there have been numerous studies examining political user networks on social media (Barbera, 2014; Lee, Choi, Kim, & Kim, 2014; Lee, 2016), and research on the subject of automation and manipulation of information on social media has examined political rumors (Shin, Jian, Driscoll, & Bar, 2016), indicators of influence (Haustein et al., 2016), and distortion of political discussion (Bessi & Ferrara, 2016; Dickerson, Kagan, & Subrahmanian, 2014; Ratkiewicz et al., 2011). But relatively little has been done to describe the extent to which political user networks vary in terms of their connections and interactions with automated users. Accordingly, the purpose of this chapter is to explore the extent to which automated, or “bot,” accounts exist in different political user networks on a major social media platform—Twitter.

Bots and Conservative Versus Liberal User Networks

There is no intrinsic reason that conservative user networks would be especially vulnerable or welcoming to bots on Twitter. Political ideologies are, after all, provisional snapshots of sociopolitical norms. Over time, political ideologies often shift and change—even in ways that many would consider contradictory. With that said, in the current political and cultural moment, there are reasons to suspect that American-centric conservative user networks are more likely to connect and interact with Twitter bots.

American-centric conservative user networks on Twitter may be more likely to include bots than liberal or politically moderate user networks because users identifying as nonmainstream, or low-status in the context of political media, are more susceptible to persuasion and exploitation by relatively unknown or nontraditional sources. At present, American conservative identity is frequently framed as outside the “mainstream”—especially as it relates to media coverage. It makes sense that this status imbalance—i.e., perceived underrepresentation in “mainstream” channels of information—would leave the low-status group more open and willing to accept information from nontraditional or relatively unknown digital entities. This line of reasoning is also consistent with recent research, which found that conservatives were more vulnerable to misinformation due to the structure of their network and information systems and their historical use of social media (McCright & Dunlap, 2017; Tucker et al., 2018).

Conservative user networks may also attract more anonymous accounts, which may, in turn, be more likely to connect with bots than user networks with less anonymous users. At least in recent history, extreme conservative views have often been portrayed as “reactionary” and, as a consequence, criticized for being close-minded and outdated. It makes sense, then, that views more likely to be perceived as offensive are more likely to come from anonymous accounts. And, because, in theory, anonymous web users experience less social pressure than non-anonymous users, it seems reasonable to assume they would be more willing to interact with bots. These patterns would explain why, for example, bots in the 2016 election were more conservative and/or pro-Trump and why conservative users were more likely to retweet posts by bots (Badawy, Ferrara, & Lerman, 2018). This study therefore theorizes the following:

  • Hypothesis: Conservative user networks on Twitter will include more bots than will liberal or politically moderate user networks.

Detecting Twitter Bots

Scholars have taken a number of different approaches to detecting and examining fake and/or automated accounts on social media (Xiao, Freeman, & Hwa, 2015; Chu, Gianvecchio, Wang, & Jajodia, 2012). However, exporting these approaches and/or reproducing the human labor used to power the classification of automated accounts in these studies remains unrealistic. Fortunately, there is an alternative to relying on potentially outdated lists of automated accounts and labor-intensive classification systems. By leveraging a user-driven labeling system built into Twitter’s platform (i.e., publicly available Twitter “lists”), it is possible to identify clusters of accounts that have been similarly categorized—e.g., labeled as “bots” or with other relatively meaningful words used to name lists.
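To make the list-based labeling concrete, the sketch below shows one way such labels might be collected. It assumes an authenticated rtweet session; lists_memberships() returns the public lists an account has been added to, and the "bot" keyword match is purely illustrative rather than the chapter's actual procedure.

```r
## Sketch: count how many public "bot"-labeled lists an account appears on.
## Assumes an authenticated rtweet session; the keyword rule is illustrative.
library(rtweet)

count_bot_lists <- function(screen_name) {
  memberships <- lists_memberships(screen_name, n = 200)
  # Crude label: how many of the account's list memberships mention "bot"?
  sum(grepl("bot", memberships$name, ignore.case = TRUE))
}

## e.g., count_bot_lists("some_account") returns the number of "bot" lists
```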

Method

Data for this study came from the friends (i.e., accounts followed) of 3,000 randomly sampled users (Kearney, 2017) drawn from a frame comprised of all of the followers of 12 accounts chosen to represent three political groups—conservatives (@DRUDGE_REPORT, @foxnewspolitics, @SarahPalinUSA, and @seanhannity), liberals (@Salon, @HuffPostPol, @paulkrugman, and @maddow), and political moderates (@AMC_TV, @AmericanIdol, @SInow, and @survivorcbs). Sampling proceeded in two stages: an initial, larger sample was filtered to exclude inactive, exceptionally active, or exceptionally popular users, and a second stage then yielded a final sample of 1,000 users each for conservatives, liberals, and moderates.
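A rough sketch of this two-stage procedure is given below. It assumes an authenticated rtweet session and older rtweet column names (user_id, statuses_count, followers_count); the sample sizes at stage one and the filtering cutoffs are illustrative stand-ins, not the chapter's exact criteria.

```r
## Sketch of the two-stage sampling design (cutoffs are assumptions, not the
## chapter's exact values). Assumes an authenticated rtweet session.
library(rtweet)

seeds <- list(
  conservative = c("DRUDGE_REPORT", "foxnewspolitics", "SarahPalinUSA", "seanhannity"),
  liberal      = c("Salon", "HuffPostPol", "paulkrugman", "maddow"),
  moderate     = c("AMC_TV", "AmericanIdol", "SInow", "survivorcbs")
)

sample_group <- function(handles, n_final = 1000) {
  # Stage 1: build a frame from followers of the group's seed accounts
  frame  <- unique(unlist(lapply(handles, function(h) get_followers(h, n = 75000)$user_id)))
  stage1 <- sample(frame, 5000)          # initial, larger sample (size assumed)
  info   <- lookup_users(stage1)
  # Filter out inactive, hyperactive, or very popular users (cutoffs assumed)
  keep <- info$statuses_count > 10 & info$statuses_count < 25000 &
          info$followers_count < 10000
  # Stage 2: draw the final sample from the filtered pool
  sample(info$user_id[keep], n_final)
}

## e.g., sampled_ids <- lapply(seeds, sample_group)
```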

To make it easier to manage Twitter’s rate limits and to focus on reasonably meaningful or central nodes in political user networks, the final sample consisted of the Twitter accounts (N = 6,761) followed by more than 20 of the sampled users (as described in the previous paragraph). Each of these accounts was assigned to the political group supplying the largest number of its politically conservative, liberal, or moderate followers (e.g., an account followed by 50 liberals, 30 conservatives, and 20 moderates was categorized as “liberal”).
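The following sketch illustrates the follower-count threshold and the plurality assignment rule. It assumes a data frame `sampled` with columns user_id and group (e.g., built from the stage-two samples above) and omits rate-limit handling.

```r
## Sketch of the thresholding and plurality-assignment step. Assumes a data
## frame `sampled` (user_id, group) and older rtweet column names; rate-limit
## handling for ~3,000 get_friends() calls is omitted for brevity.
library(rtweet)

edges <- do.call(rbind, lapply(seq_len(nrow(sampled)), function(i) {
  data.frame(group  = sampled$group[i],
             friend = get_friends(sampled$user_id[i])$user_id)
}))

## Cross-tabulate followed accounts by the group of the follower
tab   <- table(edges$friend, edges$group)
nodes <- tab[rowSums(tab) > 20, ]   # keep accounts followed by > 20 sampled users

## Plurality rule: 50 liberals / 30 conservatives / 20 moderates -> "liberal"
assigned <- colnames(nodes)[max.col(as.matrix(nodes))]
names(assigned) <- rownames(nodes)
```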

Data

The final dataset consisted of 6,761 observations: 2,088 accounts followed most frequently by liberals, 2,310 accounts followed most frequently by moderates, and 2,363 accounts followed most frequently by conservatives. All data were collected from Twitter’s REST API in R (R Core Team, 2015) using the rtweet package (Kearney, 2016). Data returned by the REST API also included user-level meta information about each of the users. This user-level data, along with the 100 most recent tweets posted by each user, was then fed to the tweetbotornot package (Kearney, 2018) to estimate the probability that each account was a bot.
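In code, that pipeline might look roughly like the sketch below. It assumes the botornot() interface shown in the tweetbotornot README, which gathers each user's profile data and recent tweets itself and returns an estimated bot probability per account; the merge keys are assumptions.

```r
## Sketch: estimate bot probabilities for the retained accounts, assuming
## the botornot() interface from the tweetbotornot README.
library(rtweet)
library(tweetbotornot)  # remotes::install_github("mkearney/tweetbotornot")

users <- lookup_users(names(assigned))   # user-level metadata for the N = 6,761
probs <- botornot(users$screen_name)     # returns screen_name + prob_bot per user

dat <- merge(users, probs, by = "screen_name")
dat$group <- assigned[match(dat$user_id, names(assigned))]
```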

In addition to the bot probabilities and the associated political groups and sampling-linked accounts, several of the user-level variables were also retained and used as covariates throughout the analyses. Due to the varying scales and count-based nature of many of the variables, all covariates except “Profile URL” were logged and then normalized (mean-centered and divided by the standard deviation) prior to modeling. Descriptive statistics for the final sample can be seen in Table 9.1.
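A minimal version of this transformation is sketched below. The column names are assumed (derived from the rtweet user data), and log1p() is used here so that zero counts remain defined.

```r
## Sketch of the covariate transformation: log, then mean-center and divide
## by the standard deviation. The binary profile-URL indicator is left as-is.
log_norm <- function(x) as.numeric(scale(log1p(x)))  # log1p handles zeros

covars <- c("account_age", "statuses_count", "favourites_count",
            "followers_count", "friends_count", "description_chars",
            "location_chars")  # assumed/derived column names
dat[covars] <- lapply(dat[covars], log_norm)
```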

Results

In this study it was hypothesized that conservative user networks on Twitter would include more bots than would liberal or politically moderate user networks. To test this hypothesis, bot probabilities were estimated for the accounts followed by more than 20 randomly sampled conservative, liberal, and moderate users.

TABLE 9.1 Descriptive Statistics

| Variable        | Mean    | SD      | Min  | Median | Max        |
|-----------------|---------|---------|------|--------|------------|
| Account age     | 6.20    | 2.22    | .24  | 6.96   | 10.48      |
| Favorites (k)   | 16.36   | 42.87   | .00  | 2.97   | 904.87     |
| Followers (k)   | 1072.71 | 4362.19 | 6.57 | 212.81 | 106,927.34 |
| Friends (k)     | 17.66   | 59.68   | .00  | 1.35   | 1,664.73   |
| Description     | 100.75  | 49.10   | .00  | 111.00 | 177.00     |
| Location        | 10.85   | 8.73    | .00  | 12.00  | 142.00     |
| Profile URL     | .79     | .41     | .00  | 1.00   | 1.00       |
| Liberal         | .31     | .46     | .00  | .00    | 1.00       |
| Moderate        | .34     | .47     | .00  | .00    | 1.00       |
| Conservative    | .35     | .48     | .00  | .00    | 1.00       |
| Bot probability | .53     | .33     | .00  | .52    | 1.00       |

A simple inspection of the mean bot probabilities—by ideology and by sampling-linked source accounts—yields at least initial support for the hypothesis. The mean bot probability was highest (.62) for accounts followed by moderate users. Accounts followed by conservative users were the next highest (.56). Finally, the accounts followed by liberal users had the lowest mean bot probability (.43). This pattern can also be seen in Table 9.2, which contains the mean bot probabilities of accounts followed by users in each of the sampling-linked source accounts.
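This comparison of group means is straightforward to reproduce, assuming a data frame `dat` containing the estimated probabilities (prob_bot) and assigned group labels from the earlier sketches:

```r
## Mean bot probability by assigned political group
## (the chapter reports roughly .62 / .56 / .43)
aggregate(prob_bot ~ group, data = dat, FUN = mean)
```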

Of course, mean estimates, by themselves, do not communicate uncertainty and can be subject to systematic sources of variation in the sample. To conduct a more formal and robust test of the hypothesis, four quasi-binomial linear models were also estimated to predict the probabilities of accounts being bots. In addition to making it easy to test hypotheses about individual parameters, linear models also allow for the inclusion of other covariates.

To further isolate the unique contributions of the predictor variables, the first model, Model 1, contains only covariates—account age, statuses, favorites, followers, friends, the number of characters used in the user “description” field (bio statements), the number of characters used in the user “location” field, and whether users included a URL in their profile. Model 2 adds the political grouping variables. Model 3 adds an interaction between statuses and account age (rate of activity). And, finally, Model 4 adds an interaction between the number of friends and followers (friend-follower ratios). Model coefficients for all models can be found in Table 9.3. For the sake of interpretability, estimates from ordinary least squares (OLS) versions of the models, which yielded similar results to those provided by the generalized linear models, can be found in Table 9.4.
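A sketch of the four models, using the assumed variable names from the earlier snippets and liberal as the reference category, might look as follows:

```r
## Sketch of the four quasi-binomial models predicting bot probabilities.
dat$group <- relevel(factor(dat$group), ref = "liberal")

f1 <- prob_bot ~ account_age + statuses_count + favourites_count +
  followers_count + friends_count + description_chars + location_chars +
  profile_url

m1 <- glm(f1, family = quasibinomial, data = dat)        # Model 1: covariates only
m2 <- update(m1, . ~ . + group)                          # Model 2: + political group
m3 <- update(m2, . ~ . + account_age:statuses_count)     # Model 3: + activity rate
m4 <- update(m3, . ~ . + followers_count:friends_count)  # Model 4: + ratio term
summary(m4)
```

The quasi-binomial family suits a proportion-valued outcome like an estimated probability, since it does not require integer successes and failures the way a strict binomial model would.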

TABLE 9.2 Mean Bot Probabilities of Accounts Followed by Users in Each of the Sampling-Linked Source Accounts

| Sample Origin   | Bot Probability |
|-----------------|-----------------|
| AMC_TV          | .63             |
| SarahPalinUSA   | .62             |
| seanhannity     | .61             |
| DRUDGE_REPORT   | .56             |
| SInow           | .54             |
| survivorcbs     | .54             |
| foxnewspolitics | .47             |
| maddow          | .47             |
| paulkrugman     | .45             |
| Salon           | .42             |
| HuffPostPol     | .37             |

TABLE 9.3 Model Coefficients Predicting the Probability of Accounts Being Bots

| Predictor               | M1 Estimate (S.E.) | M2 Estimate (S.E.) | M3 Estimate (S.E.) | M4 Estimate (S.E.) |
|-------------------------|--------------------|--------------------|--------------------|--------------------|
| Intercept               | .40*** (.04)       | .11* (.05)         | .11* (.05)         | .03 (.05)          |
| Account age             | -.30*** (.02)      | -.26*** (.02)      | -.25*** (.02)      | -.24*** (.02)      |
| Statuses                | -.22*** (.02)      | -.22*** (.02)      | -.22*** (.02)      | -.23*** (.02)      |
| Favorites               | -.18*** (.02)      | … (.02)            | -.16*** (.02)      | -.16*** (.02)      |
| Followers               | .46*** (.02)       | .53*** (.02)       | .53*** (.02)       | .50*** (.02)       |
| Friends                 | … (.02)            | .40*** (.02)       | .40*** (.02)       | .40*** (.02)       |
| Description             | -.06*** (.02)      | -.06*** (.02)      | -.06*** (.02)      | -.07*** (.02)      |
| Location                | -.16*** (.02)      | … (.02)            | -.17*** (.02)      | -.17*** (.02)      |
| Profile URL             | -.34*** (.04)      | … (.04)            | -.30*** (.04)      | -.25*** (.04)      |
| Moderate                |                    | .18*** (.04)       | .18*** (.04)       | .16*** (.04)       |
| Conservative            |                    | .56*** (.04)       | .57*** (.04)       | … (.04)            |
| Account age * statuses  |                    |                    | .05** (.02)        | .07*** (.02)       |
| Followers * friends     |                    |                    |                    | -.24*** (.02)      |
| N                       | 6761               | 6761               | 6761               | 6761               |
| Deviance                | 2906.08            | 2842.63            | 2838.63            | 2776.13            |
| χ²                      | 615.35***          | 678.80***          | 682.80***          | 745.30***          |


As can be seen in Table 9.3, conservative users were significantly more likely to follow bots than were liberal users, and the effect held with the covariates and the addition of multiple interaction terms. For a visual representation of the estimated bot probabilities, see Figure 9.1. As the figure illustrates, the difference in means between liberal and conservative users is clear, but the range of probabilities is still quite large.

TABLE 9.4 Estimates from Ordinary Least Squares (OLS) Versions of the Models

| Predictor               | M1 Estimate (S.E.) | M2 Estimate (S.E.) | M3 Estimate (S.E.) | M4 Estimate (S.E.) |
|-------------------------|--------------------|--------------------|--------------------|--------------------|
| Intercept               | … (.01)            | .52*** (.01)       | .59*** (.01)       | .50*** (.01)       |
| Account age             | -.07*** (.00)      | -.06*** (.00)      | -.06*** (.00)      | -.05*** (.00)      |
| Statuses                | -.05*** (.00)      | -.05*** (.00)      | -.05*** (.00)      | -.05*** (.00)      |
| Favorites               | -.04*** (.00)      | -.04*** (.00)      | -.04*** (.00)      | -.04*** (.00)      |
| Followers               | .10*** (.00)       | .12*** (.01)       | .12*** (.01)       | .11*** (.00)       |
| Friends                 | .10*** (.00)       | … (.00)            | .09*** (.00)       | .09*** (.00)       |
| Description             | -.01** (.00)       | -.01** (.00)       | -.01** (.00)       | -.01** (.00)       |
| Location                | -.04*** (.00)      | -.04*** (.00)      | -.04*** (.00)      | -.04*** (.00)      |
| Profile URL             | -.08*** (.01)      | -.07*** (.01)      | -.07*** (.01)      | -.06*** (.01)      |
| Moderate                |                    | .04*** (.01)       | .05*** (.01)       | .04*** (.01)       |
| Conservative            |                    | .13*** (.01)       | .13*** (.01)       | .11*** (.01)       |
| Account age * statuses  |                    |                    | .01* (.00)         | .01** (.00)        |
| Followers * friends     |                    |                    |                    | -.05*** (.00)      |
| N                       | 6761               | 6761               | 6761               | 6761               |
| RMSE                    | .30                | .29                | .29                | .29                |
| R²                      | .20                | .21                | .22                | .23                |
| Adj. R²                 | .19                | .21                | .21                | .23                |

This finding is also supported by the R² estimates in the OLS models, which make it possible to infer the proportion of explained variation unique to the partisan grouping variables. Based on the change in R² estimates between models with and without the political grouping variables, the political orientation of the user network uniquely explains roughly 2% of the variation in bot probabilities between accounts. So, while the results consistently support the hypothesis, they also signal caution for interpretations that overlook the sheer amount of variation in estimated bot probabilities.
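Continuing the model sketch above (reusing f1 and dat), the unique contribution of the grouping variables can be approximated by the change in R² between OLS models fit without and with them:

```r
## Unique variance attributable to the political grouping variables,
## approximated via the change in R^2 between nested OLS models
ols1 <- lm(f1, data = dat)             # covariates only
ols2 <- update(ols1, . ~ . + group)    # adds the grouping variables
summary(ols2)$r.squared - summary(ols1)$r.squared  # roughly .02 per the chapter
```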


FIGURE 9.1 Estimated Bot Probabilities by Political User Network Among Users Followed by Each Group

Discussion

The goal of this chapter was to examine differences in the distribution of automated, bot-like accounts by political user network. Earlier in the chapter, it was theorized that conservative user networks would be more likely to include bots than liberal networks because of their perceived status as counter to the “mainstream media” and because there are sociopolitical reasons to expect anonymity to be more of a norm in conservative networks. To test the hypothesis, Twitter users were randomly sampled from the followers of well-known accounts used to represent politically liberal, conservative, and moderate groups. The accounts followed by more than 20 of these randomly sampled users were assigned to the political group supplying the most of their followers and then compared in relation to their estimated probabilities of being Twitter bots.

At every level, the results supported the hypothesis that conservative users would be more connected to bots and bot-like accounts than liberal users. This was evident in a comparison of the mean bot probabilities as well as in the model coefficients of the quasi-binomial regression models, which also controlled for multiple other user-related covariates. The results provide empirical support for the theory that the current configuration of conservative user networks reflects anti-mainstream sentiment (at least in relation to the media) and exhibits lower standards for connecting with relatively less-known and/or more anonymous accounts.

While the findings provide evidence in support of the hypothesis, there are several reasons to be cautious about the results presented in this chapter. First, it is hard to properly detect bots. Bots and bot-like accounts are frequently suspended, removed, or deactivated; the definition of “bot” is far from settled; and the line between organizational and/or semi-automated accounts and humans is terribly unclear. Second, the use of Twitter lists to identify potential bots—or, in the case of tweetbotornot, to train an algorithm to detect bots—undoubtedly reflects systematic biases that we do not yet fully understand. At the same time, however, these lists may also be the only viable way to capture Twitter-specific consensus or conventions. In other words, it may not be far off to say that the unknown systematic sources of variance shaping patterns of Twitter list use may actually be unique and valued effects of the platform itself—e.g., list use may reflect users managing timeline or friend/follower dynamics (lists offer a way to keep up with other accounts without cluttering up timeline feeds or inflating friend-to-follower ratios). Finally, there is no intrinsic reason that conservative user networks would be more welcoming of bots or contain more of them. Like most differences associated with political orientations, these patterns are subject to the current sociopolitical foundations that shape the political system.

Discussion Questions

  • 1. At the time this chapter was written, 69% of adults in the United States had at least one social media account and 66% reported getting some of their news from social media. Do you anticipate these numbers will increase or decrease over time? Why?
  • 2. American conservatives frequently identify as outside the mainstream. What do you think that means for how they process other types of messages (outside of tweets)?
  • 3. Anonymity appears to be more of a norm on conservative (than liberal) Twitter. What are some advantages and disadvantages of this pattern for the Republican Party? For the United States, more broadly?

References

Badawy, A., Ferrara, E., & Lerman, K. (2018). Analyzing the digital traces of political manipulation: The 2016 Russian interference Twitter campaign. arXiv preprint arXiv:1802.04291.

Barbera, P. (2014). How social media reduces mass political polarization: Evidence from Germany, Spain, and the US. Job Market Paper, New York University, 46.

Barnidge, M., Gil de Zuniga, H., & Diehl, T. (2017). Second screening and political persuasion on social media. Journal of Broadcasting & Electronic Media, 61(2), 309-331. doi:10.1080/08838151.2017.1309416

Bessi, A., & Ferrara, E. (2016). Social bots distort the 2016 U.S. Presidential election online discussion. First Monday, 21(11). Retrieved from http://firstmonday.org/ojs/index.php/fm/article/view/7090

Boulianne, S. (2015). Social media use and participation: A meta-analysis of current research. Information, Communication & Society, 18(5), 524-538. doi:10.1080/1369118X.2015.1008542

Chu, Z., Gianvecchio, S., Wang, H., & Jajodia, S. (2012). Detecting automation of Twitter accounts: Are you a human, bot, or cyborg? IEEE Transactions on Dependable and Secure Computing, 9(6), 811-824. Los Alamitos, CA: IEEE Computer Society Press. doi:10.1109/TDSC.2012.75

Dang-Xuan, L., Stieglitz, S., Wladarsch, J., & Neuberger, C. (2013). An investigation of influentials and the role of sentiment in political communication on Twitter during election periods. Information, Communication & Society, 16(5), 795-825. doi:10.1080/1369118X.2013.783608

Dickerson, J. P., Kagan, V., & Subrahmanian, V. S. (2014). Using sentiment to detect bots on Twitter: Are humans more opinionated than bots? 2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2014, pp. 620-627).

DiGrazia, J., McKelvey, K., Bollen, J., & Rojas, F. (2013). More tweets, more votes: Social media as a quantitative indicator of political behavior. PLoS One, 8(11), e79449. doi:10.1371/journal.pone.0079449

Ehrenberg, R. (2012). Social media sway: Worries over political misinformation on Twitter attract scientists’ attention. Science News, 182(8), 22-25. doi:10.1002/scin.5591820826

Ferrara, E., Varol, O., Davis, C., Menczer, F., & Flammini, A. (2016). The rise of social bots. Communications of the ACM, 59(7), 96-104. New York, NY: ACM. doi:10.1145/2818717

Gil de Zuniga, H., Jung, N., & Valenzuela, S. (2012). Social media use for news and individuals’ social capital, civic engagement and political participation. Journal of Computer-Mediated Communication, 17(3), 319-336. doi:10.1111/j.1083-6101.2012.01574.x

Gil de Zuniga, H., Molyneux, L., & Zheng, P. (2014). Social media, political expression, and political participation: Panel analysis of lagged and concurrent relationships. Journal of Communication, 64(4), 612-634. doi:10.1111/jcom.12103

Gleason, S. (2010). Harnessing social media: News outlets are assigning staffers to focus on networking. American Journalism Review, 32(1), 6-8.

Hargittai, E. (2018). Potential biases in big data: Omitted voices on social media. Social Science Computer Review, 38(1), 10-24. doi:10.1177/0894439318788322

Haustein, S., Bowman, T. D., Holmberg, K., Tsou, A., Sugimoto, C. R., & Lariviere, V. (2016). Tweets as impact indicators: Examining the implications of automated “bot” accounts on Twitter. Journal of the Association for Information Science and Technology, 67(1), 232-238. doi:10.1002/asi.23456

Holt, K., Shehata, A., Stromback, J., & Ljungberg, E. (2013). Age and the effects of news media attention and social media use on political interest and participation: Do social media function as leveler? European Journal of Communication, 28(1), 19-34. doi:10.1177/0267323112465369

Kearney, M. W. (2016). rtweet: Collecting Twitter data. Comprehensive R Archive Network. Retrieved from https://cran.r-project.org/

Kearney, M. W. (2017). A network-based approach to estimating partisanship (Dissertation, pp. 15-30). Lawrence, KS: University of Kansas.

Kearney, M. W. (2018). tweetbotornot: Detecting Twitter bots. Retrieved from https://tweetbotornot.mikewk.com

Kim, Y., Hsu, S.-H., & Gil de Zuniga, H. (2013). Influence of social media use on discussion network heterogeneity and civic engagement: The moderating role of personality traits. Journal of Communication, 63(3), 498-516. doi:10.1111/jcom.12034

Lee, F. L. (2016). Impact of social media on opinion polarization in varying times. Communication and the Public, 1(1), 56-71. doi:10.1177/2057047315617763

Lee, J. K., Choi, J., Kim, C., & Kim, Y. (2014). Social media, network heterogeneity, and opinion polarization. Journal of Communication, 64(4), 702-722. doi:10.1111/jcom.12077

McCright, A. M., & Dunlap, R. E. (2017). Combatting misinformation requires recognizing its types and the factors that facilitate its spread and resonance. Journal of Applied Research in Memory and Cognition, 6(4), 389-396. doi:10.1016/j.jarmac.2017.09.005

Mellon, J., & Prosser, C. (2017). Twitter and Facebook are not representative of the general population: Political attitudes and demographics of British social media users. Research & Politics, 4(3). doi:10.1177/2053168017720008

Messing, S., & Westwood, S. J. (2012). Selective exposure in the age of social media: Endorsements trump partisan source affiliation when selecting news online. Communication Research, 41, 1042-1063. doi:10.1177/0093650212466406

R Core Team. (2015). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. Retrieved from www.R-project.org/

Ratkiewicz, J., Conover, M., Meiss, M., Goncalves, B., Flammini, A., & Menczer, F. (2011). Detecting and tracking political abuse in social media. Retrieved from www.aaai.org/ocs/index.php/ICWSM/ICWSM11/paper/view/2850

Shearer, E., & Gottfried, J. (2017). News use across social media platforms 2017. Pew Research Center, Journalism and Media. Retrieved from www.journalism.org/2017/09/07/news-use-across-social-media-platforms-2017/

Shin, J., Jian, L., Driscoll, K., & Bar, F. (2016). Political rumoring on Twitter during the 2012 U.S. presidential election: Rumor diffusion and correction. New Media & Society, 1-22. doi:10.1177/1461444816634054

Tucker, J., Guess, A., Barbera, P., Vaccari, C., Siegel, A., Sanovich, S., & Stukal, D. (2018). Social media, political polarization, and political disinformation: A review of the scientific literature. Palo Alto, CA: William and Flora Hewlett Foundation. Retrieved from https://hewlett.org/wp-content/uploads/2018/03/Social-Media-Political-Polarization-and-Political-Disinformation-Literature-Review.pdf

Vis, F. (2013). Twitter as a reporting tool for breaking news: Journalists tweeting the 2011 UK riots. Digital Journalism, 1(1), 27-47. doi:10.1080/21670811.2012.741316

Weeks, B. E., Ardevol-Abreu, A., & Gil de Zuniga, H. (2017). Online influence? Social media use, opinion leadership, and political persuasion. International Journal of Public Opinion Research, 29(2), 214-239.

Xiao, C., Freeman, D. M., & Hwa, T. (2015). Detecting clusters of fake accounts in online social networks. Proceedings of the 8th ACM Workshop on Artificial Intelligence and Security, AISec’15 (pp. 91-101). New York, NY: ACM. Retrieved from http://doi.acm.org/10.1145/2808769.2808779
