In August 2018, the cybersecurity firm FireEye released a report detailing coordinated information operations on various social media platforms, including Twitter, targeting audiences in the U.S., U.K., Latin America, and the Middle East. The researchers traced the campaign to Iran, attributing it primarily to the Islamic Republic of Iran Broadcasting (we will refer to them as IRIB), a prominent Iranian state media corporation. The campaign consisted of a system of connected social media accounts that promoted several viewpoints of interest to Iran, such as:
Messaging that promoted a negative view of the Saudi Arabian government and its involvement in foreign conflicts.
Messaging that promoted the Palestinian side of the Israeli-Palestinian conflict, which dates back to the mid-20th century.
Messaging that denounced unfavorable policies by the Trump administration, as well as direct attacks on the President.
In response to this (and other foreign influence campaigns), Twitter disclosed, in October 2018, a dataset of accounts and tweet activity directly linked to the Iranian campaign.
This interactive report explores this Twitter dataset (specifically, the subset of English-language tweets), investigating what kinds of messages were being communicated to the targets of these operations, and how that messaging influenced the accounts' ability to gain a following and spread their ideas. The goal of this report is to give the reader a better sense of the kinds of messaging indicative of these information operations and to encourage them to think more critically about their information consumption.
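Extracting the English-language subset is a straightforward filtering step. The sketch below illustrates it on a small hypothetical sample; the column names (`tweet_language`, `tweet_text`) are assumptions about the released dataset's schema, not a confirmed specification.

```python
import pandas as pd

# Hypothetical sample mirroring the general shape of the released dataset;
# column names here are assumptions, not the exact published schema.
tweets = pd.DataFrame({
    "tweetid": [1, 2, 3, 4],
    "tweet_language": ["en", "fa", "en", "ar"],
    "tweet_text": ["hello", "سلام", "world", "مرحبا"],
})

# Keep only the English-language subset analyzed in this report.
english = tweets[tweets["tweet_language"] == "en"]
print(len(english))  # prints 2 for this sample
```

In practice the same filter would be applied after loading the full CSV release with `pd.read_csv`.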
Using a strategy called astroturfing, the IRIB promoted its viewpoints under the guise of fake organizations, usernames, contact information, and stock images. In this way, it was able to spread messages to its target audiences via a faux grassroots campaign, steadily growing the popularity of its posts and accounts from 2012 to 2018:
Gaining a following requires momentum. To gain influence, the IRIB likely leveraged current events to frame the discourse around topics already on the minds of Twitter users. The following interactive visualization shows possible evidence of this strategy at play:
Here we can see how the campaign grew over time, and how news events (annotated with circles) may have helped spur increases in Twitter interaction (reflected in spikes in reply, like, quote, and retweet counts). These associations may be confounded by lurking variables (e.g., higher news coverage increases overall Twitter activity, which in turn inflates interaction metrics), but they do allow us to begin thinking like an information operative and to consider the opportunism involved in garnering attention on social media.
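The interaction time series underlying a chart like this can be built by summing per-tweet engagement and aggregating by month. This is a minimal sketch on hypothetical records; the engagement column names (`like_count`, `retweet_count`, etc.) are assumptions about the dataset's schema.

```python
import pandas as pd

# Hypothetical tweet-level engagement records; column names are assumptions.
df = pd.DataFrame({
    "tweet_time": pd.to_datetime(
        ["2017-01-05", "2017-01-20", "2017-02-10", "2017-03-02"]),
    "like_count": [10, 5, 40, 2],
    "retweet_count": [3, 1, 20, 0],
    "reply_count": [1, 0, 5, 0],
    "quote_count": [0, 0, 2, 0],
})

# Total interactions per tweet, then summed per month: spikes in this
# series are what the visualization annotates with news events.
df["interactions"] = df[["like_count", "retweet_count",
                         "reply_count", "quote_count"]].sum(axis=1)
monthly = df.set_index("tweet_time")["interactions"].resample("MS").sum()
print(monthly)
```

On the real dataset, spikes in `monthly` would be the candidates for annotation against a news-event timeline.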
We invite the reader to explore the relationship between time and tweet interactions by:
From the events in the last visualization, we may be able to infer broadly what subjects these operatives were discussing, but what exactly were they saying? What were these accounts actually tweeting that so many users found worth reading?
The following is a curated representation of how Twitter users experienced tweets originating from the ten accounts with the highest average follower counts. For example, the user with the most followers, السعودية تايمز, tweeted mainly about content related to the Islamic religion. To experience the kinds of tweets a random user may have encountered during the campaign's most active period (2015 to 2018), we invite the reader to:
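Ranking accounts by average follower count can be sketched as a simple group-and-sort. The data below is hypothetical, and the `userid`/`follower_count` column names are assumptions about the schema; the report's visualization keeps the top ten rather than the top two used here.

```python
import pandas as pd

# Hypothetical per-tweet snapshots of each account's follower count;
# column names are assumptions about the dataset's schema.
df = pd.DataFrame({
    "userid": ["a", "a", "b", "b", "c"],
    "follower_count": [100, 120, 900, 1100, 50],
})

# Rank accounts by their average follower count across their tweets,
# then keep the top N (the report uses N = 10).
top = (df.groupby("userid")["follower_count"]
         .mean()
         .sort_values(ascending=False)
         .head(2))
print(top)
```

Averaging across a tweet's lifetime smooths out follower-count fluctuations between an account's earliest and latest tweets.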
Now that we have a sense of the type of content the major players were tweeting, let’s take a step back and see which topics were popular items of discussion throughout the campaign, and how successful they were at garnering likes, retweets, and so on.
The following gives a similar sense as the previous visualization of the experience Twitter users may have had during this campaign, but augments the view to shed light on the topics, hashtags, and tweets that gained the most traction (sorting tweets from most to fewest likes). For example, in the Israel category, one recurring topic was the Israeli-Palestinian conflict, particularly territorial claims to Jerusalem. We invite the reader to discover these recurring themes, and how successful they were at garnering attention, by filtering by a set of topics (categories we assigned manually based on subject-matter similarity among the 50 most prevalent hashtags), as well as by the five most frequent hashtags within each category (sorted from most to least frequent).
Finally, having examined the topics and hashtags that gained the most likes, let’s examine which hashtags were most prevalent during the campaign, and how this changed over time. Looking at hashtag frequency across the years in isolation, we can begin to see which discussions the IRIB focused on promoting at different points in time.
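Counting hashtags per year can be sketched as exploding each tweet's hashtag list and grouping by year. The sample data is hypothetical, and representing `hashtags` as a list column is an assumption (in the released CSVs the field may arrive as a string that first needs parsing).

```python
import pandas as pd

# Hypothetical tweets with their hashtags; storing hashtags as Python
# lists is an assumption about how the raw field has been parsed.
df = pd.DataFrame({
    "tweet_time": pd.to_datetime(["2016-05-01", "2016-06-01",
                                  "2017-03-01", "2017-04-01"]),
    "hashtags": [["Yemen"], ["Yemen", "Palestine"],
                 ["Trump"], ["Trump"]],
})

# One row per (tweet, hashtag), then counts per year: this is the series
# behind a "hashtag prevalence over time" view.
per_year = (df.explode("hashtags")
              .assign(year=lambda d: d["tweet_time"].dt.year)
              .groupby(["year", "hashtags"])
              .size())
print(per_year)
```

Sorting each year's counts descending and truncating (e.g., at the report's 500-count threshold) would yield the per-year hashtag rankings the visualization displays.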
The years with the highest activity (2017-2018) reflect another important finding of the FireEye investigation: the operations were fully mobilized after the 2016 election. This lends credence to the effect of current events explored in the second visualization.
We invite the reader to explore how the main tweet topics changed over time by:
The Iranian influence campaign is an example of how social media gives foreign actors access to the discourse of other nation-states. From possibly taking advantage of current events to focusing on specific topics at different points in time, the IRIB was able to tap into a network of fake but influential accounts to spread its ideas.
We’ll never know exactly how much influence the Iranian campaign had on political discourse. However, we do know that in a democracy, information operations of this scale have the potential to distort perceptions of the true issues within a society, calling upon us to think twice about the information, and the actors, on these social media platforms.
Note: In making this project we had to make some assumptions about the data (e.g., that likes correspond to influence, despite the possibility of bot networks and within-campaign liking and retweeting), as well as several subjective decisions in thresholding (e.g., the 500-count cutoff for the final visualization) and in giving meaning to the data (e.g., the manually created categories for the second-to-last visualization). Our findings should be interpreted as primarily exploratory, suggesting possible directions of inquiry for more rigorous analyses.