The Internet Research Agency (IRA), funded by friends of Russian intelligence, used social media to try to influence the 2016 US election, and did so in an elaborate and systematic fashion. While the number of purchased ads and the money spent on Facebook were small, significant resources were devoted to the endeavor as a whole.
Its overall objectives were to explicitly help the more non-traditional candidates in the 2016 presidential campaign (p 23 of the Mueller Report) and, more broadly, to sow distrust in the American populace towards its institutions (p 4 of the Mueller Report). It used a variety of tools to do so, such as bots, fake Twitter accounts, and advertising on Facebook.
However, it’s unclear what the IRA optimized when they made these Facebook advertisements. Part of the issue is the difference between the outputs and the outcomes of these campaigns (terminology taken from RAND’s Hostile Social Manipulation report). Outputs are the observable metrics that can be tied to a Facebook campaign (e.g. likes, impressions) while outcomes are the desired changes in public opinion the advertisements are trying to produce. Since the outcomes are unobservable, the IRA would have had to use the outputs of these campaigns as proxies. What were their objectives while looking at these proxies?
Looking into this is interesting for a couple of reasons. The first is that learning about the IRA’s objectives might tell us how to better combat these campaigns in the future. There is no reason to believe the IRA or other nations will stop attempting to manipulate populations through social media, so this problem will only get bigger and any research might help. Another reason is that, as a data scientist myself, I could learn something about how to advertise more effectively.
This analysis will attempt to answer what particular objectives the IRA were looking to optimize given the information they had available for Facebook ads: Clicks, Impressions, and Costs. I've found evidence that the IRA adapted its ad placement to be more cost effective in terms of click-through rates, suggesting they tried to optimize some sort of Clicks/Cost metric.
General Idea of Analysis
The general idea is to think like a data scientist. If I were trying to optimize something I would look at the performance of previous campaigns and make future decisions according to my objectives. The decisions in this case are what kind of ads to deploy to which people. While I do have the text associated with each ad, I will ignore that information for now and focus only on whom the ad targeted (Facebook allows a marketer to target based on just about any demographic, interest, or behavior; focusing on this should make the problem more tractable).
I try to simulate this activity in two steps for a given time period.
1 - The first is to select a period in time and build ridge regressions on five available metrics (Clicks, Costs, and Impressions, plus the interactions Clicks/Costs and Impressions/Costs, which I added because I thought they'd be useful) using ad features (targets like 'users between ages of 18-65' or 'liked MLK') as explanatory variables. Each explanatory variable therefore has 5 regression coefficients.
2 - If they were updating ads based on previous information, I’d expect to find some correlations between previous performance (the estimated coefficients of step 1) and future decisions. Therefore I treat these regression coefficients as explanatory variables in a second step, regressing the number of times each feature occurred in ads in the subsequent period on them. A sketch of both steps follows.
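To make the two steps concrete, here is a minimal Python sketch. The dataframe layout, the column names, and the ridge penalty are all assumptions for illustration, not the actual pipeline:

```python
import pandas as pd
from sklearn.linear_model import Ridge

# Assumed layout: one row per ad, a 0/1 column per targeting feature
# (e.g. 'age_18_65', 'liked_mlk'), and one numeric column per metric.
METRICS = ["clicks", "costs", "impressions",
           "clicks_per_cost", "impressions_per_cost"]

def step1_coefficients(ads, target_cols):
    """Step 1: ridge-regress each of the five metrics on the targeting
    features; returns a (feature x metric) table of coefficients."""
    X = ads[target_cols].to_numpy()
    coefs = {m: Ridge(alpha=1.0).fit(X, ads[m]).coef_ for m in METRICS}
    return pd.DataFrame(coefs, index=target_cols)

def step2_counts(next_period_ads, target_cols):
    """Step 2 response: how many ads in the subsequent period used each
    targeting feature."""
    return next_period_ads[target_cols].sum()
```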
An elaboration of this method and what it is trying to capture is in order. The IRA may explore different subsets of people to target ads to and adapt as they gain more information. They can see for themselves which features are positively or negatively correlated with metrics, something we are trying to recreate in step one. The IRA would then make decisions on future ads accordingly.
Suppose the IRA was interested in maximizing impressions without regard to costs or clicks. In the second step we would then expect to see a positive relationship between a feature's impressions coefficient and its subsequent usage, but no relationship with its coefficients for costs or clicks. That's the theory, anyway.
I do not expect this to be the ‘true’ data generating process. All I’m trying to do is get features that can roughly capture the efficacy of ad targeting and the subsequent decision making. To some degree this is a kind of hacky way of doing Inverse Reinforcement Learning.
Data And EDA
The US government and Facebook found and released ads that were purchased by the IRA. The data I used was found at https://russian-ira-facebook-ads.datasettes.com/, where the metadata was cleaned up very nicely. However, only those ads paid for in rubles (yes, they didn't hide their tracks too much) had cost data, so I subsetted the data to those observations. In addition, I focused on the 2016 election and its immediate aftermath, i.e. dates between 2016-01-01 and 2017-02-01. This left 1,296 ads in the data set.
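A minimal sketch of that subsetting, assuming a CSV export from the datasette and hypothetical column names (the real schema differs):

```python
import pandas as pd

# The datasette at https://russian-ira-facebook-ads.datasettes.com/ can
# export the ads table as CSV; the file and column names below are
# assumptions, not the real schema.
ads = pd.read_csv("ads.csv", parse_dates=["ad_creation_date"])

ruble_ads = ads[ads["ad_spend_currency"] == "RUB"]
subset = ruble_ads[
    ruble_ads["ad_creation_date"].between("2016-01-01", "2017-02-01")
]
print(len(subset))  # 1,296 ads in the subset used here
```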
Below is a pairs plot of all output metrics in log scale.
[Figure: Pairs Plot of Metrics]
One thing to note is the frequency of ‘horizontal’ bands of points for Costs. Since these are clustered around integer values, they are most likely due to the ‘max spend’ limits Facebook allows purchasers to set.
The highest correlation is between Impressions and Clicks. The correlation between Impressions and Costs is also high, most likely because the IRA chose to be charged by impressions rather than by clicks, which is the other option.
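For reference, a log-scale pairs plot and the corresponding correlations can be produced roughly like this (column names again assumed, carried over from the earlier sketch):

```python
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

# log1p guards against zero-valued metrics before taking logs
log_metrics = np.log1p(subset[["clicks", "impressions", "costs"]])

print(log_metrics.corr())  # Impressions vs Clicks comes out highest
sns.pairplot(log_metrics)
plt.show()
```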
For targeting data I used only those targets with more than 10 observations, which left 240 targets. Below is a plot of the most common targets.
After some generic ones (NewsFeed, Desktop, English), one can see they focused heavily on African Americans. In a previous blog post I mentioned how these ads really took off in the summer and fall of 2016, right after Manafort gave the Russians some as-yet-undisclosed polling data (but that's another story).
Results
For a particular date I looked at ads that were deployed in the 28 days up to and including that date for step 1) described above. I then looked at the subsequent 7 days and counted the number of times each topic was deployed in an ad. I did this calculation for every week between 2016-01-01 and 2017-02-01. There are 57 overall time periods and 240 features; combined, we have 13,680 ‘observations’ of regression coefficients along with subsequent-week counts. The choices of a 28-day lookback for step 1) and a 7-day look-forward for step 2) were fairly arbitrary; I did try other windows and got largely similar results.
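A rough sketch of that rolling-window loop, reusing the step functions sketched earlier (`target_cols`, the list of 240 targeting columns, is assumed to exist, as are precomputed ratio columns):

```python
import pandas as pd

records = []
for week_end in pd.date_range("2016-01-01", "2017-02-01", freq="7D"):
    # 28-day lookback window for step 1
    lookback = subset[subset["ad_creation_date"].between(
        week_end - pd.Timedelta(days=28), week_end)]
    # 7-day look-forward window for step 2
    lookahead = subset[subset["ad_creation_date"].between(
        week_end, week_end + pd.Timedelta(days=7))]

    coefs = step1_coefficients(lookback, target_cols)      # step 1
    coefs["count"] = step2_counts(lookahead, target_cols)  # step 2
    coefs["week"] = week_end
    records.append(coefs)

panel = pd.concat(records)  # 57 weeks x 240 features = 13,680 rows
```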
Below is what one time period’s coefficients look like for Costs and Clicks.
This was during the week of the 2016 election. Each point is a topic, with its estimated regression coefficient on each axis. The size is how many times that topic was targeted in the subsequent week. For example, targeting users with "United States" was associated with more costs and more clicks, and it was one of the most commonly used targets in the subsequent 7-day period. Conversely, 'Pan Africanism' is associated with relatively low costs and clicks.
I included an x=y line for clarity. One can see that most observations are above this line, indicating that some sort of clicks-per-cost metric is important.
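A minimal sketch of that scatter, drawn from the `panel` built above (the election-week date is a stand-in):

```python
import matplotlib.pyplot as plt

week = panel[panel["week"] == "2016-11-04"]  # hypothetical election week
plt.scatter(week["costs"], week["clicks"], s=10 * week["count"])

lo, hi = week["costs"].min(), week["costs"].max()
plt.plot([lo, hi], [lo, hi], "k--")  # the x = y reference line
plt.xlabel("cost coefficient")
plt.ylabel("click coefficient")
plt.show()
```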
We can see all time periods / coefficients in a pairs plot below.
The first five variables are the estimated coefficients for all targets across all time periods. The final column is each feature’s number of occurrences in the following week. The highest correlation among estimated features is between Costs and Impressions, adding further evidence that the ads were charged by impressions and not clicks.
We also see that Costs and Impressions are negatively correlated with counts while Clicks are positively correlated. I take this to mean that the IRA was interested in maximizing its non-paying metric, clicks, while minimizing costs and, consequently, impressions. Finally, the Clicks/Costs metric is slightly more correlated with counts than Impressions/Costs is.
For these reasons I ran a regression of log(counts + 1) (logged due to skewness in the data) on the Costs, Clicks, and Clicks/Costs features while dropping the Impressions features. I did so because there is too much collinearity between Impressions and Costs and I think they tracked the same thing.
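A hedged sketch of that regression in statsmodels; `current_count` (the number of times a topic was targeted in the current period) is a hypothetical column, and the other names carry over from the earlier sketches:

```python
import numpy as np
import statsmodels.formula.api as smf

# C(week) supplies the time dummies; current_count is assumed precomputed
panel["log_count"] = np.log1p(panel["count"])
fit = smf.ols(
    "log_count ~ costs + clicks + clicks_per_cost + current_count + C(week)",
    data=panel,
).fit()
print(fit.summary())
```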
Below is the regression output of those features, with time dummy features and the number of times the topic was targeted during the current time period (time features excluded from the output due to length).
[Regression output table: 13,680 observations]
This shows that the IRA was focused on minimizing costs while maximizing clicks, in broad agreement with the correlations seen in the pairs plot above.
Conclusions
Overall, it appears the IRA did try to maximize clicks on its Facebook ads while minimizing costs. I've described these results to some friends and got the response 'well, yeah, okay - that makes sense'. It's not a very surprising result; it's almost banal. But it does underscore the fact that they're using techniques similar to what I might use to help a business... it's just that they're trying to f*** with my country.
This did not answer whether the IRA changed public opinion, had a deciding influence, or 'hacked' the 2016 election. I don't think there is enough public information to answer that question. (Side rant: I bet Facebook could come up with a decent analysis of that. They know who saw the IRA's ads and posts, know similar people who didn't see them, and know where those people probably voted. Couldn't Facebook look at precinct-level results and do some sort of ecological inference?)
But even if they didn't hack the election, it seems they did some decent analysis of Facebook ads that goes above and beyond what Facebook gives its ad purchasers: Facebook does not provide performance indicators at the target level, yet the evidence presented here suggests the IRA analyzed target-level attributes anyway. They might have hacked Facebook, and that gave me an idea.
github code