Monthly Archives: October 2014

A disturbing trend of stats abuse

Recent analysis of Twitter activity around the #GamerGate hashtag has given us new insights. Unfortunately, those insights are about understanding the lengths that the critics of #GamerGate will go to in order to dismiss their opposition instead of understanding how the tag is used.

It started with an interesting snapshot from Andy Baio looking into the activity of the #GamerGate hashtag on Twitter versus the join date of the account posting.

Baio (@waxpancake) runs XOXOfest and is the former CTO of Kickstarter. It’s important to keep in mind that he is not a neutral party in the #GamerGate debate. That matters because the first question we should ask ourselves when looking at an analysis is: why now? Why is this measurement being given to us, and why does the author think it is important to know? Knowing the answer allows us to be skeptical: to ask the right questions of the subsequent analysis, and to check our understanding. If we are going to be rational, it does not allow us to dismiss data we simply do not like.

Even a simple tweet like the one above can lead to some big questions. As Joel Best, the author of many books on statistical literacy, including “Stat-Spotting: A Field Guide to Identifying Dubious Data,” puts it:

An easy way to make a statistic seem impressive is to use superlatives: ‘the greatest,’ ‘the largest,’ ‘the most,’ ‘record-setting,’ and so on. Superlatives imply comparison; that is, they suggest that someone has measured two or more phenomena and determined which is more significant.

p. 33, “Stat-Spotting”

How can a figure be qualified with “so far,” as if the analysis were incomplete, and yet claim to cover “total traffic” at the same time? It can’t. In the early phases of an analysis, analysts should be generous with qualifiers to avoid confusion, and should avoid hyperbole entirely.

As one might expect, this tweet led to a lot of questions about whether the posting from #GamerGate was due to “bots” or whether the activity was genuine. Baio followed up with a more complete analysis a few hours later:

As you can see from his analysis, the majority of the tweets are from “recent” accounts, meaning accounts created on Twitter within the last three months (when #GamerGate started). He combines the #GamerGate and #NotYourShield tags throughout his analysis. To be honest, I’m not sure why he does that, since there is plenty of data available for each tag on its own, but he is clear throughout his comments that this is the case.

Proponents of #GamerGate stated the activity was not unexpected since a lot of people had joined Twitter to participate in the discussion, especially after being censored on other sites. Opponents of #GamerGate claimed it was further evidence that the movement was illegitimate.

At least four times in that conversation, Baio was asked to do a comparison with the #StopGamerGate2014 hashtag, which is the rallying cry of the anti-#GamerGate crowd. After four days, Andy posted that he would be capturing that data. This is important to note, since he never included it in his subsequent analysis.

Prior to that time, I decided to give the #StopGamerGate2014 hashtag the same treatment. After grabbing the tweets for roughly the same length of time that Andy had used in his analysis, I found similar patterns for those using the opposing tag.


While the #StopGamerGate2014 hashtag is not nearly as popular, we see the same pattern in the join dates when compared to the tweet activity. Perhaps the confounding factor here is that new political or social movements draw many people into the conversation who are joining Twitter for the first time. My analysis did not generate much conversation, but it’s important to note that Baio was aware of it:

The request for raw data is an important step in reproducible analysis. Baio had already shared his data and after a few minutes I was able to publicly share mine as well.
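The bucketing itself is simple enough that anyone with a dump of the tweets can reproduce it. Here is a minimal sketch in Python, using entirely made-up rows and hypothetical column names (`tweet_id`, `account_created_at`); Baio’s actual dataset may be shaped differently:

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical dump: one row per tweet, carrying the posting account's
# creation date. Field names and values are illustrative only.
dump = StringIO("""tweet_id,account_created_at
1,2014-09-03
2,2014-08-21
3,2011-02-14
4,2014-09-30
5,2009-06-01
""")

def tweets_by_join_month(rows):
    """Count tweets per account-creation month (YYYY-MM)."""
    counts = Counter()
    for row in csv.DictReader(rows):
        counts[row["account_created_at"][:7]] += 1
    return counts

counts = tweets_by_join_month(dump)

# Accounts created in the months since the tag appeared count as "recent".
recent = sum(n for month, n in counts.items() if month >= "2014-08")
```

The same few lines run against two different hashtag dumps are all it takes to put the opposing tags side by side, which is why sharing the raw data matters.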

A few days after the initial exchange, Taylor Wofford of Newsweek posted an article titled: Is GamerGate About Media Ethics or Harassing Women? Harassment, the Data Shows

An astonishing claim, to be sure, and it was apparently backed up by an analysis from BrandWatch. Even a cursory glance at the data, however, raises some glaring questions about the conclusion Taylor Wofford draws. A quick glance at Wofford’s Twitter feed (@dogstoevsky) will also give you strong indications that he is not impartial about #GamerGate. Those unfamiliar with the #GamerGate controversy may even find some of his statements inciting or hateful, including a tweet he made in support of bullying during National Bullying Prevention Month. However, this behavior from reporters is nothing new to those who have been questioning reporter ethics for some time.

An in-depth analysis of Wofford’s statements in the article, as well as a closer look at the statistics, is available from @cainjw in his blog post An Actual Statistical Analysis of #GamerGate? That analysis identified many of the concerns I had with how the data was used, so I won’t repeat them here. However, it’s important to note that the damage was already done: those who needed a reason to dismiss the questions of ethics in journalism raised by proponents of #GamerGate now had their “statistical justification.”

When pressed, people like L. Rhodes were unwilling to address the inherent contradictions in the data, or the context in which it was gathered. Again, author Joel Best can help us here:

We have all counted things, so when someone estimates that some social problem affects X people, we understand this to mean that, if we had enough time and money to count every case, the total number of people affected would be roughly equal to X. It seems perfectly clear.

Alas, this clarity is often an illusion, because advocates wind up trying to inform the public by translating fairly complex research findings into clearer, more easily understood figures. These numbers are products of the researchers’ definitions and measurement choices.

p. 60, “Stat-Spotting”

What we are faced with in the Newsweek analysis is “tweets to someone” standing as a proxy for “harassment of that person.” Further, Wofford tries to make the case that not tweeting at someone implies support for their behavior: by his reasoning, #GamerGate as a whole doesn’t care about journalistic ethics because its members don’t direct the majority of their tweets at journalists. The obvious confounding factor is that the number of tweets directed at a person is strongly correlated with that person’s media appearances. Wofford makes no attempt to account for that factor.
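That confound is checkable in principle: given per-person counts of media appearances and of tweets directed at them, even a plain Pearson correlation would show whether raw mention counts can be read independently of visibility. A sketch with entirely invented numbers (no real dataset is used here):

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented illustration: media appearances per person vs. tweets
# directed at them. A strong positive r means mention counts cannot
# be read as "harassment" without first adjusting for visibility.
appearances = [1, 3, 5, 8, 12]
mentions    = [40, 90, 160, 300, 420]

r = pearson(appearances, mentions)
```

With numbers like these, r lands near 1.0, which is exactly the situation where “most tweets go to the most visible people” explains the pattern without any reference to intent.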

Fast forward to today, October 27th, and Andy Baio has additional analysis he shared with the public.

Again, it’s important to note where Andy is coming from in his analysis. In his own words, from the article:

Without question, I have a strong anti-Gamergate bias. I co-organize a festival called XOXO that invited two frequent #Gamergate targets to speak, Anita Sarkeesian and Leigh Alexander. I backed Anita’s project, and I think they both do great work. I’m also friends or acquaintances with a few dozen independent game designers, developers, and journalists, most of whom have come out publicly against Gamergate. I think the whole thing’s pretty awful, and that it has critically wounded the public perception of videogames.

That said, I think the numbers below accurately and objectively reflect the data, and the analysis I’m doing is very straightforward. You could reproduce everything with a copy of Excel. I included a dump of the complete dataset at the end of this post, and I encourage you to double-check my work.

Is it admirable to state your bias upfront? Perhaps, but to then state unequivocally that “the numbers below accurately and objectively reflect the data” is a rather odd move. In his 2001 book “Damned Lies and Statistics,” Joel Best points out that “all statistics are socially constructed.” The problem is never in the data itself, but in how it is collected, counted, defined, and presented. An analyst who strives to be honest should understand that bias is not the opposite of objectivity. Honesty in journalism and analysis comes not from pretending that the act of analysis is immune to bias, but from recognizing that our ability to think critically is what makes honest analysis possible. To claim that bias prevents us from seeing facts that would otherwise be available is to undermine the very faculty that makes us distinctly human. Pretending that objectivity is impossible and pretending one is not biased at all are two sides of the same coin.

In performing an analysis, one must understand that there should be no such thing as “a fact that contradicts your findings.” Encountering one is not only a red flag that your analysis is insufficient; it is also an indicator that your bias has prevented you from asking the right questions. It is therefore necessary to account for all the facts sufficient to make your case, including the ones that are already in your head.

While there is some room for criticism in Baio’s analysis and commentary, the most troubling statement is one in which he dismisses a fact that contradicts his findings. When commenting on the same distribution characteristic for the number of tweets compared to the account start date, Baio makes the following statement:

Is this distribution unusual, though? For contrast, I tried another hashtag for a similar length of time, the #kashmirfloods hashtag used during last month’s tragic floods that ravaged northern India. The distribution is much closer to what you’d expect: evenly distributed, roughly following Twitter’s rise in popularity.

Why Baio chose to compare a sociopolitical movement like #GamerGate to a tag used during a natural disaster is a bit of a mystery. Regardless, he had the available data and the knowledge to compare the activity of anti-#GamerGate tags to the activity of the #GamerGate tag. He cannot state that the activity is unusual since he knows it to be otherwise. When faced with a fact that contradicts his findings, Baio had this to say:

Again, we are faced with suspect comparisons: why should we expect tweets to be evenly distributed across account join dates? Why is it unusual for online movements or hashtags to display this kind of distribution? When faced with a fact that contradicts your findings, an analyst should work to integrate that fact into their understanding. What we have seen from the analysts with an admitted anti-#GamerGate bias is not an integration of the facts, but a dismissal of them.
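The like-for-like comparison Baio avoided is not hard to make explicit. One simple approach, sketched below with invented join-month counts for the two tags, is to normalize each hashtag’s join-date histogram and measure how far apart the shapes are; total variation distance is one reasonable choice (0.0 means identical shape, 1.0 means completely disjoint):

```python
def normalize(hist):
    """Convert raw counts to proportions so volumes can differ."""
    total = sum(hist.values())
    return {k: v / total for k, v in hist.items()}

def total_variation(h1, h2):
    """Total variation distance between two normalized histograms."""
    keys = set(h1) | set(h2)
    return 0.5 * sum(abs(h1.get(k, 0) - h2.get(k, 0)) for k in keys)

# Invented join-month counts. If both movement tags skew toward recent
# months, their distance stays small even when raw volumes differ by
# an order of magnitude -- the shapes, not the totals, are compared.
gg   = {"2014-07": 50, "2014-08": 300, "2014-09": 650, "older": 1000}
stop = {"2014-07": 5,  "2014-08": 30,  "2014-09": 70,  "older": 95}

d = total_variation(normalize(gg), normalize(stop))
```

If the two tags produce nearly identical shapes, as my #StopGamerGate2014 snapshot suggested, then “lots of recent accounts” is a property of movement hashtags generally, not evidence against one side.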