Michael LaCour likely faked more than just his LGBT canvassing study

Last year, Michael LaCour, a political science Ph.D. candidate at UCLA, published a widely reported study in Science indicating that single, 20-minute, one-on-one conversations between canvassers and voters could produce significant and lasting increases in support for marriage equality.

There is substantial empirical evidence that in-person canvassing can produce significant attitude changes on other issues, and those findings, along with LaCour’s, were used as “a template” in the recent Yes Equality campaign in Ireland.

However, when a group of researchers were unable to replicate LaCour’s findings, they dug a bit deeper into his work and discovered that large portions of his paper appear to be faked. Even the organization credited with funding the study has no record of involvement with LaCour or his work. This led LaCour’s faculty co-author to request that the paper be retracted, and led Science to publish an official “editorial expression of concern.”

For his part, LaCour has vowed to respond to the allegations of faked data in detail.

But LaCour is learning the hard way that one allegation of faked findings only invites further scrutiny. Yesterday, Tim Groseclose at Ricochet detailed evidence that he believes shows other instances of faked data in LaCour’s work.

Canvassers, via Wikimedia Commons

Groseclose examined two papers, one published in the Journal of Political Communication and one unpublished working paper that was presented at the most recent Midwest Political Science Association conference. While Groseclose believes that the research behind the peer-reviewed work is sound, he documents a number of irregularities in the working paper that lead him to believe that the data are pulled out of thin air.

The paper in question examines the “news diet” of voters and concludes that, contrary to conventional wisdom, voters don’t live in echo chambers; LaCour claims to show that our “news diet” is, for the most part, balanced.

Depending on which research you rely on, there’s support for some version of this conclusion. However, as Groseclose outlines, LaCour’s path to the results he claims to have found is checkered at best.

LaCour’s paper relies on a method of measuring ideology in the media that he appears not to understand. The method, previously developed by economists Matthew Gentzkow and Jesse Shapiro, constructs a list of “loaded phrases” (like “death tax” or “amnesty”) and measures how often such phrases are used in congressional speeches and news outlets. By comparing frequencies, one can then make statements like “The New York Times is approximately as liberal as a speech by Sen. Joe Lieberman” with some degree of empirical confidence. As Groseclose notes, this method was itself inspired by a method he had formulated previously, one for which Gentzkow and Shapiro give him credit.
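To make the mechanics concrete, here is a minimal Python sketch of the loaded-phrase idea. It is not Gentzkow and Shapiro’s actual estimator; the phrase list and per-party usage rates are invented for illustration. The point is only to show how comparing a text’s loaded-phrase counts against each party’s usage yields a rough slant score:

```python
# Illustrative sketch of the loaded-phrase approach, NOT Gentzkow and
# Shapiro's actual estimator. The phrases and per-party usage rates
# below are invented for demonstration purposes only.

# Relative frequency (per 10,000 words) of each phrase in each party's
# congressional speeches (hypothetical numbers).
PARTY_RATES = {
    "death tax":    {"R": 4.0, "D": 0.3},
    "estate tax":   {"R": 0.5, "D": 3.5},
    "amnesty":      {"R": 3.0, "D": 0.4},
    "undocumented": {"R": 0.2, "D": 2.8},
}

def slant_score(text: str) -> float:
    """Score a text in [-1, 1]: positive means its loaded phrases look
    more like Republican usage, negative more like Democratic usage."""
    text = text.lower()
    r_weight = d_weight = 0.0
    for phrase, rates in PARTY_RATES.items():
        count = text.count(phrase)
        r_weight += count * rates["R"]
        d_weight += count * rates["D"]
    total = r_weight + d_weight
    return 0.0 if total == 0 else (r_weight - d_weight) / total

print(slant_score("Repeal the death tax and end this amnesty."))   # ~0.82
print(slant_score("The estate tax and undocumented workers ..."))  # -0.80
```

In the real method, the phrase lists are derived from congressional speech data and the mapping to ideology is estimated statistically, and that statistical machinery is exactly what Groseclose doubts LaCour ever implemented.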

In LaCour’s paper, however, the method appears to have been tweaked manually, leading Groseclose to write: “I do not believe he actually wrote the code that would be necessary to execute his method and, accordingly, I don’t believe that he really computed the estimates that he reports from his method.”

Perhaps the simplest reason to believe that this is the case is that there doesn’t appear to be any rhyme or reason to LaCour’s confidence intervals. Basic statistical theory dictates that the larger your sample, the smaller your confidence interval should be. However, in LaCour’s work, this is not the case; the news shows with smaller sample sizes have smaller confidence intervals than shows with larger ones. One would expect anomalies like this to have a plausible explanation in the methods section, but, by Groseclose’s estimate, “approximately 90% of the first two pages [of the methods section] is a word-for-word copying of sentences from Gentzkow and Shapiro’s article;” not a sin in and of itself if the paper states that it relies on Gentzkow and Shapiro’s work, but also no help when it comes to explaining the anomalous results.
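The relationship Groseclose invokes is easy to verify. For a sample mean, the standard error is s/√n, so the width of a 95% confidence interval shrinks as the sample grows. A few lines of Python (a generic illustration, not LaCour’s data) make the pattern plain:

```python
# Why larger samples should give narrower confidence intervals: the
# standard error of a sample mean is s / sqrt(n), so a 95% interval's
# width shrinks in proportion to 1 / sqrt(n). Generic illustration,
# not LaCour's data.
import math

def ci_width_95(sample_sd: float, n: int) -> float:
    """Approximate width of a 95% normal confidence interval for a mean."""
    return 2 * 1.96 * sample_sd / math.sqrt(n)

for n in (50, 500, 5000):
    print(f"n = {n:5d}   95% CI width = {ci_width_95(1.0, n):.3f}")
# n =    50   95% CI width = 0.554
# n =   500   95% CI width = 0.175
# n =  5000   95% CI width = 0.055
```

A dataset where smaller samples somehow yield narrower intervals, as in LaCour’s tables, demands an explanation that his methods section never supplies.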

As Groseclose also notes, LaCour appears to have a loose understanding of the statistical principles he cites, mistranscribing terms like “Chi-squared” with an “x” instead of the Greek letter χ. Again, this isn’t incriminating in and of itself, but it does suggest a lack of understanding of the work LaCour is claiming a working knowledge of. As Groseclose writes: “If a researcher is not really familiar with the Chi-Squared distribution, I don’t see how he or she could fully understand the Gentzkow-Shapiro method. And if LaCour does not fully understand the Gentzkow-Shapiro method, I do not see how he could have executed the method that he describes in his paper.”

Groseclose goes on to outline other suspect elements of LaCour’s work, including lists of loaded phrases, attributed to liberals and conservatives, that remain implausibly consistent over the three-year period LaCour observes (one simple way to check that consistency is sketched below).
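A hypothetical way to quantify that consistency is to measure the year-over-year overlap of the phrase lists, for example with a Jaccard index. The lists and years below are invented for illustration; real loaded-phrase lists churn as news cycles change, so near-total overlap across three years would be surprising:

```python
# A hypothetical check on phrase-list "consistency": the Jaccard index
# (shared phrases divided by total distinct phrases) between each pair
# of consecutive years. The lists below are invented for illustration.

def jaccard(a: set, b: set) -> float:
    """Fraction of distinct phrases that appear in both lists."""
    return len(a & b) / len(a | b)

lists_by_year = {
    2010: {"death tax", "amnesty", "government takeover"},
    2011: {"death tax", "amnesty", "job creators"},
    2012: {"death tax", "job creators", "class warfare"},
}

years = sorted(lists_by_year)
for y1, y2 in zip(years, years[1:]):
    overlap = jaccard(lists_by_year[y1], lists_by_year[y2])
    print(f"{y1} -> {y2}: overlap = {overlap:.2f}")  # 0.50 in both cases here
```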

Also of note, LaCour’s method labels the Thom Hartmann show, a reliably progressive program, as having a conservative bent, which couldn’t be further from the truth. Groseclose discusses what this anomalous result means and speculates as to why LaCour declined to address it in his paper, but at the end of the day it is a sign of either methodological error or data that were simply made up.

As Groseclose notes, the simplest way to prevent issues like these in the future is for academics to provide the data and code they used to produce their results, something LaCour has yet to do. If the data weren’t faked, the same code run on the same data should produce the same output on someone else’s computer. Political science has become increasingly technical over the years, and its mathematical models have grown more and more complex. Groseclose speculates that if the major political science journals were to institute this requirement, they would find that a significant number of the papers submitted for publication, and many that have already been published, would not pass muster, because they rely on methodologies their authors don’t understand and therefore cannot use effectively.
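As a sketch of what that minimal bar looks like in practice, suppose an author releases a data file alongside the code that produced a published estimate; a reviewer should be able to rerun it and recover the reported number. The file name, column name, and published value below are all hypothetical:

```python
# Minimal sketch of a replication check. The file name, column name,
# and "published" estimate are hypothetical; the point is the workflow:
# rerun the released code on the released data, compare to the paper.
import csv
import statistics

def reproduce_estimate(data_path: str) -> float:
    """Recompute the paper's summary statistic from the released data."""
    with open(data_path, newline="") as f:
        values = [float(row["slant"]) for row in csv.DictReader(f)]
    return statistics.mean(values)

PUBLISHED_ESTIMATE = 0.512  # value reported in the paper (hypothetical)

recomputed = reproduce_estimate("released_data.csv")
tolerance = 1e-6  # allow for floating-point noise across machines
print("replicates" if abs(recomputed - PUBLISHED_ESTIMATE) < tolerance
      else "does not replicate")
```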

What’s a shame in all of this is that LaCour’s faked LGBT canvassing study is going to be used as evidence against the LGBT movement, and against the field of political science, for years to come, when in fact the discovery that LaCour made his research up is an example of the field working as intended: No matter how groundbreaking your research is, it doesn’t mean anything if no one can replicate it. The scientific community polices itself by not taking the work of others for granted, and researchers should always conduct their work under the assumption that whatever they publish will be tested elsewhere.

And in the grand scheme of things, LaCour’s fraud looks rather benign compared to other studies that have been debunked after attempted replications. After all, it’s not as if LaCour sparked a global austerity movement based on a spreadsheet error in Excel.


Jon Green graduated from Kenyon College with a B.A. in Political Science and high honors in Political Cognition. He worked as a field organizer for Congressman Tom Perriello in 2010 and a Regional Field Director for President Obama's re-election campaign in 2012. Jon writes on a number of topics, but pays especially close attention to elections, religion and political cognition. Follow him on Twitter at @_Jon_Green, and on Google+.
