Michael LaCour likely faked more than just his LGBT canvassing study

Last year, Michael LaCour, a political science Ph.D. candidate at UCLA, published a widely reported study in Science indicating that a single 20-minute, one-on-one conversation between a canvasser and a voter could produce significant and lasting increases in support for marriage equality.

There is substantial empirical evidence that in-person canvassing can produce significant attitude changes on other issues, and those findings, along with LaCour’s, were used as “a template” in the recent Yes Equality campaign in Ireland.

However, when a group of researchers were unable to replicate LaCour’s findings, they dug a bit deeper into his work and discovered that large portions of his paper appear to be faked. Even the organization credited with funding the study has no record of involvement with LaCour or his work. This led LaCour’s faculty co-author to request that the paper be retracted, and led Science to publish an official “editorial expression of concern.”

For his part, LaCour has vowed to respond to the allegations of faked data in detail.

But LaCour is learning the hard way that one allegation of faked findings only invites further scrutiny. Yesterday, Tim Groseclose at Ricochet detailed evidence that he believes shows other instances of faked data in LaCour’s work.

Canvassers, via Wikimedia Commons


Groseclose examined two papers, one published in the Journal of Political Communication and one unpublished working paper that was presented at the most recent Midwest Political Science Association conference. While Groseclose believes that the research behind the peer-reviewed work is sound, he documents a number of irregularities in the working paper that lead him to believe that the data are pulled out of thin air.

The paper in question examines the “news diet” of voters and concludes that, contrary to conventional wisdom, voters don’t live in echo chambers; LaCour claims to show that our “news diet” is, for the most part, balanced.

Depending on which research you rely on, there’s support for some version of this conclusion. However, as Groseclose outlines, the path LaCour took to the results he claims to have found is checkered at best.

LaCour’s paper relies on a method of measuring ideology in the media that he appears not to understand. The method, previously developed by economists Matthew Gentzkow and Jesse Shapiro, constructs a list of “loaded phrases” (like “death tax” or “amnesty”) and measures how often such phrases are used in congressional speeches and news outlets. By comparing frequencies, one can then make statements like “The New York Times is approximately as liberal as a speech by Sen. Joe Lieberman” with some degree of empirical confidence. As Groseclose notes, this method was itself inspired by a method that he had formulated previously, one that Gentzkow and Shapiro give him credit for.
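To make the comparison concrete, here is a minimal sketch of the kind of phrase-frequency comparison the Gentzkow-Shapiro approach builds on. The phrase lists and text snippets below are invented for illustration; the actual method uses thousands of two- and three-word phrases mined from the Congressional Record and a far more careful statistical model.

```python
import re

# Hypothetical mini phrase lists, invented for this example.
REPUBLICAN_PHRASES = {"death tax", "illegal aliens"}
DEMOCRAT_PHRASES = {"estate tax", "undocumented workers"}

def phrase_count(text, phrases):
    """Count occurrences of each multi-word phrase in lowercased text."""
    text = text.lower()
    return sum(len(re.findall(re.escape(p), text)) for p in phrases)

def slant(text):
    """Crude slant score in [-1, 1]: positive means the text leans on
    Republican-coded phrases, negative on Democrat-coded ones."""
    r = phrase_count(text, REPUBLICAN_PHRASES)
    d = phrase_count(text, DEMOCRAT_PHRASES)
    total = r + d
    return 0.0 if total == 0 else (r - d) / total

outlet_a = "Repealing the death tax would help farmers hurt by illegal aliens."
outlet_b = "The estate tax affects few families of undocumented workers."
print(slant(outlet_a))  # positive: Republican-coded language dominates
print(slant(outlet_b))  # negative: Democrat-coded language dominates
```

The real method then maps each outlet's score onto the ideology of the legislators whose speeches use the same language, which is what licenses comparisons like the Lieberman one above.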

In LaCour’s paper, however, the method appears to have been tweaked manually, leading Groseclose to write: “I do not believe he actually wrote the code that would be necessary to execute his method and, accordingly, I don’t believe that he really computed the estimates that he reports from his method.”

Perhaps the simplest reason to believe that this is the case is that there doesn’t appear to be any rhyme or reason to LaCour’s confidence intervals. Basic statistical theory dictates that the larger your sample, the narrower your confidence interval should be. In LaCour’s work, however, this is not the case; the news shows with smaller sample sizes have smaller confidence intervals than shows with larger ones. One would expect anomalies like this to have a plausible explanation in the methods section, but, by Groseclose’s estimate, “approximately 90% of the first two pages [of the methods section] is a word-for-word copying of sentences from Gentzkow and Shapiro’s article.” That is not a sin in and of itself, provided the paper states that it relies on Gentzkow and Shapiro’s work, but it is also no help when it comes to explaining the anomalous results.
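The sample-size relationship is easy to see with a toy calculation. For a sample proportion, the half-width of a 95% normal-approximation confidence interval is z·sqrt(p(1−p)/n), so quadrupling the sample size should roughly halve the interval. The numbers below are illustrative only and have nothing to do with LaCour's data.

```python
import math

def ci_half_width(p_hat, n, z=1.96):
    """Half-width of the 95% normal-approximation CI for a proportion."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Each quadrupling of n halves the interval width.
for n in (100, 400, 1600):
    print(n, round(ci_half_width(0.5, n), 4))
# 100  0.098
# 400  0.049
# 1600 0.0245
```

A paper reporting the opposite pattern (smaller samples, tighter intervals) therefore needs an explicit methodological justification, which is exactly what Groseclose found missing.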

As Groseclose also notes, LaCour appears to have a loose grasp of the statistical principles he cites, mistranscribing terms like “Chi-squared” with an “x” instead of the Greek letter χ. Again, this isn’t incriminating in and of itself, but it does suggest a lack of understanding of the very work LaCour claims a working knowledge of. As Groseclose writes: “If a researcher is not really familiar with the Chi-Squared distribution, I don’t see how he or she could fully understand the Gentzkow-Shapiro method. And if LaCour does not fully understand the Gentzkow-Shapiro method, I do not see how he could have executed the method that he describes in his paper.”

Groseclose goes on to outline other suspect elements of LaCour’s work, including lists of loaded phrases used by liberals and conservatives that remain implausibly consistent over the three-year period LaCour observes.

Also of note, LaCour’s method labels the Thom Hartmann show, a reliably progressive program, as having a conservative bent, which couldn’t be further from the truth. Groseclose discusses what this anomalous result means and speculates as to why LaCour declined to discuss it in his paper, but at the end of the day it’s a sign of either methodological error or fabricated data.

As Groseclose notes, the simplest way to prevent issues like these in the future is for academics to provide the data and code they used to produce their results, something LaCour has yet to do. If the data weren’t faked, the output should be the same on someone else’s computer. Political science has become increasingly technical over the years, with its mathematical models growing more and more complex. Groseclose speculates that if the major political science journals were to institute this requirement, they would find that a significant number of the papers submitted for publication, and many that have already been published, would not pass muster, because they rely on methodologies that the papers’ authors don’t understand and, therefore, cannot use effectively.
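As a toy illustration of what that requirement buys, the sketch below (entirely invented, not LaCour’s or Groseclose’s code) fixes a random seed and checksums the output, so anyone rerunning the shared code on the shared data can verify the published results byte for byte.

```python
import hashlib
import json
import random

def analyze(data, seed=42):
    """Toy analysis: a mean plus a few seeded bootstrap resamples.
    The fixed seed makes the 'random' part fully reproducible."""
    rng = random.Random(seed)
    resamples = [sorted(rng.choices(data, k=len(data))) for _ in range(3)]
    return {"mean": sum(data) / len(data), "resamples": resamples}

def checksum(result):
    """Stable fingerprint of the results for independent verification."""
    payload = json.dumps(result, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

data = [2, 4, 4, 4, 5, 5, 7, 9]  # invented example dataset
print(checksum(analyze(data)) == checksum(analyze(data)))  # True
```

With the data, code, and seed published, a reviewer who gets a different checksum knows immediately that something in the pipeline does not match what the author reported.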

What’s a shame in all of this is that LaCour’s faked LGBT canvassing study is going to be used as evidence against the LGBT movement, and against the field of political science, for years to come. In fact, the discovery that LaCour made his research up is an example of the field working as intended: no matter how groundbreaking your research is, it doesn’t mean anything if no one can replicate it. The scientific community polices itself by not taking the work of others for granted, and researchers should always conduct their work under the assumption that whatever they publish will be tested elsewhere.

And in the grand scheme of things, LaCour’s fraud looks rather benign compared to prior debunkings of studies after attempted replications. After all, it’s not like LaCour sparked a global austerity movement based on an error in an Excel spreadsheet.

Jon Green graduated from Kenyon College with a B.A. in Political Science and high honors in Political Cognition. He worked as a field organizer for Congressman Tom Perriello in 2010 and as a Regional Field Director for President Obama's re-election campaign in 2012. Jon writes on a number of topics, but pays especially close attention to elections, religion and political cognition. Follow him on Twitter at @_Jon_Green, and on Google+.


9 Responses to “Michael LaCour likely faked more than just his LGBT canvassing study”

  1. sactoresident says:

    if UCLA has any credibility they will kick him out of the phd program. We’ll see. If not it makes every phd from UCLA suspect.

  2. Duke Woolworth says:

    It’s FURTHER, not FARTHER from the truth. Farther refers to physical distance, like mileage.



  5. White&Blue says:

    And at the same time they will never miss a chance to remind how this study is flawed (conveniently forgetting their own flawed studies like you said) and use it against their opponents.

  6. Houndentenor says:

    I know a couple of professors who peer review for journals and they can spot problems rather quickly. It makes me wonder how this article passed peer review. That said I’ve heard stories from professors who peer review for a journal in their field about passing on articles (they gave detailed critiques and told the authors what to do to make it ready for publication) only to find that same article show up later with no changes in another journal. This is happening far too much lately. I’m not sure how to fix this but something has to be done.


  8. BeccaM says:

    I had my doubts about this study from when it was first announced. Yes, one-on-one conversations do help, but after 20 minutes, you’ve still only conversed with a stranger. This is not likely to change deep-seated attitudes on a lasting basis. Or to put it another way, 20 minutes of a sympathetic chat with someone won’t offset the beliefs of someone who watches Faux News (by choice) several hours every day and who listens to Hannity and Limpballs because they like the radio shows.

    What we’ve seen is that what seems really to change people’s attitudes about LGBT folks and with respect to civil rights is when someone ends up with an LGBT person close to them — a family member or good friend. True, many react with rejection, but a surprising number have their personal epiphany and realize they don’t want their family member or friend to be disadvantaged. It’s familiarity which is important.

    Anyway, it’s always sad to see when someone f*cks up so spectacularly. I’ve no doubt LaCour was sure he could prove his hypothesis, but then he got sloppy. And he let desired outcomes get in the way of ethical research methods.

    The real lesson here: The liberal/progressive left will see a study like this and acknowledge it was flawed, and furthermore not continue to depend on it for scientific support. Whereas the conservatives take THEIR flawed studies such as Regnerus and Marks, who didn’t even study the relationships they purported to find negative conclusions about, and still cite them.

  9. Indigo says:

    It was a nice idea so maybe it was kinda sort real after all because LaCour had a vision he wanted to communicate. Okay, but that’s not science, it’s not even reputable tealeaf-reading.

© 2020 AMERICAblog Media, LLC. All rights reserved. · Entries RSS