Wednesday, June 18, 2014

Epistemological Toxicology

"The man of understanding can no more sit quiet and resigned while his country lets literature decay than a good doctor could sit quiet and contented while some ignorant child was infecting itself with tuberculosis under the impression that it was merely eating jam tarts." (Ezra Pound)


I want to be the Jeremy Freese of storytelling! I can't be the Andrew Gelman of storytelling because Andrew already is the Andrew Gelman of storytelling, but when I read Freese's posts about those "himmicanes" I realized that I would like to be able to critique the use of narrative in social science with exactly that kind of detail and precision, teasing out the full range of issues from minor errors, to honest incompetence, to outright misconduct. Fortunately, there is a craft community that preserves the relevant critical competence in the statistical social sciences. I'm not sure we have anyone as formally qualified to critique a narrative. In any case, I want to be that guy.

This post is inspired by Freese's coinage "statistical chemotherapy", by which (if I understand the idea right) a set of results is "poisoned" enough to kill a "diseased" result, i.e., render it insignificant, without killing the rest. There's got to be an analogy for this sort of thing in the use of stories, in which the "fictionality" of a story is adjusted in the light of criticism in order to deal with implausible plot elements instead of rethinking the basis of the story itself (its documentation, for example). As in the case of PSA screening (I'd also like to be the Gilbert Welch of storytelling), such treatments are often over-prescribed. Obviously, the critic will also be on the look-out for non-therapeutic poisons, such as plagiarism and fabrication. But the important work lies simply in ensuring the actual, if you will, "robustness" of the stories we tell in the social sciences.

The hurricane study offers a good sense of the stakes. In fact, something interesting struck me when reading Jane Hu's coverage of the affair in Slate. "Too many writers love stories with a familiar narrative, even if that narrative is based on shoddy science," she says. Then, a little later on, she points out that "[t]hese stories are more than just annoying—they have real-world effects. By reinforcing gender stereotypes and roles, the media affects people’s perception of women." Notice how much this looks like PSA screening. The original hurricane study was itself worried about gender perceptions, i.e., something we all agree is a social problem. But its approach to the problem ends up poisoning the conversation about gender in an attempt to "cure" an abnormality that is (almost certainly) too insignificant to worry about. Remember, the criticism of PSA screening is that millions of men were treated, on the basis of those screens, for prostate cancers that would never have caused them any ill effects in their lifetimes.

The point applies also at a higher level. Notice the difference between the "poison" that the hurricane people were worried about and the toxin that Freese is addressing. Surely we need a lot more work like that done by Freese and Gelman—i.e., criticism and replication of purported results in the social sciences—and much less "headline-grabbing" "click-bait" about hurricanes and clothing color. Right? Maybe we're suffering from a crisis of sociological overdiagnosis. Like Welch on the medical screening of healthy people, I worry that we are trying to find and treat small, insignificant social problems that would not have serious consequences for society before something much less exotic kills the body politic first.

5 comments:

  1. Thomas:

    Interesting idea! I'm going to apply it to "Gone Girl," which I recently finished reading. It's a thriller that has some real strengths, along with plot and motivational holes big enough to drive the proverbial SUV through, without any worry about scratching the wing mirrors. Could chemo be done on Gone Girl to preserve the good stuff while killing off the egregious implausibilities? I'm not sure. For that matter, perhaps the implausibilities are part of what made the book "work" in a commercial sense. And, don't get me wrong, had the book not been a monster hit, to the extent of being displayed in every airport bookstore for a year straight, I probably never would've thought of buying it.

    Sort of like himmicanes and hurricanes in that sense: if the research had been done a bit better, the authors would've known not to make the dramatic claims, and PNAS would never have considered publishing it. (Can you imagine PNAS publishing a paper with a title such as "An exploration of the effects of hurricane naming" that made no strong positive claims?) And if the researchers had known a little bit more than that, they wouldn't have tried their study in the first place.

    Similarly for the ovulation-and-pink-clothing people. Their success (that is, a success in terms of publicity; I certainly would not consider their work to be a success in any scientific sense) is a direct result of their ignorance and naivety. With a better understanding of statistics, they would've realized the hopelessness of studying what they were trying to study using such noisy measurements. And with better measurements and more data, they would've found smaller effects. And if they'd been open to uncertainty, they wouldn't have had statistical significance. And then Psychological Science wouldn't have published their paper. Etc.

    Complicated knots.

    Hmmm . . . I better blog this. After all, how many people are going to read this comment??? Thanks for pointing me to your post!

  2. Yes, I was thinking about this while re-watching Welch's videos. First, imagine if there were researchers whose job it was mainly to find and expose pseudo-significant studies like this. I.e., imagine that Gilbert Welch spent his whole career doing mainly what I know him for. Then, imagine that researchers planned their research not with an eye to the positive publicity they might get from a sexy result, but to the negative publicity they might get … like what we've seen with himmicanes. What we would get is a lot fewer "results".

    It may have been possible to delay the full introduction of PSA screening into the population until significant correlations between mortality-causing cancers and positive PSA tests had been found. This would have spared millions of men a lot of unnecessary trouble. It would also have reduced our overall background anxiety about cancer.

    Mutatis mutandis for gender perception bias. Of course, as in medical science, there are "powerful forces" that think all public awareness of social problems is a good thing. We still have to learn the lesson that medical science (against the powerful force of the pharma industry) is starting to learn.

  3. What a scary thought for a researcher: just as a stunningly good project can make a career, a stunningly bad project can harm a career!

    It's an interesting thought to me for a few reasons. First, my colleagues at the University of California many years ago did try to harm my career because of work I did that they did not like. Second, one of the criticisms that's been leveled at me in my science criticism is that I might hurt the careers of vulnerable young researchers.

    It's a funny thing, though: Nobody says you should suppress a legitimate point in a referee report, even though lack of publication can hurt someone's career. But, post-publication, it's another story. I'm reminded of Kahneman and Tversky's work on loss aversion, the idea that a loss is worse than an equivalent foregone gain.

  4. Everything here depends on the way people try to "hurt your career". I don't know what those UC people tried to do, but if it amounted only to publicly criticizing your work, openly laying out their reasons for disliking it, etc., then, well, the chips should fall where they may, I'd say. (They won't be having any direct effect on your opportunities; they will only have their ordinary influence on the formation of your peers' opinions of you.) But if they use behind-the-scenes machinations to undermine your opportunities, then that's obviously creepy.

    In short, I don't think we should worry about setting other people's careers back by publishing our doubts about their work. I like your point about pre-pub/post-pub too. But I have heard some people complain about overly harsh review reports from people "who just won't give me a chance", and even about reviewers who have been "kind" because the author probably needs the publication.

  5. Rasmus:

    Hi Thomas,

    This might be of interest, in case you haven't seen it: The 'exchange' between Jill Lepore and Clayton Christensen about the theory of disruptive innovation.

    The critique:
    http://www.newyorker.com/reporting/2014/06/23/140623fa_fact_lepore?currentPage=all

    The response:
    http://www.businessweek.com/articles/2014-06-20/clayton-christensen-responds-to-new-yorker-takedown-of-disruptive-innovation#p1

