"The man of understanding can no more sit quiet and resigned while his country lets literature decay than a good doctor could sit quiet and contented while some ignorant child was infecting itself with tuberculosis under the impression that it was merely eating jam tarts." (Ezra Pound)
I want to be the Jeremy Freese of storytelling! I can't be the Andrew Gelman of storytelling because Andrew already is the Andrew Gelman of storytelling, but when I read Freese's posts about the "himmicanes" study I realized that I would like to be able to critique the use of narrative in social science with exactly that kind of detail and precision, teasing out the full range of issues from minor errors to honest incompetence to outright misconduct. Fortunately, the statistical social sciences have a craft community that preserves the relevant critical competence. I'm not sure we have anyone as formally qualified to critique a narrative. In any case, I want to be that guy.
This post is inspired by Freese's coinage "statistical chemotherapy", by which (if I understand the idea right) a set of results is "poisoned" just enough to kill a "diseased" result, i.e., render it insignificant, without killing the rest. There's got to be an analogy for this sort of thing in the use of stories, in which the "fictionality" of a story is adjusted in the light of criticism in order to deal with implausible plot elements instead of rethinking the basis of the story itself (its documentation, for example). As in the case of PSA screening (I'd also like to be the Gilbert Welch of storytelling), such treatments are often over-prescribed. Obviously, the critic will also be on the look-out for non-therapeutic poisons, such as plagiarism and fabrication. But the important work lies simply in ensuring the actual, if you will, "robustness" of the stories we tell in the social sciences.
The hurricane study offers a good sense of the stakes. In fact, something interesting struck me when reading Jane Hu's coverage of the affair in Slate. "Too many writers love stories with a familiar narrative, even if that narrative is based on shoddy science," she says. Then, a little later on, she points out that "[t]hese stories are more than just annoying—they have real-world effects. By reinforcing gender stereotypes and roles, the media affects people's perception of women." Notice how much this looks like PSA screening. The original hurricane study was itself worried about gender perceptions, i.e., something we all agree is a social problem. But its approach to the problem ends up poisoning the conversation about gender in an attempt to "cure" an abnormality that is (almost certainly) too insignificant to worry about. Remember, the criticism of PSA screening is that millions of men were treated, on the basis of those screens, for prostate cancers that would never have caused them any ill effects in their lifetime.
The point applies at a higher level, too. Notice the difference between the "poison" that the hurricane people were worried about and the toxin that Freese is addressing. Surely we need a lot more work like that done by Freese and Gelman—i.e., criticism and replication of purported results in the social sciences—and much less "headline-grabbing" "click-bait" about hurricanes and clothing color. Right? Maybe we're suffering from a crisis of sociological overdiagnosis. Like Welch on the medical screening of healthy people, I worry that we are trying to find and treat small, insignificant social problems that will not have serious consequences for society before something much less exotic kills the body politic first.