Friday, March 5, 2010

Hot quant on qual action

Over at Abu Muqawama last week, Ex posted a quantitative analysis manifesto that he thinks quantitative analysts in security studies should follow - a post that has generated a moderate amount of buzz around the interwebs. I'm not going to go through each of his six points individually, but I encourage you to read the whole post. Suffice it to say, and not surprisingly, this has really pissed off a lot of the IR community, with interesting commentary here, here, here, here, and here. I didn't look too hard, but here is a supportive post on Ex's position, and I'm sure there are others.

I certainly appreciate Ex's position on this; there are few things more irritating than some social scientist who creates models completely at odds with reality. So my interpretation of Ex's manifesto is that social scientists who employ quantitative measures should describe their systems with humility and should recognize and acknowledge the downsides of their models. On the other hand, I understand why these IR folks are upset - specifically about the last three points (which strike me as unnecessary swipes at social scientists, but hey, flippant statements are a hallmark of blogging).

I've written on a related topic before (metrics specifically), and having an undergraduate degree in math, I firmly believe there are uses for quantitative analysis in security studies - as long as that analysis is framed with qualitative analysis to better describe these complex human endeavors. In fact, I would argue most social scientists would agree with this. But I think the same goes for qualitative analysts - while logic and common sense can often win a debate, those two things can often be simple conventional wisdom. Qualitative analysis is much more persuasive when backed by quantitative analysis; otherwise it's pretty much somebody's opinion.

This whole argument strikes me as silly. Both factions need to (and many do) understand and state the limitations of their chosen methods of analysis. I firmly believe bad analysis is and will be identified for what it is, whether the analysis uses words or graphs. As for me, I like both.


  1. I couldn't have said it better myself.

    The appropriate method is the one that provides an appropriate test. Sometimes that is a survival analysis. Other times it is interviewing former combatants.

  2. I'm reminded of a recent course that I took on corporate governance. It was taught by a guy who has been a director on many corporate boards who thinks that he has found the holy grail of good governance. His solution? Checklists of countless variables and standards by which you rate the board with numbers (1 to 10) or with "qualitative" measures ranging from "very bad" to "very good." How do you discern "fair" from "acceptable" or 3 from 4? Why is a corporate board with a quorum requirement of 60% better than one of 55%? Why is a board with 40% women better than one with 25%?

    Approaching the topic fresh from a career in the military, where I based my decision to ETS partly upon extended sessions of viewing red/amber/green assessments for everything under the sun, I took issue with everything that he said in that course (so I wisely took it pass/fail!). To demonstrate my problems with his quantitative and pseudo-qualitative measures and rules, I presented three mini-case studies. First, I created an organizational chart that met all of his standards and presented it to the class as a model of good governance. When everyone agreed that I had developed a wonderful system of checks and balances and oversight, I explained that the organizational chart was nothing more than a copy and paste of the organizational chart of the central government of the Islamic Republic of Iran. So much for organizational structure. Second, I used his checklist of numbered assessments and gave one major American company high marks for good governance. That company was AIG. Third, I also used his analysis to give a major American company very low marks for governance. That company was Whole Foods. Rock on.

  3. The Black Swan by Nassim Taleb is very relevant here.

    -Deus Ex

  4. Schmedlap, that's one great story.

    Gunslinger: I agree totally. The quantitative vs. qualitative debate is rather meaningless, since both methods have their uses once everyone understands that neither will provide a perfect explanation of anything (no wonder social scientists have a hard time getting up in the morning). In both cases, studies are based on so many assumptions ("we are only going to look at time period X," "we define 'intervention,' 'peace,' etc. in the following way...," "here is my coding method," "my interviewees' recollections are reasonably accurate," ...) that any intelligent work will necessarily show a good amount of humility.

    The Q vs. Q fight is more, it seems to me, a personal one. Researchers who are big on quantitative studies tend to think that those who criticize them simply have no idea how to run this type of analysis and hide their ignorance behind contempt. Qualitative researchers are irate that quants believe a monkey could run a qualitative analysis because there is no maths involved and thus it can't be that complicated (you just have to tell a story, right?).

    I guess that's how you keep life in PoliSci departments exciting...

  5. "I certainly appreciate Ex's position on this; there are few things more irritating than some social scientist who creates models completely at odds with reality."

    Gentlemen, I give you "Triage."


    I'll be here all night. Don't forget to tip your waitress.


  6. Were you the teacher's favorite after you demolished his pet theory? You're a funny (ha ha) man, Mr. Schmedlap.

    You too, Mr. SNLII.

    Question - what is an example of successful quantitative analysis in the poli sci world that would serve as an introduction for the likes of me? I mean, is there a uniformly accepted analytic model that is a success and serves as the sort of "gold standard" for such studies? By asking the question am I revealing such depths of ignorance as to be useless?

    Hey, look, the hard scientists I know think medicine barely qualifies as science, so you can imagine how snotty they are toward the social scientists. I, dear friends, am not such a snob. We are studying different phenomena. It's not like you can do a double-blinded trial in war.

    Or can you?

  7. Alma - it's likely the humility part that gets everyone so riled. Look at the Nature paper everyone was talking about (and, trust me, if I show that article around, all the scientists with their test tubes and engineered mice will be furious that stuff like that gets published WHEN THEIR STUFF ISN'T!). (Not saying the study is bad, just talking about academic jealousies and prejudices.)

    So, I've got some issues with academia, being a complete and utter creature of it, but one of the things I dislike the most is that academic fiefdoms, fads, and insecurities lead to a lot of bad work getting out there. And then academics make fun of the non-academic world! Please, people, you got a ton of stuff wrong, too. May I reference the twentieth century?

    Done venting now.

  8. Speaking of "models completely at odds with reality," John Hollinger's power rankings have Dallas as the 13th best team in the NBA despite leading their division, sitting three games back of the Lakers for second in the Western Conference, and being in the midst of an 11-game (and counting!) winning streak. Down with the quants! Down with statistical models that don't reflect common sense - which is to say, those that don't value the fundamental imperative of the entire enterprise, which in this case is winning basketball games.

    In Hollinger's World Warfighting Rankings, the U.S. would be eighth because of an aging roster, or something.

  9. Hollinger is insane. The Mavs are the twelfth best team in the NBA. Markov chain functions, baby.
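
For the curious, the "Markov chain" quip above can be made concrete with a toy ranking: treat game results as a random walk that hops from each losing team to the team that beat it, and rank teams by where the walk spends its time in the long run. This is only an illustrative sketch - the team names and results below are made up, and this is not Hollinger's actual model.

```python
# Toy Markov-chain team ranking (illustrative only; made-up results,
# not Hollinger's method).

def markov_rank(teams, games, iters=1000):
    """Rank teams by the long-run distribution of a random walk that
    moves from a losing team to the team that beat it."""
    n = len(teams)
    idx = {t: i for i, t in enumerate(teams)}
    # counts[i][j]: how often team i lost to team j
    counts = [[0.0] * n for _ in range(n)]
    for winner, loser in games:
        counts[idx[loser]][idx[winner]] += 1.0
    # Normalize rows into transition probabilities; an undefeated team
    # keeps all of its mass (self-loop).
    for i in range(n):
        total = sum(counts[i])
        if total == 0:
            counts[i][i] = 1.0
            total = 1.0
        counts[i] = [c / total for c in counts[i]]
    # Power iteration: push a uniform distribution through the chain.
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * counts[i][j] for i in range(n)) for j in range(n)]
    return sorted(zip(teams, pi), key=lambda tp: -tp[1])

# Hypothetical results, listed as (winner, loser) pairs.
games = [("Mavs", "Lakers"), ("Mavs", "Spurs"), ("Lakers", "Spurs"),
         ("Spurs", "Lakers"), ("Mavs", "Lakers")]
ranking = markov_rank(["Mavs", "Lakers", "Spurs"], games)
print(ranking[0][0])  # the team the chain favors
```

A random walk like this rewards beating teams that themselves beat good teams, which is the appeal of chain-based rankings - and also why they can diverge sharply from win-loss records, as comment 8 complains.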