Tuesday, November 24, 2009

Peer Review & Skepticism

One of the places where fringe science and junk science try to grab credibility is the peer review process. Peer review nominally works like this:

You do research (or what's called a literature review - more on that later). You write your paper. You submit it to a journal. The journal sends it out to other people in the field who read it and say "This is interesting" or "This confirms what we already know" or "This was written by someone with an axe to grind" or "This was written by a guy who makes the TimeCube hypothesis seem sane."

Publication in peer-reviewed journals is competitive; an academic's professional reputation (salary, grants, interesting research projects) depends on getting published in them. One side effect is that getting your article into the journals first matters a great deal. Much like Internet news versus newspapers, the person who breaks the story wins the race.

And science is a LOT of hard work. Most people who don't do science for a living have no clue how much work goes into it, or how difficult that work is. Anyone who flailed at second-semester calculus in high school or college, or who found keeping lab notes in physics class about as interesting as whacking their hand with a hammer, has a vague inkling that this is difficult.

It's even worse for actual practicing scientists. Take your "beat my head against calculus" moments, combine them with "gather data with instruments, and record error bars" for two years, and THEN race to get your results out so that you get the A and someone else doesn't - provided nobody spots a nitpicky thing you overlooked early in your experimental design. You're also usually rushing to get the paper ready for a conference submission deadline.

Oh, and if you don't win approximately 1/3 of the races, you lose your job.

This creates two behavioral incentives. As a game designer, I've found that you reliably get the behavior you incentivize.

Incentive 1: Until you're close to publication, you say nothing about what you're researching, your preliminary results, etc.

Incentive 2: You submit papers to journals that you think will agree with your conclusions. As a result, peer-reviewed journals have a strong tendency toward confirmation bias.

Now, actual science is hard, even when researchers have grad students in near-peonage to do the scut work for them. Cranks show up in peer-reviewed journals not with actual research (because that requires gathering data sets, computation, and rigorous analysis - and more to the point, takes time), but with something called 'Literature Analysis'.

In scientific journals, Literature Analysis means combing through related studies and their data sets and looking for contradictions or gaps; either one is a marker for "Hey, there's something interesting to research going on."

Literature reviews are more common in the world of law - in a literature review, you go through the published papers and draw conclusions that match your presupposed position. A lawyer who brought up things contrary to his own case would be laughed at; a scientist who fails to bring them up has some professional quandaries to consider.

Which leads us back to the CRU data leak. One of the places where high dudgeon is being raised is over the efforts of Jones, Goodess, Mann et al. to get the journal Climate Research 'shunned from the peer review list'.

The article that triggered this reaction was a "literature review" by policy wonks Willie Soon and Sallie Baliunas. The review was a highly selective exercise in cherry-picking, and a pure political hatchet job. It's difficult to get things past the confirmation bias on a politicized issue... but Jones and Mann, in this instance, were doing what peer review is supposed to do - pointing out when something slipped through that should not have gotten through.

There is real, legitimate skepticism about climate science out there. There are also a lot of cranks and political hacks trying to cloak their propaganda in the trappings of skepticism. Just as we should be dubious about the people who say "Your SUV is KILLING US ALL", we should be dubious about skepticism that doesn't do its own research, or that is a Trojan horse for political activism.

10 comments:

  1. by policy wonks Willie Soon and Sallie Baliunas

    Uh, no, actually Soon and Baliunas are both astrophysicists at Harvard. And from the CRU emails, it would appear that being a "Trojan horse for political activism" is more common than you imply.

  2. Take a closer look at the papers and her publication history. Baliunas effectively stopped doing research science when she joined the George C. Marshall Institute in 1996. Her last publication for Harvard was in 1994.

    Soon is the chief scientist at the Science and Public Policy Institute, a public policy advocacy group.

    The Harvard-Smithsonian Center for Astrophysics is primarily an umbrella organization for maintaining various telescopes; their affiliation there is administrative, not a research position. Nearly all research on the telescopes the Center maintains is done through the conventional NSF research grant process and by universities other than Harvard.

    Note particularly the near complete lack of papers published by them before Soon and Baliunas.

    As a basic rule of thumb - if someone is listed as a Chief Scientist at an organization devoted to public policy advocacy, the person is more focused on public policy advocacy than on research.

  3. Peer review is an artificial construct that arose because of the expense of publishing scientific articles in paper journals, combined with the explosion in the number of scientists after WWII. There is no reason for it anymore, with storage so cheap and the internet so widely available. A good example is arXiv.org for physics. Publish there following a minimal set of rules and let the world see what you have to say, for good or bad.

    The real problem with CRU and the rest of the climate scientologists is that much of the data is unique. Without a time machine -- physics isn't that advanced yet -- there is no way to reproduce thermometer measurements from 50 and 100 years ago. The only way for outsiders to check CRU's work or do their own science is to have free access to the publicly funded data. That's the real crookedness at work here.

    That's not to downplay the despicable behavior of Mann, Jones, et al. in suppressing conflicting results. Maybe Baliunas and Soon didn't have an airtight case; maybe they hadn't published many papers in the recent past. So what? That doesn't excuse the climate mafia's behavior. The correct and only way to counter incorrect science is by pointing out errors and correcting them.

    By the way, there is a whole book on historical climate by the very important and famous English climatologist, H. Lamb, that certainly makes the case for the Medieval Warm Period that Mann tried to deny with the hockey stick.

  4. Ben, have you read Baliunas and Soon's paper? Or the critiques of it?

    Or noted the timing of its publication, and how it was rushed through review? (It got published in about 1/3 the usual review period.)

    1) There was no research - there wasn't even a citation of the original data sets.
    2) It was rushed into press and print in time for a specific political event.
    3) Both scientists published it under their Harvard-Smithsonian titles, rather than the titles of the organizations that actually paid them.

    IF you accept that peer review is necessary, then what Mann et al. did was correct and within bounds. A scientific journal that bends its own submission and publication policies to further a political agenda should be drummed out.

    I feel this should apply to both sides.

    I'm also firmly (as my second post has stated) of the opinion that no public policy should be based on any climate data set, method of interpretation, analysis or manipulation that's not open sourced.

    At the very least, if Jones et al. had had their material posted on something akin to SourceForge, it wouldn't have been 'accidentally' deleted.

  5. I just read on ESR's blog that you're contemplating revisiting climate models. Since your last visit in 1999, things have changed quite a bit.

    You're welcome to download the source code and all necessary input data for one of the IPCC AR4 models, CCSM3.0, at

    http://www.ccsm.ucar.edu/models/ccsm3.0/#src

    Instructions are located within that page.

  6. Thank you, G-Man. This will require creating a Linux partition on my computer and downloading the tool chain.

    Just so it's clear - my prior experience with the models was as a journalist, not a coder OR a climatologist.

    I was there to ask questions while people who ostensibly knew what they were doing were doing their part.

  7. Yes, thank you, G-Man! I've always wanted to take a look at the nuts and bolts of a GCM (I code CFD routines, so I imagine it'll have some similarities).

  8. Literature reviews are more common in the world of law - in a literature review, you go through the published papers and draw conclusions that match your presupposed position.

    What you've called a "literature review" is the only way to ascertain law, in a common law jurisdiction; however, "papers" (i.e., law review or bar journal articles) are not really law; they are at best places to discuss policy or make predictions about which way the courts and legislatures will go. What lawyers in a common law jurisdiction have to perform is "literature reviews" on the opinions of courts, and reach conclusions based upon whether the courts in question are recognized as constituting precedential authority in a particular context.

    A lawyer who brought up things contrary to his own case would be laughed at....

    This is *not* entirely correct. A lawyer has an ethical obligation to the court before which he/she is arguing to raise directly applicable legal authority against his/her position. Failure to do so is deemed to be a breach of the lawyer's duty as an "officer of the court." It may also be deemed a "fraud upon the court" depending on how directly applicable the relevant authority is.

    A good lawyer will, of course, raise arguments against accepting that authority if it is at all possible (e.g., showing why the authority doesn't apply, has been deprecated though not overruled, has been misread), but may not ignore such authority.

    Do lawyers actually mention authority that goes against their position? Yes; I've seen it done and have had to do so myself. Do they always do so? I doubt it, and unfortunately law is a sufficiently inexact discipline that it is rare for it to be clear that a particular case is controlling authority that needs to be mentioned. My point here is that not even lawyers are entirely free of the obligation to acknowledge authority that goes against them.

  9. Many thanks for the correction and clarification, Cathy.

  10. Ken,

    The problem, as I've told you before, is that you have to be a HELL of a good programmer to spot where you've made a mistake in a scientific calculation. Since computers refuse to carry units around for you, it's easy to do something stupid like add momentum to mass... which you clearly can't do (see the sketch after this thread).

    The second issue is that you almost HAVE to have a Ph.D. in Atmospheric Chemistry to understand WHAT algorithm you should put in there.

    It's pretty freaking easy to make a mistake in either spot. (Having done modeling in the past.)

    Good luck getting it to work.

    Since I didn't get the job at NCAR I wanted, I can't go down the hall and ask someone what you are doing wrong when it all explodes in your face. :)

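To illustrate the point in the last comment about unit errors: below is a minimal Python sketch using the third-party pint library (an assumption for illustration only - the climate models themselves are largely Fortran and carry no units at all). With explicit units attached to each quantity, adding momentum to mass raises an error instead of silently producing a meaningless number:

    # Minimal sketch (hypothetical): explicit units catch dimensional mistakes
    # that bare floating-point arithmetic would let through silently.
    import pint

    ureg = pint.UnitRegistry()

    mass = 70.0 * ureg.kilogram                  # kg
    velocity = 3.0 * ureg.meter / ureg.second    # m/s
    momentum = mass * velocity                   # kg*m/s - multiplication is fine

    try:
        nonsense = momentum + mass               # physically meaningless addition
    except pint.DimensionalityError as err:
        # With bare floats this line would just produce a wrong number;
        # with units attached, the mistake is flagged at the point of error.
        print("caught:", err)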