Wednesday, December 16, 2009

And first, we play with the hardware...

In order to run the climate-modelling software more effectively, our second volunteer (Eric Raymond) bought a new motherboard that should roughly double the speed of his computer. It's a 2.66 GHz Intel Core 2 Duo. It is *not* a super-high-end quad-core hotrod; and thereby hangs a tale.

Some kinds of algorithms parallelize well - graphics rendering is one of the classic examples, signal analysis for oil exploration is another. If you are repeatedly transforming large arrays of measurements in such a way that the transform of each point depends on simple global rules, or at most information from nearby measurements, the process has what computer scientists call "good locality". Algorithms with good locality can, in theory, be distributed to large numbers of processors for a large speedup.
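
For the programmers following along, here's a minimal sketch of the kind of loop that has good locality - our own toy illustration, not anything taken from the climate code. Each output cell depends only on its own input cell and a simple global rule, so an OpenMP-capable compiler (e.g. gcc -fopenmp) can split the iterations across cores with no coordination between them.

    /* Toy illustration of a point-wise transform with good locality. */
    #include <stddef.h>

    void scale_and_offset(double *out, const double *in, long n,
                          double scale, double offset)
    {
        /* No iteration depends on any other, so the loop can be
           divided among however many cores are available. */
        #pragma omp parallel for
        for (long i = 0; i < n; i++)
            out[i] = in[i] * scale + offset;
    }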

Some algorithms are intrinsically serial. A good example is the kind of complex, stateful logic used in optimizing the generation of compiled code from a language like C or FORTRAN.

Sometimes, you can artificially carve an intrinsically serial problem into chunks that are coupled in a controlled way - for example, to get faster compilation of a program linked from multiple modules, compile the modules in parallel and run a link pass on them all when done. This approach requires a human programmer to partition the code up into module-sized chunks and control the interfaces carefully.

The kind of math used in climate modeling has some parts with good locality (which thus could theoretically parallelize well) and others that don't. Unfortunately, it's difficult to capture much benefit from throwing the parts with good locality onto multi-core machines, because recognizing that locality and using it to do automatic partitioning is hard.

Here's what "hard" means: computer scientists have been poking at the problem for four decades, and parallelizing compilers are still weak makeshifts that tend to be tied to specialized hardware, to require tricky maneuvers from programmers, and to not work all that well even on the limited categories of code they can handle at all.

What it comes down to is that if you're compiling C or Fortran climate-modeling code on a general-purpose machine, each model run is going to use one core and one only. Two cores are handy so that one of them can run flat-out doing arithmetic while the other does housekeeping and OS stuff, but above two cores diminishing returns start to set in pretty rapidly. By the time you get to quad-core machines, two of the processors will be space heaters.

This is good news, in a way. It means really expensive hardware is pointless for this job - or, to put it another way, a modern commodity PC is already nearly as well suited to the task as any hardware anywhere.

Now, on to downloading the code, looking more closely at what it does, and poking at it. Yeah, Eric was going to upgrade his computer soon anyway, and yes, this was an excuse...

Tuesday, December 15, 2009

Amateur Climate Modelling - What Is CCSM, How Does It Relate To CRU?

One of my readers, with considerably more knowledge of this subject than any of us on the "Let's do this as a project" group, asked why, if we were spurred into this by the CRU data leak, we're using the UCAR CCSM 3.0 model...since it has no real ties to dendrochronology, and is, in fact, a pretty solid predictive model rather than a retrodictive one based on past temperature data runs. Indeed, you can argue that what CRU and CCSM do are opposite sides of the same coin - one is trying to reconstruct the past climate record, the other is trying to do high-end fluid dynamics to make some sort of guess about future climate. Now, the reconstructions of past climate are used to tune the predictive models, so they're not completely unrelated...

The first answer is that we're doing this largely because, well, we didn't know CCSM was a predictive model. Fools stomp in where angels won't even fly over for reconnaissance, and all that. What we did know was that it was open source, and while our chances of doing meaningful science with this are vanishingly small, we can document the transparency of the process. Again, the narrative of our efforts is one of "We Do Dumb Things So Others Can See Our Mistakes And Avoid Them."

The second answer: it's the contention of most of us that CRU has significantly damaged its credibility. If similar behavior had occurred in the financial sector, people would be facing criminal charges. In scientific circles, they've put a dent in their reputation...and we can't yet get their data sources to validate against. If we can, we'll expand the scope of what we're trying. In the meantime, we'll work with what we can get.

Because CCSM is a predictive model, we can do a couple of sideband tests - like seeing whether we can get it to replicate, say, 1900 to 1940 as a 'stable CO2 emissions epoch' and match the temperature record reasonably well. Or tweaking a known variable in a known way and seeing if the result makes some sort of sense, as a first approximation of falsifiability.

We still, as mentioned, may not be able to get this up and running at all. If we can, we have, for lack of a better term, an exploratory toy to poke around with. We really aren't trained to do substantive climate science with it; if we're very lucky, we may manage to help the people building this model identify some undocumented assumptions and variables, and open a discussion on forcing factors.

Sunday, December 6, 2009

Amateur Climate Modelling - Predictions and Physics Reductionism

There is a fundamental question about climate modeling and its intersection with public policy.

In theoretical physics, your model must make some prediction about the real world that can then be validated by an experiment. It must pass the test of falsifiability. This test must be replicable; someone else following your procedure and your data analysis methods must be able to get the same result.

Ideally, this test gets run multiple times, and with slightly different permutations, to see if the same result can be approached in different ways.

Similar standards apply to chemistry, and that's the experimental model in a nutshell. In the early years of both physics and chemistry (and this is largely where geology still is), you have to build a model that isolates enough factors that it can have predictive value.

I am not sure that GCMs are at that state of rigor yet; I build models for a living - albeit simple ones meant for entertainment. I have a healthy suspicion of corner cases in models, and I know quite well that the biases of the modeler, and of the person using the model, lead to blind spots.

Has anyone made a prediction with a climate model and said, "If this prediction proves false, this particular model needs to be abandoned"? Note the term there - abandoned, not refined. You can make a critique that's hard (at least for me) to refute by saying that climate models, as currently implemented, are too heavy on tweaked input parameters and too light on predictions. The usual thing that happens when a climate model doesn't verify going forward is that someone goes back into it and tweaks parameters. I have seen references to a model for tropospheric temperature predictions that is deprecated because it got results that didn't match the weather balloon and satellite data in IPCC AR4. However, I don't know whether that deprecation is because they decided they didn't have enough variables modeled, or because they felt the variables they were modeling were handled incorrectly.

That is: is it something they can fix, or is it a disproof?

Of course, from the physics reductionist perspective, it's not enough to winnow out the bad models. You also have to have a model that accurately predicts something. In physics, the number of variables is smaller and it's easier to isolate things. When you slam beams of protons into lead nuclei and look at the energy tracks of the debris, either the tracks look the way you expected or they don't. (As Asimov observed, advances in science aren't made by "Eureka!" moments; they're made by "Huh. That's weird. Can we get the same result a second time?")

So, one argument from physics reductionists is that we should make a prediction based on the model, we should then wait long enough to see if the prediction is accurate, determine what happened to make it inaccurate, and refine the model. By which point, given we're talking about models that get more statistically accurate in multi-decadal sweeps, we should have a good climate model somewhere around 2200 AD. :)

Or is this "Physics Arrogance" writ large?

Related to this is whether or not we are currently at an optimum point for climate. There is a natural tendency in the human brain to fill in the edge of the map (or the data graph) with shibboleths and monsters. Our current measurements are baselined from 1960, which was an unusually cool period in the 20th century, and our earlier baselines are coming from the tail end of the Maunder Minimum and the end of the Little Ice Age from 1600 to 1880.

Both of these are interesting questions, and by raising them, I hope to create a discussion. I'm not raising them because I think they refute global warming or expose it as a grand conspiracy; I raise them because I think they need discussion.

Wednesday, December 2, 2009

Amateur Climate Modelling - Getting The Code

One of the easiest models to get at is the UCAR CCSM 3.0 model. You go to their web site (linked above), sign up for it with a login, and wait for an approval. One of the team members did this after 7 PM MST on the Sunday of Thanksgiving weekend. Either there was a research assistant/post-doc working the holiday doing the validations, or the process was automatic; we got the login inside of 30 minutes. For our trembling remnants of faith in humanity, we're hoping this was automated and not some poor post-doc stuck there.

You do have to give a name, phone number and why you're downloading it. I'm not sure what would happen if I put in, say, "James Inhofe", his Congressional office number (202-224-4721), and "To prove the fraud once and for all!"...but I suspect that, as amusing as that might be, I'll leave that particular exercise in validating wingnuttery to someone else.

Now, scientists are people who learned programming in order to do something else. This is not the same as being a production coder, and scientists are traditionally about a decade behind the rest of the computer-science world when it comes to computer languages, largely because they're chained to legacy code and data sets that they lack the budget to bring up to modern standards. (The first person who manages to write a documented, production-grade conversion filter turning the myriad formats of climate data into a JSON archive will probably be offered several graduate students to do with as they please...)

Which means that scientists are even worse about documenting code than most programmers are. You need to twist arms and legs, and threaten to force them to teach English Composition, to get them to document code. (The number of graduate students I've known who've been forced to use "cutting edge" computers from 1992 more than 15 years after they became obsolete is terrifying.)

You can guess our trepidation at opening the documentation. We were expecting spaghetti, intermittently documented in Sanskrit by cut and paste. We were wrong. It's in pretty decent shape for scientific code and documentation. Overall, this is a pleasant surprise, and we chalk it up to the fact that we're on a 3.0 release.

Then comes our first stopping point.

You see, the code says it's only certified to run on an IBM PowerPC machine, which is not the architecture we have. Now, this isn't exactly surprising; these guys aren't going to be testing the code on every hardware platform out there from the Commodore VIC-20 on. This isn't the sign of any sort of conspiracy; they're just documenting what THEY run it on.

However, architecture differences matter. Brief digression for people who aren't computer geeks (and likely want further affirmation that they don't want to become said):

Computers can't add, subtract, multiply or divide in base 10 (the system we use). They use base 2 (2 looks like 10, 4 looks like 100, and 5 looks like 101). This is fine for integer math: 3+7=10 ALWAYS works on a computer. It's not so hot for floating point math (3.3 + 2.3 more or less = 5.6), because certain numbers do not convert well to base 2 representations.

If you paid attention to the computer press in the mid-1990s, you may remember the "Pentium floating point math error." That was, functionally, this problem embedded in real silicon, and to most computer scientists it was a tempest in a thimble; none of them ever trusted non-integer math anyway, because they'd been doing it on computers that would flip a bit when something like 10/7 overflowed the available precision.

In short, computers don't handle decimal fractions well. One consequence of this is that banking software actually calculates how many pennies you have, and formats dollars and cents as the very last step, so it can avoid using floating point math at all.
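
If you want to see this for yourself, here's a tiny C program - a toy of ours, not anything from banking or climate code - that adds a dime ten times in binary floating point and then does the same sum in whole pennies.

    #include <stdio.h>

    int main(void)
    {
        /* Ten dimes, added as base-2 floating point: 0.10 has no exact
           binary representation, so the sum drifts ever so slightly. */
        double total = 0.0;
        for (int i = 0; i < 10; i++)
            total += 0.10;
        printf("0.10 added ten times: %.17f\n", total); /* not exactly 1.0 */

        /* The same sum the way banking software does it: count whole
           pennies, and only format as dollars at the very last step. */
        long cents = 0;
        for (int i = 0; i < 10; i++)
            cents += 10;
        printf("as pennies: %ld.%02ld\n", cents / 100, cents % 100);
        return 0;
    }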

Our team member doing this project tells a story about finding a bug in a terrain collision algorithm, where the end result of a pair of long series of calculations showed anomalous errors due to floating point issues. Once it was identified, the solution was to carry the intermediate values as 64-bit floating point numbers until the final calculation, and only then convert to a 32-bit number; this avoided a couple of hidden rounding errors. There's a lot more on this subject for people who are truly interested.
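
Here's a hedged sketch of that class of fix, with made-up numbers rather than the actual terrain code: accumulate a long series of small terms in 32-bit floats and the rounding error piles up; carry the running total in a 64-bit double and only narrow it to 32 bits at the end, and the error mostly disappears.

    #include <stdio.h>

    int main(void)
    {
        const int   n    = 10000000;  /* a long series of calculations */
        const float step = 0.0001f;   /* a small per-step contribution */

        /* Accumulate entirely in 32-bit floats: once the running total
           is large relative to step, each addition loses low-order bits. */
        float sum32 = 0.0f;
        for (int i = 0; i < n; i++)
            sum32 += step;

        /* Carry the running total in a 64-bit double; convert at the end. */
        double sum64 = 0.0;
        for (int i = 0; i < n; i++)
            sum64 += (double)step;
        float result = (float)sum64;

        printf("float accumulator:  %f\n", sum32);  /* noticeably off 1000 */
        printf("double accumulator: %f\n", result); /* very close to 1000  */
        return 0;
    }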

Anyway - back to the climate model: the question becomes "How hardware-dependent are these models?" Which led to more digging. Basically, computers can screw this stuff up in two places - the hardware and the compiler. For the last four or five years, the hardware has rarely been a point of stress; 64-bit computers are the norm for anything purchased in the last 3 years, and there are crazy compiler tricks if you need more than a 64-bit register for your numerical operations.

Which brings us to the compiler. UCAR has only validated their model with the IBM XL compiler. They've run it with the MIPSpro, Cray, NEC and Compaq compilers. On the Linux side of the fence, which is what we'll probably be running this on, they've used the Portland Group compiler.

We don't know that g++ or Microsoft Visual C 7.1 won't work. We won't know if they will, either, though it seems likely. However, if we show up with an error on a compiler they haven't validated against, they're perfectly within their rights to say "You guys are, um. Courageous. We don't have the resources to help you. Write us if it works!"
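
To make the worry concrete, here's a toy C program of our own (not anything from UCAR's test suite) showing why the same source can give slightly different answers under different compilers and optimization settings: floating point addition isn't associative, so anything that reorders a sum changes the low-order bits of the result.

    #include <stdio.h>

    int main(void)
    {
        double a = 1e16, b = -1e16, c = 1.0;

        /* Mathematically these are the same sum. In floating point the
           grouping matters, and a compiler that reorders or vectorizes a
           reduction changes which grouping you actually get. */
        double left  = (a + b) + c;  /* = 1.0                          */
        double right = a + (b + c);  /* = 0.0: c vanishes next to 1e16 */

        printf("(a + b) + c = %g\n", left);
        printf("a + (b + c) = %g\n", right);
        return 0;
    }

Multiply that by millions of operations per timestep and you can see why a modeling group only vouches for compilers it has actually tested.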

Now, one of our team members has a strong suspicion. They're using the Portland compiler because they want a debugger that doesn't inspire postal rages and murder sprees. (One of the common rants about open source code is that it's usually documented by computer nerds for computer nerds, and that only the weak want graphical debuggers. We're weak, and want to have lives left over for petting cats and the like.)

A quick look through their forums shows evidence of people trying to compile it with g++/gfortran; this is hopeful and probably evidence that, like scientists usually do, they'd rather not be caught out promising something they can't deliver.

So the next step is to spend some time figuring out what dependencies have to be overcome to run a make with this, and to choose what compiler to use. We'd love to use the Portland compiler; if any of our readers here have an extra license and can make a donation of it, please let us know.

Tuesday, December 1, 2009

The Perils of Cassandra

One of the things that costs global warming scientists credibility - other than the CRU data leak - is how the science gets reported in the news. I've linked to explanations of how science reporting gets done to show you the wincing scientist's side.

Another person has put together a tool that scrapes all the hyperbolic claims of disasters due to befall us should we Not Do Something Now!

I give you the Catastrophe List.

Most of my readers are probably able to read a scientific report and look at an error bar. I want you to imagine how this looks to someone who gets most of their science news from breathless exhortations on CNN or Fox News.

There's always something cited as proof of global warming; sometimes it is. Sometimes (as with the Kilimanjaro glaciers), it turns out not to be the case...but the follow-ups never happen, because the 24/7 news cycle constantly needs New Crises To Alarm You and Keep You Tuned In! Orwell observed that a primary function of the state is to manufacture new existential threats to protect you from; the same applies to the 24/7 news services.

Can you say "Climate Apocalypse Fatigue"?

Monday, November 30, 2009

Amateur Climate Modelling - Introduction

There are plenty of discussions of the way the CRU data leak has impacted the field of climate science. It was, for me, the final straw that pushed me to demand that data used for public policy analysis be publicly available.

Fortunately, somewhere around 80% of the climate data sets out there ARE publicly available, if not interoperable, and about 30% of the climate models are as well. Now it's time to put them to the test - a few others and I are attempting to do an amateur run of a climate sim, then go through the data. The climate sim we're going to use is CCSM 3.0. If you're particularly brave, you can click that link and follow along with us.

I'm the mouthpiece and bandleader. I'm not a mathematician, a climate scientist, or a computer programmer. I have access to some people who are, and they're curious about what comes of this. What I am is a writer.

I'll be editing a lot of what comes through here from other people who are running the sims and stitching them into a narrative. My model on this is Jerry Pournelle's "Computing At Chaos Manor", which is largely a narrative of a Very Bright Amateur doing Dumb Things so You Don't Have To. Jerry discovered that the secret to conveying a lot of computer stuff to his readers wasn't to come down like the Guy Who Knows Everything, but rather, to be The Ordinary Guy Who Wants To Know Why This *&#&ing Thing Doesn't Work, including the narrative of how he eventually resolved the problem. To people who were doing major IT planning, Pournelle's columns were far too simplistic. To the guy who has to set up his Aunt Minnie's computer after Christmas, they were cathartic, and moderately educational.

In one respect, we are breaking from Pournelle's formula - Pournelle does this with a lot of different things each month, and the thing that ends up in the column is whatever he managed to get working in the end, showing you all the mis-steps he took along the way. As he says, the key is to know the happy ending and work backwards from there.

We have no guarantee of a happy ending. Even with the talent pool I have available for this project, I figure there's at least a 50-60% chance we can't get this thing up and running at all. However, I'll document as we go along.

Sunday, November 29, 2009

How And When To Read A Scientific Paper As A Layman

Science is conducted by people, most of whom are bright, a fraction of whom are very bright, and a small fraction of whom are brilliant. These people are, as discussed in the post Peer Review & Skepticism, competing for prestige and grant money. The vast majority of papers are rushed at the end to get them ready for presentation at conferences.

The publication process typically starts with presentation at a conference. The paper is distributed to all conference attendees, who hopefully read it before the presentation, and there's usually a question and answer phase afterwards.

The paper gets commented on (generally by email) by attendees who read it, went through the presentation, and had questions or insights.

Most paper presenters incorporate those comments into the next draft of the paper (or, sometimes, decide that the line of research needs to be abandoned), and then submit it for peer review and publication in a journal.

Once the paper is in the journal, there will usually be a journal moderated commentary and letters section, where a second round of "let's make sure this says what you think it says" comes into play, usually with responses by the original author put back in.

In a lot of ways, it behaves like USENET discussion groups, but more slowly and with a bit more deliberation in the outcome. (And yes, it has its flame wars and trolls.)

Scientific papers have an abstract, a number of chapters or sections, and a set of end notes. If you're not used to reading them, the abstract is a one or two paragraph summary of what's to be covered in the paper, the notes cover where their sources are, accredit people whose work they've referenced, and sometimes point to "Unpublished Annexes" that give people the ability to dig deeper if they're interested.

This means that - as a layman - you should start reading a paper once its journal-moderated question and response has come through. It also means that, between conference presentation, journal review, and question and response, anything published within the last nine months is probably still waiting for that commentary process to run through.

It will also be a difficult read. Most scientists are, to put it mildly, mediocre writers. Science papers also have a particular formalism that makes perfect sense in context, but explains why scientists generally can't write.

The ideal is that the authorial voice comes out as "Experiment X was run with methodology Y, with results Z; the conclusions derived from the data are...". This is the near inverse of journalistic writing, where you'd start with the conclusion, work your way through the data gathering, gloss over the methodologies, and show how the results supported the conclusion.

Just because it's difficult doesn't mean you shouldn't do it, but you should, as a layman, budget time to look over related papers. You cannot read one paper in isolation and get a realistic appreciation of what's going on.

And This Is How We Make The News...

I used to do freelance science journalism back when newspapers could afford to pay 4 cents a word for freelancers. (Now, they can't even afford that). It was a kinder, more gentle age back then. Well, a quieter one, anyway. :)

Science journalism is a challenging task, because you're in the middle of an elaborate game of postman. And every step of the way, uncertainty gets removed, because uncertainty doesn't get ratings...or requires mathematics to quantify.

See PHD Comics for the entire science news cycle. I was, approximately, the guy in the fedora with the notepad in the cycle.

The rule was that every equation you had in an article halved the readership; it had to be a really sexy article to get my editor to let me have ONE equation in the article. It only happened twice.

If you were lucky, you got a call from your features editor, and he sent you to a conference with some topics he wanted covered. Or he had you following up a press release at the University.

If you weren't lucky, you were hoovering up press releases, trying to arrange interviews, and writing the pieces 'on spec', hoping to interest an editor, do another round of fact checking, sell them, and keep the lights on.

Topics that sold well were anything dealing with alcohol, food, cars or computers, or anything talking about the End of the World.

The Internet has chopped the bottom out of the freelance science journalism market. Newspapers are dying, and quickly. Anyone with an Internet connection can get a Blogger account (hey, they let ME have one...) If you want to get it published, it has to be relevant and timely...which all sounds wonderful until it's time for fact checking. Most newspapers don't even have a science section any longer.

Eventually, I moved from science journalism to technical writing, and from technical writing to game design. And arguably, with my game designs, I've combined both tech writing and science journalism...except I don't have to bow to editor's whims, and I can put in all the equations I want! Muahahahah!

Friday, November 27, 2009

On Data Sets and Merges

One of the problems with science (observational and experimental both) is that you can get conflicting data sets. Indeed, more scientific papers are discredited by misused (or non-repeatable) data sets than by anything else.

Data set gathering is incredibly tedious and expensive, and in many ways nonreplicable. For example, there are plenty of dendrochronology data sets out there (measurements of the width of tree rings as a proxy for rainfall, CO2 and nitrogen in soils); not all of them are useful as climate proxies. Some - coming from very carefully selected points - may be good measurements of local climate effects.

This is particularly important for creating climate records prior to about 1640, when the first scientifically reliable thermometers came about.

One of the important tasks in ANY scientific endeavor is to winnow the data. Particularly in climatology, you have a lot of noisy, geographically discontiguous data sets that need to be interpreted and a chronology built from them. Even the thermometer records from roughly 1860 onwards have issues; Pielke et al. 2007 documents some of the noise signals in the thermometer data. (I'm not going to touch the issue of whether Dr. Pielke's work was blocked from publication by Dr. Karl at NCDC; the paper is a good discussion of where noise comes into contemporary data.)

What this means is that I'm somewhat more forgiving of fudge factors. I've been around the practice of science enough to know that everything in science has fudge factors, because if you don't limit the variable set, you can't quantify the outcomes with any reliability.

That lack of reliability is why science reports things in probabilities. Which gets turned by people on both sides of the political spectrum into "But that means you're not certain of the outcome. I'll change my position when you can give me absolute certainty." Normally, this comes about when you have a data read or analysis that reveals an uncomfortable truth.

Good science discusses where the uncertainties in the data set and analysis methods are. Unfortunately, good science that does this routinely gets picked apart by agenda-hawks in the method described above.

Between noisy data sets coming from geographically discontiguous areas, and having to state uncertainty percentages, it is NOT unreasonable to say 'the data are clearly wrong'. The data could be from bad instrumentation, the data could be corrupted by factors that aren't being accounted for, or the data could be correct; when you have multiple data sources and one or two of them are clear outliers, there may well be a case where 'the data are clearly wrong'. (This is why I don't consider the Phil Jones email to be a smoking gun.)
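
As an illustration of what spotting 'the data are clearly wrong' can look like in practice, here's a small toy sketch of our own - not any method CRU, UCAR, or anyone else actually uses - that takes readings of the same quantity from several sources and flags the ones sitting far from the median, measured in multiples of the median absolute deviation.

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    /* qsort comparator for doubles */
    static int cmp_double(const void *pa, const void *pb)
    {
        double a = *(const double *)pa, b = *(const double *)pb;
        return (a > b) - (a < b);
    }

    static double median(double *v, int n)
    {
        qsort(v, n, sizeof *v, cmp_double);
        return (n % 2) ? v[n / 2] : 0.5 * (v[n / 2 - 1] + v[n / 2]);
    }

    int main(void)
    {
        /* Hypothetical readings of the same quantity from six sources;
           the last one is the kind of value a dead sensor might report. */
        double x[] = { 14.1, 14.3, 13.9, 14.2, 14.0, 27.5 };
        int n = sizeof x / sizeof x[0];

        double tmp[6], dev[6];
        for (int i = 0; i < n; i++) tmp[i] = x[i];
        double med = median(tmp, n);

        for (int i = 0; i < n; i++) dev[i] = fabs(x[i] - med);
        double mad = median(dev, n);

        /* Flag anything more than a few MADs from the median. */
        for (int i = 0; i < n; i++)
            if (fabs(x[i] - med) > 5.0 * mad)
                printf("source %d (%.1f) looks like an outlier\n", i, x[i]);
        return 0;
    }

The hard part in real climatology isn't writing the check; it's justifying the threshold and documenting every value you throw out.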

A later post is going to cover what data sets are being used for climate science, what their noise sources are, and how those noise sources are corrected for. It will be written for a technical layman (because that's what I really am); I've got friends who know more who'll look things over to make sure that any obvious gaffes are fixed. There will, almost certainly, be elisions of technical information.

More Contributors, an Open Source Model and Data Source

Thanks to a poster (G-Man), I've been pointed to UCAR's CCSM3.1 data set and source code.

I've brought on Larry Ramey (and am hoping to bring on Eric Raymond) to give a walk through of what that software does and what its data set means. Ideally with both of them compiling it, running it and cross checking each other.

Larry and Eric are both pretty blunt about what they consider bovine scatology; neither of them is *likely* to be posting, but both will be looking over my posts before they go up. They also take opposite views on the issue....aside from saying that both sides should open source everything.

If, in the private review section, they can't agree on something, I'll probably put three posts out - my post, one summarizing Eric's objections, one summarizing Larry's objections and so on.

One of the reasons why some stuff in climatology is NOT open sourced - including data sets - is because it's funded by private sources and is very much commercial information and practice. Larry asked me what my response would be to someone who had a model that had a mixture of public and private data, some with restrictions on it.

My answer was "I'm sorry, but unless we can show all the steps along the way, we cannot run it - it's outside the purview of this blog." It is my hope that by showing that, yes, a bunch of talented amateurs can run the open source stuff and give a reasonable explanation of what each model is doing, we can 'clear the air' a bit. Perhaps, about the time of the fifth Assessment Report, we can post an honest-to-god literature analysis.

That being said, there's a LOT to the UCAR modeling stuff. Indeed, there may be more there than two very smart people can independently confirm or analyze. It's a huge data set.

There will be periodic updates on the progress of that thread, plus articles here and there about what the data sources are, and about signal and noise, just so there's something to read here.

Tuesday, November 24, 2009

Peer Review & Skepticism

One of the places where fringe-science and junk-science attempt to grab credibility is through the peer review process. Peer review nominally works like this:

You do research (or what's called a literature review - more on that later). You write your paper. You submit it to a journal. The journal sends it out to other people in the field who read it and say "This is interesting" or "This confirms what we already know" or "This was written by someone with an axe to grind" or "This was written by a guy who makes the TimeCube hypothesis seem sane."

Publication in peer reviewed journals is competitive; an academic's professional reputation (salary, grants, interesting research projects) is based on getting published in peer reviewed journals. One side effect of this is that getting your article in the journals first is an important consideration. Much like Internet news versus newspapers, the person who breaks the story wins the race.

And science is a LOT of hard work. Most people who don't do science for a living have no clue how much work goes into it, or how difficult that work is. Anyone who flailed at second-semester calculus in high school or college, or who found doing lab notes in physics class to be about as interesting as whacking their hand with a hammer, has a vague inkling that this is difficult.

It's even worse for actual practicing scientists. Take your "beat my head against calculus" moments, combine them with "gather data with instruments, and record error bars" for two years, and THEN race to get your results out so that you get the credit and someone else doesn't - provided nobody spots a nitpicky thing you overlooked early on in setting up your experimental design. You're also usually rushing to get your paper ready for the submission deadline for a conference.

Oh, and if you don't win approximately 1/3 of the races, you lose your job.

This creates two behavioral incentives. As a game designer, I've found that you reliably get the behavior you incentivize.

Incentive 1: Until you're close to publication, you say nothing about what you're researching, your preliminary results, etc.

Incentive 2: You submit papers to journals that you think will agree with your outcome. Consequently, peer reviewed journals have a strong tendency towards confirmation bias.

Now, actual science is hard, even when researchers have grad students in near-peonage to do the scut-work for them. Cranks show up in peer reviewed journals not with actual research (because that requires data set gathering, computation and rigorous analysis - and, more to the point, takes time), but with something called a 'literature review'.

In scientific journals, the legitimate version of this - literature analysis - means combing through related studies and their data sets and looking for contradictions or gaps; either one is a marker for "Hey, there's something interesting to research going on."

Literature reviews of the other kind are more common in the world of law: you go through the published papers and draw conclusions that match your presupposed position. A lawyer who brought up things contrary to his own case would be laughed at; a scientist who doesn't has some professional quandaries to consider.

Which leads us back to the CRU data leak. One of the places where high dudgeon is being raised is over the work of Jones, Goodess, Mann et al. to get the journal Climate Research 'shunned from the peer review list'.

The article that triggered this reaction was a "literature review" by policy wonks Willie Soon and Sallie Baliunas. The review was a very selective piece of cherry-picking, and a pure political hatchet job. It's difficult to get things past the confirmation bias on a politicized issue...but Jones and Mann, in this instance, were doing what peer review is supposed to do - point out when something slipped through that should not have gotten through.

There is real, legitimate skepticism about climate science out there. There are also a lot of cranks and political hacks trying to cloak their propaganda in the trappings of skepticism. Just like we should be dubious about the people who say "Your SUV is KILLING US ALL", we should be dubious about skepticism that doesn't do its own research, or is a Trojan horse for political activism.

Monday, November 23, 2009

When Did "Skeptic" Become An Epithet?

Skepticism is at the root of science. It's the fundamental kernel that science is built upon, the demand of 'show me'. And yet, in some circles (yes, I'm referring to the CRU data leak again), 'skeptic' has been used as a derogatory epithet. This saddens me.

Science and its presentation fall prey to agenda-driven errors as much as any other human endeavor. I'm going to outline some of the tells that the science reporting you're getting is skewed, intentionally or not, for a particular agenda.

A Penchant for Secrecy

If the presenters do not show their work to the general audience, it's likely that they're afraid of what would happen if they did. Now, as scientific inquiries get politicized, dealing with every nitpicking question will eat tremendously into the time of the researchers. Even worse, should the issue formally reach 'hot potato' status, or be the subject of an Oscar-winning documentary, the number of kooks and cranks trying to get an oar in - even if it's only to 'ride the controversy' and get publicity for themselves - will grow explosively.

So I have sympathy - a lot of it - for the scientists who get stuck in that position. It's not fun being the person making assertions and getting the rocks thrown at you, and once the ravenous beast of controversy has sunk its teeth into you, it's nigh impossible to get back to what you really want to do - which is gather data, analyze it, and look for anomalous results.

The answer to this is to give up the gatekeeper role. It's your job to publish your raw data sets and analytical tools (including their source code) to the widest audience possible. While the open source maxim is "given enough eyeballs, all bugs are shallow", that truism is harder to apply to mathematically rigorous sciences or sciences with Very Large Data Sets.

It's OK to say "This is very technical in its details, here's the simple summary - and here's the data and methodology for you to replicate our result."

The Doomsaying of the Sibyl

This one should, in theory, be too obvious to be worth mentioning. It isn't. Scientific sounding twaddle cloaks itself in the medium of disaster movies. "We're all gonna die..." gets headlines. Science reporting saying that unless you stop driving your car, Manhattan will sink like Atlantis gets headlines - and gets ratings on news organizations. So does science reporting saying that anything new (that materially improves the lives of people) always has a hidden cost.

Scientific reporting about the Coming Doom (Resource Shortages, Population Explosions) is almost always exaggerated. The sorts of things that ARE significant problems (overfishing, wild catch population decline, the mechanism by which methylated mercury from coal plants accumulates in ocean-caught fish) tend to get published in obscure journals and debated quietly.

The Cloak of Consensus

Science isn't about consensus. It's about checking your data and hypotheses against the real world. If the real world says your hypothesis doesn't work, you make a new one to explain the data. Anyone aiming to do consensus-based science is trying to deflect attention from secret data, methods or processes by cutting off inquiry before it begins.

Should you see any of these three signs (Secrecy, Doomsaying, Consensus), it's right to be skeptical. Skepticism is not saying "They're wrong automatically because they're saying this".

Skepticism is saying "I wish to see the data and follow the reasoning for myself."

Any honest scientist welcomes skepticism and the chance to show off their work.


The Teaching Moment From The CRU Data Leak

I care about the data, its corroboration, transparency, and the validity of models. I won't touch the role of consensus in investigatory science - Michael Crichton covered this better than I could, here:

Let's be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world. In science consensus is irrelevant. What is relevant is reproducible results. The greatest scientists in history are great precisely because they broke with the consensus.
With the "ClimateGate" incident rippling out from the CRU data leak, people on the skeptic side are having a field day assaulting the bloody barricades of the professional credibility of a government sponsored climate research team.

I'm not. I know what it's like to merge data sets together to build a complete picture. I have empathy for what the people working at CRU were doing; it's a lot of tedious, mind-numbing work, prone to error and revision and re-doing. It's work that makes sitting down to file your taxes look like playing Sudoku for fun.

I can build a plausible chain of intent for everything the CRU crew did. At no point does it involve "And now, now we shall perpetrate FRAUD!" with maniacal glee. I don't need to throw stones; they're getting enough hurled at them to build a patio.

I'd like to point out an opportunity. We have a chance to educate on the fundamental science, rather than proselytize in sound bites. That window will close once this gets mulched in the news cycle, and exploiting that window is more important than smearing people. It's time to put away petty revenge, and teach.

We have a window where we can get an open, clear, and public debate on the climate going. In an effort to preserve their reputations, the CRU people will eventually realize that their best move is to present ALL the data, noise and all, and their methods of filtering it.

We need to let them have that opportunity. And then we need to establish that any data set that's out there - and the methods used to filter it, massage it, and demonstrate how it works - is documented. I would recommend that we talk to the folks at code repositories like SourceForge about versioning information and data set check-in.

That it takes FOIA requests to get data sets and methodologies released is a travesty in climate science. I propose that policy decisions can only be made based on data sets that are published with an open source or creative commons license, such as Creative Commons Attribution-No Derivative, and that the source code of all statistical tools and all statistical methods used to interpret and analyze them be made open as well. We're all being asked to pony up; shouldn't we allow informed citizens to see what it is they're buying with their tax dollars?

Mission Statement

This blog is focused on issues of science, statistics, and their intersection with the creation of public policy.

I'm an apolitical conservative. I care about the facts, not the sound bites, or who gets political advantage from something. I care that, if science is being used to inform public policy, that science be transparent and open, and that the data sets and means of analysis be available to all.

Let the data speak for itself. If your knowledge of climate change boils down to talking points from one side or the other of the political spectrum, please read on and ask questions.

I have a handful of people who can do deeper analysis than I can; I hope to lure a few of them in here. When I make mistakes (and I will), I expect people to tell me what they are, show me where I went wrong in the method, and help me understand better what's going on.

I'm here to explore the science, not become yet another political blog.