Wednesday, December 16, 2009

And first, we play with the hardware...

In order to run the climate-modelling software more effectively, our second volunteer (Eric Raymond) bought a new motherboard that should roughly double the speed of his computer. It's a 2.66 GHz Intel Core 2 Duo. It is *not* a super-high-end quad-core hotrod; and thereby hangs a tale.

Some kinds of algorithms parallelize well - graphics rendering is one of the classic examples, signal analysis for oil exploration is another. If you are repeatedly transforming large arrays of measurements in such a way that the transform of each point depends on simple global rules, or at most information from nearby measurements, the process has what computer scientists call "good locality". Algorithms with good locality can, in theory, be distributed to large numbers of processors for a large speedup.
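To make "good locality" concrete, here's a minimal sketch in C (our own illustration, not anything out of a climate model): a per-point transform where each output depends only on its own input, which is exactly the shape of problem that spreads across cores. The array name, the transform rule, and the file name in the build line are all invented for the example; with GCC you'd add -fopenmp to actually get the parallel version.

    /* Hypothetical example: a per-point transform with good locality.
     * Each output depends only on its own input, so the loop iterations
     * can be split across however many cores you have.
     * Build (GCC): gcc -std=c99 -fopenmp -O2 locality.c */
    #include <stdio.h>

    #define N 1000000

    static double data[N];

    int main(void)
    {
        /* Fill the array with dummy "measurements". */
        for (int i = 0; i < N; i++)
            data[i] = (double)i / N;

        /* Good locality: iteration i touches only data[i], so OpenMP can
         * hand contiguous chunks of the loop to separate cores safely.
         * Without -fopenmp the pragma is ignored and the loop runs serially. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            data[i] = data[i] * 0.8 + 0.1;   /* a simple global rule */

        printf("data[42] = %f\n", data[42]);
        return 0;
    }

The point is that the pragma is the only change needed, because no iteration depends on any other; the intrinsically serial algorithms described next don't decompose this way.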

Some algorithms are intrinsically serial. A good example is the kind of complex, stateful logic used in optimizing the generation of compiled code from a language like C or FORTRAN.

Sometimes, you can artificially carve an intrinsically serial problem into chunks that are coupled in a controlled way - for example, to get faster compilation of a program linked from multiple modules, compile the modules in parallel and run a link pass on them all when done. This approach requires a human programmer to partition the code up into module-sized chunks and control the interfaces carefully.

The kind of math used in climate modeling has some parts with good locality (which could theoretically parallelize well) and others that don't. Unfortunately, it's difficult to capture any benefit from throwing the parts with good locality onto multi-core machines, because recognizing that locality and using it to do automatic partitioning is hard.

Here's what "hard" means: computer scientists have been poking at the problem for four decades and parallelizing compilers are still weak, poor makeshifts that tend to be tied to specialized hardware, require tricky maneuvers from programmers, and not work all that well even on the limited categories of code for which they work at all.

What it comes down to is that if you're compiling C or Fortran climate-modeling code on a general-purpose machine, each model run is going to use one core and one only. Two cores are handy so that one of them can run flat-out doing arithmetic while the other does housekeeping and OS stuff, but above two cores diminishing returns start to set in pretty rapidly. By the time you get to quad-core machines, two of the processors will be space heaters.

This is good news, in a way. It means really expensive hardware is pointless for this job - or, to put it another way, a modern commodity PC is already nearly as well suited to the task as any hardware anywhere.

Now, on to downloading the code so we can look more closely at what it does and poke at it. Yeah, Eric was going to upgrade his computer soon anyway, and yes, this was an excuse...

Tuesday, December 15, 2009

Amateur Climate Modelling - What Is CCSM, How Does It Relate To CRU?

One of my readers, with considerably more knowledge of this subject than any of us in the "Let's do this as a project" group, asked why, if we were spurred into this by the CRU data leak, we're using the UCAR CCSM 3.0 model...since it has no real ties to dendrochronology, and is, in fact, a pretty solid predictive model rather than a retrodictive one based on past temperature data runs. Indeed, you can argue that what CRU and CCSM do are two sides of the same coin - one is trying to reconstruct the past climate record, the other is trying to do high-end fluid dynamics to make some sort of guess about future climate. Now, the reconstructions of past climate are used to tune the predictive models, so they're not completely unrelated...

The first answer is that we're doing this largely because, well, we didn't know CCSM was a predictive model. Fools stomp in where angels won't even fly over for reconnaissance, and all that. What we did know was that it was open source, and while our chances of doing meaningful science with this are vanishingly small, we can document the transparency of the process. Again, the narrative of our efforts is one of "We Do Dumb Things So Others Can See Our Mistakes And Avoid Them."

The second answer is that most of us contend CRU has significantly damaged its credibility. If similar behavior had occurred in the financial sector, people would be facing criminal charges. In scientific circles, they've put a dent in their reputation...and we can't yet get their data sources to validate against. If we can, we'll expand the scope of what we're trying. In the meantime, we'll work with what we can get.

Because CCSM is a predictive model, we can do a couple of sideband tests - like seeing if we can get it to replicate, say, 1900 to 1940, a relatively stable CO2 emissions epoch, and match the temperature record reasonably well. Or tweak a known variable in a known way and see if it produces a result that makes some sort of sense, as a first approximation of falsifiability.

We still, as mentioned, may not be able to get this up and running at all. If we can, we'll have, for lack of a better term, an exploratory toy to poke around with. We really aren't trained to do substantive climate science with it; if we're very lucky, we'll manage to help the people building this model identify some undocumented assumptions and variables, and open a discussion on forcing factors.

Sunday, December 6, 2009

Amateur Climate Modelling - Predictions and Physics Reductionism

There is a fundamental question about climate modeling and its intersection with public policy.

In theoretical physics, your model must make some prediction about the real world that can then be validated by an experiment. It must pass the test of falsifiability. This test must be replicable; someone else following your procedure and your data analysis methods must be able to get the same result.

Ideally, this test gets run multiple times, and with slightly different permutations, to see if the same result can be approached in different ways.

Similar standards apply to chemistry, and that's the experimental model in a nutshell. In the early years of both physics and chemistry (and largely what's still going on in geology), you have to build a model that isolates enough factors that it has predictive value.

I am not sure that GCMs are at that level of rigor yet; I build models for a living - albeit simple ones meant for entertainment. I have a healthy suspicion of corner cases in models, and I know quite well that the biases of both the modeler and the person using the model lead to blind spots.

Has anyone made a prediction with a climate model and said, "If this prediction proves false, this particular model needs to be abandoned"? Note the term there - abandoned, not refined. You can make a critique that's hard (at least for me) to refute by saying that climate models, as currently implemented, are too heavy on tweaked input parameters and too light on predictions. The usual thing that happens when a climate model doesn't verify going forward is that someone goes back into it and tweaks parameters. I have seen references to a model for tropospheric temperature predictions that is deprecated because it produced results that didn't match the weather balloon and satellite data in IPCC AR4. However, I don't know whether that deprecation is because they decided they didn't have enough variables modeled, or because they felt the variables they were modeling were handled incorrectly.

In other words, is it something they can fix, or is it a disproof?

Of course, from the physics reductionist perspective, it's not enough to winnow out the bad models. You also have to have a model that accurately predicts something. In physics, the number of variables is smaller and it's easier to isolate things. When you slam beams of protons into lead nuclei and look at the energy tracks of the debris, they either work the way you expected or they don't. (As Asimov put it, advances in science aren't made by "Eureka!" moments; they're made by "Huh. That's weird. Can we get the same result a second time?")

So, one argument from physics reductionists is that we should make a prediction based on the model, wait long enough to see if the prediction is accurate, determine what happened if it isn't, and refine the model. By which point, given that we're talking about models that only become statistically meaningful over multi-decadal sweeps, we should have a good climate model somewhere around 2200 AD. :)

Or is this "Physics Arrogance" writ large?

Related to this is whether we are currently at an optimum point for climate. There is a natural tendency in the human brain to fill in the edge of the map (or the data graph) with shibboleths and monsters. Our current measurements are baselined from 1960, which was an unusually cool period in the 20th century, and our earlier baselines come from the tail end of the Maunder Minimum and the end of the Little Ice Age, from 1600 to 1880.

Both of these are interesting questions, and by raising them I hope to start a discussion - I'm not raising them because I think they refute global warming or expose it as a grand conspiracy. I raise them because I think they need discussion.

Wednesday, December 2, 2009

Amateur Climate Modelling - Getting The Code

One of the easiest models to get at is the UCAR CCSM 3.0 model. You go to their web site (linked above), sign up for a login, and wait for approval. One of the team members did this after 7 PM MST on the Sunday of Thanksgiving weekend. Either there was a research assistant or post-doc working the holiday doing the validations, or the process was automatic; we got the login inside of 30 minutes. For the sake of our trembling remnants of faith in humanity, we're hoping this was automated and not some poor post-doc stuck there.

You do have to give a name, phone number and why you're downloading it. I'm not sure what would happen if I put in, say, "James Inhofe", his Congressional office number (202-224-4721), and "To prove the fraud once and for all!"...but I suspect that, as amusing as that might be, I'll leave that particular exercise in validating wingnuttery to someone else.

Now, scientists are people who learned programming in order to do something else. This is not the same as being a production coder, and scientists are traditionally about a decade behind the rest of the computer-science world when it comes to computer languages, largely because they're chained to legacy code and data sets that they lack the budget to bring up to modern standards. (The first person who manages to write a documented, production-grade conversion filter that turns the myriad forms of climate data into a JSON archive will probably be offered several graduate students to do with as they please...)

Which means that scientists are even worse about documenting code than most programmers are. You need to twist arms and legs, and threaten to make them teach English Composition, to get them to document code. (The number of graduate students I've known who've been forced to use cutting-edge computers from 1992 more than 15 years after they became obsolete is terrifying.)

You can guess our trepidation at opening the documentation. We were expecting spaghetti, intermittently documented in Sanskrit by cut and paste. We were wrong. It's in pretty decent shape for scientific code and documentation. Overall, this is a pleasant surprise, and we chalk it up to the fact that we're on a 3.0 release.

Then comes our first stopping point.

You see, the code says it's only certified to run on an IBM PowerPC machine, which is not the architecture we have. Now, this isn't exactly surprising; these guys aren't going to test the code on every hardware platform out there from the Commodore VIC-20 on. This isn't the sign of any sort of conspiracy; they're just documenting what THEY run it on.

However, architecture differences matter. Brief digression for people who aren't computer geeks (and who likely want further affirmation that they don't want to become one):

Computers can't add, subtract, multiply or divide in base 10 (the system we use). They use base 2 (2 looks like 10, 4 looks like 100, and 5 looks like 101). This is fine for integer math: 3+7=10 ALWAYS works on a computer. It's not so hot for floating point math (3.3 + 2.3 more or less = 5.6), because certain numbers do not convert well to base 2 representations. If you paid attention to the computer press in the early 1990s, you may remember the "Pentium floating point math error". That was, functionally, this kind of problem embedded in real silicon, and to most computer scientists it was a tempest in a thimble; none of them ever trusted non-integer math, because they'd been doing it on computers that would flip a bit when something like 10/7 overflowed. In short, computers don't handle decimal fractions well; one consequence is that banking software actually calculates how many pennies you have, and formats dollars and cents as the very last step, so it can avoid using floating point math at all.
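If you want to see this for yourself, here's a small C program (ours, nothing to do with the climate code; the file name and the dollar amounts are made up for the example) showing a decimal fraction picking up error in base 2, an integer sum staying exact, and the penny-counting trick banks use:

    /* Demonstration of base-2 arithmetic quirks. Build: gcc -std=c99 float_demo.c */
    #include <stdio.h>

    int main(void)
    {
        /* 3.3 has no exact base-2 representation; printing extra digits
         * shows the value the machine actually stores. */
        printf("3.3 is stored as %.20f\n", 3.3);

        /* The classic example: on ordinary double-precision hardware,
         * 0.1 + 0.2 prints as 0.30000000000000004, and comparing it
         * to 0.3 with == fails. */
        printf("0.1 + 0.2 = %.17g\n", 0.1 + 0.2);
        printf("0.1 + 0.2 == 0.3? %s\n", (0.1 + 0.2 == 0.3) ? "yes" : "no");

        /* Integer math is exact: 3 + 7 really is 10, every time. */
        printf("3 + 7 = %d\n", 3 + 7);

        /* The banking trick: count pennies as integers, and only format
         * dollars and cents at the very last step. */
        long cents = 1099 + 250;                 /* $10.99 + $2.50 */
        printf("balance = $%ld.%02ld\n", cents / 100, cents % 100);

        return 0;
    }

None of this is exotic; it's just what "doesn't convert well to base 2" looks like when you print enough digits.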

Our team member doing this project tells a story about finding a bug in a terrain collision algorithm, where the end result of a pair of long series of calculations was showing anomalous errors due to floating point issues. Once it was identified, the solution was to carry the intermediate values as 64-bit floating point numbers until the final calculation, and then convert to a 32-bit number; this avoided a couple of hidden rounding errors. There's a lot more on this subject for people who are truly interested.
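Here's a toy reconstruction of that class of bug in C (our own sketch, not the actual terrain code; the step size and iteration count are invented): accumulate a long series of small increments in 32-bit floats and the rounding error piles up, but carry the running total in a 64-bit double and convert only at the end, and it stays where it should.

    /* Toy example of accumulation error: float vs. double running totals.
     * Build: gcc -std=c99 accumulate.c */
    #include <stdio.h>

    int main(void)
    {
        const float step = 0.001f;   /* not exactly representable in base 2 */
        const int   n    = 1000000;  /* a long series of calculations */

        float  sum32 = 0.0f;         /* 32-bit running total */
        double sum64 = 0.0;          /* 64-bit running total */

        for (int i = 0; i < n; i++) {
            sum32 += step;           /* rounds to 32 bits after every add */
            sum64 += step;           /* same additions, far more headroom */
        }

        /* Both totals are "a million steps of 0.001", i.e. about 1000;
         * the 32-bit one drifts, the 64-bit one does not. */
        printf("32-bit accumulator:                  %f\n", sum32);
        printf("64-bit accumulator, cast at the end: %f\n", (float)sum64);
        return 0;
    }

Carrying intermediates at higher precision and rounding once at the end doesn't make rounding error disappear; it just keeps it from compounding over the long series, which is exactly the fix described above.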

Anyway - back to the climate model. The question becomes "How hardware-dependent are these models?", which led to more digging. Basically, computers can screw this stuff up in two places - the hardware and the compiler. For the last four or five years, the hardware has rarely been a point of stress; 64-bit computers are the norm for anything purchased in the last three years, and there are crazy compiler tricks if you need more than a 64-bit register for your numerical operations.

Which brings us to the compiler. UCAR has only validated their model with the IBM XL compiler; they've run it with MIPSPro, Cray, NEC, and Compaq compilers. On the Linux side of the fence, which is what we'll probably be running this on, they've used the Portland compiler.

We don't know that g++ or Microsoft Visual C 7.1 won't work. We don't know that they will, either, though it seems likely. However, if we show up with an error on a compiler they haven't validated against, they're perfectly within their rights to say, "You guys are, um, courageous. We don't have the resources to help you. Write us if it works!"

Now, one of our team members has a strong suspicion. They're using the Portland compiler because they want a debugger that doesn't inspire postal rages and murder sprees. (One of the common rants about open source code is that it's usually documented by computer nerds for computer nerds, and that only the weak want graphical debuggers. We're weak, and want to have lives left over for petting cats and the like.)

A quick look through their forums shows evidence of people trying to compile it with g++/gfortran; this is hopeful, and probably evidence that, as scientists usually do, they'd rather not be caught out promising something they can't deliver.

So the next step is to spend some time figuring out what dependencies have to be overcome to run a make with this, and to choose what compiler to use. We'd love to use the Portland compiler; if any of our readers here have an extra license and can make a donation of it, please let us know.

Tuesday, December 1, 2009

The Perils of Cassandra

One of the things that costs global warming scientists credibility - other than the CRU data leak - is how the science gets reported in the news. I've linked to explanations of how science reporting gets done to show you the wincing scientist's side.

Another person has put together a tool that scrapes up all the hyperbolic claims of the disasters due to befall us should we Not Do Something Now!

I give you the Catastrophe List.

Most of my readers are probably able to read a scientific report and look at an error bar. I want you to imagine how this looks to someone who gets most of their science news from breathless exhortations on CNN or Fox News.

There's always something cited as proof of global warming; sometimes it is. Sometimes (like the Kilimanjaro glaciers), it turns out not to be the case...but the follow-ups never happen, because the 24/7 news cycle constantly needs New Crises To Alarm You and Keep You Tuned In! Like Orwell's observation that the primary function of a state is to manufacture new existential threats to protect you from, the same applies to the 24/7 news services.

Can you say "Climate Apocalypse Fatigue"?

Monday, November 30, 2009

Amateur Climate Modelling - Introduction

There are plenty of discussions of the way the CRU data leak has impacted the field of climate science. It was, for me, the final straw in convincing me to demand that data used for public policy analysis be publicly available.

Fortunately, somewhere around 80% of the climate data sets out there ARE publicly available, if not interoperable, and about 30% of the climate models are as well. Now it's time to put them to the test - a few others and I are attempting an amateur run of a climate sim, and will then go through the data. The climate sim we're going to use is CCSM 3.0. If you're particularly brave, you can click that link and follow along with us.

I'm the mouthpiece and bandleader. I'm not a mathematician, a climate scientist, or a computer programmer. I have access to people who are, and who are curious about what comes of this. What I am is a writer.

I'll be editing a lot of what comes through here from the other people who are running the sims, and stitching it into a narrative. My model for this is Jerry Pournelle's "Computing At Chaos Manor", which is largely the narrative of a Very Bright Amateur doing Dumb Things so You Don't Have To. Jerry discovered that the secret to conveying a lot of computer material to his readers wasn't to come down like the Guy Who Knows Everything, but rather to be The Ordinary Guy Who Wants To Know Why This *&#&ing Thing Doesn't Work, including the narrative of how he eventually resolved the problem. To people doing major IT planning, Pournelle's columns were far too simplistic. To the guy who has to set up his Aunt Minnie's computer after Christmas, they were cathartic, and moderately educational.

In one respect, we are breaking from Pournelle's formula - Pournelle does this with a lot of different things each month, and the thing that ends up in the column is whatever he managed to get working in the end, showing you all the mis-steps he took along the way. As he says, the key is to know the happy ending and work backwards from there.

We have no guarantee of a happy ending. Even with the talent pool I have available for this project, I figure there's at least a 50-60% chance we can't get this thing up and running at all. However, I'll document as we go along.

Sunday, November 29, 2009

How And When To Read A Scientific Paper As A Layman

Science is conducted by people, most of whom are bright, a fraction of whom are very bright, and a small fraction of whom are brilliant. These people are, as discussed in the post Peer Review & Skepticism, competing for prestige and grant money. The vast majority of papers are rushed at the end to get them ready for presentation at conferences.

The publication process starts with presentation at a conference. The paper is distributed to all conference attendees, who hopefully read it before the presentation, and there's usually a question-and-answer phase afterwards.

The paper then gets commented on (generally by email) by attendees who read it, sat through the presentation, and had questions or insights.

Most paper presenters incorporate those comments into the next draft of the paper (or, sometimes, decide that the line of research needs to be abandoned), and then submit it for peer review and publication in a journal.

Once the paper is in the journal, there will usually be a journal-moderated commentary and letters section, where a second round of "let's make sure this says what you think it says" comes into play, usually with responses from the original author.

In a lot of ways, it behaves like USENET discussion groups, but more slowly and with a bit more deliberation in the outcome. (And yes, it has its flame wars and trolls.)

Scientific papers have an abstract, a number of chapters or sections, and a set of end notes. If you're not used to reading them: the abstract is a one- or two-paragraph summary of what's covered in the paper, while the notes say where the sources are, credit people whose work is referenced, and sometimes point to "unpublished annexes" that let interested readers dig deeper.

This means that - as a layman - you should start reading a paper once its journal-mandated question and response has come through. It also means that, between conference presentation, journal review, and question and response, anything published within the last nine months is probably still waiting for that commentary process to run its course.

It will also be a difficult read. Most scientists are, to put it mildly, mediocre writers. Science papers also follow a particular formalism that makes perfect sense in context, but which goes a long way toward explaining why scientists generally seem unable to write.

The ideal is that the authorial voice comes out as "Experiment X was run with methodology Y, with results Z; the conclusions derived from the data are...". This is nearly the inverse of journalistic writing, where you'd start with the conclusion, work your way through the data gathering, gloss over the methodology, and show how the results supported the conclusion.

Just because it's difficult doesn't mean you shouldn't do it, but you should, as a layman, budget time to look over related papers. You cannot read one paper in isolation and get a realistic appreciation of what's going on.