You do research (or what's called a literature review - more on that later). You write your paper. You submit it to a journal. The journal sends it out to other people in the field who read it and say "This is interesting" or "This confirms what we already know" or "This was written by someone with an axe to grind" or "This was written by a guy who makes the TimeCube hypothesis seem sane."
Publication in peer-reviewed journals is competitive; an academic's professional reputation (salary, grants, interesting research projects) depends on getting published in them. One side effect is that getting your article into the journals first is an important consideration. Much like Internet news versus newspapers, the person who breaks the story wins the race.
And science is a LOT of hard work. Most people who don't do science for a living have no clue how much work goes into it, or how difficult that work is. Anyone who flailed at second-semester Calculus in high school or college, or who found keeping lab notes in physics class about as interesting as whacking their hand with a hammer, has at least a vague inkling that this is difficult.
It's even worse for actual practicing scientists. Take your "beat my head against Calculus" moments, combine them with two years of "gather data with instruments and record the error bars," and THEN race to get your results out so that you get the A and someone else doesn't - provided nobody spots a nitpicky thing you overlooked early on in your experimental design. You're also usually rushing to get the paper ready for a conference submission deadline.
Oh, and if you don't win approximately 1/3 of the races, you lose your job.
This creates two incentives that shape behavior. As a game designer, I've found that you reliably get the behavior you incentivize.
Incentive 1: Until you're close to publication, you say nothing about what you're researching, your preliminary results, etc.
Incentive 2: You submit papers to journals that you think will agree with your conclusions. Consequently, peer-reviewed journals have a strong tendency towards confirmation bias.
Now, actual science is hard, even when researchers have grad students in near-peonage to do the scut work for them. Cranks show up in peer-reviewed journals not with actual research (because that requires data gathering, computation, and rigorous analysis - and, more to the point, takes time), but with something called 'Literature Analysis'.
In scientific journals, Literature Analysis means combing through related studies and their data sets, looking for contradictions or gaps; either one is a marker for "Hey, there's something interesting going on here that's worth researching."
Literature reviews are more common in the world of law - in a literature review, you go through the published papers and draw conclusions that match your presupposed position. A lawyer who brought up evidence contrary to his own case would be laughed at; a scientist who fails to bring up contrary evidence has some professional quandaries to consider.
Which leads us back to the CRU data leak. One of the places where high dudgeon is being raised is over the work of Jones, Goodess, Mann et al. to get the journal Climate Research 'shunned from the peer review list'.
The article that triggered this reaction was a "literature review" by policy wonks Willie Soon and Sallie Baliunas. The review was a highly selective bit of cherry-picking and a pure political hatchet job. It's difficult to get things past the confirmation bias on a politicized issue... but Jones and Mann, in this instance, were doing what peer review is supposed to do: pointing out when something that should not have gotten through slipped through anyway.
There is real, legitimate skepticism about climate science out there. There are also a lot of cranks and political hacks trying to cloak their propaganda in the trappings of skepticism. Just as we should be dubious about the people who say "Your SUV is KILLING US ALL," we should be dubious about skepticism that doesn't do its own research, or that's a Trojan horse for political activism.