Wednesday, April 23, 2014

"The science is settled"? Like hell it is!


I've long since become fed up with the intellectual and academic dishonesty of those who claim that climate change is 'settled science' (when in fact it's far from settled), or who proclaim that this or that or the other study proves that this or that or the other dietary component is good or bad for us.  (Funny how those verdicts tend to change so often, isn't it?)

The problem of scientific misconduct is widely known.  What's less widely known is that many so-called 'scientific' journals have a pattern of misconduct as well - brilliantly described by an article in the Ottawa Citizen.

I have just written the world’s worst science research paper: More than incompetent, it’s a mess of plagiarism and meaningless garble.

Now science publishers around the world are clamouring to publish it.

They will distribute it globally and pretend it is real research, for a fee.

It’s untrue? And parts are plagiarized? They’re fine with that.

Welcome to the world of science scams, a fast-growing business that sucks money out of research, undermines genuine scientific knowledge, and provides fake credentials for the desperate.

And even veteran scientists and universities are unaware of how deep the problem runs.

When scientists make discoveries, they publish their results in academic journals. The journals review the discovery with independent experts, and if everything checks out they publish the work. This boosts the reputations, and the job prospects, of the study’s authors.

Many journals now publish only online. And some of these, nicknamed predatory journals, offer fast, cut-rate service to young researchers under pressure to publish who have trouble getting accepted by the big science journals.

In academia, there’s a debate over whether the predators are of a lower-than-desired quality. But the Citizen’s experiment indicates much more: that many are pure con artists on the same level as the Nigerian banker who wants to give you $100 million.

. . .

At the University of Saskatchewan, medical professor Roger Pierson wonders how scientists can trust the journal system to share knowledge.

“Basically you can’t any more,” he said, except for a stable of well-known journals from identifiable professional societies, where members recognize ethical work is in all their best interests.

He had just spent time with the committee that oversees tenure and promotions at his university.

“We had three cases where people had published things in what were obviously predatory journals, and they didn’t think anything was wrong with that.

“The reality though is that these (fake journals) are used for promotion and tenure by people who really shouldn’t be there. The world is changing fast ... It’s a big problem.”

He tracked a paper from one job applicant to the journal website and found the giveaway clue: It takes weeks to publish, the site said, but if authors need faster service to impress their universities, then “it costs another $500 and they’ll publish it in days.

“It’s gotten absurd. There are hundreds if not thousands” of shady publishers, Pierson said.

“Universities are particularly vulnerable” to being fooled by these fake credentials.

It used to be pretty easy to spot them, said Pierson. “But the predatory journals are becoming a little more sophisticated, (and) new journals in every field are popping up weekly.”

Even Pierson didn’t know the latest trick. Journals are rated on their “impact factor” — how often their articles are used as references in later studies. And the predatory journals are now buying fake impact factors from equally fake rating agencies.

He believes this taints the reliability of what is published everywhere.

There's more at the link.

So, when 99% of published articles on a subject all agree about it . . . and more than half of them are published by these predatory journals . . . how trustworthy is their consensus?  As far as I'm concerned, it's not worth the paper it's printed on or the pixels used to display it.




Peter

8 comments:

c w swanson said...

Excellent article. I'm always impressed by how quickly those with a political axe to grind shout that the science is settled, when anyone who knows anything about how science works knows it is never, ever really settled, and that's in fact the way we want it.

Al Gore and his ilk are a plague on our society.

Stu Garfath. Sydney. said...

Climate change....hmmm, it could be worse.
At least the self-elected, so-called 'experts' aren't complaining about the sudden rise of witchcraft, demons, hobgoblins, and sorcerers. Well, not yet, anyway!

Paul said...

For these guys, 'scientific method' is a synonym for 'con job'.

I have had very well-educated people tell me that they believe in global warming because of this kind of research.

If you repeat a lie loudly enough, it must be true.

What a crock. No wonder the USA fell.

Anonymous said...

I am surprised that folks do not understand the difference between a scientific theory and a scientific law.

Man-Made Global Warming is a theory; the force of gravity is a law.

If you suggest a theory, you are bound, as a scientist, to be able to defend it against all challenges, or to modify it to make it defensible.

To question a theory should not be seen as a personal attack, but as a challenge to the logic and evidence of the presenter. Only after a complete defense against all questions can a theory be accepted as the best explanation of a phenomenon.

Prince Charles recently stated that science and technology should not be questioned. Pure poppycock! Scientific theories should always be questioned, because research, as you pointed out, can be completely misleading or bogus.

Gerry

Anonymous said...

Ahh! Greed, pride, lust, and vanity are terrible taskmasters; merciless, relentless, remorseless, and utterly wasteful.
It appears to me that these lie at the heart of all the corrupt, evil, wicked, and destructive desires of this world currently.

The remedy therefrom is to seek, find, and embrace "the more excellent way".

My hope and desire for all is that we will be more wise, honorable, faithful, just, true, and virtuous henceforth, that we may escape the awful fate of those who will not.

Joe Harwell

Mike_C said...

Long, 3-part rant follows.

This post raises some good points, but it’s all too easy to fall into a sort of nihilistic anti-intellectualism and deem all science to be untrustworthy crap. There are a number of issues that your post, and the linked article, touch upon. The global warming/climate change thing is clearly driven by political ideology rather than science at this point, so I’m going to steer clear of that and address the matter of scientific journals and peer review.

The scientific publishing/peer review system is flawed in many ways, but it’s supposed to work like this. (And it actually works not too badly.) As a medical researcher, I or my team come up with a research hypothesis. We do a literature search to see how our idea can extend understanding and contribute to, in our case, cardiovascular health (if it’s already known, or just obviously dumb, why waste the time and money?). We plan the protocol, get approvals as needed (e.g. to perform procedures or animal studies, or to access the patient database with appropriate anonymizing safeguards), secure funding, and actually do the work. After the experiments we do the statistical analyses, discuss results among the team, and write the manuscript. We make sure the team members all agree on the findings and the interpretation of those findings. Then we submit our work to a peer-reviewed journal.

Let’s switch perspectives a bit now, from submitting researcher to journal-side. The journal receives a manuscript. The editor-in-chief or one of his (or her) deputy editors (almost always another MD or PhD) gives the paper the once-over. Some stuff is obviously crap, on the wrong topic for the journal's specialization, or otherwise clearly problematic. Such submissions are rejected flat out. More promising papers go on to formal peer review. The editor goes through his figurative Rolodex of other researchers in the field who are qualified to review the paper, taking care (if at all possible) not to send the paper to people who work at the same institution as the authors or otherwise work closely with them, to minimize the chances of “nepotism.” Usually two peer reviewers are asked, sometimes more. If someone declines to do the review, then we go down the list until we find qualified people willing to do it. Each reviewer is sent the manuscript and given a few weeks to review it.

As a reviewer, I read the paper carefully, and if there are details I don’t understand I may end up spending hours looking up and reading other papers. I then write a review which consists of two major parts: 1) confidential comments to the editors (where I speak my mind freely) and 2) comments to the authors (where I am direct and honest, but avoid comments such as “what were you people thinking?”). In #2 I begin by summarizing the paper in a paragraph to show I read and understood the damn thing. Then I have a point-by-point list of suggestions as to how things could/should have been done better, requests for additional information/clarification, and sometimes speculative questions of the “would be nice to know” sort. Anal-retentive types such as myself may also have a list of formatting and typographic errors. Anyway, the reviewers’ comments get sent back to the journal (there may also be a separate statistical reviewer, who evaluates not so much the medical/scientific content as the methodology). The editorial board (say 6-12 MDs/PhDs) then meets, goes over the comments, and decides the disposition of the paper. VERY rarely is a paper accepted as submitted. Most published papers go through one or two rounds of revisions, which are not just rewriting; they usually involve additional data analyses (or even gathering new raw data). Many papers are flat-out rejected. (We can talk about publication bias another time.)
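For the programmers reading this, the decision flow above can be caricatured in a few lines of Python. To be clear, this is a toy sketch with invented names and odds, not how any actual journal operates:

```python
import random

# Toy model of the editorial flow described above: editor triage, two
# independent reviews, a board decision, and usually a round or two of
# revisions. Everything here is invented for illustration.

def editor_triage(ms):
    # The editor's once-over: wrong topic or obvious junk is rejected
    # flat out, without formal review.
    return ms["on_topic"] and not ms["obviously_flawed"]

def peer_review(ms):
    # Each reviewer returns confidential comments plus a recommendation;
    # here we model only the recommendation.
    return random.choice(["accept", "revise", "reject"])

def board_decision(recommendations):
    if "reject" in recommendations:
        return "reject"
    if "revise" in recommendations:
        return "revise"
    return "accept"  # very rare on first submission

def handle_submission(ms, max_revision_rounds=2):
    if not editor_triage(ms):
        return "desk reject"
    for _ in range(max_revision_rounds + 1):
        recs = [peer_review(ms) for _ in range(2)]  # usually two reviewers
        decision = board_decision(recs)
        if decision != "revise":
            return decision
        ms["revised"] = True  # revisions mean new analyses, not just rewording
    return "reject"

print(handle_submission({"on_topic": True, "obviously_flawed": False}))
```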

Mike_C said...

Part 2
One difficulty with the present peer-reviewer system is that we are NOT compensated for our time as reviewers. My paying job doesn’t cut me slack to do the review. The journal pays me nothing. I do it out of duty to the overall scientific community, and, if it’s for a good journal, for the prestige of being able to put on my C.V. that I review for Journal X (because it’s helpful come promotion time). Some papers take only an evening to review, but the latest one that crossed my desk consumed a week of evenings. That’s time I’m not doing my own work (because we don’t get to turn our brains off when we leave the office – I often get and respond to messages late in the evening from colleagues working on our own stuff at night), not with family, and not doing chores around the house. So there can be reviewer fatigue. Personally I now cheerfully (as opposed to grudgingly) review for 3-4 higher-tier journals (each of which sends me 2-5 papers per annum to review) and say “yes” to lower-tier journals only if the particular manuscript is about something in which I have a special interest. (Or in one case because the deputy editor was not only a friend, but also extremely cute. That’s actually a probably inappropriate joke, but she IS cute.)

Back to the researcher hat. Obviously we want our work to appear in the best (for values of best) journal possible. So we go through a little shuffle each time. What’s the highest-quality journal this could reasonably go to? (We know some stuff is scientifically right, but not earthshaking, so it goes to a mid-tier journal; no shame in that.) We then submit to that journal, or to one a level higher on the basis of “sometimes you get lucky”; and even if you don’t, if your paper gets formal review, the reviewers’ comments may help you improve your paper for the “realistic” journal. So, the “realistic” journal gets your manuscript. If you’re lucky, you get asked for a revision (which generally means it will get published if you do the revision right – sometimes the new experiments show you were on the wrong track, and all bets are off). If it gets rejected, off to the “next journal down.” Also, N.B., journals can have different stylistic requirements (e.g. how the bibliography is formatted), so one spends additional time translating between formats, which adds nothing scientifically and is just a pain in the ass.

At some point, after a paper is serially rejected, it is probably a message from above (i.e. scientific consensus) that your work sucks, and you should take it out behind the woodshed and kill it mercifully with a shovel or something*. But some people just “want a publication” and send their paper to some pay-to-play piece of crap. I always tell my trainees NEVER to do this. To put it crudely, it’s like wanting a date so badly that you hook up publicly with a completely undesirable person. Sure, you get laid (your publication listing to put in your CV), but you ruin your professional reputation, because once online, forever online. Better to have fewer publications in high-quality journals than a lot of metaphorical hookups accompanied by metaphorical multidrug-resistant STDs.
(* I know that the paper linking H. pylori to ulcers had a great deal of difficulty getting published initially, yet it was and is very important, but common things being common, personally I’m pretty sure that my apparently stupid ideas actually ARE stupid ideas, and not the intellectual equivalent of Cinderella in rags.)

Mike_C said...

Part 3
As to the metrics of journal quality, the most commonly used one is the Impact Factor, which is calculated by a private company and has to do with how often articles from a given journal are cited in other credible journals, weighted by a time factor (i.e. how long ago the citation was made). This actually works pretty well, so long as you understand the system. Unfortunately, many people don’t understand the system very well, including, I suspect, the Ottawa reporter. WITHIN a given field, a higher impact factor generally indicates a better journal, but you shouldn’t compare absolute impact factor numbers across fields, because one field may simply be more active and generate more papers than another. Also, sometimes I have something fairly esoteric and of interest only to other sub-subspecialists. Those papers get sent off to the specialty journals, which usually have lower impact factors, but that does not mean it’s a loss of some sort. It’s a matter of sending the paper not only to the “best” journal in terms of impact factor, but also to the most appropriate readership.
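For concreteness, the standard two-year version of the metric is essentially citations per citable item over a recent window. A back-of-the-envelope sketch, with numbers invented purely for illustration:

```python
# Illustrative two-year impact factor calculation. All figures are
# made up; real ones come from the rating company's citation database.

citations_in_2014_to_2012_2013_papers = 1200
citable_items_2012 = 180
citable_items_2013 = 220

impact_factor_2014 = (citations_in_2014_to_2012_2013_papers
                      / (citable_items_2012 + citable_items_2013))
print(impact_factor_2014)  # 3.0
```

The same 3.0 could be excellent in a small specialty and mediocre in a hot field, which is exactly why comparing raw numbers across fields misleads.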

Regarding the alternative (shady) journal ranking systems, I can’t think of any researcher I know who takes them seriously. Also, the mercenary journals are thought of very poorly. We laugh (or grimace, depending on how our day is going otherwise) and look VERY askance at someone with a CV full of publications from crap journals.

Finally, the Ottawa article was not correct about “pay to publish.” Most of the good, reputable journals I deal with do not charge you anything to publish your article. Some charge a fee per color illustration or color page (since there are still paper copies, for some reason, and color costs them more money to print), but it’s hardly the case that a “genuine journal” will always want $1,000 to $5,000. And although there is certainly good work in the PLoS realm, being in a PLoS journal is absolutely no guarantee that it’s good or important work. If nothing else, just go onto pubmed.gov and look at the sheer number of articles published by PLoS. Frankly, there isn’t that much good work around, period, much less enough to fill that many articles in a few journals.

So, be a little skeptical about the scientific publishing racket. Most of us who live and work in that realm are a little skeptical too. But don’t trash all of it as some sort of scam. Or, worse yet, reject the scientific method altogether. We take our work very seriously, and are in it not only because we think it’s neat and fun, but because we want to find and develop techniques and procedures to keep people healthy and active, or make better computers, cars, or whatever. (As an academic physician, for example, your salary is often only 30-60% of what you could make in private practice. We take the pay hit for the chance to do research because it’s fun, and because we like being around students and trainees.) Take the public press’s commentary with a grain of salt. As an imperfect analogy, consider the bullshit some of the less responsible members of the press put out about how all veterans are PTSD-ridden time bombs, or how some idiot reporter uncritically accepts a story about “how I was a Marine SEAL commando who had to kill babies in Kuwait” from some joker with a CIB (with two stars, yet) pinned to his ball cap. (No, I’m not accusing the Ottawa reporter of incompetence or dishonesty, but it’s also clear that the article was written to be sensational, and some of it is not quite accurate.)