Mokele Posted June 1, 2009
This is a science forum, and as such, in a lot of debates, we ask each other for sources to back up assertions or claims. And the gold standard is, as it rightly should be, papers in peer-reviewed scientific journals. But that shouldn't be where it stops. Just because a source is in a peer-reviewed journal *doesn't* mean it's correct, or even that it's not embarrassingly wrong. Like any website, book, or article, you should read the paper and make sure that it actually says what the person citing it claims, and that it's not complete crap.
Consider a recent example: Chatterjee, Templin & Campbell. The aerodynamics of Argentavis, the world's largest flying bird from the Miocene of Argentina. PNAS doi:10.1073/pnas.0702040104.
On the surface, it seems pretty plausible. Even the paper itself seems OK, until you ask "where did they get those numbers?" A quick trip to the supplementary material reveals that they calculated the maximal power of the animal in a way that's not just wrong, it's embarrassingly wrong: they calculate from basal (=resting) metabolic rate, and as a result they get a power output that's pretty pathetic. Even a low-balled estimate I made showed the paper to be off by more than an order of magnitude, completely demolishing their central claim.
Now, obviously, not everyone even has access to anything more than the abstract, nor does everyone have the experience or knowledge to check a paper's methods in detail. But still, one should be cautious in simply assuming that any given paper that turns up on Google Scholar is definitive. Hell, the paper above is in PNAS, a very well-respected journal.
A key part of the scientific process is skepticism towards everything, even other publications. Keep that skepticism alive, no matter what the source.
Mokele
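To make the order-of-magnitude point concrete, here is a rough back-of-the-envelope sketch. The numbers below are invented for illustration; they are not the paper's figures and not Mokele's estimate, and the aerobic scope and efficiency values are assumptions. The only point is that deriving available power from the basal (resting) metabolic rate, rather than a maximal rate, shifts the answer by roughly the aerobic scope factor.

```python
# Purely illustrative sketch with invented numbers (not the paper's figures and
# not Mokele's estimate). The only point: deriving available power from the
# basal (resting) metabolic rate, instead of a maximal rate, shifts the answer
# by roughly the aerobic scope factor, i.e. about an order of magnitude.

bmr_watts = 80.0          # hypothetical basal metabolic rate of a large bird
aerobic_scope = 10.0      # hypothetical ratio of maximal sustained to resting metabolism
muscle_efficiency = 0.25  # hypothetical fraction of metabolic power delivered as mechanical work

power_from_bmr = bmr_watts * muscle_efficiency                  # the flawed baseline
power_from_max = bmr_watts * aerobic_scope * muscle_efficiency  # a more defensible baseline

print(f"Mechanical power from BMR:          {power_from_bmr:.0f} W")
print(f"Mechanical power from maximal rate: {power_from_max:.0f} W")

# The two estimates differ by the assumed aerobic scope (10x here), which is the
# kind of order-of-magnitude gap that can overturn a paper's central conclusion.
```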
ecoli Posted June 1, 2009
nice post... It's actually the more powerful labs that you have to watch out for. They usually get publications through more quickly and avoid the more stringent peer review.
Glider Posted June 1, 2009
It's what I keep telling my students. "The fact of publication in a peer reviewed journal is a necessary, but not sufficient, condition for citing the work of others. You should actually read it and not persist in trying to bypass your brain entirely".
CharonY Posted June 1, 2009
"You should actually read it and not persist in trying to bypass your brain entirely."
That is a good one. Mind if I borrow the phrase?
John Cuthber Posted June 1, 2009
You also need to look at the journal in which the article is published. http://en.wikipedia.org/wiki/Sokal_Affair
Glider Posted June 2, 2009
"That is a good one. Mind if I borrow the phrase?"
Be my guest. It's disappointing how useful it is in teaching these days.
CharonY Posted June 2, 2009
Yes indeed. Today alone there were two occasions in which I employed the quote to make a point. I credited it by saying: "As a wise man once said:..."
Bignose Posted June 3, 2009
Clearly, it means something that so far, only moderators and resident experts have posted in this thread. This thread almost needs to be copied to P&S --- those are the people that need to read it far, far, far more than the people who have responded to this thread to date!
iNow Posted June 3, 2009
This thread almost needs to be copied to P&S... And made a sticky... Although, those in P&S rarely even bother to cite references, but that's another issue entirely, I suppose.
Glider Posted June 3, 2009
"Yes indeed. Today alone there were two occasions in which I employed the quote to make a point. I credited it by saying: 'As a wise man once said:...'"
Aww shucks
CaptainPanic Posted June 3, 2009
"Clearly, it means something that so far, only moderators and resident experts have posted in this thread. This thread almost needs to be copied to P&S --- those are the people that need to read it far, far, far more than the people who have responded to this thread to date!"
There's not much to discuss here... perhaps that's why people remain silent?
I mean, if you base a design or research on a single paper... you're taking a risk. Peer review is not the same as a repeated experiment. Peer review isn't even always a critical review of methods, results and conclusions. (In addition, the number of papers that are published seems to increase exponentially (I have no reference for that), so possibly less time is spent on each paper nowadays than in the good old days?)
Often there is no way to check results, and you just have to assume that they're correct. It depends on how important the data is for you as a researcher (or what the effects might be if the data is wrong).
The point of this thread recurs in many shapes and forms. And it's a good thing that on this forum, most (if not all) aspects of scientific thinking are treated every now and then:
This thread is about realizing that everything you read might be rubbish, or might be quality. You need to evaluate this.
Then there's the "open mindedness" thread... What is an open mind, and who has one?
Then there's the constant asking for references. What's your source, what's its value? Did you even understand it correctly?
We talked about proof: what is proof?
We discussed new theories, new explanations for observations, which aren't always backed up properly. Etc. Etc.
All those points, together with this thread, have an overlap. A scientific education teaches you how to deal with these topics.
CharonY Posted June 5, 2009
"In addition, the number of papers that are published seems to increase exponentially"
I think this is actually not true. I read an article a while back which discussed paper output in relation to funding opportunities. I forgot the precise value, but the annual increase in the US in recent years was less than 2%, IIRC. I only remember that east Asia (most probably especially China) had the highest increase, with over 6%. In any case, it is clear that an exponential increase is unlikely, given that there is only an extremely small increase in the number of scientists over the years to begin with. While in recent times some new disciplines have established themselves that are able, on average, to publish faster (e.g. bioinformatics or informatics in general), that cannot leverage an exponential increase. What has changed, however, is the speed with which you can make searches, so in theory you actually have more time to read (as opposed to being in the library copying papers).
timo Posted June 5, 2009
But of course an annual increase by any constant percentage is an exponential function.
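To spell that out with a minimal sketch: a fixed percentage increase compounds, so N(t) = N0 · (1 + r)^t grows exponentially no matter how small r is. The starting count below is hypothetical, and the 2% and 6% rates simply echo the figures mentioned above.

```python
# Minimal sketch: a constant percentage increase per year is exponential growth,
# even when the percentage is small. The starting count is hypothetical; the
# 2% and 6% rates just echo the figures mentioned above.

start = 100_000  # hypothetical number of papers published in year 0
years = 35

for rate in (0.02, 0.06):
    n = start * (1 + rate) ** years  # N(t) = N0 * (1 + r)^t
    print(f"{rate:.0%} per year for {years} years: {n / start:.1f}x the starting output")

# ~2.0x for 2% per year (a doubling), ~7.7x for 6% per year: both are exponential,
# just with very different doubling times.
```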
CharonY Posted June 6, 2009
Ouch, you are right, of course. I was comparing apples to oranges and writing rubbish in general (in my defense, I have to say that I had a student overload today). Since the 90s the total number has actually stagnated in the US, with the single largest overall increase being around 2%. China, on the other hand, had a steady increase averaging around 6% annually. Upon reflection, even this corrected version does not address the point made by CaptainPanic at all. The only measure that makes sense would, of course, be the total worldwide output. But that also fluctuates, with some years being more productive than others. While there is a steady increase overall, it is linear at best, with some years being less productive than the previous ones. Here the output also faltered around the mid-90s; incidentally, this was also the time when the US started to produce fewer papers than the EU. One has to add, though, that the total amount ever published is also of importance; even older papers have to be taken into account, although ideally reviews should (if they did their job) account for that.
iNow Posted December 5, 2009
Not fully on-topic, but also didn't warrant its own thread. I think some of you folks might get a real kick out of this.
[embedded video: -VRBWLpYCPY]
h/t Effect Measure
Mr Skeptic Posted December 7, 2009
It may even be possible that most papers published in peer reviewed journals are in fact wrong. This is especially likely for some fields in general, and also if small sample sizes are common. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/
CharonY Posted December 7, 2009
Actually this is a given, especially for association studies. For instance, let's say we want to investigate what causes cancer. There are virtually unlimited degrees of freedom when it comes to the study design. We could, for instance, look at any type and amount of food, or exercise, mobile phone usage, etc. In contrast, only a few of these choices will actually be true positives. Since the search space is virtually unlimited, there will always be more false than true positives. Of course, this is even worse if the study design itself has limitations.
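A rough way to see why false positives dominate is to work through the positive predictive value, in the spirit of the Ioannidis paper linked above. This is only a sketch; the prior, significance threshold, and power below are hypothetical numbers chosen to illustrate the mechanics.

```python
# Minimal sketch of the argument in the Ioannidis paper linked above.
# All three numbers are hypothetical, chosen only to illustrate the mechanics.

prior = 0.01   # fraction of tested hypotheses that are actually true (huge search space)
alpha = 0.05   # false-positive rate (the usual significance threshold)
power = 0.80   # probability that a study detects a real effect

true_positives = prior * power          # real effects that get flagged
false_positives = (1 - prior) * alpha   # non-effects that get flagged anyway

ppv = true_positives / (true_positives + false_positives)
print(f"Fraction of 'significant' findings that are real: {ppv:.0%}")

# With a 1% prior, even a well-powered study ends up with roughly 14% of its
# positive findings being true, so most published positives are false, exactly
# as in the association-study case described above.
```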