SkepticLance Posted May 1, 2008 Posted May 1, 2008 This morning I read an article by Dr. Patrick Frank. http://www.skeptic.com/the_magazine/featured_articles/v14n01_climate_of_belief.html He presents a sceptical argument in relation to global warming. He does not deny global warming as such, but casts real doubt on the reliability and accuracy of the computer models used by such organisations as the IPCC.

Before I go any further, let me make this clear. I am not trying to deny anthropogenic global warming. This argument is purely about the reliability and accuracy of climate computer models. Dr. Frank suggests, with considerable evidence to support this, that these models are not to be relied upon. He points out that the predictions for the next 100 years (shown on the graph in his article as figure 1) include error bars that are merely ONE standard deviation from the mean. In other words, 32% of the data points generated by the computer programs lie outside these error bars. This is simply not acceptable as good science. I was always taught that we need to work at the 95% confidence level. 68% is simply not good science.

In addition, these are only the errors that are internally generated by the computer model. They do not include errors that come from input data that is uncertain or plain wrong. A major source of this kind of error comes from uncertainties surrounding cloud formation. Frank, in his figure 3, shows levels of cloud formation at different latitudes from direct observation compared with model prediction. The error from the models compared to reality varies from 5% to 85% depending on latitude. When you add these errors to the internally generated errors, we get results with error bars that make the results totally meaningless.

To those who would like to comment, I would ask you to actually read the article first. Thank you.
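To make the 68% vs. 95% figures concrete, here is a minimal sketch (my own illustration, not anything from Frank's article) of the fraction of a roughly Gaussian model spread that falls inside one- and two-standard-deviation bands; the temperature numbers are purely hypothetical.

```python
# Minimal sketch (not from the article): coverage of +/-1 and +/-2
# standard-deviation bands for a normally distributed model spread.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical ensemble of model projections, degrees C (assumed values)
ensemble = rng.normal(loc=3.7, scale=1.0, size=100_000)

mean, sigma = ensemble.mean(), ensemble.std()
for k in (1, 2):
    inside = np.mean(np.abs(ensemble - mean) <= k * sigma)
    print(f"+/-{k} sigma band covers {inside:.1%} of the ensemble")
# Prints roughly 68.3% for one sigma and 95.4% for two sigma.
```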
bascule Posted May 1, 2008 Posted May 1, 2008 General circulation models use radiative forcings as input. When reconstructing the historical climate, forcing responses can be estimated from historical data and indirect evidence when specific measurements are unavailable (e.g. dendrochronology, ice core samples). When predicting the future, forcing responses can only be estimated. This is why projections generally present best-case, worst-case, and happy-medium estimates which span the range of uncertainties. All this says nothing about the ability of climate models to test the validity of estimated forcing responses against the historical record. This too becomes a problem when using a model to predict the future: you have no way of knowing whether the model outputs are valid, since there's no historical record to compare them to.
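To illustrate the point about scenario-spanning projections, here is a toy sketch (my own, emphatically not an actual GCM; every number in it is an assumption) of a zero-dimensional energy-balance model stepped forward under three hypothetical forcing ramps, which is why projections are usually quoted as a low/mid/high range rather than a single line.

```python
# Toy zero-dimensional energy-balance model (illustrative only, not a GCM).
# dT/dt = (forcing - feedback * T) / heat_capacity, stepped yearly.
HEAT_CAPACITY = 8.0   # effective heat capacity, W yr m^-2 K^-1 (assumed)
FEEDBACK = 1.2        # climate feedback parameter, W m^-2 K^-1 (assumed)
YEARS = 100

def project(forcing_ramp_per_year):
    """Temperature anomaly after YEARS of a linearly growing radiative forcing."""
    temp = 0.0
    for year in range(1, YEARS + 1):
        forcing = forcing_ramp_per_year * year      # W m^-2
        temp += (forcing - FEEDBACK * temp) / HEAT_CAPACITY
    return temp

# Three hypothetical emissions scenarios expressed as forcing ramps (assumed)
for label, ramp in [("low", 0.02), ("mid", 0.04), ("high", 0.06)]:
    print(f"{label:4s} scenario: ~{project(ramp):.1f} K warmer after {YEARS} years")
```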
swansont Posted May 1, 2008 Posted May 1, 2008 He points out that the predictions for the next 100 years (shown on the graph in his article as figure 1) include error bars that are merely ONE standard deviation from the mean. In other words, 32% of the data points generated by the computer programs lie outside these error bars. This is simply not acceptable as good science. I was always taught that we need to work at the 95% confidence level. 68% is simply not good science. Such a blanket statement is bollocks. Different branches use different standards. It's bad science if you don't use them consistently and aren't clear about which confidence interval you are using. I only skimmed the article, but the author's point didn't seem to be about issues with using one standard deviation, it was about precision vs accuracy.
SkepticLance Posted May 2, 2008 Author Posted May 2, 2008 In a previous thread, I made the statement that GCMs cannot predict cloud formation accurately and that this constitutes a major source of error in those models. I was subjected to an attempt at personal ridicule for my trouble. This is what Dr. Frank says about the effect of cumulative errors relating to cloud formation on long-term predictions:

"The result is a little embarrassing. The physical uncertainty accumulates rapidly and is so large at 100 years that accommodating it has almost flattened the steep SRES A2 projection of Figure 1. The ±4.4°C uncertainty at year 4 already exceeds the entire 3.7°C temperature increase at 100 years. By 50 years, the uncertainty in projected temperature is ±55°. At 100 years, the accumulated physical cloud uncertainty in temperature is ±111 degrees. Recall that this huge uncertainty stems from a minimal estimate of GCM physical cloud error."

Does this not make predictions from the GCMs rather ridiculous?
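For what it is worth, the quoted figures scale almost exactly linearly with simulation time (4.4/4 ≈ 55/50 ≈ 111/100 ≈ 1.1 °C per year). The sketch below simply reproduces that arithmetic; it is my reading of the quoted numbers, not Frank's actual error-propagation calculation.

```python
# Arithmetic behind the quoted uncertainty figures (my reading of the numbers
# SkepticLance quotes, not Frank's actual propagation method): an uncertainty
# of roughly +/-1.1 degC accumulating linearly per simulated year.
PER_YEAR_UNCERTAINTY = 4.4 / 4   # degC per year, inferred from +/-4.4 degC at year 4

for years in (4, 50, 100):
    accumulated = PER_YEAR_UNCERTAINTY * years
    print(f"year {years:3d}: +/-{accumulated:.0f} degC accumulated uncertainty")
# Prints roughly +/-4, +/-55 and +/-110 degC, close to the quoted figures.
```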
iNow Posted May 2, 2008 Posted May 2, 2008 In a previous thread, I made the statement that GCMs cannot predict cloud formation accurately and this constitutes a major source of error in those models. I was subject to an attempt at personal ridicule for my trouble. Did you ever even attempt to quantify "major," or were you content to use this gray rhetoric as the "end all, be all" of your approach?
bascule Posted May 2, 2008 Posted May 2, 2008 For the record: one of the main duties of my previous job was to help integrate a model of the climatic effects of cloud cover into a GCM...
Pangloss Posted May 2, 2008 Posted May 2, 2008 Swansont, if it's bollocks then why does the magazine call the article "controversial"? I particularly enjoyed this quote at the end: Many excellent scientists have explained all this in powerful works written to defuse the CO2 panic, but the choir sings seductively and few righteous believers seem willing to entertain disproofs.
swansont Posted May 2, 2008 Posted May 2, 2008 Swansont, if it's bollocks then why does the magaine call the article "controversial"? I was referring to SkepticLance's claim that "68% is simply not good science," which dismisses any use of an error bar at one standard deviation as being poor science. Rubbish. That wasn't the claim of the article, as far as I could tell. The thrust of the argument was that precision (i.e. the size of the error) isn't relevant if the error isn't including terms that make the prediction inaccurate, not that one sigma is bad and they should have used two sigma.
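Since the precision-versus-accuracy distinction is the crux here, a small illustrative sketch may help (mine, not the article's; the numbers are invented): an estimator with a systematic bias can report very tight, "precise" error bars and still sit far from the true value, which no amount of widening from one sigma to two will fix if the bias term is left out.

```python
# Illustration of precision vs. accuracy with invented numbers.
import numpy as np

rng = np.random.default_rng(1)
TRUE_VALUE = 3.7  # hypothetical "true" quantity

biased_precise = rng.normal(TRUE_VALUE + 2.0, 0.1, size=10_000)   # tight spread, big bias
unbiased_imprecise = rng.normal(TRUE_VALUE, 1.5, size=10_000)     # wide spread, no bias

for name, sample in [("biased but precise", biased_precise),
                     ("unbiased but imprecise", unbiased_imprecise)]:
    bias = sample.mean() - TRUE_VALUE
    print(f"{name:24s}: bias {bias:+.2f}, 1-sigma spread {sample.std():.2f}")
```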
DrP Posted May 2, 2008 Posted May 2, 2008 Here's an idea - why don't we keep all of the computer predictions somewhere - perhaps in a new thread in the forum's archives or something - then wait 50-100 years (some of us may still be around; if not, we could pass on our quest to the next generation of science forum members). Then we can compare them to the actual data recorded. We'd be like the monks in The Fifth Element who keep the info about the aliens and the master weapon secret for 300 years until they return..... OK, I'm getting a bit carried away here, but you get the idea?
Pangloss Posted May 2, 2008 Posted May 2, 2008 I was referring to SkepticLance's claim that "68% is simply not good science," which dismisses any use of an error bar at one standard deviation as being poor science. Rubbish. That wasn't the claim of the article, as far as I could tell. The thrust of the argument was that precision (i.e. the size of the error) isn't relevant if the error isn't including terms that make the prediction inaccurate, not that one sigma is bad and they should have used two sigma. So then why does the editor of Skeptic call the article "controversial"? The following is Patrick Frank’s controversial article challenging data and climate models on global warming.
swansont Posted May 2, 2008 Posted May 2, 2008 So then why does the editor of Skeptic call the article "controversial"? Why are you asking me?
SkepticLance Posted May 2, 2008 Author Posted May 2, 2008 To swansont I made the claim that an error bar of one standard deviation is not good science. It was my conclusion - not the author's. And you have not addressed my conclusion. One standard deviation includes only 68% of the data points. I was taught to work to 95%, not 68%. The vastly greater error makes the conclusions from those models very suspect in my opinion.
swansont Posted May 2, 2008 Posted May 2, 2008 To swansont I made the claim that an error bar of one standard deviation is not good science. It was my conclusion - not the author's. And you have not addressed my conclusion. One standard deviation includes only 68% of the data points. I was taught to work to 95%, not 68%. The vastly greater error makes the conclusions from those models very suspect in my opinion. I thought I made it clear in both post #3 and in post #8 that I was addressing a statement of yours, and not the paper's. Just so everybody understands this. Your claim has little or nothing to do with what the paper asserts is a problem: accuracy vs precision. If it's one standard deviation and you'd rather it be two, double the length of the error bars. It's really only an issue if your standard is e.g. 95% confidence interval, and you report numbers using 68%, and misrepresent your findings. But using 68% does not make it "poor science"
SkepticLance Posted May 2, 2008 Author Posted May 2, 2008 To swansont The reason the error bars are not doubled is obvious with just a glance at the graph. Double the error bars and the various lines representing the predictions from different scenarios will overlap. The distinctions will diminish and make them rather closer to meaningless. The use of one standard deviation, rather than the more proper two is obviously the result of a political decision, instead of a scientific decision. I think that the people behind these published model results are trying to pretend the results are more meaningful than they really are, if we analyse them in a proper scientific way.
bascule Posted May 3, 2008 Posted May 3, 2008 I made the claim that an error bar of one standard deviation is not good science. It was my conclusion - not the author's. And you have not addressed my conclusion. One standard deviation includes only 68% of the data points. I was taught to work to 95%, not 68%. The vastly greater error makes the conclusions from those models very suspect in my opinion. Good science for what? Predicting the global mean surface temperature decades into the future? As you agree that anthropogenic forcings are dominating the present radiative imbalance, how do you expect them to do that without predicting human behavior? The present models do a good job at reconstructing the past. Models should be seen as a way of testing present theory through historical reconstructions, not as a reliable predictive tool.
JohnB Posted May 3, 2008 Posted May 3, 2008 For the record: one of the main duties of my previous job was to help integrate a model of the climatic effects of cloud cover into a GCM... That is interesting. Could you elaborate a bit on that? High or low-level cloud? Was the cloud cover a feedback result or a forcing in its own right? I'm not trying to put you on the spot; I really would like to know.
stingray78 Posted May 3, 2008 Posted May 3, 2008 Check this out. It's really interesting: http://www.tubepolis.com/play.php?q=global%20warming&title=Most%2BTerrifying%2BVideo%2BYoull%2BEver%2BSee&engine=1&id=bDsIFspVzfI&img=http%253A%252F%252Fi.ytimg.com%252Fvi%252FbDsIFspVzfI%252Fdefault.jpg
Aardvark Posted May 3, 2008 Posted May 3, 2008 Be careful SkepticLance, any questioning of Global warming will get you in trouble with the Thought police! How dare you imply that human understanding of the Earth's climatic system is not perfect?
Aardvark Posted May 3, 2008 Posted May 3, 2008 You don't even appear to understand what the word 'strawman' means. Do you ever actually try to make an argument using facts and reason or do you always rely on name calling and labelling?
iNow Posted May 3, 2008 Posted May 3, 2008 You don't even appear to understand what the word 'strawman' means. Do you ever actually try to make an argument using facts and reason or do you always rely on name calling and labelling? I think you'll find that I was accurately describing your post, and was not attacking you personally. However, that is precisely what you've done in response, resorting to personal attack.
Aardvark Posted May 3, 2008 Posted May 3, 2008 I think you'll find that I was accurately describing your post, and was not attacking you personally. However, that is precisely what you've done in response, resorting to personal attack. Ah yes, I am the one indulging in name calling! :D It does seem to be a habit of yours, name calling and the use of labels to try and shut down discussion. A shame really.
SkepticLance Posted May 3, 2008 Author Posted May 3, 2008 To iNow Aardvark has a point. You do tend to over-use the strawman accusation, including using it where it is not appropriate. In this case Aardvark was being ironic rather than raising a serious argument, and your accusation was definitely not appropriate. A better response would have been to respond to irony with humour.
swansont Posted May 3, 2008 Posted May 3, 2008 To iNow Aardvark has a point. You do tend to over-use the strawman accusation, including using it where it is not appropriate. In this case Aardvark was being ironic rather than raising a serious argument, and your accusation was definitely not appropriate. A better response would have been to respond to irony with humour. An application of Poe's law. It is impossible to make a parody of denialism that someone won't mistake for the real thing. To swansont The reason the error bars are not doubled is obvious with just a glance at the graph. Double the error bars and the various lines representing the predictions from different scenarios will overlap. The distinctions will diminish and make them rather closer to meaningless. The use of one standard deviation, rather than the more proper two is obviously the result of a political decision, instead of a scientific decision. I think that the people behind these published model results are trying to pretend the results are more meaningful than they really are, if we analyse them in a proper scientific way. Repeating your argument doesn't make it right. You need to establish that using one standard deviation isn't common practice if you want to call it "proper science." Good luck with that. Just because your field does it one way does not allow you to validly extrapolate that to all science. It's been my observation that fields that do population sampling (e.g. medical testing) tend to use confidence intervals rather than standard error. But other fields use standard error. You would have to establish that it isn't common practice in climate science to begin to conclude that the decision was political. (more on error bars: http://scienceblogs.com/cognitivedaily/2007/03/most_researchers_dont_understa.php )
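As an aside on the error-bar conventions swansont mentions, here is a minimal sketch (mine, not from the linked post) of the difference between quoting a mean with its standard error and quoting it with a 95% confidence interval; for roughly Gaussian errors the 95% interval is simply about twice the standard error, so the choice affects the length of the bars, not the underlying result.

```python
# Standard error vs. 95% confidence interval for the same sample (illustrative).
import numpy as np

rng = np.random.default_rng(2)
sample = rng.normal(loc=3.7, scale=1.0, size=50)   # hypothetical measurements

mean = sample.mean()
std_error = sample.std(ddof=1) / np.sqrt(len(sample))

print(f"mean +/- standard error : {mean:.2f} +/- {std_error:.2f}")
print(f"mean +/- 95% interval   : {mean:.2f} +/- {1.96 * std_error:.2f}")
```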
Aardvark Posted May 3, 2008 Posted May 3, 2008 An application of Poe's law. It is impossible to make a parody of denialism that someone won't mistake for the real thing. So my suggestion of a 'thought police' in Scienceforums doesn't strike you as obviously not meant to be taken literally? And as for 'denialism', gosh, what a great word, just slap it on anyone who ever dares to raise any queries about Global Warming and job done! After all, if they are a 'denialist' then you don't have to worry about their questions or arguments, do you? (Smiley added just for you)