bascule Posted June 10, 2008

Well, we've had quite the thread discussing various data errors in the sources scientists use to build a comprehensive picture of our climate system. However, are these errors affecting the bigger picture, in the form of model outputs? Now that the errors have been corrected, are we seeing a radically different picture of the climate system? As far as I'm aware, we are not...
Reaper Posted June 10, 2008

It would seem that most of the errors are really quite trivial, since they mostly concern things that don't have much of an effect on the overall picture (that AGW is real, and that the temperature changes mostly match the observed values).
JohnB Posted June 13, 2008 (edited)

"Now that the errors have been corrected, are we seeing a radically different picture of the climate system?"

Have they been corrected? And are the corrections themselves correct? There is a rework in progress concerning SST data and it is too soon to tell the final fallout. I've seen estimates that the trend for the last 50 years may decrease by 20%; this is not minor.

A simple recent example. Hadley Centre radiosonde data for the tropical troposphere is found here. Steve M from ClimateAudit graphs it with this result. As can be readily seen, and corroborated by the .txt from Hadley, not much is happening at the 200 hPa level. Enter Allen and Sherwood; their paper in Nature paints a very different picture:

"Over the period of observations, we find a maximum warming trend of 0.65 ± 0.47 K per decade near the 200 hPa pressure level, below the tropical tropopause."

So which is it? Virtually nothing, or 0.65 K per decade? Or the raw v adjusted data I posted for Wellington: 1.5 K warming or SFA? If a 1.5 K difference doesn't affect the models, then they can't be much good, can they?

BTW, I hope there is more meat in Allen and Sherwood than the abstract implies. From the abstract I find it hard to call this paper anything but BS.

"Warming patterns are consistent with model predictions except for small discrepancies close to the tropopause."

Good, it's consistent with the models.

"Our findings are inconsistent with the trends derived from radiosonde temperature datasets and from NCEP reanalyses of temperature and wind fields."

So their theoretical proxy actually does not agree with obs in the real world. Note that this means the models also do not agree with obs.

"The agreement with models increases confidence in current model-based predictions of future climate change."

Am I insane or are they? Neither the proxy nor the models agree with obs, but they agree with each other and so must be right? Since when does proxy data trump real world obs?
Note that the paper is getting its raw data from the very radiosondes that it claims are inaccurate to the tune of 0.65 K/decade. Very strange. Edited June 13, 2008 by JohnB typos
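An aside for readers unfamiliar with where per-decade figures like these come from: a trend such as "0.65 K per decade" is simply the slope of an ordinary least-squares fit to a temperature time series, rescaled from per-month to per-decade units. A minimal sketch with synthetic data (the series length, trend value, and noise level below are purely illustrative, not taken from any of the datasets under discussion):

```python
import numpy as np

# Synthetic monthly temperature anomalies over 30 years with a known
# underlying trend of 0.2 K/decade plus random noise (illustrative only).
rng = np.random.default_rng(42)
months = np.arange(360)                      # 30 years of monthly values
true_trend_per_decade = 0.2                  # K per decade
anomalies = true_trend_per_decade * months / 120 + rng.normal(0, 0.15, 360)

# Ordinary least-squares slope in K per month, rescaled to K per decade
# (120 months per decade).
slope_per_month, intercept = np.polyfit(months, anomalies, 1)
trend_per_decade = slope_per_month * 120

print(f"estimated trend: {trend_per_decade:.2f} K/decade")
```

The uncertainty attached to such a trend (the "± 0.47" part) depends heavily on the noise level and record length, which is one reason short or noisy upper-air records produce such wide error bars.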
iNow Posted June 14, 2008

John, I just want to point out that you didn't address the question which you quoted at the beginning of your response.
bascule Posted June 14, 2008 (Author)

"Have they been corrected? And are the corrections themselves correct?"

To the best of our knowledge, yes, and that's the best answer science can ever give.

"Am I insane or are they? Neither the proxy nor the models agree with obs, but they agree with each other and so must be right? Since when does proxy data trump real world obs?"

The problem here is whether or not the "real world obs" are a suitable data set for the type of analysis being performed. Here's RealClimate's take: http://www.realclimate.org/index.php?p=179

"This assumes that the observed trends are all real, which is reasonable when two independent measurements agree. But both upper-air observing systems are poorly suited in many respects for extracting small, long-term changes. These problems are sufficiently serious that the US National Weather Service (NESDIS) adjusts satellite data every week to match radiosondes, in effect relying upon radiosondes as a reference instrument. This incidentally means that the NCEP/NCAR climate reanalysis products are ultimately calibrated to radiosonde temperatures."

And getting back to the topic at hand, the question is: do these data errors actually affect the model output? RealClimate concludes... no:

"The most likely resolution of the 'lapse-rate conundrum,' in my view anyway, is that both upper-air records gave the wrong result. The instrument problems uncovered by these papers indicate that there is no longer any compelling reason to conclude that anything strange has happened to lapse rates. From the point of view of the scientific method, the data do not contradict or demand rejection of the hypotheses embodied by models that predict moist-adiabatic lapse rates, so these hypotheses still stand on the basis of their documented successes elsewhere. Further work with the data may lead us to more confident trends, and who knows, they might again disagree to some extent with what models predict and send us back to the 'drawing board.' But not at the present time."
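For readers unfamiliar with the term in the RealClimate quote: the moist-adiabatic lapse rate is the rate at which saturated air cools as it rises, and it is the profile models predict should govern tropical tropospheric temperatures. A minimal sketch of the standard textbook approximation (the constants, example temperature, and mixing ratio below are generic textbook values, not figures from the thread):

```python
# Moist (saturated) adiabatic lapse rate, standard approximation.
g = 9.81        # gravitational acceleration, m/s^2
L_v = 2.5e6     # latent heat of vaporization of water, J/kg
R_sd = 287.0    # specific gas constant of dry air, J/(kg K)
c_pd = 1004.0   # specific heat of dry air at constant pressure, J/(kg K)
eps = 0.622     # ratio of gas constants, water vapour / dry air

def moist_lapse_rate(T, r_s):
    """Saturated adiabatic lapse rate in K/m for temperature T (K)
    and saturation mixing ratio r_s (kg/kg)."""
    num = g * (1.0 + (L_v * r_s) / (R_sd * T))
    den = c_pd + (L_v ** 2 * r_s * eps) / (R_sd * T ** 2)
    return num / den

# Example: warm lower troposphere, T = 288 K, r_s ~ 11 g/kg.
gamma = moist_lapse_rate(288.0, 0.011)
print(f"moist adiabatic lapse rate: {gamma * 1000:.1f} K/km")
```

Because this rate (roughly 4-7 K/km in warm moist air) is well below the dry rate of about 9.8 K/km, models expect the tropical upper troposphere to warm faster than the surface, which is exactly the prediction the radiosonde records appeared to contradict.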
JohnB Posted June 14, 2008

iNow, the question presupposed that the errors have in fact been corrected. Since I don't think this is the case, the question becomes moot.

"To the best of our knowledge, yes,"

Since the bucket thing blew to prominence a couple of weeks ago, I doubt that adjustments have been published. However, even at the minimum guesstimates I've seen, the warming trend for post-1975 could drop by 15%. How could this not affect the models?

"The problem here is whether or not the 'real world obs' are a suitable data set for the type of analysis being performed."

I hate to appeal to popularity :D, but NOAA, UAH and the Hadley Centre all seem to think the data is correct and useful, but who are they to argue? I would add that the author of the article in the link is the same person who just published a paper arguing that the speed of a drifting balloon at altitude is a more accurate gauge of temperature than the thermometer carried by said balloon. I also fail to see how temp records for the troposphere are not a suitable dataset for considering temps in the troposphere.

Sherwood in this piece offers nothing but nit-picking and handwaving. In his paper he offers proxy theory and model theory. The Hadley Centre gives data that contradicts his assertions. Does theory trump obs, or do obs trump theory? Either the part of the theory that predicts tropospheric warming is wrong, or all the independent methods for taking temps are wrong (and are wrong in the same way, to the same degree). Or do we "adjust" the data to fit the models?

As a slight aside, the Douglass 2008 paper found two models whose predictions matched the trop obs; these two unfortunately were also way out WRT surface temps.

I realise I must come across sometimes as "anti-model"; I'm not really. I want the models to work and be accurate. For this to happen the modellers must have accurate data to calibrate against, or GIGO.
I don't point out data flaws to say "Ha, the models are wrong"; I'm trying to say "How can modellers get it right if they are given flawed data to work with?" If we are to calibrate against the paleoclimate record then those reconstructions must be accurate; the current "Divergence Problem" counts against this being the case. Our instrumental records need to be accurate, not changed from month to month as GISS do.

For me it's not about AGW v skeptics, it's about getting the basic science right. It's about being open with data and methods so they can be checked. It's about showing the methodology in papers. I've read many papers during my time at SFN and only in climate papers do I recall reading things like "We applied the usual statistical calculations". Only in climate could someone say, as Phil Jones did to meteorologist Warwick Hughes:

"Why should I make the data available to you, when your aim is to try and find something wrong with it."

How would that go down if said by someone defending a thesis? How do we confirm Lonnie Thompson's calculations for using O18 as a temp proxy if he won't archive the data?

I understand that RealClimate tries to be the seat of received wisdom, but CA make some very valid arguments, as do the Pielkes, Sr and Jr. There are a number of people out there just as qualified as the boys at RC who disagree with them. There are a range of bloggers who quote peer-reviewed literature for their points, and it behooves (I've been looking for a reason to use that word :D) anyone who wishes to consider the situation fully to read as much as possible, rather than just the sanitized, edited version that RC puts out. Try reading what mathematicians and statisticians say about the appropriateness of the statistical methodology used in some climate papers. It's not about right or wrong, it's about the science.

And in the interests of full disclosure, so that I too can be justly reviled by Exxonsecrets, I have received money from the oil industry.
30 years ago I pumped petrol at a Shell service station. There, it's off my chest, my shameful secret is out. My bias is obvious.
bascule Posted June 14, 2008 (Author)

"I hate to appeal to popularity :D, but NOAA, UAH and the Hadley Centre all seem to think the data is correct and useful, but who are they to argue? [...] I also fail to see how temp records for the troposphere are not a suitable dataset for considering temps in the troposphere."

Well, the question here was whether upper-air observing systems are suitable for analysis of the long-term variability that a climate model wishes to expose.

"Either the part of the theory that predicts tropospheric warming is wrong or all the independent methods for taking temp are wrong. (And are wrong in the same way to the same degree.)"

The latter begins to seem more likely if you consider that the satellite observations are calibrated against the radiosondes. The methods aren't independent at all.

"Or do we 'adjust' the data to fit the models?"

Nobody's suggesting we do that. There is a problem here, and you're correct, it's a case where the models are contradicting empirical data. However, as you noted originally, it's empirical data which is suspect due to measurement errors ultimately stemming from the sun heating up the temperature sensors on the radiosondes (and satellite data calibrated against the radiosondes).

"Our instrumental records need to be accurate, not changed from month to month as GISS do."

On the contrary, if there's a reason to believe the data are wrong every month, and a sound methodology for performing a correction, I'd rather they correct the data than continue to use incorrect data.

"I understand that RealClimate tries to be the seat of received wisdom, but CA make some very valid arguments, as do the Pielkes, Sr and Jr. There are a number of people out there just as qualified as the boys at RC who disagree with them. There are a range of bloggers who quote peer-reviewed literature for their points, and it behooves (I've been looking for a reason to use that word :D) anyone who wishes to consider the situation fully to read as much as possible, rather than just the sanitized, edited version that RC puts out."

I entirely agree. There are also things affecting the climate system which are under-assessed, such as land use (something which is difficult for a GCM to model). When you begin looking at a combination of regional effects which play into the climate system, it becomes difficult to form a gestalt model of them, as they are so different around the globe. However, that's not to undermine the successes of GCMs to date...

"Try reading what mathematicians and statisticians say about the appropriateness of the statistical methodology used in some climate papers."

I've read quite a few articles from angry statisticians with no climate science background. These tend to be FUD-laced articles operating under the premise that climate science groups and agencies don't employ statisticians and that climate scientists are making a bunch of amateurish mistakes. This certainly isn't the case.
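The point about calibration destroying independence can be illustrated with a toy simulation (the numbers below are entirely hypothetical): if instrument B is routinely adjusted to match instrument A, then any systematic bias in A propagates into B, so the two records agree with each other while sharing the same error against the truth. Agreement between them then tells us nothing extra.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
truth = rng.normal(0.0, 1.0, n)          # the "real" temperatures

# Instrument A (radiosonde-like) has a systematic warm bias plus noise.
bias_a = 0.5
a = truth + bias_a + rng.normal(0, 0.1, n)

# Instrument B (satellite-like) starts unbiased, but is then
# "calibrated" to match A, inheriting A's systematic bias.
b_raw = truth + rng.normal(0, 0.1, n)
b_calibrated = b_raw + (a.mean() - b_raw.mean())

# After calibration the two records agree with each other...
print(f"mean(A) - mean(B_cal): {a.mean() - b_calibrated.mean():.3f}")
# ...yet both carry essentially the same systematic error vs. the truth.
print(f"mean error of A:     {(a - truth).mean():.3f}")
print(f"mean error of B_cal: {(b_calibrated - truth).mean():.3f}")
```

This is only a sketch of the general statistical point, not a model of the actual NESDIS adjustment procedure, which operates on weekly data rather than a single mean offset.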
JohnB Posted June 17, 2008

Bascule, not ignoring this thread. I'm thinking about your comments. Cheers.
iNow Posted January 16, 2009

Every year in April there is a Mathematics Awareness Month, with the goal of increasing public understanding of and appreciation for mathematics. This year, the theme for the event is Mathematics and Climate. http://www.mathaware.org/index.html

"The American Mathematical Society, the American Statistical Association, the Mathematical Association of America, and the Society for Industrial and Applied Mathematics announce that the theme for Mathematics Awareness Month, April 2009, is Mathematics and Climate. One of the most important challenges of our time is modeling global climate. Some of the fundamental questions researchers are currently addressing are: How long will the summer Arctic sea ice pack survive? Are hurricanes and other severe weather events getting stronger? How much will sea level rise as ice sheets melt? How do human activities affect climate change? How is global climate monitored? Calculus, differential equations, numerical analysis, probability, and statistics are just some of the areas of mathematics used to understand the oceans, atmosphere, and polar ice caps, and the complex interactions among these vast systems."

How sweet is that? Thanks, Chris. A great big hat tip your way.
JohnB Posted January 28, 2009

I suppose $140 million from the Recovery Act will help accuracy too.