CharonY Posted March 10, 2014

One of the recurrent topics in posts here is intelligence in humans and its heritability. In almost every case there are serious misconceptions that result in the same type of often fruitless discussion conducted over and over again. In this thread I would like to clear up some issues to the best of my abilities. While I will stick to the current literature, I am not an expert in the area of intelligence research. Hopefully I will provide enough information to start a proper discussion, or at least to clear up some low-level confusion.

I will preface by stating that many studies utilize measures such as IQ, which is generally accepted not to be a precise measure of intelligence. Nonetheless, there is more comparable data using this type of measurement, so I will start the discussion centered around this simplified measure.

The first obvious question is: "What is heritability?" What some erroneously assume is that it is a fixed value that indicates the genetic influence on a given trait. For example, various studies estimate the heritability of IQ to be between 0.4 and 0.8, and some equate that to saying that IQ is up to 80% under genetic control. However, this interpretation is not correct. Heritability is estimated as the ratio of genetic variation to total variation in a trait within a given population. What often causes confusion is the population bit. It actually means that (in this case) up to 80% of the observed variance across individuals is genetic. This is completely different from the assertion that 80% of the trait is under genetic control. This has several consequences:

1) Obviously, using this measure you cannot elucidate the contribution of environment and genetics for an individual. In other words, in one person the genetic component of their IQ may be high, in another it may be low, but the measure of heritability will not resolve that.

2) Related to that, it means that in different populations the measure of heritability will change. It is not a fixed constant.

3) Conversely, the same population could yield very different heritability scores depending on environmental conditions. For example, if everyone shares the same conditions, the contribution of environmental factors to IQ variance will go down, yielding a higher heritability. The opposite is true for varied conditions.

4) In addition, heritability requires variability to make any sense. For traits that are 100% genetically controlled but do not show variance (say, number of heads), heritability is undefined. This again highlights that one cannot conflate heritability with genetic control.

OK, you may ask, but what about twin studies? There is evidence that identical twins have more similar IQ values than fraternal ones, for example. But even this situation is not straightforward. For instance, older studies from Soviet times showed a higher heritability of IQ than comparable Western studies (Elena L. Grigorenko, Nova Publishers, 1997). One interpretation was that environmental conditions (in economic and schooling terms) under the Soviet regime were much more restricted and less varied than in the Western world at that time. Another issue is that in order to conduct a "proper" twin study for this purpose, the twins would have to be assigned to truly random environments (spanning the variability found in a given test population). This is usually not the case.

But the issue is actually far more complex, which I shall discuss in a follow-up post (adding references) once I find more time and if there is interest.
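As a minimal numerical sketch of the variance-ratio definition above (all numbers invented for illustration): the same genetic variation yields very different heritability estimates as the environmental spread changes, which is exactly points 2 and 3.

```python
import random
import statistics as stats

# Heritability as a variance ratio: h^2 = Vg / (Vg + Ve).
# The same genetic variation gives a different h^2 estimate
# depending on how variable the environment is.
random.seed(1)
genetic = [random.gauss(0, 10) for _ in range(10_000)]  # genetic contributions

for env_sd in (5, 10, 20):  # uniform vs. highly varied environments
    env = [random.gauss(0, env_sd) for _ in range(len(genetic))]
    trait = [g + e for g, e in zip(genetic, env)]
    h2 = stats.variance(genetic) / stats.variance(trait)
    print(f"environmental SD = {env_sd:2d} -> estimated h^2 = {h2:.2f}")
```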
chadn737 Posted March 10, 2014

Nice explanation of heritability. To add to this, let me make an analogy for why heritability changes depending on environmental/population context.

Say you have two buckets. The first you fill halfway with oil, the second you fill 1/4 of the way with oil. The two vary by 50% in total volume, and 100% of the variance is due to differences in oil. Now fill the first bucket up the rest of the way with water and the second bucket up to halfway with water. The two still vary by 50%, but now differences in oil only contribute 50% of the variance, while water contributes the other half.

Heritability is a ratio: reduce one source of variance and you necessarily increase the relative influence of the others. If you have a genetically homogeneous population, such as inbred lines of plants, then you reduce the genetic source of variance and the effect of environmental variance increases. Conversely, if you grow genetically different plants under exactly the same conditions, then the effect of genetics predominates. This is why it is fallacious to claim that differences in IQ between populations with very different and variable environmental conditions are largely heritable based on measures made in other populations. This can lead to overestimation of heritability, and also to underestimation...the latter being a point often overlooked.

The misinterpretation of heritability has been rampant in discussions of race, where people incorrectly apply measures of heritability in one population to another. Let's consider now two groups of two buckets each. In both groups, one bucket is half full of oil and the other 1/4 full of oil, so the variance in oil is the same in both groups. In group one, we don't fill any of the buckets with water, but in group two we fill the half-full bucket the rest of the way with water and the 1/4 bucket up to 3/8 full with water. The second group has greater variability in volume than the first group, but the two groups actually have identical differences in the amount of oil, so the difference in variance between the two groups is entirely due to water, not oil.

Since heritability is so messy, why bother studying it? Because it actually has importance in terms of evolution. The more heritable a trait is, the stronger its response to selection will be. This is known as the Breeder's Equation (R = h²S), because of its use in plant and animal breeding. Ironically, when you talk about how things have evolved, oftentimes the traits that have been under the strongest selection during evolution are the least heritable. This is because selection typically eliminates variance, unless it is balancing selection. Strong positive selection will drive an allele to high frequency in a population, even to fixation. In that case, you have eliminated the genetic variance in the population, because all individuals share the same allele, and so any variance is now due to non-genetic factors. This is not a universal rule, but it is a general trend. That's why there is a lot more variance for traits like height than for traits like how many legs you have.

As a note on twin studies, particularly the change of heritability with age: this actually makes a lot of sense if you think in developmental terms. In adulthood, an individual reaches a point of almost stasis. If we are talking about something like height...that does not change much in adulthood.

However, during early development, you are constantly growing and developing. You are also more restricted: you are under the care of parents, guardians, teachers, etc. Children are generally more plastic, while traits solidify, for lack of a better term, in adulthood. The observation that the heritability of IQ increases with age has been made in many studies.

Finally, while twin studies have been the predominant means of measuring heritability for many decades, recently (by which I mean in 2010) a new method of estimating heritability directly from the linkage disequilibrium of SNPs in genome-wide association studies has offered a very powerful way of measuring heritability independent of the issues that often confound twin or sibling studies. This method continues to be improved and has been shown to be more general in its applicability than previously assumed. It makes it possible to determine a lower bound on the heritability. When applied to IQ, the heritability of intelligence was estimated to be at least ~0.4 for crystallized-type intelligence and ~0.51 for fluid-type intelligence.
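As a toy illustration of the Breeder's Equation mentioned above (the numbers are invented purely for illustration):

```python
# Breeder's Equation: R = h^2 * S, where S is the selection differential
# and R is the predicted response to selection. Toy numbers only.
h2 = 0.6           # narrow-sense heritability of the trait
pop_mean = 100.0   # population mean before selection
sel_mean = 110.0   # mean of the individuals selected as parents

S = sel_mean - pop_mean  # selection differential
R = h2 * S               # predicted shift in the offspring mean
print(f"expected offspring mean: {pop_mean + R:.1f} (a response of {R:.1f})")
```

The equation also makes the evolutionary point concrete: as h² shrinks (selection exhausting genetic variance), the response R shrinks with it, no matter how strong the selection differential S is.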
CharonY Posted March 10, 2014

Brilliant, I was secretly hoping that someone with more in-depth knowledge would add to that part (and I was planning on citing Davies et al.).
overtone Posted March 11, 2014

"When applied to IQ, the heritability of intelligence was estimated to be at least ~0.4 for crystallized-type intelligence and ~0.51 for fluid-type intelligence."

A fine approach. However, at this point, two related warnings.

One is that it is (as seen there) easy to forget, after a few paragraphs of sound discussion like this, the dutiful initial caveat that IQ is not a "precise" (and possibly not even a reliable) measure of "intelligence" as independently defined. Those lower bounds may need to be lowered yet by quite a bit, depending on the actual correlation of IQ scores and whatever properties or capabilities one eventually decides to have wanted to measure. Presumably it's not an inherited ability to sit in a chair and register details of the tones of a teacher's voice for hours, at age eight, eh?

The second is that the relationship between IQ and environment even, let alone intelligence and environment, is very complex and poorly understood - there is a gap between inheritance of genetic features etc. and their correlation with IQ, and ascription of some kind of causal relationship that fixes an inherited feature or capability. Remember the Flynn effect, Komlos's height studies, Steele's measurements of stereotype bias in examinations, etc. If the heritability of IQ is 50% now, after ten decades of presumably non-heritable Flynn increases in the scores, it must have been much higher in the 1940s - is that reasonable?
chadn737 Posted March 11, 2014

In response to your first caveat: this study actually made the distinction between crystallized-type intelligence and fluid-type intelligence. The former is a measure of acquired knowledge, in particular vocabulary. The latter is assessed by abstractions during a timed test and so is not based on acquired knowledge. While I think your argument has some validity regarding crystallized-type intelligence, I think it fails to address the abstract on-the-spot thinking involved in fluid-type intelligence. Secondly, the individuals used in this study are all adults, most having been tested at age 11 and then retested in adulthood. So we are not talking about school kids. I understand that there is much debate over IQ; there has been for decades. Despite all that, it has remained the most reliable and robust measure of intelligence we have.

In response to the second point: it does not follow that heritability measured in the 1940s would be higher because of the Flynn Effect. To explain to everyone, the Flynn Effect is the observed increase in the mean IQ score over the years. The key word in that description is "mean". Heritability, as both CharonY and I have explained, is about the variance in a measured trait, not the mean. In order for measures of heritability in the 1940s to differ significantly from today's, there needs to be a change in the observed variance. So the real question is not whether the mean IQ score has changed, but whether the variance has changed. Contrary to what one might expect, the empirical data does not support a change in the variance of IQ scores even as the mean has changed. So in fact we do not expect measures of heritability to differ greatly due to the Flynn Effect. We might expect them to differ because of methodology: the design of twin studies has changed dramatically, and the aforementioned study used a method that relies on GWAS data (i.e., post-human genome) and a technique that was only developed in 2010. In addition, I have to reiterate the fact that these adults were retested. So those who were tested as part of their study cohort back in the 30s or some other time have been tested more recently.

Furthermore, the members of these cohorts were all analyzed within the context of their cohort. The individuals from the various Lothian Birth Cohorts were all born in Scotland within a year of each other. As a result, you are talking about analysis within a very well-defined group who all share the same cultural norms and birth period. By doing this, you control for the observed increases in mean IQ that go with the years and/or region. In other words, you are controlling for the potential environmental factors and biases that you are talking about.
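A toy demonstration of the mean-versus-variance point (the scores are hypothetical): adding the same boost to every individual moves the mean but leaves the variance, and hence any variance ratio such as heritability, untouched.

```python
import statistics as stats

# A uniform gain shifts the mean but leaves the variance unchanged,
# so a variance ratio such as heritability is unaffected. Toy numbers.
scores = [85, 92, 100, 104, 111, 118]  # hypothetical IQ-like scores
boosted = [s + 10 for s in scores]     # everyone gains 10 points

print(stats.mean(scores), stats.variance(scores))
print(stats.mean(boosted), stats.variance(boosted))  # mean up, same variance
```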
overtone Posted March 11, 2014

"This study actually made the distinction between crystallized-type intelligence and fluid-type intelligence. The former is a measure of acquired knowledge, in particular vocabulary. The latter is assessed by abstractions during a timed test and so is not based on acquired knowledge. While I think your argument has some validity regarding crystallized-type intelligence, I think it fails to address the abstract on-the-spot thinking involved in fluid-type intelligence."

The Flynn effect, which most people agree is environmental, makes no distinction - the effect is at least as pronounced in "fluid" as in "crystallized" intelligence, to whatever extent that distinction actually makes sense. Meanwhile, I have seen no evidence that IQ tests are any more reliable or accurate cross-cultural measures of "fluid intelligence" than they are of any other kind, or that "fluid intelligence" measurement is any more independent of all the confounding factors known and unknown.

"In response to the second point: it does not follow that heritability measured in the 1940s would be higher because of the Flynn Effect. To explain to everyone, the Flynn Effect is the observed increase in the mean IQ score over the years. The key word in that description is 'mean'. Heritability, as both CharonY and I have explained, is about the variance in a measured trait, not the mean. In order for measures of heritability in the 1940s to differ significantly from today's, there needs to be a change in the observed variance."

It's not that simple. As your bucket analogy illustrates, variance needs to be calculated carefully with respect to changing means. In the case of IQ tests, moreover, we are dealing with a manipulated variance - the tests are calibrated to produce not only a given median score, but a standard deviation of fifteen or sixteen points. To reiterate: we are faced with an apparently environmental addition of at least twenty points to the median IQ scores since WWII - greater than the original (since manipulated) standard deviation. This increase appears to have been disproportionately in the lower scores (which adds complexity to the removal of the calibration, and complicates your calculation of variance). How exactly are you subtracting that out, to calculate how much of the remaining score to use in calculating the variance you then attribute to inheritance?

"So in fact we do not expect measures of heritability to differ greatly due to the Flynn Effect."

That expectation is what I am pointing to, as easily misled by an overlooked environmental factor or two. And that is just the Flynn effect - one of several problems here.

"The individuals from the various Lothian Birth Cohorts were all born in Scotland within a year of each other. As a result, you are talking about analysis within a very well-defined group who all share the same cultural norms and birth period. By doing this, you control for the observed increases in mean IQ that go with the years and/or region. In other words, you are controlling for the potential environmental factors and biases that you are talking about."

I simply do not share your confidence in the amount of control of the relevant environment one can obtain by such methodology, absent a clear and complete understanding of which environmental aspects are in fact relevant; especially when used to make cross-cultural comparisons or determinations of "heritability". The underlying assumptions about the relevant environment as well as genetic heritage seem dubious - "Scotland", for example, is not a uniform and accurately characterized environment, and so forth.
chadn737 Posted March 11, 2014

"It's not that simple. As your bucket analogy illustrates, variance needs to be calculated carefully with respect to changing means. In the case of IQ tests, moreover, we are dealing with a manipulated variance - the tests are calibrated to produce not only a given median score, but a standard deviation of fifteen or sixteen points. ... How exactly are you subtracting that out, to calculate how much of the remaining score to use in calculating the variance you then attribute to inheritance?"

Except that the variance calculated in the paper I cited as evidence was not arrived at so simplistically. Given how thoroughly IQ has been criticized and analyzed, I think we should give a little more credit to researchers in the field. The paper I cited in fact references Rodgers (1999), who completely reanalyzed Flynn's original data. He discusses at length the various scenarios and potential changes in variance that could lead to the observed changes in mean. What he showed, however, was that: "These results suggest strongly that the cause of the Flynn Effect is not an overall change in the variance". The data used by Rodgers (1999) was used by Rowe & Rodgers (2002), and again there is little evidence for significant changes in variance that would explain the change in means.

You claim that the "increase appears to have been disproportionately in the lower scores"...do you have actual data or evidence that supports this? From the papers I have just cited, there seems to be little evidence of significant change in the variance. If the change in scores occurred disproportionately in one tail of the distribution, as you have just claimed, then this would significantly affect the variance. So yes, I am asking you to back up this claim with an actual paper or data set.

"That expectation is what I am pointing to, as easily misled by an overlooked environmental factor or two. And that is just the Flynn effect - one of several problems here."

How so? You are making allusions to vague environmental factors and claiming these will have an effect. I explained why this would not be the case. Measures of heritability would differ if the variance differed, but there is no evidence that the variance has indeed changed. So I have supported the claim that the Flynn effect will not change heritability measures. So what are all these environmental factors, and how have they not been accounted for? The studies conducted to measure heritability are not simplistic or ignorant of potential environmental factors. That's why they are carefully designed to account for environmental factors and rule them out. I'd actually like to see a detailed explanation of how they have failed to account for the environment when they so clearly have.

"I simply do not share your confidence in the amount of control of the relevant environment one can obtain by such methodology, absent a clear and complete understanding of which environmental aspects are in fact relevant; especially when used to make cross-cultural comparisons or determinations of 'heritability'. The underlying assumptions about the relevant environment as well as genetic heritage seem dubious - 'Scotland', for example, is not a uniform and accurately characterized environment, and so forth."

I understand your skepticism, but you are wrong. It's not simply that they used a dubious heritage like "Scotland". If you really think that the researchers conducting the study used such simplistic methodologies, then you have clearly not read the paper. In GWAS studies, it is standard practice to account for population stratification, which can generate false associations of genetic variants due to population structure. Robust methods of dealing with this exist and are used in the paper I cited. Since they have GWAS data, they can actually account for heritage, even filtering for degrees of relatedness (also done). As I have pointed out to you in the past, this method is capable of measuring heritability in an assumption-free manner; the mathematical case for this is expanded upon in the second paper regarding this method that I cited. If we look at other methods, such as twin studies, there has been in-depth testing of the assumptions made regarding twins, and also complex modeling to statistically test the alternatives of environmental influence. So I need a lot more detail and evidence to buy your skepticism of these methods.
overtone Posted March 11, 2014

"Given how thoroughly IQ has been criticized and analyzed, I think we should give a little more credit to researchers in the field."

Or rather, given how consistently and repeatedly the research conclusions in the field have been shown to be based on erroneous assumptions and methodological infelicities, we should be very wary of taking the most recent round of results at face value.

"The paper I cited in fact references Rodgers (1999), who completely reanalyzed Flynn's original data. He discusses at length the various scenarios and potential changes in variance that could lead to the observed changes in mean. What he showed, however, was that: 'These results suggest strongly that the cause of the Flynn Effect is not an overall change in the variance'."

And the question would be: So? No one here has suggested otherwise.

"You claim that the 'increase appears to have been disproportionately in the lower scores'...do you have actual data or evidence that supports this?"

Some are cited in the link I posted above. Do you need others? I'm a bit surprised you would accept an unchanging relationship of score variance and heritability in the presence of the Flynn Effect, without very careful argument - surely it seems as unlikely to you as it does to me that the inherited influence on significantly rising IQ scores would maintain a given proportion to whatever environmental influences were boosting the overall score?

"From the papers I have just cited, there seems to be little evidence of significant change in the variance. If the change in scores occurred disproportionately in one tail of the distribution, as you have just claimed, then this would significantly affect the variance."

What you said above was that the researchers concluded that a change in variance did not explain the Flynn Effect. That is not the same as determining that the underlying variance has not changed. As the IQ tests themselves are manipulated to produce a certain standard deviation, little about any underlying "intelligence" can be concluded from a superficially unchanging score variance in itself - one must adjust for the manipulations of the test constructors. That, as far as I can tell, has not been done by the researchers cited.

"The studies conducted to measure heritability are not simplistic or ignorant of potential environmental factors. That's why they are carefully designed to account for environmental factors and rule them out. I'd actually like to see a detailed explanation of how they have failed to account for the environment when they so clearly have."

Unfortunately for the persuasiveness of that, we have a long and disreputable history of IQ research to draw lessons from. One of those lessons is that the illusion of having accounted for the environmental influences is universal, and so far it has meant nothing. If these guys really have accounted for all the significant environmental influences on IQ scores, so that they can make cross-cultural comparisons of heritability, they are the first ever to achieve such a feat - and without, as yet, even being able to describe exactly what it is they are measuring with these IQ tests. They are not, of course, the first to claim the achievement.
chadn737 Posted March 12, 2014

"Some are cited in the link I posted above. Do you need others?"

You mean a link to a Wikipedia article? You need to be a lot more specific than that. Which sources, specifically? Yes, I need more than a vague reference to a Wikipedia article with no specification of which sources you are citing.
hypervalent_iodine Posted March 12, 2014

! Moderator Note

overtone, this habit of ignoring requests for evidence and support for your claims stops now. I refer you to the many modnotes issued to you in the past addressing the same topic. Staff are getting a little tired of constantly having to say this to you, and please be aware that suspension will result if you keep it up.
Arete Posted March 12, 2014

I've done a few studies looking at the comparative roles of plasticity vs adaptation in lizards, and one thing that I've always felt somewhat skeptical about with regards to IQ is just how squishy it is as a phenotypic character.

Generally the phenotypic traits we use on animals are characters with hard measurements - e.g., presence/absence characters (horns/no horns), measurements like geometric morphometrics (measured differences in body shape), and count characters (e.g., scale or pore counts). These types of measurements are generally fixed for an individual's lifetime once they reach adulthood. They are also absolute rather than relative - the snout-vent length of a lizard, for example, doesn't change relative to the study group, and can be used or combined with any other measurements without significant issue.

Whereas IQ is:

a) Normalized. The mean and standard deviation of a given IQ test in a given study group are fixed, regardless of whether the underlying performance of two independent groups differs. This makes IQ inherently relative to the particular test and study group, and therefore results of one test in one group cannot easily be combined with scores from another group.

b) Variable for an individual, i.e., "There's no such thing as 'an' IQ. You have an IQ at a given point in time. That IQ has built-in error. It's not like stepping on a scale to determine how much you weigh." http://www.livescience.com/36143-iq-change-time.html

c) Subject to the question of what exactly it is measuring, and how accurate it is as a metric: http://news.sciencemag.org/2011/04/what-does-iq-really-measure

What this means is that when measuring the heritability of intelligence, you can't treat IQ scores in the same way you do most morphological traits that evolutionary biologists look at when they examine the heritability of phenotypic traits. There are a lot more caveats and a lot more noise than there generally are for many other studies of trait inheritance, due to the inherent complexity of trying to measure an abstract trait like intelligence.
chadn737 Posted March 12, 2014

"And the question would be: So? No one here has suggested otherwise."

Actually, you have: "This increase appears to have been disproportionately in the lower scores..." This is a matter of basic statistics. If you have a distribution of scores around a mean and there is a disproportionate increase in one tail of the distribution, then the variance is going to change. The only way for the variance not to change is to have an equivalent change of scores throughout the distribution, but that is not what you are claiming. And I'm surprised that you are asking the question "So?". As CharonY and I explained at the beginning of the thread, heritability is a measure that explains the variance of a trait. Whether or not the variance changes is of considerable importance to the measure of heritability.

"Some are cited in the link I posted above. Do you need others? I'm a bit surprised you would accept an unchanging relationship of score variance and heritability in the presence of the Flynn Effect, without very careful argument - surely it seems as unlikely to you as it does to me that the inherited influence on significantly rising IQ scores would maintain a given proportion to whatever environmental influences were boosting the overall score?"

I reiterate my request for specific sources. The problem is that because the variance has not really changed, this suggests that IQ scores have increased in a rather proportionate manner. There are multiple possibilities. I don't think it is appropriate to talk of heritability as a set score, but rather as a range, given that it is context dependent. I agree with you that a changing IQ score likely reflects a changing environmental factor; that these scores have been increasing in a matter of only a couple of generations is far too rapid to really be explained by a significant genetic shift.

Earlier you stated that you would expect heritability measurements to have been stronger in the past than today: "If the heritability of IQ is 50% now, after ten decades of presumably non-heritable Flynn increases in the scores, it must have been much higher in the 1940s - is that reasonable?" If, as you claim, there has been a disproportionate increase in scores at the lower end due to environmental changes, this actually suggests a reduction in the environmental sources of variability, so heritability would be higher today than it was in the 1940s, not lower, since we are reducing one source of variance.

But why might heritability not have changed despite an increase in the mean and no change in the variance? One possibility is the introduction of a factor which affects IQ scores in a rather uniform manner, while previous sources of variance remained unchanged. To use my bucket analogy: we have two buckets, one filled 1/4 with oil and 1/4 with water, the other 1/3 with oil and 1/3 with water. Now let's say you add 1/3 the volume of sand to both. The mean volume of both increases uniformly, but the variance remains unchanged, as does the relative contribution of oil to that variance. Another possibility, albeit less likely, is a systematic change in both the genetic and environmental factors. I am not denying that heritability may have changed; I think it entirely likely. However, the change would be in the exact opposite direction from the one you suggest.

I also think that you have not ruled out the possibilities I just suggested, so it does not follow to me that heritability is significantly different today than it was in the 1940s.

"What you said above was that the researchers concluded that a change in variance did not explain the Flynn Effect. That is not the same as determining that the underlying variance has not changed. As the IQ tests themselves are manipulated to produce a certain standard deviation, little about any underlying 'intelligence' can be concluded from a superficially unchanging score variance in itself - one must adjust for the manipulations of the test constructors. That, as far as I can tell, has not been done by the researchers cited."

As Rodgers points out, if there were a change in the variance, then the Flynn Effect could be explained simply by the effects of sampling, rather than by an overall change in IQ scores. The argument that variance does not explain the Flynn Effect supports Flynn's own original model of unchanging variance and a uniform increase in the scores.

As to the argument that IQ tests are manipulated to produce a certain standard deviation: let's examine how this would play out in terms of the distribution of variance. According to standard practice, 1 SD is set at 15 points above or below the median. In this case, we would not expect the variance to remain constant. If there were a disproportionate increase in scores at the lower end, then this would increase the number of individuals within -1 SD of the median and decrease the number within +1 SD. As a result, the distribution would shift and the variance would not remain constant. So your argument that this is all explained by the normalization methods does not hold up, because any real shift in the variance would still be evident in the data itself as a changing distribution.

"Unfortunately for the persuasiveness of that, we have a long and disreputable history of IQ research to draw lessons from. One of those lessons is that the illusion of having accounted for the environmental influences is universal, and so far it has meant nothing. If these guys really have accounted for all the significant environmental influences on IQ scores, so that they can make cross-cultural comparisons of heritability, they are the first ever to achieve such a feat - and without, as yet, even being able to describe exactly what it is they are measuring with these IQ tests. They are not, of course, the first to claim the achievement."

That's a strawman. Nobody claims to have made a "cross-cultural" comparison; in fact I have made the exact opposite argument. Accounting for environmental effects is not an illusion. These are well-established methods that deal with unknown environmental effects very well. You and I have gone back and forth on this quite a bit, and not once have you provided a solid case that environmental effects will masquerade as genetic effects in well-designed studies. Have there been badly designed studies? Yes, that is true in all fields, but the papers I have cited, most notably Davies et al., are not poorly designed studies, and they have gone to great lengths to account for confounding factors. How exactly did they fail to account for environmental influences? Please explain in detail; it would benefit me greatly. This seems nothing more than an argument from incredulity, with vague unsupported claims that environmental influences have not been accounted for.
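A toy contrast of the two scenarios being debated here (all scores invented): a uniform gain leaves the variance unchanged, while a gain concentrated in the lower tail compresses it - a shift that would remain visible in the raw score distribution regardless of norming.

```python
import statistics as stats

# Contrast a uniform gain with a gain concentrated in the lower tail.
# The uniform gain preserves the variance; the lower-tail gain shrinks it.
scores = [70, 85, 100, 115, 130]                         # hypothetical scores
uniform = [s + 10 for s in scores]                       # everyone +10
low_tail = [s + (20 if s < 100 else 0) for s in scores]  # below-median only

for name, xs in [("original", scores), ("uniform gain", uniform),
                 ("lower-tail gain", low_tail)]:
    print(f"{name:15} mean = {stats.mean(xs):6.1f}  "
          f"variance = {stats.variance(xs):7.1f}")
```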
overtone Posted March 12, 2014

"That's a strawman. Nobody claims to have made a 'cross-cultural' comparison."

Any assignment of a "heritability" fraction to IQ scores that one claims to be good for more than one culture involves a cross-cultural comparison.

"Earlier you stated that you would expect heritability measurements to have been stronger in the past than today: 'If the heritability of IQ is 50% now, after ten decades of presumably non-heritable Flynn increases in the scores, it must have been much higher in the 1940s - is that reasonable?' If, as you claim, there has been a disproportionate increase in scores at the lower end due to environmental changes, this actually suggests a reduction in the environmental sources of variability, so heritability would be higher today than it was in the 1940s, not lower, since we are reducing one source of variance."

When the reduction in environmentally caused score variability is a consequence of an increase in the favorable environmental contribution to test performance in a fraction of the test takers, as the Flynn effect seems to indicate, it might easily be (depending on the genetic mediation of the mechanism of environmental effect) a reduction of the fractional influence of inheritance on each test taker's performance. This reduction of the fractional inherited contribution to the scores is then concealed by the recalibration of the test to restore the original standard deviation in the scores - the apparent restoration of the original linkage between variability and varied inheritance becomes at least partly a consequence of manipulating the variance by changing the test. The test will have been made more sensitive to environmental factors and influences, and less sensitive to inherited ones. This seems to have been overlooked by your linked analyses. And that is just one of the factors not dealt with.

There's no problem here if we are not using IQ tests for anything except IQ scores - the recalibration just normalizes the scores. But if we are then taking the score as a proxy measure for some underlying physical reality such as "intelligence", we are getting into trouble.

"You and I have gone back and forth on this quite a bit, and not once have you provided a solid case that environmental effects will masquerade as genetic effects in well-designed studies. Have there been badly designed studies? Yes, that is true in all fields, but the papers I have cited, most notably Davies et al., are not poorly designed studies, and they have gone to great lengths to account for confounding factors."

Well, you are free to cross your fingers and bet that this time, unlike all the other times in all the other hundreds of IQ studies going back decades now, they covered their bases without knowing what the bases were - designing their studies well by, essentially, good luck. You might be right - we'll probably find out some day.
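For readers following the recalibration point: a sketch of how IQ-style norming rescales raw test scores to a fixed mean of 100 and SD of 15 (the raw values here are hypothetical). The score variance is fixed by construction, which is why arguments about the "underlying" variance have to go back to the raw data.

```python
import statistics as stats

# IQ-style norming: raw scores are rescaled so that the norming sample
# has mean 100 and standard deviation 15. Raw values are hypothetical.
raw = [12, 18, 22, 25, 31, 36]
mu, sigma = stats.mean(raw), stats.stdev(raw)
normed = [100 + 15 * (r - mu) / sigma for r in raw]

print([round(score, 1) for score in normed])  # mean 100, SD 15 by construction
```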
CharonY Posted March 13, 2014

The OP was aimed at discussing the fact that, even if we used IQ as a measure of something (whatever it is), the assessment of heritability is not what many people assume it to be (i.e., the result of genetic control). Usually these discussions revolve around the measure of IQ itself, and I wanted to provide a bit more context, rather than only noting that the measure of IQ is not really robust. While discussing the role of IQ as a measure is relevant, it is a bit tangential to the overall point.

In this context, a paper from Turkheimer et al. (Psychol Sci 2003;14:623-628) is quite interesting, in which it was shown that heritability depends on the socioeconomic status of the tested children. In children with a low socioeconomic status, the heritability of IQ is lower (i.e., environmental conditions contribute more to the score), whereas in affluent families the situation is reversed. What the mechanisms for this may be is unclear, and I think everyone in this thread is more or less on the same page with that. What can be said according to this study, however, is that the forces (whatever they may be) are different in poor environments than in rich ones. So even for a squishy score, the utility here is that its distribution will vary quite a bit depending on the environment you look at. Now, this finding may impact studies that do not correct for socioeconomic status, for example (depending on the population under investigation).

The Flynn effect requires a deeper discussion, I think, as there is a host of studies that have tried to shed light on this issue. For now I should mention that one cannot infer from it alone that heritability has actually changed. That being said, it is fully expected (as chadn737 implied) that significant changes in heritability since the 40s would be found, depending on the populations being evaluated. As in the above example, socioeconomic changes would have an influence (it would be hard to identify equivalent groups from the 40s and today's population, for example). Likewise, changes in the schooling system as well as general access to information could influence the assessment of heritability. If we took a random sample of the population, I would expect a lower heritability measure in the 40s, as I would think there was a greater disparity in educational attainment, for example. This again highlights the relevance of thinking in population distributions when one talks about heritability.
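A toy gene-by-environment sketch of the Turkheimer et al. pattern (all parameters invented for illustration): if the genetic signal is dampened and the environmental noise enlarged in the low-SES group, the estimated variance ratio comes out correspondingly lower there.

```python
import random
import statistics as stats

# Toy model: heritability estimated separately in two groups in which the
# genetic signal and environmental noise differ. Parameters are invented.
random.seed(2)

def estimated_h2(gene_scale, env_sd, n=10_000):
    g = [gene_scale * random.gauss(0, 10) for _ in range(n)]  # genetic part
    e = [random.gauss(0, env_sd) for _ in range(n)]           # environment
    trait = [gi + ei for gi, ei in zip(g, e)]
    return stats.variance(g) / stats.variance(trait)

print(f"low-SES group:  h^2 ~ {estimated_h2(gene_scale=0.5, env_sd=15):.2f}")
print(f"high-SES group: h^2 ~ {estimated_h2(gene_scale=1.0, env_sd=5):.2f}")
```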