nonstoptaxi Posted June 25, 2004 Hi everyone, need advice on this. I realise that it is now considered best practice to report effect sizes when reporting statistical results. I'm fine with this for ANOVAs, t-tests, etc. However, I'm having trouble knowing whether I should be reporting (or even whether it is possible/appropriate to report) an effect size for a correlation (specifically a Pearson's). Can someone give guidance on this? If I do have to do this, can you please give me the formula to calculate it? Many thanks; awaiting some help impatiently. Cheers. J
aommaster Posted June 25, 2004 It's a formula called the product-moment correlation coefficient, or PMCC. If the formula is applied correctly, you get a value from -1 to +1. -1 means a PERFECT negative correlation and +1 means a PERFECT positive correlation. By perfect, I mean the points fall on a straight line! This is the link to it: http://www.mnstate.edu/wasson/ed602pearsoncorr.htm
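For anyone who wants to see the formula in action rather than just on the linked page, here is a minimal sketch of the product-moment calculation in Python (the function name `pearson_r` and the sample data are just for illustration): r is the covariance of the two variables divided by the product of their standard deviations.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two equal-length sequences."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Sum of cross-products of deviations from the means (numerator)
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    # Square roots of the sums of squared deviations (denominator)
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# A perfectly linear increasing relationship gives r = +1
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
```

Note that r = +1 or -1 only when every point lies exactly on a straight line, which is what "perfect" means above.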
Glider Posted June 26, 2004 It's hard to report an effect size for a correlation, as there is no effect as such. Correlation simply tests the degree to which two related variables vary together. The statistic showing the strength of the relationship is the correlation coefficient (r), and that is reported in the results anyway (along with n and p). However, it is conventional to use R squared (sometimes called the coefficient of determination) as an indication of explanatory power. For example, if you have a correlation of r = 0.9 between two variables, then 81% of the variation in one variable is explained by variation in the other: R squared = 0.9 x 0.9 = 0.81 (x 100 to make it a percentage = 81%). However, if you have a correlation of r = 0.5, then only 25% of the variation in one variable is explained by variation in the other: R squared = 0.5 x 0.5 = 0.25 (x 100 to make it a percentage = 25%). Therefore R squared shows you how much of the variation in one variable is explained by variation in the other, and so is a better indicator of the strength of the relationship than the correlation coefficient (r).
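The arithmetic above is trivial but easy to check in code. A quick sketch using the two hypothetical r values from the post:

```python
# Variance explained (R squared) for the two example correlations above
for r in (0.9, 0.5):
    r_squared = r ** 2
    percent = r_squared * 100
    print(f"r = {r}: R squared = {r_squared:.2f}, i.e. {percent:.0f}% of variance explained")
```

This also makes the non-linearity of the relationship obvious: halving r from 0.9 to 0.5 cuts the explained variance from 81% to 25%, not from 81% to about 40%.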
Skye Posted June 26, 2004 Posted June 26, 2004 Glider, I think you mean 0.9 in the first paragraph, where it says 0.09?
Glider Posted June 26, 2004 Posted June 26, 2004 Ooopsie...yep, typo. Thanks for the heads up (I fixed it).
nonstoptaxi Posted June 26, 2004 Author Posted June 26, 2004 Thanks Glider/aommaster, that's most helpful. I've come across r squared before to show the variation accounted for, so I'll use that in my reporting. Cheers! J