a_egan9 Posted September 23, 2005 In a multiple regression with statistically significant predictors, will the part correlation for any predictor always be greater or smaller in value than the partial correlation? Or is it only higher/lower than the partial correlation when the predictors do not share variance?
Callipygous Posted September 23, 2005 No idea what any of that means, but... 1. double posts are frowned upon, and 2. you seem to have missed the forum that would probably be most helpful: Applied Mathematics. The first thing listed under it is statistics. I think there are people here who can help you with pretty much anything; you might have to wait for them to come online, though. :P Good luck.
a_egan9 (Author) Posted September 23, 2005 Sorry, I didn't know! Thanks for the advice.
Glider Posted September 24, 2005 I'd need to know what you mean by 'part correlation' and 'partial correlation'. Are you using SPSS? If so, the output will be in two sections.

There will be an ANOVA, which shows whether the overall model is significant. This also presents the adjusted R squared, which shows the overall variance accounted for by the model (i.e. the strength of the model). The second section will be the regression coefficients, which lists your predictors and shows the strength of each by presenting the Beta and t values and their level of significance. The higher the value of Beta (or t), irrespective of sign (+ or -), the greater the predictive strength of that variable.

Multiple regression is a very robust test, and one of the main ways to screw it up is if the predictor variables are not independent of each other (i.e. if they correlate with or are predictive of each other). In that case the adjusted R squared might be large, showing that the model accounts for a large proportion of the overall variance, but much of that variance is shared between the predictors rather than lying between any single predictor and the criterion variable. So you get the situation where the ANOVA indicates a significant model and the adjusted R squared shows the model is strong (accounts for a lot of the variance), but none of the predictors individually predicts the criterion variable.

I'm not sure whether this helps you; I'd need to know the values you are looking at when you talk about 'part' and 'partial' correlations. If you have run separate correlations on the predictors and the criterion variable (e.g. Pearson's or Spearman's), then the values will be different, because running single bivariate correlations will not account for the overall variance in the model.
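As a rough illustration of the point above about correlated predictors, here is a minimal Python sketch. The thread itself only mentions SPSS; numpy, statsmodels, and the simulated data are assumptions made purely for illustration. Two predictors that are nearly copies of each other give a significant overall model with a large adjusted R squared, while neither individual coefficient tends to reach significance.

```python
# Minimal sketch (assumed setup, not from the thread): two nearly identical
# predictors give a "strong" overall model whose individual coefficients
# are often not significant, because the predictors share most of their variance.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)   # x2 is almost a copy of x1
y = 2.0 * x1 + rng.normal(size=n)          # y really depends only on x1

X = sm.add_constant(np.column_stack([x1, x2]))
model = sm.OLS(y, X).fit()

print("Adjusted R squared:", round(model.rsquared_adj, 3))   # large
print("Overall F-test p-value:", model.f_pvalue)             # significant
print("Individual p-values:", model.pvalues[1:])             # often both > .05
```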
a_egan9 (Author) Posted September 24, 2005 I do use SPSS; part correlation is the same as semi-partial correlation. The question is just about the theoretical concept, not an actual calculation in SPSS. The question asks which of the following are true:

1) In a multiple regression with statistically significant predictors, the part correlation for any predictor will always be greater in value than the partial correlation.
2) In a multiple regression with statistically significant predictors, the part correlation for any predictor will always be lesser in value than the partial correlation.
3) In a multiple regression with statistically significant predictors, the part correlation for any predictor will be greater in value than the partial correlation if the predictors do not share variance.
4) In a multiple regression with statistically significant predictors, the part correlation for any predictor will always be greater in value than the zero-order correlation.
5) None of the above.
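As a reference point for thinking about these options, the standard textbook formulas for the two-predictor case (notation assumed here, not taken from the thread) give the part (semi-partial) and partial correlation of predictor X1 with criterion Y, controlling for X2, as:

```latex
% Part (semi-partial) and partial correlation of X1 with Y, controlling for X2
sr_1 = \frac{r_{Y1} - r_{Y2}\, r_{12}}{\sqrt{1 - r_{12}^{2}}},
\qquad
pr_1 = \frac{r_{Y1} - r_{Y2}\, r_{12}}{\sqrt{\left(1 - r_{Y2}^{2}\right)\left(1 - r_{12}^{2}\right)}}
```

Both quantities share the same numerator and differ only in the denominator, so comparing the two denominators is the key to deciding between the options.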
a_egan9 (Author) Posted September 26, 2005 Please help me, I don't understand!!!
Glider Posted September 26, 2005 I have to admit, neither do I. We don't really use partial correlation here, because it's largely pointless. Partial correlation, as far as I recall, is an attempt to use bivariate correlation to account for error in a model containing more than two variables. As you can only have two variables in a correlation, partial correlation is based on a huge assumption. I wish I could be of more help, but honestly, I don't even understand the relevance of the question.
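For what it is worth, what a partial correlation actually computes can be seen by residualising. The sketch below is a minimal numpy illustration with made-up variable names and simulated data (all assumptions, not from the thread): the partial correlation of x1 with y controlling for x2 correlates the residuals of both y and x1 after regressing each on x2, while the part (semi-partial) correlation removes x2 from x1 only.

```python
# Minimal sketch (variable names and data assumed): partial vs. part
# (semi-partial) correlation of x1 with y, controlling for x2.
import numpy as np

def _residuals(a, b):
    """Residuals of a after regressing it on b (with an intercept)."""
    B = np.column_stack([np.ones_like(b), b])
    coef, *_ = np.linalg.lstsq(B, a, rcond=None)
    return a - B @ coef

def partial_and_part(y, x1, x2):
    x1_res = _residuals(x1, x2)                 # x2 removed from x1
    y_res = _residuals(y, x2)                   # x2 removed from y
    partial = np.corrcoef(y_res, x1_res)[0, 1]  # x2 removed from both sides
    part = np.corrcoef(y, x1_res)[0, 1]         # x2 removed from x1 only
    return partial, part

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = 0.5 * x1 + rng.normal(size=200)
y = x1 + x2 + rng.normal(size=200)
print(partial_and_part(y, x1, x2))
```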