bsanders149 Posted March 28, 2008 Okay, so, for this project I'm doing a regression between the data I measured and the data that was generated (an estimate made by a computer). I'm also trying to find the error between the two sets of values. So, when I find the error, do I measure how far the generated value is from the measured value, or how far the ordered pair they make up is from the closest point on a 1:1 line? Or am I going about this completely wrong? x3 Thanks for any help you can give me!
Klaynos Posted March 28, 2008 Your regression program should be able to give you an R^2 value, which reflects how well the two data sets agree overall.
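For what it's worth, here is a minimal sketch of how that R^2 value can be computed directly from the two sets of values, assuming Python with NumPy; the measured and estimated arrays below are made-up numbers purely for illustration:

    import numpy as np

    measured = np.array([2.1, 3.9, 6.2, 7.8, 10.1])   # hypothetical measured values
    estimated = np.array([2.0, 4.0, 6.0, 8.0, 10.0])  # hypothetical computer-generated estimates

    # R^2 = 1 - (residual sum of squares) / (total sum of squares)
    ss_res = np.sum((measured - estimated) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    r_squared = 1 - ss_res / ss_tot
    print(r_squared)

A value near 1 means the estimates track the measurements closely; note that it summarizes the whole data set rather than any single point.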
bsanders149 (Author) Posted March 28, 2008 Yes, I know about the R^2 value. It describes how well the data set as a whole fits the line of best fit. What I'm looking for is how well each individual data point fits, but I don't know whether I should be comparing the x value to the y value (assuming the x value is completely without error), or the (x, y) pair to the 1:1 line (i.e., (1,1), (2,2), (3,3), etc.).
Klaynos Posted March 28, 2008 Well, you should compare values at the same x, since both are some function of x: f(x) is your experimental data and g(x) is your fit. The difference between f(x) and g(x) is the error of the fit at that point.
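A minimal sketch of that per-point comparison, again assuming Python with NumPy and made-up example data, where f(x) is the measured value at each x and g(x) is the fitted or estimated value at the same x:

    import numpy as np

    measured = np.array([2.1, 3.9, 6.2, 7.8, 10.1])   # f(x): experimental data at each x
    estimated = np.array([2.0, 4.0, 6.0, 8.0, 10.0])  # g(x): fit/estimate at the same x

    # Error of the fit at each point, plus the same error relative to the measurement
    errors = measured - estimated
    percent_errors = 100 * errors / measured
    print(errors)
    print(percent_errors)

Each entry of errors is the per-point error described above; the percent form simply expresses it relative to the measured value.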
bsanders149 (Author) Posted March 28, 2008 Thank you for the response. I decided to go with just subtracting the estimated values from the measured values (which I'm treating as accurate, even though there are probably some errors there). I think that's what you all were trying to explain, anyway. x3 Thanks.