MWresearch Posted August 27, 2015 (edited) Just what the title says. Say I have some equation like 2^x, and some curve I tested that gives me the points (1,2), (2,4), (3,8), (4,16), (5,32), (6,64), (7,128), but there's no proof it works for (8,256) other than testing x=8 directly. If I assume the curve is continuous, is there any sort of weak proof that says it's the equation 2^x? Every Taylor series should be unique, yet for any equation derived purely from integer points I could multiply in some a*cos(b*x)+c that contributes a factor of 1 at every integer. For instance, y=2^x generates (1,2), (2,4), (3,8), (4,16), (5,32), (6,64), (7,128). However, y=(2^x)*cos(2*pi*x) generates those same points as well, since cos(2*pi*x)=1 at every integer, so how do I deal with this? If I assume there's no trig involved, is there anything else that could perfectly match another continuous function at each regular interval while being completely inaccurate at interpolated x values? Or is there some weak proof that works if I assume no trig functions are involved? Edited August 27, 2015 by MWresearch
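The oscillating counterexample in the question can be checked numerically. A minimal Python sketch (the names f and g are just throwaway labels for illustration):

```python
import math

def f(x):
    return 2 ** x

def g(x):
    # The modulated version: equal to f at every integer, since
    # cos(2*pi*k) == 1 for integer k, but wildly different in between.
    return (2 ** x) * math.cos(2 * math.pi * x)

# At the sampled integers the two functions coincide:
for x in range(1, 8):
    assert abs(f(x) - g(x)) < 1e-9

# Between integers they disagree; at x = 0.5, cos(pi) == -1:
print(f(0.5))   # ≈ 1.414
print(g(0.5))   # ≈ -1.414
```

So a finite list of integer samples cannot, by itself, rule out the modulated curve.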
imatfaal Posted August 27, 2015 I think no. The function f(x) = (-2)^x would not fit your data points at odd x, but it would fit (2,4), (4,16), (6,64), (8,256). So if I gave you those four data points (and as many more even-x points as you wished), there would be, at the very minimum, two functions which match those points.
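imatfaal's example is easy to confirm for integer inputs; a small sketch:

```python
# (-2)**x agrees with 2**x at every even integer but not at odd ones,
# so samples taken only at even x cannot distinguish the two functions.
for x in range(2, 10, 2):
    assert (-2) ** x == 2 ** x      # 4, 16, 64, 256 -- identical

for x in range(1, 9, 2):
    assert (-2) ** x == -(2 ** x)   # -2, -8, -32, -128 -- opposite sign
```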
ajb Posted August 27, 2015 Posted August 27, 2015 Without some theory behind the equations you pick, I think it is pretty hopeless. For any finite number of points on the plane I can always push a polynomial through them.
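ajb's polynomial remark can be made concrete with Lagrange interpolation: a degree-6 polynomial passes exactly through the seven points of the original question, yet it is not 2^x between them. A minimal pure-Python sketch (the helper name lagrange is mine):

```python
def lagrange(points, x):
    # Evaluate the Lagrange interpolating polynomial through `points` at x.
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

pts = [(1, 2), (2, 4), (3, 8), (4, 16), (5, 32), (6, 64), (7, 128)]

# The polynomial reproduces every sample exactly...
for xi, yi in pts:
    assert abs(lagrange(pts, xi) - yi) < 1e-9

# ...yet between the samples it is a different function from 2**x:
print(lagrange(pts, 1.5))   # ≈ 2.8105, whereas 2**1.5 ≈ 2.8284
```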
MWresearch Posted August 27, 2015 Author Posted August 27, 2015 Is there a way to create special parameters? Like if I have the points (1,1) (2,2) (3,3) (4,4) (5,5) (6,6) (7,7)...and there's no indication of a complicated polynomial anywhere, is some complicated polynomial really more likely than just y=x? Is there some way to rule out certain conditions or assume it's not a polynomial and work with that assumption?
ajb Posted August 28, 2015 Posted August 28, 2015 This is all about curve fitting. There are methods of deciding how good various functions fit some data. I am not really up on these methods.
MWresearch Posted August 28, 2015 Author Posted August 28, 2015 This is all about curve fitting. There are methods of deciding how good various functions fit some data. I am not really up on these methods. It's not that simple though because as I pointed out, more than one curve can perfectly fit data points and yield a 100% correlation. There has to be some way to assume certain parameters, such as that it can't be a polynomial or cos(x) when there was no previous sign of those operators.
Bignose Posted August 28, 2015 It's not that simple though because as I pointed out, more than one curve can perfectly fit data points and yield a 100% correlation. There has to be some way to assume certain parameters, such as that it can't be a polynomial or cos(x) when there was no previous sign of those operators. This is why when curve fitting, you'd really like to know what kind of phenomena you are trying to describe. When a function has a certain pathology, hopefully derived from the modeling of the phenomena, you usually get some insight as to what flavors of curve fitting are and are not appropriate. Here is a good example: I once worked somewhere that had a model for the volume of a part in a plastic bag. The model worked well for boxy parts, ones where L1 was close to L2 and close to L3. But if a part was long and thin, L1 >> L2, L3, the calculated volume could actually go negative -- an obvious impossibility. The problem was that the pathology and phenomenology of the function weren't appreciated when the initial curve fitting took place. As you stated, points alone don't mean a lot, because a great many functional forms can pass through those points. Some knowledge of what you are doing and why helps tremendously.
ajb Posted August 29, 2015 Posted August 29, 2015 (edited) There has to be some way to assume certain parameters, such as that it can't be a polynomial or cos(x) when there was no previous sign of those operators. You need some background theory to give some idea of the possible form of the curves you are looking for. Otherwise it is quite hopeless. You can always find a polynomial that will fit, but you need some theory to say 'I can't have a polynomial' or 'the polynomial order must be X' and so on. Given an educated guess on the form you are looking for you can then employ the tools of curve fitting to see what parameters fit and how well. So, you need to think about where these pairs of numbers came from and what sort of relation are you looking for? For example, how do we know there is not rapid oscillation of the curve between any two neighbouring points on your list? Do we know the curve is smooth or even just continuous? This is why when curve fitting, you'd really like to know what kind of phenomena you are trying to describe. When a function has a certain pathology, hopefully derived from the modeling of the phenomena, you usually get some insight as to what flavors of curve fitting are and are not appropriate. I think it is absolutely essential in this context. Edited August 29, 2015 by ajb
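As an illustration of ajb's point: once you commit to a form (here y = a*b^x, which is an assumed model, not something the data proves), the data pins down the parameters. Taking logarithms turns this into a straight-line fit we can do with ordinary least squares; a sketch:

```python
import math

pts = [(1, 2), (2, 4), (3, 8), (4, 16), (5, 32), (6, 64), (7, 128)]
xs = [x for x, _ in pts]
logys = [math.log(y) for _, y in pts]   # ln y = ln a + x * ln b

# Simple least-squares line fit of ln y against x:
n = len(pts)
mx = sum(xs) / n
my = sum(logys) / n
den = sum((x - mx) ** 2 for x in xs)
slope = sum((x - mx) * (ly - my) for x, ly in zip(xs, logys)) / den
intercept = my - slope * mx

a = math.exp(intercept)
b = math.exp(slope)
print(a, b)   # ≈ 1.0 and 2.0: under the assumed form, y = 1 * 2**x
```

The fit only answers "what are the best a and b given this form?" -- choosing the form itself is the background theory ajb is asking for.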
Daedalus Posted September 7, 2015 (edited) Is there a way to create special parameters? Like if I have the points (1,1) (2,2) (3,3) (4,4) (5,5) (6,6) (7,7)...and there's no indication of a complicated polynomial anywhere, is some complicated polynomial really more likely than just y=x? Is there some way to rule out certain conditions or assume it's not a polynomial and work with that assumption? As has been said, you cannot infer a formula or equation for a set of numbers without some theory regarding how those numbers relate to each other. For instance, I have developed an exponential interpolation formula that will produce an exponential equation that can yield the set of numbers you posted. The following is from my Fourth Challenge and is an exponential interpolation for the points, (1,1), (2,2), (3,3), (4,4), and (5,5): [math]2^{-\frac{(x-5)(x-3)(x-1)(3x-8)}{6}} \times 3^{\frac{(x-5)(x-4)(x-2)(x-1)}{4}} \times 5^{\frac{(x-4)(x-3)(x-2)(x-1)}{24}}[/math] The function produces the following graph [graph omitted]. We can see that the function approximates [math]y=x[/math] on the interval, [math]\{x \in \mathbb{R} \, |\, 1 \le x \le 5\}[/math]. The Process: I figured out how to do this when I discovered Newton's interpolation formula in my 11th-grade year of high school. I didn't know about interpolation at the time. I was in Trigonometry and had learned that summations of [math]x[/math] had a polynomial that generalized the sum. It took me one year and three months to discover the equation which predicted the summations of [math]x^p[/math]. I did not have a computer, so I did all of the calculation using paper and a TI-88 (I sure do love Mathematica... It has saved a lot of trees): [math]F(s,\, n,\, p)=\sum_{j_{1}=1}^n \sum_{j_{2}=1}^{j_{1}} ... 
\sum_{x=1}^{j_{S}} (x^p)=\sum_{j=0}^p \left((-1)^{j+p} \left(\sum_{k=0}^j \frac{k^p \, \left \langle -j \right \rangle_{j-k}}{(j-k)!}\right)\left(\frac{\left \langle n \right \rangle_{j+s}}{(j+s)!} \right)\right)[/math] Where [math]s[/math] represents the recursion level of the summation (when [math]s=0[/math] we get the original sequence and when [math]s=1[/math] we get the first summation of the sequence and so on), and we sum the function, [math]x^p[/math], from [math]0[/math] to [math]n[/math] where [math]p[/math] is the power. The process, which involved recursively taking the deltas of the number sequences and applying the rising factorial or Pochhammer function, allowed me to derive the exponential version by recursively dividing instead of doing a subtraction. This is demonstrated below (note: I replaced the Pochhammer function with a product series operator for those that are not familiar with the rising factorial): Newton's Interpolation: Recursively take the deltas of each sequence: [math] \begin{matrix} & & & y_4 - 3\, y_3 + 3\, y_2 - y_1\\ & & y_3 - 2\, y_2 + y_1 & y_4 - 2\, y_3 + y_2\\ & y_2 - y_1 & y_3 - y_2 & y_4 - y_3 \\ y_1 & y_2 & y_3 & y_4 \end{matrix} [/math] We are only interested in the results at the top of each column. Also, we can see that each result is an alternating sum of the original sequence with coefficients that are pascal numbers. 
Next, we use each result with the rising factorial and standard factorial as follows: [math]\left(y_1\right)\frac{1}{0!}\ -\ \left(y_2 - y_1\right)\frac{(1-x)}{1!}\ +\ \left(y_3 - 2\, y_2 + y_1\right)\frac{(1-x)(2-x)}{2!}\ -\ \left(y_4 - 3\, y_3 + 3\, y_2 - y_1\right)\frac{(1-x)(2-x)(3-x)}{3!}\ +\ \text{etc...}[/math] The formula which defines this process is as defined below (note: I have expanded the formula to include recursively summing the sequence such that when [math]s=0[/math] we get the original sequence and when [math]s=1[/math] we get the first summation of the sequence and so on): [math]F(x, \, s,\, n) = \sum_{i=0}^{n-1}\left( \sum_{j=0}^{i}\left(f(j)\frac{(-1)^{j}\ i!}{j!\ (i-j)!}\right) \frac{(-1)^{s}}{(i+s)!} \prod_{k=1}^{i+s}\left(k-s-x\right)\right)[/math] Where [math]x[/math] is the variable, [math]s[/math] is the summation as explained above, and we sum from [math]0[/math] to [math]n-1[/math]. We can start from any number by modifying [math]f(j)[/math] to include a starting index, [math]f(j+start)[/math]. 
Daedalus' Exponential Interpolation: We must do the same thing as above except we will divide the numbers of the sequence: [math] \begin{matrix} & & & y_4^1 \times y_3^{-3} \times y_2^{3} \times y_1^{-1}\\ & & y_3^1 \times y_2^{-2} \times y_1^1 & y_4^1 \times y_3^{-2} \times y_2^1\\ & y_2^1 \times y_1^{-1} & y_3^1 \times y_2^{-1} & y_4^1 \times y_3^{-1} \\ y_1 & y_2 & y_3 & y_4 \end{matrix} [/math] The next part is also similar except additions become multiplications, subtractions become division, and multiplication becomes exponents: [math]\left(y_1\right)^{\frac{1}{0!}}\ \times\ \left(y_2^1 \times y_1^{-1}\right)^{-\frac{(1-x)}{1!}}\ \times \ \left(y_3^1 \times y_2^{-2} \times y_1^1\right)^{\frac{(1-x)(2-x)}{2!}}\ \times \ \left(y_4^1 \times y_3^{-3} \times y_2^{3} \times y_1^{-1}\right)^{-\frac{(1-x)(2-x)(3-x)}{3!}}\ \times \ \text{etc...}[/math] The following formula defines the above process such that when [math]p=0[/math] we get the original sequence and when [math]p=1[/math] we get the first product of the sequence and so on: [math]F(x, \, p,\, n) = \prod_{i=0}^{n-1}\left( \prod_{j=0}^{i}\left(f(j)^{\frac{(-1)^{j}\, i!}{j!\, (i-j)!}}\right)^{ \frac{(-1)^{p}}{(i+p)!} \displaystyle \prod_{k=1}^{i+p}\left(k-p-x\right)}\right)[/math] Where [math]x[/math] is the variable, [math]p[/math] is the recursion level of the product as explained above, and we multiply the outputs from [math]0[/math] to [math]n-1[/math]. We can start from any number by modifying [math]f(j)[/math] to include a starting index, [math]f(j+start)[/math]. This method can be applied to any operator that obeys the associative law (which is why it only works for summations and products). Please forgive any grammar errors in this post. I broke a tooth over thanksgiving and it is causing me a tremendous amount of pain. It's 3:00 am and I have a dentist appointment today. So... I kinda rushed this post. I will correct any issues tomorrow as long as the edit timer has not expired. 
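Both of Daedalus' schemes are straightforward to code. Here is a minimal Python sketch (the function names are mine, not from the post, and the recursive summation/product parameter is omitted) of the additive Newton version and the multiplicative analogue:

```python
def newton_interp(ys, x):
    # Additive scheme: recursively take deltas, keep the top of each
    # column, then weight by the Newton basis terms with nodes 1..n.
    diffs = list(ys)
    coeffs = []
    while diffs:
        coeffs.append(diffs[0])
        diffs = [b - a for a, b in zip(diffs, diffs[1:])]
    total, basis = 0.0, 1.0
    for i, c in enumerate(coeffs):
        total += c * basis
        basis *= (x - (i + 1)) / (i + 1)
    return total

def exp_interp(ys, x):
    # Multiplicative analogue: ratios replace deltas, and the same
    # basis polynomials become exponents instead of factors.
    ratios = list(ys)
    pivots = []
    while ratios:
        pivots.append(ratios[0])
        ratios = [b / a for a, b in zip(ratios, ratios[1:])]
    total, basis = 1.0, 1.0
    for i, p in enumerate(pivots):
        total *= p ** basis
        basis *= (x - (i + 1)) / (i + 1)
    return total

# Both reproduce their sample points exactly (up to float error):
assert abs(newton_interp([2, 4, 8, 16, 32, 64, 128], 6) - 64) < 1e-9
for k in range(1, 6):
    assert abs(exp_interp([1, 2, 3, 4, 5], k) - k) < 1e-9
```

Note that exp_interp is exactly Newton interpolation carried out on the logarithms of the data, which is why it needs positive y-values and why the same difference-table structure reappears with division in place of subtraction.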
Edited September 7, 2015 by Daedalus