
Posted (edited)

It is both a cubic and a quartic. Polynomials are best understood as "nesting" - all constants are linear, all linear polynomials are quadratic, all quadratic polynomials are cubic, etc.

No polynomial with nonzero A, correct.

I understand your point about nesting, but I don't think your terminology is standard.

 

The degree of a polynomial is the largest exponent of any term with nonzero coefficient. [math]x^3[/math] is a cubic and not a quartic. Nothing is gained by obfuscating this simple fact.

 

As evidence for my point of view I would simply submit the common usage in algebra of the degree of a polynomial. If a cubic is a quartic is a quintic, then the definition makes no sense. A polynomial of degree 3 is perfectly well defined, and it's called a cubic. There is no ambiguity in this terminology.

Edited by wtf
Posted (edited)

Any proof of what? That there is no "strict" quartic passing through those points?

 

Assume both a cubic and a "strict" quartic pass through the set of 5 points with distinct x-values. Then their difference is a "strict" quartic which has 5 zeroes. Some elementary polynomial algebra demonstrates that this is impossible.
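This argument can be made concrete with divided differences: the top divided difference of the data equals the leading coefficient of the unique least-degree interpolant, so for five points on a cubic the [math]x^4[/math] coefficient comes out exactly zero. A minimal Python sketch (my own, not from the thread's attachments; exact rational arithmetic via `fractions` avoids any floating-point doubt):

```python
from fractions import Fraction

def top_divided_difference(pts):
    """Highest-order divided difference of the data points.

    This equals the leading coefficient of the unique polynomial of
    degree <= n-1 interpolating the n points (distinct x-values required).
    """
    xs = [Fraction(x) for x, _ in pts]
    ys = [Fraction(y) for _, y in pts]
    for level in range(1, len(pts)):
        # standard divided-difference recursion, one level at a time
        ys = [(ys[k + 1] - ys[k]) / (xs[k + level] - xs[k])
              for k in range(len(ys) - 1)]
    return ys[0]

cubic_pts = [(x, x ** 3) for x in range(5)]    # 5 points on a cubic
quartic_pts = [(x, x ** 4) for x in range(5)]  # 5 points on a "strict" quartic
```

For `cubic_pts` the would-be [math]x^4[/math] coefficient is exactly 0, confirming that no "strict" quartic passes through five points lying on a cubic; for `quartic_pts` it is 1.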

But the five points could be passed through by a cubic and a quintic, but not by a quartic. Remember, I am always talking about the polynomial whose highest-degree coefficient is nonzero. Edited by here
Posted

 

I understand your point about nesting, but I don't think your terminology is standard.

 

The degree of a polynomial is the largest exponent of any term with nonzero coefficient. [math]x^3[/math] is a cubic and not a quartic. Nothing is gained by obfuscating this simple fact.

 

As evidence for my point of view I would simply submit the common usage in algebra of the degree of a polynomial. If a cubic is a quartic is a quintic, then the definition makes no sense. A polynomial of degree 3 is perfectly well defined, and it's called a cubic. There is no ambiguity in this terminology.

 

 

Sanity returns to this thread. +1

Posted

I understand your point about nesting, but I don't think your terminology is standard.

 

The degree of a polynomial is the largest exponent of any term with nonzero coefficient. [math]x^3[/math] is a cubic and not a quartic. Nothing is gained by obfuscating this simple fact.

 

As evidence for my point of view I would simply submit the common usage in algebra of the degree of a polynomial. If a cubic is a quartic is a quintic, then the definition makes no sense. A polynomial of degree 3 is perfectly well defined, and it's called a cubic. There is no ambiguity in this terminology.

Eh, I've seen reasons to use both notations - for example: the set of quartics naturally forms a 5-dimensional vector space, if you include "non-strict quartics". On the other hand, the dimension of the quotient of the polynomial algebra is the degree of the polynomial, which supports your definition. As a note, though, it does make sense to talk about the degree of a polynomial - it's the "smallest" thing that a polynomial is. For example, 2x + 9 is linear, quadratic, cubic, quartic, etc., but can't be constant - so it is degree 1.

 

In this case, it seemed more helpful to use the "nesting" notation.

Posted (edited)

This thread brought back some old memories of when I discovered my own interpolation formulas that generate polynomials and exponential equations for sets of points. Since the topic is about finding a polynomial for a given set of points, I'll share with you all the methods that I discovered myself along with the methods that I later learned as a result of researching my own.

Interpolating [math]F(x)[/math] for a Set of Points

First, we need to define the function [math]S(i,j)[/math] that gives us the subscripts for [math]x[/math] and [math]y[/math] that are used by the interpolating formula to derive the polynomial.
You'll notice that this function has two inputs, [math]i[/math] and [math]j[/math], which represent the summation and product indices used by [math]\Sigma[/math] and [math]\Pi[/math].

[math]S(i,j)=j-\frac{1}{2}\left((-1)^{\frac{\left\vert j-i \right\vert+j-i}{2}}+(-1)^{\frac{\left\vert j-i+1 \right\vert+j-i+1}{2}}\right)+1[/math]

When we examine the outputs of [math]S[/math] by holding [math]i[/math] constant and letting [math]j[/math] vary, we see that [math]S[/math] produces a linear sequence that skips [math]i[/math]:

[math]i=1,\ j\Rightarrow \{2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, ...\}[/math]
[math]i=2,\ j\Rightarrow \{1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, ...\}[/math]
[math]i=3,\ j\Rightarrow \{1, 2, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, ...\}[/math]
[math]i=4,\ j\Rightarrow \{1, 2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, ...\}[/math]
[math]i=5,\ j\Rightarrow \{1, 2, 3, 4, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, ...\}[/math]
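For anyone who wants to experiment outside Mathematica, here is a small Python sketch of [math]S(i,j)[/math] (my own transliteration; integer arithmetic keeps the [math](-1)^k[/math] exponents exact):

```python
def S(i, j):
    """Subscript helper from the formula above.

    For fixed i it enumerates 1, 2, 3, ... with the value i skipped:
    returns j when j < i, and j + 1 when j >= i.
    """
    a = abs(j - i) + (j - i)          # 0 for j < i, 2(j - i) otherwise
    b = abs(j - i + 1) + (j - i + 1)  # 0 for j < i, 2(j - i + 1) otherwise
    return j - ((-1) ** (a // 2) + (-1) ** (b // 2)) // 2 + 1
```

Evaluating `[S(3, j) for j in range(1, 6)]` reproduces the start of the [math]i=3[/math] row above.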

I discovered this equation back in 1999 right after I graduated high school and was attending OU. I was interested in interpolation because when I was in high school, I discovered Newton's interpolation formula on my own and extended it to account for increasing summations of f(x). I used the knowledge gained from working that problem to create an exponential interpolation formula that works like the polynomial version except it produces an exponential equation and accounts for increasing products of the interpolated function.

I figured out how to do this when I discovered Newton's interpolation formula in my 11th grade year of high school. I didn't know about interpolation at the time. I was in Trigonometry and had learned that summations of [math]x[/math] had a polynomial that generalized the sum. It took me one year and three months to discover the equation that predicted the summations of [math]x^p[/math]. I did not have a computer, and I did all of the calculations using paper and a TI-88 (I sure do love Mathematica... It has saved a lot of trees):

[math]F(s,\, n,\, p)=\sum_{j_{1}=1}^n \sum_{j_{2}=1}^{j_{1}} ... \sum_{x=1}^{j_{S}} (x^p)=\sum_{j=0}^p \left((-1)^{j+p} \left(\sum_{k=0}^j \frac{k^p \, \left \langle -j \right \rangle_{j-k}}{(j-k)!}\right)\left(\frac{\left \langle n \right \rangle_{j+s}}{(j+s)!} \right)\right)[/math]

Where [math]s[/math] represents the recursion level of the summation (when [math]s=0[/math] we get the original sequence and when [math]s=1[/math] we get the first summation of the sequence and so on), and we sum the function, [math]x^p[/math], from [math]0[/math] to [math]n[/math] where [math]p[/math] is the power.

The process, which involved recursively taking the deltas of the number sequences and applying the rising factorial (the Pochhammer function), allowed me to derive the exponential version by recursively dividing instead of subtracting. This is demonstrated below (note: I replaced the Pochhammer function with a product series operator for those who are not familiar with the rising factorial):

Newton's Interpolation:

Recursively take the deltas of each sequence:

[math] \begin{matrix} & & & y_4 - 3\, y_3 + 3\, y_2 - y_1\\ & & y_3 - 2\, y_2 + y_1 & y_4 - 2\, y_3 + y_2\\ & y_2 - y_1 & y_3 - y_2 & y_4 - y_3 \\ y_1 & y_2 & y_3 & y_4 \end{matrix} [/math]

We are only interested in the results at the top of each column. Notice that each result is an alternating sum of the original sequence whose coefficients are binomial coefficients from Pascal's triangle. Next, we use each result with the rising factorial and standard factorial as follows:

[math]\left(y_1\right)\frac{1}{0!}\ -\ \left(y_2 - y_1\right)\frac{(1-x)}{1!}\ +\ \left(y_3 - 2\, y_2 + y_1\right)\frac{(1-x)(2-x)}{2!}\ -\ \left(y_4 - 3\, y_3 + 3\, y_2 - y_1\right)\frac{(1-x)(2-x)(3-x)}{3!}\ +\ \text{etc...}[/math]

The formula which defines this process is given below (note: I have expanded the formula to include recursively summing the sequence, such that when [math]s=0[/math] we get the original sequence, when [math]s=1[/math] we get the first summation of the sequence, and so on):

[math]F(x, \, s,\, n) = \sum_{i=0}^{n-1}\left( \sum_{j=0}^{i}\left(f(j)\frac{(-1)^{j}\ i!}{j!\ (i-j)!}\right) \frac{(-1)^{s}}{(i+s)!} \prod_{k=1}^{i+s}\left(k-s-x\right)\right)[/math]

Where [math]x[/math] is the variable, [math]s[/math] is the summation as explained above, and we sum from [math]0[/math] to [math]n-1[/math]. We can start from any number by modifying [math]f(j)[/math] to include a starting index, [math]f(j+start)[/math].
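A Python sketch of this formula for experimentation (the name `newton_interp` is mine; the points are taken at [math]x = 1, 2, \dots, n[/math], matching the [math](1-x)(2-x)\cdots[/math] factors in the expansion above):

```python
from math import comb, factorial

def newton_interp(x, ys, s=0):
    """Evaluate F(x, s, n) for the points (1, ys[0]), (2, ys[1]), ...

    s = 0 interpolates the original sequence; s = 1 gives its first
    running summation, and so on.
    """
    n = len(ys)
    total = 0.0
    for i in range(n):
        # inner alternating sum: the i-th difference of the sequence,
        # with binomial coefficients from Pascal's triangle
        delta = sum((-1) ** j * comb(i, j) * ys[j] for j in range(i + 1))
        # product series: prod_{k=1}^{i+s} (k - s - x)
        prod = 1.0
        for k in range(1, i + s + 1):
            prod *= k - s - x
        total += delta * (-1) ** s / factorial(i + s) * prod
    return total
```

With `s=0` and `ys = [1, 8, 27, 64]` (the cubes), evaluating at x = 1..4 returns the cubes back.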

Daedalus' Exponential Interpolation:

We must do the same thing as above except we will divide the numbers of the sequence:

[math] \begin{matrix} & & & y_4^1 \times y_3^{-3} \times y_2^{3} \times y_1^{-1}\\ & & y_3^1 \times y_2^{-2} \times y_1^1 & y_4^1 \times y_3^{-2} \times y_2^1\\ & y_2^1 \times y_1^{-1} & y_3^1 \times y_2^{-1} & y_4^1 \times y_3^{-1} \\ y_1 & y_2 & y_3 & y_4 \end{matrix} [/math]

The next part is also similar, except additions become multiplications, subtractions become divisions, and multiplications become exponents:

[math]\left(y_1\right)^{\frac{1}{0!}}\ \times\ \left(y_2^1 \times y_1^{-1}\right)^{-\frac{(1-x)}{1!}}\ \times \ \left(y_3^1 \times y_2^{-2} \times y_1^1\right)^{\frac{(1-x)(2-x)}{2!}}\ \times \ \left(y_4^1 \times y_3^{-3} \times y_2^{3} \times y_1^{-1}\right)^{-\frac{(1-x)(2-x)(3-x)}{3!}}\ \times \ \text{etc...}[/math]

The following formula defines the above process such that when [math]p=0[/math] we get the original sequence and when [math]p=1[/math] we get the first product of the sequence and so on:

[math]F(x, \, p,\, n) = \prod_{i=0}^{n-1}\left( \prod_{j=0}^{i}\left(f(j)^{\frac{(-1)^{j}\, i!}{j!\, (i-j)!}}\right)^{ \frac{(-1)^{p}}{(i+p)!} \displaystyle \prod_{k=1}^{i+p}\left(k-p-x\right)}\right)[/math]

Where [math]x[/math] is the variable, [math]p[/math] is the recursion level of the product as explained above, and we multiply the outputs from [math]0[/math] to [math]n-1[/math]. We can start from any number by modifying [math]f(j)[/math] to include a starting index, [math]f(j+start)[/math].

This method can be applied to any operator that obeys the associative law (which is why it only works for summations and products).


However, these formulas didn't work for an arbitrary set of points; the points had to be ordered. So I started working the numbers and discovered an equation that interpolates any given set of points.

[math]S(i,j)=j-\frac{1}{2}\left((-1)^{\frac{\left\vert j-i \right\vert+j-i}{2}}+(-1)^{\frac{\left\vert j-i+1 \right\vert+j-i+1}{2}}\right)+1[/math]

[math]\mathcal{D}(x)=\sum_{i=1}^n \left(y_i(x_i-x+1)\prod_{j=1}^{n-1} \left(\frac{x-x_{S(i,j)}}{x_i-x_{S(i,j)}}\right)\right)[/math]

The resulting polynomials always have a degree equal to the number of points in the set, so my equation does not produce the lowest-degree polynomial that interpolates the points. However, a few years later, while taking Numerical Analysis at OU, I learned that my equation was actually a variation of an already-known formula, Lagrange interpolation, which does produce the polynomial of least degree that interpolates the set of points.

Interpolating [math]F(x)[/math] for a Set of Points using Lagrange Interpolation

My equation was so close to Lagrange Interpolation that I only had to remove a single factor, [math](x_i-x+1)[/math], to make my equation equal to his.

[math]\mathcal{L}(x)=\sum_{i=1}^n \left(y_i\prod_{j=1}^{n-1} \left(\frac{x-x_{S(i,j)}}{x_i-x_{S(i,j)}}\right)\right)[/math]
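Here is a Python sketch of the Lagrange form (my own transliteration; looping over all [math]j \neq i[/math] is equivalent to running [math]j[/math] through [math]S(i,j)[/math], since [math]S[/math] simply skips index [math]i[/math]):

```python
def lagrange(x, pts):
    """Evaluate the least-degree polynomial through pts = [(x1, y1), ...] at x.

    Requires distinct x-values.
    """
    total = 0.0
    for i, (xi, yi) in enumerate(pts):
        basis = 1.0
        for j, (xj, _) in enumerate(pts):
            if j != i:  # the same skip that S(i, j) encodes
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total
```

For the cubic sample points used later in the thread, `lagrange(x, [(1, 1), (2, 8), (3, 27)])` agrees with [math]6x^2-11x+6[/math].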

 

You can only imagine my amazement when I saw my equation staring back at me, not only simplified, but also doing something unique: producing the polynomial of least degree that interpolates the set of points. Well, I amazed myself once again when I discovered that the method I used to extend Newton's interpolation formula to exponential equations also works for Lagrange interpolation! I love mathematics!!!

 

Extending Lagrange Interpolation to Exponential Functions for a Set of Points

 

The work I did in high school to extend Newton interpolation to exponential functions also works for Lagrange interpolation. If we think of polynomials as summing the terms [math]F_{\sigma}(x)[/math] and exponentials as multiplying the terms [math]F_{\pi}(x)[/math], then we can easily extend interpolation formulas from polynomials to exponential functions.

 

Polynomial / Summation

 

[math]\mathcal{L}_{\sigma}(x)=\sum_{i=1}^n \left(y_i\prod_{j=1}^{n-1} \left(\frac{x-x_{S(i,j)}}{x_i-x_{S(i,j)}}\right)\right)[/math]

 

Exponential / Multiplication

 

[math]\mathcal{L}_{\pi}(x)=\prod_{i=1}^n \left(y_{i}^{\prod_{j=1}^{n-1} \left(\frac{x-x_{S(i,j)}}{x_i-x_{S(i,j)}}\right)}\right)[/math]

 

As we can see, the sum [math]\Sigma[/math] is replaced with the product [math]\Pi[/math] to multiply the terms. Then, instead of multiplying [math]y_i[/math] by the product series, we raise [math]y_i[/math] to that power.
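A Python sketch of the multiplicative form (again, the naming is mine; it assumes all [math]y_i > 0[/math] so the real powers are well defined):

```python
def lagrange_exp(x, pts):
    """Evaluate the exponential interpolant at x: the product of
    y_i raised to the i-th Lagrange basis polynomial.

    Assumes distinct x-values and y_i > 0.
    """
    result = 1.0
    for i, (xi, yi) in enumerate(pts):
        basis = 1.0
        for j, (xj, _) in enumerate(pts):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        result *= yi ** basis
    return result
```

For points taken from [math]y=2^x[/math], e.g. `[(1, 2), (2, 4), (3, 8)]`, the interpolant recovers [math]2^x[/math] exactly, since the basis polynomials reproduce the linear exponent.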

 

Using Mathematica to Explore my Method and Lagrange's

 

If you made it all the way to the end of this post, I have attached a Mathematica 7 file so you can play around with my method and Lagrange's. Enjoy!!! ^_^

Interpolation.zip

Edited by Daedalus
Posted

Neat... I just realized a really cool relationship between the polynomial and exponential interpolation formulas.

 

Polynomial / Summation

 

[math]\mathcal{L}_{\sigma}(x)=\sum_{i=1}^n \left(y_i\prod_{j=1}^{n-1} \left(\frac{x-x_{S(i,j)}}{x_i-x_{S(i,j)}}\right)\right)[/math]

 

Exponential / Multiplication

 

[math]\mathcal{L}_{\pi}(x)=\prod_{i=1}^n \left(y_{i}^{\prod_{j=1}^{n-1} \left(\frac{x-x_{S(i,j)}}{x_i-x_{S(i,j)}}\right)}\right)[/math]

 

Because the exponential equation is the product of the terms, we can take the natural logarithm of the equation and rewrite it as

 

[math]\mathcal{L'}_{\sigma}(x)=\text{ln}\left(\mathcal{L}_{\pi}(x)\right)=\text{ln}\left(\prod_{i=1}^n \left(y_{i}^{\prod_{j=1}^{n-1} \left(\frac{x-x_{S(i,j)}}{x_i-x_{S(i,j)}}\right)}\right)\right)=\sum_{i=1}^n \left(\text{ln}\left(y_{i}^{\prod_{j=1}^{n-1} \left(\frac{x-x_{S(i,j)}}{x_i-x_{S(i,j)}}\right)}\right)\right)=\sum_{i=1}^n \left(\text{ln}\left(y_i\right)\prod_{j=1}^{n-1} \left(\frac{x-x_{S(i,j)}}{x_i-x_{S(i,j)}}\right)\right)[/math]

 

So, if we modify the polynomial interpolation formula [math]\mathcal{L}_{\sigma}(x)[/math] to take the natural log of the [math]y[/math] coefficient [math]\text{ln}\left(y_i\right)[/math] we can use the equation as the exponent for [math]e[/math] and get back the exponential interpolation formula [math]\mathcal{L}_{\pi}(x)[/math].

 

[math]\mathcal{L}_{\pi}(x) =e^{\mathcal{L'}_{\sigma}(x)}=e^{\sum_{i=1}^n \left(\text{ln}\left(y_i\right)\prod_{j=1}^{n-1} \left(\frac{x-x_{S(i,j)}}{x_i-x_{S(i,j)}}\right)\right)}[/math]
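This identity is easy to verify numerically; a self-contained Python sketch (function names are mine, not from the attached notebook):

```python
from math import exp, log

def lagrange_sum(x, pts, f=lambda y: y):
    """Lagrange form L_sigma, optionally interpolating f(y_i) instead of y_i."""
    total = 0.0
    for i, (xi, yi) in enumerate(pts):
        basis = 1.0
        for j, (xj, _) in enumerate(pts):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += f(yi) * basis
    return total

def lagrange_prod(x, pts):
    """Exponential form L_pi: product of y_i ** basis_i (requires y_i > 0)."""
    result = 1.0
    for i, (xi, yi) in enumerate(pts):
        basis = 1.0
        for j, (xj, _) in enumerate(pts):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        result *= yi ** basis
    return result

pts = [(1, 2), (2, 4), (3, 8)]                # points on y = 2**x
direct = lagrange_prod(2.5, pts)              # L_pi(x)
via_log = exp(lagrange_sum(2.5, pts, f=log))  # e ** L'_sigma(x)
```

The two evaluations agree, confirming [math]\mathcal{L}_{\pi}(x)=e^{\mathcal{L'}_{\sigma}(x)}[/math] at the test point.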

  • 2 weeks later...
Posted (edited)

Any proof of what? That there is no "strict" quartic passing through those points?

 

Assume both a cubic and a "strict" quartic pass through the set of 5 points with distinct x-values. Then their difference is a "strict" quartic which has 5 zeroes. Some elementary polynomial algebra demonstrates that this is impossible.

I have two questions now:

1- If the five points are passed through by a cubic, they cannot also be passed through by a quartic. Remember, I am always talking about the polynomial whose highest-degree coefficient is nonzero. Am I right?

 

2- If I give you a set of points, can you find the minimum-degree polynomial that goes through them?

Edited by here
Posted (edited)

I have two questions now:

 

2- If I give you a set of points, can you find the minimum-degree polynomial that goes through them?

 

I'm not 100% sure about question 1 and, since I'm at work right now, I can't research the question, but I already provided you the answer to question 2:

 

In numerical analysis, Lagrange polynomials are used for polynomial interpolation. For a given set of distinct points [math]x_i[/math] and numbers [math]y_i[/math], the Lagrange polynomial is the polynomial of the least degree that at each point [math]x_i[/math] assumes the corresponding value [math]y_i[/math] (i.e. the functions coincide at each point).

...

The interpolating polynomial of the least degree is unique, however, and it is therefore more appropriate to speak of "the Lagrange form" of that unique polynomial rather than "the Lagrange interpolation polynomial", since the same polynomial can be arrived at through multiple methods.

 

So yes, if you give us a set of points, we can use Lagrange interpolation to find the minimum degree polynomial that goes through them, which is unique.

 

Let me demonstrate Lagrange interpolation:

 

[math]S(i,j)=j-\frac{1}{2}\left((-1)^{\frac{\left\vert j-i \right\vert+j-i}{2}}+(-1)^{\frac{\left\vert j-i+1 \right\vert+j-i+1}{2}}\right)+1[/math]

 

[math]\mathcal{L}(x)=\sum_{i=1}^n \left(y_i\prod_{j=1}^{n-1} \left(\frac{x-x_{S(i,j)}}{x_i-x_{S(i,j)}}\right)\right)[/math]

 

Given a set of points taken from a cubic, [math]\{\left(1,1^3\right),\left(2,2^3\right),\left(3,3^3\right)\}[/math], the polynomial produced by Lagrange interpolation is of degree 2 instead of degree 3 as one might expect:

 

[math]\mathcal{L}(x)=6x^2-11x+6[/math]

 

As we can see, if you plug in the values of [math]x[/math] from the points in our set, we get back the corresponding values of [math]y[/math]:

 

[math]\{\left(1,1\right),\left(2,8\right),\left(3,27\right)\}[/math]
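A quick check of the example in plain Python (nothing here beyond the post's own numbers):

```python
def L(x):
    """The degree-2 interpolant from the post: 6x^2 - 11x + 6."""
    return 6 * x ** 2 - 11 * x + 6

points = {1: 1, 2: 8, 3: 27}  # the cubic sample points (x: y)
```

Evaluating `L` at x = 1, 2, 3 returns 1, 8, 27 as claimed.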

 

I attached a Mathematica 7 file that demonstrates Lagrange interpolation using the above points.

LagrangeInterpolation.zip

Edited by Daedalus
