Unity+ Posted March 20, 2014 (edited) So, I found something interesting that could be related to the Collatz conjecture. Here is what I found. Take any regular function f(x) and multiply it by its inverse, then take the derivative of that product. Repeat this step until you reach a repeating pattern of 2x. Here is an example: [math]f(x) = x+1[/math] [math]\frac{\mathrm{d} }{\mathrm{d} x}\left ( (x+1)(x-1) \right )= 2x[/math] [math]\frac{\mathrm{d} }{\mathrm{d} x}\left ( \left (2x \right ) \left ( \frac{x}{2} \right )\right )=2x[/math] (This is just a simple example.) I have tested this with other functions and it seems to check out. I haven't seen this before, so if someone knows whether this has been found before, I would like a link to the website or topic that talks about it. Edited March 20, 2014 by Unity+
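For anyone who wants to experiment with the process, here is a minimal sketch in Python using sympy (my own illustration, not part of the original post; the function name step and the choice of starting function are just placeholders):

```python
# One iteration of the process described above: multiply f by its inverse
# function, then differentiate the product. Assumes the inverse exists and
# that sympy's solve() returns the branch we want first.
import sympy as sp

x, y = sp.symbols('x y')

def step(f):
    # Solve x = f(y) for y to get the inverse function of f.
    inv = sp.solve(sp.Eq(x, f.subs(x, y)), y)[0]
    return sp.expand(sp.diff(f * inv, x))

f = x + 1
for i in range(3):
    f = step(f)
    print(i + 1, f)
# 1 2*x
# 2 2*x
# 3 2*x
```

Starting from x + 1, the output is 2x after a single step and stays there, matching the example in the post.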
John Posted March 20, 2014 (edited) Consider f(x) = x^2, so f^{-1}(x) = x^{1/2}. Then f(x)f^{-1}(x) = x^{5/2}. Though the results of applying this sequence of steps quickly become annoyingly tedious to type completely, the general form of the derivative seems to go as follows (letting r_i denote some real number):
1: r_1 x^{3/2}
2: r_2 x^{7/6}
3: r_3 x^{43/42}
4: r_4 x^{1807/1806}
and so on. In general, starting with our original function and continuing with each derivative thereafter, we have a term x^q where q is some rational number. The inverse involves a term x^{1/q}, thus the product of the derivative and its inverse has a term x^{(q^2 + 1)/q}. Thus the new derivative will have a term x^{(q^2 + 1)/q - q/q} = x^{(q^2 - q + 1)/q}. You may also notice a pattern here, namely that letting q_n = a_n/b_n be the exponent of x in the nth derivative, q_{n+1} = (a_n b_n + 1)/(a_n b_n). Unless I've made some mistake here, then, no matter how many times we repeat the process, starting with x^2, we will never end up with 2x. Edited March 20, 2014 by John
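A quick numerical check of the exponent recurrence derived above (my own sketch, not from the post), iterating q -> (q^2 - q + 1)/q with exact fractions:

```python
# Iterate the exponent recurrence q -> (q^2 - q + 1)/q, starting from q = 2
# (the exponent of the original function x^2), using exact rational arithmetic.
from fractions import Fraction

q = Fraction(2)
for n in range(1, 5):
    q = (q * q - q + 1) / q
    print(n, q)
# 1 3/2
# 2 7/6
# 3 43/42
# 4 1807/1806
```

Each new denominator is the product of the previous numerator and denominator, and the exponent creeps toward 1 without ever reaching it.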
Unity+ (Author) Posted March 20, 2014 (edited) Consider f(x) = x^2, so f^{-1}(x) = x^{1/2}. Then f(x)f^{-1}(x) = x^{5/2}. Though the results of applying this sequence of steps quickly become annoyingly tedious to type completely, the general form of the derivative seems to go as follows (letting r_i denote some real number):
1: r_1 x^{3/2}
2: r_2 x^{7/6}
3: r_3 x^{43/42}
4: r_4 x^{1807/1806}
and so on. In general, starting with our original function and continuing with each derivative thereafter, we have a term x^q where q is some rational number. The inverse involves a term x^{1/q}, thus the product of the derivative and its inverse has a term x^{(q^2 + 1)/q}. Thus the new derivative will have a term x^{(q^2 + 1)/q - q/q} = x^{(q^2 - q + 1)/q}. You may also notice a pattern here, namely that letting q_n = a_n/b_n be the exponent of x in the nth derivative, q_{n+1} = (a_n b_n + 1)/(a_n b_n). Unless I've made some mistake here, then, no matter how many times we repeat the process, starting with x^2, we will never end up with 2x. Interesting pattern you found. This actually makes me more curious. Yes, this is a mistake on my part. I would have to add the constraints that the exponent n of x^n must be equal to 1 and that r must be greater than 1. EDIT: The last constraint isn't required. EDIT2: The rule could also be changed: if the exponent n of any x^n within the equation is larger than 1, take the derivative of that function first. [math]F(x)=\begin{cases} \frac{\mathrm{d} }{\mathrm{d} x} f(x)& \text{ if } n>1 \\ \frac{\mathrm{d} }{\mathrm{d} x}\left ( f(x)f^{-1}(x) \right )& \text{ if } n=1 \end{cases}[/math] EDIT3: What I noticed with the pattern you showed is that if you take the limit as the number of steps goes to infinity, the term eventually reaches x, which would then be multiplied by its inverse (also x), and taking the derivative of that product gives 2x. Correct me if I misinterpret what you stated. [math]\lim_{(a_{n},b_{n})\rightarrow \infty }x^{\frac{a_{n}b_{n}+1}{a_{n}b_{n}}}=x[/math] Edited March 21, 2014 by Unity+
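A minimal sketch of the modified rule in EDIT2, assuming it means "differentiate on its own while the degree exceeds 1, otherwise apply the inverse-product-derivative step" (my own reading; sympy and the name F are not from the post):

```python
# Piecewise rule from EDIT2, under the stated assumption: plain differentiation
# while the polynomial degree is greater than 1, the inverse-product-derivative
# step once the degree is 1.
import sympy as sp

x, y = sp.symbols('x y')

def F(f):
    if sp.degree(f, x) > 1:
        return sp.diff(f, x)
    inv = sp.solve(sp.Eq(x, f.subs(x, y)), y)[0]   # inverse function of f
    return sp.expand(sp.diff(f * inv, x))

f = x**3 + x
for _ in range(4):
    f = F(f)
    print(f)
# 3*x**2 + 1
# 6*x
# 2*x
# 2*x
```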
John Posted March 21, 2014 (edited) I took "regular functions" to include things like exponentials, trigonometric functions, etc. (there's a more technical definition of the term in algebraic geometry, but I don't know much about that), but now I'm assuming we're restricting ourselves to polynomials here. Thus we have the built-in rules that the powers of x must be non-negative integers, the leading coefficient must be non-zero, etc. Of course, for a polynomial of degree n, the (n - 1)th derivative will be of the form f(x) = rx + s for some real numbers r ≠ 0 and s. The inverse of this will be f^{-1}(x) = (1/r)x - s/r, giving us a product of x^2 - sx + (s/r)x - s^2/r = x^2 + [(s - sr)/r]x - s^2/r. The derivative of this product, then, is 2x + (s - sr)/r. Now, if s = sr, i.e. s = 0 or r = 1, then we're done. Otherwise, repeating the process, the inverse of this is (1/2)x - (s - sr)/(2r). Multiplying, we have x^2 + [s/2 - s/(2r)]x - s^2/2 - s^2/(2r^2) + s^2/r. The derivative of this is 2x + [s/2 - s/(2r)]. Now, if s/2 = s/(2r), i.e. s = 0 or r = 1 again, then we're done. But if s = 0 or r = 1, then we'd have been done earlier. So I'm wondering if this process only works in the case that s = 0 or r = 1. It seems like, given the case that s ≠ 0 and r ≠ 1, we end up with 2x + m for some real number m ≠ 0, but never get to just 2x. Of course, completing a few more iterations might prove me wrong, but it's getting tedious keeping track of everything, heh. Unless you already have a counterexample to my conjecture here. If I'm correct, then the process works for a polynomial of degree n only if the coefficient of x^{n-1} is 0 or the coefficient a of x^n is such that a·n(n - 1)(n - 2)...(n - (n - 2)) = 1, i.e. a = 1/(n!). But there's enough arithmetic here (read: opportunities for arithmetic errors) that I'm not entirely confident in my conclusions. EDIT3: What I noticed with the pattern you showed is that if you take the limit as the number of steps goes to infinity, the term eventually reaches x, which would then be multiplied by its inverse (also x), and taking the derivative of that product gives 2x. Correct me if I misinterpret what you stated. [math]\lim_{(a_{n},b_{n})\rightarrow \infty }x^{\frac{a_{n}b_{n}+1}{a_{n}b_{n}}}=x[/math] This is certainly true. Naïvely, we can just think about the fact that as a_n and b_n approach infinity, the added 1 in the numerator makes a smaller and smaller contribution. Edited March 21, 2014 by John
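The rx + s computation above is easy to double-check symbolically; here is a short sympy sketch (my own check, not part of the post):

```python
# Verify: for f(x) = r*x + s, the derivative of f times its inverse function
# is 2x + (s - s*r)/r, which is exactly 2x only when s = 0 or r = 1.
import sympy as sp

x, y, r, s = sp.symbols('x y r s', real=True)

f = r * x + s
inv = sp.solve(sp.Eq(x, f.subs(x, y)), y)[0]     # (x - s)/r
print(sp.expand(sp.diff(f * inv, x)))            # 2*x - s + s/r, i.e. 2x + (s - s*r)/r
```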
Unity+ (Author) Posted March 21, 2014 (edited) This is certainly true. Naïvely, we can just think about the fact that as a_n and b_n approach infinity, the added 1 in the numerator makes a smaller and smaller contribution. What do you mean by this? By naively, do you mean it would not be proper to think of it in the light of limits? EDIT: I think it is actually a sound conjecture. I have tested it with many functions and still get the expected results. I would have to keep testing in order to prove the conjecture true or false, or else find a proof either way. With your example, however, I am iterating and still haven't found an end to it. I will keep trying, though. I took "regular functions" to include things like exponentials, trigonometric functions, etc. (there's a more technical definition of the term in algebraic geometry, but I don't know much about that), but now I'm assuming we're restricting ourselves to polynomials here. Thus we have the built-in rules that the powers of x must be non-negative integers, the leading coefficient must be non-zero, etc. Trigonometric functions and others could be included in some form or fashion. The process might just have to be modified in order to include them. I would have to work these out to find out. EDIT2: It would be interesting if it could be predicted what m would equal in this case. It would provide a way to determine whether your conjecture works for all numbers (this smells of Fermat). Edited March 21, 2014 by Unity+
John Posted March 21, 2014 What do you mean by this? By naively, do you mean it would not be proper to think of it in the light of limits? Nah, I just meant that while the reasoning is intuitively correct, it's not a rigorous proof. Don't read too much into it. It's a minor point, and I don't feel like tossing epsilons all around. EDIT: I think it is actually a sound conjecture. I have tested it with many functions and still get the expected results. I would have to keep testing in order to prove the conjecture true or false, or else find a proof either way. With your example, however, I am iterating and still haven't found an end to it. I will keep trying, though. Trigonometric functions and others could be included in some form or fashion. The process might just have to be modified in order to include them. I would have to work these out to find out. EDIT2: It would be interesting if it could be predicted what m would equal in this case. It would provide a way to determine whether your conjecture works for all numbers (this smells of Fermat). I'll leave that to you for now, heh. Between an abstract algebra midterm earlier and thinking about this, I'm somewhat mathed out for the night. But we'll see if that changes later.
Unity+ (Author) Posted March 21, 2014 (edited) Nah, I just meant that while the reasoning is intuitively correct, it's not a rigorous proof. Don't read too much into it. It's a minor point, and I don't feel like tossing epsilons all around. That is true. I tend to do that sometimes, without looking into the beauty that could come out of not plugging in limits. I'll leave that to you for now, heh. Between an abstract algebra midterm earlier and thinking about this, I'm somewhat mathed out for the night. But we'll see if that changes later. Alright. I will continue working on this, for it is a mathematical curiosity to me. I wonder what will happen to m in 2x + m as the process continues on to infinity. Will it provide some insight into this problem? Edited March 21, 2014 by Unity+
John Posted March 21, 2014 (edited) Oh, one thing I was thinking about earlier but forgot to mention: When we get to the point that we have y = rx + s, what we have is a linear equation, and its inverse is also a linear equation. Geometrically, we have a line, and to get the inverse, we're reflecting the line about y = x (example here). To arrive at 2x, we first need to arrive at x^2 + k for some real number k, i.e. given our two lines, we somehow want the products of their points over the entire range of x-values to generate an upward vertical parabola with its vertex on the y-axis. While this still isn't a proof, it does seem that such a special result would require special starting conditions. Just something to keep in mind, if you're not thinking along those lines already. Edited March 21, 2014 by John
Unity+ (Author) Posted March 21, 2014 (edited) Oh, one thing I was thinking about earlier but forgot to mention: When we get to the point that we have y = rx + s, what we have is a linear equation, and its inverse is also a linear equation. Geometrically, we have a line, and to get the inverse, we're reflecting the line about y = x (example here). To arrive at 2x, we first need to arrive at x^2 + k for some real number k, i.e. given our two lines, we somehow want the products of their points over the entire range of x-values to generate an upward vertical parabola with its vertex on the y-axis. While this still isn't a proof, it does seem that such a special result would require special starting conditions. Just something to keep in mind, if you're not thinking along those lines already. I was actually thinking about this earlier and began to assume that "of course it would become 2x." However, it still baffles me how multiplying a function by its inverse and taking the derivative would result in a function of 2x. EDIT: In effect, if this problem is solved, then it could potentially help solve the Collatz conjecture, because it could provide insight into how functions interact. It might not, though. Edited March 21, 2014 by Unity+
Unity+ (Author) Posted March 22, 2014 (edited) Here are the properties I have found with this:
Having a function [math]f(x)=x^{n}[/math] where n > 1, it will require an infinite number of steps to reach 2x.
Having a function [math]f(x)=x^{n}[/math] where n = 1, the number of steps needed (by the conjecture) will be finite.
Having a function [math]f(x)=x^{n}[/math] where n = 0, the result will only be 0.
Having a function [math]f(x)=x^{n}[/math] where n = -1, the number of steps needed (by the conjecture) will be finite, because this would result in the inverse being multiplied by its normal function.
Having a function [math]f(x)=x^{n}[/math] where n < -1, the number of steps needed (by the conjecture) will be infinite; it will require an infinite number of steps to reach 2x.
Here is a number line representing the properties, where p is the finite value given by the conjecture. EDIT: Of course, these properties only apply if there are no other constants or variables in the equation. EDIT2: So, I tried negative values of n in x^n + 1, more specifically -1, and currently it is giving very ugly results. I am still trying to see if it ever gets to 2x. EDIT3: It turns out that if n in rx^n + c is less than -1, then the result goes on to infinity. There was a miscalculation; I am going to retry the calculations. Edited March 22, 2014 by Unity+
Unity+ (Author) Posted March 23, 2014 When we get to the point that we have y = rx + s, what we have is a linear equation, and its inverse is also a linear equation. Geometrically, we have a line, and to get the inverse, we're reflecting the line about y = x (example here). One thing I forgot to mention about this earlier is that 2x as a function doesn't follow this pattern, as the result would again be 2x if the function is 2x and the inverse is x/2.
Unity+ (Author) Posted March 24, 2014 (edited) Here is a way to represent the new type of iterating function: [math]\gamma (a)=\begin{cases} \frac{\mathrm{d} }{\mathrm{d} x}\left ( x_{a}y_{a} \right )& \text{ if } n_{d}=1 \\ \lim_{a\rightarrow \infty }\frac{\mathrm{d} }{\mathrm{d} x}\left ( x_{a}y_{a} \right )& \text{ if } n_{d}>1 \end{cases}[/math] The reason I represented the two functions, the function and its inverse, by x and y is that the inverse of f(x) is found by the method of replacing x with y and y with x and then solving for y. Edited March 24, 2014 by Unity+
John Posted March 24, 2014 One thing I forgot to mention about this earlier is that 2x as a function doesn't follow this pattern, as the result would again be 2x if the function is 2x and the inverse is x/2. I'm not sure what you mean here. The function y = 2x follows the pattern just fine. Here is a way to represent the new type of iterating function: [math]\gamma (a)=\begin{cases} \frac{\mathrm{d} }{\mathrm{d} x}\left ( x_{a}y_{a} \right )& \text{ if } n_{d}=1 \\ \lim_{a\rightarrow \infty }\frac{\mathrm{d} }{\mathrm{d} x}\left ( x_{a}y_{a} \right )& \text{ if } n_{d}>1 \end{cases}[/math] I think you'll have to clarify a bit. What is n_d? What's the purpose of the limit? Keep in mind that the limit involving [math]x^{\frac{a_{n}b_{n} + 1}{a_{n}b_{n}}}[/math] assumed the original algorithm of taking the inverse and differentiating the resulting product, regardless of the starting function, whereas more recently we've been talking about differentiating until we have a linear equation before worrying about the inverse at all.
Unity+ (Author) Posted March 24, 2014 I'm not sure what you mean here. The function y = 2x follows the pattern just fine. I meant the pattern you presented here. I think you'll have to clarify a bit. What is n_d? What's the purpose of the limit? Keep in mind that the limit involving [math]x^{\frac{a_{n}b_{n} + 1}{a_{n}b_{n}}}[/math] assumed the original algorithm of taking the inverse and differentiating the resulting product, regardless of the starting function, whereas more recently we've been talking about differentiating until we have a linear equation before worrying about the inverse at all. In the equation, n_d refers to the exponent of the degree term, that is, the largest exponent of x in the equation. The limit covers the case you presented earlier, where n is larger than 1. Because, as presented earlier, the exponent approaches 1 in the limit, the result would eventually reach 2x. Of course, this only refers to the original case, not the newly presented findings.
John Posted March 24, 2014 I meant the pattern you presented here. I don't see the problem. In the equation, n_d refers to the exponent of the degree term, that is, the largest exponent of x in the equation. Alright, but I'm still not sure why a limit is necessary in your definition of γ.
Unity+ (Author) Posted March 24, 2014 Alright, but I'm still not sure why a limit is necessary in your definition of γ. Because it refers to [math]\lim_{(a_{n},b_{n})\rightarrow \infty }x^{\frac{a_{n}b_{n}+1}{a_{n}b_{n}}}=x[/math], where the derivative results in this limit.
Unity+ (Author) Posted March 24, 2014 (edited) I don't see the problem. Unless I misunderstood what pattern you are referring to, 2x does not fit. Here is what I am referring to. Edited March 24, 2014 by Unity+
John Posted March 24, 2014 (edited) It's not really a pattern. It's just the geometrical interpretation of taking the inverse of a linear function. If we have y = 2x, then the inverse is y = (1/2)x. We can visualize this as reflecting the line y = 2x about the line y = x, as shown here. Now, about your definition of γ. I'm assuming that x_a is some function of x, and y_a is the inverse of x_a. Given what you've told me about n_d (why do you use n_d here, and not n_a?), the first part of the definition seems to be what we've discussed before (i.e. applying the inverse-product-derivative process to a linear function), though I think we still have the restriction that the coefficient of x must be 1 or the coefficient of x^0 must be 0. I'm fine with that. For the second part of the definition, I guess you're defining each x_a y_a to be the new product at each step of the process, and as you said a few posts ago, this applies to the original process, not the more recent one. Thus the "limit as a goes to infinity" refers to the sequence of functions generated by applying this process over and over starting with our original function. While the exponent on x does (as far as I can tell, at least for positive exponents) approach 1 as the number of iterations increases, that doesn't guarantee the limit is 2x specifically, and even if it did, we'd never actually arrive at 2x (though we would get arbitrarily close). Edited March 24, 2014 by John
Unity+ (Author) Posted March 24, 2014 (edited) It's not really a pattern. It's just the geometrical interpretation of taking the inverse of a linear function. If we have y = 2x, then the inverse is y = (1/2)x. We can visualize this as reflecting the line y = 2x about the line y = x, as shown here. Oh, I see how you are looking at it. I assumed you were looking at it as if it would be 2x, not x. Now, about your definition of γ. I'm assuming that x_a is some function of x, and y_a is the inverse of x_a. Given what you've told me about n_d (why do you use n_d here, and not n_a?), the first part of the definition seems to be what we've discussed before (i.e. applying the inverse-product-derivative process to a linear function), though I think we still have the restriction that the coefficient of x must be 1 or the coefficient of x^0 must be 0. I'm fine with that. For the second part of the definition, I guess you're defining each x_a y_a to be the new product at each step of the process, and as you said a few posts ago, this applies to the original process, not the more recent one. Thus the "limit as a goes to infinity" refers to the sequence of functions generated by applying this process over and over starting with our original function. While the exponent on x does (as far as I can tell, at least for positive exponents) approach 1 as the number of iterations increases, that doesn't guarantee the limit is 2x specifically, and even if it did, we'd never actually arrive at 2x (though we would get arbitrarily close). Well, that would be a problem of definition, because it reaches 2x in the limit but cannot reach it in finitely many steps. That will be a problem to approach when dealing with the definition of the problem. Also, your assumption is correct. I was attempting to look at the problem as if we were taking the derivative of x and y: [math]\frac{\mathrm{d} }{\mathrm{d} x}\left ( xy \right )[/math] However, I didn't see a connection. Though, there might be a connection I have not spotted. I used [math]n_{d}[/math] to represent the exponent of the degree of the polynomial function because it is the degree that matters most (the degree being the variable of focus that has the largest exponent). Edited March 24, 2014 by Unity+
uncool Posted March 25, 2014 Take f(x) = 2x + 2; f^{-1}(x) = x/2 - 1.
d/dx ((2x + 2)(x/2 - 1)) = d/dx (x^2 - x - 2) = 2x - 1
f(x) = 2x - 1; f^{-1}(x) = x/2 + 1/2
d/dx ((2x - 1)(x/2 + 1/2)) = d/dx (x^2 + x/2 - 1/2) = 2x + 1/2
For affine functions (f(x) = mx + b), you will always get that d/dx (f(x) f^{-1}(x)) is also an affine function; if we call it m'x + b', then m' will always be 2, and (once m = 2, which holds from the first step onward) b'/m' will always be equal to -1/2 b/m.
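Here is a quick sympy sketch of this iteration (my own check, not part of the post); the constant term alternates sign and halves in magnitude each step, so it approaches 0 but never vanishes:

```python
# Iterate the inverse-product-derivative step starting from 2x + 2 and watch
# the constant term: 2, -1, 1/2, -1/4, 1/8, ...
import sympy as sp

x, y = sp.symbols('x y')

def step(f):
    inv = sp.solve(sp.Eq(x, f.subs(x, y)), y)[0]   # inverse of the affine function
    return sp.expand(sp.diff(f * inv, x))

f = 2*x + 2
for _ in range(4):
    f = step(f)
    print(f)
# 2*x - 1
# 2*x + 1/2
# 2*x - 1/4
# 2*x + 1/8
```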