matt grime Posted April 30, 2005 Johnny, I am more than familiar with all of the standard, and lots of the non-standard, parts of mathematics, including [math] ^nC_r[/math]. Would you possibly consider that a maths PhD may know more than you? Just explain what you mean by an r-permutation, and, if you really feel like it, state why it matters that the set is "given to you at random", whatever that may mean. I suspect you're just asking "how many ways are there of selecting r objects from n, order unimportant (or perhaps you want order important)?" Is that all you're asking, and why didn't you say so? I suspect, from the examples, that an r-permutation is an ordered r-tuple selected from the set of n elements without repetition. As it is, you've asked, and this is a quote, "How many elements in the generated set, with P(n,r) elements in it?", which makes no sense, and stated, "Suppose we have a set given to us by someone else, and they chose it at random. So you receive the set from them. Since this can randomly vary, let n denote the number of elements in the set you receive." What does that even mean? To receive a set randomly? What if, heaven forfend, it weren't finite?
Johnny5 Posted April 30, 2005 Just explain what you mean by an r-permutation I don't mean anything different from what is standard. Let's see... You are given a set of n elements, chosen at random. [math] n \in \{ 0,1,2,3,4,5,... \} [/math] You don't know what you are going to get for n, because it is being chosen by someone else. All you know for sure is that n is an element of the set of whole numbers, and n is finite. For now, I am ignoring any and all discussion about infinite sets. So first, the other person must choose a number for n. Suppose that the choice made by them is n=3, e.g. {a,b,c}. Here are the permutations of length 3: (a,b,c) (a,c,b) (b,a,c) (b,c,a) (c,a,b) (c,b,a) Now, they are all regarded as different, and the list is comprehensive. So all permutations of length 3 have been listed. Here are the permutations of length 2: (a,b) (a,c) (b,a) (b,c) (c,a) (c,b) Now, each of these is regarded as different from any other, and the list is comprehensive. So all permutations of length two have been listed. Here are the permutations of length 1: (a) (b) (c) Now, each of these is regarded as different from any other, and the list is comprehensive. So all permutations of length one have been listed. And there are no permutations of length zero. So your question, as I understand it, is for me to clearly define what an r-permutation is. Standard notation, for when order matters, is to use parentheses. Thus, while it is the case that: {a,b,c}={a,c,b}={b,a,c}={b,c,a}={c,a,b}={c,b,a} It is not the case that (a,b,c)=(a,c,b). It is not the case that (a,b,c)=(b,a,c), and so on. Since usage of parentheses is standard for when order matters, I use them. As for the issue of replacement, too many words can confuse the other reader. Discussion of replacement vs not doesn't seem essential to the problem. It is clear just by looking at the generated set, and being told that it is comprehensive, that elements cannot be repeated. But you want a concise mathematical definition. I did individual work on this years ago, and I was using vectors. Arbitrary example given: [math] (x_1,x_2,x_3,...,x_n) [/math] The usage of this notation seemed easiest at the time. So there is your "n-tuple." The range of the elements depends upon the randomly chosen set. I have done some independent work also, on decision theory. The first decision to be made by the other individual is what n is going to be. In the example under study here, they made the decision that n=3. They then chose three different things, to be elements of the set. In the example here, the three objects were c, a, b. So an arbitrary 3-tuple will be represented by: [math] (x_1,x_2,x_3) [/math] The usage of the parentheses indicates that order matters. Now, if you back away from the problem, to consider a larger class of problems, then in some cases x1=x2 or x1=x3 or x2=x3 or x1=x2=x3, depending on the specific problem. But in the class of problems being considered here, none of those are possibilities, and this is embedded in the notion of "r-permutation on an n-set." It is clear from the context what was meant, and overly verbose discourse would have only served to confuse the reader, rather than communicate the information. It is deducible, from what I wrote alone, what an r-permutation on an n-set is. No further explanation was necessary. But you want more. Let A denote a set, chosen at random. Let n denote the number of elements of A.
Introduce a variable r, which must satisfy the following constraint: [math] 0 \leq r \leq n [/math] So r can also be chosen at random. Now, in the book I have, the notation P(n,r) is used to denote what they call the "number of r-permutations on an n-set." I certainly understood the guy. But I do agree that it's best to be as precise as one is capable of. Well ok. While I will use P(n,r) to denote the number of elements in the set I am interested in, P(n,r) is a number, and not the set in question, which must be generated. The number of elements in that set is P(n,r), but the set itself is something else. It is best to be precise, but in the fewest words possible. ------------------------------------------------------------------------ It is several days later, but what follows belongs here. Here is something I worked on years ago, when I first studied combinatorics. Suppose you are asked to find all 3-permutations of a 5-set. Let the given set be: {a,b,c,d,e} Weight the objects, setting them in 1-1 correspondence with: {1,2,3,4,5} You are asked to generate all 3-permutations of {a,b,c,d,e}. To express what you are asked for, it's something like this: [math] \sum_{x_1=1}^{x_1=5} \sum_{x_2=1}^{x_2=5} \sum_{x_3=1}^{x_3=5} (x_1,x_2,x_3) [/math] AND [math] \text{not}(x_1=x_2) \text{ and not}(x_1=x_3) \text{ and not}(x_2=x_3) [/math] But when you write +, that must be interpreted as XOR. Thus, the summation symbol doesn't mean the mathematical process of addition; it means the logical process of repetitive XOR. At any rate, the notation above gives sufficient instructions on what set is to be generated. The number of elements in that set is P(n,r).
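[Editor's note: not part of the original post. A minimal Python sketch of the construction described above, generating the 3-permutations of the 5-set {a,b,c,d,e} with itertools and checking the count against P(n,r) = n!/(n-r)!.]
[code]
# Minimal sketch (not from the thread): generate the r-permutations of an n-set
# and compare the count with P(n, r) = n! / (n - r)!.
from itertools import permutations
from math import factorial

elements = ['a', 'b', 'c', 'd', 'e']   # the 5-set from the example
r = 3

perms = list(permutations(elements, r))   # ordered r-tuples, no repeated entries
print(len(perms))                                                  # 60
print(factorial(len(elements)) // factorial(len(elements) - r))    # P(5, 3) = 60
[/code]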
matt grime Posted May 2, 2005 If you wished to explain concisely, why did you post that verbose ramble? So you just want the number of different ways of selecting r objects from n objects with order important, and no replacement. Why were you unable to say that? That being the common explanation. The number being [math]^nP_r[/math], abused in ascii to nPr, or n pick r. nCr is the binomial coefficient n choose r, where order is unimportant. They are, respectively, the well known numbers [math] ^nP_r = \frac{n!}{(n-r)!} [/math] [math]\binom{n}{r}=\frac{n!}{r!(n-r)!}[/math] and obviously 0!=1. Compared to what you wrote, how is that considered to be overly long? It even gives a formula for the numbers. Now, what on earth is all that nonsense about random?
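[Editor's note: a small Python sketch (mine, not from the post) of the two formulas quoted here, assuming the usual convention 0! = 1, which math.factorial already uses.]
[code]
# Sketch of n pick r and n choose r, relying on the convention 0! = 1.
from math import factorial

def n_pick_r(n, r):
    """Ordered selections of r from n without replacement: n!/(n-r)!."""
    return factorial(n) // factorial(n - r)

def n_choose_r(n, r):
    """Binomial coefficient n!/(r!(n-r)!), order unimportant."""
    return factorial(n) // (factorial(r) * factorial(n - r))

print(n_pick_r(5, 3))    # 60
print(n_choose_r(5, 3))  # 10
[/code]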
Johnny5 Posted May 3, 2005 If you wished to explain concisely why did you post that verbose ramble? Matt, because something is bothering me about this whole 0^0, and 0!=1 thing, combined with e^0. I thought about it over the weekend, so that I could explain it here clearly. Here is a 'law' of exponents: [math] a^n a^m = a^{n+m} [/math] So now, consider the case where n=0. [math] a^0 a^m = a^{0+m} = a^m [/math] Where I have made use of the field axiom that 0+m=m for any real number m. So certainly, the line of work above is true when m is a natural number, since the natural numbers are a subset of the reals. So for m an element of the natural numbers we have: [math] a^m = a_1 a_2 a_3... a_m [/math] In the case where a=0, the RHS is clearly zero, from a theorem (0*x=0 for any real x) which can be proven from the field axioms. So focus on this line here: [math] a^0 a^m = a^{0+m} = a^m [/math] Provided that a isn't zero, a^m isn't zero, and we can divide by it to obtain: [math] a^0 = \frac{a^m}{a^m} [/math] So here is a very important point. In the case where a isn't zero, it must be the case that a^0=1. I am sure you see this. But in the case where a=0, you get 0/0 on the RHS. But now, here comes the very big problem, which I now see clearly. Here is the exponential function: [math] e^x = \frac{x^0}{0!}+\frac{x^1}{1!}+\frac{x^2}{2!}+\frac{x^3}{3!}+...+\frac{x^n}{n!}+... [/math] In the case where x=0 we have: [math] e^0 = \frac{0^0}{0!} [/math] Now, e=2.71828... which is clearly not zero, therefore from an earlier theorem, e^0=1, hence by the transitive property of equality it follows that: [math] 1 = \frac{0^0}{0!} [/math] And you have repeatedly said that 0!=1, so that we have: [math] 1 = \frac{0^0}{0!} = \frac{0^0}{1} = 0^0[/math] Which contradicts a previous theorem, in which we concluded that 0^0 is indeterminate. So I've found a contradiction. Now what? Regards PS: And by the way, I was working with P(n,r), because I was wondering if there was a good reason for choosing 0!=1, based upon combinatorics. Also, it was good exercise.
stevem Posted May 3, 2005 Unfortunately, your argument is circular because of the way you've defined [math]e^x[/math]. If you define it as [math] e^x = 1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+...+\frac{x^n}{n!}+... [/math] then your argument no longer works.
Johnny5 Posted May 3, 2005 Unfortunately, your argument is circular because of the way you've defined [math]e^x[/math]. If you define it as [math] e^x = 1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+...+\frac{x^n}{n!}+... [/math] then your argument no longer works. No, the argument is not circular; in fact it makes use of someone else's stipulation that 0!=1. I didn't dictate that 0!=1, and wouldn't, because it causes a contradiction, as the linear reasoning showed. Regards
stevem Posted May 3, 2005 I wasn't worried about that because everyone happily agrees that 0!=1 because, for example, it makes a nice formula for [math]e^x[/math]. The problem that really concerns me is that you have used [math]x^0[/math] for any x. But you need to define [math]x^0[/math] for x=0 before you can use it in the formula for [math]e^x[/math], and that is where you get the circular argument. What you can do is say: I like the formula [math] e^x = \frac{x^0}{0!}+\frac{x^1}{1!}+\frac{x^2}{2!}+\frac{x^3}{3!}+...+\frac{x^n}{n!}+... [/math] so I will define [math]0^0=1[/math] and [math]0!=1[/math] and then I can use this version rather than the one that starts 1 + x. The only snag is that not everyone would agree with this definition.
Johnny5 Posted May 3, 2005 The problem that really concerns me is that you have used [math]x^0[/math] for any x. But you need to define [math]x^0[/math] for x=0 before you can use it in the formula for [math]e^x[/math] and that is where you get the circular argument. I did not use x^0 for any x, look more carefully please. I used x^0 for some x. Specifically this is the statement that was used in the argument: [math] \forall x \in \mathbb{R} [ \text{if not(x=0) then } x^0=1 ] [/math] And furthermore, the statement was concluded to be true, at the outset. Very carefully proven, I might add. I used the axioms of the real number system. Now, a definition is a statement which is stipulated to be true. But if I don't agree with it, you can 'stipulate' it all you want... though I've not agreed. 0!=1 is a statement which must be agreed to, by both reasoning agents. I've not yet agreed, because I am still working on a combinatorial reason to agree to it. But let's just say that I've agreed, to locate the contradiction. Suppose then, that I have also agreed that: [math] e^x = \frac{x^0}{0!}+\frac{x^1}{1!}+\frac{x^2}{2!}+\frac{x^3}{3!}+...+\frac{x^n}{n!}+... [/math] Then in the case where x=0, we have: [math] e^0 = \frac{0^0}{0!}+\frac{0^1}{1!}+\frac{0^2}{2!}+\frac{0^3}{3!}+...+\frac{0^n}{n!}+... [/math] And the argument began by showing that: 0^1=0 0^2=0 0^3=0 and so on, therefore: [math] e^0 = \frac{0^0}{0!} [/math] And the argument was for any real number x (other than zero), that x^0 must equal 1. Hence e^0=1, and using transitivity we have: [math] 1 = \frac{0^0}{0!} [/math] Now, using the statement that 0!=1, which I've not really agreed to, though we are pretending I have, it now follows that: [math] 1 = \frac{0^0}{1} [/math] Whence it follows that: [math] 1 = 0^0 [/math] Which contradicts the very original theorem, which was that x^0=1 for any x except zero.
stevem Posted May 3, 2005 I did not use x^0 for any x, look more carefully please. I used x^0 for some x. But you said: In the case where x=0 we have: [math] e^0 = \frac{0^0}{0!} [/math]
Johnny5 Posted May 3, 2005 Originally Posted by Johnny5: I did not use x^0 for any x, look more carefully please. I used x^0 for some x. But you said: Originally Posted by Johnny5: In the case where x=0 we have: [math] e^0 = \frac{0^0}{0!} [/math] You aren't following the logic, which is quite simple; look above, where I discuss who is stipulating what to whom, and who isn't agreeing with whom. I didn't dictate to the world that 0!=1. However, if I agree with the world that 0!=1, then I can prove to the world that 0=1, which is absurd. Regards
matt grime Posted May 3, 2005 [math] a^n a^m = a^{n+m} [/math] So now, consider the case where n=0. [math] a^0 a^m = a^{0+m} = a^m [/math] Where I have made use of the field axiom that 0+m=m for any real number m. What makes you think that exponentiation is a function that is allowed to raise a number to the power m if m isn't a natural number? What is (-1)^{1/2}? Not real, is it? So certainly, the line of work above is true when m is a natural number, since the natural numbers are a subset of the reals. So for m an element of the natural numbers we have: [math] a^m = a_1 a_2 a_3... a_m [/math] In the case where a=0, the RHS is clearly zero, from a theorem (0*x=0 for any real x) which can be proven from the field axioms And this only holds if m isn't zero. So focus on this line here: [math] a^0 a^m = a^{0+m} = a^m [/math] Provided that a isn't zero, a^m isn't zero, and we can divide by it to obtain: [math] a^0 = \frac{a^m}{a^m} [/math] So here is a very important point. In the case where a isn't zero, it must be the case that a^0=1. So we have something about a case when a=/=0; let us see why you think this means anything for the case a=0 (actually, you never explain what the non-zero case has to do with the zero case, at least not in a way that is mathematical). I am sure you see this. But in the case where a=0, you get 0/0 on the RHS. Which is a good reason not to define 0^0 using this method; however, we all know sin(x)/x is 1 when x is zero, don't we? But now, here comes the very big problem, which I now see clearly. Here is the exponential function: [math] e^x = \frac{x^0}{0!}+\frac{x^1}{1!}+\frac{x^2}{2!}+\frac{x^3}{3!}+...+\frac{x^n}{n!}+... [/math] That is one formula for it, based upon the usual standard that 0^0:=1. In the case where x=0 we have: [math] e^0 = \frac{0^0}{0!} [/math] Now, e=2.71828... which is clearly not zero, therefore from an earlier theorem, e^0=1, hence by transitive property of equality it follows that: [math] 1 = \frac{0^0}{0!} [/math] And you have repeatedly said that 0!=1, so that we have: [math] 1 = \frac{0^0}{0!} = \frac{0^0}{1} = 0^0[/math] Which contradicts a previous theorem, in which we concluded that 0^0 is indeterminate. Did we say it was indeterminate? We said that by considering some expressions you couldn't determine what 0^0 was; that doesn't mean that it can't be determined in general. Have you ever heard of removable singularities? Evidently not, which is a shame since I already explained to you why there is ambiguity in defining 0^0. So let's do it again: If we consider the function f(x)=x^0 for all x not equal to zero it is, by continuity, defined to be 1 (if we're thinking of x as a real number). Note that x^r is not necessarily defined as a function from R to R, and this lack of understanding of what a function is is something that is hindering you, by the way, but we can define it for all strictly positive x as exp(r log(x)); anyway, I digress. The point was that we can define a function from R\{0} to R via x^0 which is identically 1, thus we can extend to state 0^0=1 if we *fix* the exponent. Now, conversely, consider the function g(r)=0^r, for r strictly positive, which it is not unreasonable to say is identically 0, by continuity again. Then we can extend, by continuity, to a function such that 0^0=0. Now, consider the function h(x)=x^x for x strictly positive. This hasn't a hope in hell of being extended to a function when x=0. See, there are different functions in play here.
I mean you're blindly assuming that it makes any sense even to talk about raising a real number to a real power, which isn't true at all. Now, in the case of Taylor series we are in the first situation, where we have x^0, which we extend by continuity to state 0^0=1. What makes you think that the same symbol must have the same meaning in all contexts? So I've found a contradiction. Now what? Not really, at least nothing that we can't explain: you just won't accept that these are just conventions that we adopt as need be to suit our own needs. It is common in mathematics: without some axioms of set theory we cannot define a basis of all vector spaces; with them we can show there is a game that both players win with probability 1. Tough. That is just a consequence of an axiomatic system.
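[Editor's note: a numerical illustration in Python (my own sketch, not from the thread) of the two extensions being contrasted: x^0 is identically 1 as x approaches 0, while 0^r is identically 0 as r approaches 0 from above, so continuity alone cannot pin down one value for 0^0.]
[code]
# Sketch: the two one-sided limits that make 0^0 ambiguous.
for x in (1.0, 0.1, 0.001, 1e-9):
    print("x^0 at x =", x, "is", x ** 0)      # always 1.0

for r in (1.0, 0.1, 0.001, 1e-9):
    print("0^r at r =", r, "is", 0.0 ** r)    # always 0.0
[/code]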
Johnny5 Posted May 3, 2005 One at a time. Originally Posted by Johnny5: [math] a^n a^m = a^{n+m} [/math] So now, consider the case where n=0. [math] a^0 a^m = a^{0+m} = a^m [/math] Where I have made use of the field axiom that 0+m=m for any real number m. What makes you think that exponentiation is a function that is allowed to raise a number to the power m if m isn't a natural number? What is (-1)^{1/2}? Not real, is it? I already thought about how to answer this exact question over the weekend. I was working on really primitive stuff, regarding summation notation. The entire "axiomatic approach to numbers" which I use begins by developing the natural number system, in a fashion similar to Peano. Years ago, when I followed Peano's original work, I recall a neat little proof that multiplication is commutative, though I've forgotten how the proof goes now. At any rate, the idea I had this weekend goes like this: Define the natural numbers axiomatically, similar or identical to the way Peano did it. That being done, let a,b denote arbitrary natural numbers. a is the sum of 'a' ones, and b is the sum of 'b' ones. That is: [math] a = \sum_{n=1}^{n=a} 1 [/math] [math] b = \sum_{n=1}^{n=b} 1 [/math] Now consider the sum: [math] a + b = \sum_{n=1}^{n=a} 1 + \sum_{n=1}^{n=b} 1 [/math] Now, we will already have proven trichotomy, so it will now follow that either a=b XOR a<b XOR a>b. Let us first consider the case where not(a=b). Without loss of generality, let a>b. Thus: [math] a + b = \sum_{n=1}^{n=b} 1 +\sum_{n=b+1}^{n=a} 1 + \sum_{n=1}^{n=b} 1 [/math] So that: [math] a + b = \sum_{n=b+1}^{n=a} 1 + 2 \sum_{n=1}^{n=b} 1 [/math] And if we define subtraction, we can write: [math] a + b = \sum_{n=1}^{n=a-b} 1 + 2 \sum_{n=1}^{n=b} 1 [/math] Which is nothing but the obvious fact that: [math] a+b = (a-b)+2b [/math] Now consider the case where b=a. In this case we have: [math] a + a = \sum_{n=1}^{n=a} 1 + \sum_{n=1}^{n=a} 1 = 2 \sum_{n=1}^{n=a} 1=2a [/math] And now, consider a+a+a. [math] a + a + a = \sum_{n=1}^{n=a} 1 + \sum_{n=1}^{n=a} 1 +\sum_{n=1}^{n=a} 1 = 3 \sum_{n=1}^{n=a} 1 =3a [/math] Both of which facts come straight out of the distributive law. So, if we have the sum of m a's, we can write the following: [math] a_1+a_2+a_3+...+a_m = m \sum_{n=1}^{n=a} 1 =m \cdot a [/math] at least for natural numbers. Now, consider the case where m=a. In this case we have: [math] a_1+a_2+a_3+...+a_a = a \sum_{n=1}^{n=a} 1 =a \cdot a [/math] And we introduce the following notation: [math] a \cdot a = a^2 [/math] [math] a \cdot a^2 = a^3 [/math] And so on. So this gets you from pure addition of natural numbers, to exponents; on the natural numbers at least. So now to your question. What makes me think that I can raise a number to the power m, in the case where m isn't a natural number? The answer comes from the binomial theorem, Matt, I do believe. Binomial Theorem: [math] (1+x)^\alpha = \sum_{n=0}^{n=\infty} x^n \prod_{k=1}^{k=n} \frac{(\alpha +1 - k)}{k} [/math] Let's see if we get Pascal's triangle when \alpha is a natural number. Example 1: Let alpha=2, let x=a/b.
Hence: [math] (1+a/b)^2 = \sum_{n=0}^{n=\infty} \frac{a^n}{b^n} \prod_{k=1}^{k=n} \frac{(2 +1 - k)}{k} [/math] Which leads to: [math] \frac{1}{b^2} (b+a)^2 = \sum_{n=0}^{n=\infty} \frac{a^n}{b^n} \prod_{k=1}^{k=n} \frac{(2 +1 - k)}{k} [/math] Which leads to: [math] (b+a)^2 = b^2 \sum_{n=0}^{n=\infty} \frac{a^n}{b^n} \prod_{k=1}^{k=n} \frac{(2 +1 - k)}{k} [/math] Now, from memory the answer is: [math] (b+a)^2 = a^2+b^2+2ab [/math] The combination sum/product should also give this answer. [math] \sum_{n=0}^{n=\infty} \frac{a^n}{b^n} \prod_{k=1}^{k=n} \frac{(2 +1 - k)}{k} = \sum_{n=0}^{n=\infty} \frac{a^n}{b^n} \prod_{k=1}^{k=n} \frac{(3 - k)}{k} [/math] Starting off at n=0, we have: [math] \sum_{n=0}^{n=\infty} \frac{a^n}{b^n} \prod_{k=1}^{k=n} \frac{(3 - k)}{k} = 1+ \sum_{n=1}^{n=\infty} \frac{a^n}{b^n} \prod_{k=1}^{k=n} \frac{(3 - k)}{k}[/math] Then, evaluating at n=1, we have: [math] \sum_{n=0}^{n=\infty} \frac{a^n}{b^n} \prod_{k=1}^{k=n} \frac{(3 - k)}{k} = 1+2a/b+ \sum_{n=2}^{n=\infty} \frac{a^n}{b^n} \prod_{k=1}^{k=n} \frac{(3 - k)}{k}[/math] And then evaluating at n=2, we have: [math] \sum_{n=0}^{n=\infty} \frac{a^n}{b^n} \prod_{k=1}^{k=n} \frac{(3 - k)}{k} = 1+2a/b+ a^2/b^2+ \sum_{n=3}^{n=\infty} \frac{a^n}{b^n} \prod_{k=1}^{k=n} \frac{(3 - k)}{k}[/math] At this point, the numerator in the iterated product is zero, when k=n=3. And so for any larger value of n, there will be multiplication by zero, hence: [math] \sum_{n=0}^{n=\infty} \frac{a^n}{b^n} \prod_{k=1}^{k=n} \frac{(3 - k)}{k} = 1+2a/b+ a^2/b^2 [/math] Therefore: [math] b^2 \sum_{n=0}^{n=\infty} \frac{a^n}{b^n} \prod_{k=1}^{k=n} \frac{(3 - k)}{k} = b^2 +2ab+ a^2 =(a+b)^2 [/math]
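[Editor's note: not part of the original post. A quick Python check of the product form of the binomial coefficient used above, prod_{k=1}^{n} (alpha+1-k)/k, with a hypothetical helper named coeff; it confirms that for alpha = 2 the series terminates at 1 + 2x + x^2.]
[code]
# Sketch: the coefficient of x^n in (1+x)^alpha as the product prod_{k=1}^{n} (alpha+1-k)/k.
from fractions import Fraction

def coeff(alpha, n):
    c = Fraction(1)                      # empty product for n = 0
    for k in range(1, n + 1):
        c *= Fraction(alpha + 1 - k, k)
    return c

# Coefficients are 1, 2, 1, 0, 0, ... so (1+x)^2 = 1 + 2x + x^2, as worked out above.
print([coeff(2, n) for n in range(5)])
[/code]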
matt grime Posted May 4, 2005 That is one way to define x^r *for some x*, but not the correct way for all x; for instance, according to that theory the square root of minus 1 can be gotten by adding up real numbers. But of course you're sidestepping convergence issues when you write (1+x)^r, since that only converges if |x|<1. So, no, that isn't the correct definition for exponentiation. It is in fact exp{r log(x)}, as I repeatedly said, with some choice of branch of log, and a very restricted domain. Why a priori do you think 0^0 is defined? Is it the value of f(x) = x^0 when x=0? Is it the value of g(r)=0^r when r=0, or is it the value of h(t)=t^t when t is 0? Not that 0 must necessarily be in the domain of any of those functions, and indeed it is specious to define functions without domains, so I will declare the domain of f to be R with the *definition* that f(0)=1; thus we have a nice continuous function, the function f(x)=1, which is the convention taken in defining Taylor series. We note g can only reasonably have domain the strictly positive integers to begin with, but we extend it to strictly positive rational r, and thence by continuity to the closure of the strictly positive rationals, which is the positive reals (including 0), which means that g(x)=0 for all x. This is at least the third time that I've explained to you why it is that we say 0^0 is undetermined - there are two equally sound ways to define it, giving different answers. But in Taylor series we are clearly in the first case, that of f needing to be extended to allow f(0). I leave it to you to decide what h(x)'s domain can be. You're sidestepping the point, also, that it is only you that thinks that the symbols 0! and 0^0 must have some god-given meaning. They do not, and even the choice that 3!=3*2*1 is just a definition. It so happens that 3! is the number of distinct orderings of 3 objects, and undoubtedly such things led to the study of !, but the definition of n! is not "the number of ways of ordering n objects"; it is the unique solution to x_n = n*x_{n-1} with x_0=1 and n >= 1. We set n!:=x_n; it can then be used to describe things. How do *you* define n!? Why are you not asking what (-1)! is? Why must it be anything?
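[Editor's note: a small Python sketch (mine, not from the post) of the definition pointed to here, x^r := exp(r*log x) for x > 0, which handles exponents like sqrt(2) that the binomial expansion of (1+x)^r cannot reach when x lies outside (-1, 1).]
[code]
# Sketch: general real powers via exp(r * log(x)) for x > 0.
from math import exp, log, sqrt

def power(x, r):
    return exp(r * log(x))        # only meaningful here for x > 0

print(power(3.0, sqrt(2)))        # about 4.7288
print(3.0 ** sqrt(2))             # same value, for comparison
[/code]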
Johnny5 Posted May 4, 2005 How do *you* define n!? Why are you not asking what (-1)! is? Why must it be anything? It is going to take me some time to sift through everything you said, but this is a good place to start. In another thread, the question as to what (-1)! means came up, but I don't remember which one or how or why. I will try to find it. As for how I define n!, I have adopted the conventional definition. n! = n*(n-1)*(n-2)*...*1 And this can be written as follows, using product notation: [math] n! = \prod_{k=1}^{k=n} k = 1*2*...*n [/math]
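[Editor's note: a brief Python sketch (my own, for illustration) contrasting the two definitions now on the table: matt grime's recurrence x_0 = 1, x_n = n*x_{n-1}, and Johnny5's product of k from 1 to n. The product over an empty index range is 1, so the product form already returns 1 at n = 0, which is exactly the convention 0! = 1 under discussion.]
[code]
# Sketch: factorial via the recurrence and via the product; both give 0! = 1.
def fact_recurrence(n):
    return 1 if n == 0 else n * fact_recurrence(n - 1)   # x_0 = 1, x_n = n * x_{n-1}

def fact_product(n):
    result = 1
    for k in range(1, n + 1):      # empty loop when n = 0: the empty product is 1
        result *= k
    return result

print([fact_recurrence(n) for n in range(6)])   # [1, 1, 2, 6, 24, 120]
print([fact_product(n) for n in range(6)])      # [1, 1, 2, 6, 24, 120]
[/code]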
Johnny5 Posted May 4, 2005 That is one way to define x^r *for some x* but not the correct way for all x, for instance according to that theory the square root of minus 1 can be gotten by adding up real numbers. What theory? And what do you mean that "the square root of -1 can be obtained by adding up real numbers"?
matt grime Posted May 4, 2005 So, we see you have only defined ! for strictly positive integers, and thus *you* have no way to define 0!. What makes you think that this means 0! is defined at all? The rest of the world is happy to extend the definition of ! to include 0 by declaring 0!=1, which greatly simplifies many expressions, though it is not *necessary* to do so - it is simply an extension that *makes sense*. If you are prepared to accept certain axioms of set theory then it also makes sense, since there is exactly 1 permutation of an empty set: the empty permutation, but that is merely a convention too. And as for the second reply: you claim that you can raise numbers to non-integral powers via applying the "binomial theorem" to (1+x)^r. This is patently a false statement. It is only true for non-natural r if |x|<1, and does not let you define 3^{sqrt(2)}, for instance. Thus, when I asked you to define what it means to raise numbers to non-rational or integral powers, you gave an answer that would, if it were correct, allow us to define the square root of -1 simply by putting x=-2 and r=1/2 into the expansion of (1+x)^r and thus yield i as the sum of lots of real numbers. Of course -2 is outside the circle of convergence for that power series.
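[Editor's note: a numerical sketch in Python (mine, with an assumed helper named binomial_partial_sum) of the convergence point being made: partial sums of the binomial series for (1+x)^(1/2) settle down for |x| < 1 but blow up at x = -2 rather than producing the square root of -1.]
[code]
# Sketch: partial sums of (1+x)^r = sum_n x^n * prod_{k=1}^{n} (r+1-k)/k.
def binomial_partial_sum(x, r, terms):
    total, c = 0.0, 1.0
    for n in range(terms):
        total += c * x ** n
        c *= (r - n) / (n + 1)        # next coefficient: multiply by (r+1-(n+1))/(n+1)
    return total

print(binomial_partial_sum(0.5, 0.5, 40))    # about 1.2247, i.e. sqrt(1.5)
print(binomial_partial_sum(-2.0, 0.5, 40))   # enormous: the series diverges for |x| > 1
[/code]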
Johnny5 Posted May 4, 2005 So, we see you have only defined ! for strictly positive integers, and thus *you* have no way to define 0!. Correct... though I could be persuaded to define 0! somehow, if the reason makes combinatorial sense. What makes you think that this means 0! is defined at all? The rest of the world is happy to extend the definition of ! to include 0 by declaring 0!=1, which greatly simplifies many expressions, though it is not *necessary* to do so - it is simply an extension that *makes sense*. I usually take 0!=1, so that I can use e^x, but that's only what I usually do. But now I have reason to question that practice. If you are prepared to accept certain axioms of set theory then it also makes sense, since there is exactly 1 permutation of an empty set: the empty permutation, but that is merely a convention too. Which axioms did you have in mind? "There is exactly 1 permutation of an empty set"??? What does that mean? Regards
Johnny5 Posted May 4, 2005 You claim that you can raise numbers to non-integral powers via applying the "binomial theorem" to (1+x)^r. This is patently a false statement. It is only true for non-natural r if |x|<1, and does not let you define 3^{sqrt(2)}, for instance. Thus, when I asked you to define what it means to raise numbers to non-rational or integral powers, you gave an answer that would, if it were correct, allow us to define the square root of -1 simply by putting x=-2 and r=1/2 into the expansion of (1+x)^r and thus yield i as the sum of lots of real numbers. Of course -2 is outside the circle of convergence for that power series. I am currently analyzing exactly this subject matter. It was Newton who extended the binomial theorem to non-integral powers, at least that's what the history books say. Nonetheless, I have used the binomial expansion formula to compute the square root of two before, and it works. Analysis of radical 2 goes something like this: (It will be a good exercise for me) Computation of radical two The ancient Babylonians supposedly had a formula for computing the square root of two, now known as the Babylonian formula. The idea behind a square root comes from investigation of the natural number system. Observe that: 1*1=1 2*2=4 3*3=9 4*4=16 5*5=25 6*6=36 We can now rig mathematics, in order to ask mathematical questions, using x to represent an unknown number, as follows: Find x, if x*x = 36. x is the number which, when multiplied by itself one time, is equal to 36. And we already observed that 6*6=36, hence x=6. x*x=36 is an example of a quadratic equation. Definition: [math] x*x = x^2 [/math] So we can write this quadratic equation as follows: [math] x^2=36 [/math] We already know the answer is 6, of course. So now consider the following quadratic: [math] x^2=2 [/math] In order to solve for x, we have to compute the square root of two; namely the number which, when multiplied by itself, is equal to two. It is clear that such a number cannot be a natural number. Symbolically, the answer is: [math] x = \sqrt{2} [/math] But we are looking for a decimal expression. Notice the following: (1.4)(1.4) = 1.96 This is almost equal to 2, but not quite. Now consider (1.41)(1.41) = 1.9881 So we are closer to 2. The binomial formula gives us a way to get perpetually closer, systematically.
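[Editor's note: for reference, a short Python sketch (mine, not from the post) of the Babylonian formula mentioned above: repeatedly replace a guess x by the average of x and 2/x, which converges very quickly to sqrt(2).]
[code]
# Sketch: the Babylonian (Heron) iteration for sqrt(2).
def babylonian_sqrt2(guess=1.0, steps=6):
    x = guess
    for _ in range(steps):
        x = (x + 2.0 / x) / 2.0    # average the guess with 2 / guess
        print(x)
    return x

babylonian_sqrt2()   # 1.5, 1.41666..., 1.41421568..., ..., 1.4142135623730951
[/code]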
matt grime Posted May 4, 2005 Computing 2^{1/2} using the formula for (1+x)^r does not prove anything; what about all the other possible cases? Do they all converge? I see you're about to go off on a pointless tangent and ignore what has been explained to you. So that's this thread off limits then.
Johnny5 Posted May 4, 2005 Computing 2^{1/2} using the formula for (1+x)^r does not prove anything; what about all the other possible cases? Do they all converge? I see you're about to go off on a pointless tangent and ignore what has been explained to you. So that's this thread off limits then. Well, I was just going to investigate root 2 first, and then move to other cases, in order to figure out the radius of convergence. I remember doing that, but it's been a while. But I am doing root 2 first; what would you have me do? (And I do not feel this is a pointless tangent, it is part of the problem, a particular case.) I mean, I already know what happens. You get ... 1.414...... I was just going to show this, though, by performing the sum. Also, the work here ties in with history, in the sense that it is connected to the Babylonian formula, and Sir Isaac Newton, as well as Newton's method of approximation, which I probably can recall without the aid of google. Regards
matt grime Posted May 4, 2005 Look at the power series; it's obvious how to work out the radius of convergence from that. And even if it weren't, then surely the fact that if I put any x in such that x<-1 and have r, say, 1/2, then the resulting power series cannot converge, otherwise the square root of a negative number would be real; hence its radius of convergence is 1. But you're entirely missing the point. It is just a matter of convenience and consistency to adopt 0!=1, and x^0 extends to 0^0=1 for the uses you have come up with, and there is no other useful definition for 0!, so it is unambiguous, but there are other ways to define 0^0 that contradict the extension of x^0 to x=0, thus we cannot declare 0^0 to be unambiguous. There is no absolute truth in mathematics; there are just things we construct, and if the construction is universal then there is no harm in adopting it universally, hence 0!=1 is both useful and consistent. Set theory often defines "empty products" and "empty" functions, as well as empty sets, and in this situation the combinatorics mean that 0!=1 is sensible. But why are you so opposed to the definition of 0! as 1? Why not? If you don't like the convention, do not adopt it, but that is your choice.
Johnny5 Posted May 4, 2005 But why are you so opposed to the definition of 0! as 1? Why not? If you don't like the convention, do not adopt it, but that is your choice. I am undecided right now, as to how to define 0!, if at all. The discussion as to the radius of convergence of (1+x)^r interests me more, though, right now. Look at the power series; it's obvious how to work out the radius of convergence from that. And even if it weren't, then surely the fact that if I put any x in such that x<-1 and have r, say, 1/2, then the resulting power series cannot converge, otherwise the square root of a negative number would be real; hence its radius of convergence is 1. Ok, here is the power series: [math] (1+x)^r = \sum_{n=0}^{n=\infty} x^n \prod_{k=1}^{k=n} \frac{r+1-k}{k} [/math] Now things are coming back to me, about radius of convergence. I recall that all I have to do is analyze f(k), where: [math] f(k) = \frac{r+1-k}{k} [/math] Let me see what happens as k approaches infinity. Let's fix r at 1/2 for this, hence: [math] (1+x)^{1/2} = \sum_{n=0}^{n=\infty} x^n \prod_{k=1}^{k=n} \frac{3/2 -k}{k} [/math] So if r=1/2, then [math] f(k) = \frac{3/2 - k}{k} [/math] Now, we have to evaluate the limit as k approaches infinity of f(k). [math] \lim_{k \to \infty} f(k) = \lim_{k \to \infty} \frac{3/2 - k}{k} [/math] The limit can be determined by using L'Hopital's rule. Let me try to formulate L'Hopital's rule correctly. Suppose that in taking the limit of g(k)/h(k) we have either the case 0/0 or infinity/infinity. Then the limit of g(k)/h(k) will equal the limit of g'(k)/h'(k). So in our problem here, we wish to compute: [math] \lim_{k \to \infty} f(k) = \lim_{k \to \infty} \frac{3/2 - k}{k} [/math] The derivative of the numerator with respect to k is equal to -1, and the derivative of the denominator with respect to k is equal to one, and the ratio of these is -1. Hence the limit exists and equals -1. Now, I just have to remember how this ties into the radius of convergence.
matt grime Posted May 4, 2005 Fine, so don't use 0!=1; then you'll have to ignore all the mathematical formulae that do use that definition. The limit is just -1 in your above post, as surely absolutely anyone can see? The abs value of your f(k) is the radius of convergence, being the abs value of the ratio of consecutive terms in the series as the index tends to infinity. Pointless and completely off topic, but what's new? Newton probably used Newton's method to work out sqrt(2), and all the other square roots, but, once more, this is neither here nor there. When are you going to actually learn about the things that are explained to you, instead of wandering off on tangents?
Johnny5 Posted May 4, 2005 The limit is just -1 in your above post, as surely absolutely anyone can see? Well, I just used L'Hopital's rule; I didn't actually prove that the limit is -1, but it is plain to see. But to prove it, some would ask for an epsilon-delta proof, though I don't feel that's necessary when you have something like L'Hopital's rule at your disposal. The abs value of your f(k) is the radius of convergence, being the abs value of the ratio of consecutive terms in the series as the index tends to infinity. I have to think about this... absolute value of ratio of consecutive terms in the series, as the index tends to infinity. Let me try to interpret that formula clearly. [math] (1+x)^{1/2} = \sum_{n=0}^{n=\infty} x^n \prod_{k=1}^{k=n} \frac{3/2 -k}{k} [/math] The series is going to be composed of terms. Power series have the following form: [math] C_0+C_1x+C_2x^2+C_3x^3+...+C_nx^n+... [/math] The first term of the series is C_0, the second term of the series is C_1 x, and so on. The nth term of the series is C_n x^n. Now, pick a term of the series at random. How about n=p=999493245932459934593495. Forget about all other terms in the series but this one. So ignore the decimal representation of this term, and call it p. So the pth term of the series is given by: [math] x^p \prod_{k=1}^{k=p} \frac{3/2 -k}{k} [/math] Now, I am going to think about what you said again, very carefully. The absolute value of my f(k) is the radius of convergence... being the ratio of the absolute value of consecutive terms of the series as the index k tends to infinity. Hmm. I don't see how you get "absolute value" from anything which I've done. Actually, let me let p go from 1 to 4, to see the pattern you say is there. Case 1: p=1 [math] x^1 \prod_{k=1}^{k=1} \frac{3/2 -k}{k} = x \frac{3/2 -1}{1} = \frac{x}{2} [/math] Case 2: p=2 [math] x^2 \prod_{k=1}^{k=2} \frac{3/2 -k}{k} = x^2 \frac{3/2 -1}{1}\frac{3/2 -2}{2} = x^2 (\frac{1}{2}) (\frac{-1}{4})[/math] Case 3: p=3 [math] x^3 \prod_{k=1}^{k=3} \frac{3/2 -k}{k} = x^3 \frac{3/2 -1}{1}\frac{3/2 -2}{2} \frac{3/2 -3}{3} = x^3 (\frac{1}{2}) (\frac{-1}{4})(-\frac{1}{2}) [/math] Case 4: p=4 [math] x^4 \prod_{k=1}^{k=4} \frac{3/2 -k}{k} = x^4 (\frac{1}{2}) (\frac{-1}{4})(-\frac{1}{2})(\frac{-5}{8}) [/math] Now you say... "ratio of absolute value of consecutive terms of series." Yes. I went off and thought about it, Matt. My f(k) is the ratio of consecutive terms of the series. Yes. Disregard the powers of x, and focus on two consecutive coefficients of terms in the series, say the coefficient of the pth term, and the coefficient of the (p+1)th term. Now, take the ratio of the coefficients of the pth term, and the (p+1)th term, like so: Coefficient of pth term: [math] \prod_{k=1}^{k=p} \frac{3/2 -k}{k} [/math] Coefficient of (p+1)th term: [math] \prod_{k=1}^{k=p+1} \frac{3/2 -k}{k} [/math] [math] \text{Ratio of two consecutive coefficients} [/math] [math] \frac{\prod_{k=1}^{k=p+1} \frac{3/2 -k}{k}}{\prod_{k=1}^{k=p} \frac{3/2 -k}{k}} = \frac{\frac{3/2-(p+1)}{p+1}\prod_{k=1}^{k=p} \frac{3/2 -k}{k}}{\prod_{k=1}^{k=p} \frac{3/2 -k}{k}} =\frac{3/2-(p+1)}{p+1} [/math] And p is arbitrary. So letting k=p+1, implying k is arbitrary as well, we have, as the ratio of two consecutive terms of the series: [math] \frac{3/2-k}{k} [/math] which is my f(k), exactly as you said. So, in taking the limit, as k tends to infinity, of f(k), we are in fact performing the ratio test, precisely because f(k) is the ratio of the nth term and the (n-1)th term.
Of course I still don't see where the absolute value bit comes in, but I see exactly what you meant. Just that f(k) is the ratio of two consecutive terms of the series. Uh huh.
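[Editor's note: a numerical check in Python (my sketch, with an assumed helper named coeff) of the ratio just derived: for the series of (1+x)^(1/2) the ratio of consecutive coefficients, (3/2-k)/k, tends to -1, so its absolute value tends to 1 and the ratio test gives a radius of convergence of 1.]
[code]
# Sketch: ratio of consecutive coefficients of the binomial series for (1+x)^(1/2).
def coeff(n, r=0.5):
    c = 1.0
    for k in range(1, n + 1):
        c *= (r + 1 - k) / k       # prod_{k=1}^{n} (r+1-k)/k
    return c

for p in (1, 10, 100, 1000):
    print(p, coeff(p + 1) / coeff(p))   # tends to -1 as p grows
[/code]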
matt grime Posted May 4, 2005 But to prove it, some would ask for an epsilon-delta proof, though I don't feel that's necessary when you have something like L'Hopital's rule at your disposal. Why on earth would I need L'Hopital? It's obvious from the Archimedean property. I have to think about this... absolute value of ratio of consecutive terms in the series, as the index tends to infinity. Why? It's obvious, and frequently the *definition*. [The remainder of the post quotes, verbatim, Johnny5's working above on the terms and coefficients of the series for (1+x)^{1/2}.] It's the *definition*, look it up.