matt grime
Everything posted by matt grime
-
there should be no suffixes on the x's on the rhs. note the word DEFINE in there. there is no reason to assume x^0 has to be defined for any x; that presumes x^0 is defined a priori, and there is no reason to assume that. here, let me give an example that shows assumptions can be stupid: "the largest integer is 1." suppose L is the largest integer. L^2 >= L, but L is largest, so L^2 = L, thus L = 0 or 1, and 1 is bigger than 0. QED.
-
before you start overly editing (one annoying habit, that), you have apparently defined x^y when x=0 and y is a negative integer
-
entirely possible - i make mistakes all the time. so -j = ai - b + c(a+bi+cj) is what it should read, so ca - b = 0, a + cb = 0, c^2 = -1. well, that's even easier then, since c^2 = -1 has no solution in the reals.
-
It is also nothing more than a convention to declare that n! = n(n-1)...1 for n a positive integer, Johnny, and you have no problem with that. No one is saying that multiplication of real numbers is not commutative, merely that you are using the sum and product symbols in an odd manner. All the problems and attempted fixes you created in writing out Newton's expansion are of your own making and are easily avoided.
-
Topology can be independent of Analysis in the sense you are thinking of on first reading. Topology (pointset) is in essence the algebraist's way of doing analysis. All you need from analysis is the idea that the real numbers have a distance notion that makes them a metric topological space.
-
No, it must not force that upon us, and there is no reason to suppose it should; moreover, multiplication does not have to be commutative (it isn't for matrices). Even with real numbers, if the product, or summation, is over an infinite index then the order of the operations affects the outcome - rearrange the series for log(2) coming from the Taylor expansion and the outcome can equal log(3/2). You are just breaking a convention, nothing more.
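To see numerically that rearranging a conditionally convergent series changes its sum, here is a short Python sketch (my own illustration, not from the post; it uses a different rearrangement than the log(3/2) one, but shows the same phenomenon). The alternating harmonic series sums to log 2 in its usual order; taking one positive term followed by two negative terms converges to (1/2) log 2 instead.

```python
import math

def alternating_harmonic(n_terms):
    # Usual order: 1 - 1/2 + 1/3 - 1/4 + ...  -> log 2
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged(n_blocks):
    # One positive term, then two negative terms:
    # 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ...  -> (1/2) log 2
    total = 0.0
    for b in range(1, n_blocks + 1):
        total += 1 / (2 * b - 1)   # next odd-denominator (positive) term
        total -= 1 / (4 * b - 2)   # next even-denominator (negative) term
        total -= 1 / (4 * b)       # and the one after it
    return total

print(alternating_harmonic(100000))  # close to log 2 ~ 0.6931
print(rearranged(100000))            # close to (1/2) log 2 ~ 0.3466
```

Every term of the original series is used exactly once in the rearrangement; only the order changed, and the sum halved.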
-
I said "ought" since you are drawing conclusions about the mathematical interpretation of things, so you ought to adopt the mathematical meanings for those things, accept their limits, and understand what they do. If I attempt to do geometry in the hyperbolic sense using the laws of euclidean geometry I will not draw any good conclusions, will I?
-
Well, you're doing mathematics using mathematical conventions, or you ought to be. If you are using your own interpretations of things but still concluding that mathematics is flawed, rather than your attempts to do it, then that is even more wrong. As it is, by DEFINITION 0!=1 in maths, and this causes no problems. By convention, in certain parts of mathematics we adopt the identification 0^0=1 too. There are no issues, since these conventions only apply within the areas where they are declared true. As it is I don't see this thread going anywhere, especially if you adopt a stance you won't change that is different from the mathematical ones and then proceed to deduce mathematics contains some "funky problems" with its definitions. You have written an expression that involves dividing by zero, then manipulated it to get rid of this error, and have issues with 0! and so on. This is a fault with your chosen conventions.
-
Why should it lead to a contradiction? I didn't say it did. I said you're ignoring the fact that the index of a product (or sum) is an ordinal, and indexing it from 1 to 0 isn't using an ordinal. I could adopt other conventions, but they are just conventions, Johnny, they are not empirically true.
-
this is undefined, so what follows is spurious. I am asking you to tell us why you thought it necessary to remind us that multiplication of real numbers is commutative. you may not reverse the meaning of the upper and lower terms; it is not like integration. [math]\prod_{k=r}^{s}x_k[/math] means the product [math]x_r x_{r+1}\ldots x_s[/math], and this is not defined unless r <= s; you are just misusing notation, that is all I am saying.

There is no proof of those, since they are conventions. I can prove the following: lim x^0 as x tends to zero is 1 - this fact is what we use in Taylor series for stating 0^0=1. lim 0^x as x tends to 0 (from above) is 0 - this is why we don't have a universal and unequivocal meaning for 0^0. lim x^x as x tends to zero (again from above) is 1, since x^x is exactly exp(x log x) and x log x tends to 0 as x tends to zero.

Now, as for factorials, 0! is a useful convention and, as it happens, is the number of orderings of the empty set - it has exactly one, the empty ordering - but that is almost, again, a by-fiat definition. I cannot prove to you that 0!=1, since it is not something one proves; it is a convention.
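Those three limits are easy to check numerically. A quick Python sketch (my own illustration): x^0 is identically 1 for x != 0, 0^x is identically 0 for x > 0, and x^x = exp(x log x) drifts toward 1 as x shrinks toward 0 from above.

```python
import math

for x in (0.1, 0.01, 0.001):
    # x^0 is 1 for every nonzero x, so its limit as x -> 0 is 1
    assert x ** 0 == 1
    # 0^x is 0 for every x > 0, so its limit as x -> 0+ is 0
    assert 0 ** x == 0
    # x^x = exp(x log x) -> 1 as x -> 0+, since x log x -> 0
    print(x, x ** x, math.exp(x * math.log(x)))
```

The printed middle column creeps toward 1 as x decreases, while the first two limits disagree (1 versus 0), which is exactly why 0^0 has no universal value and is only fixed to 1 by convention in contexts like Taylor series.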
-
Here is an inline synopsis without using latex, and with lots of deletions to save space. the original is post 13 in the thread on manual evaluation of exponents. johnny asked me to point out where I thought it was wrong, so here it is.

no, you are presuming that 0! is writable as a product (it is - the empty product, but I suspect you don't get that). you have rewritten the taylor series and introduced a mistake. The next bit is you attempting to correct this self-introduced mistake. yes, why tell us? note this product is indexed from 1 to 0. This isn't allowed. no, it was caused by you thinking 0! was a product from 0 to 1. at best that would be an empty product (there are no things to multiply together), and the empty product is 1 as well. erm, you just multiply it out; there's no need to dress it up. I thought you were explaining an error in newton's formula? well? what about it? now you're onto something else entirely. this is true in this context. Both of those expressions are declared equal to 1, and all of the stuff I've snipped is unnecessary. we never were faced with a problem; you were, owing to mixing up notations and not defining things properly. Ha! given how much time and bandwidth you just wasted, doesn't that strike you as odd? You don't even compute the square root of 2 in the first post! if there were a division by zero error, then simply manipulating the symbols won't fix it. it is all a product of your choice of convention and of writing an expansion in an unadvised way. we (the rest of the mathematical community) already do it. yes, when k is strictly greater than 0, and 0! is 1.

<rest of unnecessary post snipped> <end of post 1; note not a single calculation of sqrt(2)>

The second post calculates sqrt(2) in some sense, but does not actually prove the series converges, merely evaluates a few terms. then there is a post on radius of convergence. the ratio test does not show the series converges, nor does it show it diverges.
All of those posts could have been summed up as: there is a formula for evaluating (1+x)^t when |x|<1:

(1+x)^t = 1 + tx + t(t-1)x^2/2! + t(t-1)(t-2)x^3/3! + ...

if t is a positive integer this agrees with the binomial expansion; if it is negative or fractional, the series converges for |x|<1. It may also converge when x=1, and it does when t=1/2, so that the square root of 2 can be written as

1 + 1/2 + (1/2)(-1/2)/2! + (1/2)(-1/2)(-3/2)/3! + ...

if you work out the first few terms and add them up, you'll see it is quite a good approximation after only a few additions. I won't prove it converges here. If you're interested, this comes from the "taylor series" of (1+x)^t, a technique that lets us write lots of functions as power series like this. 6 lines? some unnecessary, really, no latex, and quite clear.
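That summary translates directly into code. Here is a short Python sketch of it (my own, using the standard recurrence for generalized binomial coefficients): it sums the series for t = 1/2, x = 1 and compares against sqrt(2). Convergence at the boundary x = 1 is slow, so a fair number of terms are needed for good accuracy.

```python
import math

def binomial_series(t, x, n_terms):
    # Partial sum of (1+x)^t = sum over k of C(t, k) x^k, valid for |x| < 1
    # (and at x = 1 for t = 1/2). Uses C(t, k+1) = C(t, k) * (t - k) / (k + 1).
    total = 0.0
    coeff = 1.0  # C(t, 0) = 1
    for k in range(n_terms):
        total += coeff * x ** k
        coeff *= (t - k) / (k + 1)
    return total

# First two terms: 1 + 1/2, already a rough approximation to sqrt(2)
print(binomial_series(0.5, 1.0, 2))
# Many terms: a good approximation
print(binomial_series(0.5, 1.0, 10000), math.sqrt(2))
```

Since the terms alternate in sign from k = 2 on, the error of a partial sum is bounded by the first omitted term, which shrinks like k^(-3/2) at x = 1; that is why thousands of terms are needed here, whereas for |x| well inside 1 a handful would do.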
-
Johnny, part of the definition of a function is its domain (and range). Your f is not a proper function: you fail to define the range. It is at best "an expression". Let us do an example to show why this is important: Let S be the integers mod 2, i.e. 0 and 1 with addition such that 1+1=0. The functions x^2+1 and x^4+1 are obviously different as functions of a real variable, but they are equal as functions from S to S. Now, in your case, assuming you were talking about f being a function from C to C (as z usually implies), the function isn't defined at 1, but 1 is a removable singularity; that is, there is a function defined on all of C and continuous (analytic, actually) that agrees with f on its natural domain. If you recall, I tried to explain removable singularities to you in an attempt to explain why 0^0 does not have a universal meaning but can be taken to equal 1 when doing Taylor series.
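The S example can be checked exhaustively, since the domain has only two points. A tiny Python sketch (my own illustration): over the integers mod 2, x^2+1 and x^4+1 agree at every point of the domain, so as functions from S to S they are equal, even though they differ as real functions.

```python
S = (0, 1)  # the integers mod 2

f = lambda x: (x ** 2 + 1) % 2
g = lambda x: (x ** 4 + 1) % 2

# Equal as functions S -> S: same value at every point of the domain
assert all(f(x) == g(x) for x in S)

# Different as functions of a real variable, e.g. at x = 2
assert 2 ** 2 + 1 != 2 ** 4 + 1
```

This is exactly why the domain (and range) are part of the data of a function: the same expression can define equal or unequal functions depending on where it is evaluated.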
-
Yes, but that wasn't what you stated (it can easily be extended to get round all the problems I've raised). But then what you do intend to state is often lost in the size of the post. Your original pair of posts, by the way, contains many misunderstandings, misstatements, and mistakes. Quite what that waffle about 0! was intended to do is something that defies description. And please don't start a debate in this thread about 0!, 0^0 and other things whose standard usage you simply do not accept. They are just formal conventions; you are over-analysing. And you've still not proven that the power series converges in finding the square root of 2.
-
Why shouldn't pi enter into it? If it's anything to do with integrals of any reasonably nice function then pi is bound to play a role. Learn about, ooh, fourier series to see why.
-
It looks to me like the coefficients of some fourier series.
-
x^x is defined to be exp(xlog(x))
-
It was, in my opinion, a very long post containing nothing of practical interest that could not have been done in 3 lines. You don't compute the square root of 2, and that very power series result you cite does not conclude that the expansion you care about is valid in finding the square root of 2, nor does it show it is invalid either, before you complain that you have used it. You realize I am talking about the post on radius of convergence that explained the ratio test? The previous post was just far too long, though well within the scope of this thread.
-
And the radius of convergence of your series is 1. Please don't hijack a thread with stupidly long and unnecessary posts, Johnny. Not everyone is as ignorant as you, and most people understand this; if they didn't, they would ask about it if they were interested. Your posting is tantamount to vandalism; please desist.
-
Please be aware that Johnny's method is very limited in scope (it cannot work out the square root of any number bigger than 2), and it doesn't really allow you to do much by hand, does it? Are you going to evaluate all those things? First, there is no reason to evaluate something's decimal expansion. It is for the most part mathematically unimportant. Second, if you really have to, there is a general iterative method for finding the square root of any number. It is based upon the Newton-Raphson iterative method.
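Here is a minimal sketch of that Newton-Raphson square-root iteration (my own code, not anything from the thread): applying Newton's method to f(x) = x^2 - a gives the classical iteration x_{n+1} = (x_n + a/x_n)/2, which converges rapidly (quadratically near the root) for any a > 0, not just a <= 2.

```python
def newton_sqrt(a, tol=1e-12):
    # Newton-Raphson on f(x) = x^2 - a:
    #   x_{n+1} = x_n - f(x_n)/f'(x_n) = (x_n + a / x_n) / 2
    x = a if a > 1 else 1.0  # any positive starting guess works
    while abs(x * x - a) > tol:
        x = (x + a / x) / 2
    return x

print(newton_sqrt(2))    # ~1.41421356...
print(newton_sqrt(10))   # works for numbers bigger than 2 as well
```

Each iteration roughly doubles the number of correct digits, which is why this is the standard by-hand (and in-hardware) method, in contrast to the boundary-of-convergence binomial series, which gains accuracy very slowly.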
-
Define "connection". If every atom is connected to every other atom, then it would depend on what a "connection" was, or more precisely how many ways there are to connect two atoms. The number would then be a simple function of this and the number of atoms in the universe.
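One concrete reading of that (my own illustration, assuming exactly one connection per unordered pair of atoms): with n atoms the count is just the number of pairs, n(n-1)/2.

```python
def pair_connections(n):
    # One connection per unordered pair of n atoms: C(n, 2) = n(n-1)/2
    return n * (n - 1) // 2

print(pair_connections(4))   # 4 atoms -> 6 pairwise connections
```

If instead there were m distinct ways to connect any two atoms, the total would simply be m * n(n-1)/2, which is the "simple function" of the connection count and the number of atoms.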
-
A sequence of real numbers converges if and only if it is Cauchy, yes, that is true, and easy to prove since R is a complete metric space.
-
If you're going to get bogged down in unimportant minor details you are never going to get anywhere. Oh, and it is also not the done thing to correct a mathematician's errors when they are obvious and unimportant. This may seem strange, and indeed disturbing, to some people. One certainly would never refer to writing > instead of => as an error, minor or otherwise. One would instead refer to the writer making a typo or some such euphemism, and the writer would readily acknowledge the mistake and thank the person for pointing it out. When you say things like "this bit's wrong, and I don't know how it affects your argument" you show yourself up, since it clearly doesn't affect the conclusion at all. Mathematicians make mistakes, lots of them; they are only serious if they are mistakes of understanding.
-
Can I offer a quick summary of your posting style? Post 23. The first 21 lines are unnecessary, being almost entirely summed up in line 22. The rest was a very verbose and unnecessary rewrite of a reasonably short proof. Why? That is rhetorical and does not require you to answer. It is supposed to make you think about wasting time posting unnecessary junk. For the sake of people on dial-up, can you not post huge long posts with loads of latex in them that are completely unnecessary?
-
yes it is. It is in fact the obvious "just do it" way of showing that n! grows faster than any exponential. It is not the simplest proof, but the simpler proofs require more knowledge. The second proof is much easier: since exp(k) has a power series valid for all k in the real numbers, it follows that k^n/n!, the n'th term, must converge to zero; that is, assuming k is positive, k^n/n! < 1 for all n sufficiently large, i.e. k^n < n! for all n sufficiently large. But that requires you to know taylor series, radii of convergence and d'alembert's ratio test. My proof can be followed merely by common sense and could be dreamt up by anyone who is prepared to think a little.
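The conclusion is easy to check numerically. A short Python sketch (my own illustration): for any fixed k > 0, k^n/n! tends to zero, so there is a first n with k^n < n!; the function below finds it by brute force.

```python
import math

def first_n_where_factorial_wins(k):
    # Smallest n with k^n < n!. This exists for every fixed k > 0,
    # since k^n / n! -> 0 as n -> infinity.
    n = 1
    while k ** n >= math.factorial(n):
        n += 1
    return n

print(first_n_where_factorial_wins(2))   # factorial overtakes 2^n quickly
print(first_n_where_factorial_wins(10))  # takes longer for a bigger base
```

The loop terminates because once n exceeds k, each step multiplies n!/k^n by n/k > 1, so the factorial eventually dominates no matter how large k is.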