amplitude
Members-
Posts
18 -
Joined
-
Last visited
Recent Profile Visitors
The recent visitors block is disabled and is not being shown to other users.
amplitude's Achievements
Quark (2/13)
-1
Reputation
-
This post raises an interesting point. Because there is an infinite number of fractions between 0 and 1. What is the interval between each? It is difficult to imagine any other answer than h (the smallest imaginable number). But h = 0. Any questions? In one way this thread has typified an important aspect of the history of mathematics since the 19th C. On the one hand, there are those who argue, "hey, it works, what other proof do you need?" And on the other (historically speaking) you had Cantor, Frege, Russell et al, who said, "yes, but we need to know why it works..." I really am saying goodnight this time, chaps. I thank those of you who have helped me to clarify my thoughts since my initial post; but we haven't made any real progress in answering the original question, so I think it's probably time to close this thread.
-
I thought I had said goodbye to this thread days ago, but mathematics and philosophy are like malaria, once you've been bitten, it's in your blood for the rest of your life. A couple of general comments: glancing back over some of the posts, I think some confusion has arisen because it's not always easy to maintain the proper distinction between the natural numbers, the real numbers, and the imaginary numbers. Maybe the way I've flitted from one to the other hasn't always been as clear as it could have been, either. The question I raised is principally a problem in the natural number line. There are moments when we need to think in terms of natural integers, as when we say, there is no number (ie natural integer) between 9 and 10, and there are moments when we need to expand the scale so as to include the natural fractions (as when we return to 0.999...=1). Perhaps confusingly, there are also moments when it is useful to think in terms of the real numbers (eg which is correct according to the axioms of arithmetic, 1 - 0.999... = 0 or 1 - 0.999... = .000...1? And the latter answer, theoretically the smallest possible number, would be an imaginary number, more usually represented as h; but whether we use h or 0.000...1 is mainly a matter of convenience and does not reflect any significant philosophical point, except that many mathematicians think that h is just another name for 0, which frankly doesn't help). There is a difference between a belief as to what is one's central difficulty, and a belief (or prejudice) as to the truth of the matter.
-
BN, I don't think you've understood the original question and you appear to be unfamiliar with its context in the axioms and definitions of arithmetic. In the number line of the natural integers (0, 1, 2, 3, ...n), 9 < 10, but there is no number between them. This is a part of the paradox which provoked my original question. There can be no number between 0.999... and 1, for the same reason that there is no natural integer between 9 and 10, because 10 is the immediate successor of 9. But nobody denies that 0.999... and 1 are different numbers (hence the requirement for proofs of their equivalence). Middle-level or 'synthetic' mathematics - that's to say, mathematics above the level of the axioms - offers several proofs that 0.999...=1. I have personally had no luck trying to devise an additional proof based solely upon the axioms and definitions, and I was curious to know whether anybody had any ideas as to how such a proof might be constructed. There are philosophical reasons why a proof based upon primitive statements would be more satisfactory than proofs based upon synthetic statements. As I suggested in another post, I believe the central difficulty is the axiomatic requirement that two numbers cannot have the same successor. If we expand the natural number line to include the fractions, this means that the successor to 0.999... cannot be equal to 0.999... BTW, I "believe" nothing in relation to this question. I am trying to follow an enquiry wherever it may lead.
- 71 replies
-
-1
-
Thanks for the tip about charmap.exe, Studiot! I blush to think that after all these years of using Windows, I still wasn't aware of that... As to the special difficulties which attach to the number 0 - material for a whole new thread there perhaps!
-
This will be my last post on this topic. I'd like to thank everybody for their input, I really appreciate it. But it's obvious that we are not making progress with regard to my original question about developing a proof that 0.999...=1 by working upwards from the axioms and definitions, rather than working downwards from mid-level theorems. You have all greatly helped me to clarify my own ideas in a number of important ways, but I've always believed in the principle that a thread should be terminated before it wanders off into areas of total irrelevance! I might say that my interest in this question arises from the fact that another equally-simple proposition, namely 1+1=2, can be easily proved by arguing directly from the axioms and definitions, like so: 1 + 1 = 1+ S(0) = S(1 + 0) = S(1) = 2 Where the function S() represents "the successor of". Of course, a great deal of creative insight went into making that argument possible, but the point is, it requires no knowledge of mathematics beyond an understanding of axioms, definitions, and logical operators. That we cannot do the same for 0.999...=1, I assume, is connected with the fact that we cannot define the meaning of 0.999... without a prior definition of what constitutes an infinite set, which of course takes us into a whole alternative Sorry, but I've just revisited this post, and I can't resist sticking in my two cents' worth once more! The idea that an infinite set cannot have finite limiting terms is a common misconception, which arises perhaps from the equally-common misconception that "infinity" can be defined as "a number greater than any natural number". That is an inevitable property of any infinity, you might even say it's a criterion for identifying an infinity, but it's not a definition in the sense required by mathematics. There is an infinite number of different-sized infinities, which the foregoing "definition" would clearly fail to account for. Furthermore, it purports to define "infinity" in terms of the natural numbers, which is a serious logical error. An infinity may terminate in a finite number (eg the set of negative integers which terminates with the number -1), or may begin with a finite number (eg 0, the starting point for the set of the natural numbers) or may both begin and end with finite numbers (eg the set of numbers which are >=0 and <=1). It is easy to prove that this last set is a much bigger infinity than the infinite set of the natural numbers, nevertheless, paradoxically, when you have listed all of the numbers >=0 and <=1, you still have not listed all of the numbers >=0 and <=1. To do that, you need to calculate the power set of the numbers >=0 and <=1, which has the cardinality of 2 to the power of infinity. But only God (or some such being) could do that... Goodnight, everybody, and thank you again for a most rewarding debate.
-
I would answer, if we stipulate positive integers, then the real-number successor of 7 is 8. Just as 8 is the natural-number successor of 7, according to the definitions of arithmetic, so +8 is the successor of +7 in the line of real integers. That is not a matter of proof; it is a matter of definition. On the other hand, if we admit all of the possible numbers in the number plane, then the successor of 7 might be 7+h (h being a more respectable way to write 0.000...1).
-
I suspect you are confusing "number" with "absolute value". The number 1 is a familiar natural number; 0.999... is unintelligible without a coherent definition of what constitutes an infinite set. Isn't there a Nobel Prize for mathematics? That seems unfair.
-
Thanks Studiot, I will try to source that reference. With regard to the assertion that 0.999... has no successor, can you provide further arguments? Cantor's Diagonal Argument, for instance, implies that the successor of 0.999... would be 1. The Diagonal Argument asks us (as a thought-experiment) to imagine a table which lists every fraction between 0 and 1. The clear implication, though as far as I know Cantor doesn't appeal to it directly, is that the first number would be 0.000...1 and the last number would be 0.999... (0.000...1 being what you are left with, if you subtract 0.999... from 1, according to the axioms of arithmetic). it's worth pointing out, perhaps, that the decimal point is itself a red herring... it could occur at any position within the number, or there could be no decimal point at all; the essential problem in mathematical philosophy remains unaltered. So it doesn't matter if the equation is 0.999...=1, or 999...=1{000}... (pls allow for my probably-erroneous symbolism here). In relation to someone else's earlier post - I assume that there can be NO number between 0.999... and 1 because, in the natural number line, there is no number between 9 and 10. 10 is defined as the successor of 9, which rules out the possibility of any intermediate number. It might be possible to stipulate an imaginary number, but I have no idea how such an argument might be set out.
-
Wow. Thank you, everybody, for the courtesy and thoughtfulness of your replies, first of all. I had no idea that my post would provoke this kind of response. This is clearly an issue upon which there are strongly-held opinions! I don't know how this will go down, but, in terms of the axioms and definitions of arithmetic, what number is the successor of 0.999...? This seems to me to be the crux of the issue. Because, of course, the axioms dictate that two numbers cannot have the same successor. So do 0.999... and 1 have the same successor? Nobody has denied that they are different numbers; but arguments purport that they are of equal absolute value; does this mean that they have the same successor? By the way, I believe that 0.000...1 is a number. It's just an infinite set ordered by diminishing magnitude, with the "1" defined as occupying a decimal place which is smaller than any other. Argument: (1) Since the irrational numbers imply an infinite set of decimal places, all numbers imply such a set; including the whole numbers: in which case, every decimal place is presumed to be occupied by 0. (1) So, for example, although 1 is a whole number, after the number 1, there is implied an infinity of decimal places, each of which is presumed to be occupied by a 0. The fact that every decimal place is occupied by a 0 may be considered as a proof that the number we are thinking about is really "1". (2) In set theory, identical members of the set would normally collapse, but, in this case, the 0's do not collapse because they are not the defining property of the set; the defining property is the relative value of each decimal place; the fact that every decimal place is occupied by a 0 is incidental. (3) conclusion: an infinite set of 0's, following a decimal point, with each 0 denoting a decimal place which is 1/10 of its predecessor, is a mathematically respectable example of an infinite set. But for any infinity, infinity+1=infinity. (sorry I don't know how to represent the proper infinity symbol). Given that this particular infinity is ordered by decimal-place magnitude, to an infinite collection of 0's, we can add an extra member There's no problem about that, in principle. And we stipulate that the extra member will be a 1. The only remaining problem is, to ensure that the 1 will always appear to be at the far boundary of the infinity; ie, the number we are talking about will always appear as 0.000...1. We can achieve this by means of a definition. Definitions are perfectly respectable entities in mathematics and, in this case, we will define the 1 as occupying a decimal position which is smaller than any other decimal position. (there will always be a decimal position which is smaller than any other position; if we didn't specify that it must be occupied by a 1, it would be occupied by a 0 anyway). This is a great forum. All of the replies I deeply appreciate; all are thoughtful and intelligent. So different to my experiences elsewhere. Thank you.
-
By the way, in terms of their properties and functionality, so-called "imaginary" numbers are every bit as real as "real" numbers. They are indispensible in engineering, science, architecture, quantum mechanics, and indeed pure mathematics. The only reason we call them Imaginary is because we don't know how to represent them in numerical form. That's one of the philosophical defects in our mathematical discourse.
-
As we all know, there exist many arguments that 0.999...=1. All of the proofs I have seen are based upon arguments drawn from mid-level mathematics. As a philosopher, this bothers me because, when proofs of a basic-level proposition depend upon mid-level arguments, there is the obvious danger that these arguments, if analysed and deconstructed in sufficient detail, will be found to be question-begging; that's to say, at some point, we may find that they have assumed the conclusion as a premise. So my question is: is there any way by which we can mount an argument that 0.999...=1 by arguing "upwards" from the axioms and definitions of arithmetic, rather than "downwards" from mid-level mathematics? The elephant in the sitting room, it appears to me, is the axiomatic principle that two numbers cannot have the same successor; the numbers in question being, of course, 9 and 10.
-
Either that, or the most exquisite cup of tea you ever tasted...
-
Let it be given that your argument is valid. How do you reconcile it with the axioms and definitions of arithmetic, which stipulate that two numbers cannot have the same successor?
-
In a small way, this dialogue highlights a prevalent issue in mathematics. Many mathematicians are very talented and fluent in the received wisdom of their discipline; but they lack a proper understanding of its philosophical issues. In particular, they often do not fully understand the concept of 'proof'. For example, certain middle-level theorems in mathematics require that the number 0 be treated as an even number. Some mathematicians consider this to be a proof that 0 is in fact an even number, without any further reference to number theory or the axioms of arithmetic. No competent second-year philosophy student would fall into such an error; why then is it so prevalent in mathematics?
- 16 replies
-
-2
-
No reply as yet. Conway, is this a question in set theory? Please try to give us any kind of a handle on your reasoning, so that we can at least guess how to frame an answer.