wtf Posted February 6, 2019 (edited)

I think the OP is not around, but I read through the paper a couple of times and have some thoughts. There are two things going on in the paper. One, the OP is making the point that there are striking similarities between infinitesimals as they were used in 17th century math and the nilsquare infinitesimals of smooth infinitesimal analysis (SIA). This point of view says that if we went back to the 17th century but knew all about category theory, differential geometry, and SIA, we could easily show them how to logically found their subject. They were close in spirit. Ok. That might well be, and I don't agree or disagree, not really knowing enough about SIA and knowing nothing about Leibniz (being more a Newton fan). So for the sake of discussion I'll grant the OP that point.

But the other thing that's going on is that the OP seems to feel that the history itself supports the idea that they somehow understood this, or that they had a rigorous theory of infinitesimals that was shoved aside by the theory of limits in an act more political than mathematical. That's the second thesis of the paper as I understand it. But the OP presents no historical evidence, none at all, that there was any kind of rigorous theory of infinitesimals floating around at the time. On the contrary, the history is that Newton himself well understood the problem of rigorously defining the limit of the difference quotient. As the 18th century got going, people noticed that the lack of foundational rigor was causing them trouble. They tried to fix the problem. In the first half of the 19th century they got calculus right, and in the second half of the 19th and the first quarter of the 20th, they drilled it all the way down to the empty set and the axioms of ZFC. That is the history as it is written, and there isn't any alternate history that I'm aware of. If there were, I would be most interested to learn about it.
The OP makes a historical claim but doesn't provide any historical evidence. That bothers me. So to sum up:

* From our modern category-theoretic, non-LEM, SIA perspective, all of which is math developed only in recent decades, we can reframe 17th century infinitesimals in modern rigorous terms. I accept that point for the sake of discussion, though I have some doubts and questions.

* But on the historical point, you are just wrong till you show some evidence. The historical record is that the old guys KNEW their theory wasn't rigorous, and that as time went by this caused more and more PROBLEMS, which they eventually SOLVED. They never had a rigorous theory and they never thought they had a rigorous theory. But if they did I'd love the references.

Edited February 6, 2019 by wtf
dasnulium (Author) Posted February 10, 2019 (edited)

"How is LEM or its denial a corollary of nilpotency?" Because an increment which produces a noticeable 'error' (defined relatively) is by definition not infinitesimal, and therefore LEM applies. The 'error' for polynomials is the sum of the higher-power incremental terms. LEM here means non-negligible, not separate in some ideal sense (i.e. axiomatically). That's why I question the justification for LEM in the paper. I can't talk about the background further, because the main point of the paper is to show that limits and original infinitesimals are technically equivalent at least, which is very simple.

As Klein said: "With these elementary presentations there remains always the suspicion that by neglecting successive small quantities we have finally accumulated a noticeable error, even if each is permissible singly. But I do not need to show how we can rule this out, for these questions are so thoroughly elementary that each of you can think them through when you feel so inclined." Elementary Mathematics from an Advanced Standpoint, Felix Klein, 1908, p. 190 (NB that is a paraphrased translation). Very few people ever do feel so inclined, though.

For more of the philosophical background, The Continuous and the Infinitesimal by John L. Bell is the best guide. Over and out!

Edited February 10, 2019 by dasnulium
dasnulium (Author) Posted February 28, 2019 (edited)

To elaborate on "the 'error' for polynomials is the sum of the higher-power incremental terms": when we take the derivative of y = f(x), x is said to be the independent variable. However, differentiation assumes an arbitrary x value and a variable increment, and the latter is the quantity actually varying. If we take the finite difference quotient of a polynomial, varying the increment (for a given x value) results in a gradient that changes with the secant; but if we take the regular derivative of that polynomial we can't vary the increment, because those terms have been neglected (and cancelled). And since an indefinitely small increment implies an indefinitely small secant length*, which is by definition part of the tangent, this gives us the gradient of the tangent.

*By Pythagoras the secant length is proportional to the increment, thus s = h √(1 + y'²), although the equation is not linear, so simply by reducing the increment a smaller secant length can be found. The derivative may of course increase to counteract this, but it would have to eventually become vertical to nullify it, at which point calculus no longer applies.

Note that SIA seems to work better than NSA for this line of reasoning, because taking the standard part dispenses with all incremental terms, even those of the first power, but this would cause the first RHS term to be set to zero. This may indicate that SIA bears a closer resemblance to the true nature of calculus than NSA.

Edited February 28, 2019 by dasnulium
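A quick numerical sketch (my own illustration, not part of the post above) of the point about the finite difference quotient: for a sample polynomial f(x) = x³, the secant gradient (f(x+h) − f(x))/h visibly changes as the increment h varies, but approaches the derivative 3x² as h shrinks. The function and the values chosen are illustrative assumptions.

```python
# Sketch: the secant gradient of a polynomial varies with the increment h,
# but approaches the tangent gradient (the derivative) as h shrinks.
# f(x) = x**3 is an illustrative choice; its derivative is f'(x) = 3*x**2.

def f(x):
    return x**3

def secant_gradient(x, h):
    # Finite difference quotient: gradient of the secant through x and x+h.
    return (f(x + h) - f(x)) / h

x = 2.0
derivative = 3 * x**2  # 12.0

for h in [1.0, 0.1, 0.001]:
    print(h, secant_gradient(x, h))
# For this f, the quotient expands to 3*x**2 + 3*x*h + h**2: the gradient
# depends on h through the higher-power incremental terms 3*x*h + h**2,
# which vanish as h becomes small, leaving the derivative.
```

Varying h (the quantity actually varying, as the post puts it) gives a different secant gradient each time; only when the incremental terms are neglected does the fixed tangent gradient remain.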
studiot Posted February 28, 2019

I don't seem to have had an answer to this post:

Quote (studiot, earlier):
On 1/17/2019 at 3:39 AM, dasnulium said: the only way that that condition can be met is if something is smaller than any value you give
I don't agree with this. Examples from statistics come to mind.

Whilst I liked the style and presentation of your paper, I don't endorse everything in it.

A further comment: one thing you don't seem to have examined is the clear difference between infinitesimals and limits. Limits allow one to step over the process of accounting for an infinity of terms, which you correctly say begs the question of whether the ignored terms add up to something significant, and go directly to an end result, or show that there isn't one. Convergence theory is all about this. I do not know of any equivalent process with infinitesimals, NSA notwithstanding.

On 2/10/2019 at 4:34 AM, dasnulium said: As Klein said "With these elementary presentations there remains always the suspicion that by neglecting successive small quantities we have finally accumulated a noticeable error, even if each is permissible singly.
uncool Posted February 28, 2019

Dasnulium: I don't think you have ever answered this question. In the system you favor, what would the limit of f(x) be in this case?

On 1/14/2019 at 7:56 AM, uncool said: More generally: let's say we have the function f(x) = 0 if x is neither infinitesimal nor 0, and 1 if x is 0 or infinitesimal (in other words, if x is smaller than any rational number). What is the limit as x approaches 0 of f(x)? (More on this after an answer)
studiot Posted March 1, 2019

20 minutes ago, Endy0816 said: Is this applicable to the Lorentz transforms (c)?

In what way? What are you thinking about?
Endy0816 Posted March 1, 2019

3 minutes ago, studiot said: In what way? What are you thinking about?

For time dilation (I think) someone here said something to the effect of c as the limit of it at one point.
studiot Posted March 1, 2019

3 hours ago, Endy0816 said: For time dilation (I think) someone here said something to the effect of c as the limit of it at one point.

Asymptotes can be limits.
wtf Posted March 2, 2019

9 hours ago, Endy0816 said: For time dilation (I think) someone here said something to the effect of c as the limit of it at one point.

Hardly bears on the history of the limit concept and whether smooth infinitesimal analysis was prefigured in the 17th century.
Endy0816 Posted March 2, 2019

8 hours ago, studiot said: Asymptotes can be limits.

Then would they also be infinitesimals?

2 hours ago, wtf said: Hardly bears on the history of the limit concept and whether smooth infinitesimal analysis was prefigured in the 17th century.

Hoping for an application of the research.
studiot Posted March 2, 2019

8 hours ago, wtf said: Hardly bears on the history of the limit concept and whether smooth infinitesimal analysis was prefigured in the 17th century.

I have ordered the two books you listed, thank you for the references. It will be interesting to see what they have to say.

On 2/10/2019 at 4:34 AM, dasnulium said: For more of the philosophical background The Continuous and the Infinitesimal by John L Bell is the best guide. Over and out!

This book is hundreds of £ and out of my means, but I am pursuing a loan copy from our inter-library loans system.

14 hours ago, studiot was asked by Endy0816: Then would they also be infinitesimals?

I rather think it is the other way round: the limiting process can be usefully applied to infinitesimals. However, I think that the OP question can be answered as follows. Limits are the result of the limiting process. Infinitesimals are specially constructed abstract objects, outside the normal number systems, so no, they are not the same. The limiting process has wider applications than differentiation/integration (for instance the relationship to asymptotes), but that is not the subject of the OP, so we should explore this in a new thread if you wish to take it further. There are huge and widespread applications in engineering and theoretical physics (relativity).
taeto Posted March 2, 2019 (edited)

On 1/14/2019 at 2:56 PM, uncool said: let's say we have the function f(x) = 0 if x is neither infinitesimal nor 0, and 1 if x is 0 or infinitesimal (in other words, if x is smaller than any rational number). What is the limit as x approaches 0 of f(x)? (More on this after an answer)

I am curious about the assumption of such a function \(f.\) Does it provably exist in such a theory (presumably using second order logic) in which there are infinitesimals? Or is the existence undecidable? I have little experience with the possibility of undecidable statements in second order theories.

As a motivation for my question: if I formulate the "in other words" condition for an infinitesimal \(x\) as \[ x \neq 0 \mbox{ and } \forall n\in \mathbb{N} \,:\, n\cdot |x| < 1, \] and if we assume that \(\mathbb{N}\) is the usual and not necessarily "standard" version of the natural numbers, then there are models in which the \(n\) can be infinite. And in that event, it intuitively seems a very strong condition on an \(x \neq 0\) to have \(n\cdot |x| < 1.\) I am insufficiently familiar with second order theories to know whether the theory can possibly "see" (express formally) that a natural number \(n\) is actually finite. If so, then maybe the "in other words" condition needs added assumptions, such as the finiteness of \(n.\)

Edited March 2, 2019 by taeto
Endy0816 Posted March 2, 2019

2 hours ago, studiot said: I rather think it is the other way round. The limiting process can be usefully applied to infinitesimals. [...]

Thank you. No, this is good. I know c causes math problems wherever it crops up. It's literally a limit, so not really surprising, but... I'll keep hoping for a math advancement that allows a shift in perspective lol. Someday!
wtf Posted March 2, 2019

9 hours ago, studiot said: This book is hundreds of £ and out of my means, but I am pursuing a loan copy from our inter library loans system.

There's a pdf of Bell online.
uncool Posted March 2, 2019 (edited)

taeto - the function I described does exist; it's not hard to construct using the axioms. In nonstandard analysis (specifically, the version using Internal Set Theory), it isn't standard, and nonstandard analysis defines limits and derivatives for standard functions (as I understand it).

Edited March 2, 2019 by uncool
wtf Posted March 3, 2019

13 hours ago, taeto said: I am curious about the assumption of such a function f. Does it provably exist in such a theory (presumably using second order logic) in which there are infinitesimals? Or is the existence undecidable?

Can you say more about second order logic in this context? My understanding is that nonstandard analysis is an alternative model of the FIRST order theory of the real numbers. If you go to second order logic you can express the completeness property (every nonempty subset of reals bounded above has a least upper bound). And any ordered field containing infinitesimals is necessarily INCOMPLETE. So second order logic would seem to preclude infinitesimals entirely. Not an expert, but would appreciate context.
taeto Posted March 3, 2019 (edited)

10 hours ago, wtf said: Can you say more about second order logic in this context? My understanding is that nonstandard analysis is an alternative model of the FIRST order theory of the real numbers. [...]

Maybe it is only my confusion. Earlier you pointed to the role of hyperreals and the transfer principle. When you begin reading the Wikipedia page on hyperreal numbers, there is no mention of logic until you reach the remarks on the transfer principle, in particular where it states "The transfer principle states that true first order statements about R are also valid in *R." Now, if the theory of *R were itself a first order theory, then I do not understand the need to invoke the transfer principle. I take it to mean that the theory of hyperreals, and by extension that of infinitesimals, which is derived from it, is a second order rather than a first order theory, even though the Wikipedia page does not seem to state it explicitly.

I have to be careful when reading statements about "completeness", because the notion of completeness of a theory, in the sense that every true statement has a proof, is also of relevance in this context, and perhaps even more so than the competing notion of completeness of the real ordered field itself.

Edited March 3, 2019 by taeto
studiot Posted March 3, 2019

14 hours ago, wtf said: There's a pdf of Bell online.

That would be good if I could find it. I looked but could only find extracts in pdf.
taeto Posted March 3, 2019 (edited)

10 hours ago, uncool said: taeto - the function I described does exist; it's not hard to construct using the axioms. In nonstandard analysis (specifically, the version using Internal Set Theory), it isn't standard, and nonstandard analysis defines limits and derivatives for standard functions (as I understand it).

Thank you! And you agree with my attempt at reformulating the predicate "\(x\) is infinitesimal" as \(x\neq 0\) and \(n\cdot |x| < 1\) for all \(n\in \mathbb{N}\)? Is "\(n\) is standard" a predicate for a natural number in this theory? If so, you could also imagine quantifying only over standard natural numbers.

My motivation for asking this: so far as I know, there is no first order theory for integer arithmetic that has only the standard integers as its model and which has only finitely many axioms. Now supposing you can formulate "\(n\) is standard" as a first order predicate, it would seem that you could make a complete first order theory of the standard integers just by adding this single new axiom to Peano's finite list.

Other than that, I appreciate your point about \(f\) not necessarily having a limit since it isn't standard. Do you know whether it has a limit?

Edited March 3, 2019 by taeto
studiot Posted March 3, 2019

How does one collate a double limit with an infinitesimal?
dasnulium (Author) Posted March 5, 2019

Quote: "One thing you don't seem to have examined is the clear difference between infinitesimals and limits." - studiot

The point of the paper is that there's a connection that may have been overlooked, maybe for this reason:

Quote: "But I do not need to show how we can rule this out, for these questions are so thoroughly elementary that each of you can think them through when you feel so inclined." - Klein

He's ruling out the idea that the standard part/nilsquare rule operation is somehow unsafe. A limit in calculus is what you get when the increment of x becomes indefinitely small (i.e. infinitesimal), so I think that they are two ways of looking at the same thing. You also mention statistics, but I know that there have been efforts to found statistics on a non-constructive basis, which I would avoid, so I can't comment on that.

Replying to uncool: the function you describe is explicitly discontinuous, so I would simply assume calculus doesn't apply to it. Bell talks about it on page 5 here: https://pdfs.semanticscholar.org/e226/af69111bcba4aff8318f2b479dd6c3202325.pdf

Clarification to my last comment: the equation s = h √(1 + y'²) applies to the finite difference and is therefore true for any value (in this context 'proportional' is the wrong word for the RHS). Does it apply to infinitesimal increments? I said that "SIA seems to work better than NSA for this", but you can get it to work in NSA too. First write s/h = √(1 + y'²), then transition to the infinitesimal by changing s/h to ds/dh (yielding the well known equation) and taking the standard part of y'. This may be taken to mean that ds/dh becomes 0/0, but it couldn't subsequently be neglected, because it would be indeterminate, not zero.
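A small numerical check (my own illustration, not from the post above) of the secant-length relation for finite increments, reading y' as the finite difference quotient: for the sample choice f(x) = x³ at x = 2, the secant length s = √(h² + (f(x+h) − f(x))²) satisfies s/h = √(1 + q²) exactly for every finite h, where q is the difference quotient; and as h shrinks, q approaches f'(x) = 12, so s/h approaches √(1 + 144).

```python
import math

# Illustrative function: f(x) = x**3, with derivative f'(x) = 3*x**2.
def f(x):
    return x**3

def secant_length(x, h):
    # Length of the secant chord from (x, f(x)) to (x+h, f(x+h)), by Pythagoras.
    return math.sqrt(h**2 + (f(x + h) - f(x))**2)

x = 2.0
for h in [0.5, 0.05, 0.0005]:
    q = (f(x + h) - f(x)) / h          # finite difference quotient
    ratio = secant_length(x, h) / h
    # The identity s/h = sqrt(1 + q**2) holds exactly at every finite h,
    # as the post says; only the value of q depends on h.
    assert math.isclose(ratio, math.sqrt(1 + q**2))
    print(h, ratio)
# As h shrinks, q tends to f'(x) = 12, so s/h tends to sqrt(1 + 144).
```

This is just the finite-difference version of the relation; the post's question of whether, and in what sense, it carries over to infinitesimal increments (ds/dh in NSA or SIA) is not something a float computation can settle.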
wtf Posted March 5, 2019

1 hour ago, dasnulium said: A limit in calculus is what you get when the increment of x becomes indefinitely small (i.e. infinitesimal)

No, that is exactly wrong. A limit is what you get when the increment is ARBITRARILY small. It's always strictly positive but gets as close as you like to zero. I now see clearly the source of your confusion. You don't know what a limit is. You have a freshman calculus understanding at best. If you would take the trouble to learn the actual definition of a limit, you would see that no infinitesimals are involved.
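For reference, the standard definition wtf is appealing to (added here for readers; it is the usual epsilon-delta formulation, not text from the thread). Every quantifier ranges over ordinary real numbers; the increment is arbitrarily small but always strictly positive, and no infinitesimals appear:

```latex
% Epsilon-delta definition of the limit in standard real analysis:
\lim_{x \to a} f(x) = L
\quad\iff\quad
\forall \varepsilon > 0 \;\, \exists \delta > 0 \;\, \forall x \in \mathbb{R} :\;
0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon
```

Note that the condition 0 < |x − a| excludes the point a itself, which is the precise sense in which the increment is "always strictly positive but gets as close as you like to zero".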
uncool Posted March 5, 2019

1 hour ago, dasnulium said: Replying to uncool: the function you describe is explicitly discontinuous so I would simply assume calculus doesn't apply to it.

The statement that the function is discontinuous is calculus. Additionally, this reverses the usual order of definitions: in calculus, continuity is defined in terms of limits, not the other way around. So this begs the question: how do you know the function is discontinuous?
dasnulium (Author) Posted March 6, 2019

wtf: I did think carefully about word choice for the thesis; in particular, I would never use the term 'infinitely small' to describe an infinitesimal. 'Indefinitely small' does not mean smaller than 'every' positive number; it means smaller than any positive number to which you can assign a value (this is similar to Kant's ideas about the infinite, as discussed by Bell). I don't use the word 'arbitrary' because in numerical analysis (unlike in regular calculus) it means something different from 'indefinite' - namely, that the minimum value of the increment may be arbitrary (alternatively, depending on the functions concerned, it may have to meet certain criteria). Note that the term 'indefinitely small' is meaningless for numerical analysis, for obvious reasons.

uncool: Bell's take on the 'blip' function doesn't distinguish between non-zero and zero/infinitesimal, but simply between non-zero and zero - so I don't see the purpose of your question; maybe my answer to wtf would help.