Everything posted by wtf
-
Do you mean in full detail? No, because it would have to include itself and every particle in the universe. Not enough energy in the universe to do that with. But if you mean could we have a really good model of the laws of physics, we already have those. They can simulate the first few seconds of the big bang and the subsequent evolution of the universe. This is old hat. Why do you think AI has anything to do with this? It's humans who build the models. It's important not to get new age-y about all this stuff. Strong AI has been a complete failure and has produced nothing since the idea gained currency in the 1960's. Weak AI of course plays chess and drives cars. Impressive but very specialized problem domains. And whose achievement is it? The computer's? Or the armies of designers and mathematicians and programmers who build the clever little gizmos? The first thing to know about AI is how to separate out the breathless hype from the reality.
-
Sorry about your power loss but the recent pace is fine for me. It might have been me who pulled the plug; I'm just grasping at straws to follow your posts.

FWIW here is a screenshot from Introduction to Differential Geometry by Robbin and Salamon. This is from page 59 of this pdf. https://people.math.ethz.ch/~salamon/PREPRINTS/diffgeo.pdf They use the term transition map exactly as I've used it. But no matter, we can call them something else. It's clear what they are; you're in agreement even if you prefer to use a different name.

Ok. I agree with all your notation so far. As I say, it took me the duration of your power outage for all this to become clear, so feel free to pretend the power's out as I work to absorb subsequent posts. Yes, entirely clear. Perfectly clear. Yes.

Ok, so we are identifying the coordinates with the projection mappings composed on the charts that produce them. Yes, this is clear to me. I take this to mean that [math]\{f^j\}_{j=1}^n[/math] is a set of maps where [math]f^j = \pi_j \varphi_\beta \varphi_\alpha^{-1}[/math], is that right?

Yes very much. Yes, much better. Of course the couple of days I spent working through this in my own mind helped a lot too. Maybe I should leave that remark alone. Let me just say that I sometimes find it productive to work through points of murkiness in your exposition. I'm ready for the next step, and do feel free to take this as slowly as you like. Also if you have any particular text you find helpful, feel free to recommend it. There are so many different books out there.
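To check my own understanding of that composition, here's a toy numerical sketch (my own example, not from your posts, so corrections welcome): take the open right half-plane in [math]\mathbb R^2[/math], let [math]\varphi_\alpha[/math] be the identity (Cartesian) chart and [math]\varphi_\beta[/math] the polar chart, and compute [math]f^j = \pi_j \varphi_\beta \varphi_\alpha^{-1}[/math] explicitly.

[code]
import math

# Chart alpha: Cartesian coordinates on the open right half-plane {x > 0}.
def phi_alpha(p):              # point -> alpha-coordinates
    x, y = p
    return (x, y)

def phi_alpha_inv(coords):     # alpha-coordinates -> point
    return coords

# Chart beta: polar coordinates (r, theta) on the same region.
def phi_beta(p):
    x, y = p
    return (math.hypot(x, y), math.atan2(y, x))

# Transition map tau_{alpha,beta} = phi_beta composed with phi_alpha^{-1}.
def tau_alpha_beta(alpha_coords):
    return phi_beta(phi_alpha_inv(alpha_coords))

# f^j = pi_j o phi_beta o phi_alpha^{-1}: the j-th beta-coordinate as a
# function of the alpha-coordinates (j = 0 or 1 here, Python-style indexing).
def f(j, alpha_coords):
    return tau_alpha_beta(alpha_coords)[j]

m = (1.0, 1.0)                 # a point in the overlap
a = phi_alpha(m)               # its alpha-representation
print(f(0, a), f(1, a))        # r = sqrt(2) ~ 1.414..., theta = pi/4 ~ 0.785...
[/code]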
-
I think I understand what you're saying. In my notation, you are using [math]\beta^i[/math] as both the value of the [math]i[/math]-th coordinate of the [math]\beta[/math]-representation of some point [math]m \in U_\alpha \cap U_\beta[/math]; and also as the function [math]\pi_i \varphi_\beta \varphi_\alpha^{-1}[/math] that maps the [math]\alpha[/math]-representation of some point [math]m[/math] to the [math]i[/math]-th coordinate of the [math]\beta[/math]-representation of [math]m[/math]. That's how I'm understanding this. You're taking the [math]i[/math]-th coordinate to be both the function and the specific value for a given [math]m[/math]. It's a little bit subtle. The REAL NUMBER [math]\beta^i[/math] changes as a function of [math]m[/math]; but the FUNCTION [math]\beta^i[/math] does not. Is that right? I want to make sure I'm nailing down this formalism.

Secondly, I believe you are being a little confusing or inaccurate when you say the transfer maps (without the extra projection at the end) go from [math]U[/math] to [math]U'[/math]. Rather, the transition maps go from [math]\varphi_\alpha(U_\alpha \cap U_\beta)[/math] to [math]\varphi_\beta(U_\alpha \cap U_\beta)[/math] and back. Since the charts are homeomorphisms, so are the transfer maps in both directions.

And I've read ahead on Wiki and a couple of DiffGeo texts I've found, and I see that if the transfer maps are differentiable or smooth then we call the manifold differentiable or smooth. That makes sense. We already know how to do calculus on Euclidean space. So I'm a little confused again ... the charts themselves don't have to be differentiable or smooth as long as the transfer maps (on the restricted domain) are. Is that correct? So for example the charts could have corners outside the areas of overlap? Perhaps you can help me understand that point.
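Thinking out loud about my own question, here's a standard example I've run across (please correct me if I'm garbling it): take [math]M = \mathbb R[/math] with the single chart [math]\varphi(x) = x^3[/math]. That's a homeomorphism onto [math]\mathbb R[/math], and since there's only one chart, the only transition map is [math]\varphi \circ \varphi^{-1} = \mathrm{id}[/math], which is certainly smooth. So this is a perfectly good smooth atlas, even though [math]\varphi^{-1}(y) = y^{1/3}[/math] fails to be differentiable at [math]0[/math] in the ordinary calculus sense. Which seems to confirm that smoothness is a condition on the transition maps only, not on the charts themselves.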
-
[math](a - b)^2 = (b - a)^2[/math]. Your question is like noting that [math](-5)^2 = 25[/math], then asking if we are given [math]25[/math], how do we know if we "started" with [math]5[/math] or [math]-5[/math]. And the answer is that we don't. Squaring loses information. The squaring function maps two different values to the same value, so you can't reliably go backwards.

ps -- There is a philosophical aspect to this point. If we view an equation as a statement that two different-looking expressions point to the same object, then from [math]a = b[/math] we may infer [math]b = a[/math]. Equality is a symmetric relation. However if we regard an equation similarly to a formula in chemistry, a statement that one thing yields another thing via some process, then from [math]a = b[/math] we may not necessarily infer that [math]b = a[/math]. Not all transformations may be reversed. This impacts our daily lives in the form of Internet security. Public key cryptography is based on the fact that computing [math]3 \times 5 = 15[/math] (multiplying) is computationally easy, while going from [math]15[/math] back to [math]3 \times 5[/math] (factoring) is not, once the numbers involved are hundreds of digits long. In your example we have a transformation (squaring) that's not reversible at all, because it loses information. In the multiplication/factoring example we have a transformation that is in principle reversible, but one direction is computationally more expensive than the other. When you say two things are equal, you have to be careful what you mean.
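To make the asymmetry concrete, here's a toy sketch I put together (my own illustration, with a naive trial-division factorer; real public key systems use primes hundreds of digits long, where nothing like this would ever finish):

[code]
# Squaring is not reversible: two different inputs give the same output,
# so from the output alone you can't recover the input.
print((-5) ** 2 == 5 ** 2)        # True -- 25 doesn't remember the sign

# Multiplying two primes is computationally cheap ...
p, q = 65537, 2147483647          # a Fermat prime and a Mersenne prime
n = p * q                         # one multiplication, essentially free

# ... but recovering the factors by naive trial division takes tens of
# thousands of divisions even here, and is utterly hopeless at the
# sizes used in public key cryptography.
def smallest_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

print(smallest_factor(n))         # (65537, 2147483647)
[/code]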
-
I'm replying to your post #27 which said ... I commented on the first half earlier. Now to the rest of it.

First there's a big picture, which is that if we have a manifold [math]M[/math] and a point [math]m \in M[/math], then we may have two (or more) open sets [math]U, U' \subset M[/math] with [math]m \in U \cap U'[/math]. So [math]m[/math] has two different coordinate representations, and we can go up one and down the other to map the coordinate representations to each other.

My notation in what follows is based on this excellent Wiki article, which I've found enlightening. https://en.wikipedia.org/wiki/Atlas_(topology)#Transition_maps The notation is based on this picture. We have two open sets [math]U_\alpha, U_\beta \subset M[/math] with corresponding coordinate maps [math]\varphi_\alpha : U_\alpha \rightarrow \mathbb R^n[/math] and [math]\varphi_\beta : U_\beta \rightarrow \mathbb R^n[/math]. I prefer the alpha/beta notation so I'll work with that. Also, as I understand it the coordinate maps in general are called charts; and the collection of all the charts for all the open sets in the manifold is called an atlas.

If [math]m \in U_\alpha \cap U_\beta[/math] then we have two distinct coordinate representations for [math]m[/math], and we can define a transition map [math]\tau_{\alpha, \beta} : \mathbb R^n \rightarrow \mathbb R^n[/math] by starting with the coordinate representation of [math]m[/math] with respect to [math]U_\alpha[/math], pulling back (is that the correct use of the term?) along [math]\varphi_\alpha^{-1}[/math], then pushing forward (again, is this the correct usage or do pullbacks and pushforwards refer to something else?) along [math]\varphi_\beta[/math]. So we define [math]\tau_{\alpha, \beta} = \varphi_\beta \varphi^{-1}_\alpha[/math]. Likewise we define the transition map going the other way, [math]\tau_{\beta, \alpha} = \varphi_\alpha \varphi^{-1}_\beta[/math]. I found it helpful to work through this before tackling your notation. Now I feel equipped to understand this.

We have [math]m \in U_\alpha \cap U_\beta[/math]. Then I can write [math]\varphi_\alpha(m) = (\alpha^i)[/math] and [math]\varphi_\beta(m) = (\beta^i)[/math], where the index in both cases runs from [math]1[/math] to the [math]n[/math] in [math]\mathbb R^n[/math]. I don't think we talked about the fact that the dimension is the same all over, but that seems to be part of the nature of manifolds.

Question: You notated your ordered n-tuple with set braces rather than tuple-parens. Is this an oversight or a feature? I can't tell. I'll assume you meant parens to indicate an ordered [math]n[/math]-tuple.

Also you referred to the coordinates as functions, and you did that earlier as well. I'm a little unclear on what you mean. Certainly for example [math]\alpha^i = \pi_i \varphi_\alpha(m)[/math], in other words the [math]i[/math]-th coordinate with respect to [math]\varphi_\alpha[/math] is the [math]i[/math]-th projection map composed on [math]\varphi_\alpha[/math]. Are you identifying each coordinate with its respective projection map? That's perfectly sensible. You probably said that earlier.

Aha. This took me a while to sort out. What is [math]f^i[/math]? Putting all this in my notation, we have [math]f^i(\alpha^1, \alpha^2, \dots, \alpha^n) = \beta^i[/math]. So we seem to be starting with the [math]\alpha[/math]-coordinates of [math]m[/math], using the transfer map [math]\tau_{\alpha,\beta}[/math] to get to the corresponding [math]\beta[/math]-coordinates; then taking the [math]i[/math]-th coordinate via the [math]i[/math]-th projection map. Therefore we must have [math]f^i = \pi_i \tau_{\alpha,\beta} = \pi_i \varphi_\beta \varphi_\alpha^{-1}[/math]. As far as I can tell this is the equation that relates your notation to mine. Have I got this right?

I understand that. But note that it's ambiguous. Does [math]f^j[/math] act on the real number [math]x^k[/math]? No, actually it acts on the [math]n[/math]-tuple [math](x^k)_{k=1}^n[/math]. So if we are pedants (and that's a good thing to be when we are first learning a subject!) it is proper to write [math]f^j((x^k)_{k=1}^n)[/math]. Whenever we see [math]f^j(x^k)[/math] we have to remember that we are feeding an [math]n[/math]-tuple into [math]f^j[/math], and not a real number.

This is very interesting. Let me say this back to you. [math]m[/math] has [math]\beta[/math]-coordinates [math](\beta^i)[/math]. And now what I think you are saying is that we are going to identify the coordinate [math]\beta^i[/math] with the map [math]f^i = \pi_i \varphi_\beta \varphi_\alpha^{-1}[/math]. Is that right? We identify each [math]\beta[/math]-coordinate with the process that led us to it! Very self-referential. This is what I understand you to be saying, please confirm.

ADDENDUM: No, I no longer understand this. [math]f^i[/math] doesn't play favorites with some particular [math]\beta^i[/math]. It makes sense to say that [math]f^i[/math] maps [math]\varphi_\alpha(m)[/math] to the [math]i[/math]-th coordinate of [math]\varphi_\beta(m)[/math]. But it's a different [math]f^i[/math] for each [math]m[/math]. I think I am confused. I should sort this out before I post but I'll just throw this out there.

Ok, I had to think about this. Two points:

* Each [math]f^i[/math] is a map from [math]\mathbb R^n[/math] to the reals. It inputs an [math]n[/math]-tuple that is the [math]\alpha[/math]-representation of a point [math]m[/math]; and outputs a single real number, the [math]i[/math]-th coordinate of the [math]\beta[/math]-representation of [math]m[/math]. So the only way to make sense of what you wrote is to say that the collection of all the [math]f^i[/math]'s is the coordinate transformation. Actually what I understood from the Wiki article is that the transfer maps were the coordinate transformations. So maybe I'm confused on this point. Can you clarify?

* There's actually a little swindle going on with [math]\varphi_\alpha[/math]. At first it was a map from [math]U[/math] to some open subset of [math]\mathbb R^n[/math]. But in order to pull back along [math]\varphi_\alpha^{-1}[/math] we have to restrict the domain to the image [math]\varphi_\alpha(U_\alpha \cap U_\beta)[/math]. So we don't really have a map from [math]U[/math] to [math]U'[/math] in your notation; but only from their intersection to itself. Can you clarify?

It doesn't seem to matter at this point what the topological conditions are. It's all I can do to chase the symbols. I think I'm with you so far. Just the questions as indicated. Two key questions:

* How the transition maps can be said to be from [math]U[/math] to [math]U'[/math] when in fact they're only defined from the [math]\alpha[/math] and [math]\beta[/math] images, respectively, of the intersection. I'm just a little puzzled on this.

* Your notation [math]x'^j = x'^j(x^k)[/math]. First I thought I understood it and now I've convinced myself [math]x'^j[/math] depends on [math]m[/math].

* And now that I think about it, the transition maps are from Euclidean space to itself, they're not defined on the manifold. I'm more confused now than when I started working all this out.
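While I wait, here's a little sketch I wrote for myself to see the "only defined on the image of the overlap" issue concretely, using the circle with two angle charts (my own toy example, not your notation; corrections welcome):

[code]
import math

# The manifold: the unit circle S^1, points stored as (x, y) with x^2 + y^2 = 1.

# Chart alpha: angle in (-pi, pi), defined on U_alpha = S^1 minus {(-1, 0)}.
def phi_alpha(p):
    x, y = p
    return math.atan2(y, x)

def phi_alpha_inv(theta):
    return (math.cos(theta), math.sin(theta))

# Chart beta: angle in (0, 2*pi), defined on U_beta = S^1 minus {(1, 0)}.
def phi_beta(p):
    x, y = p
    t = math.atan2(y, x)
    return t if t > 0 else t + 2 * math.pi

# Transition map tau_{alpha,beta} = phi_beta o phi_alpha^{-1}.
# Its domain is NOT all of (-pi, pi): it's only the image of the overlap,
# namely (-pi, 0) union (0, pi), since theta = 0 corresponds to the point
# (1, 0) that the beta chart doesn't cover.
def tau_alpha_beta(theta):
    return phi_beta(phi_alpha_inv(theta))

for theta in (2.0, -2.0):
    print(theta, "->", tau_alpha_beta(theta))
# 2.0 -> 2.0          (upper half: the two angle coordinates agree)
# -2.0 -> 4.283...    (lower half: the beta angle is theta + 2*pi)
[/code]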
-
Are you talking about the transition maps? I'm working through that now. The Wiki page is helpful. https://en.wikipedia.org/wiki/Manifold ps ... Quibbles aside I'm perfectly willing to stipulate that the topological spaces aren't too weird. Wiki says they should be second countable and Hausdorff. Second countable simply means there's a countable base. For example in the reals with the usual topology, every open set is a union of intervals with rational centers and radii. There are only countably many of those so the reals are second countable. Interestingly Wiki allows manifolds to be disconnected. I don't think it makes a huge difference at the moment. I can imagine that the two branches of the graph of 1/x are a reasonable disconnected manifold.
-
You are using the term in a highly nonstandard way and your exposition is unclear on that point. Very much so. I'm interested in why differential geometers and physicists are so interested in using dual spaces in tensor products when the algebraic definition says nothing about them. The current exposition of differential geometry is very interesting to me but not particularly relevant (yet) to tensor products. I hope I may be permitted to post corrections to imprecise statements, in the spirit of trying to understand what you're saying. The indiscrete topology is connected, but each point lies in only one open set, namely the whole space. Perhaps you need the Hausdorff property. Again not being picky for the sake of being picky, but for my own understanding. And frankly to be of assistance with your exposition. If you're murky you're murky, I gotta call it out because others will be confused too. I'm still digesting the rest of your post.
-
I must say that this use of the term Hausdorff is quite different from what I've learned about the term. In my understanding, asking if that property is transitive is meaningless. A topological space is Hausdorff if it separates points by open sets. That is, given any two points [math]x, y[/math], there are open sets [math]U_x, U_y[/math] with [math]x \in U_x[/math], [math]y \in U_y[/math], and [math]U_x \cap U_y = \emptyset[/math]. For example the real numbers with the usual topology are Hausdorff; the reals with the discrete topology are Hausdorff; and the reals with the indiscrete topology are not Hausdorff. I confess I have no idea what it means for the Hausdorff property to be transitive. It's not a binary relation. It's a predicate on topological spaces. Given a topological space, it's either Hausdorff or not. It would be like asking if the property of being a prime number is transitive. It's meaningless to ask the question, because being prime is a predicate (true or false about any individual) and not a binary relation. Given a pair of points, they are either separated by open sets or not. Of course for each pair of points you have to find a new pair of open sets, which is what I think you are saying.

Historical note. Felix Hausdorff was a German mathematician in the first half of the twentieth century. In 1942 he and his family were ordered by the Nazi regime to report to a camp. Rather than comply, Hausdorff and his wife and sister-in-law committed suicide. https://en.wikipedia.org/wiki/Felix_Hausdorff
-
ps -- Let me just say all this back and, being a pedantic type, clarify a couple of fuzzy locutions. Minor expositional murkitude. I'd say this as: For a given set [math]S[/math], various topologies can be put on it. For example if [math]T = \mathcal P(S)[/math] then every subset is open. That's the discrete topology. The discrete topology is nice because every function from it to any space whatsoever is continuous. Or suppose [math]T = \{\emptyset, S\}[/math]. This is called the indiscrete topology. No sets are open except the empty set and the entire space. And the everyday example is the real numbers with the open sets being countable unions of open intervals. [This is usually given as a theorem after the open sets have been defined as sets made up entirely of interior points. But this is a more visual and intuitive characterization of open sets in the reals.]

This is actually interesting to me. Do they use unusual topologies in differential geometry? I thought they generally consider the usual types of open sets. Now I'm trying to think about this. Hopefully this will become more clear. I guess I think of manifolds as basically Euclidean spaces twisted around in various ways. Spheres and tori. But not weird spaces like they consider in general topology.

Well if anything it's too short, since this is elementary material (defined as whatever I understand) and I'm looking forward to getting to the good stuff. But I hope we're not going to have to go through the chain rule and implicit function theorem and all the other machinery of multivariable calculus, which I understand is generally the first thing you have to slog through in this subject. If you can find a way to get to tensors without all that stuff it would be great. Or should I be going back and learning all the multivariable I managed to sleep through when I was supposed to be learning it? I can take a partial derivative ok but I'm pretty weak on multivariable integration, Stokes' theorem and all that.

What you are doing with this symbology is simply putting a coordinate system on the manifold. We started out with some general topological space, and now we can coordinatize regions of it with familiar old [math]\mathbb R^n[/math]. All seems simple conceptually. In fact my understanding is that "A coordinate system can flow across a homeomorphism." I know these things are called charts, but where my knowledge ends is how you deal with the overlaps. If [math]U, U' \subset S[/math], what happens if the [math]h[/math]'s don't agree?

Ok, well if you have the patience, this is pretty much what I know about this. Then at the other end, I do almost grok the universal construction of the tensor product and I am working through calculating it for multilinear forms on the reals. This is by the way a very special case compared to the algebraic viewpoint of looking at modules over a ring. In the latter case you don't even have a basis, let alone a nice finite one. So anything involving finite-dimensional vector spaces can't be too hard.
-
I'm terribly sorry if my exposition had that effect. We're all hopelessly ignorant of so many things. I should have mentioned earlier that the point I'm making, that there are not necessarily any laws of nature, is a minority opinion. I'm pretty sure the average working physicist thinks they are discovering the laws of nature, not just inventing prettier lies. My opinion is the extremist alternative one here. Ok, glad I made my point, even at the expense of some confusion along the way.

When the ancients looked at the night sky they saw hunters and bulls and crabs and all the other constellations. Humans see patterns even when there are no patterns. It's certainly fair to say that the constellations in the sky are artifacts of our minds and not anything that's actually there. Orion the hunter is something we made up and has nothing to do with the universe. But now when we see patterns in the data from an atom smasher, how do we know we're not doing the same thing? Perhaps our physics is nothing more than imaginary patterns in our mind and nothing to do with the actual universe as it is. That's my point. But if you asked a physicist, they'd almost certainly say that they're trying to discover the actual laws of nature. They'd regard my point as profoundly wrong. In fact my understanding is that the average physicist would not regard their work as worth doing if they couldn't think of themselves as trying to discover the true laws of the universe. I think they're wrong about that. But they haven't asked me, actually.

I think that's really the answer to the question. Sometimes probability refers to an inherent quality in the thing we're observing; but usually it has more to do with the state of our ignorance.

It goes back to the old argument about free will versus determinism. Maybe everything that happens in the world, including the fact that I'm writing this post, was determined at the moment of the big bang. Or maybe I have free will. In which case, why do I have free will to choose which words to write, but no free will to decide to fly into the air by flapping my arms? As you asked, why is my body bound by the law of gravity, but my thoughts aren't? Isn't my brain a physical thing? These are good questions and they are matters of philosophy. The philosophers are pretty sharp. It's trendy these days for scientists to mock philosophy, but I think it's the scientists who are actually wrong about the true nature of their enterprise. They're making useful models, not necessarily discovering ultimate truth.

If a philosophically-oriented discussion on an Internet forum actually reached a satisfactory conclusion, that would be a remarkable achievement. No, I was crabby. I did and do apologize for that. Fair enough.
-
So far so good at my end. I haven't forgotten this thread, I've been slowly working my way through the universal property of the tensor product applied to multilinear forms on the reals. I can now visualize the fact that [math]\mathbb R \otimes \dots \otimes \mathbb R = \mathbb R[/math], because you can pull out all the coefficients of the pure tensors so that the tensor product is the 1-dimensional vector space with basis [math]1 \otimes \dots \otimes 1[/math]. This is pretty simple stuff but I had to work at it a while before it became obvious. I'm still curious to understand the significance of the duals in differential geometry and physics so feel free to keep writing, you'll have at least one attentive reader.
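Just to record the little calculation that made it click for me (standard bilinearity bookkeeping; please flag it if I've abused the notation): [math]a \otimes b = (a \cdot 1) \otimes (b \cdot 1) = ab \, (1 \otimes 1)[/math], and more generally [math]a_1 \otimes \cdots \otimes a_k = (a_1 a_2 \cdots a_k) \, (1 \otimes \cdots \otimes 1)[/math]. So every pure tensor, hence every tensor, is a real multiple of [math]1 \otimes \cdots \otimes 1[/math], which is why [math]\mathbb R \otimes \cdots \otimes \mathbb R[/math] is one-dimensional with that single basis element.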
-
If the same person posts twice in a row, the forum software here merges both posts. I've gotten used to that so I wrote a short post in the morning and a longer one later. There's no way for anyone to know when a post's been substantially edited, and there's no way to write two posts in a row. Please accept my apology. You're right, my last post was a little crabby. I should have spent more time to figure out how to dial it back. You're right. I'm interested in the philosophical aspects so I talked about them while saying we shouldn't talk about them. My bad again.

Ok, well here I'll stand by what I wrote. I'm using the phrase "God's machine" to summarize what you are saying. That there's a machine, but it doesn't actually obey the laws that physical machines must obey. It's programmed, but not with any physics that we know. It's programmed with the "true" physics of the world, which for all we know doesn't exist at all. So this is a hypothetical magic box that you are using for your argument, and I'm just calling it the God machine. Because that's what it is. It works by no known physics, it can predict the future perfectly, etc. "God" is a perfectly fair characterization of such a device. Not in a religious sense, but in the sense that it is a hypothetical device that is omnipotent and omniscient, the two qualities most often associated with the religious God. I can call it the Magic Box if you like. It's a hypothetical device that transcends all known physical law and can perfectly predict the future. It's a God box. Am I misunderstanding your hypothetical device? Works by no known laws of physics, has infinite storage and processing capacity, can perfectly predict the future? Isn't that what you are describing?

But you did say that. I made the point that even if you have a computer with infinite processing and storage capacity, if it's programmed with the currently known laws of physics, it will be inaccurate from day one. That's almost exactly what I said earlier. And you replied: Yes, I know. I was considering a hypothetical machine which would know the full and correct laws of physics and mathematics. That's what you wrote. God's law. Not God of any particular religion, but the "true" and "ultimate" laws of nature. This is exactly what you wrote. "Full and correct laws of physics." What did you mean by that if I'm mis-characterizing it?

If I was crabby it was because by engaging your philosophical ideas I saw that I was encouraging them, when I'm trying to get you to separate out these metaphysical speculations ("... the full and correct laws of physics ...") and stick to simpler things that we can analyze. Coin flips for example. But ok, if I was crabby I apologize. But I'm probably still coming off as crabby. You used the phrase "... the full and correct laws of physics ..." and it's a philosophical assumption to even think there is any such thing. And if these ultimate laws are unknown to us and may forever be unknown to us ... who knows them? God is the name I give to that knower of the ultimate answers. Again not in the religious sense, but what else can you call the knower of the unknowable?

I believe it is a philosophical assumption. The idea that there is any "true" probability of anything. Isn't that the question you're asking? Oh no, I love philosophy and idle philosophical speculation. I'm just suggesting it should go in the philosophy section. You've raised several philosophical questions. Are there "ultimate" laws of nature, and if so, can they ever be known to us?
That's the philosophy of physics. Is the human mind subject to physical law? That question's as old as Descartes. It's philosophical. I don't think philosophy is rubbish. I think it's important and interesting. It's just confusing the issue in this thread, because these are very complicated questions. Well you could accept my apology for being crabby earlier. But if you say that you have a magic machine that knows the ultimate laws of physics, I'm entitled to point out that those are philosophical assumptions. And it's not unfair of me at all to call that a God machine. It's a little like the God particle. It's just a name. Well I don't know what to say. You want to talk about a machine that knows the ultimate laws of physics. And you are unhappy that I'm pointing out that you are making philosophical assumptions about the nature of the world. I don't know any other way to say it. I suppose I may be misunderstanding you. On my part I'm trying to get you to separate out the philosophy of the universe from the much simpler case of flipping a coin. I'm calling out your philosophical assumptions in order to get you to see that you are making philosophical assumptions. The idea that there is a "true" physics is a philosophical assumption, it's not a fact. I'm not saying philosophy is bad. I'm just trying to point out where you are making assumptions about the world that are not known or not proven or not even provable in theory. And coming off crabby, I suppose. I'll plead guilty to that and throw myself on the mercy of the court. I thought Strange really had the last word here with the example of flipping a coin and one person knows the result and the other person doesn't. One person's odds are 100% for whatever the flip is. The other person's odds are 50-50 because of their lack of knowledge. Anything we talk about beyond that is confusing the issue IMO. What do you think about that example?
-
Same thing happened to Fermat. He had a marvelous proof but lost it in a browser mishap. What I usually do is hit Quote then copy/paste into a text editor, do my writing, then paste back into the browser. On those occasions when I forget to save my text file, my computer invariably crashes, losing my work. One is constantly fighting entropy.

====

Ok. What makes you think there are any such things? What's the difference between that and just saying God knows everything that will happen? Your example is equally mystic. The correct laws of math? Absolutely no such thing. Some geometries are Euclidean, some not. Some groups are Abelian, some not. Some set theories are well-founded (no set can be a member of itself), others not. Math is agnostic regarding truth. Logical consistency and interestingness are all that matter.

Ok fine, God knows everything. I see no reason to dance around your spiritualism. A machine that doesn't work according to the known laws of machines, programmed by laws that we don't know even exist, that predicts the future. Whatever. Why not abandon all that confusion and just say: "Imagine God knows the future. What's the probability of a coin flip?" Isn't that the sum total of what you're saying here? After this last bit I think it's zero. My only point is that we should talk about the coins and stop talking about God machines. You only want to talk about God machines. I can't respond any more to those kinds of speculations.

Ok, that's clear. Just so that it's also clear that the idea that there actually are true odds is a philosophical assumption. You have no evidence for it beyond a minor amount of local experience near our spacetime coordinates. A goblin? Whatever. Just say you have a God machine and be done with it. But that assumption already incorporates several unspoken assumptions about the universe. For the several-th and last time I say: Forget about the universe and split off a philosophy thread for that. It's irrelevant. I understand you think that the universe is logical and ordered. There's no proof that's true, only local evidence. By local I mean what we've been able to see from earth in the past few thousand years. Very limited sample of observations.

Ok, you have a God machine. That's contradictory. Your thoughts are a product of your brain, which is a physical thing. You can't have it both ways. Why are you insisting on complicating a simple discussion of probability by now claiming that thought is not subject to the laws of physics but everything else is, and in fact not the human-discovered laws of physics but God's physics, for which not a shred of evidence exists? So thoughts aren't physical? You are contradicting your God theory now. This is too far afield for me to comment on anymore. You want to take this to the metaphysical speculation forum. I used the example of thoughts to show that it is POINTLESS to drag the theory of mind into this simple discussion of probability. You've doubled down by going on about the nature of the universe and the nature of mind, which does not according to you even obey God's laws of the universe. You no longer have a coherent line of argument at all. You are all over the map and refuse to engage with the simple coin example. You've lost me totally. You only seem to want to engage in idle speculation about God's secret laws of the universe and how the mind doesn't obey them. That's not even philosophy, it's just late night dorm room chatter. Writing clearly is hard.

In my previous post I did my best to try to get you to focus down on the coin example, and you responded by spinning wildly into God's secret laws and how the mind doesn't obey them. This is not good communication IMO. I'm afraid you've lost me entirely in this response. You haven't got a magic box containing God's law, and if you did it wouldn't prove anything about anything. I'm disappointed that what I wrote to you earlier was so unclear as to have prompted you to go off in these unproductive directions. I was trying to get you to focus on the coin, and forget the God machine and your contradictory ideas about how the universe has absolute laws but the mind doesn't obey them. I feel that I failed totally to make my point earlier and only made things worse. This. You don't need a God machine to discuss this simple and clear idea.
-
Good, then we are communicating. Yes and no. All physical theories are approximate. Even if we have machines with infinite capacity and speed, if they must be programmed with algorithms that represent CURRENTLY KNOWN physics, they must already be in error from the start, and those errors will accumulate to produce incorrect results. Newtonian gravity refined Aristotelian gravity ("All bodies move toward their natural place"); Einstein refined Newton. Some future genius will refine Einstein. We never know the ultimate laws of the universe, only mathematical approximations. Nor do we have any way to know for sure that there are any "ultimate" laws at all. Maybe it's turtles all the way down. So in a controlled experiment we can predict a coin flip with 99% accuracy or maybe even 99.9999999% accuracy but NEVER with 100% accuracy. And if you are trying to predict the evolution of the universe, that uncertainty must necessarily introduce massive error. We can predict coins but not universes. That's what my simplification is about. I propose that we should discuss coins and not universes, at least for a while. Coin flips are a more clarifying case. We know coins, we don't know the universe.

Ok. After reading your post some more I don't think I fully understand your definitions. No, that can't be right. Do you mean that if there are two possibilities, the odds are equal? If I jump off a high place I might fly or I may flop. The odds aren't 50-50. Or are you saying that perhaps I'm only applying what I know about the world? In that case you're right. If I have no prior information at all, I guess everything's 50-50. But we're never that ignorant. I may be missing your point here. What do you mean the "technical odds" are always 1-1? Maybe I'm misunderstanding your definition of technical odds. I thought that referred to the "real" odds based on the laws of the universe.

I think you're asking a simple question about coins and complicating it by making many unproven assumptions about the universe and our ability to model it and compute with those models. The theoretical limitations of measurement and modeling are inherent even in the coin experiment. But with coins the measurement and approximation error is minuscule. When applied to the evolution of the universe, the uncertainty predominates and we can no longer predict. That's why I say, let's just talk about coins. The good question you are asking is whether probability is inherent in the event itself, or if it's merely a measure of our ignorance. In the context of coins, clearly the probability measures our ignorance, since coins are pretty much deterministic. But as far as the universe, who knows. There's a laundry list of issues to be considered. Best not to talk about the universe. That's my opinion. Because to talk about the universe you have to work out your metaphysical beliefs about the universe, whether you think there are any actual laws at all, what the relation of those "real" laws is to our historically contingent theories of physics; and then after all that you still have to solve the problem of calculation error ... you'd simply never get to the end of this conversation and you'd never reach a conclusion.

That's a philosophical assumption about how the world works. That the world is logical and ordered, the way it appears to us. Just noting a hidden philosophical assumption in your worldview. And what are technical odds? I'm not clear on your precise meaning. My understanding is that this is in question, even among physicists. Perhaps the gravitational constant drifts over the years. It's certainly possible. The only observations we have are very nearby in spacetime.

Again, you are confusing the universe with our latest contemporary model of the universe. But if there is anything we know about science, it's that no theory is ever final; and that all theories are approximations. The "real" law of gravity, if there even is such a thing, cannot possibly be our current theory of gravity any more than it was Newton's or Aristotle's theory of gravity. Our current theory is our best approximation to what we observe. No more and no less. So ok, this is in my opinion another complication we should leave aside. Just for the moment. I just want to nail down the coins, develop a common understanding around that.

If the universe is one complication too far, what can we make of human nature and the human mind? Are our thoughts themselves just physical processes, subject to your deterministic prediction machine? Or are they ... what, exactly? If we are not physicalists, then what other choices are there? Ok, this is a deep and wonderful question. But it's not a question about probability theory. The nature of the universe, and the nature of the human mind, must be ruthlessly left aside. At least for the moment. Till we understand the coins. I don't know if human thought is algorithmic, physical, subject to deterministic prediction. It's a great discussion, but one that clouds our conversation here. Coins. Not universes. Not minds. Coins are simple. Universes and minds are way too hard.

You should split off a thread in the philosophy of mind section. But since I can't resist responding ... I will note that if you claim the human mind does NOT obey the laws of physics ... then what kind of mysticism or spiritualism do you believe in? I find both physicalism AND anti-physicalism equally problematic. If we're ONLY machines, that's bad. And if we're not machines ... well, what else is there? Damn good questions I say. Yes yes. But we CAN speak very sensibly about coins. That's my suggestion. Abandon talk about universes and minds. Not because these aren't vital subjects. But because they're not subjects we can really tackle all at once. But the coins, the coins are simple. Coin flips are pretty much deterministic, and the 1/2 probability measures our ignorance. Still I only say "pretty much" deterministic. After all, we really don't know what causes things to happen in the world. Well, I apologize if I've let myself ramble but it's all interesting to me.
-
Just for the moment, let me change the subject to simplify it. If my simplification is off base, let me know. Suppose I have a perfect mechanical prediction machine. Whenever I am about to flip a coin, I input all known information about the angle and position of the coin on my thumb, the physical condition of my thumb, exactly how much strength is in my thumb today, etc. When I do this, I can predict the result of the coin flip with perfect accuracy. But one time, my prediction machine is in the shop. It's not available to me. So when I flip a coin, the odds are 50-50. Not because of the inherent unpredictability of the coin toss; but rather because of my own ignorance.

That's the core question about probability. Does it reflect something inherent? Or is it just a measure of our ignorance? When it comes to coins, it's pretty clear that probability is only measuring our ignorance, because they've done lab experiments where they very carefully control the input variables of the coin flip, and they can predict the flip with better than 50-50 results. However when you bring in things like the evolution of the universe and human thought, you're introducing extraneous variables about which we know very little. My sense is that you're asking about whether coin flips are inherently unpredictable, or whether they're only unpredictable due to our ignorance. Is this in line with your thoughts?
-
Two different issues.

* You can't store the EXACT value of quantities in a finite calculating machine. (Unless you can. Maybe the universe is discrete and finite. Nobody knows.)

* The n-body problem is a problem in physics. We can't solve the differential equations for the motion of even 3 bodies under mutual gravity in closed form, let alone n bodies.

But as I say, if your argument can be done theoretically by imagining that we have a God machine, that's fine. Your logic can proceed from there. The n-body problem goes back to the time of Newton and is still important today. The Wiki article is worth reading. https://en.wikipedia.org/wiki/N-body_problem

ps -- To make this clear: I have no objection to your assuming a hypothetical prediction machine. Just be aware that such a thing is not physically possible as far as we know. Or at least as far as I know. I could be wrong.

pps -- I think in essence you are asking if probability is a measure of the true unknowability of things, or if it's only a measure of human ignorance. For example, they've done lab experiments showing there is nothing at all random about coin flips, once you precisely control the force and angle and direction of the flip mechanically. So I think you are asking a question about the philosophy of probability. Is that right?
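As a tiny illustration of the first bullet (finite machines and rounding), here's a throwaway sketch; nothing deep, just the standard floating point caveat:

[code]
# A finite binary machine can't store 0.1 exactly, and the tiny
# representation error compounds as you keep computing with it.
x = 0.0
for _ in range(10):
    x += 0.1

print(x)          # 0.9999999999999999, not 1.0
print(x == 1.0)   # False

# Over millions of time steps of a physics simulation, errors like this
# (plus truncation error in the numerical method) accumulate, which is
# one reason long-range prediction of chaotic systems fails in practice.
[/code]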
-
Not by any conventional definition of machine. On the other hand, if you're asking if God can predict the future, then that's a question of theology. The Wiki links on the n-body problem, chaos theory, and the stability of the solar system provide basic information on my point. If by machine you mean something that can store a finite amount of information, then it's subject to rounding error, and those errors will add up over time, making prediction impossible. If by machine you mean something that can calculate with infinite amounts of information, that goes beyond the laws of physics and computer science. And EVEN THEN you can't solve the n-body problem. So a machine given total knowledge of the present state of a deterministic system can't predict the future. That's my understanding.

But if you want to imagine a hypothetical future-predicting machine called God, that's perfectly ok. Just realize you've gone beyond science into theology and fiction. Perfectly fine for a thought experiment, if it's helpful. In fact, in computer science they sometimes imagine devices called oracles that can solve unsolvable problems, and then they reason based on those. So if you want to imagine you have a magic prediction machine, that's perfectly ok. Just be aware that you're reasoning from a hypothetical that doesn't actually exist.
-
No, this turns out to be untrue. Even if you know the position and velocity of every particle of a deterministic system, you still can't predict the future. As an example, we still have no idea whether the solar system is stable under Newtonian gravity. The main problems are:

* You can't solve the n-body problem in closed form.

* Rounding errors in the calculations will add up over time.

https://en.wikipedia.org/wiki/Three-body_problem
https://en.wikipedia.org/wiki/Chaos_theory
https://en.wikipedia.org/wiki/Stability_of_the_Solar_System

A highly recommended popular book on the subject is Newton's Clock: Chaos in the Solar System. https://www.amazon.com/Newtons-Clock-Chaos-Solar-System/dp/0716727242
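To give a feel for how fast small errors blow up, here's a quick sketch using the logistic map, a standard toy example from the chaos literature (not the n-body problem itself, just the same phenomenon in one line of arithmetic):

[code]
# Two trajectories of the logistic map x -> 4x(1-x), started a hair apart.
# Deterministic rule, yet the tiny initial difference grows until the
# trajectories are completely uncorrelated -- so any rounding error in the
# initial data eventually ruins the prediction.
x, y = 0.3, 0.3 + 1e-12
for step in range(1, 61):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 10 == 0:
        print(step, round(x, 6), round(y, 6), "diff =", abs(x - y))
[/code]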
-
You could invent a system like that but it wouldn't be a field, since 0 times anything is 0 in a field. The reason is that it's provable from the field axioms.
-
0 times anything is 0. That's provable from the field axioms. So the above can't be true in any field.
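Since this has come up twice, here's the standard one-line argument, just for completeness: distributivity gives [math]0 \cdot a = (0 + 0) \cdot a = 0 \cdot a + 0 \cdot a[/math], and adding the additive inverse of [math]0 \cdot a[/math] to both sides leaves [math]0 = 0 \cdot a[/math]. Only the field axioms (in fact only the ring axioms) are used.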
-
Studiot, you must be making a point I don't understand to double down on this line of argument. I'm mystified to see you laboriously enumerate the field properties in the pursuit of demonstrating a manifestly false understanding of what a field is. Surely you don't think there's any difference between your ABCD field, your squares and triangles, and any other presentation of the unique field with four elements. If someone asks you how many fields there are of order four, and you give any answer other than one, you are missing the entire point of how algebraic structures work. We do not care about multiple representations of the same isomorphism class of objects. If a set of widgets satisfies the field axioms, then it has a 0 and a 1, no matter what you call them. That's why I don't object to Conway telling me that 0 and 1 are objects that contain z1 and z2. It makes no difference. 0 and 1 are defined by their behavior, not their representation. Perhaps this point is not brought out clearly enough in elementary presentations.

Isomorphism isn't a map between two different things. It's a statement that we have two different representations for the same thing. It's really no different than the distinction between 1/2 and 2/4. Two different expressions for the exact same thing. Perhaps you'll give this some thought. To pick a simpler example, because to be fair the field of order 4 is a bit of a counterintuitive object, how many vector spaces are there of dimension two (over a given field of scalars)? I hope you'll agree that there is only one, even though it can appear in different guises. In your latest picture, the square is 0 and the triangle is 1. Every field has a 0 and a 1. It's part of the definition.

That's an interesting remark. I see nothing about being "numeric," whatever that means, in the definition of a field. If you insist, I could do the same trick with the real numbers (replacing each real number by one of uncountably many symbols) and then I could say that the real numbers aren't numeric either. That's just wordplay. The real numbers are the unique complete ordered field. Any representation will do.
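For anyone following along, here's a little sketch that builds the addition and multiplication tables of the field of order four, representing elements as polynomials over GF(2) reduced mod [math]x^2 + x + 1[/math]. This is one standard construction; relabel the four elements however you like (A, B, C, D, squares and triangles, whatever) and you get every other presentation of the same field.

[code]
# Elements 0, 1, 2, 3 stand for the polynomials 0, 1, x, x+1 over GF(2)
# (bit 0 = constant term, bit 1 = coefficient of x).

def add(a, b):
    return a ^ b                  # adding polynomials mod 2 is XOR

def mul(a, b):
    p = 0
    if b & 1:
        p ^= a                    # contribution of b's constant term
    if b & 2:
        p ^= a << 1               # contribution of b's x term
    if p & 4:                     # a degree-2 term appeared:
        p ^= 0b111                # reduce using x^2 = x + 1
    return p

labels = ['0', '1', 'w', 'w+1']   # w is a root of x^2 + x + 1

print("addition:")
for a in range(4):
    print([labels[add(a, b)] for b in range(4)])

print("multiplication:")
for a in range(4):
    print([labels[mul(a, b)] for b in range(4)])

# Note 1 + 1 = 0 (characteristic 2), w * w = w + 1, and w * (w+1) = 1.
[/code]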
-
Ok I can live with that. What is the purpose of characterizing 1 and 0 as being sets that contain z1 and z2? What happens next?
-
Well, if z1 and z2 are elements of the field, then you are required to say what their sum and product are with each element of the field, including themselves and each other. That's because a field is closed under addition and multiplication. In other words, if x and y are any two elements of a field, not necessarily distinct, then you have to define x + y and xy. If you have elements that can't be added or multiplied then you might have some algebraic structure, but it wouldn't be a field. https://en.wikipedia.org/wiki/Field_(mathematics)

I don't know if this would be helpful to you, but for example suppose we define two formal symbols x and y, and we want to consider the collection of all formal combinations ax + by where a and b are real numbers. Then the structure we get would be the vector space of dimension two, in other words the usual Cartesian plane. Perhaps this is the kind of structure you have in mind. You have your formal gadgets z1 and z2, whatever they are, and you want to be able to multiply them by scalars. So you want to look at vector spaces or their generalization, modules over a ring. https://en.wikipedia.org/wiki/Module_(mathematics)

But if you don't tell me what z1 times z2 is, then you haven't got a field, by definition.
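Here's a throwaway sketch of that "formal symbols with scalar coefficients" idea, representing an element ax + by simply as the pair (a, b). It's just the plane wearing a costume, which is the point; and note that nothing in it defines x times y, which is exactly what a field would require.

[code]
# A formal combination a*x + b*y of the symbols x and y, stored as (a, b).
# We can add these and scale them (vector space operations) without ever
# saying what x and y "are" -- but nothing here defines x times y,
# which is exactly what's missing if you want a field.

def vadd(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(c, u):
    return (c * u[0], c * u[1])

x = (1, 0)            # the formal symbol x
y = (0, 1)            # the formal symbol y

u = vadd(scale(3, x), scale(-2, y))   # 3x - 2y
v = vadd(u, y)                        # 3x - y
print(u, v)                           # (3, -2) (3, -1)
[/code]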
-
Every field contains 0 and 1 and they are distinct. A = 0 and B = 1 in your example, which can be read directly off your plus and times tables. As a check you see that B + B = A, or 1 + 1 = 0. That's because the field of order 4 has characteristic 2. Surely you understand that changing the names of the elements of an algebraic structure makes no difference at all.
-
Can you give examples for familiar fields such as the rationals, the reals, and the integers mod 5?