Everything posted by Eise
-
Well, if you suppose that after every interaction between particles they are entangled, then yes, of course. But then the history of those particles does not end there: they will interact with many more particles, 'diluting the entanglement' of this 'first' interaction beyond recognition. And that makes superdeterminism unacceptable for me. How, then, could the measurements in entanglement experiments work together in such a perfect way that it suggests local realism is invalid? Due to all those interactions, all parts of the experiment, and the experimenters themselves, consist of particles that can have wildly different histories, and so are, FAPP, random.
-
OK, I am trying to read (and understand) Superdeterminism: A Guide for the Perplexed, by Sabine Hossenfelder. So far I do not find it convincing, but maybe some of you (Genady, Joigus, Markus?) would like to read, comment and discuss it here? Here I already have a problem: Bold by me. Bell's inequalities are based on:
- locality
- realism
- statistical independence
Every classical theory should 'obey' Bell's inequality. QM does not, so at least one of these suppositions must be dropped for QM. Superdeterminism would drop statistical independence. But at the same time Hossenfelder gives up on locality, and adds 'this is just how nature is'. That sounds like giving up on science. I also found this blog post by Scott Aaronson, which argues ferociously against Hossenfelder's position. A lot of reactions to it.
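To make the role of those three assumptions concrete, here is a toy Monte Carlo sketch (my own construction, not from Hossenfelder's paper): a deterministic local-hidden-variable model that respects statistical independence cannot exceed |S| = 2 in the CHSH combination, while QM predicts 2√2 ≈ 2.83 for the standard settings. The particular outcome rule below is a hypothetical example.

```python
import math
import random

def lhv_outcome(setting, lam):
    # Hypothetical local deterministic rule: each photon answers +/-1
    # depending only on its own polariser setting and the shared variable lam.
    return 1 if math.cos(2 * (setting - lam)) >= 0 else -1

def correlation(a, b, rng, n=100_000):
    total = 0
    for _ in range(n):
        # Statistical independence: lam is drawn without any reference to a, b.
        lam = rng.uniform(0, math.pi)
        total += lhv_outcome(a, lam) * lhv_outcome(b, lam)
    return total / n

rng = random.Random(0)
a, a2 = 0.0, math.pi / 4               # Alice's two settings
b, b2 = math.pi / 8, 3 * math.pi / 8   # Bob's two settings

S = (correlation(a, b, rng) - correlation(a, b2, rng)
     + correlation(a2, b, rng) + correlation(a2, b2, rng))
print(f"local hidden variables: S = {S:.3f} (CHSH bound: |S| <= 2)")
print(f"quantum mechanics:      S = {2 * math.sqrt(2):.3f}")
```

This particular model happens to saturate the classical bound S ≈ 2 (up to sampling noise), but no choice of local outcome rule gets it past 2: that is exactly Bell's theorem.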
-
Yes, ChatGPT is not so bad... I think most postings of @amaila3 are just copies from ChatGPT.
-
Why does fine-tuning for life suggest a multiverse?
Eise replied to Boltzmannbrain's topic in Astronomy and Cosmology
Well, literally not refuted, but made superfluous: one of the reasons to believe there are multiple universes would drop away. Hmm... But what is 'appropriate' might differ in different universes, no? With that, the conversion factors would be different, so I would say your 'just' is not well placed here. This is the only number of dimensions for which the number of independent rotations equals the number of independent boosts, for example. That is not a cause. But I think TheVat's question is not answerable. -
Why does fine-tuning for life suggest a multiverse?
Eise replied to Boltzmannbrain's topic in Astronomy and Cosmology
It is not. Nobody knows whether e.g. the constants of nature can be different. So without any viable theory from which a 'chance distribution' of their values follows, it is empty speculation. But it serves as a great escape route for string theory. Apparently, the number of different ways in which the 6 space dimensions we cannot observe can be curled up so small that we do not notice them amounts to about 10^500. Put this together with the idea of eternal inflation, which endlessly spawns bubble universes, every bubble with its own set of constants (or even laws) of nature, and the probability of a universe that allows for complex chemistry, and so for life, will be close to one. And of course we find we are living in a universe that allows for that: we are here. In this way we do not need an explanation for the perceived 'fine-tuned' universe. However, the multiple universes are supposed to be causally separated from each other. So there is no way this idea can be empirically tested, and the question becomes whether an idea may be called scientific if it cannot be tested even in principle. -
Good one, and a great song! "Philosophy is useless, theology is worse"* *Quoting does not mean (total) agreement
-
Yes, very good point +1 I agree too. I had another picture, instead of proving Euclid's fifth axiom: Plato's cave. But instead of seeing projections of eternal ideas, we get a glimpse of a dynamic reality that lies beneath our observations in space and time. You, @Markus Hanke, amplified this idea when you explained the singlet state in the thread-we-do-not-mention: the distance (or spacetime interval) between the measurements simply does not appear in the state of an entangled pair. So what we observe is the projection of a 'beneath-quantum-physics reality' onto spacetime. My view is more or less Kantian: it might be that we are encountering the 'Ding an sich' (thing-in-itself), meaning we cannot look further because of our limitation to observing events in space and time. Space and time may not be the fixed categories Kant originally thought, but our observations will still always be in spacetime. We may probe as much as we want beyond our limited cognitive capabilities, and maybe we will discover more 'EPR-Bell-Kochen-Specker-Clauser-Zeilinger' (in)equalities. But probably these will just astonish us even more; maybe they will exclude one or another interpretation of QM, but not give a model of reality as it is under the hood. We are touching the limit of empirical science here: it is, so to speak, the 'micro-equivalent' of the observable universe. But not because of a practical limit: because of our cognitive limitations.
-
But that would be plain vanilla determinism. Causal determinism is the default assumption of natural science, and QM seems to be the exception. But generally the assumption holds very well: again and again we find laws of nature that show how events are causally connected to each other. So for me the 'proof' that determinism is a correct assumption is that we can formulate how events are related, not just that they are related. Showing that (space-like separated) measurements of entangled particles could in principle be explained deterministically is far from formulating how this happens.

In other words, to be convincing, it should be shown that e.g. very different ways of switching the direction of the polarisers (a researcher manually turns them; a mechanism turns them, based on a pseudo-random generator, or on a quantum randomness generator, or on the radio noise of two remote quasars; or using two pairs of polarisers with an optical switch in front of them, etc.) all lead to the same correlations between the measurements. I have no problem with the idea that everything is determined. But I have a problem with the claim that such different ways of 'choosing' the polariser orientations always result in the same correlations. What kind of (hidden?) laws of nature could manage that?

In this article, the author does not get very far, in my opinion. Louis Vervoort, Bell's Theorem: Two Neglected Solutions: So plain vanilla determinism would do the trick? Simple causal and local determinism? [9] is 't Hooft's book on the cellular automaton solution. In bold: what? The author even gives a (very) short outline of Spinoza to make this point, but Spinoza is talking about plain vanilla determinism. My position is that determinism is OK (with QM as the exception), but pre-determinism is not. Superdeterminism smells strongly of pre-determinism. And yes, I tried to read 't Hooft's book, but as I expected, the technicalities are above my head.
However, my impression is that there is not even an 'Ansatz' of a theory that could explain how such differently determined pathways to orient the polarisers all lead to the same result. Showing that (some interpretations of) QM allow for this is wholly different from hypothesising a mechanism for it (one that could be tested?).
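The point about different ways of choosing the settings can be illustrated with a small simulation (my own sketch, not from the post or the literature): if the settings are statistically independent of the source, the quantum correlation E(a, b) = cos 2(a − b) comes out the same no matter how the settings are chosen. The three setting 'choosers' below are toys standing in for a dial-turning technician, a pseudo-random generator, and 'cosmic' bits.

```python
import math
import random
import hashlib

SETTINGS = [0.0, math.pi / 8, math.pi / 4]  # a few polariser angles

def quantum_outcomes(a, b, rng):
    # QM statistics for the photon Bell state: P(outcomes agree) = cos^2(a - b),
    # hence E(a, b) = 2*P(agree) - 1 = cos(2*(a - b)).
    agree = rng.random() < math.cos(a - b) ** 2
    x = rng.choice((1, -1))
    return x, x if agree else -x

def estimate(chooser, n=60_000):
    rng = random.Random(7)
    sums, counts = {}, {}
    for i in range(n):
        a, b = chooser(i)
        x, y = quantum_outcomes(a, b, rng)
        sums[(a, b)] = sums.get((a, b), 0) + x * y
        counts[(a, b)] = counts.get((a, b), 0) + 1
    return {k: sums[k] / counts[k] for k in sums}

# Three very different 'free choice' mechanisms for the settings:
def cyclic(i):        # a technician turning the dials in a fixed pattern
    return SETTINGS[i % 3], SETTINGS[(i // 3) % 3]

def pseudo(i, r=random.Random(42)):   # a pseudo-random generator
    return r.choice(SETTINGS), r.choice(SETTINGS)

def hashed(i):        # 'cosmic' bits, mimicked here by hashing the trial index
    h = hashlib.sha256(str(i).encode()).digest()
    return SETTINGS[h[0] % 3], SETTINGS[h[1] % 3]

for name, chooser in [("cyclic", cyclic), ("pseudo", pseudo), ("hashed", hashed)]:
    est = estimate(chooser)
    worst = max(abs(est[(a, b)] - math.cos(2 * (a - b))) for a, b in est)
    print(f"{name}: max deviation from cos(2(a-b)) = {worst:.3f}")
```

A superdeterministic model would have to arrange that all three mechanisms conspire with the source to reproduce exactly these correlations; the simulation just takes QM at face value and shows what any such model must match.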
-
I am not convinced (yet?) that these are viable models. It almost seems that such models are 'Lost in Math'. It is almost literally about the details: in a Bell-like experiment there are so many small details that would have to be perfectly orchestrated, again and again, consistently, if it is to mimic non-local interactions. But of course it may be that I am 'lost in math' in another sense: I, unfortunately, do not understand the math deeply enough, so I should keep my mouth shut... which of course I mostly have to do. As a non-specialist, I have to wait until the discussions among the experts reach an empirically supported consensus. But I will keep on reading 't Hooft's book (at least the first part...). But if you (or @Genady) are able to give a less mathematical taste of how these 'superdeterministic' interpretations can explain how the Big Bang determines the details of Bell-like experiments, in such a way that we get consistent results, I would be very glad. E.g., as a comparison, consider deterministic chaos: even if the world were completely deterministic, we lose all predictability after some time. And at the same time I am asked to believe that superdeterminism is able to produce exactly the correlations we see in complicated experiments, after billions of years? That is, in my eyes, a heavy burden for the 'super' in 'superdeterminism'. I can imagine that. But, as written above, I cannot imagine how this explains Bell-like experiments. Lack of imagination? Yeah, well, Bohm claimed that, and the Bell inequalities show that with local hidden variables one cannot reproduce entanglement. Von Neumann apparently had a 'proof' that no hidden-variable classical theory (local or non-local) can reproduce the results of QM, and Bell was astonished that de Broglie/Bohm came up with a working model. He discovered that the von Neumann proof was wrong, and in the end he came up with his inequalities, which still allow for non-local hidden-variable interpretations of QM.
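The deterministic-chaos comparison can be made concrete with the textbook logistic map (a standard example, not tied to any superdeterministic model): the dynamics are strictly deterministic, yet a difference of one part in 10^12 in the initial condition is amplified until prediction is worthless.

```python
def logistic(x0, steps, r=4.0):
    """Iterate the chaotic logistic map x -> r*x*(1-x)."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Determinism: same start, same law => exactly the same future state.
print(logistic(0.3, 60) == logistic(0.3, 60))

# Chaos: a start differing by 10^-12 is amplified (roughly doubling each
# step) until the two trajectories are completely decorrelated.
gap = abs(logistic(0.3, 60) - logistic(0.3 + 1e-12, 60))
print(f"gap after 60 steps: {gap:.3f}")
```

This is the tension in the post: superdeterminism would need the opposite of this behaviour, with correlations surviving billions of years of exactly such chaotic evolution.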
-
I will try to look into it; I hope it is not over my head. Of course I started with ChatGPT... I will look into 't Hooft's book, but whether I will get the essence... I don't know. One question I already have: do these 'cells' live in our normal spacetime, or in some abstract (mathematical) space, or even 'on the edge of the universe' (thinking about black holes and entropy)?
-
I read a lot, and never saw even an Ansatz for a 'mechanism'. But if you know more, I would be glad to get some (serious) references.
-
Modern determinism is based on the idea that if you know the initial conditions of a (closed) system, and the laws of nature that govern the system, every future state of the system is fixed. But you need both. Without laws of nature, determinism becomes an unscientific world view, a belief. But for superdeterminism (which I assume must be 'more than determinism') there is not even the beginning of an idea about how events billions of light years away, and billions of years ago, determine all the necessary elements of a Bell-like experiment, including the experimenter. And despite the billions of (light) years, superdeterminism is a local theory? So superdeterminism reduces, for me, to a belief. Nothing scientific. I like to reflect on the history of Bell-like experiments:
1. Bell proves his inequality, and it is immediately clear that QM does not obey it.
2. Feynman is so sure that QM is correct that he throws Clauser out of his office when Clauser proposes his experiment. In the end, one can see it this way: if no violation of Bell had been measured in Clauser's experiment, QM would have been wrong.
3. Physicists notice that the measurements in Clauser's experiment are time-like separated, so it is possible that some influence goes from one measurement to the one on the other side. However, not a single physicist gives a hypothesis for how the orientation of a polariser on one side can affect the measurement on the other side.
4. Aspect closes this loophole by making the measurements space-like separated. QM still turns out to be right.
5. ... and then still more experiments to close other loopholes... QM still stands.
6. Zeilinger makes the orientation of the polarisers dependent on the radio noise of two quasars, billions of (light) years away (and ago). No difference. QM stands.
So what do we have all this time? QM, as it is, makes the correct predictions in every experiment above, without any correction necessary. For none of the loopholes is a plausible theory ever given, only vague hints. I consider superdeterminism to be such a loophole: only an extremely hypothetical possibility, with not a single proposal for how such a possibility would physically work, and, by 6. above, extremely unlikely.
-
Yep, in the early fifties, in a conversation with Abraham Pais. In its present state it is definitely non-local. This is the point. Copenhagen (if you omit the unclearness of the 'Heisenberg cut' between quantum and classical physics) is a kind of minimal interpretation of QM. (My shortest description of the CI: "the maths works, but we will never understand why".) This includes the correlation between remote measurements of entangled particles. The most natural way to extend the CI would then be the introduction of local hidden variables: but the Bell inequalities show that this does not work. But non-local hidden variables could do the trick, as exemplified by the de Broglie-Bohm interpretation. In fact, it was this interpretation that triggered Bell to think up his inequalities. But together with EPR (if Alice's polariser is vertical, then she knows that at Bob's side, if he also has his polariser vertically positioned, 100% of the photons get through), it is clear that locality is excluded in principle by QM, in whatever interpretation. The dBBI takes this at face value and is non-local from the beginning. And that is the reason Einstein did not like it. But we all defended, in the thread-we-do-not-mention-here, that QM is local. And I still believe in the 'relativity argument', of which you made that clear and funny drawing. If this argument really is valid, then, at least for the moment, we have no interpretation of QM that works in all situations.
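The EPR case quoted in parentheses is just the standard QM prediction for the photon Bell state: once Alice's photon has passed her polariser at angle a, Bob's photon passes at angle b with probability cos²(a − b). A tiny illustration (my own, with a hypothetical function name):

```python
import math

def p_bob_given_alice(a_deg, b_deg):
    # QM prediction for the state (|HH> + |VV>)/sqrt(2): given that Alice's
    # photon passed at angle a, Bob's passes at angle b with prob. cos^2(a - b).
    return math.cos(math.radians(a_deg - b_deg)) ** 2

print(p_bob_given_alice(0, 0))    # parallel polarisers: 1.0, EPR's "100%"
print(p_bob_given_alice(0, 45))   # 45 degrees apart: ~0.5, no information
print(p_bob_given_alice(0, 90))   # orthogonal: effectively 0.0
```

The perfect correlation at equal angles, holding at any distance, is exactly what makes local hidden variables look so natural at first, and what Bell then ruled out.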
-
I still think it is absurd: think about all the atoms involved in the experiment. The laser, the source of the entangled photons, the polarisers, their randomisers, the detectors, and last but not least the experimenters are all built up of atoms that might have originated from wildly different events in space and time. And then all these come together in an experiment, and their behaviour is coordinated exactly so that the experiments consistently have the results we observe? And given the present state of knowledge, we have one theory that is ontologically stochastic (QM), and we also have chaotic determinism, and then they come together exactly coordinated in every Bell-like experiment we do? The world is small... Look at the acknowledgements: Sabine Hossenfelder (who made several videos in which she defends superdeterminism, and even sang a song together with Tim Palmer), Jean Bricmont, whom I mentioned in the OP, and Tim Maudlin, by whom I am now reading a book about QM and relativity (because it was mentioned in Bricmont's book). Unfortunately the article is too difficult for me to evaluate whether Tim Palmer has a point or not. Did you read it? If so, what do you think? This seems to be a misunderstanding. I read in many books that Einstein could probably have had peace of mind with the probabilistic character of QM, but not with non-locality. This seems to go back as far as the Solvay conference of 1927. Einstein himself was not quite happy with the EPR article: it was apparently mainly written by Podolsky. Einstein stressed the non-locality, and therefore suggested there should be local hidden variables; Podolsky apparently stressed the non-deterministic character of QM, and on that basis suggested that there must be hidden variables. Exactly. And then causally connected in exactly the right way, in every Bell-like experiment, to give the results we get. Would that mean, if such a formulation were successful, that de Broglie-Bohm is not non-local anymore?
-
I would suggest making a drawing of two masses, where m2 = 5.972 × 10^24 kg and m1 = 1 kg, and a second experiment where m1 = 2 kg. Do you think one would be able to measure the difference in the acceleration of m1? Maybe instead of drawing, you should do the calculation...
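For completeness, a quick back-of-the-envelope version of that calculation (my own sketch, assuming a drop near Earth's surface, so the separation r is Earth's radius): the acceleration of m1 is G·m2/r² and contains no m1 at all; only the tiny relative acceleration depends on m1.

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
m2 = 5.972e24   # Earth's mass, kg (the m2 from the post)
r = 6.371e6     # assumed separation: Earth's radius, m

# Acceleration of m1 toward m2: a = G*m2/r^2 -- m1 drops out entirely.
a = G * m2 / r**2
print(f"a = {a:.2f} m/s^2, identical for m1 = 1 kg and m1 = 2 kg")

# Only the *relative* acceleration G*(m2 + m1)/r^2 depends on m1. The
# difference between m1 = 2 kg and m1 = 1 kg, computed symbolically
# (adding 1 kg to 5.972e24 kg is lost in floating point):
diff = G * (2 - 1) / r**2
print(f"difference: {diff:.1e} m/s^2")   # of order 10^-24: hopelessly unmeasurable
```

Both test masses fall at about 9.82 m/s²; the m1-dependent correction is some twenty-five orders of magnitude below that.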
-
I don't think this is a feasible solution. If one thinks about it, it would mean that nature is 'fooling us'. AFAIK, Zeilinger did an experiment where the choice of the polarisations in a Bell-like experiment was determined by the radio noise of two quasars, in more or less opposite directions, billions of light years away. To suppose that this noise is coordinated so precisely that it mimics a bigger correlation than is classically possible is too big a stretch for me. I have problems with all 'ways out': de Broglie-Bohm (which is Bricmont's favourite), many worlds, and all the others. Both of the first two turn the wave function into a physical, causal agent (it forces (OK, they say 'guides', but what is the difference?) particles onto their determined paths; it splits the world, realising all the possible outcomes). But in measurements we only ever register particles arriving at some place. The wave function never physically appears in single measurements, so how can we assign it causal impact on our measurements?
-
After a now-closed thread, the conclusion seemed to be that, in having to give up on 'local realism', the clear tendency was that we have to give up the 'realism' part. I was heavily involved in that thread, and based on many 'quantum authorities' and the physicists in the thread, I also defended that. But as I keep reading different books on quantum mechanics, I have also found other, well-argued positions defending non-locality, e.g. some very clear exposés by Jean Bricmont. In Quantum Sense and Nonsense, he writes: This analysis fits e.g. very well with the video presented in the closed thread, in which it is clearly demonstrated that the CHSH inequality is based only on 'locality' and 'realism'. As she leaves out EPR, just as Bricmont says, she does not come to the conclusion that QM is non-local. Bricmont mentions several physicists by name who, in his eyes, do not really understand the consequences of Bell and EPR, e.g.... Gell-Mann. Now, of course, I still believe strongly that the argument from relativity, that the order of the measurements done by Alice and Bob is observer-dependent, shows that it is not possible that one measurement determines the other. Funnily enough, I first read the more technical version of Bricmont's book, titled Making Sense of Quantum Mechanics, and he never mentions this; I was already thinking of emailing him to ask what he thinks about that argument. But in Quantum Sense and Nonsense he does discuss that argument: So in a way he seems to do the same as what he blames the majority of physicists for: closing their eyes to (some of) the fundamental problems of understanding QM. Personally, I think the relativity argument shows that there is no causal or determining relationship, or maybe better, no directional relationship, between Alice's and Bob's measurements. So what are we left with?
-
Was Nietzsche talking about the 2nd coming of Jesus?
Eise replied to dimreepr's topic in General Philosophy
No, no, maybe I should have said that I couldn't have formulated it better than you did. My study was already a long time ago, so my concrete (historical) knowledge of philosophers has slowly diminished. And Nietzsche was not my specialty. But what one doesn't lose so fast is, if one may call it that, a philosophical mentality. I think it is easy to distinguish Nietzsche from his sister's deformation of his philosophy: his sister identified 'Aryans' (Germanics) with the Übermensch. And Nietzsche's 'Ansatz' was individualistic, not collectivist. -
Was Nietzsche talking about the 2nd coming of Jesus?
Eise replied to dimreepr's topic in General Philosophy
Wow, you know your Nietzsche! +1. What is missing in your answer, in my opinion (especially for a 'Biology Expert'), is that Nietzsche was influenced by Darwinism. He saw mankind as a phase between the beasts and the Übermensch. But it seems he did not see this as something that will just happen through natural selection, but as something man should strive for. So to answer dimreepr's question: surely not literally, as CharonY noticed: Nietzsche was the 'Antichrist'. But one can discuss how far Nietzsche was influenced by the very common idea of the prospect of salvation. Nearly every religion has a concept of salvation, and one could argue that Nietzsche presents an atheistic, naturalistic account of such salvation. -
I find a small Dobson telescope a good start, e.g. like here. You can easily take it with you (e.g. far from street lights...) and place it on a garden table or something like that. But it is good only for direct observation of planets, nebulae, the moon, etc.; it is not well equipped for photography. But if you get a taste for it, your next telescope might be a more professional one.
-
<antfuckermode> It was the other way round. Heisenberg came first with his matrix formulation, followed about a year later by Schrödinger's 'wave mechanics'. </antfuckermode> 'Wave mechanics' became more popular in those days than matrix mechanics because it was more 'visual', and used concepts from the physics of waves, which were of course more familiar to physicists. Heisenberg did not even realise that his 'tables of energy transitions' already existed in mathematics as matrices. I think it was Born who recognised that.
-
What is the difference among 90%, 99%, and 100% chocolate?
Eise replied to kenny1999's topic in Amateur Science
I think it is the process of conching that takes longer. Without the help of a supporting substance (e.g. lecithin), conching takes a long time; maybe even better conching machines are needed. And did you look carefully at the prices of the 90% and the 99%? Here in Switzerland the 99% bar has only half the content of the 90% bar (50 g instead of 100 g): -
Of course there is a distinction between object and subject, albeit a conceptual one. The question is whether this conceptual distinction is also an ontological one. Many mystical traditions, and interestingly enough modern science, do not think there is a complete distinction between the two. That sounds more like a temporary loss of personal identity. That can happen just spontaneously, but it can also be triggered by severe stress, or by meditation. I never heard of a connection with IQ. But I know of a German author, Michael Schmidt-Salomon, with a similar experience: it was triggered by an intense period of thinking about the free-will problem, about which he was writing an article. A game whose endpoint is death should be played differently than a game you can start over and over again. I take it seriously, but I think you are wrong. Why otherwise would mystical traditions strive for insight into the grounds of personal existence? Think about the concept of Śūnyatā in Buddhist philosophy. Realising Nirvana, according to Buddhism, leads to equanimity, calmness of mind. No idea what you mean here.
-
Thanks all. That helps. I will dive into the references given by @swansont and @Genady, in the hope that it increases my understanding even a bit more. Yes, he was 'forced' by the distance between the Israeli embassy in London, where he worked as a military attaché, and the university where GR was taught: it was too far. So he went into quantum physics, which was taught at a university close by. From here: I can imagine that: What? More right-wing than Likud? Oh boy.
-
I am not sure I understand this. Do you mean that the Omega-minus can have any of these 4 spin directions? That, so to speak, max(abs(spin)) for the Omega-minus = 3/2? I think that is the statement of fact. One octet (spin 1/2), and one decuplet (spin 3/2). I just do not understand why a baryon with 3 strange quarks cannot in principle belong to the octet. What is the connection between S = 3 and spin 3/2? The book tells the story that Feynman would look into Ne'eman's office, jokingly saying 'Did you hear? They found the Omega-minus! It has a spin of 1/2', meaning that the Eightfold Way would be wrong.