
Posted (edited)

After a now-closed thread, the conclusion seemed to be that, in having to give up 'local realism', the clear tendency is that we have to give up the 'realism' part. I was heavily involved in that thread, and based on the many 'quantum authorities' and physicists cited in the thread, I also defended that. But as I keep reading different books on quantum mechanics, I have also found other, well-argued positions defending non-locality, e.g. some very clear exposés by Jean Bricmont.

In Quantum Sense and Nonsense, he writes:

Quote

In order to explain the misunderstandings of Bell’s results, let us first summarize what we said in Sect. 7.5: if one takes Bell’s result discussed in Sect. 7.4 alone, and if one forgets about the EPR argument, that result can be stated as a “no hidden variables theorem”: Bell showed that the mere supposition that the measured values of the spin pre-exist to their “measurement”, and are perfectly (anti)-correlated, combined with the 1/4 frequency for the (anti)-correlation of measurements along different axes, leads to a contradiction. Since the perfect (anti)-correlation and the 1/4 frequency are quantum mechanical predictions that have been amply verified experimentally, this means that these hidden variables or pre-existing values cannot exist. 

But Bell almost always presented his result in combination with the EPR argument, which shows that the mere assumption of locality, combined with the perfect correlation when the directions of measurement (or questions) are the same, implies the existence of the supposedly “impossible” hidden variables. So for Bell, his result, combined with the EPR result, was not a “no hidden variables theorem”, but a nonlocality theorem, the result on the impossibility of hidden variables being only one step in a two-step argument.
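To make the first step concrete, here is my own sketch of the standard three-axis version of the argument (not a quote from the book). Suppose the spin values along three axes $a$, $b$, $c$, at 120° from each other, pre-exist on Alice's side as $v(a), v(b), v(c) \in \{+1,-1\}$, and that perfect anticorrelation fixes Bob's value along any axis $x$ to $-v(x)$. Then Alice measuring along $a$ and Bob along $b$ get opposite results exactly when $v(a) = v(b)$; and since at least two of the three values $v(a), v(b), v(c)$ must coincide,

$$ P_{\text{anti}}(a,b) + P_{\text{anti}}(b,c) + P_{\text{anti}}(a,c) \ge 1. $$

Quantum mechanics, however, predicts $\cos^2(60^\circ) = 1/4$ for each term, so the sum is $3/4 < 1$: the pre-existing values cannot exist.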

This analysis fits very well with, e.g., the video presented in the closed thread, in which it is clearly demonstrated that the CHSH inequality is based only on 'locality' and 'realism'. Since she leaves out EPR, just as Bricmont says, she does not come to the conclusion that QM is non-local.
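For reference, the CHSH quantity in its standard form (my addition, not taken from the video) is

$$ S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), $$

where $E$ is the correlation between outcomes for Alice's settings $a, a'$ and Bob's settings $b, b'$. Locality plus realism (plus statistical independence of the settings) gives $|S| \le 2$, while QM predicts $E(a,b) = -\cos(a-b)$ for the singlet, which at $a = 0^\circ$, $a' = 90^\circ$, $b = 45^\circ$, $b' = 135^\circ$ gives $|S| = 2\sqrt{2}$.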

Bricmont mentions several physicists by name who, in his eyes, do not really understand the consequences of Bell and EPR, e.g. ... Gell-Mann.

Now, of course, I still believe strongly that the argument from relativity, that the order of the measurements done by Alice and Bob is observer-dependent, shows that it is not possible that one measurement determines the other. Funnily enough, I first read the more technical version of Bricmont's book, titled Making Sense of Quantum Mechanics, and he never mentions this; I was already thinking of emailing him to ask what he thinks about that argument. But in Quantum Sense and Nonsense he does discuss it:

Quote

But that poses a serious problem for our notion of causality: indeed one would like to think that causes precede their effects in an absolute sense and one certainly would like to say that which event is a cause and which event is an effect does not depend on the state of motion relative to which those events are described.

Is there a solution to this problem? Unfortunately, not really. One possibility is to assume that there is a state of motion which is “privileged” in the sense that, relative to that state of motion, the real causes and effects occur and the causes precede their effects (for example, one could take that state of motion to be represented by the green lines in Fig. 7.11). One could consider that state of motion as one of absolute rest. This amounts to bringing back a sort of ether, which was thought, in the 19th century, to be a medium in which electromagnetic waves propagate.

So in a way, he seems to do the same as what he blames the majority of physicists for: closing their eyes to (some of) the fundamental problems of understanding QM.

Personally, I think that the relativity argument shows that there is no causal or determining relationship, or maybe better, no directional relationship, between Alice's and Bob's measurements. So what are we left with?

Edited by Eise
Posted

I want to mention something technical about the book Quantum Sense and Nonsense. At the end of Appendix 5.A it says:

[image: quoted passage from the end of Appendix 5.A]

The last part of the statement is simply incorrect: the state $\varphi_{\uparrow}\Phi_1 + \varphi_{\downarrow}\Phi_2$ does not mean the state $\varphi_{\uparrow} + \varphi_{\downarrow}$, which is what a superposed wave function of “the pointer being up” and “the pointer being down” would be.

Posted
15 hours ago, Eise said:

So what are we left with?

Bell’s inequality can fail to hold in another way, too - namely, by virtue of one of its underlying assumptions being false. In particular, it assumes statistical independence of observer and system, meaning the experimenter actually has completely free choice in how he sets up his experiments. If this is violated, you get some version of superdeterminism, which can preserve local realism even if Bell’s inequalities don’t hold.
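To spell the assumption out (a standard textbook formulation, added here for concreteness): in a local hidden-variable model the correlations take the form

$$ E(a,b) = \int \mathrm{d}\lambda \, \rho(\lambda) \, A(a,\lambda) \, B(b,\lambda), $$

where the distribution $\rho(\lambda)$ of the hidden state $\lambda$ does not depend on the detector settings $a$ and $b$. Superdeterminism drops exactly this: if $\rho(\lambda \mid a,b) \neq \rho(\lambda)$, Bell’s derivation no longer goes through, and the inequality need not hold even with locality and pre-existing values intact.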

I personally don’t like this idea, but the more you think about it, the less easy it becomes to dismiss it outright, especially since it also provides a solution to the measurement problem.

Posted
6 hours ago, Markus Hanke said:

Bell’s inequality can fail to hold in another way, too - namely, by virtue of one of its underlying assumptions being false. In particular, it assumes statistical independence of observer and system, meaning the experimenter actually has completely free choice in how he sets up his experiments. If this is violated, you get some version of superdeterminism, which can preserve local realism even if Bell’s inequalities don’t hold.

I personally don’t like this idea, but the more you think about it, the less easy it becomes to dismiss it outright, especially since it also provides a solution to the measurement problem.

AFAIK, this view is supported and further developed by Gerard 't Hooft (e.g., The Cellular Automaton Interpretation of Quantum Mechanics, https://arxiv.org/abs/1405.1548).

I like this approach more than the many-worlds interpretation.

Posted
On 9/14/2023 at 6:26 AM, Markus Hanke said:

In particular, it assumes statistical independence of observer and system, meaning the experimenter actually has completely free choice in how he sets up his experiments.

I don't think this is a feasible solution. If one thinks about it, it would mean that nature is 'fooling us'. AFAIK, Zeilinger did an experiment where the choice of the polariser settings in a Bell-like experiment was determined by the radio noise of two quasars, in more or less opposite directions, billions of light-years away. To suppose that this noise is precisely coordinated so that it mimics a bigger correlation than is classically possible is too big a stretch for me.

I have problems with all 'ways out': de Broglie-Bohm (which is Bricmont's favourite), many worlds, and all the others. The first two make the wave function into a physical, causal agent: it forces (OK, they say 'guides', but what is the difference?) particles onto their determined paths, or it splits the world, realising all the possible outcomes. But in measurements we only ever register particles arriving at some place. The wave function never physically appears in single measurements, so how can we assign it causal impact on our measurements?

Posted

I remember that 't Hooft referred to this experiment as an illustration in one of his writings. According to him, the noise is precisely correlated, being precisely determined since the time when the sources of the noise were affecting each other near the beginning of the Big Bang.

Posted
6 hours ago, Eise said:

If one thinks about it, it would mean that nature is 'fooling us'.

I wouldn’t look at it this way. What it means is that there are in fact correlations between observer and system that are not accounted for in standard QM, which fundamentally assumes statistical independence. I don’t like this idea much either, but the fact of the matter is that the universe started off very small and very dense, and has a finite age - so the concept perhaps isn’t so absurd after all.

But of course, this would have serious implications for the philosophy of science, since one could no longer cleanly separate an experimenter from his experiment. How much could we reliably know about a universe where this is the case?

Here’s a very recent and quite interesting paper on this - see under ‘conclusions’ at the end for a quick summary.

https://arxiv.org/pdf/2308.11262.pdf

Posted

I cannot be totally sure of what Bricmont's position is, but from what I can see, it seems compatible with mine, so here goes.

At the point of publishing EPR, Einstein was not so much concerned with the possibility of hidden variables as he was with the question of whether uncertainty is just a consequence of our ignorance or goes deeper, as QM suggests. In order to do that, he tried to confront reality with relativistic causality. He might have devised an argument to confront it with the principle of relativistic frame independence or what have you. But he was firmly convinced that both principles (relativistic causality and reality) must hold.

As far as I can see, all the people who later worked on hidden-variables theorems --starting with von Neumann-- were really working on theorems about reality (whether quantities A, B, C... can or cannot be said to have a value at the same time). The question of locality was in the back of everybody's mind, partly because Einstein invoked it, and partly because the projection postulate does invoke a non-local operation, even though it has no consequences in the way of signals, interactions, and the like.

I also find the solution of superdeterminism unpalatable, even though there's always the possibility of saying that, at some point in the past, everything may have been causally connected.

The de Broglie-Bohm solution I find the most reasonable, although it is extremely unappealing. In my mind, it could be but a very rough, very crude version of an idea that should be formulated in terms of field theory and gauge invariance, both elements being absent from the original formulation.

Posted (edited)
On 9/15/2023 at 4:03 PM, Markus Hanke said:

the fact of the matter is that the universe started off very small and very dense, and has a finite age - so the concept perhaps isn’t so absurd after all.

I still think it is absurd: think about all the atoms involved in the experiment. The laser, the source of the entangled photons, the polarisers, their randomisers, the detectors, and last but not least the experimenters are all built up of atoms that might have originated from wildly different events in space and time. And then all these come together in an experiment and their behaviour is exactly coordinated in such a way that the experiments consistently have the results we observe? And given the present state of knowledge, we have one theory that is ontologically stochastic (QM), and we also have chaotic determinism, and then they come together exactly coordinated in every Bell-like experiment we do?

On 9/15/2023 at 4:03 PM, Markus Hanke said:

Here’s a very recent and quite interesting paper on this - see under ‘conclusions’ at the end for a quick summary.

The world is small... Look at the acknowledgements: Sabine Hossenfelder (who made several videos in which she defends superdeterminism, and even sang a song together with Tim Palmer), Jean Bricmont, whom I mentioned in the OP, and Tim Maudlin, by whom I am now reading a book about QM and relativity (because it was mentioned in Bricmont's book). Sadly, the article is too difficult for me to evaluate whether Tim Palmer has a point or not. Did you read it? If so, what do you think?

On 9/15/2023 at 4:27 PM, joigus said:

At the point of publishing EPR, Einstein was not so much concerned with the possibility of hidden variables as he was with the question of whether uncertainty is just a consequence of our ignorance or goes deeper, as QM suggests.

This seems to be a misunderstanding. I read in many books that Einstein probably could have had peace of mind with the probabilistic character of QM, but not with non-locality. This seems to go back even to the Solvay conference of 1927. Einstein himself was not quite happy with the EPR article: it was obviously mainly written by Podolsky. Einstein stressed the non-locality, and therefore suggested there should be local hidden variables; Podolsky obviously stressed the non-deterministic character of QM, and based on that suggested that there must be hidden variables.

On 9/15/2023 at 4:27 PM, joigus said:

I also find the solution of superdeterminism unpalatable, even though there's always the possibility of saying that, at some point in the past, everything may have been causally connected.

Exactly. And then causally connected in exactly the right way, in every Bell-like experiment, to give the results we get. 

On 9/15/2023 at 4:27 PM, joigus said:

The de Broglie-Bohm solution I find the most reasonable, although it is extremely unappealing. In my mind, it could be but a very rough, very crude version of an idea that should be formulated in terms of field theory and gauge invariance.

Would that mean that, if such a formulation were successful, de Broglie-Bohm would no longer be non-local?

Edited by Eise
Posted
50 minutes ago, Eise said:

This seems to be a misunderstanding. I read in many books that Einstein probably could have had peace of mind with the probabilistic character of QM, but not with non-locality. This seems to go back even to the Solvay conference of 1927. Einstein himself was not quite happy with the EPR article: it was obviously mainly written by Podolsky. Einstein stressed the non-locality, and therefore suggested there should be local hidden variables; Podolsky obviously stressed the non-deterministic character of QM, and based on that suggested that there must be hidden variables.

OK. That's a historical point I cannot claim to be 100% sure about, so you may be right. I didn't mean he abhorred the use of probabilities, but rather the idea that Nature at the most fundamental level bespeaks probability.

IMHO, Einstein's position towards QM, while it didn't significantly change, perhaps did experience a shift in emphasis. I know for a historical fact that at some point he is quoted as accepting probabilities appearing at the most fundamental level (whatever that means). But his real qualms must have been not so much about probabilities as about reality. What he did not accept until the bitter end was the possibility that there be no elements of reality below that level. He has been quoted as saying:

Quote

As Albert Einstein once bemoaned to a friend, “Do you really believe the moon is not there when you are not looking at it?”

https://www.scientificamerican.com/article/the-universe-is-not-locally-real-and-the-physics-nobel-prize-winners-proved-it/

Correct me if I'm wrong, but I think this testimony came relatively late in his scientific career. It is a relevant matter to distinguish between something Einstein said in 1935 and, say, 1950. You know these things much better than I do.

I think the concept of reality is the one that didn't let him sleep at night. He knew QM is deeply, unmistakably, irrecoverably at variance with the notion of a real world, existing independently of the observer. 

1 hour ago, Eise said:

Would that mean that, if such a formulation were successful, de Broglie-Bohm would no longer be non-local?

The problem with locality is that many people have used it for decades in several different ways, not all of them mutually overlapping. I'm not totally convinced that Bohm's theory is non-local in any meaningful sense that I can recognize. Of course I could be totally wrong and simply misunderstanding the finer points these experts are making.

Strictly speaking, Copenhagen quantum mechanics is (mildly) non-local already, because components of the wave function that are now light-years away must go to zero just because I've measured something here on Earth.
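In symbols, using the standard singlet example (my addition): before Alice measures, the pair is in

$$ |\psi\rangle = \tfrac{1}{\sqrt{2}} \left( |{\uparrow}\rangle_A |{\downarrow}\rangle_B - |{\downarrow}\rangle_A |{\uparrow}\rangle_B \right), $$

and the moment she finds $\uparrow$, the user manual says the state is now $|{\uparrow}\rangle_A |{\downarrow}\rangle_B$: the second component has been set to zero everywhere at once, however many light-years away particle B happens to be.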

For better or worse, nobody can measure the consequences of something non-measurable disappearing out of existence... But the user manual of QM does tell you to do that in your equations, which is a faux pas in local physics.

Not that anybody has worried about that, or (for the most part) even noticed, for fifty-odd years.


Posted
19 hours ago, Eise said:

And then all these come together in an experiment and their behaviour is exactly coordinated in such a way that the experiments consistently have the results we observe?

The thing here is that no experimenter has the freedom to choose the parameters of his experiments - in some sense, all experiments and their outcomes are preordained from the very beginning, so it isn’t possible for these constituents you mentioned to not be exactly correlated in the right way. One might say that it is the correlation that reveals the experiment, not the other way around. There is no freedom of choice in superdeterminism.

19 hours ago, Eise said:

If so, what do you think?

Unfortunately I do not understand all of it, since this is not an area of physics I have reliable expertise in. So I’m not in a good position to evaluate its scientific merit. It should be mentioned though that this isn’t the only such proposal - there are several superdeterminism candidate models in existence, all of which are consistent with QM and the known Bell experiments.

Personally I dislike the idea greatly; it even scares me a little, since a superdeterministic universe would be one in which no one possesses any degree of free agency. But I struggle to find a decisive argument against it, since the maths appear to be consistent, and I must acknowledge that it would eliminate some difficult issues, not least of which the measurement problem.

Posted (edited)
20 hours ago, joigus said:

I think this testimony came relatively late in his scientific career

Yep, in the early fifties, in a conversation with Abraham Pais.

20 hours ago, joigus said:

I'm not totally convinced that Bohm's theory is non-local in any meaningful sense that I can recognize.

In its present state it is definitely non-local.

20 hours ago, joigus said:

Strictly speaking, Copenhagen quantum mechanics is (mildly) non-local already

This is the point. Copenhagen (if you omit the unclarity of the 'Heisenberg cut' between quantum and classical physics) is a kind of minimal interpretation of QM. (My shortest description of the CI: "the maths works, but we will never understand why".) This includes the correlation between remote measurements of entangled particles. The most natural way to extend the CI would then be the introduction of local hidden variables: but the Bell inequalities show that this does not work. Non-local hidden variables, however, could do the trick, as exemplified by the de Broglie-Bohm interpretation. In fact, it was this interpretation that triggered Bell to think up his inequalities. But together with EPR (if Alice's polariser is vertical and her photon passes, then she knows that at Bob's side, if his polariser is also positioned vertically, 100% of the photons get through) it is clear that locality is excluded in principle by QM, in whatever interpretation. The dBB interpretation takes this at face value and is non-local from the beginning. And that is the reason Einstein did not like it.
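Written out for the polariser example (the standard polarisation-entangled state, my addition): the photon pair is in

$$ |\Phi\rangle = \tfrac{1}{\sqrt{2}} \left( |H\rangle_A |H\rangle_B + |V\rangle_A |V\rangle_B \right), $$

so if Alice's photon passes her vertical polariser, only the $|V\rangle_A |V\rangle_B$ component survives, and Bob's vertically oriented polariser then passes his photon with probability 1.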

But we all defended, in the thread-we-do-not-mention-here, the view that QM is local. And I still believe in the 'relativity argument', of which you made that clear and funny drawing.

[image: the spacetime drawing of the relativity argument]

If this argument really is valid, then we, at least for the moment, have no interpretation of QM that works in all situations.


Edited by Eise
Posted (edited)
5 hours ago, Markus Hanke said:

The thing here is that no experimenter has the freedom to choose the parameters of his experiments - in some sense, all experiments and their outcomes are preordained from the very beginning, so it isn’t possible for these constituents you mentioned to not be exactly correlated in the right way.

Modern determinism is based on the idea that if you know the initial conditions of a (closed) system, and the laws of nature that govern the system, every future state of the system is fixed. But you need both: without laws of nature, determinism becomes an unscientific world view, a belief. But for superdeterminism (which I assume must be 'more than determinism') there is not even the beginning of an idea about how events billions of light-years away and billions of years ago determine all the necessary elements of a Bell-like experiment, including the experimenter. And despite the billions of (light-)years, superdeterminism is supposed to be a local theory? So superdeterminism reduces, for me, to a belief. Nothing scientific.

I find it instructive to reflect on the history of Bell-like experiments:

  1. Bell proves his inequality, and it is immediately clear that QM does not obey it.
  2. Feynman is so sure that QM is correct that he throws Clauser out of his office when Clauser proposes his experiment. In the end, one can see it this way: had Clauser's experiment measured no violation of Bell's inequality, QM would have been wrong.
  3. Physicists notice that the measurements in Clauser's experiment are time-like separated, so it is possible that some influence travels from one measurement to the other. However, not a single physicist offers a hypothesis for how the orientation of a polariser on one side could affect the measurement on the other side.
  4. Aspect closes this loophole by making the measurements space-like separated. QM still turns out to be right.
  5. ... and then still more experiments to close other loopholes... QM still stands.
  6. Zeilinger makes the orientation of the polarisers dependent on the radio noise of two quasars, billions of light-years away (and billions of years ago). No difference. QM stands.

So what do we see all along:

  • QM, as it is, makes the correct predictions in every experiment above, without any correction needed
  • For none of the loopholes is a plausible theory ever given, only vague hints

I consider superdeterminism to be such a loophole: only an extremely hypothetical possibility, with not a single proposal for how it would physically work, and, by point 6 above, extremely unlikely.

Edited by Eise
Posted

I read a lot, and never saw even an Ansatz for a 'mechanism'. But if you know more, I am glad if you have some (serious) references.

Posted

I will try to look into it; I hope it is not over my head. Of course I started with ChatGPT... I will look into 't Hooft's book, but whether I will get the essence... I don't know. One question I have already: do these 'cells' live in our normal spacetime, or in some abstract (mathematical) space, or even 'on the edge of the universe' (thinking about black holes and entropy)?

Posted
7 minutes ago, Eise said:

I will try to look into it; I hope it is not over my head. Of course I started with ChatGPT... I will look into 't Hooft's book, but whether I will get the essence... I don't know. One question I have already: do these 'cells' live in our normal spacetime, or in some abstract (mathematical) space, or even 'on the edge of the universe' (thinking about black holes and entropy)?

The normal spacetime.

Posted
On 9/19/2023 at 1:28 PM, Eise said:

there is not even the beginning of an idea about how events billions of light-years away and billions of years ago determine all the necessary elements of a Bell-like experiment

I would disagree with this, since - as mentioned - several “toy models” exist that do precisely this. Genady has mentioned ‘t Hooft, and another example is the model by Donadi/Hossenfelder:

https://arxiv.org/abs/2010.01327

I think the key here is the distinction between correlation and causation - the necessary correlations exist from the beginning, but the measurement outcome is still caused purely locally by the detector settings only, just like in ordinary entanglement. The very fact that such models can be written self-consistently while demonstrably reproducing all predictions of standard QM makes it difficult to outright dismiss the concept as nonsense.

Experimentally, superdeterminism would show up as small deviations from Born’s rule under specific circumstances, so it might be possible to experimentally distinguish it from QM.

Posted
On 9/19/2023 at 11:41 AM, Eise said:

Yep, in the early fifties, in a conversation with Abraham Pais.

Thanks! Yes, that's the source. Here's a reference to the recollection,

Quote

“We often discussed his notions on objective reality. I recall that during one walk Einstein suddenly stopped, turned to me and asked whether I really believed that the moon exists only when I look at it.” Rev. Mod. Phys. 51, 863–914 (1979), p. 907


On 9/19/2023 at 4:20 PM, Eise said:

I will try to look into it, I hope it is not over my head. Of course I started with ChatGPT... I will look into 't Hooft's book, but if I get the essence...I don't know. One question I have already: do these 'cells' live in our normal spacetime, or in some abstract (mathematical) space, or even 'on the edge of the universe' (thinking about black holes and entropy)?

In the forums below, 't Hooft himself clarifies these questions and some more. In a nutshell, and to the extent that I understand correctly, cellular-automaton variables provide "onticity", but are affected by probabilities themselves, and produce the quantum states as something that looks very much "emergent". The spacetime is essentially what we all know and love:

https://physics.stackexchange.com/questions/34217/why-do-people-categorically-dismiss-some-simple-quantum-models

As to the non-locality of the de Broglie-Bohm model, I'm aware that people claim it is non-local. The claim is always kinda wrapped up in some obscure wording, never in the mathematics. I don't think relevant physicists have ever weighed in on the discussion, except to the effect of dismissing it from a distance --pun intended.

<speculative>

Of course, I'm sure the BB model cannot --at best-- be the whole story. It's got anthropocentrism written all over it. IMO, it must be some kind of toy-modelled approximation to some non-linear generalisation of field theory that exploits the (infinite dimensional) dynamical possibilities that gauge freedom affords. Throw in further assumptions on how this lumpiness of the gauge degrees of freedom correlates to the linear amplitude and there you are: your stand-in for realistic degrees of freedom, plus the reason why they cannot be ultimately determined: they can be changed by a simple gauge transformation, so they cannot ever be determined.

</speculative>

Posted
On 9/20/2023 at 2:24 PM, Markus Hanke said:

I would disagree with this, since - as mentioned - several “toy models” exist that do just precisely this. Genady has mentioned ‘t Hooft, and another example is the model by Donadi/Hossenfelder

I am not convinced (yet?) that these are viable models. It almost seems that such models are 'Lost in Math'. The devil is almost literally in the details: in a Bell-like experiment we have so many small details that would have to be perfectly orchestrated for it to mimic non-local interactions, again and again, consistently.

But of course, it may be that I am 'lost in math' in another sense: sadly, I do not understand the math deeply enough, so I should keep my mouth shut... which of course I mostly have to do. As a non-specialist, I have to wait till the discussions among the experts reach an empirically supported consensus. But I will keep on reading 't Hooft's book (at least the first part...). But if you (or @Genady) are able to give a less mathematical taste of how these 'superdeterministic' interpretations can explain how the Big Bang determines the details of Bell-like experiments in such a way that we get consistent results, I would be very glad. E.g., as a comparison, consider deterministic chaos (see the sketch below): even if the world were completely deterministic, we lose all predictability after some time. And yet, at the same time, I am asked to believe that superdeterminism is able to exactly reproduce the correlations we see in complicated experiments, after billions of years? That is, in my eyes, a heavy burden for the 'super' in 'superdeterminism'.
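To make the chaos comparison concrete, here is a minimal sketch (my own toy illustration, not tied to any superdeterministic model): the logistic map at r = 4 is completely deterministic, yet a difference of 10^-12 in the initial condition destroys all predictability within a few dozen steps.

```python
# The logistic map: a completely deterministic rule whose iterates
# nevertheless lose all practical predictability (deterministic chaos).

def logistic(x, r=4.0):
    """One deterministic step of the logistic map x -> r*x*(1-x)."""
    return r * x * (1.0 - x)

x1 = 0.2            # one initial condition
x2 = 0.2 + 1e-12    # an almost identical one

for step in range(1, 61):
    x1, x2 = logistic(x1), logistic(x2)
    if step % 10 == 0:
        print(f"step {step:2d}: x1={x1:.6f}  x2={x2:.6f}  diff={abs(x1 - x2):.3e}")
```

By step 60 the two trajectories are completely uncorrelated, even though nothing random ever happened.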

On 9/20/2023 at 7:47 PM, joigus said:

cellullar-automaton variables provide "onticity", but are affected by probabilities themselves, and produce the quantum states as something that very much looks "emergent".

I can imagine that. But, as written above, I cannot imagine how this explains Bell-like experiments. Lack of imagination?

On 9/20/2023 at 7:47 PM, joigus said:

As to non-locality of the Broglie-Bohm model, I'm aware that people claim it is.

Yeah, well, Bohm claimed that, and the Bell inequalities show that with local hidden variables one cannot reproduce entanglement. von Neumann famously had a 'proof' that no hidden-variable classical theory (local or non-local) can reproduce the results of QM, and Bell was astonished that de Broglie/Bohm came up with a working model. He discovered that the von Neumann proof was wrong, and in the end he came up with his inequalities, which still allow for non-local hidden-variable interpretations of QM.

Posted (edited)
4 hours ago, Eise said:

a less mathematical taste

I can think of it as a conservation of something. Regardless of how many interactions occur, for how long, and how far away they spread, this something has to be conserved, and this conservation means that all the things involved continue to stay correlated.

Edited by Genady
Posted
22 hours ago, Eise said:

I am not convinced (yet?) that these are viable models.

I’m not trying to convince you, since I myself am not convinced that superdeterminism is a viable way to go. I’m saying only that it shouldn’t be readily dismissed, since it does reproduce the same results as ordinary QM.

I’ll have to do more study myself regarding the precise details, though.

Posted
On 9/22/2023 at 9:45 AM, Eise said:

[...] von Neumann famously had a 'proof' that no hidden-variable classical theory (local or non-local) can reproduce the results of QM, and Bell was astonished that de Broglie/Bohm came up with a working model. He discovered that the von Neumann proof was wrong, and in the end he came up with his inequalities, which still allow for non-local hidden-variable interpretations of QM.

I think you got this absolutely right. Von Neumann tried to get a really robust proof that hidden variables were hopeless if QM is right. He thought he did. It's perhaps not widely known that Bell further elaborated on von Neumann's argument and extended it to commuting observables, which resulted in what we know today as the Kochen-Specker theorem, and which should really be called the Bell-Kochen-Specker theorem. In a nutshell, what it says is that for some quantum systems, you cannot even assign reality to pairs of commuting variables. Analogously to the "regular" Bell theorem, this only happens for naggingly-difficult-to-spot pairs of variables.
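A concrete instance of this obstruction (my addition, standard textbook material) is the Mermin-Peres 'magic square': nine two-qubit observables arranged in a 3x3 grid so that the three in each row and each column mutually commute, every row multiplies to +I, and the columns multiply to +I, +I, -I. Pre-assigned +/-1 values would have to multiply to +1 over all nine entries (via the rows) and to -1 (via the columns) at the same time, which is impossible. A short numpy check:

```python
# Mermin-Peres "magic square": nine two-qubit observables whose row
# products are all +I while the column products are +I, +I, -I, so no
# pre-assigned +/-1 values can satisfy all six product constraints.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

square = [
    [np.kron(X, I2), np.kron(I2, X), np.kron(X, X)],
    [np.kron(I2, Y), np.kron(Y, I2), np.kron(Y, Y)],
    [np.kron(X, Y),  np.kron(Y, X),  np.kron(Z, Z)],
]

def sign_of_product(ops):
    """+1 if the ordered product of the three operators is +I, -1 if it is -I."""
    prod = ops[0] @ ops[1] @ ops[2]
    if np.allclose(prod, np.eye(4)):
        return +1
    assert np.allclose(prod, -np.eye(4))
    return -1

print("rows:   ", [sign_of_product(row) for row in square])            # [1, 1, 1]
print("columns:", [sign_of_product([square[i][j] for i in range(3)])
                   for j in range(3)])                                  # [1, 1, -1]
```

The product of the three row signs is +1 and of the three column signs is -1, yet both are the product of the same nine hypothetical values: no such assignment exists.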

The thing about determinism is it blurs the boundary between causal and non-causal, as cause and effect are both co-determined by the deterministic law...

I'm losing my train of thought. I wanted to say more on @Markus Hanke's hopes that some version of superdeterminism could be not so far-fetched --if not altogether plausible. They have to do with the possibility that the universe is actually holographic in nature. Maybe later.

Posted
15 hours ago, joigus said:

I wanted to say more on @Markus Hanke's hopes that some version of superdeterminism could be not so far-fetched

Actually, I’m hoping that there might be some convincing reason to once and for all rule out superdeterminism (I don’t like the idea) - but I can’t find any, and a number of quite esteemed physicists seem to pursue this line of research.
