Recommended Posts

Posted

Earlier I used Rule 30 as an example of a deterministic system which produces a statistically random distribution. However, Severian pointed out to me that Rule 30 is not a system of non-local hidden variables, which a violation of Bell's inequality would necessitate. Information in Rule 30 moves no faster than "c" (one cell per iteration). Rule 30 is still very much a local system.
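
For concreteness, here is a minimal sketch of what I mean (plain Python, standard library only; the width and step count are arbitrary choices). It evolves Rule 30 from a single cell and checks that the disturbed region grows by at most one cell per iteration, the CA analogue of a light cone:

[code]
# Evolve Rule 30 from a single seed cell and verify that information
# spreads no faster than one cell per iteration ("c" for this CA).
WIDTH, STEPS = 41, 15

def rule30(left, center, right):
    # Rule 30 update: new cell = left XOR (center OR right)
    return left ^ (center | right)

row = [0] * WIDTH
row[WIDTH // 2] = 1  # single live cell in the middle

for t in range(STEPS):
    live = [i for i, c in enumerate(row) if c]
    # After t iterations the pattern can span at most 2*t + 1 cells.
    assert max(live) - min(live) <= 2 * t
    print(''.join('#' if c else '.' for c in row))
    row = [rule30(row[(i - 1) % WIDTH], row[i], row[(i + 1) % WIDTH])
           for i in range(WIDTH)]
[/code]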

 

However, my question is: have the Bell test experiments shown that the resulting distribution does not rely on information traveling at c? Why must the influence be superluminal, and how has it been experimentally shown to be so?

Posted

Bascule, I think we may share some basic interests, but you seem to see things from a different angle, and I don't exactly understand your thinking. Before commenting on the wrong thing, I am curious what you are working on. What are you looking for, so to speak?

 

> Rule 30 moves no faster than "c" (one cell per iteration).

 

Are you identifying time flow with data flow?

 

How do you distinguish data from useful information? I.e., data with obvious rules, which is easily compressed, versus random data that is hard to reduce? Would the flow of time not have any relation to this?

 

/Fredrik

Posted

Referring to one cell per iteration as "c" in CA is just a useful convention and has nothing to do with the real world.

 

I'm far more concerned with what would happen in a Bell test experiment where the observations took place at a distance too great for information travelling at c to have an impact on the outcome.

 

My question is: how do the present experiments rule this out?

Posted

Here we go, this is what I'm asking about:

 

http://en.wikipedia.org/wiki/Bell_test_loopholes

 

Another problem is the so-called “locality” or “light-cone” loophole. The Bell inequality is motivated by the absence of communication between the two measurement sites. In experiments, this is usually ensured simply by prohibiting any light-speed communication by separating the two sites and then ensuring that the measurement duration is shorter than the time it would take for any light-speed signal from one site to the other, or indeed, to the source. An experiment that does not do this cannot test Local Realism, for obvious reasons. Note that the needed mechanism would necessarily be outside Quantum Mechanics, and needs to explain “entanglement” in a great variety of geometrical setups, over distances of several kilometers, and between a variety of systems.

 

There are, so far, not many experiments that really rule out the locality loophole. John Bell supported Aspect's investigation of it (see page 109 of (Bell, 1987)) and had some active involvement with the work, being on the examining board for Aspect's PhD. Aspect improved the separation of the sites and made the first attempt at really having independent random detector orientations. Weihs et al improved on this with a separation on the order of a few hundred meters, in addition to using random settings retrieved from a quantum system. This remains the best attempt to date.
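
As a rough illustration of the timing condition in the quoted passage: the full measurement at one site (setting choice plus detection) must finish before light could arrive from the other site. A back-of-the-envelope sketch follows; the 400 m separation is roughly the scale of the Weihs et al setup, while the 100 ns measurement window is purely an assumed figure for illustration:

[code]
# Back-of-the-envelope check of the locality ("light-cone") condition.
C = 299_792_458.0          # speed of light, m/s

separation_m = 400.0       # assumed distance between the two sites
measurement_s = 100e-9     # assumed duration of setting choice + detection

light_transit_s = separation_m / C
print(f"light transit time: {light_transit_s * 1e9:.0f} ns")  # ~1334 ns
print("locality condition met:", measurement_s < light_transit_s)
[/code]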

Posted

I see. I am not very up to date on all the experiments that have been done on that. But if one is to exploit anything that doesn't come with a 100% confidence level (which never happens in reality anyway), then "proof" is a strong word. This is even one reason for my previously expressed attitude on the matter. So I think you may have a point.

 

Lately I haven't spent that much time with the Bell stuff because I didn't consider it that much of an issue, but one issue that I think has been gone over several times is that the original formulation of the Bell inequalities is, first of all, unclear about the probabilistic framework. They start to talk about probabilities without defining the settings and priors. Some papers also assume that the conditional probabilities of the detections, given the hidden variable, are integrable. There are a lot of things that could be questioned, and I think there are many papers on that. I know for sure there are papers rectifying Bell's original use of probability.

 

( def. P(x|y) := the probability of x, given that we know y )

 

Some papers start out like this:

[math]

P(A \cap B) = \int P(A \mid \lambda, B)\, P(B \mid \lambda)\, P(\lambda)\, d\lambda

[/math]

 

The locality assumption is that

[math]

P(A \mid \lambda, B) = P(A \mid \lambda)

[/math]

 

Giving

[math]

P(A \cap B) = \int P(A \mid \lambda)\, P(B \mid \lambda)\, P(\lambda)\, d\lambda

[/math]
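
As a numerical aside, any model of this factorized form respects the CHSH bound |S| <= 2, while the quantum singlet correlation E(a,b) = -cos(a-b) reaches magnitude 2*sqrt(2). A small sketch of this follows; the particular hidden-variable model is just an illustrative choice:

[code]
# Compare the CHSH quantity S for a factorized local model against the
# quantum singlet prediction. The local model below (lambda = a random
# angle, each side outputs sign(cos(setting - lambda))) is illustrative.
import math, random

def local_E(a, b, n=200_000):
    """Correlation E(a,b) for the toy local hidden-variable model."""
    total = 0
    for _ in range(n):
        lam = random.uniform(0, 2 * math.pi)
        A = 1 if math.cos(a - lam) >= 0 else -1
        B = 1 if math.cos(b - lam) >= 0 else -1
        total += A * B
    return total / n

def quantum_E(a, b):
    """Singlet-state prediction."""
    return -math.cos(a - b)

# Standard CHSH settings (radians), chosen to maximize the quantum value.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

def S(E):
    return E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

print("local model S =", round(S(local_E), 3))   # ~ 2.0 (saturates the bound, up to sampling noise)
print("quantum     S =", round(S(quantum_E), 3)) # ~ -2.828 in magnitude
[/code]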

 

Now, where is the prior assumption of entanglement? The whole construction relies on there being a relation between A and B, so that once A is determined, B is too, because they are entangled. Let's call this relation R(A,B).

 

I'd then like to write

[math]

P(A \cap B \mid R(A,B)) = \int P(A \mid \lambda, B, R(A,B))\, P(B \mid \lambda, R(A,B))\, P(\lambda)\, d\lambda

[/math]

 

To now suggest that

[math]

P(A \mid \lambda, B, R(A,B)) = P(A \mid \lambda, R(A,B))

[/math]

doesn't make sense, does it, taking the entanglement constraint into account. The locality assumption defined as above seems inconsistent with the entanglement constraint.
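
A toy numerical check of this point, under the assumption that R(A,B) is perfect anticorrelation (B = -A) with a binary hidden variable: under R, knowing B fixes A outright, so the conditioning on B cannot be dropped:

[code]
# Toy joint distribution over (A, B, lambda) with outcomes +/-1, where
# R(A,B) forces B = -A and lambda merely biases A. The weights are
# illustrative. We check P(A | lambda, B, R) != P(A | lambda, R).
from itertools import product
from collections import Counter

weights = Counter()
for A, lam in product((+1, -1), (0, 1)):
    B = -A                                           # the constraint R(A,B)
    p_A = 0.7 if A == (+1 if lam else -1) else 0.3   # lambda tilts A
    weights[(A, B, lam)] = 0.5 * p_A                 # lambda uniform on {0, 1}

def P(pred, given):
    num = sum(w for k, w in weights.items() if pred(k) and given(k))
    den = sum(w for k, w in weights.items() if given(k))
    return num / den

print(P(lambda k: k[0] == +1, lambda k: k[2] == 1))                # P(A=+1 | lam=1, R) = 0.7
print(P(lambda k: k[0] == +1, lambda k: k[2] == 1 and k[1] == +1)) # P(A=+1 | lam=1, B=+1, R) = 0.0
[/code]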

 

So it seems either you believe in the existence of entanglement or you don't.

 

In the nature of the probability concept is also the incompleteness that, in reality, we never reach complete confidence. There is always a possibly arbitrarily big, but still finite, amount of data. This is the reason why I like to keep doors open.

 

Anyway, I do not have any references to the experiments at hand. Maybe someone else does.

 

/Fredrik
