
Why does matter behave like particles when observed, and like waves when not?



Posted

I had a look at the double-slit experiment, and I get how they discovered it, and it makes sense. What I don't get is how the heck this happens, and how a particle would know when it's being looked at, and when not.

Posted

That's the nature of the beast that is quantum mechanics. There is wave-like behavior, but localized interactions.

 

 

Don't anthropomorphize waves/particles. They hate it when you do that.

Posted

Don't anthropomorphize waves/particles. They hate it when you do that.

 

 

I just laughed really loudly at work and caused people to stare at me...

 

 

Nicely done, Swansont!

Posted

It might seem intimidating, but it really isn't that bad: atoms can be described almost perfectly by wave mechanics. Atoms act the way they do in the double-slit experiment because they are waves, and waves simply don't have a specific location. You can drop a pebble in a pond, but you can't point to one spot and say "the wave is precisely there". As for the actual measurement of particle-waves, they still act like waves, but we only observe single points of them, for reasons that remain unknown. It may be related to what the mechanics itself states: "if you determine a precise location, then it cannot have an undefined location", or more accurately, "the more precisely you know one aspect, the less precisely you know another aspect", also known as the uncertainty principle. So if you pin down a particle-wave's precise location, its momentum is no longer well defined; but if you define only its momentum, then you have no idea where its location is.
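For reference, the position-momentum form of that uncertainty relation is:

[math] \Delta x \, \Delta p \geq \frac{\hbar}{2} [/math]

The more sharply the position spread [math]\Delta x[/math] is pinned down, the larger the momentum spread [math]\Delta p[/math] must become.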

Posted (edited)

There is an approach called weak measurement, which does "observe them without interfering with them", as you put it. See the first article at the link:

 

http://physicsworld.com/cws/article/news/48126

 

Something about this doesn't make sense to me, because by the inherent nature of a wave, even in the macroscopic world, it can't exist at a single finite location, so I think he is somehow viewing the "possible paths" of a measurement of an undefined photon. I don't see much difference from the usual case there, because if you shoot a photon, there are still places it's most likely to show up and places it isn't. Or perhaps he is somehow only "temporarily" collapsing a wave-function, i.e. "weak measurement" = "weak collapse", but this is semi-new to me.

Maybe the wave he's describing just doesn't make sense, or perhaps by running a photon through a piece of calcite he is actually collapsing its wave-function.

It would help if some expert could explain it better.

Edited by questionposter
Posted (edited)

A bit more abstractly, one can consider the commutation relation. Things that can be observed in quantum mechanics, "observables", correspond to operators, which can be represented as matrices. When A and B are matrices, it is often the case that

 

[math] AB \neq BA[/math]

 

Anyone who knows anything about matrix multiplication can then see that "order matters" when you play with these observables. If you follow the wikipedia link (hurry before SOPA shuts it down :P ) you'll see something like this:

 

[math] [\hat{x},\hat{p}_x]= \hat{x}\hat{p}_{x}-\hat{p}_{x}\hat{x} = i\hbar [/math]

 

This is really just a more formal statement of the Heisenberg uncertainty principle. A bit of extra work up front, to take in some of the abstraction of quantum mechanics (or of science in general), can pay huge dividends in understanding further down the road. That's been my experience to date, anyway, though I'm not a physicist.
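For a concrete check that "order matters", here is a minimal numpy sketch (the matrices are the standard Pauli spin observables; the variable names are just illustrative), verifying that [math]\sigma_x \sigma_y - \sigma_y \sigma_x = 2\imath\sigma_z \neq 0[/math]:

[code]
import numpy as np

# Pauli spin matrices: standard 2x2 observables for a spin-1/2 system
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

# Commutator [A, B] = AB - BA; a nonzero result means order matters
commutator = sigma_x @ sigma_y - sigma_y @ sigma_x

print(commutator)                             # [[2j, 0], [0, -2j]]
print(np.allclose(commutator, 2j * sigma_z))  # True
[/code]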

Edited by mississippichem
Posted (edited)

A bit more abstractly, one can consider the commutation relation. [...] This is really just a more formal statement of the Heisenberg uncertainty principle.

 

Right, but he's saying he knows the path of a specific photon when it's in wave form, or still traveling; but I don't think it could be the entire photon. I think it's just one possible path of the measurement of a photon, based on my understanding.

Edited by questionposter
Posted

I don't get why everyone separates these behaviors. As I see it, all matter behaves as wavicles: it can exhibit both properties at once, and the nature of observation determines which behavior we see.

Posted

I don't get why everyone separates these behaviors.

 

 

Because it's counterintuitive.

 

Common sense suggests that it's either a wave or a particle

 

QM says both at the same time is fine :blink:

Posted

The problem is that most of us still believe in matter, which is really just an effect or perception of energy.

No matter how small or large an object is, it is composed of energy.

 

Paul

Posted (edited)

I think matter is both a particle AND a wave; that's why I don't think he can actually be observing just one path of one point. It must just be how he's explaining it.

Edited by questionposter
Posted

There is an approach called weak measurement, which does "observe them without interfering with them", as you put it. See the first article at the link:

 

http://physicsworld.com/cws/article/news/48126

 

Thank you very much for the link. Physorg cited the experiments as well.

 

The experiment reveals, for example, that a photon detected on the right-hand side of the diffraction pattern is more likely to have emerged from the optical fibre on the right than from the optical fibre on the left.

Perhaps the strong-measurement interaction at the photon-absorbing detector screen (which destroys the incident photons) "starts" as soon as the wave-function begins to impinge upon the screen, so that wave-function with a shorter path-length to the screen has a "head start" over wave-function flowing in from farther away?

 

...by the inherent nature of a wave, even in the macroscopic world, it can't exist at a single finite location, so I think he is somehow viewing the "possible paths" of a measurement of an undefined photon...

 

I found this Nature article.

 

I understand that a "weak measurement" does not cause wave-function "collapse", at least not very often. (Wave-functions that did "collapse", due to the "weak" perturbing influence, obviously would not propagate further through the experimental apparatus to the main detector; i.e., they would represent "losses" of input signal strength, e.g. photons being absorbed by the apparatus, heating it up.)

 

Naively, for the photons of the two top-prize-winning "weak measurement" experiments from the article: if those photons are prepared into some state [math]|s\rangle[/math], which is then "weakly perturbed" by some "weak measurement" represented by some operator [math]A[/math], then [math]|s\rangle \longrightarrow |s\rangle + \epsilon A |s\rangle[/math]. The [math]\epsilon[/math] parameter measures the "strength" of the perturbation, and can be adjusted. For example, in the top-prize experiment the "weak measurement" perturbation was a calcite crystal, which polarizes photons differentially, according to their momenta. Presumably, the thicker the calcite crystal, the "stronger" the perturbation; so, by "shaving the crystal down" until it was wafer thin, the [math]\epsilon[/math] parameter could be reduced.
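As a minimal numerical sketch of that perturbation step (the state and operator below are illustrative stand-ins, not the experiment's actual ones): perturb a two-component polarization state by [math]\epsilon A[/math], renormalize, and watch the overlap with the original state approach 1 as [math]\epsilon[/math] shrinks:

[code]
import numpy as np

# Illustrative initial state |s> (diagonal polarization) and operator A
s = np.array([1, 1], dtype=complex) / np.sqrt(2)
A = np.array([[1, 0], [0, -1]], dtype=complex)

for eps in [0.5, 0.1, 0.01]:
    s_pert = s + eps * (A @ s)            # |s> -> |s> + eps*A|s>
    s_pert /= np.linalg.norm(s_pert)      # renormalize
    overlap = abs(np.vdot(s, s_pert))**2  # equals 1/(1 + eps^2) here
    print(f"eps={eps}: overlap with original state = {overlap:.6f}")
[/code]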

 

Somehow, a subsequent "strong measurement", which does cause collapse of the wave-function (or of all those that manage to make it that far through the apparatus), can then infer properties of the unperturbed wave-function, e.g. momentum [math]\vec{p}[/math], from the "proxy" of the "weak measurement", e.g. polarization. After many, many trials, by some "law of large numbers", the average momentum, as implied by the average polarization, can be mapped through space, one detector plane at a time: each detector plane measures p(x,y); then the detector plane is moved, and many more trials are conducted at the new value of z, building up p(x,y,z).
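A toy version of that accumulate-and-average procedure (the "true" momentum profile, the noise level, and the bin count below are all made up, purely for illustration):

[code]
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000

def true_p(x):
    return np.sin(np.pi * x)  # made-up "true" momentum profile p(x)

# Each trial: a photon lands at position x on the screen; its polarization
# proxy yields a very noisy, but unbiased, momentum readout
x_hits = rng.uniform(0.0, 1.0, n_trials)
p_readout = true_p(x_hits) + rng.normal(0.0, 2.0, n_trials)

# Bin the detections by landing position; average the readouts in each bin
bins = np.linspace(0.0, 1.0, 21)
idx = np.digitize(x_hits, bins) - 1
for i in range(len(bins) - 1):
    xc = 0.5 * (bins[i] + bins[i + 1])
    mean_p = p_readout[idx == i].mean()
    print(f"x={xc:.3f}: <p>={mean_p:+.3f} (true {true_p(xc):+.3f})")
[/code]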

 

After the fact (ex post facto), over many, many trials, you can build up a picture of what the average photon's wave-function "was doing" or "had looked like".

 

I feel like "flowery language" over-complexifies the physics, and makes me much less capable, of comprehending the quantum mechanics, which I would want to think, that I otherwise could.

Posted

I understand that a "weak measurement" does not cause wave-function "collapse", at least not very often. [...] After the fact (ex post facto), over many, many trials, you can build up a picture of what the average photon's wave-function "was doing" or "had looked like".

 

So, in other words, because he's not actually collapsing the wave function of a photon, he's merely seeing where it's likely to go after a specific perturbation?

Posted

So, in other words, because he's not actually collapsing the wave function of a photon, he's merely seeing where it's likely to go after a specific perturbation?

 

Not quite?

 

I get the impression that, in the limit of weak perturbations, the averages of the data distributions you gather asymptotically converge to what the wave-functions of all of the assumedly identical photons "really would have been doing" in the absence of any perturbations.

 

If so, then sometimes "the perturbation kicks the particle one way", and other times "the other way", so you get a broad distribution of actual data... but the mean in the middle is (provably, mathematically) the "original unperturbed behavior".
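For reference, the standard "weak value" that this kind of averaging converges to, for a system prepared ("pre-selected") in state [math]|\psi\rangle[/math] and post-selected in state [math]|\phi\rangle[/math], is:

[math] A_{w} = \frac{\langle \phi | \hat{A} | \psi \rangle}{\langle \phi | \psi \rangle} [/math]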

Posted

I get the impression that, in the limit of weak perturbations, the averages of the data distributions you gather asymptotically converge to what the wave-functions of all of the assumedly identical photons "really would have been doing" in the absence of any perturbations. [...]

 

But if it was unperturbed, then it would still just act like a normal wave-function. So basically, he's just seeing where things turn out as points, and based on that he's building up a wave?

Posted

But if it was unperturbed, then it would still just act like a normal wave-function. So basically, he's just seeing where things turn out as points, and based on that he's building up a wave?

 

I understand that the calcite/quartz crystal induces polarizations [math]\vec{P}[/math] in photons differentially, according to their momenta [math]\vec{p}[/math]:

 

[math]\vec{p} \longrightarrow \vec{P}[/math]

And the detector screens used can detect polarization as a function of position on the screen, i.e. [math]\vec{P}(x,y)[/math]. Then, "working backwards",

 

[math]\vec{p}(x,y) \longleftarrow \vec{P}(x,y)[/math]

Now, all of those quantities are statistical distributions, accumulated over many repeated trials; i.e., each photon fired lands somewhere on the screen (x,y), with some polarization (P), implying some momentum (p). After many trials, many photons will have landed at each spot on the screen (x,y), each having had various polarizations (P), each of which implied various momenta (p). You repeat the experiment until you have large numbers of detections at each point on the screen (x,y), from whose polarizations P(x,y) you can infer some mean momentum <p(x,y)>.

 

Supposedly, you can show mathematically that those average values <p(x,y)> converge towards the actual original wave-function's momentum at that point.
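A bare-bones numerical check of that convergence claim (the "true" value and the noise model are made up, purely for illustration): the mean of many noisy but unbiased weak readouts homes in on the unperturbed value:

[code]
import numpy as np

rng = np.random.default_rng(1)
p_true = 0.37  # made-up "unperturbed" momentum at one screen point

for n_trials in [100, 10_000, 1_000_000]:
    # Each individual weak readout is wildly noisy, but unbiased
    readouts = p_true + rng.normal(0.0, 5.0, n_trials)
    print(f"n={n_trials:>9}: mean readout = {readouts.mean():+.4f}"
          f" (true {p_true:+.4f})")
[/code]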

Posted

I understand that the calcite/quartz crystal induces polarizations [math]\vec{P}[/math] in photons differentially, according to their momenta [math]\vec{p}[/math] [...] Supposedly, you can show mathematically that those average values <p(x,y)> converge towards the actual original wave-function's momentum at that point.

 

So what he's mathematically doing is working backwards to see how the photon traveled, even though that would assume that, during the entire process, the photon was a particle and not a wave?

Posted

A wave-function has a momentum expectation value at every point in space, proportional to the gradient of the wave-function at that point:

 

[math]\propto \left( - \imath \hbar \frac{\partial}{\partial x} \right) \Psi(x) [/math]

By analogy, a wave-function is a little like a flock of birds flowing through space, with each "bird", at each point in space, having some momentum at that point.
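Made explicit (a standard result, writing the wave-function in polar form [math]\Psi = R\, e^{\imath S/\hbar}[/math]), the local momentum of each "bird" is the gradient of the phase:

[math] p(x) \equiv \frac{\mathrm{Re}\left[ \Psi^{*}(x) \left( -\imath \hbar \frac{\partial}{\partial x} \right) \Psi(x) \right]}{\left| \Psi(x) \right|^{2}} = \frac{\partial S}{\partial x} [/math]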

Posted (edited)

A wave-function has a momentum expectation value at every point in space [...] a little like a flock of birds flowing through space.

 

Yeah, wave-functions change over distance, but in order to "work backwards" from where a photon was measured at a definite location, wouldn't that have to assume the photon was a point, or in a defined location, the whole way through?

Edited by questionposter
Posted (edited)

I understand that quantum wave-packets have calculable "centroids" or "expectation values", and that those "centroids" evolve through time according to the Classical equations of motion, whilst obeying Classical conservation laws. I.e., our macroscopic notions of "Classicality" arise from the "average" behavior of quantum wave-packets. In analogy, wave-packets are like "schools of fish" or "flocks of birds": characterizable by an average "center of mass" and an average "overall momentum", but also spread out through physical space (x, y, z) ("many, many birds here & there") and momentum space (px, py, pz) ("some birds flying fast left, others flying slow right").
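That statement, that the centroids obey the Classical equations of motion, is Ehrenfest's theorem; for a particle in a potential [math]V[/math]:

[math] \frac{d\langle \hat{x} \rangle}{dt} = \frac{\langle \hat{p} \rangle}{m} \; , \qquad \frac{d\langle \hat{p} \rangle}{dt} = -\left\langle \frac{\partial V}{\partial x} \right\rangle [/math]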

 

For example, for a fermion (F), composed of a "spin up" component (+), and a "spin down" component (-):

 

[math]\tilde{\Psi}_F(\vec{x}) = \Psi_{+}(\vec{x})|+\rangle + \Psi_{-}(\vec{x})|-\rangle[/math]
("flock of sea-gulls & flock of crows")

the "expected (linear) momentum":

 

[math]\langle \vec{p} \rangle \equiv \int d^3 x \tilde{\Psi}_F^{*}(\vec{x}) \left( -\imath \hbar \vec{\bigtriangledown} \right) \tilde{\Psi}_F(\vec{x})[/math]

 

[math] = \left( \int d^3 x \Psi_{+}^{*}(\vec{x}) \left( -\imath \hbar \vec{\bigtriangledown} \right) \Psi_{+}(\vec{x}) \right) + \left( \int d^3 x \Psi_{-}^{*}(\vec{x}) \left( -\imath \hbar \vec{\bigtriangledown} \right) \Psi_{-}(\vec{x}) \right) [/math]

 

[math] = \langle \vec{p}_{+} \rangle + \langle \vec{p}_{-} \rangle[/math]
("sea-gulls' momentum + crows' momentum")

and the "expected angular (spin) momentum":

 

[math]\langle s_z \rangle \equiv \int d^3 x \tilde{\Psi}_F^{*}(\vec{x}) \left( \hat{s}_z \right) \tilde{\Psi}_F(\vec{x})[/math]

 

[math] = \frac{\hbar}{2} \left( \int d^3 x \Psi_{+}^{*}(\vec{x}) \Psi_{+}(\vec{x}) \right) - \frac{\hbar}{2} \left( \int d^3 x \Psi_{-}^{*}(\vec{x}) \Psi_{-}(\vec{x}) \right) [/math]

 

[math] = \frac{\hbar}{2} \left( P_{+} - P_{-} \right) \; , \qquad P_{\pm} \equiv \int d^{3}x \, \left| \Psi_{\pm}(\vec{x}) \right|^{2} [/math]
("sea-gulls' whirling clockwise - crows' whirling counter-clockwise")

are well-defined quantities, whose evolution through time obeys Classical equations of motion.
Thus, via their expectation values, quantum wave-packets (of fermions) can be put into correspondence with Classical 'particles' having four degrees of freedom (px, py, pz, sz). Naively, a system of [math]N[/math] fermions has [math]4N[/math] dof. And such a system can undergo [math]\left( \begin{array}{c} N \\ 2 \end{array} \right) = \frac{N (N-1)}{2}[/math] interactions between pairs of particles.

 

Now, each interaction between wave-packets co-mingles, i.e. entangles, those wave-packets ("two flocks of birds collide, and get all mixed up"). And each interaction imposes conservation laws, e.g. conservation of momentum [math]\left( \langle \vec{p}_1 \rangle + \langle \vec{p}_2 \rangle = \langle \vec{p}_1 \rangle' + \langle \vec{p}_2 \rangle' \right)[/math], and of spin [math]\left( \langle s_{z,1} \rangle + \langle s_{z,2} \rangle = \langle s_{z,1} \rangle' + \langle s_{z,2} \rangle' \right)[/math], from before to after the interaction (primes denote post-interaction values).

 

So, if the number of interactions, i.e. the number of constraints, equals the number of dof, then the system is "uniquely determined", and must adopt a unique set of values for those dof ("number of equations = number of unknowns"). Naively, this occurs for [math]N \ge 9[/math]; i.e., up to eight fermions can be mutually co-entangled and still manifest quantum indeterminacy, with at least four non-determined dof corresponding to the linear momentum and angular momentum of the entangled "ensemble" of fermions ("the momentum & spin of the 'super-flock' of all the now-co-mingled flocks").
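Spelling out that threshold: the pairwise constraints first catch up with the degrees of freedom when

[math] \frac{N(N-1)}{2} \geq 4N \quad \Longleftrightarrow \quad N - 1 \geq 8 \quad \Longleftrightarrow \quad N \geq 9 [/math]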

 

I understand that interactions "whittle away" the remaining non-determined dof, until the system becomes uniquely specified, at which point quantum "decoherence" and wave-function "collapse" occur:

 

"a combination of measurements or encounters, which progressively limit the remaining degrees of freedom possessed by a quantum entity, will have the same effect, where the wavefunction collapses and a classical outcome is the result [i.e.] the conflicting interests of too many quantum mechanical effects happening at once, that leads to interference" (dickau)
Edited by Widdekind
