Everything posted by Markus Hanke
-
Mathematics is Inconsistent!
Markus Hanke replied to Willem F Esterhuyse's topic in Analysis and Calculus
The original Banach-Tarski “paradox” explicitly concerns a solid ball, not a sphere, in Euclidean 3-space (see the first sentence): https://en.wikipedia.org/wiki/Banach–Tarski_paradox Not that this really makes any difference, because any 2D surface element of a sphere can likewise be considered a non-measurable ensemble of points, in which case the surface area of that subset wouldn’t be self-consistently defined either. So you could - using a process similar to the one in the original “paradox” - deconstruct a sphere and reconstruct it in some other way, without the original surface area being conserved. There still isn’t an inconsistency, because this has to do with measures. -
Mathematics is Inconsistent!
Markus Hanke replied to Willem F Esterhuyse's topic in Analysis and Calculus
No, it doesn’t. What it does say is that you can decompose a Euclidean 3-volume into a finite number of subsets, each of which is itself a non-measurable collection of infinitely many points, and then reassemble these subsets in a new way. The crucial point here is that you cannot uniquely and self-consistently define the notion of ‘spatial volume’ for a non-measurable infinite collection of individual points, so this decomposition does not preserve the original volume, contrary to naive intuition. It’s a subtle ‘trick’ of sorts to do with Lebesgue and Banach measures.

IOW, the Banach-Tarski paradox breaks down and reassembles a 3-volume in a way that does not itself preserve the original volume. Thus it is hardly surprising that you can turn a ball into two balls in this manner - in fact you could turn a ball into anything at all, no matter how big or small. It isn’t a true paradox, and most certainly not an inconsistency in mathematics.

Also, don’t forget that unfortunately we do not really live in an infinitely sub-divisible 3-dimensional Euclidean world where such a procedure could in fact be implemented - otherwise it would be a neat little trick with lots of interesting applications! -
Non-locality means that the outcome of an experiment/measurement performed at a specific point (be that in spacetime, or in some abstract state space) depends explicitly on what happens at another point; the outcome is thus not uniquely determined by physical conditions in a small local neighbourhood alone. Consider again the example of the entangled wave function I gave earlier. The respective observable here is the probability of finding one of the particles in a specific state. For example, the local probability of finding particle A in the state ‘1’ is exactly ½; simultaneously, the local probability of finding particle B in state ‘1’ is also exactly ½. The global probability of finding state ‘10’ is ½, and the global probability for ‘01’ is ½. At the same time, the probability of the overall composite state being ‘00’ or ‘11’ is exactly zero. At no point is any of these probabilities a function of coordinates - distant or otherwise - or of the state of the other particle, so it is meaningless to speak of this situation as being non-local. It is, however, quite meaningful and natural to speak of the overall composite wave function as being non-separable, which is purely a stochastic statement and has nothing to do with locality. It is also an example of the absence of local realism, which is a more general concept than locality.

No, that is not at all what entanglement means. Please refer back to my previous post - entanglement means that the overall wave function of the composite system has a reduced set of possible composite states as compared to the same system sans entanglement relationships. At no point does this make any reference whatsoever to the spatial separation between these particles. Again, entanglement is purely a stochastic phenomenon to do with the form of the overall wave function; it is entirely separate from any embedding of this situation into a particular spacetime. Note also that you can entangle more than just two particles at a time, again irrespective of how far apart the constituents of such an ensemble are.

Also no. Decoherence is a purely local phenomenon - it means that local degrees of freedom of a wave function become coupled to local degrees of freedom of its immediate environment, e.g. as a result of performing a measurement. Note that the global situation - i.e. the original system plus the environment it came into contact with - remains completely coherent, and thus global unitarity is conserved in this process, as of course it must be. For example, if you perform a spin measurement on particle A of our entangled pair, then its spin direction becomes coupled to the mechanism of the measurement apparatus. You now have a new statistical correlation - between particle A and the measurement apparatus it comes into contact with, as opposed to between particle A and particle B. The exchange of information involved here is thus purely local, even if the entanglement between possibly distant particles is broken in the process.

Two particles being entangled fundamentally precludes the possibility of them interacting in any way after the point when the correlation was first established, irrespective of the nature of such an interaction (FTL or not).
Interacting particles cannot be entangled, since the composite wave function of such a system cannot have the form quoted earlier while still maintaining local probabilities of ½ during the measurement of the entangled property.
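As a worked check of the ½ probabilities quoted above, take the entangled wave function referred to there,

\[|\psi \rangle =\frac{1}{\sqrt{2}}\left(|01\rangle +|10\rangle \right)\]

The local probability of finding particle A in state ‘1’ is then the sum over the composite outcomes compatible with that result:

\[P( A=1) =|\langle 10|\psi \rangle |^{2} +|\langle 11|\psi \rangle |^{2} =\frac{1}{2} +0=\frac{1}{2}\]

and no coordinates or distant states enter this computation anywhere.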
-
Let’s look at this whole quantum entanglement business systematically, because I really don’t think it requires 22 pages of discussion and argument to understand this. It may be counter-intuitive, but it really isn’t that complicated.

Suppose you have - to begin with - two completely separate particles, which aren’t part of a composite system; their states are thus entirely separate, and denoted by

\[|A\rangle ,|B\rangle\]

Don’t mind the precise meaning of this mathematical notation; it simply denotes two separate particles being in two separate states, where the outcomes of measurements are probabilistic, and not in any way correlated. No mystery thus far.

Now let’s take the next step - we combine the two particles into a composite system. The state function of that composite system is then the tensor product of the states of the individual particles, like so:

\[ |\psi \rangle =|A \rangle \otimes |B \rangle \equiv |AB \rangle\]

Again, don’t mind the precise definition of these mathematical operations; the idea here is simply that our two particles A and B form a composite system. Let’s, for simplicity’s sake, assume that each particle can have only two states, ‘0’ and ‘1’ - the physical meaning of the tensor product above is then that it combines each possible state of one particle with each possible state of the other, so the overall combined system can have four possible states:

\[|00\rangle ,|01\rangle ,|10\rangle ,|11\rangle\]

Thus the overall combined state of the particle pair is (I will omit the coefficients here, as the precise probabilities aren’t important):

\[|\psi \rangle =|00\rangle +|01\rangle +|10\rangle +|11\rangle\]

This is an example of a system that is not entangled - the combined state function can be separated into the individual states of the constituents, and all combinations are possible (though not necessarily with equal probability). Non-entangled states are separable into combinations of states of the individual constituent particles - they are tensor products of individual states - which means physically that there are no correlations between outcomes of measurements performed on the constituent particles. If you get state ‘0’ for a measurement on particle A, then you can get either state ‘0’ or state ‘1’ for a measurement on B, and these outcomes are statistically independent of each other. Mathematically, the tensor product makes no reference to the separation of the particles, ie it is not a function of their position, hence neither is the overall combined state.

An entangled 2-particle state, on the other hand, looks like this:

\[|\psi \rangle =\frac{1}{\sqrt{2}}\left(|01\rangle +|10\rangle \right)\]

Notice three things:

1. Compared to the non-entangled state, two of the possible measurement outcomes are missing; the set of possible outcomes is reduced.
2. The combined state cannot be uniquely separated into tensor products of individual states; it is non-separable.
3. The form of the combined state does not depend on the spatial (or temporal) position of the particles - it is purely a stochastic statement, not a function of spacetime coordinates.

What does this physically mean? Because the set of possible measurement outcomes in the overall state is reduced as compared to the unentangled case, there is now a statistical correlation between measurement outcomes - with emphasis on the term statistical. There are now only two possible combinations, as opposed to four in the unentangled case.
This is the defining characteristic of entanglement - it restricts the pool of possible combinations of measurement outcomes, because the overall state cannot be separated, due to there being extra correlations that weren’t present in the unentangled case. This is purely due to the form of the combined wave function - the outcome of individual measurements on each of the constituents is still purely stochastic, and not (!!!) a function of distant coordinates.

Because the outcome (statistical probability) of local measurements is not a function of coordinates or any distant states, it is completely meaningless to say that this situation is somehow non-local, or requires any kind of interaction, be it FTL or otherwise. The entire situation is fully about statistics and correlations, which is not the same as a causal interaction; in fact, any interaction between the constituents (including FTL ones) would change the combined wave function and preclude the possibility of there being a statistical correlation while at the same time maintaining the stochastic nature of the outcomes of individual measurements. This is evident in the fact that the entanglement property of the above state function isn’t encoded in any kind of coordinate dependence, but rather in a reduction of terms, ie in a reduced pool of possible outcomes. This hasn’t got anything to do with locality at all, but is purely a statistical phenomenon.

Hopefully this either helps, or possibly it might spark off another 22 pages of discussion
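A minimal numerical sketch of the above (assuming NumPy; the basis ordering |00⟩, |01⟩, |10⟩, |11⟩ and all variable names are mine, purely for illustration):

```python
import numpy as np

# Basis ordering: |00>, |01>, |10>, |11>
product = np.ones(4) / 2.0                           # (|0>+|1>) x (|0>+|1>), normalised
bell = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)   # (|01>+|10>)/sqrt(2), entangled

for name, psi in [("product", product), ("entangled", bell)]:
    probs = np.abs(psi) ** 2          # Born rule: probabilities of the 4 outcomes
    p_a1 = probs[2] + probs[3]        # marginal P(A=1): outcomes |10> and |11>
    print(f"{name}: outcome probabilities = {probs}, P(A=1) = {p_a1}")
```

Running this, the product state yields all four outcomes with probability ¼ each, while the entangled state yields only ‘01’ and ‘10’ with probability ½ each - yet P(A=1) is ½ in both cases, and no positions or distances enter anywhere.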
-
Yes, that’s precisely my point. It is meaningless to speak of length contraction and time dilation “happening” to rulers or clocks. It’s always a relationship between two rulers, or two clocks.

No. That would be like saying that a topographical map of your local area relies on an “aether” just because it uses a coordinate grid. To be sure, you can make that claim without affecting the usefulness of the map itself, if you so wish, but it doesn’t add anything to the information contained therein. Spacetime is just the same - it’s quite simply a map of events that allows you to determine separations and angles. There is no implication that we need to reify this into some kind of physical substance.

This guy disagrees:

Yes! Very important observation +1. This very lucidly demonstrates why a specific choice of coordinate system cannot carry physical relevance, so far as the form of physical laws is concerned.
-
Using entanglement is not forbidden by relativity??
Markus Hanke replied to Lorentz Jr's topic in Relativity
Sorry, yes. That is what I actually had in mind - I’m not used to the “ict” convention, which I haven’t seen used all too often in more modern texts. Thanks for correcting +1 -
Using entanglement is not forbidden by relativity??
Markus Hanke replied to Lorentz Jr's topic in Relativity
Such a metric would be isomorphic to the Euclidean one, and thus cannot give rise to Lorentz transformations, an invariant speed of light, or any other relativistic phenomenology. It’s precisely that - time and space parts of the metric having opposite signs - that yields relativity. Note that the choice of which part has which sign is arbitrary, so long as they are opposite. -
I think it is best to just keep the situation fully classical, and consider only physical clocks to begin with, rather than wave functions. The question of evolution operators in RQM is complex and very non-trivial, and does little to illuminate the underlying question here. Time dilation is a relationship between reference frames, and not something that physically “happens” to a single clock. Asking for a mechanism that “slows down” some clock is thus meaningless - clocks always tick at the same rate within their own frames. So the correct question would be why inertial frames are related via hyperbolic rotations in spacetime - that’s a very valid question, but it isn’t one that any of our present theories can answer. So to make a long story short, we don’t have an explanation of why this happens, only a description of it. That’s not the same thing at all.

The length of a world line between given events in Minkowski spacetime is defined to be equivalent to the proper time of a clock travelling between these events that traces out that world line. In other words, it’s simply the total elapsed time that’s physically measured on a clock that travels along a specific spatial path between events (see the formula at the end of this post).

Intuitiveness is not a necessary condition for a mathematical model to be valid and useful. It just needs to be internally self-consistent, and produce results that can be verified using the scientific method. I think you would agree that SR does this quite well. Besides, something being intuitive (or not) is a very subjective measure - many things I find intuitive might appear otherwise to you, and vice versa.

I would, by and large, agree with you - though I wouldn’t put it in such strong terms. I just think many depictions of physical concepts get the differences between what is an explanation and what is a description muddled up, especially within pop-sci publications. We do not yet know the underlying mechanism of why spacetime is what it is, but we do have an excellent description of its features. To fully understand why spacetime gives rise to the phenomenology we see, we’d have to first figure out how spacetime itself comes to be, and whether it can be broken down further into more fundamental concepts. Such attempts are under way, but at present they are just ideas and conjectures.

I disagree. Physics makes models of the world around us, but not all of these models purport to be fundamental explanations in ontological terms. As such, SR is a very good model that is in excellent agreement with experiment and observation. It’s just important not to confuse a model with an (ontological) explanation, because they are not the same.
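As referenced above, in units where c = 1 the world-line length, and hence the proper time, reads (a standard textbook formula, using the same sign convention as elsewhere in this thread):

\[\tau =\int ds=\int \sqrt{( dt)^{2} -( dx)^{2} -( dy)^{2} -( dz)^{2}} =\int \sqrt{1-v^{2}}\, dt\]

This is the elapsed time a clock physically accumulates along its specific path between the two events.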
-
Using entanglement is not forbidden by relativity??
Markus Hanke replied to Lorentz Jr's topic in Relativity
Please define what you mean by “real geometry”? Also, the components of the metric tensor in Minkowski spacetime are real-valued, so I’m not sure what you mean by “complex metric”. -
There are three types of frequency shift - gravitational, cosmological, and Doppler. If you look at just one single object in isolation, then you are right in that one cannot uniquely decompose total frequency shift into these three components, based on mere observation of light received from that object. However, we don’t see objects in isolation, but an entire background of very distant objects - so we can examine them in a larger context. What we find is that there is a clear correlation between observed redshifts and distance, and that this correlation is uniform across all directions in the sky. Furthermore, we observe that all these objects recede away from each other, not just from us. This is entirely inconsistent with how gravitational redshift would work. So no, there is no contradiction - we are simply opting for the explanation that best fits the available observational data.
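For reference, should the contributions need to be combined, they compose multiplicatively rather than additively - schematically, in terms of the respective redshift factors:

\[( 1+z_{total}) =( 1+z_{Doppler})( 1+z_{grav})( 1+z_{cosmological})\]

so for a single object only the total is directly observable, which is exactly why the wider context of many objects is needed.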
-
The terminology is mostly for historical reasons, I think, though of course (at least in the case of Schrödinger) many of the physically relevant solutions to these equations are wave-like. But, as exchemist has pointed out, technically speaking they are diffusion equations.
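One can see this directly by rearranging the free Schrödinger equation - it has exactly the form of a heat/diffusion equation, only with an imaginary diffusion coefficient:

\[ i\hbar \frac{\partial \psi }{\partial t} =-\frac{\hbar ^{2}}{2m} \nabla ^{2} \psi \quad \Rightarrow \quad \frac{\partial \psi }{\partial t} =\frac{i\hbar }{2m} \nabla ^{2} \psi \]

It is that factor of i in the ‘diffusion constant’ which produces wave-like solutions instead of ordinary diffusive spreading.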
-
Some of these effects would be very obvious and easy to observe - for example the frequency dependence of the speed of (massive) photons. With modern telescopes we can observe objects and events at distances on the order of ~billions of LY, and for those distances the delay in arrival between photons near the blue end of the visible spectrum and ones at the red end would be on the order of months or even years. Effectively we would see high-energy photons arriving first, and lower-energy ones from the same source some ~months later. That’s evidently not what is happening. And if the effect doesn’t show up on scales of ~billions of LY, then any potential rest mass of the photon would be so vanishingly small as to be wholly unable to account for the observed DM effects (currently, the upper limit for the photon mass is on the order of 10^(-54) kg).

The other major issue is the impossibility of having a photon mass in the presence of U(1) gauge invariance, which is an integral part of the Standard Model. If this symmetry were broken - as it would have to be in order for photons to have any mass at all - then this would have consequences not just in the EM sector, but in all the rest of the Standard Model. It’s not immediately obvious exactly what would happen in the QFD and QCD sectors, but I think it is safe to say that the Standard Model Lagrangian would need to look radically different - this isn’t just some subtle deviation that we might have missed in our particle accelerators, but more like a completely different particle zoo. Yet, the experimental data we get from our accelerators is by and large in excellent agreement with the Standard Model as it currently stands.
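To put a rough number on the first point, here is an order-of-magnitude sketch (the constants, sample energies, and function are mine, for illustration; it uses the standard relativistic dispersion relation, for which v/c ≈ 1 − (mc²)²/2E² when E ≫ mc²):

```python
c  = 2.998e8    # speed of light, m/s
eV = 1.602e-19  # joules per electronvolt
ly = 9.461e15   # metres per light year

def arrival_delay(m_kg, dist_m, e_low_eV, e_high_eV):
    """Extra travel time of the lower-energy photon relative to the
    higher-energy one over a distance dist_m, if photons had rest mass m_kg."""
    mc2 = m_kg * c**2  # rest energy in joules
    e1, e2 = e_low_eV * eV, e_high_eV * eV
    return (dist_m / c) * (mc2**2 / 2.0) * (1.0 / e1**2 - 1.0 / e2**2)

# Red (~1.8 eV) vs blue (~3.1 eV) light from a source 5 billion LY away,
# at the current upper bound on the photon mass (~1e-54 kg):
print(arrival_delay(1e-54, 5e9 * ly, 1.8, 3.1))   # ~5e-21 seconds
```

At the current upper mass limit the red/blue delay over 5 billion LY works out to ~5×10⁻²¹ s; to push it up to months, the photon mass would have to be roughly thirteen orders of magnitude larger, which is comfortably ruled out.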
-
It’s a probability amplitude - in order to extract actual probabilities from it, you take its squared norm, which is a real-valued probability density, and integrate that over the region you are interested in. The wave function itself is always complex-valued, and it can be shown that it is fundamentally necessary for that to be the case: https://physicsworld.com/a/complex-numbers-are-essential-in-quantum-theory-experiments-reveal/ That being said, the observable quantities derived from the wave function are of course always real-valued.

Actually, Maxwell’s equations in their most general form involve only one entity, the electromagnetic field:

\[dF=0\]

\[d\star F=4\pi \star J\]

The above equations are completely independent of the choice of coordinate system, and are valid in all spacetimes. You can of course associate the 2-form above with a rank-2 tensor, and then express the components of that tensor as a mix of E and B fields. This will yield the usual four equations for the components of E and B. The problem is that the form of these four equations depends on the observer.

The solutions to the Dirac equation are complex-valued bispinors. The Pauli equation is the non-relativistic limit of the Dirac equation; its solutions are ordinary spinors that are also complex-valued. In general, quantum mechanical wave functions are always complex-valued, irrespective of exactly which representation of the Lorentz group you are dealing with. I suspect that’s because the complex numbers have a richer “structure” than can be represented by pairs of real numbers.
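To illustrate the first paragraph numerically, here is a toy sketch with a normalised Gaussian wave packet (NumPy assumed; all numbers are mine, purely for illustration):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

# Complex-valued wave function: Gaussian envelope times a plane-wave phase
psi = np.exp(-x**2 / 2.0) * np.exp(1j * 3.0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # normalise: total probability = 1

density = np.abs(psi) ** 2                      # real-valued probability density
mask = (x > 0.0) & (x < 1.0)
print(np.sum(density[mask]) * dx)               # P(0 < x < 1), ~0.42
```

The wave function here is complex (note the plane-wave phase), but the density and the extracted probability (~0.42 for this packet and interval) are real, as they must be.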
-
The idea of photons perhaps having a tiny rest mass isn’t new - however, on a fundamental level this would create huge problems. Think about this for a minute:

1. Charge conservation in QED would no longer hold
2. Gauge invariance in QED would need to be violated (ordinary U(1) gauge invariance cannot give rise to photons with rest mass) - effectively meaning that QED and much of the rest of the Standard Model cease to be valid models
3. Photons would travel at different speeds (the speed of light would depend on frequency), meaning for far-away events we would see more energetic photons arrive here first
4. The strength of the electrostatic force would be weaker over large distances as compared to small distances (see the sketch below)
5. There would be three (as opposed to two, for massless photons) possible modes of polarisation

And probably many more. Needless to say we do not observe any of these things in the real world, which is why we can say with very high confidence that photons are most likely massless.
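On point 4: a photon mass m would replace the Coulomb potential with a Yukawa potential, exponentially suppressed beyond the range ħ/mc. A rough sketch (the constants, sample mass, and function name are mine, for illustration):

```python
import math

hbar = 1.055e-34  # J s
c = 2.998e8       # m/s

def yukawa_over_coulomb(r_m, m_photon_kg):
    """Ratio of the Yukawa potential to the Coulomb potential at distance r:
    exp(-r / lambda), with force range lambda = hbar / (m c)."""
    lam = hbar / (m_photon_kg * c)
    return math.exp(-r_m / lam)

# At the current upper mass bound (~1e-54 kg), lambda ~ 3.5e11 m (~2 AU):
for r in [1.0, 1e9, 3.5e11, 1e13]:  # metres
    print(f"r = {r:.1e} m: ratio = {yukawa_over_coulomb(r, 1e-54):.3g}")
```

This is why precision tests of Coulomb’s law over large distances directly constrain the photon mass: any departure from the 1/r form would show up first at the largest separations.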
-
What is gamma factor of object, which is falling into black hole?
Markus Hanke replied to DimaMazin's topic in Relativity
I’m sorry, but I don’t understand what you are trying to do here. What do you mean by “minimal mass”? What kind of black hole are we talking about (presumably Schwarzschild)? Kinetic energy is an observer-dependent concept, so which frame are you working in? -
Relativity of Time does not Make Sense.
Markus Hanke replied to Willem F Esterhuyse's topic in Speculations
“Making sense” isn’t a valid scientific argument, because it is relative to some specific observer’s subjective state of knowledge and understanding, both of which are objects present in - and generated by - the human mind, and thus do not necessarily correspond to objective reality. The human mind is often a pretty bad judge of such things. There are plenty of concepts that don’t “make sense” to many (or even most) people, yet they are still demonstrably in agreement with experiment and observation.

Also, there are many things in today’s world that wouldn’t have made sense at all to the average person in - say - the year 1200, such as heavier-than-air flying machines, electricity, microbes, or TV, to name just a few. You would have been laughed out the door (or burned at the stake) had you tried to explain any of those things to people back then. So “making sense” is also a product of environment, time, and culture - not to mention a specific sensory apparatus, the structure of the brain, etc - and thus entirely useless as a measure of which beliefs about the world are justified, and which ones are not.

As it stands, relativity of time (ie time dilation and relativity of simultaneity) is in agreement with experiment and observation to such an overwhelmingly high degree that one might as well take it for granted by now. As it happens, crucial aspects of the hardware of the computer you are using to read this post - such as the chemical properties of platinum - are due to relativistic effects that rely on the relativity of time. Another everyday example is the lead-acid battery in your car; without relativity, the voltage between that battery’s terminals would only be in the region of ~2V, rather than the usual 12V. This is because some of the electrons in heavier elements move at relativistic speeds, so such effects become important for the chemical properties of atoms. Thus, relativity isn’t just an abstract idea that has no bearing on our everyday world; it has measurable and observable consequences even in our human domain of experience. -
Is "positionary-temporal" uncertainty built into spacetime?
Markus Hanke replied to geordief's topic in Relativity
No, but I think you have picked up on that yourself already. The crucial feature of Minkowski spacetime is found in how it defines the separation between points. You might remember from your school days the Pythagorean theorem - if you have a pair of points in some Euclidean space, the squared separation between them is the sum of the squares of the coordinate differences:

\[(\Delta s)^{2} =( \Delta x)^{2} +( \Delta y)^{2} +( \Delta z)^{2}\]

In Minkowski spacetime, you have one additional dimension, being time - so the squared separation will involve four coordinates. However, unlike Euclidean space, Minkowski spacetime does not simply add them; instead, time and space carry opposite signs in the separation formula, like so:

\[(\Delta s)^{2} =( \Delta t)^{2} - ( \Delta x)^{2} -( \Delta y)^{2} -( \Delta z)^{2}\]

So the squared separation isn’t just the sum of (squared) spatial separations, but the difference between the (squared) separations in time and space:

(total separation)^2 = (separation in time)^2 - (separation in space)^2

Note that the choice of signs is arbitrary - I could have made time negative and space positive, without affecting the result. This is an example of hyperbolic geometry (as opposed to Euclidean geometry).

What does this do physically? Well, having a difference rather than a sum enables you to make simultaneous changes to the space part and the time part in equal but opposite measure, without affecting the overall separation in any way. So you can trade a decrease in space for an increase in time (or vice versa), and still end up with the same overall separation. And that’s exactly what happens in Special Relativity - for example, if you are looking at a clock passing you by at relativistic speed, you’ll find that the clock is time-dilated (meaning it takes longer for the clock’s hands to move, from your point of reference), while at the same time the clock itself will be length-contracted in its direction of motion, so its size becomes shorter (again, from your point of reference). So in this scenario, and from your point of reference, “space is traded for time”, in a manner of speaking. This happens in equal but opposite measure - the decrease in size is by the same factor as the increase in time - which is why the ratio between them always remains the same, which physically means that the speed of light is the same in any inertial frame. This is purely a consequence of the hyperbolic geometry of Minkowski spacetime. A simple change in signs makes all the difference!

The above is very simplified and not especially rigorous, but hopefully you get the central idea.
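For anyone who wants to see the “trading” explicitly, here is a quick numerical check (a minimal sketch in units where c = 1; the chosen speed and event separations are arbitrary):

```python
import math

def boost(t, x, v):
    """Lorentz boost along x, in units where c = 1 (a hyperbolic rotation)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

t, x = 5.0, 3.0               # time and space separation of two events, frame 1
t2, x2 = boost(t, x, v=0.8)   # the same separations as seen from frame 2

# The parts change, but the combination t^2 - x^2 is identical in both frames:
print(t**2 - x**2, t2**2 - x2**2)   # 16.0 16.0 (up to rounding)
```
-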
What is gamma factor of object, which is falling into black hole?
Markus Hanke replied to DimaMazin's topic in Relativity
Yes, of course. It depends on the effect. In the simplest cases, they just add - for example, the total difference in tick rates between a clock on Earth and a clock in an orbiting satellite will just be the sum of the gravitational time dilation and the kinematic time dilation between these frames.

Yes. Personally I associate the gamma factor with inertial frames in Minkowski spacetime, since gamma arises from Lorentz transformations. I think in the interest of clarity and consistency it is best to avoid this terminology when working in curved spacetimes, and just refer to the specific quantity in question instead. For example, in the OP’s scenario it would be best to speak about time dilation, rather than the gamma factor, simply to avoid unnecessary confusion. There’s also the danger that someone might naively take the gamma factor and apply it to quantities that ‘behave’ differently in the presence of gravity - take for example the OP’s scenario, but use the observed length of the falling object as the quantity in question, rather than time dilation. The result won’t be correct, because in an inhomogeneous gravitational field you have extra tidal effects that don’t exist in Minkowski spacetime. To be fair, you could again separate the various effects, as you suggested - but I think you can see the potential confusion a naive application of gamma to frames in curved spacetimes might cause. -
What is gamma factor of object, which is falling into black hole?
Markus Hanke replied to DimaMazin's topic in Relativity
The gamma factor is used to characterise the relationship between inertial frames in flat spacetime, ie between frames that are related via Lorentz transformations. When you have a test particle freely falling into a black hole, it will trace out a world line in a spacetime that is not flat - you can still choose another far-away frame as reference, and both of these will be locally inertial, but spacetime between them isn’t flat, so these frames are not related by simple Lorentz transformations. Hence, asking about what the gamma factor between these frames will be is meaningless - it is only defined for frames that are related via Lorentz transforms. -
There are Physical Concepts that is Left Up To Magic
Markus Hanke replied to Willem F Esterhuyse's topic in Speculations
In classical vacuum, you have at a minimum two fields defined at each point - the metric tensor field (gravity), and the electromagnetic field. Both of these are rank-2 tensors, so there’s lots more going on than a single number. Note that even at points where the EM field strength is zero, you can still have physical effects resulting from the presence of its underlying potentials (eg the Aharonov-Bohm effect). In quantum vacuum, in addition to the above, you’ll also have the full menagerie of all the various quantum fields associated with the Standard Model, even in the absence of any particles. This matters because, unlike in the classical case, the energy of the vacuum ground state of these fields is not zero, and you can get various physical effects resulting from this.

This is true only for fermions, but not for bosons.

No. What matters are the physical effects a field has; again, the Aharonov-Bohm effect is a good example.

The laws of physics do not depend on the choice of reference frame. You can change your coordinate system at any time without affecting any laws (general covariance). Note that this also does not change the number of coordinates required to uniquely identify a point. -
Exponents - Why is 2 to the power of 1 not 4?
Markus Hanke replied to MathHelp's topic in Linear Algebra and Group Theory
That’s not a very good definition, IMHO - especially since the exponent can be any number, even a negative one, a fraction, an irrational number, or a complex number. Let’s stick to simple natural numbers like 1, 2, 3, ... for now. Exponentiation is then a short-hand notation for a multiplicative series starting at 1, followed by as many multiplications by the base number as indicated in the exponent: 1 x ... x ... x ... and so on. Thus:

2^0 means you start at 1, followed by no further multiplications. Thus 2^0 = 1.
2^1 means you start at 1, followed by exactly one multiplication by 2. Thus 2^1 = 1x2 = 2.
2^2 means you start at 1, followed by two consecutive multiplications by 2. Thus 2^2 = 1x2x2 = 4.
2^3 means you start at 1, followed by three consecutive multiplications by 2. Thus 2^3 = 1x2x2x2 = 8.

And so on. Does this make sense now? (See also the little code sketch below.)
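The same definition, written out as a minimal code sketch (illustrative only):

```python
def power(base, exponent):
    """Exponentiation for natural-number exponents: start at 1, then
    multiply by the base as many times as the exponent says."""
    result = 1
    for _ in range(exponent):
        result *= base
    return result

print(power(2, 0), power(2, 1), power(2, 2), power(2, 3))   # 1 2 4 8
```

Note how an exponent of zero naturally gives 1, because the loop body never runs and the starting value of 1 is returned unchanged.
-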
There are Physical Concepts that is Left Up To Magic
Markus Hanke replied to Willem F Esterhuyse's topic in Speculations
Pick a random point in - say - your living room. At that point, you can define a value for air temperature - a scalar. At the same time, you can define a value for air pressure at that same point - another scalar. You can further define a quantity to measure air flow there - a vector, since it has magnitude and direction. Or you can define the stress within the air medium at that point - a tensor. Or perhaps you could look at the electromagnetic field there - a differential 2-form. And so on. So as you can see, not only can a single point ‘take’ more than one field value at a time (each of which reflects a different physical quantity), the fields themselves can consist of many different objects, not just simple scalars. They can even take more abstract objects that don’t have numerical components at all, such as operators. This is all rigorously defined, and works precisely as it should - the very computer you are using right now is built upon these principles. -
Do we really need complex numbers?
Markus Hanke replied to PeterBushMan's topic in Applied Mathematics
I think a far more interesting operation appears when one uses complex numbers as exponents - suddenly we are dealing with rotations and scalings, which is a much richer structure than real exponents can yield. These being linear transformations, it’s not surprising that there is a close connection to certain types of matrices. Either way, the results I linked to seem to show unambiguously that - for whatever reason - complex numbers are indispensable for QM.
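To make the rotation-and-scaling point concrete, Euler’s formula gives (a standard identity):

\[e^{a+ib} =e^{a}(\cos b+i\sin b) \quad \leftrightarrow \quad e^{a}\begin{pmatrix}
\cos b & -\sin b\\
\sin b & \cos b
\end{pmatrix}\]

so multiplication by e^(a+ib) scales the plane by e^a and rotates it by the angle b - which is exactly the matrix connection mentioned above.
-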
It doesn’t - all known physics obeys locality. What happens is rather that changes in one system that affect another are mediated by various kinds of fields. For example, the presence of electric charges implies the presence of an accompanying electromagnetic field, which extends throughout spacetime and thus affects other (distant) electric charges. This has all been worked out in detail - electromagnetism, strong and weak interactions are well described by quantum field theory, whereas gravity is described by General Relativity (which is also a field theory, but of a different type). There are no ‘actions at a distance’, in the sense of non-local effects.
-
Do we really need complex numbers?
Markus Hanke replied to PeterBushMan's topic in Applied Mathematics
This is true in classical physics, but as it turns out it is not true in quantum mechanics. You can construct a class of experiments where real-valued QM (replace complex numbers by pairs of real ones) makes predictions that are different from complex-valued QM, thereby opening up a way to test this experimentally. Turns out, complex Hilbert spaces are an essential feature of any QM formalism that describes the world accurately (within that domain): https://arxiv.org/abs/2101.10873