Everything posted by Markus Hanke
-
Uncovering the Neural Mechanics of Autism
Markus Hanke replied to NudeScience's topic in Medical Science
I am on the autism spectrum myself, and I do not exhibit any of these “symptoms” (never have). My hearing is also perfectly standard, and always has been. -
Einstein never said this. The quote is from Ernest Rutherford. I presume you mean “Big Bang”. Where do recession velocities come into this? It has no rest mass, since there isn’t any frame where it could ever be at rest, but it does have energy and momentum. There is no such thing as “anti-photons”; photons are their own antiparticles. You cannot construct the set of known particles, their interactions and properties from just these. Also, the proton is not a fundamental particle. No. Atomic nuclei are held together by the residual strong force. If there were any electrons present inside the nucleus, then the shell structures of all the elements would look very different. Everything else in that post is essentially meaningless technobabble.
-
Consider an arbitrary event located directly on the surface in question, and attach a light cone to that event. Now look at the tangent space to the surface at that event. If the surface is time-like, the tangent space will fall to the interior of the light cone; if the surface is null, the tangent space will coincide with the surface of the light cone. So this isn’t the same - you can (at least in principle) escape from a time-like surface to infinity, but you can’t escape from a null surface.

For some time, yes. This is true. I’d just like to point out that the non-existence of stationary frames below the horizon is not a consequence of tidal forces, but is due to the causal structure of spacetime; but you are right in that, for very massive BHs and just below the horizon, one could remain very nearly stationary for some time.

General relativistic optics is a notoriously tricky subject, so I won’t speculate on this too much, also because it would in some ways depend on how exactly you move once you are below the horizon. In principle though, for very massive BHs and just below the horizon, there shouldn’t be any extraordinary visual effects, other than some blue-shifting of distant stars.

It depends what you mean by “nearby”. Since below the horizon the r-coordinate becomes time-like in nature, all light cones will be tilted inwards - meaning you cannot see anything that is below you, since it is impossible for a photon to increase its r-coordinate, irrespective of how it is emitted. You can still communicate with particles co-moving along with you at the same radial distance. A particle higher up than you can send you messages, but your reply won’t ever reach that particle. So locally in your own frame nothing special happens, but once you start interacting with other local frames, I think you can always deduce that you are below a horizon.
It’s a direct consequence of the geometry of this kind of spacetime, and you can fairly straightforwardly calculate the tidal effects that occur. For a radial in-fall into a Schwarzschild black hole, what you’ll find is that the test body gets stretched along the radial direction, and compressed perpendicular to it (this effect is hence called “spaghettification”). The magnitude of these effects follows an inverse cube law, and also depends on the mass of the black hole. This is linked to, but not necessarily dependent on, gravitational time dilation - you can have time dilation without there being spatial tidal effects (but not vice versa).

For an observer who is stationary just outside the horizon, the astronaut will fall past him at nearly the speed of light, so the actual time it takes for a human body to cross the horizon is so short that no adverse effects could occur. The astronaut himself will never notice anything special as he falls through the horizon.

If you look at the diagram you posted, you will notice that below the horizon all light cones are tilted inwards, towards the singularity. Photons “live” on the surface of light cones, meaning no photon could ever increase its radial coordinate, i.e. move away from the singularity. This is not a tidal effect, but due to the causal structure of spacetime. Once emitted, a photon can only decrease its radial position wrt the singularity. Hence, for an astronaut falling feet-first, a photon emitted from his foot should not be able to travel “upwards” to his eyes. On the other hand though, the astronaut himself is of course also falling - so the real question is whether it is possible to set up the scenario such that the relative motion between photon and eyes can be made such that the falling astronaut can somehow “catch up” with the (also falling) light. I reserve final judgement here, as I think this is one of those situations where one would really have to go and work through the maths.
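A rough numeric illustration of the inverse-cube scaling mentioned above (a sketch of mine, not part of the original post; it uses the simple Newtonian tidal estimate 2GML/r³, which is sufficient to show the trend):

```python
# Sketch (illustrative only): Newtonian estimate of the radial tidal
# acceleration across a 2 m body at the Schwarzschild radius. The term
# 2*G*M*L/r^3 makes the inverse-cube law explicit, and shows why tides
# at the horizon are lethal for small black holes but negligible for
# supermassive ones (at the horizon, tidal stress scales as 1/M^2).

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    """r_s = 2GM/c^2 for a non-rotating black hole."""
    return 2 * G * mass_kg / c**2

def tidal_acceleration(mass_kg, r, body_length=2.0):
    """Difference in radial acceleration between head and feet (Newtonian estimate)."""
    return 2 * G * mass_kg * body_length / r**3

# Stellar-mass black hole: lethal tides already well outside the horizon.
m_small = 10 * M_SUN
a_small = tidal_acceleration(m_small, schwarzschild_radius(m_small))

# Supermassive black hole (roughly Sgr A*-sized): tiny tides at the horizon.
m_big = 4e6 * M_SUN
a_big = tidal_acceleration(m_big, schwarzschild_radius(m_big))

print(f"10 M_sun BH:  {a_small:.3e} m/s^2 across 2 m at the horizon")
print(f"4e6 M_sun BH: {a_big:.3e} m/s^2 across 2 m at the horizon")
```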
-
It’s a null surface, actually. The geometry of spacetime below the horizon is such that no stationary frames exist - in other words, no matter how much radial thrust the engines of the unfortunate ship put out, it will continue to experience radial decay as it ages into the future. So the two ships couldn’t remain at relative rest. What did you mean by “paradox” in the thread title?
-
In what way am I “not right”, exactly? As I have pointed out to you, the statistical significance figure currently stands at roughly 4σ; that’s not enough to establish LU violations as being physically real just yet. That’s just how it is. You find the raw data in the link I gave, so you can verify the figure yourself. If these violations are verified to be physically real by future measurements, then this will be a very exciting find - discovering new physics is the pinnacle of every physicist’s life, and thinking this is somehow perceived as a “threat to dogma” is simply ridiculous. No genuine physicist thinks this way. Personally I cannot wait to learn whatever structure underlies the Standard Model, and/or GR, though it’s perhaps unlikely to happen within my lifetime. Either way, the SM will continue to be used for cases where it is known to work well, just like Newtonian gravity continues to be used alongside GR, and classical mechanics alongside QM. Remember the purpose of physics: it makes models to describe aspects of the world. It is not about some notion of “truth”. Hence, a model will continue to be used for a specific purpose as long as it is useful, internally self-consistent, and delivers results that are in line with what we see in the real world.

Are there any known issues with the Standard Model? Most certainly - here is a list of the most obvious ones. It is precisely these issues that provide an impetus for continued research, in both the theoretical and experimental domains. I think this is all very exciting, because historically you’ll find that the phase when the limitations of an existing model become better understood generally precedes important new discoveries and paradigm shifts.

This is simply not true. If you look at the link I gave above, you will find in it not just a listing of the limitations of the Standard Model, but also a number of alternative models (not an exhaustive list). These alternatives continue to be extensively researched, and are taken seriously by the scientific community. However, as it stands, there isn’t enough evidence in favour of any of these, and some of the alternatives come with problems of their own.
-
Synchronizing clocks in different frames of reference.
Markus Hanke replied to geordief's topic in Relativity
I think (but maybe that’s just me) that the notion of “tick rate” is not particularly helpful, since no ideal clock can ever tick at anything other than “1 second per second” in its own frame, irrespective of where it is and how it moves. It is only when you compare the total accumulated time between two shared events that differences become apparent. “Tick rate” is one of those notions that, even though everyone routinely uses it, all too easily lends itself to misinterpretation.

Now, the total accumulated time a clock records as it travels from event A to event B is identical to the geometric length of the world line it traces out while connecting these events - so it is actually a geometric quantity. This is true regardless of what the geometry of the underlying spacetime is, so it applies whether or not there is gravity present.

So what is the meaning of acceleration then? If you have two events A and B in spacetime, the longest (!) possible world line that connects these is always the one which represents a test clock in free fall (i.e. inertial motion) - such world lines are called geodesics. Hence, given that the absence of any acceleration yields the longest possible world line and thus the most accumulated time on your clock, the presence of acceleration at any point of the clock’s journey will shorten its world line between the same two events - so the clock will accumulate less time. This is precisely what we see as time dilation (due to acceleration). Proper acceleration can thus be understood as the degree by which a world line differs from being a geodesic, or alternatively, the degree by which motion deviates from free fall. Or, in a somewhat more fancy way, it’s a parameter that picks out a world line in a 1-parameter family of all (physically realisable, sharing the same boundary conditions) world lines connecting two given events in spacetime.
Note that this type of time dilation has nothing to do with spacetime curvature - it’s simply about how you choose to connect two given events. -
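To make the geometric statement above concrete, here is a minimal numeric sketch (my own illustration, not part of the original post; units where c = 1): two clocks connect the same pair of events, one inertially and one via an out-and-back trip at v = 0.6, and the kinked (accelerated) world line accumulates less proper time.

```python
import math

# Sketch: accumulated proper time equals the length of the world line
# between two shared events. An inertial clock sitting at one place
# between events A (t=0) and B (t=10) traces a geodesic and accumulates
# tau = 10. A clock that travels out at v = 0.6 for half the trip and
# returns traces a kinked world line and accumulates less.

def proper_time_inertial(coordinate_time, v):
    """Proper time along a straight world-line segment at constant speed v (c = 1)."""
    return coordinate_time * math.sqrt(1 - v**2)

tau_stay = proper_time_inertial(10.0, 0.0)        # geodesic between A and B
tau_travel = 2 * proper_time_inertial(5.0, 0.6)   # two boosted segments (out and back)

print(tau_stay, tau_travel)   # the kinked world line is geometrically shorter
```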
Synchronizing clocks in different frames of reference.
Markus Hanke replied to geordief's topic in Relativity
Not unless you artificially make it so. If one of the clocks experiences acceleration and the other one does not, then there will be time dilation between the two. -
What I am saying is that, to the best of my limited knowledge in this area, this has already been intensively investigated (with methods that aren’t so crude, such as fMRI etc), and no local “seat” of consciousness has been found. It appears to be a global property, not something that can be uniquely reduced to a single area.
-
Neurophysiology is definitely not my area of expertise, but it seems evident that consciousness isn’t localisable to any specific area in the brain; it’s a global phenomenon. Of course, there will be some local areas the proper functioning of which is a prerequisite for having ordinary consciousness; but that’s not the same thing. If you were to take that old radio in your kitchen, open it, and remove any random piece from its main board, then chances are there won’t be any more music playing - but that does not imply that that random piece was what generated the music. How exactly is consciousness a “frequency”? Frequency of what?
-
First of all, YouTube videos are not valid sources of scientific information - not even if the information given happens to be correct. So I did some quick research on the current state of affairs in the field (this isn’t my area of expertise), and here’s a good summary: https://arxiv.org/pdf/1809.06229.pdf

The upshot is that the current indications for there being some violation of LU come in at a statistical significance of, on average, around \(4 \sigma\), and are seen only for the case of b-quark decays. Other quark decay processes are perfectly in line with SM predictions. This is not sufficient evidence yet to claim a new discovery, since the statistical significance level is not high enough. At the very least this will require more such experiments in order to acquire a larger data set. All this being said, there are indeed tantalising hints that some new physics may perhaps be going on, pending further investigation. However, should this turn out to be the case, then this would in no way invalidate the Standard Model, which quite evidently works very well - it would simply require an extension to the model which provides a suitable mechanism to explain these findings. Note also that it is just as possible that these findings are not due to new physics at all, but could arise from our mathematical difficulties in treating QCD non-perturbatively.

On a very high level, let me reiterate that we have known for a long time already that the SM in its current form is in all likelihood merely an effective field theory that provides an approximation to something more fundamental. As such, no physicist in their right mind would expect the current SM to be the final word on the matter of particle physics. However, when such a more fundamental model is found, this still will not mean that the SM is abandoned; after all, we know it works extremely well within the energy levels we can currently probe.
This is similar to the situation in classical mechanics - Newtonian physics is still successfully used (and taught in schools), even though it’s just a low-energy low-velocity approximation.
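As a side illustration (my own addition, not from the original post): the conventional significance thresholds can be made concrete by converting sigma levels into one-sided Gaussian tail probabilities using only the standard library. Particle physics usually calls ~3σ "evidence" and reserves "discovery" for 5σ; 4σ sits in between.

```python
import math

# Sketch: one-sided Gaussian p-value for a significance of n sigma.
# p = 0.5 * erfc(n / sqrt(2)) is the probability of a statistical
# fluctuation at least this large, assuming no new physics.

def p_value(n_sigma):
    """One-sided tail probability of a standard normal beyond n_sigma."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

for n in (3, 4, 5):
    print(f"{n} sigma -> p = {p_value(n):.2e}")
```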
-
As others here have said. I can generally follow the main ideas and steps of a paper within my own area of “expertise” (I’m self-taught and haven’t formally studied physics), which is General Relativity; that does not necessarily imply that I understand every single thing and detail (I don’t), but generally speaking that isn’t needed in order to grasp the general ideas and conclusions. Nonetheless, on occasion there will be publications which I can only follow with great difficulty - in the world of modern physics, one can spend many years specialising in and studying a specific area, and yet not know everything there is to know about it. It is not rare that I come across GR-related things which I have never heard of before. In any case, it will never be as easy as reading the newspaper; the subject matter simply requires deeper thought, knowledge and attention. Once I leave my area of expertise and interest though, I get lost pretty quickly - for example, most papers on quantum field theory and the Standard Model tend to be beyond me, since I’m not sufficiently knowledgeable about the intricate details, methodologies, and maths of those areas.
-
This is not true, because it is possible to construct topologies that are unbounded in space and time, yet finite in extent - analogous to (e.g.) the surface of a sphere, which has no boundary, but nonetheless a finite and well-defined surface area. The Hartle-Hawking state (a valid solution to the Wheeler-deWitt equation) is one such example for the universe as a whole; it describes a spacetime that is finite in temporal (and possibly spatial) terms, and yet has no boundaries in either space or time. Even the reverse is possible - one can conceive of geometric constructs that have a finite and well-defined boundary in all spatial directions, and at the same time infinite surface area enclosing zero volume, such as the Menger sponge (sometimes called the Sierpinski cube). The global geometry and topology of the universe is a question that is nowhere near as straightforward as you seem to think it is, so be careful about making claims such as the above.
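To make the "infinite area, zero volume" claim concrete, here is a small sketch (my own illustration) for the fractal mentioned above, the Menger sponge: each construction step keeps 20 of the 27 subcubes, so the volume shrinks geometrically toward zero while the number of ever-smaller cubes, and with it the surface area, grows without bound.

```python
# Sketch: volume of the Menger sponge after n construction steps,
# starting from a unit cube. Volume = (20/27)^n -> 0, while the number
# of solid subcubes 20^n -> infinity (each of side length 3^-n).

def menger_volume(n):
    """Volume remaining after n iterations, starting from a unit cube."""
    return (20 / 27) ** n

def menger_cube_count(n):
    """Number of solid subcubes after n iterations."""
    return 20 ** n

for n in range(0, 9, 2):
    print(n, menger_cube_count(n), round(menger_volume(n), 6))
```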
-
The notion of ‘gravitational potential’ can be meaningfully defined only in spacetimes which are (among other requirements) stationary, i.e. in spacetimes that, in mathematically precise terms, admit a time-like Killing vector field. The universe in its entirety is approximately described by an FLRW spacetime, which does not fulfil this crucial condition. So the concept of ‘gravitational potential of the universe’ is meaningless, which is why you weren’t able to find anything on this topic.
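For reference, a sketch of the condition being invoked here (my own addition, using the (−,+,+,+) signature convention):

```latex
% A spacetime is stationary iff it admits a vector field \xi that
% satisfies Killing's equation and is everywhere time-like:
\nabla_{\mu} \xi_{\nu} + \nabla_{\nu} \xi_{\mu} = 0 ,
\qquad g_{\mu\nu}\, \xi^{\mu} \xi^{\nu} < 0 .
% For the FLRW line element
% ds^2 = -c^2 dt^2 + a(t)^2 \, d\Sigma^2
% the scale factor a(t) depends on t, so \partial_t fails Killing's
% equation, and no globally time-like Killing field exists.
```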
-
Many results and papers first appear on freely accessible pre-print servers such as arXiv before they go to peer-reviewed journals, so the short answer is yes. The problem though is that such papers are almost always very technical in nature, so unless one has the requisite background knowledge, it is very unlikely that a random member of the general public would understand such articles. It’s usually only later that easier-to-understand summaries of these findings appear in various pop-sci publications aimed at the general public.
-
On the most abstract level (I have little interest in specific setups tbh) I can tell you for a fact that electromagnetism locally conserves energy-momentum, just like any other interaction in nature: \[\nabla_{\mu} T^{\mu \nu}_{(EM)} =0\] As such, it is not possible to get “free energy” from a magnetic field on fundamental grounds, irrespective of how the apparatus functions in detail. At the very least, you would need to invest the same amount of energy into making the magnets in the first place as you would need to propel the spacecraft.
-
This is what the peer-review process is there for. If someone arrives at a new result that is potentially relevant to science, it is published in a peer-reviewed journal - other scientists who work in the area will then review that paper (methodology, results, interpretation etc). If the results seem valid, and important enough, someone will eventually want to repeat the experiment. So what protects against falsification of experimental results is that these results are made public, and that they must be repeatable and thus independently verifiable - simply meaning that if someone else performs a similar experiment, they should obtain the same results.
-
I just gave you a link for this, did you even look at it? Lorentz invariance automatically implies the invariance of c. The solutions to the inhomogeneous wave equations are retarded Lorenz potentials - which physically represent spherical wave fronts propagating away (future-oriented) from the source, just as expected. I’ll skip typesetting this here, you can Google it if you want to see the actual expression.

The solution to the homogeneous equations is any function f of the form \[\vec{E} =f( \omega t-\vec{k} \cdot \vec{r})\] and similarly for the B field. This can literally be any function at all, so long as it is smooth and differentiable within the relevant domain. It doesn’t even need to be sinusoidal. So the wave equation is only a very general constraint on what form the wave function can have, and not all of its solutions are plane waves. Of course there are plane wave solutions (both in 1D and in 3D), and these prove very useful for many applications.

What do you mean by “definition of the fields”? The gamma factor is only meaningful as a relation between frames, i.e. it appears in how quantities transform. Locally within the same frame it is always unity. This is inconsistent with the Standard Model. So you see, nothing in physics stands in isolation - if you radically redefine just one aspect, you will find that it is no longer consistent with everything else we already know.
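The claim that any smooth profile f(ωt − k·r) solves the wave equation can be checked numerically; here is a quick sketch (my own addition, 1D case, with a deliberately non-sinusoidal profile):

```python
import math

# Sketch: verify numerically that E(x, t) = f(omega*t - k*x) satisfies
# the 1D wave equation E_tt = c^2 E_xx with c = omega/k, for a smooth
# profile f that is NOT a sine wave, using central finite differences.

omega, k = 2.0, 1.0
c = omega / k

def f(u):
    # any smooth profile will do; deliberately not sinusoidal
    return u * math.exp(-u * u)

def E(x, t):
    return f(omega * t - k * x)

def second_derivative(g, h=1e-3):
    """Central-difference second derivative of a function of one variable."""
    return lambda z: (g(z + h) - 2 * g(z) + g(z - h)) / h**2

x0, t0 = 0.3, 0.7
E_tt = second_derivative(lambda t: E(x0, t))(t0)
E_xx = second_derivative(lambda x: E(x, t0))(x0)

print(E_tt, c**2 * E_xx)   # agree up to finite-difference error
```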
-
You didn’t respond to my request for clarification as to what the scenario you are talking about actually is, so no, I didn’t know. But it doesn’t matter, because if we are not in a flat spacetime then this isn’t a Special Relativistic scenario, and you need to use the usual General Relativistic relations between frames. Either way, it is no problem to do this. But then, why do you keep talking about Lorentz transformations? As I have pointed out, we already know the source of the Pioneer anomaly, and it doesn’t have anything to do with gravity or new physics.
-
A question about quantum entanglement
Markus Hanke replied to starchaser137's topic in Quantum Theory
To be honest, I did not consider any specific scenario (but the author of the paper I linked earlier did), I was thinking only about general principles with this. So I don’t have any specifics to offer. What I will say though is that, in order to bring one of the particles to rest at a different gravitational potential wrt the other one, some form of acceleration needs to be applied, which is (assuming constant a) already locally equivalent to a uniform gravitational field. So even before the final state is achieved, the question of what effect gravity has here already arises. So do you mean to say that subjecting an (already) entangled system to the influence of gravity will break the entanglement? Of course entanglement means non-separability of the wave function, so perhaps my earlier comment was misleading - I did not mean that the two parts of the system evolve separately (in that they have separate propagators), only that the 2-particle system as a whole must evolve in a different way than the one that isn’t subject to gravity. Simply on account of them not sharing the same notion of time. I think I didn’t express this very well. I am not clear though what this would really mean mathematically, since the spatiotemporal embedding of such a system would span a region of spacetime that is now no longer necessarily Minkowskian. This should have an impact on the wave function itself (does the tensor product reference the metric?), as well as on its propagator (how to formulate this, if time is a local notion?). -
The problem is that both the stone and the mountain are classical objects and as such share the same fundamental properties. The same is not true, however, for the chair I am sitting on and the elementary particles of which it is ultimately composed - you can’t describe the properties and interactions of these particles with Newtonian mechanics, and conversely the chair as a whole will not exhibit any quantum effects. So these are distinct categories of objects, even though there is a definite relationship between them.
-
A question about quantum entanglement
Markus Hanke replied to starchaser137's topic in Quantum Theory
I’m struggling to follow you on this one - if you do this, then the system is no longer entangled. It is precisely the non-separability of the wave function that is the essence of what ‘entanglement’ means. I think this is a matter of degree, i.e. it depends on what ‘significant’ means for a specific scenario. In principle, I would argue the following: let’s say you prepare two identical entanglement pairs, each consisting of two entangled particles. Keep one of these entangled pairs in a locally inertial frame, simply for reference purposes. For the other pair, place the system such that there is a gravitational gradient present between the two particles that make up the entanglement pair, i.e. there is relative acceleration between their geodesic world lines as they age into the future. Both of these pairs will now have wave functions that are of the same form and are both non-separable; however, the evolution of these wave functions must differ, because in the presence of gravity the propagator is a purely local operator, so the two parts of the non-separable wave function subject to gravity will evolve differently, as compared to the reference pair that is not subject to gravity. So clearly, gravity must have some effect on the entanglement relationship. But of course I agree with you in that for most real-world scenarios such effects should be entirely negligible - unless you are in a spacetime with extreme tidal gravity, such as near the event horizon of a microscopic black hole. -
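The non-separability being discussed can be made concrete with a standard textbook computation (a sketch of mine, not part of the original post), using the Bell state as an example: tracing out one particle of a maximally entangled pair leaves a maximally mixed state, which is the operational signature of entanglement.

```python
import numpy as np

# Sketch: for a product (separable) two-qubit state, the reduced density
# matrix of either particle is pure (purity Tr[rho^2] = 1). For the
# maximally entangled Bell state |Phi+> = (|00> + |11>)/sqrt(2), the
# reduced state is maximally mixed (purity 1/2).

phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # |00> + |11>, normalised
rho = np.outer(phi_plus, phi_plus.conj())                # full 2-qubit density matrix

# Partial trace over the second qubit: reshape the 4x4 matrix into a
# (2,2,2,2) tensor indexed (a, b, a', b') and trace over b = b'.
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

purity = np.trace(rho_A @ rho_A).real
print(rho_A)       # 0.5 * identity
print(purity)      # 0.5, i.e. maximally mixed -> maximally entangled
```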
Ok, but the problem then is that such a universe would not permit any (tidal) gravity in the vacuum outside of massive bodies. This is contrary both to observational evidence in the real world and to Maxwell’s equations. All of these, and many many more. But you are getting this backwards - since you are the one proposing a new idea, it is up to you to show experimental evidence that there exists electromagnetic radiation that propagates at v > c. This is simply not true. The EM wave equation follows directly from Maxwell’s equations, and its solutions are precisely the kind of wave forms we find in the real world. The entire field of electrical engineering relies on this, and it evidently works very well - in everything from aircraft avionics to microwave ovens. There is really only one field, the electromagnetic field \(F_{\mu \nu}\); the E and B fields are merely observer-dependent aspects of this, and thus make up the various components of the field tensor. When you look at how these fields transform, you will see that they already contain the gamma factor, so this is nothing new. What is this? You are essentially giving us the finger here, by saying that you are not prepared to look at any evidence that might contradict what you believe. That’s not how science is done.
-
I have no idea what you mean by this, you need to explain some more. A Lorentz transformation is a relationship between inertial frames; if one of the frames is not inertial, or if spacetime in between the frames isn’t flat, then the relationship will be more complicated. Note also that Special Relativity encompasses not just inertial frames, but any situation so long as the respective region of spacetime is approximately flat. The Pioneer “anomaly” has nothing to do with relativity, it’s simply due to uneven heat loss from the probe. There is no mystery here. What curve?
-
A question about quantum entanglement
Markus Hanke replied to starchaser137's topic in Quantum Theory
I think this is an interesting question, and the answer is certainly not obvious. I would expect that, if you were to place one part of an entangled system into a different gravitational potential, then this should have a measurable effect on the entanglement relationship, simply because the two parts of the system no longer evolve in time in the same way, meaning something would need to change in the overall wave function describing that pair. At the same time though, I don’t see how this could possibly affect the fundamental non-separability of that wave function, so some notion of entanglement should persist. I have no idea what this would really mean in physical terms, though. I did a Google search, but this was the only thing I could find on the subject. The experiment hasn’t been performed yet, but clearly the author also expects there to be an observable effect of some kind (he talks about “entanglement degradation”). No, because a) entanglement is usually discussed on the premise of the entire system being in the same inertial frame, and b) even if gravity does have an effect, there would still be entanglement, though aspects of it might be subtly different.