Markus Hanke

Resident Experts

About Markus Hanke

Profile Information

  • Location
    Ireland
  • Favorite Area of Science
    Physics


  1. My understanding of this is that in order to measure the graph distance, you have to first foliate the hypergraph into slices of simultaneity, which is to say you need a convention for deciding in which sequence the nodes and edges get updated, since in general there’s more than one possibility. Different observers will do this in different ways since they belong to different subgraphs, which is essentially just your ordinary relativity of simultaneity. The graph distance is then measured within one slice of that foliation only, since we wish to consider spatial length contraction. Thus, even if all observers are part of the same hypergraph, they can still obtain different graph distances between the same nodes, because they count nodes along different paths within the graph. The graph’s symmetry of causal invariance ensures that the causal structure is always the same, regardless of which sequence the graph gets updated in. That’s how I understand it anyway. Wolfram’s own explanation of this is found here. (A toy graph-distance calculation is sketched after this list.)
  2. All observers are themselves a part of the hypergraph, so I don’t think this question is very meaningful. I think the better question to pose is whether SR and GR follow from this framework (ie can you recover the spacetime interval from the hypergraph), and the answer is apparently yes - with the caveat that I haven’t studied the technical details of this, so I don’t know how watertight Wolfram’s derivation actually is. I should perhaps explicitly state that it isn’t my intention to make any claims as to the viability of this framework - it might well turn out to go nowhere. I merely think it’s a very interesting approach that is worth pursuing further.
  3. The idea is that space is discretised, ie a geometric volume would consist of a finite number of points (which increases with time), each of which corresponds to a node in the hypergraph. By measuring graph distance, you then have a measure of how the number of nodes within a given “radius” grows, and hence of the emergent space’s dimensionality. There’s apparently also a mechanism which ensures that the number of dimensions in the emergent spacetime remains stable after a certain point, but I haven’t fully wrapped my head around the details of that yet. (A toy dimension estimate along these lines is sketched after this list.)
  4. Yes, that’s the big question. The thing with this model is that the underlying discretisation of spacetime potentially has consequences on larger scales, which can at least be estimated, eg here: https://arxiv.org/abs/2402.02331 - so essentially, accretion disks of some black holes would be more luminous than expected from ordinary physics alone. The precise values will depend on the underlying model, which of course hasn’t been finalised. But the point is that yes, these models make specific predictions that can at least in principle be falsified.
  5. I’m wondering if anyone here has followed the Wolfram Physics Project? If so, what are your thoughts on it? The text in the link is a long-ish read, but well worth it. When I first heard of this I didn’t think much of it, but I must admit that the idea has really been growing on me. It’s a fascinating approach to a TOE (if one can call it that), and those of you who have known me for a while will notice that it contains many of the elements I have been advocating for some time now, such as chaos/complexity, graph theory etc. And some of the preliminary results are tantalising. I know this thing isn’t so popular in most of the physics world, but I’m curious to hear what others here think.
  6. Not really, because in curved spacetimes the concept of “gravitational potential” only meaningfully exists if certain symmetries are present in that spacetime. It is not a generally applicable concept in the same way as it is in Newtonian gravity. Also, the (Newtonian) mass of a black hole is finite, so the potential well wouldn’t be infinite.
  7. No, it’s undefined. There is no hyperbolic angle (=transformation) that takes one from an ordinary inertial frame to a “rest frame of light”, because such a thing does not exist.
  8. No, light does not have a rest frame associated with it; there’s no valid Lorentz transformation that brings you from an ordinary frame to one in which photons stand still. Inertial frames in SR are related by hyperbolic rotations in spacetime, where the rotation angle (the rapidity) is \[\omega =\operatorname{arctanh} \left( \frac{v}{c} \right)\] What angle do you get for v=c? (A quick numerical check of this is included after this list.)
  9. I’m finding that Opera Browser (which has built-in adblocker and VPN functionality) works pretty well for SFN; for incompatible sites, these functions can be deactivated with a single click.
  10. I don’t find this puzzling - just the opposite. The speed of light follows (eg) from Maxwell’s equations, so it would be a lot more puzzling if different observers experienced different laws of electrodynamics, especially since their speeds are not intrinsic physical properties of their own frames, but merely a measure of how they relate to other frames. Without this invariance of c, the universe couldn’t function, since you’d get unresolvable paradoxes. (A quick back-of-envelope check of c from the vacuum constants follows after this list.)
  11. No. Entanglement is correlation between measurement outcomes. The particles need to interact first (in some ordinary, local way, not at a distance), which establishes the entanglement relationship. There are different ways to do this, but they all involve an initial causal interaction of some kind; the particles then remain entangled afterwards, right up until a measurement is performed on them; once any entangled part collapses into a definite state, the entanglement relationship is broken.
  12. No. You can yourself, at home, perform simple table-top experiments to investigate gravity, such as eg the Cavendish experiment (all required parts are readily available for purchase, or you can build your own if you’re handy with tools). You can vary the setup as you see fit - use different masses or materials; place the whole thing or parts of it in a Faraday cage; place it in a vacuum etc. (The formula for extracting G from such a setup is sketched after this list.)
  13. No, not necessarily. While all causation automatically involves some form of correlation, the reverse isn’t true - not all correlation implies causation, in the sense of something “acting” non-locally.
  14. I thought we were discussing kinematic time dilation for the time being, which is what my comment was aiming at. For example, the kinematic component of time dilation between a satellite clock and an Earth clock is solely due to relative velocity, and not a function of how high up the satellite is (a quick calculation of this effect for a satellite-type speed follows after this list). This is my main point - kinematic time dilation is solely a function of relative velocity (ie it doesn’t matter where and when the experiment is performed), whereas the density of your proposed DM gas is at the very least a function of position and time. So I don’t see how you can meaningfully relate these two. I understand that that’s the idea, but I don’t see how any particle/field can interact with all the other fundamental particles and their interactions in just such a way that any macroscopic composition of them is equally affected by time dilation. There’s no conceivable mechanism that can achieve this at below-GUT energies, since the fundamental interactions all function differently according to their own symmetry groups and coupling constants.
  15. Any unstable elementary particle. For that matter also all hadrons, since the strong interaction behaves nothing like electromagnetism. No one can be sure of such a thing, given that the very notion of “DM particle” is itself speculative. What we can state though is that the statistical decay rate of unstable elementary particles (irrespective of which ones) has never been observed to depend on external circumstances. It seems to be an intrinsic property of those particles. And that’s part of the problem with this idea - all types of clocks, irrespective of their internal mechanisms and composition (or lack thereof), display precisely the same time dilation under the same circumstances. The amount of kinematic time dilation is solely a function of relative velocity. On the other hand, we know that DM, if it exists, cannot be evenly distributed - it must be more dense in some regions than in others in order to match observations, so we’d see differing time dilation effects in different regions/directions, which we don’t. Honestly, I don’t see how you could make this work at all - your DM particle would need to interact with all types of other particles in exactly the same way, and the interaction could not even depend on the density of the gaseous medium. This seems highly implausible, and appears to be incompatible with the Standard Model. Besides, since even quite ordinary clocks at quite ordinary energy levels are easily seen to exhibit time dilation, why do we not detect the DM particle in our accelerators, which detect interactions with many orders of magnitude higher precision? It’s completely implausible that all our precision and high-energy detection experiments have come up empty-handed, whereas at the same time the DM gas interacts strongly enough with (eg) a simple satellite clock to give it a substantial time dilation.
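
Sketch for (1): a minimal illustration of what “graph distance within one slice” means computationally - counting hops along shortest paths in whatever spatial slice a chosen foliation produces. The dictionary-of-neighbours representation and the toy ring graph are my own illustrative assumptions, not Wolfram’s actual data structures.

```python
from collections import deque

def graph_distance(adjacency, start, end):
    """Shortest-path (hop-count) distance between two nodes of a spatial slice.

    `adjacency` maps each node to the set of nodes it shares an edge with,
    i.e. the slice has already been projected down to an ordinary graph.
    Which nodes and edges end up in the slice depends on the foliation
    (update order) chosen, so different observers can legitimately arrive
    at different numbers here.
    """
    if start == end:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        for neighbour in adjacency[node]:
            if neighbour == end:
                return dist + 1
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None  # not connected within this slice

# Toy slice: a 6-node ring
slice_a = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(graph_distance(slice_a, 0, 3))  # -> 3
```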
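
Sketch for (3): one way to read off an effective dimension is from how the number of nodes within graph distance r grows, N(r) ~ r^d. The square-grid toy graph below is purely illustrative - a real hypergraph slice would not be this regular, and the estimate only settles down towards the true value for larger r.

```python
import math
from collections import deque

def ball_sizes(adjacency, centre, r_max):
    """Number of nodes within graph distance r of `centre`, for r = 1..r_max."""
    dist = {centre: 0}
    queue = deque([centre])
    while queue:
        node = queue.popleft()
        for nb in adjacency[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return [sum(1 for d in dist.values() if d <= r) for r in range(1, r_max + 1)]

# Toy "space": a 41 x 41 square grid, which should come out roughly 2-dimensional
n = 41
adjacency = {}
for x in range(n):
    for y in range(n):
        nbs = set()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= x + dx < n and 0 <= y + dy < n:
                nbs.add((x + dx, y + dy))
        adjacency[(x, y)] = nbs

sizes = ball_sizes(adjacency, (n // 2, n // 2), 12)

# Effective dimension from the log-log growth rate N(r) ~ r^d.
# The estimate creeps up towards 2 as r grows (small-r lattice corrections).
for r in (4, 8, 12):
    d = (math.log(sizes[r - 1]) - math.log(sizes[r // 2 - 1])) / (math.log(r) - math.log(r / 2))
    print(f"r = {r:2d}: effective dimension ~ {d:.2f}")
```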
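
Numerical check for (7) and (8): the rapidity ω = arctanh(v/c) grows without bound as v approaches c and is simply undefined at v = c, which is the point about there being no rest frame for light. Python’s atanh raises a domain error at exactly 1, mirroring the fact that no such transformation exists.

```python
import math

def rapidity(beta):
    """Hyperbolic rotation angle relating two inertial frames, omega = arctanh(v/c)."""
    return math.atanh(beta)

for beta in (0.5, 0.9, 0.99, 0.999999):
    print(f"v/c = {beta}:  omega = {rapidity(beta):.3f}")

try:
    rapidity(1.0)  # the would-be "rest frame of light"
except ValueError as err:
    print("v/c = 1.0:  no finite rapidity exists ->", err)
```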
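
Quick check for (10): the invariant speed drops straight out of the vacuum constants that appear in Maxwell’s equations, c = 1/sqrt(μ₀ε₀).

```python
import math

# Vacuum permeability and permittivity (CODATA values, SI units)
mu_0 = 1.25663706212e-6       # H/m
epsilon_0 = 8.8541878128e-12  # F/m

c = 1.0 / math.sqrt(mu_0 * epsilon_0)
print(f"c = 1/sqrt(mu_0 * epsilon_0) = {c:.0f} m/s")  # ~ 2.998e8 m/s
```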
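
Sketch for (12): extracting G from a torsion-balance (Cavendish) run. Balancing the gravitational torque against the torsion torque, with the torsion constant obtained from the free oscillation period, gives G = 2π²Lr²θ/(MT²); the small test mass cancels out. The numbers below are illustrative placeholders, not measured values.

```python
import math

def gravitational_constant(L, r, theta, M, T):
    """Extract G from a torsion-balance (Cavendish) measurement.

    L     : beam length between the two small masses [m]
    r     : centre distance between each small and large mass [m]
    theta : equilibrium deflection angle of the beam [rad]
    M     : mass of each large sphere [kg]
    T     : free oscillation period of the torsion pendulum [s]

    Balancing the gravitational torque G*M*m*L/r^2 against the torsion
    torque kappa*theta, with kappa = 2*pi^2*m*L^2/T^2 from the period,
    gives G = 2*pi^2*L*r^2*theta / (M*T^2); the small mass m cancels.
    """
    return 2 * math.pi**2 * L * r**2 * theta / (M * T**2)

# Illustrative numbers only (not measured data)
print(gravitational_constant(L=0.10, r=0.05, theta=7.3e-3, M=1.5, T=600.0))
# ~ 6.7e-11 m^3 kg^-1 s^-2
```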
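
Sketch for (14): the kinematic part of time dilation depends only on the relative speed, so it can be written down without any reference to where the clock sits. For a GPS-type orbital speed of roughly 3.9 km/s this comes out at around 7 microseconds per day (the gravitational part, which goes the other way, is a separate effect).

```python
import math

c = 299_792_458.0  # m/s

def kinematic_dilation_per_day(v):
    """Seconds per day by which a clock moving at speed v (relative to the
    comparison frame) runs slow, from the SR gamma factor alone."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return (1.0 - 1.0 / gamma) * 86_400.0

# Roughly the GPS orbital speed; position of the clock never enters.
print(kinematic_dilation_per_day(3.9e3) * 1e6, "microseconds/day")  # ~ 7.3
```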