Everything posted by Markus Hanke

  1. My understanding of this is that in order to measure the graph distance, you have to first foliate the hypergraph into slices of simultaneity, which is to say you need a convention to decide in which sequence the nodes and edges get updated, since in general there’s more than one possibility. Different observers will do this in different ways since they belong to different subgraphs, which is essentially just your ordinary relativity of simultaneity. The graph distance is then measured within one slice of that foliation only, since we wish to consider spatial length contraction. Thus, even if all observers are part of the same hypergraph, they can still obtain different graph distances between the same nodes, because they count nodes along different paths within the graph. The graph’s symmetry of causal invariance ensures that the causal structure is always the same, regardless of which sequence the graph gets updated in. That’s how I understand it anyway. Wolfram’s own explanation of this is found here.
  2. All observers are themselves a part of the hypergraph, so I don’t think this question is very meaningful. I think the better question to pose is whether SR and GR follow from this framework (ie can you recover the spacetime interval from the hypergraph), and the answer is apparently yes - with the caveat that I haven’t studied the technical details of this, so I don’t know how watertight Wolfram’s derivation actually is. I should perhaps explicitly state that it isn’t my intention to make any claims as to the viability of this framework - it might well turn out to go nowhere. I merely think it’s a very interesting approach that is worth pursuing further.
  3. The idea is that space is discretised, ie a geometric volume would consist of a finite number of points (which increases with time), each of which corresponds to a node in the hypergraph. By counting how the number of nodes within a given graph distance grows, you’d thereby have a measure of the emerging space’s dimensionality (a toy sketch of this node-counting idea is included after this list). There’s apparently also a mechanism which ensures that the number of dimensions in the emerging spacetime remains stable after a certain point, but I haven’t fully wrapped my head around the details of that yet.
  4. Yes, that’s the big question. The thing with this model is that the underlying discretisation of spacetime potentially has consequences on larger scales, which can at least be estimated (eg https://arxiv.org/abs/2402.02331). So essentially, accretion disks of some black holes would be more luminous than expected from ordinary physics alone. The precise values will depend on the underlying model, which of course hasn’t been finalised. But the point is that yes, these models make specific predictions that can at least in principle be falsified.
  5. I’m wondering if anyone here has followed the Wolfram Physics Project? If so, what are your thoughts on it? The text in the link is a long-ish read, but well worth it. When I first heard of this I didn’t think much of it, but I must admit that the idea has really been growing on me. It’s a fascinating approach to a TOE (if one can call it that), and those of you who have known me for a while will notice that it contains many of the elements I have been advocating for some time now, such as chaos/complexity, graph theory etc. And some of the preliminary results are tantalising. I know this thing isn’t so popular in most of the physics world, but I’m curious to hear what others here think.
  6. Not really, because in curved spacetimes the concept of “gravitational potential” only meaningfully exists if certain symmetries are present in that spacetime. It is not a generally applicable concept in the same way as it is in Newtonian gravity. Also, the (Newtonian) mass of a black hole is finite, so the potential well wouldn’t be infinite.
  7. No, it’s undefined. There is no hyperbolic angle (=transformation) that takes one from an ordinary inertial frame to a “rest frame of light”, because such a thing does not exist.
  8. No, light does not have a rest frame associated with it; there’s no valid Lorentz transformation that brings you from an ordinary frame to one in which photons stand still. Inertial frames in SR are related by hyperbolic rotations in spacetime, where the rotation angle (rapidity) is \[\omega = \operatorname{arctanh}\!\left( \frac{v}{c} \right)\] What angle do you get for v=c? (A worked limit is shown after this list.)
  9. I’m finding that Opera Browser (which has built-in adblocker and VPN functionality) works pretty well for SFN; for incompatible sites, these functions can be deactivated with a single click.
  10. I don’t find this puzzling - just the opposite. The speed of light follows (eg) from Maxwell’s equations, so it would be a lot more puzzling if different observers experienced different laws of electrodynamics, especially since their speeds are not intrinsic physical properties of their own frames, but merely a measure of how they relate to other frames. Without this invariance of c, the universe couldn’t function, since you’d get unresolvable paradoxes.
  11. No. Entanglement is correlation between measurement outcomes. They need to interact first (in some ordinary way, not at a distance), which establishes the entanglement relationship. There are different ways to do this, but they all involve an initial causal interaction of some kind; they then remain entangled afterwards, right up until a measurement is performed on them; once any entangled part collapses into a definite state, the entanglement relationship is broken.
  12. No. You can yourself, at home, perform simple table-top experiments to investigate gravity, such as the Cavendish experiment (all required parts are readily available for purchase, or you can build your own if you’re handy with tools); a rough formula for extracting G from such a setup is sketched after this list. You can vary the setup as you see fit - use different masses or materials; place the whole thing or parts of it in a Faraday cage; place it in a vacuum etc.
  13. No, not necessarily. While all causation automatically involves some form of correlation, the reverse isn’t true - not all correlation implies causation, in the sense of something “acting” non-locally.
  14. I thought we were discussing kinematic time dilation for the time being, which is what my comment was aiming at. For example, the kinematic component of time dilation between a satellite clock and an Earth clock is solely due to relative velocity, and not a function of how high up the satellite is. This is my main point - kinematic time dilation is solely a function of relative velocity (ie it doesn’t matter where and when the experiment is performed), whereas the density of your proposed DM gas is at a minimum a function of position and time. So I don’t see how you can meaningfully relate these two. I understand that that’s the idea, but I don’t see how any particle/field can interact with all the other fundamental particles and their interactions just so that any macroscopic composition of them is equally affected by time dilation. There’s no conceivable mechanism that can achieve this at below-GUT energies, since the fundamental interactions all function differently according to their own symmetry groups and coupling constants.
  15. Any unstable elementary particle. For that matter also all hadrons, since the strong interaction behaves nothing like electromagnetism. No one can be sure of such a thing, given that the very notion of “DM particle” is itself speculative. What we can state though is that the statistical decay rate of unstable elementary particles (irrespective of which ones) has never been observed to depend on external circumstances. It seems to be an intrinsic property of those particles. And that’s part of the problem with this idea - all types of clocks, irrespective of their internal mechanisms and composition (or lack thereof), display precisely the same time dilation under the same circumstances. The amount of kinematic time dilation is solely a function of relative velocity. On the other hand, we know that DM, if it exists, cannot be evenly distributed - it must be more dense in some regions than in others in order to match observations, so we’d see differing time dilation effects in different regions/directions, which we don’t. Honestly, I don’t see how you could make this work at all - your DM particle would need to interact with all types of other particles in exactly the same way, and the interaction could not even depend on the density of the gaseous medium. This seems highly implausible, and appears to be incompatible with the Standard Model. Besides, since even quite ordinary clocks on quite ordinary energy levels are easily seen to exhibit time dilation, why do we not detect the DM particle in our accelerators, which detect interactions with many orders of magnitude higher precision? It’s completely implausible that all our precision and high-energy detection experiments have come up empty-handed, whereas at the same time the DM gas interacts strongly enough with (eg) a simple satellite clock to give it a substantial time dilation.
  16. I have little else to add to the above excellent replies, except perhaps this: on a fundamental level, ‘being in relative motion’ is not a physical property of an observer. Motion is merely a relationship between at least two chosen frames - meaning one can simultaneously be in motion wrt one reference frame, and at rest wrt another. It is therefore impossible for the laws of physics within a local frame to depend on relative motion, since this would create unresolvable paradoxes. Needless to say, no such thing has ever been observed. So I stand by what I said earlier - the absence of physical paradoxes in our universe precludes any kind of violation of Lorentz invariance.
  17. How do you propose to reassess this? The numerical value of c is a function of the permittivity and permeability of the underlying medium (this was known before Einstein; a quick numerical check is included after this list), which of course don’t change just because some observer happens to be in relative motion wrt some reference point. If they changed, he wouldn’t be in the same medium any longer, which creates physically unresolvable paradoxes. I propose that c is invariant because the universe cannot contain such unresolvable paradoxes.
  18. The equations already are non-linear. There are some stringent mathematical constraints as to what form the equations can take in standard GR; they aren’t just randomly invented, but derived from those conditions. It is not possible for them to take a different form without violating some of these conditions. You can have different equations, but then you’re not doing GR any longer, but some alternative theory of gravity.
  19. Note that you don’t necessarily have to do this - in many cases, it’s quite possible to work with space and time as separate (but interdependent) entities. For example, there’s nothing wrong with using the original 3-vector based formalism of Maxwell’s equations, rather than tensors or differential forms on spacetime. The problem is just that you often sacrifice physical intuition when you do this, because the maths tend to become less transparent. And sometimes you realistically can’t do without spacetime - for example, writing down the Standard Model without using tensors, spinors, or any other object that requires a concept of spacetime, would be a straight-up nightmare, if it is possible at all.
  20. Ok, thanks for the explanations, everyone. So is the general consensus here that Hossenfelder’s points (rhetoric aside) aren’t really valid criticisms of current academia at all?
  21. No lol +1 It’s cheap freeze-dried instant, which requires the addition of liberal doses of dihydrogen monoxide. You see, this is what I’m unclear about. Could someone explain to me how the research funding process actually works? If I’m a random academician somewhere who needs funds to perform an experiment (or simply further my research in other ways), how would I go about this? And how does my track record of published papers (number of them, referenced by others etc) play into this? Have I got better chances the more I have published? I never fully understood this process, tbh.
  22. Some very good points made here by @studiot, @swansont, @joigus, @Mordred and everyone else. Thanks for all your input. For me as an interested amateur who isn’t entirely ignorant of modern physics, it’s just that I think I’m going to scream if, on my daily arXiv check, I see one more paper of the type “Dark Matter as weakly interacting pink-flavoured superaxions with fractional charge and complex-valued spin. And they make coffee too”. So much of what is written in modern papers just seems like straight-out WAGs to me, and I find it frustrating. The impression I’m getting is that people write papers just for the sake of having publications to their name (which is presumably connected to research funding), and not for their scientific value. And I think that’s at least part of what Prof Hossenfelder is saying. PS. Just for the record, I’m not especially fond of her rhetoric either, but I think some of her points are worth talking about.
  23. My understanding of this was that she meant progress specifically in the foundations of physics - we are still essentially using the same fundamental models (GR and SM) we did back in the 1970s, and the fundamental issues associated with these models still remain unresolved. This isn’t to say that these models don’t work (they clearly do), but that the approach we take towards the known issues with them doesn’t seem to be working. Of course, much progress has been made on the finer details, but not on the overall paradigms. Could you elaborate on this a bit more?
  24. This is with reference to the following recent short video by Sabine Hossenfelder: I must say that, while I don’t necessarily share all of her pessimism, I do find myself agreeing with some of what she says here. My problem though is that I have never myself worked in professional academia, and have only a peripheral awareness of how exactly funding, the “paper mill” etc work when it comes to research in the foundations of physics. I also haven’t read her book Lost in Math. I am thus curious to hear from those on this forum who do work in professional academia - what do you think about her comments? Is there any merit in the notion that there are systemic issues in academia, specifically in physics? She does have a good point though in that we have made little progress in the foundations of physics since the ~1970s, and that much of current work feels a lot like people randomly and blindly groping in the dark by inventing maths that don’t seem to be motivated by any real-world data points, hoping to just stumble across that next breakthrough. This isn’t really how science should work. Comments, anyone?
  25. Same in German, so this is a familiar concept. In Norwegian, there are no freestanding definite articles either; instead, definiteness and number are marked via noun declension: pike - young girl; piken - the girl; piker - girls; pikene - the girls. Sometimes these can be irregular also. It does have indefinite articles, though.
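Regarding post 3 above: below is a minimal toy sketch (my own illustration in Python, not code from the Wolfram Physics Project) of the node-counting idea. If a graph approximates d-dimensional space, the number of nodes N(r) within graph distance r of a point grows roughly like r^d, so the slope of log N(r) against log r estimates the emergent dimension. A plain square grid stands in for the hypergraph here, and all function names are made up for the example.

```python
from collections import deque
from math import log

def grid_graph(n):
    """Adjacency lists for an n x n square grid - a stand-in graph whose
    effective dimension should come out close to 2."""
    adj = {}
    for i in range(n):
        for j in range(n):
            nbrs = []
            if i > 0: nbrs.append((i - 1, j))
            if i < n - 1: nbrs.append((i + 1, j))
            if j > 0: nbrs.append((i, j - 1))
            if j < n - 1: nbrs.append((i, j + 1))
            adj[(i, j)] = nbrs
    return adj

def ball_sizes(adj, source, r_max):
    """N(r): number of nodes within graph distance r of source, for
    r = 0..r_max, found by breadth-first search."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if dist[node] >= r_max:
            continue
        for nbr in adj[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    sizes = [0] * (r_max + 1)
    for d in dist.values():
        for r in range(d, r_max + 1):
            sizes[r] += 1
    return sizes

if __name__ == "__main__":
    adj = grid_graph(101)
    N = ball_sizes(adj, (50, 50), 20)
    # Effective dimension: slope of log N(r) between two radii.
    for r1, r2 in [(5, 10), (10, 20)]:
        dim = log(N[r2] / N[r1]) / log(r2 / r1)
        print(f"r = {r1}..{r2}: N = {N[r1]}..{N[r2]}, dim ~ {dim:.2f}")
```

On a 2D grid this prints a value approaching 2; in the Wolfram picture the same counting would be done on the hypergraph itself, and the stability of that number is what a stable emergent dimensionality refers to.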
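To make the point in post 8 explicit, the rapidity can be written in logarithmic form (this is just the standard identity for arctanh), which makes its behaviour near v = c obvious: \[\omega = \operatorname{arctanh}\!\left( \frac{v}{c} \right) = \frac{1}{2}\ln\!\left( \frac{1 + v/c}{1 - v/c} \right) \longrightarrow \infty \quad \text{as } v \to c\] At v = c exactly, the expression is undefined, which is the formal statement that no boost takes you to a “rest frame of light”.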
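For post 12, as a rough illustration of how a number for G falls out of a torsion-balance (Cavendish-type) setup - the symbols below are the standard textbook ones, not anything from the original post: with small masses m on a beam of length L, large masses M placed a distance r from the small ones, a measured deflection angle θ and a torsion oscillation period T, balancing the gravitational torque GmML/r² against the fibre’s restoring torque gives \[G \approx \frac{2\pi^{2}\, L\, r^{2}\, \theta}{M\, T^{2}}\] Conveniently, the small mass m cancels out, so only the large masses need to be weighed accurately.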
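And for post 17, the numerical claim can be checked directly from the vacuum permittivity and permeability: \[c = \frac{1}{\sqrt{\mu_{0}\,\varepsilon_{0}}} = \frac{1}{\sqrt{\left( 1.2566\times 10^{-6}\ \mathrm{H/m} \right)\left( 8.8542\times 10^{-12}\ \mathrm{F/m} \right)}} \approx 3.00\times 10^{8}\ \mathrm{m/s}\] which matches the measured speed of light, with no reference to the motion of any observer.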