Everything posted by Markus Hanke

  1. They are looking at it - if you search for this on arXiv, you’ll find quite a number of papers on this subject. I would imagine it hasn’t become part of the general consensus, because there are also problems and issues with these models. You are right, yes. I think the problem here is that the error bars for the precise value of the Hubble constant are far bigger than the effects of the acceleration, so at the moment we aren’t yet in a position to draw a meaningful graph for this. This is a work in progress.
  2. This is the generally agreed upon definition in algebra. Of course not. The singularity is inevitable in purely classical Schwarzschild spacetime, as can be formally proven using the singularity theorems. As I mentioned earlier, the appearance of a singularity in any theory of physics (not exclusive to GR) generally means that the model has been extended past its domain of applicability. In this case GR, being purely classical, fails to account for quantum effects during the collapse. It most emphatically does not mean that we expect a singularity to be a real-world object. If you want to eliminate the singularity, you can choose a connection other than Levi-Civita on your manifold, which allows for the presence of torsion in addition to curvature. This model is called Einstein-Cartan gravity, and it is singularity-free. But it is not the same theory as General Relativity. Schwarzschild spacetime is static and stationary by definition. There are no dynamics whatsoever - you actually use this fact as a boundary condition to derive the solution in the first place! You can translate the entire manifold in time without changing anything in its geometry. In technical terms, the manifold admits a time-like Killing vector field (see below). When we say that time and space trade places below the event horizon, what we really mean is that ageing into the future inevitably corresponds to a radial decay - meaning there cannot be any stationary frames, no matter how much force you exert in trying to counter gravity. It’s inherent in the causal structure of spacetime itself. This is of course independent of the choice of coordinates. ‘Patch’ is simply the technical term for a particular region on a manifold; it has nothing to do with any manipulation of that region.
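     To make the Killing-field point concrete: in Schwarzschild coordinates (signature −+++, units G = c = 1), the time-translation Killing field and its norm are the standard textbook expressions

     \[ \xi = \partial_t, \qquad \xi_a \xi^a = g_{tt} = -\left(1 - \frac{2M}{r}\right) \]

     This is negative (timelike) for r > 2M, but positive (spacelike) for r < 2M - so below the horizon no observer can move along the orbits of \( \xi \), which is precisely the statement that there are no stationary frames there.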
  3. I don’t know, to be honest - I’ve never looked into these models. Over the last few years my focus has been elsewhere, so I haven’t been keeping up with latest developments as much as I would have liked to. Another one for the future to-do list 👍
  4. joigus has beaten me to it with his excellent answer (+1). As I said earlier, the manifold and a particular coordinate chart chosen on it are not the same thing at all - to put it succinctly, a ‘hole’ in an embedding diagram does not necessarily imply a corresponding ‘hole’ in the manifold in the topological sense. These are different things. You can in fact have patches (or entire manifolds) without coordinates defined on them. For Schwarzschild spacetime, you need only transform the metric to a different, more complete coordinate basis to see this (see the sketch below). But if you want to be absolutely sure and precise, it is always best to use tools that are coordinate-independent. Yes, indeed. Arriving at a precise value is actually not easy, also because external conditions play a role during the collapse. But I think the salient point is that there is such a limit for any given level of degeneracy. It’s hypothetical to some degree, yes. But just as in the case of quantum gravity, there are good reasons to believe that the GUT domain is quite real, even if we don’t know for sure which of the numerous GUT candidate models will apply. That being the case, quarks and gluons are by-products of a broken GUT symmetry, so once energy levels are high enough, the strong interaction will cease to exist in its ordinary form. In more general terms, I very much agree that singularities are not real-world objects, but artefacts of our models being pushed beyond their domains of applicability.
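     For reference, the standard Kruskal-Szekeres construction makes this explicit. In the exterior region r > 2M (units G = c = 1) one defines

     \[ T = \sqrt{\frac{r}{2M}-1}\, e^{r/4M} \sinh\frac{t}{4M}, \qquad X = \sqrt{\frac{r}{2M}-1}\, e^{r/4M} \cosh\frac{t}{4M} \]

     which brings the Schwarzschild line element into the form

     \[ ds^2 = \frac{32M^3}{r}\, e^{-r/2M} \left( -dT^2 + dX^2 \right) + r^2 d\Omega^2 \]

     with r(T, X) given implicitly by \( T^2 - X^2 = \left(1 - \frac{r}{2M}\right) e^{r/2M} \). Every metric coefficient here is finite and non-zero at r = 2M, so the chart extends smoothly across the horizon; only r = 0 remains as a genuine singularity.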
  5. That’s a really good question! It is indeed possible to vary lambda with time, location, or both - the resulting models are called “agegraphic dark energy models”. There are both advantages and problems associated with these, but I must admit that this isn’t something I’ve been following, so I don’t know where things stand on this. It hasn’t caught on in the mainstream though.
  6. Einstein, despite having come up with the equations initially, didn’t know about the full set of principles underlying their form (the crucial topological concepts underpinning this were worked out by Élie Cartan at a later date) - so he wouldn’t initially have been aware that the presence of the constant was the ‘normal’ state of affairs, hence it didn’t appear in his original formulation. So unfortunately any fine-tuning to precisely zero still lacks a physical mechanism or reason. Again, I’m not saying it can’t be zero, just that this would be an example of unexplained fine-tuning.
  7. I’m really confused now - could you try to rephrase for me what your main point is? I can’t really make sense of the progression of the last few posts. I should remind you again that an embedding diagram concerns a coordinate chart, which is a separate thing from the manifold itself. To say that any part of an embedding diagram - whether missing or not - is ‘outside the manifold’ is meaningless, since you can have regions that aren’t covered by that particular chart. Schwarzschild spacetime is the simplest and most straightforward solution to the Einstein equations - both its geometry and topology are well understood and have been studied ad nauseam by generations of physicists and mathematicians. Precisely which aspect of it do you think we are misunderstanding?
  8. The trouble with this is that the Einstein equations aren’t just invented out of thin air. There are some fundamental principles of consistency and topology that greatly constrain the form these equations can take (see Misner/Thorne/Wheeler for details). As it turns out, the equations including the constant are the most general form that fulfils all these conditions - so there needs to be a reason why the constant should be exactly zero. I’m not saying it can’t be zero, just that there would have to be a reason for it.
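     To make this concrete: the most general form satisfying those conditions (a symmetric, divergence-free rank-2 tensor built from the metric and at most its second derivatives - essentially Lovelock’s theorem in four dimensions) is

     \[ G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu} \]

     so setting \( \Lambda = 0 \) is an additional assumption, not something forced on us by the structure of the theory.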
  9. If the embedding diagram terminates in a throat, then that means the coordinate chart isn’t continuous at that point. This doesn’t imply anything about spacetime itself. Ok, but then he would have realised that this spacetime is everywhere continuous and doesn’t terminate at any horizon surface. I don’t really understand the significance of these historical references, to be honest. We nowadays know of a large number of exact solutions to the field equations, including maximally extended metrics that cover the entirety of this particular spacetime, so we know its complete geometry and topology in depth - which is a lot more than was known back in the early 1900s. Why do you keep referring back to the state of affairs a hundred years ago? Our state of knowledge and maths has moved on greatly since then. Remember also that metrics aren’t invented - they are derived as solutions to the field equations.
  10. Yes, that’s what I meant. Good question! What they did was to approximate the tightly packed neutrons in a neutron star as a special kind of gas, called a Fermi gas. The dynamics of this were fairly well worked out, which made it possible to derive a rough limit for when neutron degeneracy is able to resist gravitation. It’s called the Tolman-Oppenheimer-Volkoff limit. If that limit is exceeded, gravity will be stronger than the degeneracy pressure. Oppenheimer and Volkoff did not use any results from QCD, which wasn’t fully developed until later. Nowadays we can speculate that there might also be a degeneracy state involving quarks, which would then also have a corresponding limit. This would lead to an astrophysical object called a quark star (purely hypothetical). However, it is safe to say that in this domain no classical approximations will suffice; this is where we need to use quantum gravity - which we don’t yet have. So we can’t say what happens when the quark degeneracy limit is exceeded. Yes, but if you keep collapsing the body, the pressures and energies will eventually get so high that the fundamental forces re-unite into a GUT scenario - at which point the concept of ‘quark’ ceases to make sense. It is also not clear that the Pauli exclusion principle is meaningful in a scenario where quantum gravity plays a role. This is inconsistent with GR, because there can be no stationary frames beyond the horizon, due to the fundamental geometry of the spacetime there. So you can’t have stable objects of any kind. In other words, it doesn’t matter what specific mechanism you propose - so long as there is classical spacetime, a full collapse is inevitable (see also the singularity theorems). The only ways to avoid this are to either modify GR, or to abandon classical (smooth, continuous) spacetime once certain limits are exceeded. You can calculate these limits using the laws of QM and QFT (a rough numerical sketch follows below):
     Chandrasekhar limit - electron degeneracy - white dwarfs
     Tolman-Oppenheimer-Volkoff limit - neutron degeneracy - neutron stars
     Quark degeneracy limit (I don’t know if there’s a name for it) - quark stars (hypothetical)
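     To give a feel for the scale of the first of these limits, here is a minimal order-of-magnitude sketch in Python - a back-of-the-envelope estimate only, not the full Lane-Emden treatment (the prefactor of roughly 3.1 comes from the n = 3 polytrope solution):

     # Rough estimate of the Chandrasekhar limit (SI units).
     hbar  = 1.054571817e-34   # reduced Planck constant, J s
     c     = 2.99792458e8      # speed of light, m/s
     G     = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
     m_H   = 1.6735575e-27     # hydrogen atom mass, kg
     M_sun = 1.989e30          # solar mass, kg

     mu_e = 2.0  # electrons per nucleon (typical He/C/O composition)

     # M_Ch ~ (hbar*c/G)^(3/2) / (mu_e*m_H)^2, with a prefactor of ~3.1
     # from the Lane-Emden solution for an n = 3 polytrope
     M_ch = 3.1 * (hbar * c / G)**1.5 / (mu_e * m_H)**2
     print(f"Chandrasekhar mass ~ {M_ch / M_sun:.2f} solar masses")  # ~1.4

     The same kind of reasoning, applied to neutron degeneracy pressure with a realistic equation of state, leads to the Tolman-Oppenheimer-Volkoff limit of roughly 2-3 solar masses.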
  11. I may have misunderstood what you did, then. Apologies.
  12. I don’t know exactly where the confidence level for this stands at the moment, but I think it’s pretty likely that this is real. Not really. It straightforwardly corresponds to having a positive cosmological constant in the Einstein equations. To me, it would actually be much weirder should it turn out that this constant is somehow exactly zero, because there is no a priori reason (that we know of) for that to be the case.
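     One way to see this is through the second Friedmann equation for the scale factor a(t):

     \[ \frac{\ddot{a}}{a} = -\frac{4\pi G}{3} \left( \rho + \frac{3p}{c^2} \right) + \frac{\Lambda c^2}{3} \]

     For ordinary matter and radiation the first term is negative, so a positive \( \Lambda \) is the most straightforward way to get accelerated expansion (\( \ddot{a} > 0 \)) at late times.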
  13. That’s because the rotation is a hyperbolic one, so speed is related to rapidity via the tanh function - meaning the difference between the two only grows large once relative speeds approach the speed of light itself.
  14. The current accelerated expansion of the universe is not related to inflation - these are physically different circumstances.
  15. You can also look at this purely geometrically. If O and O’ are in uniform relative motion, then these two coordinate systems are related by a hyperbolic rotation in spacetime (ignoring spatial rotations and translations for simplicity) - in other words, a Lorentz transformation is essentially just a simple rotation of the associated coordinates. The speed v is then directly related to the rotation angle by a simple equation, so you can express relative speed as an angle. This is called rapidity.
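     Written out for a boost along one axis, acting on the coordinates (ct, x), the relationship between speed and rapidity \( \varphi \) is

     \[ \frac{v}{c} = \tanh\varphi, \qquad B(\varphi) = \begin{pmatrix} \cosh\varphi & -\sinh\varphi \\ -\sinh\varphi & \cosh\varphi \end{pmatrix} \]

     A nice consequence is that rapidities simply add, \( \varphi = \varphi_1 + \varphi_2 \), and the identity \( \tanh(\varphi_1+\varphi_2) = \frac{\tanh\varphi_1 + \tanh\varphi_2}{1 + \tanh\varphi_1 \tanh\varphi_2} \) immediately reproduces the relativistic velocity-addition formula.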
  16. It’s outside the particular coordinate chart that happened to be used; that doesn’t mean the manifold itself is not smooth and continuous there. I don’t know what this means - there’s no such thing as negative proper mass. The first solutions found in and around 1917 were exterior metrics, meaning they described spacetime in a vacuum, ie outside the central body. Oppenheimer and others later found solutions that describe the interior spacetime of bodies, ie spacetime inside the central body, and by extension a full metric that consistently encompasses both regions. Since, once certain limits are exceeded, it is physically impossible to prevent gravitational collapse, these must contain singularities. There is no such condition once you use a metric that covers both interior and exterior spacetime, which is what they did. When the sign on the squared line element inverts, that means time and space trade places - which is to say that ageing into the future becomes equivalent to a diminishing radial coordinate. In other words, there are no longer any stationary frames, and one cannot avoid falling towards the center. The issue here is that Einstein’s GR is a purely classical model of gravity, so it does not and cannot account for quantum effects. Within the framework of the classical model, the appearance of physical singularities is inevitable (this can be mathematically proven). However, the real world isn’t classical below a certain scale, so the current assumption is that quantum gravity will remove such singularities. We don’t have such a model yet, but it’s an area of active research. To remove singularities from classical GR, you can make a small modification to it that leads to a model called Einstein-Cartan gravity. This is free of singularities. However, this modification has other consequences, which to date we haven’t seen in the real world. But you are right of course in that one does not expect singularities to be actual objects that occur in the real world. They are artefacts of the model, and generally mean that it breaks down under the given set of circumstances; they are “physical” only within the context of that model.
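     Concretely, in Schwarzschild coordinates the line element reads (signature −+++, G = c = 1)

     \[ ds^2 = -\left(1-\frac{2M}{r}\right) dt^2 + \left(1-\frac{2M}{r}\right)^{-1} dr^2 + r^2 d\Omega^2 \]

     For r < 2M the factor (1 − 2M/r) becomes negative, so the dt² and dr² terms swap signs: r takes on the role of the timelike coordinate, and decreasing r becomes as unavoidable inside the horizon as advancing in time is outside it.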
  17. What does this mean? Nothing has been ‘corrupted’ - it’s just that now, a century later, we have a much better understanding of the foundations of GR than Schwarzschild (or any of his contemporaries) would have had. There was confusion about this only because the model was brand new back then, and it took time to figure things out. Nowadays we are in a much better position. In his original paper, Schwarzschild used a coordinate system that had its origin at the event horizon, so r=0 meant the horizon surface. However, this does not at all mean that there is nothing beyond the horizon, because in GR the choice of coordinates is arbitrary and has no physical significance. Schwarzschild used this convention simply because it made his particular way of deriving the solution mathematically easier. A consequence of this choice is that large parts of the spacetime aren’t covered by any coordinate patch, so, in his notation, there are physical events that cannot be labelled by any coordinate. But again, that’s just a convention without physical significance. You can rectify this simply by choosing a different coordinate system - which does not change anything about the actual geometry of the spacetime. This is why there are so many seemingly different metrics (Novikov, Kruskal-Szekeres, Aichelburg-Sexl, etc.) which all describe the same physical spacetime. To see whether the event horizon is a physical singularity (as opposed to just a coordinate one), and what the nature of spacetime beyond the horizon is, you can use tools that do not depend on the choice of coordinate system at all - such as invariants of the curvature tensors. That way, it’s trivially easy to show that, in classical GR, the horizon as well as all of the spacetime in the interior, right down to the singularity, is in fact perfectly smooth and regular, just like anywhere outside the horizon. This is a standard exercise in pretty much any graduate GR course.
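     The standard example of such an invariant is the Kretschmann scalar, which for Schwarzschild spacetime (G = c = 1) evaluates to

     \[ K = R_{abcd} R^{abcd} = \frac{48 M^2}{r^6} \]

     This is perfectly finite at the horizon r = 2M and diverges only as r → 0 - the coordinate-independent way of saying that the horizon is regular, while r = 0 is a genuine curvature singularity.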
  18. Interesting new paper on anomalies in physical cosmology: https://arxiv.org/pdf/2208.05018.pdf
  19. Well, without a metric we can’t really have a discussion about this. The thing here is that you don’t start with a metric - you begin with an energy-momentum tensor plus boundary conditions, then you use these to solve the Einstein equations. That gives you the metric. All solutions to the Einstein equations are metrics, but not all metrics are valid solutions to the Einstein equations.
  20. Ok, so what is the metric? You haven’t written it down yet.
  21. I believe you are thinking of specific, symmetric solutions such as the Kerr spacetime. In those special cases the situation is indeed unambiguous - but that’s because these cases assume certain symmetries that remove the extra degrees of freedom. The problem referred to in the paper pertains to general regions of curved spacetime, where no symmetries or boundary conditions are assumed. Defining the total energy (not just mass) contained in such a region has been an intractable problem - which this paper now solves. I must look at this in detail, but at the moment I’m engaged in other pursuits.
  22. That’s certainly a factor, but I suspect it’s mostly because a lot of people simply don’t have the ability to step outside the paradigm of what their sensory apparatus tells them about the world - which is essentially Newtonian. Thus, relativity and QM get rejected wholesale, because they “don’t make sense”. Also, believing that you are smarter than a larger-than-life figure such as Einstein props up people’s egos.
  23. Here’s an article about this paper - it’s actually more about angular momentum (which is just as ambiguous as mass), but the problems are closely related: https://www.quantamagazine.org/mass-and-angular-momentum-left-ambiguous-by-einstein-get-defined-20220713/ I can’t offer any real details yet, since I haven’t studied the paper itself.
  24. You’ve got this backwards - it was you who made the claim that spacetime is a mechanical medium, and that energy-momentum is always conserved. Mainstream physics says no such thing, so the onus is on you to show how your claim is right. Indeed. Yes, that’s right. The problem is that the gravitational field itself carries energy, but this energy isn’t localisable; if you try to account for it, you generally end up with expressions that are observer-dependent. A further problem is that there is more than one way around this, which is why you get different ways to define the energy content of a region of spacetime, such as ADM energy, Komar energy, etc. It’s not immediately clear how to define it in a general, unambiguous way. I believe the problem has recently been solved, though I haven’t had time to look into this new development, so I can’t comment yet.
  25. Spacetime isn’t a mechanical medium, so this is irrelevant. Also, it might surprise you to hear that the law of conservation of energy exists only in flat spacetime - in the presence of gravity, things become rather more complicated.
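     To spell that out: in GR, energy-momentum conservation takes the purely local form

     \[ \nabla_\mu T^{\mu\nu} = 0 \]

     with \( \nabla \) the covariant derivative. Unlike its flat-spacetime counterpart \( \partial_\mu T^{\mu\nu} = 0 \), this does not in general integrate to a globally conserved total energy; that only works if the spacetime admits a suitable timelike Killing vector field - which an expanding universe does not.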