Killtech

Everything posted by Killtech

  1. How well is the self-coupling experimentally verified and understood? If we move away from linear-field thinking it makes intuitive sense, but I would still like to see what evidence we have for it. I am not looking for a general test of GR though. I admit that singling out one aspect of a theory without a proper competing model that differs only in that aspect may be hard to do. Maybe it is enough to have a theory with a finite propagation speed, so replacing Newton's gravity with Maxwell-like equations for gravity would do the trick? What would that predict for the precession of Mercury's perihelion?
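For reference, a rough sketch of what I mean by "Maxwell-like equations for gravity": in the weak-field, slow-motion limit GR itself reduces to gravitoelectromagnetic (GEM) equations of roughly this form (sign and factor conventions differ between authors, so take this as a sketch only):
\[ \nabla\cdot\mathbf{E}_g = -4\pi G\rho,\qquad \nabla\times\mathbf{E}_g = -\frac{\partial \mathbf{B}_g}{\partial t},\qquad \nabla\cdot\mathbf{B}_g = 0,\qquad \nabla\times\mathbf{B}_g = -\frac{4\pi G}{c^2}\,\mathbf{j} + \frac{1}{c^2}\frac{\partial \mathbf{E}_g}{\partial t}. \]
The benchmark any such theory has to hit is the GR value for the perihelion advance,
\[ \Delta\phi = \frac{6\pi G M}{c^2 a (1-e^2)} \approx 43''\ \text{per century for Mercury}, \]
and as far as I know a purely flat-space vector theory of gravity does not reproduce that value on its own, which is exactly why the comparison would be interesting.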
  2. Ideally the latter is the quantized version of the former, and in particular in the classical limit we should recover one from the other. But since the latter does not exist yet, we have to study the specifics of the former to understand where the trouble comes from. In a closed physical system this cannot happen, and it doesn't actually matter if the black hole is made of mass or something else - the curvature is caused by the energy-stress tensor after all. Therefore the black hole could in principle be made up entirely of light of sufficient intensity. Clearly the gedankenexperiment violates energy conservation in Newton's and Einstein's physics all the same, so let's just assume it's an open system and treat it analogously to a driven oscillator. The physicists who formulated that old gedankenexperiment didn't intend it to be a realistic situation but wanted to illustrate a particular question about the propagation of gravity. Just as it was valid for showing how Newton's gravity is instantaneous compared to Einstein's, it is still an interesting case for drilling the properties of gravity down to some of their essential aspects.
  3. But does gravity actually slow down gravity? Let's consider the very old gedankenexperiment: if a black hole suddenly disappeared, how long would it take for different observers around it to notice? In particular, if we say gravity travels at \(c\) and for the outside observer clocks within a gravity field are massively slowed, then the speed measured in terms of the outside observer's clock should appear much slower. Or let's rephrase it: given a massive object with a time-dependent mass/energy of \( m(t)=1+\sin (\omega t) \), i.e. producing a field resembling a longitudinal wave - how fast would the curvature changes it produces propagate? What wavelength would that curvature field have locally? Going back to the graviton, or in fact any virtual particle: they appear in Feynman diagrams as part of the integral kernel. But that kernel still constrains these paths by physics, which can be interpreted as allowing only somewhat possible paths - that is, paths which require more than infinite energy have a contribution/probability density of 0. So if virtual particles are constrained by the same geometry, how are they able to contribute to the amplitude from a point within \( r_s \) to a point outside? Well, of course we do not have a theory of quantum gravity, or in fact any quantum theory that can deal with curvature, so I guess the question probably does not have a good answer as of now?
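To make the "appears much slower from outside" part concrete, a small sketch assuming the static Schwarzschild exterior applies: for a distant observer using Schwarzschild coordinate time \(t\), a radially moving light signal has coordinate speed
\[ \frac{dr}{dt} = \pm\, c\left(1 - \frac{r_s}{r}\right), \]
so timed with the outside clock any signal - and plausibly any change of the field itself - slows down and stalls as it approaches \(r_s\), even though it moves at \(c\) locally.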
  4. Killtech

    math test

    test sentence \[ \rho^2 \] ... do I have to post to get a preview? 😮 [math]x^{2}[/math] man, make it work! inline test \( \sqrt{2} \)
  5. Maybe let's first settle a much simpler question about just GR: is a gravitational wave subject to gravitational lensing? The question is two-fold: what does the theory say, and what does experiment say? The latter is more relevant but probably still unclear, since it is not that long ago that we managed even a first detection. A graviton would have to follow the same propagation behaviour as the wave.
  6. Hmm, what an inconvenient technical pause to the conversation. Sure, so you reduce time to the instructions to construct it, and in the case of the SI we have very specific instructions. That definition is well defined, sure, but it is decided upon by a committee of people and not actually by nature. If you look into the details, you will find a lot of instructions on how to correct the caesium atom's readings for specific effects, and you will additionally find a passage explicitly stating that effects of gravity must not be corrected. These seemingly arbitrary specifications make it clear that it is a convention we came up with, same as Einstein synchronization is, and that raises further questions. A mathematician's first natural reflex here is to ask: what other choices of such instructions could we use instead that lead to a well-defined time? Are all of those equivalent? And really, thinking a bit about it, it turns out that geometry raises such questions and figured out the answers long ago. A metrizable topological space with a differential structure allows far more than one Riemann manifold to be constructed on it. So we know there is a large set of possible alternative concepts of time that are not isometric to the one we use. We can deduce how clocks of the different definitions of time relate to each other, and we can formulate how the laws of physics and their symmetries look from the perspective of these alternative clocks. We cannot go wrong when we change the conventions our theories work with, can we?
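A minimal illustration of the "more than one Riemann manifold on the same space" point (my own toy example, nothing to do with the SI specifics): take the same underlying manifold with metric \(g\) and a conformally rescaled metric
\[ \tilde g = \Omega^2(x)\, g, \qquad \Omega \text{ smooth and positive}. \]
Both share the same set, topology and smooth structure, but for non-constant \(\Omega\) the identity map between them is not an isometry: the same worldline accumulates a different amount of "time" depending on which of the two metrics the clocks are taken to realize.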
  7. And you are not entirely wrong here. However, math is a mean bastard when you go deep into some seemingly trivial details. There is just no single way to represent and model physics, because it turns out you will require quite a few additional assumptions that cannot be experimentally verified. Those are technically conventions. The choice of those will however have an impact on the resulting invariances and therefore on the laws of physics. Uff, that isn't so easy to answer. But how do you define a clock or a time measurement in general? Let's say we are in a different universe than this one, with other laws and all (or just in a computer-simulated reality like in the Matrix films). How do we define it in a general abstract case? Usually it helps to ask what we need time measurements for, to solidify which axioms those measurements have to adhere to in order to fulfil that purpose. This is maybe a mathematical approach physicists don't often consider. A key aspect of measurements is their ability to compare results and translate real-world relations into numeric values we can do calculus on. The mathematical concept of a metric very accurately reflects this fundamental property of measurements. But given one metrizable topology, we know from mathematics that there is more than one possible metric. Note that nature does not actually need any numbers to work. But we do, to model nature. So for nature a smooth topology is enough, and it is we who add the numeric structure of a metric with its comparison relation. In doing so we introduce a lot of untestable assumptions (those that implicitly define the metric we use) that mix with the laws of nature into their familiar representation. As a somewhat analogous example, consider how different symmetries look from the perspective of different coordinates. The same is true for geometries/metrics. But your initial belief holds in a sense, as long as the model satisfies Noether's prerequisites: if we choose a suitable metric, it will still have invariances and manageable laws of physics - those will depend on the chosen metric/geometry though. Technically you could work with violated Noether assumptions... but it will be very annoying to handle a system where the total energy isn't conserved but evolves by a deterministic function, which means the laws of physics will have some nasty absolute time dependence. I did pick the TDB time coordinate as a time metric explicitly because it guarantees a Galilean invariance with a corresponding energy conservation, but it requires altered laws of physics that fit it while still reproducing the same relativistic physics.
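The Noether statement I keep leaning on, in its simplest mechanical form (standard textbook material, quoted here without the field-theory generalities): if the Lagrangian \(L(q,\dot q)\) has no explicit time dependence, then
\[ E = \sum_i \dot q_i\,\frac{\partial L}{\partial \dot q_i} - L \]
is conserved along solutions. Change what counts as a time translation - which is exactly what a different time metric does - and the conserved quantity that deserves the name "energy" changes with it.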
  8. I have written that a few times, but you seem keen on jumping over it. The problem is the galaxy rotation curves, as indicated by the mass-to-light ratio anomalies, since we have no solution to this problem. If you cite dark matter as a solution, then note how that hypothesis works: given a deep mismatch between the GR model and observation, we introduce a completely unknown field-like degree of freedom to GR, so generic and flexible that it can fit almost any deviation. But that flexibility renders it no actual quantifiable theory but rather free empirical parameters that must be fitted experimentally. And experimentally, our means are not good enough to do even that to a satisfactory degree. Sure, there have been a lot of proposals for what dark matter may be in order to somewhat constrain these parameters, but let's be honest, so far we haven't come far in understanding it at all. Adding torsion to GR is just another alternative to fix the same problem, and it is a hypothesis similarly generic to dark matter with its own degrees of freedom, meaning it opens up a large class of possible models with a wide range of predictions - same as dark matter does. And since the concept is similarly flexible, it can potentially make most of the dark matter obsolete. Unless you consider the question of dark matter resolved and well understood, there is still a big problem to solve. Look how much we try to learn about dark matter from simulations. For these the concept is very feasible - albeit admittedly just as satisfactory as the dark matter explanation. In terms of actual observations I was thinking that one could potentially try to do a Sagnac experiment in a much smaller system, like around Earth's orbit via satellites. But of course here the effect might be much less prominent and thus harder to detect.
  9. Fair enough, bad wording. An interferometer for the purpose would have to use a pretty extreme wavelength. In my mind the plastic model of a spiral galaxy was shaped more like an airscrew. But fair enough, a perfectly flat disc wouldn't be able to move a medium without friction. You got the point anyway. I know the prediction GR makes. But I also don't know any kind of experiment that would already reliably rule out this possibility, though I may be wrong - so correct me if I am. For this scenario the theory does not seem to be well tested, and its predictions are therefore in an extrapolating regime. But of course that does not disprove it in any way. As I stated in my opening post, the purpose is to formulate/sketch the idea for an experiment that might be looking for physics beyond the currently known. After all, the purpose of experiments is to test a theory - and that's especially interesting for things it hasn't been tested for. Well, look up the galaxy rotation curve issue, which shows a big discrepancy with the model if we only consider the visible matter. The introduction of large quantities of dark matter outside the galaxy is the preferred hypothesis to mend the discrepancy between model and observation. MOND is another approach. However, both are merely assumptions; neither is considered experimentally established. A rotating aether, or equivalently introducing torsion degrees of freedom to the affine connection GR uses, might be another alternative. Anyhow, the issue is that a galaxy-spanning ring is not particularly easy to build for an experiment - yet this is the case where the discrepancies with observation are the largest. Is there a better way to test such a hypothesis?
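For context, the quantitative core of the rotation-curve problem (standard numbers, nothing specific to my proposal): circular orbits around the visible mass \(M(<r)\) should follow
\[ v(r) = \sqrt{\frac{G\,M(<r)}{r}} \;\propto\; r^{-1/2} \quad \text{outside the luminous disc}, \]
whereas the observed curves stay roughly flat, \(v(r)\approx\mathrm{const}\), out to the largest measured radii. That gap is what dark matter, MOND or the torsion idea above are each supposed to close.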
  10. Off topic, but you raise a very important point: how do I use LaTeX on this forum?
  11. Yes, the gedankenexperiment is to place a Sagnac sensor (thanks for the proper name) moving along a galaxy's disk (i.e. no relative rotation) and use it to measure the angular velocity at that radius, to compare it with the usual methods of determining a galaxy's rotation curve from afar. Our current physical model would expect these methods to yield agreeing results. However, I suggest testing whether this assumption holds in such circumstances. For example, one possible outcome could be that the Sagnac interferometer yields an angular velocity close to our current model of gravity without the presence of dark matter, which is much lower than what the outside observer measures. The idea is merely based on the cylinder case, where a Sagnac test is also used to determine the difference in the one-way speed of light. I only mentioned SR because it allows one to study this surprising case and its implications - particularly the concept of a preferred frame in relativity. Maybe consider doing a similar experiment in another area of physics to understand the motivation better: let's take a miniature plastic model of a galaxy and let it spin under water (or better, in a superfluid). The spinning galaxy will also cause the surrounding medium to partially flow along the rotation in a curl flow; a vortex will form. We conduct an analogue of a Sagnac sensor, but instead of using light we use acoustic signals which propagate via the medium. Calculating the delay between the signals works similarly to light, except that the medium defines the one-way speed of sound along the signal's path. In this case the sensor therefore only measures the angular velocity of the galaxy relative to the rotating medium around it. Because of this the measured result will deviate from the actual angular velocity observed from afar.
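For reference, the standard Sagnac relation the comparison would rest on: for a closed loop of enclosed (vector) area \(\mathbf{A}\) rotating with angular velocity \(\boldsymbol{\Omega}\), the first-order delay between the counter-propagating signals is
\[ \Delta t = \frac{4\,\mathbf{A}\cdot\boldsymbol{\Omega}}{c^2}, \]
so the interferometer measures rotation relative to whatever fixes the one-way signal speed along the loop - which is exactly the quantity the acoustic analogue shows can differ from the rotation a distant observer sees.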
  12. It is known that the one-way speed of light cannot be measured under normal circumstances; however, there are a few special interesting cases. Let's consider the situation of the twin paradox, except we assume the world is shaped like a cylinder, thus having a finite length in one direction. In this unique setting we can still discuss the twin paradox within special relativity where the two twins travel exclusively in inertial frames, yet are able to periodically meet each other, since geodesics around the circumference of the cylinder form closed loops. These meetings allow age comparisons to be made locally, and logic requires that there must be a uniquely determined, frame-independent answer as to which twin is older. So globally inertial frames are not strictly equivalent in such a case, and indeed there can exist only one frame in which aging happens the fastest. In the same situation each twin could also send two light signals (instead of his twin) in opposite directions around the circumference, wait for their return and determine the delay between them. This delay is the difference in the one-way speed of light along those two directions. There is only one frame where both signals arrive simultaneously, and it is the same frame where clocks tick the fastest. So we have a case in SR where a preferred frame exists that acts a bit like an absolute rest frame. This is a purely theoretical scenario since we have no experimental indication for a topology with a nontrivial homotopy class (I think). But I was contemplating whether we can make use of these considerations in a normal situation. After all, we can still send light along closed loops in two directions and determine the delay, for example along a planet's orbit. But I figured that will merely measure the angular velocity the light ring has. In general, if we consider the difference in the one-way speed of light to be a vector field, we find that a closed loop only measures its average difference along the loop's tangent vector, hence it will always yield a 0 result for constant or conservative vector fields. But if the field has a curl, it will not. So... what would happen if we placed such a light-signal ring along the outer rim of a galaxy? It is an open issue that our current physical models find the observed galaxy rotation curves to be quite abnormal. Is it thinkable that the angular velocity obtained from the delay in light signals measured by the ring will differ from the velocity obtained by other means? This is another approach to Mach's question of absolute rotation. Because the possibility of a curl in the one-way speed of light could be practically considered as space itself being partially dragged along by a rotating object, the object would perceive a lower angular momentum than a distant observer would visually assume. Such an additional degree of freedom would allow a galaxy to match the modelled angular momentum curve and, with the angular velocity of space added on top, still show the observed behavior. This is of course purely hypothetical/speculative, but the question of whether it makes sense to look for it experimentally is not. Finding things that may be of interest for measurement always means looking for possibilities beyond the established - hence why I posted this here rather than in the speculations forum section. As far as I understand GR, it isn't able to accommodate such a possibility, as that would at least require introducing a torsion degree of freedom to its connection - thus this is looking for new physics.
I don't know of any experiments that would have ruled out such a possibility already - or does someone know of any?
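The loop-versus-curl statement above is just Stokes' theorem; writing it out for clarity, with \(\mathbf{v}\) standing for the hypothetical one-way-speed anisotropy field:
\[ \oint_{\partial S} \mathbf{v}\cdot d\boldsymbol{\ell} \;=\; \int_{S} \left(\nabla\times\mathbf{v}\right)\cdot d\mathbf{A}, \]
so a constant or conservative field integrates to zero around any closed loop, while a field with nonzero curl threading the loop leaves a net, measurable delay.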
  13. Indeed, me too. I am still figuring out if such an approach is viable in general and if there are already similar concepts people have worked on that may help here. As for the details, I am still figuring out how the correction function applied to clocks in TDB looks when generalized to any possible theoretical situation, i.e. what the metric that TDB units imply looks like relative to the usual metric of GR - because that's what mostly defines the transition between geometries. It doesn't work like that. Such a big change in geometry is also accompanied by a change of some symmetries. In particular, a locally dependent speed of light does not work too well with Lorentz invariance. Other geometries mean other laws of physics, and in this special case - the laws of physics using a time (metric) that is shared between all frames and locations (effectively an absolute time by Newton's definition) - their invariance can at best be Galilean. This is the form where the equations can start to resemble those of fluid dynamics.
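For anyone unfamiliar with TDB: the correction I mean is the conventional relation between TDB and Terrestrial Time TT, whose dominant part is an annual periodic term (amplitude quoted from memory of the IAU convention, please double-check before relying on it):
\[ \mathrm{TDB} - \mathrm{TT} \;\approx\; 1.7\,\mathrm{ms}\times\sin g, \]
with \(g\) the Earth's mean anomaly - i.e. an explicit, agreed-upon correction of proper-time clocks into a single shared coordinate time.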
  14. You have to be careful with such statements. c is a very different constant from all the others, given its connection to the definitions of time and length. As Poincaré nicely demonstrates with the example of how astronomers determined that c was constant, they in fact had to assume how light moves through the vacuum beforehand in order to make measurements at all; hence they showed that if c is constant then c is constant. Therefore if you account for how the interpretation ties into the model, the experiment actually measured the function c(c). His example is a good case study for the general unresolvable interdependence. Besides, with the current definition of the SI metre, it is logically impossible for c to vary in any way. I am aware that this definition was chosen much later and for a reason. But let's view it the other way around: if we simply ignore what c "really" does, define its behaviour ourselves instead and make all experiments maintain that convention (SI system), would we be able to find out that we are "wrong"? As long as a metre defined like this provides a well-defined measure of length (a rod that has some hysteresis when moved around won't satisfy the axioms of a length measure), then no, because all experiments will just provide some results and we will always find some model that reproduces them. The question is a chicken-or-egg causality dilemma between definitions of units and laws of physics: if one assumes a constancy, the other inherits it. In reality we can only observe how physical entities change relative to each other, never how they change absolutely. So what we can do is compare two physical processes where c is involved against each other and check that the c obtained from one is the same as from the other. A deviation would be interpreted as our model/understanding of one of the processes being wrong. I know experiments were conducted to check how stable the constants are, but we have to be a lot more careful interpreting the results. Well, one can package all components of Maxwell into a rank-2 energy-stress tensor, and the field equations of GR provide its time evolution. The analogue could work for gravity... maybe it would just effectively replace the Einstein tensor with another and reshape the remaining Maxwell energy-stress tensor into a trivial geometry. With the metric trivialized to globally Euclidean and the two tensors becoming analogous in interpretation, one could combine them into one object. So I am not convinced this must lead to a less simple formalism. There are a lot of reasons to ask questions. We still haven't solved the issue of quantum gravity. Looking at a problem from another perspective may help; specifically, in a flat Euclidean geometry quantization might be easier. Also, reshaping spacetime like this allows comparison to familiar classical fluids and their equations. I would be interested to study how light inside a warp-bubble solution compares to the situation of sound waves inside the cockpit of a supersonic jet. Maybe if we can bring sound and light into a comparable metric, the analogy might help us know where to look for solutions to circumvent certain speed limits. There is also the unresolved issue of galaxy rotation, for which I have a hypothesis I am interested in testing experimentally. I wanted to post on that later, but since it has a connection to this concept here, I decided to post this first.
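To spell out the "logically impossible" part: since 1983 the SI metre is defined as the distance light travels in vacuum in \(1/299\,792\,458\) of a second, so by construction every SI measurement returns
\[ c = 299\,792\,458\ \mathrm{m\,s^{-1}} \quad \text{exactly}, \]
and any "variation of c" could only ever show up as an apparent change in the other quantities measured against it.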
  15. Yes and no. You are right that I can do a lot already with coordinates, but not all of it. Coordinates are one thing, units another. Energy, for example, does not depend on the choice of coordinates (apart from the frame), yet its unit is made up of length and time units. Using a locally different time unit as a basis for energy defines a very different physical entity that actually belongs to a different geometry. Noether guarantees us that as long as we can find the symmetries in the new geometry, there will be an alternative concept of energy which will be conserved. Coordinates cannot help with the energy question. Consider Euclidean clocks which behave differently between frames compared to proper time, in particular lacking the singularity at c - if we insist on using those to measure an alternative energy, we get diverging results. It requires very different laws of physics to make that new energy (and its action) produce the same outcomes as the relativistic Minkowski geometry does. That is what I am aiming to look for by changing the metric, and I particularly hope it can provide a direct translation mechanism between these different concepts of energy and geometry. I think it is also the metric which carries units, while coordinates are usually treated as dimensionless; of course we often use conventions like c=1 to hide this and simplify the equations. It is worthwhile to spend some time reading Henri Poincaré's notes on measuring time and what that means for the speed of light. How would we even notice c varying, if even Poincaré's corrected Lorentz aether theory concludes the same result for the Michelson interferometer? Any attempt to measure c requires us to be able to measure time and length, or at least to assure we can maintain intervals of constant length for the measurement. Yet all the definitions of length and time we used were purely electromagnetic in origin. And it is precisely there where we come full circle: if our concepts of length and time are implicitly based on c and use it as a reference, rendering it constant, then all our measurements will show exactly this, and none will be able to record any deviation unless it breaks from the specifications of the SI system. Look at the definition of a geodesic clock @Genady posted earlier and consider how it is affected by a locally varying c(x). It demonstrates how the assumption about the speed of light is tied to the definition of clocks and that it will work with any assumption you put into it. Now consider using that clock in reverse to measure c - those are two sides of the same coin. We do know from experiments that clocks run out of sync depending on how close they are to a gravity well. We can interpret that as usual, or we can assume that our reference oscillator for time is affected by some local effect and needs correction - same as we had to correct for thermal expansion of the original metre bar, and same as the official SI definition of the second via caesium lists required corrections while singling out gravity as the only local influence that must not be corrected. If we do that, however, we speed up time locally, with the immediate consequence that c(x) acquires a local dependence. Isotropy of light is incompatible with isotropy of clocks. I am open to whatever the metric transition will require. I highly doubt it will be a scalar field theory, however; after all, the degrees of freedom gravity has in GR, embedded into the geometry, have to go somewhere.
I do however think there will be at least one dominant scalar field, especially in the Maxwell equations: a refractive index c(x). But if gravity travels at c and has transversal waves, it likely needs quite a few field equations... actually I would think it might look analogous to Maxwell, with two force fields, a vector current and a scalar density, each dimension reflecting one degree of freedom of the GR metric tensor. Actually, I stumbled on this recently: https://arxiv.org/pdf/gr-qc/0205035.pd . I haven't had time to go through it yet, but it goes in a similar direction, albeit with a different starting point.
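On the refractive-index reading, the standard weak-field result I would expect such a reformulation to reproduce (static mass, isotropic coordinates, first order in the potential):
\[ c(r) \approx c\left(1 - \frac{2GM}{r c^2}\right), \qquad n(r) = \frac{c}{c(r)} \approx 1 + \frac{2GM}{r c^2}, \]
which is the effective index that, in the optical-analogue picture, gives the full GR light deflection (twice the naive Newtonian value).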
  16. Yes, this is indeed where I want to start from. As I understand it, the metric is the important bridge between model and experiment, and one finds that almost all measurements contain the units of length and time. The interpretation isn't actually trivial, because looking deeper at the definitions, one has to make quite a few implicit assumptions in order to define any kind of unit. So the question arises: what happens if we changed some of those assumptions/definitions? We could for example assume that a real physical oscillator, on whose frequency we base our unit, is influenced by certain local conditions, and consequently we want to apply location- and frame-specific correction factors to counter these effects. But that interests me for another reason: in science we want to test the assumptions of our models against experiments, and in the process it's sometimes easier to formulate counter-hypotheses and check those instead. For some postulates that runs into logical problems, e.g. the isotropy of the speed of light. If we assume a model where it isn't constant, we run into contradictions with how measurement in experiments works, which still implicitly assumes otherwise. If we however account for those new assumptions in measurement, the required corrections will yield different measurements, as we practically use a different metric (implicitly also a different geometry). In that case, we may end up doing what I want to discuss. If physics can be reformulated into a different geometry with different laws of physics such that it yields identical predictions, then a counter-hypothesis may prove physically equivalent to the base postulate. In that case I would consider such postulates untestable and treat them as conventions. Before I can go deeper exploring various approaches for physical models, I want to first get a good understanding of the fundamental relations between a model, its interpretation, measurement and experimental testing. Furthermore, what exact role does the metric play in this, and is the way I think about it correct? I have looked a tiny bit into GTG and it does sound quite interesting, though I have not yet understood how its interpretation works. I will have to look more closely into the techniques applied, though the idea of starting a theory from the action and deducing the model from there is inconvenient for my case, because my starting point is indeed the metric and I know too little about what the resulting model may be. Also, I'm not sure if a flat Minkowski space is a good basis for a formulation where c(x) is deliberately made non-constant. During the week I usually don't have too much time to focus on such topics. For now you gave me plenty of stuff to read.
  17. Okay, I totally failed at explaining what I mean by "changing the metric". In my defense, I don't know any appropriate terminology for that particular procedure and googling didn't help... so I turn to the forums. Maybe let me try to rephrase it: given a Riemann manifold (X,g), we can also consider it a simple metric space (ignoring its differential structure for the start). Now let's consider the identity map id of X to itself. I want to introduce an alternative metric structure on X to make it a different metric space (X,f). In that scenario id also becomes a map between two metric spaces, and the intention of the choice of f is that id won't be an isometry. Now, accounting for the fact that X is a smooth manifold, we have two distinct Riemann manifolds, each with its own LC connection, and they must consequently fail the Cartan-Karlhede test. I started reading on teleparallelism and it goes quite along the lines of what I am interested in. The tetrad field in my case would be built from the unit vectors of the TDB and BCRS coordinates. I am just not sure I understand the choice of metric and connection in that case yet; give me some time. That choice of metric indeed also amounts to studying a case of flat geometry, but I intend to stay within the context of Riemann geometry. The major difference is that I do not want to postulate any new physical laws on my own, but rather would like to deduce the laws in the new geometry from the starting theory using a transformation like Steven posted. In particular, I want to remove all influence of gravity from the geometry and torsion (rendering it trivial) and instead separate it out into its own fields: in terms of the transition to the new equation of motion of a particle, the remaining difference between the new and the old Christoffel symbols needs to be interpreted as physical fields representing gravity.
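A sketch of that last step, since it is the crux (a standard fact, not specific to my setup): the difference of two connections on the same manifold is a tensor, so with \(\Gamma\) the LC connection of the original metric \(g\) and \(\tilde\Gamma\) the LC connection of the new metric \(f\),
\[ C^{\lambda}{}_{\mu\nu} \;=\; \Gamma^{\lambda}{}_{\mu\nu} - \tilde\Gamma^{\lambda}{}_{\mu\nu} \]
transforms as a (1,2) tensor field, and the geodesic equation of \((X,g)\), rewritten in the geometry of \((X,f)\), picks up \(-C^{\lambda}{}_{\mu\nu}\dot x^{\mu}\dot x^{\nu}\) as an explicit force-like term - which is the field I would then read as gravity.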
  18. Yes, and that connection is always given by the metric of the Riemann manifold via Levi-Civita. This is why the definition skips mentioning it. In the special case of Riemann geometry the metric uniquely dictates the connection and gives it a special name, the LC connection. But don't misunderstand me: I am not insisting that in general a connection requires a metric for its definition. It does not. In that sense it is indeed an entirely independent object around which there is a separate field of study. But we are not discussing the connection itself but geometry, and that is another matter. Some geometric properties have redundant definitions, as they can be defined via different concepts, e.g. geodesics. Besides what the name geometry already implies, as you can see from the field of pure metric geometry, the main geometric definitions don't need a connection at all. This is where it is important that when different concepts are available at the same time, their compatibility must be ensured. It is weird to work with a metric and a connection that contradict each other, showing two very different geometries. So whenever geometry specifically is concerned, there is a clear link between them. For the part of physics I want to discuss I assume we have both a metric and a connection, and they have to be compatible so that whenever we talk about geometry in the model, these don't provide contradicting accounts. A change of metric hence requires finding a new connection compatible with the new metric. I'm on it.
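For completeness, the two conditions and the resulting formula I am referring to (textbook form): metric compatibility \(\nabla g = 0\) together with vanishing torsion single out
\[ \Gamma^{\lambda}{}_{\mu\nu} = \tfrac{1}{2}\, g^{\lambda\sigma}\left(\partial_{\mu} g_{\sigma\nu} + \partial_{\nu} g_{\sigma\mu} - \partial_{\sigma} g_{\mu\nu}\right), \]
so changing the metric tensor automatically changes the compatible connection along with it.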
  19. When you stick to that definition, your 21404-mile route is a geodesic, just not a minimizing geodesic.
  20. The general definition only requires distance to be minimized locally; extending that condition to the entire interval is what makes it hold globally.
  21. I very roughly summarized the definition. Technically, it merely minimizes the distance locally. As we know from optimization problems, finding a local minimum doesn't guarantee at all that it's also a global one. This can happen when there is more than one geodesic connecting two points with each other. The definition requires the existence of a continuous curve between the start and the end, so it only concerns connected subsets of the space. You can find the definition here: https://arxiv.org/pdf/2007.09846.pdf and this is how this general concept translates into the special case of Riemann geometry: https://www.cis.upenn.edu/~cis6100/cis61008geodesics.pdf
  22. I have to admit that I am a bit rusty in the field and had to look up a few things again. I mean no disrespect either, but what you said is at least partially at odds with the literature on geometry - because you leave important things out. While the connection is a fundamental tool in geometry, it is not actually used in most geometric definitions, because it is only available under special circumstances. And whenever you talk about an LC connection, you forget that it is explicitly defined as a metric connection, that is, via its compatibility with the metric (please check that if you don't believe me). So maybe let's start with clarifying a few things then. The concept of a geodesic, as a generalization of a straight line to a curved space, is roughly defined as the locally shortest path between two points (skipping some details). That is among the most general definitions, for which only a metric space is needed but no differentiable structure (hence no connection). Note that also the setup of Riemann geometry, the Riemann manifold, is defined minimalistically in terms of only a smooth manifold and a metric, with no mention of a connection, because being a metric space, the entire geometry along with the geodesic structure is fully specified this way already. However, for any practical use it is incredibly cumbersome to construct geodesics from that alone, and since we are in a special case, we can make use of the additional tools available. This is where the LC connection comes into play. Aside from being torsion-free, its other defining property is that it must be compatible with the metric on the Riemann manifold. Only the combination of both conditions makes the connection unique. The second condition is crucial because it assures that the geodesics constructed via the connection agree with their metric definition - and frankly speaking, this is the core motivation for that definition. Torsion-freeness is chosen for simplicity and, more importantly, to make it unique. In the case of Riemann geometry the definition of geodesics via parallel transport is equivalent to their metric definition through that link. But that of course means that we cannot treat the connection as independent from the metric at all. What you write sounds as if lengths, angles and volumes were effectively independent of the geometry. But you are aware that angles are a way to express whether two vectors are orthogonal, parallel or in between? So your independent metric may get into an argument with your connection about its idea of a parallel transport. Yes, this is very helpful! Albeit I will have to do some reading before I can sort out how these relate to what I want to look into. Give me a few days. I'm sure I'll be back full of questions.
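The metric-space definition I keep referring to, spelled out (standard form, essentially the kind of definition given in the arxiv link I posted): a curve \(\gamma : I \to X\) in a metric space \((X,d)\) is a geodesic if every parameter value has a neighbourhood \(J \subseteq I\) and a constant \(v \ge 0\) such that
\[ d\big(\gamma(t_1), \gamma(t_2)\big) = v\,\lvert t_1 - t_2\rvert \quad \text{for all } t_1, t_2 \in J, \]
i.e. it is locally distance-realizing - no differentiable structure and no connection appears anywhere in this.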
  23. Steven's post was the best and shortest summary of what I intend to discuss - and yes, just as Poincaré implies and Steven writes, the change of geometry entails a change of the laws of physics. Einstein's field equations are linked with a very specific interpretation and geometry; changing the latter requires updating the former. The starting point is that I want to use different devices as clocks, which will produce time measurements that disagree with the proper time general relativity expects. Instead of discarding these devices as false, I intend to find a model that fits them, and therefore I need a metric tensor that is able to reproduce their time measurements. Measurements with such clocks will naturally also show a disagreement when testing various laws of physics as we know them, hence we do indeed need different laws to make the new clocks work. Your link is a bit general; can you tell me which chapter I should look up in more detail? When comparing Newton's old theory to the relativistic case, we find that it is itself a collection of different proxy models, because we have to choose the frame in which gravity acts instantaneously. Depending on the choice/definition of what we consider simultaneous in reality, we get a model that will produce slightly different predictions. Furthermore, we need to interpret Newton's model and in particular each time measurement. Do we use an interpretation where any SI clock is a valid measure of the absolute time, or do we identify Newton's time with a coordinate time like TDB? While that does not change the workings of the physical model itself, it has a big impact on translating measurements into initial conditions and later back into predictions we can compare with experiments. Newton's gravity is by no means a uniquely determined theory either. Hmm, not sure what you have trouble with here. As you can see from Steven's post, we have two connections Gamma and Gamma', which are both derived via the formula from their corresponding metric tensor g or g'. That formula is what makes each into an LC connection of the corresponding geometry. Since both manifolds use the very same set and coordinate map, we can evaluate all those terms in those coordinates at each location and find they are simply different matrices. Let's go back to the simple case of a single differentiable manifold, which in one case we equip with a Riemann metric to make it into a sphere and in the other we pick another metric to make it into an ellipsoid. Both cases have each a metric tensor and an associated LC connection. But the LC connection on a sphere cannot be the same as on an ellipsoid. An LC connection is only unique per Riemann metric - therefore on a single differentiable manifold we have as many LC connections as we have valid tensor fields that fulfil the requirements of a Riemann metric (bilinear, symmetric, non-degenerate everywhere). I would be very interested in looking these up. It's quite possible I was looking (googling) for the wrong thing and my results just came up empty.
  24. Yes, a covariant form is, as you say, independent of the choice of coordinates - but I do not intend to change the coordinates. The issue is that "changing the metric" has an ambiguous meaning, because in physics it is used differently - the actual metric is never considered to be changeable. Physics doesn't consider the possibility that the very same physical situation can be described using different geometries. Analogously to how we can freely choose coordinates, mathematics does actually allow changing the geometry as well, as long as the topology is shared. Consider that Newton's classical theory of gravity and Newton-Cartan theory represent the same physics, yet they achieve the same predictions by the use of different geometries. I do assume to always use a Levi-Civita connection, as you can see from the formulas in my last post. I left out the connection mostly because I prefer to have it compatible with the metric and therefore implied by the known formula for the Christoffel symbols. But in my case it is applied to a different tensor, hence the resulting connection is different as well. I hadn't said that explicitly, but it's in the formulas I posted. If we agree to restrict to LC connections, it is enough to specify only the metric tensor. If the formulas in my example weren't enough or the idea too unfamiliar, note that a Weyl transform is a special case of this; however, it has a different purpose, and I don't intend to restrict to rescalings of the metric but rather allow any local transformation that preserves the rank of the tensor. The idea is indeed to redefine the meaning of lengths, angles and what is parallel: what one metric tensor (and its connection) may consider orthogonal, another may not! In mathematics we know that we can work with different definitions of orthogonality all the same - it is just not trivial to find the proper interpretation of what that means. Just as using different coordinates requires a different interpretation of what each point is (i.e. (0,0,1) is a very different location in polar than in Cartesian coordinates), we must similarly adapt our interpretation when using a different geometry (lengths, angles etc.). You are right that if we change the connection and leave the interpretation unchanged, we will get a different model that will make wrong predictions. However, if we change both the model and its interpretation accordingly, we can leave all predictions untouched.
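For the record, the Weyl transform I mention as the special case (standard definition; the general substitution I have in mind drops the restriction to a mere rescaling):
\[ \tilde g_{\mu\nu}(x) = e^{2\omega(x)}\, g_{\mu\nu}(x), \]
which rescales lengths pointwise while leaving angles - and hence the notion of orthogonality - untouched; the general case changes those as well.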
  25. Okay, from your responses I see we are talking past each other. All you say is true, but it is also not exactly related to what I was writing. Let's go back to the example of a single differentiable manifold that we can make into either an ellipsoid or a sphere depending on what metric and connection we define on it. Because both are built on the very same manifold, any covariant tensor equation in the ellipsoid geometry can be translated into one in the sphere geometry like this: (btw, how do you use LaTeX around these forums when in need of writing down some formulas?) There is also an interesting discussion of this by Henri Poincaré, which you can find in section XII here: https://en.wikisource.org/wiki/The_Foundations_of_Science/The_Value_of_Science/Chapter_2 It discusses the difficult situation we have in physics where, in order to conduct any measurement at all, we need to start with some assumptions. These kinds of assumptions must be distinguished from other physical laws, as they are not testable in an experiment, as Poincaré remarks. For example: you will find that we cannot simply assume the speed of light to be non-constant, as this - given how we define the measurement of time and distance - will produce contradictions whenever we try to model these measurements with such an assumption. Anyhow, once you have made a transition to another geometry, you have to be very careful with the interpretation; if you use the wrong one, the new model will of course make wrong predictions. But same as for general relativity, you first have to deduce the correct clocks and rods that are represented by the new metric tensor and its connection. Only then will the combined model and interpretation yield identical predictions to your starting theory.