Everything posted by Markus Hanke
-
As I mentioned earlier, in the context of GR gravity is defined as geodesic deviation. What this means is that GR tells us about the world lines of test particles - far from gravitational sources, initially parallel world lines remain approximately parallel; in the vicinity of sources, initially parallel world lines will deviate in specific ways. GR allows us to calculate this deviation, i.e. the motion of test particles; in fact this is all it does, since geodesic deviation is precisely what ‘curvature’ means. It does not address the question of why (in a fundamental ontological sense) the deviation occurs, it simply quantifies it. So it is purely descriptive in that sense, and no deeper mechanism is suggested or implied. It is also important to remember that the mathematical structures employed in this description (manifolds, connections, metrics, geodesics) were known and existed long before Einstein, who simply put them to use for his model. They do not ‘belong’ to GR, but are general mathematical entities used in many other contexts as well. It is also possible to use different mathematical tools to arrive at the same results (e.g. a Lagrangian instead of curvature tensors, or numerical methods). Given this, in what sense could the formalism of GR be anything more than instrumental? The only observable of the model is the motion of test particles, but the entities used to calculate that observed motion are not themselves observable or detectable in any way, and can to some extent even be swapped for different ones. I don’t know if that makes me an instrumentalist, but if it does then I’m ok with that label. I just think it’s dangerous to reify mathematical tools that don’t correspond to physical observables, especially when we already know that more than one formalism is possible for a given model. You can point to a test particle falling, but you can’t point to a Riemann tensor. 
That doesn’t diminish its usefulness, but we shouldn’t make more of it than what it is. A force is a vectorial quantity by definition. Vectors are rank-1 tensors; it can be formally shown that no rank-1 object can, in general, capture the degrees of freedom exhibited by gravity. You need at least a rank-2 tensor for this, hence the necessity for a metric theory such as GR. So no, force fields are not generally equivalent to spacetime curvature, on fundamental grounds. Only under very special circumstances (a static, spherically symmetric vacuum that admits a time-like Killing field) can you describe gravity using a simple potential, and thus a force. Ok, but what exactly is meant by this? As explained, GR models the motion of bodies, so it accurately enough represents that aspect of reality. But do you mean to ask whether all the unobservable mathematical entities employed in arriving at that observable result must necessarily also represent aspects of reality? For example semi-Riemannian manifolds, and curvature tensors?
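Since geodesic deviation is the one observable here, a tiny numerical sketch may help make the idea concrete. This is only the Newtonian limit in illustrative units (G·M = 1), not GR's formalism: two test particles released side by side each simply fall along their own path, and yet their initially parallel paths converge.

```python
# Hypothetical illustration: geodesic deviation in the Newtonian limit.
# Two test particles dropped side by side toward a central mass converge,
# even though each one simply falls along its own path. Units are chosen
# so that G*M = 1; all numbers are illustrative, not physical.

def fall(x0, y0, t_end=0.6, dt=1e-3):
    """Euler-integrate a test particle in a central inverse-square field."""
    x, y, vx, vy = x0, y0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        r3 = (x * x + y * y) ** 1.5
        ax, ay = -x / r3, -y / r3   # a = -GM * r_hat / r^2 with GM = 1
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        t += dt
    return x, y

# Two particles released from rest at the same height, slightly separated:
xa, ya = fall(-0.005, 1.0)
xb, yb = fall(+0.005, 1.0)

sep_initial = 0.01
sep_final = xb - xa
print(sep_final < sep_initial)  # True: the initially parallel paths converge
```

The separation shrinks even though neither particle feels any "sideways force" - which is exactly the sense in which curvature is defined through the relative behaviour of world lines.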
-
Yes @joigus, I lurk in the shadows and follow proceedings here whenever I get the opportunity. At present I live in the jungles of Thailand, having recently been ordained as a monk, and do not have access to anything other than an old mobile phone with spotty and slow internet access, so I’m not really in a position to participate in discussions. It’s just too slow and painful to type this way. I will return once I get access to better infrastructure - perhaps some time next year. Satellites in orbit are in free fall - place an accelerometer into them, and it will show exactly zero at all times. No proper acceleration -> no force acting on them. And yet they don’t fly off into space, but remain gravitationally bound into their elliptical orbits. Clocks in them are also dilated wrt far-away reference clocks, which is also a gravitational effect. Thus, no force, but still gravity. Newtonian forces are simply bookkeeping devices, and as such they often work well - but only in the right context. Their nature is descriptive, but not ontological. They are not very physical either, given that they are assumed to act instantaneously across arbitrary distances. The strong, weak, and EM interactions aren’t ‘forces’ in that sense at all, since they work in very different ways. They are only sometimes called ‘forces’ by convention, for historical reasons. They ultimately arise through the breaking of symmetries, with the particles involved being irreducible representations of symmetry groups. Finally, it should be noted that physics makes models, that’s what it sets out to do - and as such it is always descriptive rather than ontologically irreducible. So, asking whether gravity “really is” A or B, or whether A or B are “true”, is fairly meaningless, since both A and B are descriptions of reality, but not reality itself. Like maps of a territory. 
The correct question is thus whether models A and/or B are useful in describing gravity, and in what ways and under what circumstances they are useful. So - Newtonian gravity is sometimes useful, but GR is more generally useful, as it gives more accurate predictions for a larger domain. So for now the best answer to “what is gravity” that we have is a purely descriptive one: it’s geodesic deviation, and thus a geometric property of spacetime. To put it flippantly, it’s the failure of events to be causally related in a trivial manner. Future advancements may upend this picture in the high-energy domain, perhaps radically. We’ll see. I’m sorry I can’t contribute much at the moment, but I’ll leave you with the above thoughts. I could have written much more, but it’s too much of a pain on a small mobile phone screen.
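The satellite-clock effect mentioned above can be made quantitative with a simple weak-field estimate. The sketch below assumes a GPS-type circular orbit and standard published values for Earth's parameters; it combines the gravitational term (a clock higher in the potential runs fast relative to the ground) with the kinematic term (a moving clock runs slow):

```python
# Weak-field estimate of the daily clock offset for a GPS-like satellite.
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
c  = 2.99792458e8     # speed of light, m/s
r_earth = 6.371e6     # mean Earth radius, m
r_orbit = 2.656e7     # GPS semi-major axis, m (~20,200 km altitude)

# Gravitational term: fractional rate difference between orbit and ground
grav = GM / c**2 * (1.0 / r_earth - 1.0 / r_orbit)

# Kinematic term for a circular orbit, where v^2 = GM/r
kin = GM / r_orbit / (2.0 * c**2)

offset_per_day = (grav - kin) * 86400.0
print(offset_per_day * 1e6)  # ~38.5 microseconds/day, satellite clock ahead
```

Both terms are pure gravity-plus-relativity bookkeeping, with no "force" appearing anywhere; GPS receivers must correct for this offset or positional errors would accumulate at kilometres per day.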
-
Dear All, I am going to take a hiatus from the forum from today. As some of you might know, the natural sciences are not my only area of interest; in particular, I am committed to a form of spiritual practice as well, and have been living in a Buddhist monastery as a lay person for the past few years. I have made the decision to deepen this practice further by ordaining as a monk in the Theravadin Thai Forest tradition, and for various logistical and monastic-political reasons this should ideally happen at a traditional training monastery in Thailand. So tomorrow I will be departing for Thailand to seek ordination there. I think it doesn’t need pointing out that forest monks generally don’t spend a lot of time on Internet forums, so chances are that I will only get to check in here very occasionally, if at all. That being said, there are a lot of question marks and uncertainties, particularly in terms of immigration formalities, so it is possible that I will need to come back here to Europe in a few weeks once my initial entry permit runs out, and make alternative arrangements from here (meaning I’ll have to find another place to ordain). I will only know once I get to the monastery and start dealing with the local immigration authorities (I see frustration and nightmares on the horizon!), but I’m willing to take that risk. I have been debating whether it is useful to present my reasons for going this path - you have seen me here being on about physics and equations all the time, so this might appear strange to some of you. But I’ve decided not to, because when it comes down to it, I can’t really present a convincing rational argument - this decision simply didn’t come about as the result of reason. I will say only that I’ve seen and understood enough in the spiritual practice that I have already done in the last few years, to know that this is the right path for me. 
The argument is a phenomenological one, not the result of rationality, so it cannot be easily conveyed in a written post. Spirituality ultimately expresses itself in the kind of person you become by engaging in it, and that’s not something you can fake or wear as a mask. You also cannot reason yourself into the monastic life - that is far too weak a basis for anyone to be at peace with that form of life, let alone to derive any benefit from it. It needs to be a true conviction that arises somewhere deep within, and that cannot be verbally communicated to others. I will add here that for me there has never been any contradiction between scientific endeavours, spiritual practice, and philosophical enquiry. Not only is there no contradiction, for me these are just aspects of the same underlying motivation to better understand the human condition; hence, if engaged with in the right way, they are complementary and inform each other. I have always felt strongly that it is necessary to achieve some kind of synthesis of these three things for us as a species to make any kind of real long-term progress, since each one in isolation can be misused for harmful and even destructive purposes, as history has sadly shown us all too often. So anyway, thank you everyone for sharing in these discussions, and I hope I have been able to make some kind of contribution - no matter how small - to this forum. In case I’m not back here for a while, I wish all of you the very best, and hopefully we’ll cross paths again. Keep my account open, just in case.
-
Yes, and we moved forward from there. We know a lot more now than we used to, so we won’t be going back to 1963. Yes. And we can do much more than that - we can even probe the internal structure of the protons and neutrons themselves, and thus directly test the quark model. In particle physics we do not speak of “certainties”, but instead deal with a quantity called statistical significance. Given a sufficiently large data set, this essentially tells us how likely it is that an event is “real” (as opposed to being a statistical fluke of some kind). As for neutrinos, yes, we know these things with a very high degree of statistical confidence, way beyond the required threshold value. Note that the three neutrino flavours and their oscillations have little to do with mass, other than the fact that they need to have a non-vanishing rest mass in order to oscillate at all. The various known fundamental particles have been found - and continue to be probed - with a large variety of different methods. I’m not sure what this has to do with protons, specifically. Sure. Some examples that immediately spring to mind would be nuclear reactors, diagnostic equipment such as PET and MRI, quantum computers, and many more. Even your smartphone is likely to contain components that directly rely on some aspect of particle physics in order to function correctly. Also, the chemical properties of all the various elements are a direct result of particle physics and its laws. Pair production is a simple consequence of quantum field theory, along with the usual conservation laws. There is little mystery here - you can even deduce some of the basic kinematics at play using semi-classical methods. That’s because, energy levels being equal, what we call a proton is a composite quark-gluon system, whereas an electron is an elementary particle. You can read up about quantum chromodynamics, if you want to know more details. 
Yes, which is precisely what General Relativity tells us will happen once certain conditions are present. Not so! There is a lot we don’t know yet, and yes, there are some obvious shortfalls and problems in some of our models. Physics would be a very boring discipline if that were not so - these issues are what provide the impetus to do further research, and continuously develop new models, so this is a very positive thing. At the same time though, there is an awful lot we already know at confidence levels that are so high that for all intents and purposes they can be considered near-certainties. We have much more powerful and sensitive instruments at our disposal compared to 1963, so we are able to probe far deeper into the structures of reality. The quark model wasn’t fully developed and experimentally tested until the 1970s, so your textbook is missing a huge piece of the puzzle.
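As an example of the semi-classical kinematics mentioned above: the threshold energy for pair production follows from energy conservation alone - the photon must at minimum supply the rest energy of the electron-positron pair (a nearby nucleus absorbs the recoil momentum).

```python
# Minimal threshold estimate for pair production: a photon converting into an
# electron-positron pair (near a heavy nucleus, which takes up the recoil
# momentum) must carry at least the rest energy of both particles.
m_e = 9.1093837015e-31    # electron rest mass, kg
c   = 2.99792458e8        # speed of light, m/s
MeV = 1.602176634e-13     # joules per MeV

threshold = 2 * m_e * c**2 / MeV
print(threshold)  # ~1.022 MeV
```

This is why pair production is only observed for gamma rays above roughly 1 MeV - no quantum field theory machinery is needed to get the threshold itself.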
-
Only the first five particles in that table are actually elementary - the entire rest of the list are composite particles. There are also very many particles missing. Why do you go back to a book that is nearly 60 years old, and thus outdated? Why not refer to a more modern publication that reflects our current level of knowledge on this subject? It isn’t. Neutrinos (of which there exist more than one kind) are fermions, and they have a small rest mass; photons are bosons, and massless. They are completely different. Because this old information turned out to be both incomplete, and wrong in places. We know a lot more about particle physics now (from experiments and observation) than we did in the 1960s. The main objection would be that they are simply not there. With modern particle accelerators, we can probe not only the nucleus as a whole, but also the internal structure of the proton and neutron, so we already know that there are no relativistic electrons to be found there.
-
No, but that doesn’t mean that within the fish blood cannot circulate. Likewise, the composite system “astronaut + photon” cannot move away from the central singularity (only towards it) - but that doesn’t necessarily mean there can’t be ordinary (i.e. respecting the laws of SR) relative motion between the photon and the astronaut’s eyes on a small enough local scale.
-
Heisenberg's uncertainty principle for dummies?
Markus Hanke replied to To_Mars_and_Beyond's topic in Quantum Theory
Well, even for a classical system there will be limitations due to the limited sensitivity of the measurement apparatus - e.g. you couldn’t weigh a grain of sand using a kitchen scale, since it’s not nearly sensitive enough. But that’s due to the apparatus, not due to anything inherent in the grain of sand. So that’s a different phenomenon than HUP. -
Correct, it is indeed, but that isn’t how such a surface is defined (that would be difficult, since all light cones have a light-like interior). The simplest formal definition I know of for any kind of boundary surface like this is by way of what kind of normal vector with respect to the local metric they admit. In the case of an event horizon, wrt the local Lorentzian metric, the unit normal vector at all points is a null vector, so this is a null (hyper-)surface. In fact, it can be shown that all event horizons are always null surfaces. If I remember correctly, Wald (General Relativity) formalises this by using the pullback of the metric, but tbh I don’t remember the details exactly. I’d have to find that in my notes first. I won’t claim that this is wrong, because I am honestly not sure how this would play out. I had similar thoughts actually, which is why I mentioned the relative motion between photon and falling astronaut. Your general line of thought is not wrong, since both photon and astronaut are falling, so neither is increasing its r. Nonetheless, wrt the astronaut the photon must of course still propagate at exactly c, so I am unsure what form the relative motion between the two would need to take. I don’t see how the eyes of the astronaut could possibly “catch up” with the photon, while still preserving the usual local laws of SR. Perhaps the answer is obvious (lol), I just don’t see it right now. This is one of those questions that seem trivial at first glance, but if you really think about them, you’ll find a lot of little devils in the details. Both the photon emitted from his boots, as well as the astronaut, can only fall along allowed geodesics in this region of spacetime, which means they both can only decrease their r-coordinates as they age into the future, wrt the central singularity. 
However, this does not necessarily preclude a relative motion between photon and helmet (and photon and boot) such that the astronaut might see something, so long as this relative motion is in accordance with the usual laws of SR - so the astronaut must determine the photon’s propagation velocity to be exactly c in his own frame. It should be possible to set this up accordingly - after all, a freely falling test particle in a region where tidal forces are negligible is locally inertial, i.e. it finds itself in a small local patch of Minkowski spacetime, irrespective of whether this is above or below a horizon surface. So in principle, so long as the BH is massive enough, the astronaut should be able to see his boots for a while at least, because otherwise he couldn’t be considered to be in inertial motion within a Minkowski patch. That being said, if my years of looking into GR have taught me anything, then it is to be suspicious of what seems “intuitively obvious” - I’ve fallen on my nose often enough through this mistake. So perhaps I’m overlooking something here.
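For what it's worth, the null-surface criterion mentioned earlier can be checked directly for the Schwarzschild horizon. This sympy sketch (geometric units, G = c = 1) just evaluates the norm of the normal covector of an r = const surface, which is the inverse-metric component g^rr:

```python
import sympy as sp

# For a surface of constant r in Schwarzschild spacetime, the normal
# covector is dr, and its squared norm is the inverse-metric component
# g^rr = 1 - 2M/r (geometric units, G = c = 1).
r, M = sp.symbols('r M', positive=True)

norm_of_dr = 1 - 2 * M / r

# Outside the horizon (r > 2M) this norm is positive, so r = const is an
# ordinary time-like surface; exactly at r = 2M the normal becomes null,
# which is what makes the event horizon a null surface.
norm_at_horizon = sp.simplify(norm_of_dr.subs(r, 2 * M))
print(norm_at_horizon)  # 0
```

The same computation below the horizon (r < 2M) gives a negative norm, i.e. a time-like normal, which is one way to see that r has become a time-like coordinate there.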
-
Heisenberg's uncertainty principle for dummies?
Markus Hanke replied to To_Mars_and_Beyond's topic in Quantum Theory
As has been pointed out by other posters here, this is called the measurement effect, which is not the same as the HUP. The fact that certain pairs of observables cannot be determined simultaneously with arbitrary precision is something that is intrinsic to the quantum nature of the system - it is not something that arises as an artefact of the measurement process. As swansont has stated, this is because these observables aren’t independent quantities, they are Fourier transforms of one another. In more technical terms, these pairs of observables do not commute, and any pair of non-commuting quantities is always subject to some uncertainty relation. No, because what you are describing is a classical system, and one of the defining characteristics of classicality is precisely the fact that all observables always commute. This is not true in the case of quantum systems, though. Yes, but to do so you need to decide on a choice of basis representation. So you can either determine the state function in position representation, or in momentum representation - and these are again Fourier transforms of one another, so the HUP still applies. -
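For anyone who wants to see the Fourier-transform relationship in action, here is a small numerical sketch (numpy, units with ħ = 1): a Gaussian wave packet saturates the uncertainty bound, so the product of its position and momentum spreads comes out at the minimum value 1/2, regardless of how wide or narrow you make the packet.

```python
import numpy as np

# Numerical check of the position-momentum uncertainty relation for a
# Gaussian wave packet (units with hbar = 1). The momentum-space wave
# function is the Fourier transform of the position-space one.
N, L = 4096, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

s = 1.3                                # width parameter of the packet
psi = np.exp(-x**2 / (4 * s**2))       # Gaussian wave function (unnormalised)

# Spread in position, from the probability density |psi|^2
px = np.abs(psi)**2
px /= px.sum()
sigma_x = np.sqrt((px * x**2).sum() - (px * x).sum()**2)

# Spread in momentum: Fourier transform, with p = hbar*k and hbar = 1
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
phi = np.fft.fft(psi)
pk = np.abs(phi)**2
pk /= pk.sum()
sigma_p = np.sqrt((pk * k**2).sum() - (pk * k).sum()**2)

print(sigma_x * sigma_p)  # ~0.5, the Heisenberg minimum hbar/2
```

Squeezing the packet in position (smaller `s`) widens it in momentum and vice versa - the product stays pinned at ħ/2. No measurement apparatus appears anywhere in this calculation, which is the point: the bound is a property of the state itself.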
Uncovering the Neural Mechanics of Autism
Markus Hanke replied to NudeScience's topic in Medical Science
I am on the autism spectrum myself, and I do not exhibit any of these “symptoms” (never have). My hearing is also perfectly standard, and always has been. -
Einstein never said this. The quote is from Ernest Rutherford. I presume you mean “Big Bang”. Where do recession velocities come into this? It has no rest mass, since there isn’t any frame where it could ever be at rest, but it does have energy and momentum. There is no such thing as “anti-photons”; photons are their own antiparticles. You cannot construct the set of known particles, their interactions and properties from just these. Also, the proton is not a fundamental particle. No. Atomic nuclei are held together by the residual strong force. If there were any electrons present inside the nucleus, then the shell structures of all the elements would look very different. Everything else in that post is essentially meaningless technobabble.
-
Consider an arbitrary event located directly on the surface in question, and attach a light cone to that event. Now look at the tangent space to the surface at that event. If the surface is time-like, the tangent space will fall to the interior of the light cone; if the surface is null, the tangent space will coincide with the surface of the light cone. So this isn’t the same - you can (at least in principle) escape from a time-like surface to infinity, but you can’t escape from a null surface. For some time, yes. This is true. I’d just like to point out that the non-existence of stationary frames below the horizon is not a consequence of tidal forces, but is due to the causal structure of spacetime; but you are right in that, for very massive BHs and just below the horizon, one could remain very nearly stationary for some time. General relativistic optics is a notoriously tricky subject, so I won’t speculate on this too much, also because it would in some ways depend on how exactly you move once you are below the horizon. In principle though, for very massive BHs and just below the horizon, there shouldn’t be any extraordinary visual effects, other than some blue-shifting of distant stars. It depends what you mean by “nearby”. Since below the horizon the r-coordinate becomes time-like in nature, all light cones will be tilted inwards - meaning you cannot see anything that is below you, since it is impossible for a photon to increase its r-coordinate, irrespective of how it is emitted. You can still communicate with particles co-moving along with you at the same radial distance. A particle higher up than you can send you messages, but your reply won’t ever reach that particle. So locally in your own frame nothing special happens, but once you start interacting with other local frames, I think you can always deduce that you are below a horizon. 
It’s a direct consequence of the geometry of this kind of spacetime, and you can fairly straightforwardly calculate the tidal effects that occur. For a radial in-fall into a Schwarzschild black hole, what you’ll find is that the test body gets stretched along the radial direction, and compressed perpendicular to it (this effect is hence called “spaghettification”). The magnitude of these effects follows an inverse cube law, and also depends on the mass of the black hole. This is linked to, but not necessarily dependent on, gravitational time dilation - you can have time dilation without there being spatial tidal effects (but not vice versa). For an observer who is stationary just outside the horizon, the astronaut will fall past him at nearly the speed of light, so the actual time it takes for a human body to cross the horizon is so short that no adverse effects could occur. The astronaut himself will never notice anything special as he falls through the horizon. If you look at the diagram you posted, you will notice that below the horizon all light cones are tilted inwards, towards the singularity. Photons “live” on the surface of light cones, meaning no photon could ever increase its radial coordinate, i.e. move away from the singularity. This is not a tidal effect, but due to the causal structure of spacetime. Once emitted, a photon can only decrease its radial position wrt the singularity. Hence, for an astronaut falling feet-first, a photon emitted from his foot should not be able to travel “upwards” to his eyes. On the other hand though the astronaut himself is of course also falling - so the real question is whether it is possible to set up the scenario such that the relative motion between photon and eyes can be made such that the falling astronaut can somehow “catch up” with the (also falling) light. I reserve final judgement here, as I think this is one of those situations where one would really have to go and work through the maths.
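The inverse cube law mentioned above has a neat consequence: evaluated at the horizon itself (r = 2GM/c²), the tidal stretch scales as 1/M², so supermassive black holes have very gentle horizons. A rough numerical sketch, using the Newtonian tidal formula as an order-of-magnitude estimate for an illustrative 2 m body:

```python
# Rough estimate of the radial tidal acceleration across a ~2 m body at the
# horizon of a black hole. The tidal stretch goes as ~2GM*L/r^3; inserting
# the horizon radius r_s = 2GM/c^2 shows it falls off as 1/M^2.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg
L = 2.0                # body length, m

def tidal_at_horizon(M):
    r_s = 2 * G * M / c**2            # Schwarzschild radius
    return 2 * G * M * L / r_s**3     # differential acceleration head-to-foot

print(tidal_at_horizon(M_sun))        # ~2e10 m/s^2: fatal well outside the horizon
print(tidal_at_horizon(4e6 * M_sun))  # ~1e-3 m/s^2: imperceptible (Sgr A*-like mass)
```

So for a stellar-mass black hole spaghettification happens long before the horizon, whereas an astronaut crossing the horizon of a supermassive one would feel nothing at all at that point.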
-
It’s a null surface, actually. The geometry of spacetime below the horizon is such that no stationary frames exist - in other words, no matter how much radial thrust the engines of the unfortunate ship put out, it will continue to experience radial decay as it ages into the future. So the two ships couldn’t remain at relative rest. What did you mean by “paradox” in the thread title?
-
In what way am I “not right”, exactly? As I have pointed out to you, the statistical significance figure currently stands at roughly 4σ, which is not enough to establish LU violations as being physically real just yet. That’s just how it is. You find the raw data in the link I gave, so you can verify the figure yourself. If these violations are verified to be physically real by future measurements, then this will be a very exciting find - discovering new physics is the pinnacle of every physicist’s life, and the idea that this is somehow perceived as a “threat to dogma” is simply ridiculous. No genuine physicist thinks this way. Personally I cannot wait to learn whatever structure underlies the Standard Model and/or GR, though it’s perhaps unlikely to happen within my lifetime. Either way, the SM will continue to be used for cases where it is known to work well, just like Newtonian gravity continues to be used alongside GR, and classical mechanics alongside QM. Remember the purpose of physics: it makes models to describe aspects of the world. It is not about some notion of “truth”. Hence, a model will continue to be used for a specific purpose as long as it is useful, internally self-consistent, and delivers results that are in line with what we see in the real world. Are there any known issues with the Standard Model? Most certainly - here is a list of the most obvious ones. It is precisely these issues that provide an impetus for continued research, both in the theoretical as well as the experimental domain. I think this is all very exciting, because historically you’ll find that the phase when the limitations of an existing model become better understood generally precedes important new discoveries and paradigm shifts. This is simply not true. If you look at the link I gave above, you will find in it not just a listing of the limitations of the Standard Model, but also a number of alternative models (not an exhaustive list). 
These alternatives continue to be extensively researched, and are taken seriously by the scientific community. However, as it stands, there isn’t enough evidence in favour of any of these, and also, some of the alternatives come with problems of their own.
-
Synchronizing clocks in different frames of reference.
Markus Hanke replied to geordief's topic in Relativity
I think (but maybe that’s just me) that the notion of “tick rate” is not particularly helpful, since no ideal clock can ever tick at anything other than “1 second per second” in its own frame, irrespective of where it is and how it moves. It is only when you compare the total accumulated time between two shared events that differences become apparent. “Tick rate” is one of those notions that, even though everyone routinely uses it, all too easily lends itself to misinterpretation. Now, the total accumulated time a clock records as it travels from event A to event B is identical to the geometric length of the world line it traces out while connecting these events - so it is actually a geometric quantity. This is true regardless of what the geometry of the underlying spacetime is, so it applies whether or not there is gravity present. So what is the meaning of acceleration then? If you have two events A and B in spacetime, the longest (!) possible world line that connects these is always that which represents a test clock in free fall (i.e. inertial motion) - such world lines are called geodesics. Hence, given that the absence of any acceleration yields the longest possible world line and thus the most accumulated time on your clock, the presence of acceleration at any point of the clock’s journey will shorten its world line between the same two events - so the clock will accumulate less time. This is just precisely what we see as time dilation (due to acceleration). So, proper acceleration can thus be understood as the degree by which a world line differs from being a geodesic, or alternatively, the degree by which motion deviates from being free fall. Or in a somewhat more fancy way, it’s a parameter that picks out a world line in a 1-parameter family of all (physically realisable, sharing the same boundary conditions) world lines connecting two given events in spacetime. 
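To make the "longest world line" statement concrete, here is a minimal numerical example in flat spacetime (c = 1, illustrative numbers): between the same two events, the stay-at-home inertial clock accumulates more proper time than an out-and-back traveller, who must accelerate at the turnaround.

```python
import math

# Two world lines connecting the same pair of events A and B (same place,
# coordinate time T apart). The inertial clock stays put; the traveller
# goes out and back at constant speed v, accelerating at the turnaround.
# Proper time is the geometric length of the world line (c = 1).
T = 10.0        # coordinate time between events A and B
v = 0.6         # traveller's outbound/return speed

tau_inertial  = T                          # geodesic: the longest world line
tau_traveller = T * math.sqrt(1 - v**2)    # shorter world line, less time

print(tau_inertial, tau_traveller)  # 10.0 vs 8.0
```

The traveller's clock reads 8.0 against the inertial clock's 10.0 - the shortfall is entirely a statement about world-line geometry, not about any mechanism inside the clocks.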
Note that this type of time dilation has nothing to do with spacetime curvature - it’s simply about how you choose to connect two given events. -
Synchronizing clocks in different frames of reference.
Markus Hanke replied to geordief's topic in Relativity
Not unless you artificially make it so. If one of the clocks experiences acceleration and the other one does not, then there will be time dilation between the two. -
What I am saying is that, to the best of my limited knowledge in this area, this has already been intensively investigated (with methods that aren’t so crude, such as fMRI etc), and no local “seat” of consciousness has been found. It appears to be a global property, not something that can be uniquely reduced to a single area.
-
Neurophysiology is definitely not my area of expertise, but it seems evident that consciousness isn’t localisable to any specific area in the brain; it’s a global phenomenon. Of course, there will be some local areas the proper functioning of which is a prerequisite for having ordinary consciousness; but that’s not the same thing. If you were to take that old radio in your kitchen, open it, and remove any random piece from its main board, then chances are there won’t be any more music playing - but that does not imply that that random piece was what generated the music. How exactly is consciousness a “frequency”? Frequency of what?
-
First of all, YouTube videos are not valid sources of scientific information - not even if the information given happens to be correct. So I did some quick research on the current state of affairs in the field (this isn’t my area of expertise), and here’s a good summary: https://arxiv.org/pdf/1809.06229.pdf The upshot is that the current indications for there being some violation of LU come in at a statistical significance of, on average, around \(4 \sigma\), and are seen only for the case of b-quark decays. Other quark decay processes are perfectly in line with SM predictions. This is not sufficient evidence yet to call a new discovery, since the statistical significance level is not high enough. At the very least this will require more such experiments in order to acquire a larger data set. All this being said, there are indeed tantalising hints that some new physics may perhaps be going on, pending further investigation. However, should this turn out to be the case, then this would in no way invalidate the Standard Model, which quite evidently works very well - it would simply require an extension to the model which provides a suitable mechanism to explain these findings. Note also that it is just as possible that these findings are not due to new physics at all, but could arise from our mathematical difficulties in treating QCD non-perturbatively. On a very high level, let me reiterate that we have known for a long time already that the SM in its current form is in all likelihood merely an effective field theory that provides an approximation to something more fundamental. As such no physicist in their right mind would expect the current SM to be the final word on the matter of particle physics. However, when such a more fundamental model is found, this still will not mean that SM is abandoned; after all, we know it works extremely well within the energy levels we can currently probe. 
This is similar to the situation in classical mechanics - Newtonian physics is still successfully used (and taught in schools), even though it’s just a low-energy low-velocity approximation.
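To put numbers on the significance levels discussed above: the σ values translate into the probability that a pure statistical fluctuation would mimic a signal at least that strong, via the tail of a normal distribution. A quick sketch:

```python
import math

# One-sided tail probability of a normal distribution: the chance that a
# pure statistical fluke produces a signal at least n_sigma strong.
def one_sided_p(n_sigma):
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

print(one_sided_p(4))  # ~3.2e-5 : interesting, but not yet a discovery
print(one_sided_p(5))  # ~2.9e-7 : the conventional discovery threshold
```

So moving from 4σ to the conventional 5σ discovery threshold shrinks the fluke probability by roughly two orders of magnitude, which is why the current b-quark anomalies are treated as tantalising hints rather than established new physics.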
-
As others here have said. I can generally follow the main ideas and steps of a paper within my own area of “expertise” (I’m self-taught and haven’t formally studied physics), namely General Relativity; which does not necessarily imply that I understand every single thing and detail (I don’t), but generally speaking that isn’t needed in order to grasp the general ideas and conclusions. Nonetheless, on occasion there will be publications which I can only follow with great difficulty - in the world of modern physics, one can spend many years specialising and studying a specific area, and yet not know everything there is to know about it. It is not rare that I come across GR-related things which I have never heard of before. In either case, it will never be as easy as reading the newspaper; the subject matter just requires deeper thought, knowledge and attention. Once I leave my area of expertise and interest though, I get lost pretty quickly - for example, most papers on quantum field theory and the Standard Model tend to be beyond me, since I’m not sufficiently knowledgeable about the intricate details, methodologies, and maths of those areas.
-
This is not true, because it is possible to construct topologies that are unbounded in space and time, yet finite in extent - analogous to (e.g.) the surface of a sphere, which has no boundary, but nonetheless a finite and well defined surface area. The Hartle-Hawking state (a valid solution to the Wheeler-deWitt equation) is one such example for the universe as a whole; it describes a spacetime that is finite in temporal (and possibly spatial) terms, and yet has no boundaries in either space or time. Even the reverse is possible - one can conceive geometric constructs that have a finite and well-defined boundary in all spatial directions, and at the same time infinite surface area enclosing zero volume, such as the Sierpinski cube. The global geometry and topology of the universe is a question that is nowhere near as straightforward as you seem to think it is, so be careful about making claims such as the above.
-
The notion of ‘gravitational potential’ can be meaningfully defined only in spacetimes which are (among other requirements) stationary, i.e. in spacetimes that, in mathematically precise terms, admit a time-like Killing vector field. The universe in its entirety is approximately described by an FLRW spacetime, which does not fulfil this crucial condition. So the concept of ‘gravitational potential of the universe’ is meaningless, which is why you weren’t able to find anything on this topic.
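The Killing-field condition mentioned above can be made explicit with a small sympy sketch. For an FLRW-type metric, whose spatial components carry the scale factor a(t), the candidate time-translation vector ∂/∂t fails the Killing equation unless the universe is static:

```python
import sympy as sp

# For a diagonal metric whose components depend only on t, the Lie
# derivative of the metric along d/dt is simply the t-derivative of its
# components. In FLRW, the spatial components go as a(t)^2, so d/dt is a
# Killing vector only if a(t) is constant - a static, non-expanding universe.
t = sp.symbols('t')
a = sp.Function('a')

g_xx = a(t)**2                       # representative spatial FLRW component
killing_defect = sp.diff(g_xx, t)    # (Lie_{d/dt} g)_xx for this metric

print(killing_defect)  # 2*a(t)*Derivative(a(t), t) -- nonzero unless a is constant
```

Since our universe has a changing scale factor, the defect is nonzero, there is no time-like Killing field, and hence no well-defined ‘gravitational potential of the universe’.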
-
Many results and papers first appear on freely accessible pre-print servers such as arXiv before they go to peer-reviewed journals, so the short answer is yes. The problem though is that such papers are almost always very technical in nature, so it is very unlikely that a random member of the general public would understand such articles unless they have the requisite background knowledge. It’s usually only later that more accessible accounts of these findings appear in various pop-sci publications aimed at the general public.
-
On the most abstract level (I have little interest in specific setups tbh) I can tell you for a fact that electromagnetism locally conserves energy-momentum, just like any other interaction in nature: \[\nabla_{\mu} T^{\mu \nu}_{(EM)} = 0\] As such, it is not possible to get “free energy” from a magnetic field, on fundamental grounds, irrespective of how the apparatus functions in detail. At minimum, making the magnets in the first place would cost you at least as much energy as you could ever extract to propel the spacecraft.