Everything posted by Markus Hanke
-
Whoa! Remind me to visit your house some day, sounds like the place to be... I must admit I am baffled by this - you are a philosopher yourself, so surely you must see the issue with this? When you probe a sample of matter on atomic scales, what are you really going to find? Will you find ‘atoms’? Of course not. What you will find are ensembles of electrons, protons and neutrons, in various configurations, plus an abundance of vacuum. That is all. What we call ‘atom’ is a convenient convention to give a short name to such quantum mechanical ensembles, largely for historical - not scientific - reasons. They are real, but only in a conventional sense; ontologically there is no such thing. No experiment will ever detect the ‘atom-ness’ of an atom, because the only thing there is on that scale are electrons and nuclei.

But it gets worse. If we decide to crank up the energy and probe said protons and neutrons, we find that they themselves are also ensembles of more fundamental particles, being quarks and gluons. So on subatomic scales, there’s no such thing as protons and neutrons either, they are convenient conventions too, but don’t exist as independent entities in and of themselves.

So what about quarks and electrons? Surely they are ‘real’? When you try and take a closer look at them, they turn out to be pretty slippery bastards - try to confine them into smaller and smaller areas, and they move about more and more wildly. Try to measure their momenta, and suddenly you can’t pin them down any more. Send them through a double slit, and they behave like waves; try to measure their spin vector, and each time you laboriously determine one component, the other two get erased! It’s like trying to nail jelly to the wall. So to our dismay, even the very notion of ‘particle’ turns out to be just a convenient tool. Even such a seemingly innocuous concept as ‘number of particles in a given volume’ turns out to depend on who’s counting them! There’s not really such a thing in reality - there might be something there, but it’s nothing like our intuitive notion of a particle, unless you zoom out far enough so that quantum effects become negligible.

So what are we left with? The most basic elements of reality we currently know of - and this is almost certainly not the deepest level - are quantum fields. So we don’t have a universe with 10^120 particles with independent existence - all we have is one spacetime with 37 (depending on how exactly you count) quantum fields. That is all. You don’t have any more independent existence than does that flock of birds, since both are just complicated ensembles of the same 37 quantum fields (according to current knowledge). On those scales you are not different from those birds, and on other scales you are not the same. There’s no contradiction - both are correct.

You take what is found on human scales to be absolutely real only because that happens to be the scale your sensory apparatus is able to probe. And that’s my central point - if you probe reality on human scales, then you and me and the birds are ‘real’. If you probe it on molecular scales, then atoms are ‘real’. If you probe it on atomic scales, then ‘subatomic particles’ are real...and at the bottom, what is real are quantum fields, according to current knowledge. Hence, there is no one reality - what is real depends on the scale of the instrument that probes reality. It is scale-dependent. This is called contextuality.
You will never find a ‘bird’ if you use the LHC to look - even if you look in the same region of spacetime. And when you look at subatomic constituents, then sometimes you’ll find waves, sometimes various quantum objects, depending on how you set up the probe. Mostly, you’ll find nothing at all.

I will for now forgo any mention of counterfactual definiteness and the empirical violation of Bell’s inequalities, which puts further nails into the coffin of ‘reality’. Or what might happen if you look still deeper, beyond quantum fields. Or you could go the other way - what happens if a hypothetical very large organism (~10 billion light years in size) tries to build a machine to observe my cat? Because the speed of light is so slow on such scales, metric expansion would rip this life form apart long before it could become conscious of the outcome of that measurement. My cat could never become part of its reality.

So what is real depends on how you probe! That is why both ‘bird’ and ‘37 quantum fields’ are equally valid realities, but in different contexts and on different scales. Neither one is more ‘wrong’ or ‘right’ than the other, but both are contextual and scale-dependent conventions. They are both real enough and useful, but only in their own contexts.

I will leave it at this for now. Personally I think the rabbit hole is much deeper than this still - I happen to think that reality doesn’t just depend on how you look, but also on who’s looking. But I won’t get into this here.
-
What is the real difference between science and philosophy?
Markus Hanke replied to dimreepr's topic in General Philosophy
Of course not (and I have only read the last few posts, not the whole thread). My position is that the heliocentric model contains only observables, and is not quantifiable, so this is a trivial case. It is also not a ‘theory’ in the modern sense, but simply a statement of something that is easily and directly observable. Observables like this map directly onto aspects of reality, I think we can all agree on that.

The real question is what happens when we have a theory which, in addition to observables, also contains mathematical machinery that allows us to quantify these. This is what we have with all of modern physics. The question is then whether it is just the observables that map into reality, or also the various parts of the mathematical machinery behind it, even if it is not itself observable. Is spacetime real? Are tensors real? Is a wavefunction real? What about symmetry groups? Etc.

My position is that if the machinery employed is non-unique, then it almost certainly doesn’t map into reality. For example, I don’t think that curvature tensors directly map into any element of reality (in the context of GR), because there’s other ways to describe gravity. You will never observe a tensor. If an element of a theory is unique, then it is possible that it could map into an aspect of reality, at least in principle. In GR for example, you can do without curvature tensors, but you can’t do without diffeomorphism invariance, being its fundamental symmetry from which most of the physics arises. So I think the symmetries captured by whatever formalism you use might correspond to something real out there - as far as symmetries can be considered ‘real’.

As to whether such unique elements necessarily map onto reality, or just potentially - I profess myself agnostic. I don’t know. I’m wondering though - how do you even define ‘reality’ in this context? As you well know, philosophically speaking this is a pretty slippery concept!

Thank you, I was looking for this term!
-
What is the real difference between science and philosophy?
Markus Hanke replied to dimreepr's topic in General Philosophy
Spot on +1

And there’s another issue that isn’t often spoken about. Consider solid state physics and statistical mechanics - large ensembles of constituents give rise to certain dynamics and laws governing the ensemble. The interesting point is that these laws do not explicitly depend on the precise nature of the constituents. E.g. you can describe the dynamics of water without knowing anything about H2O molecules, and if you replaced them with something different that happens to exhibit similar properties, in principle at least you’d get a liquid that would be similar to ordinary water (maybe not the best example, but you get my drift). This is why we could do chemistry before we knew of elementary particles. Reality isn’t just what humans experience, it’s a scale-dependent tree-like structure. So there’s a certain epistemological non-uniqueness in what constitutes the fundamental building blocks of the world, and you can only be sure of their nature if you have the means to probe them directly or indirectly (which opens another can of worms though).
-
Yes, true indeed. The concept of ‘extremum’ is much more precise. Whether it is the longest or shortest path is simply a matter of which convention one chooses. I’ve seen both used in various books, but personally prefer ‘longest’ since the concept of ‘time dilation’ then takes on a nicely intuitive geometric meaning.
-
Ok, no problem, no offence taken. But I’m genuinely curious - why do you keep going on about the idea of replacing GR with a model based on a gravitational potential? It has been explained at length, in different threads, why such a thing cannot work; but you seem to keep pursuing it regardless.

The point is simply this: you need a certain number of degrees of freedom to accurately capture all features of gravity - including gravitational radiation and its polarisation states. It can be formally shown (ref Misner/Thorne/Wheeler and others) that no scalar theory can do this, irrespective of its details; also no vector theory can do this. You need at the very least a rank-2 tensor theory, such as GR. That’s because gravitational radiation is quadrupole radiation with two polarisation states at a 45-degree angle, and couples to the energy-momentum tensor as source. So nothing less than a rank-2 tensor will ever do (which corresponds to a massless spin-2 radiation field).

Given this, why not just let the gravitational potential thing go? It won’t work because it can’t work. At best you’d get something that works as an approximation under special conditions, like Newton. GR is much more complete and general. I just feel there’s no value in flogging a dead horse - you could be spending your time in more useful ways, wouldn’t you agree?
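To make the polarisation statement a bit more concrete (standard weak-field results along the lines of MTW; exact prefactors depend on conventions): in the transverse-traceless gauge, a wave travelling in the z-direction is described by

$$
h^{TT}_{ij} =
\begin{pmatrix}
h_+ & h_\times & 0 \\
h_\times & -h_+ & 0 \\
0 & 0 & 0
\end{pmatrix},
\qquad
h^{TT}_{ij} \;\sim\; \frac{2G}{c^4 r}\,\ddot{Q}_{ij},
$$

with the two independent polarisation states h_+ and h_× rotated by 45° relative to each other, and Q_ij the (traceless) quadrupole moment of the source. A rank-2 object appears on both sides, which is exactly why no scalar or vector potential can carry this information.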
-
They follow paths in space-time, not just space. That’s a crucial difference. When in free fall, they will follow precisely that path which maximises proper time; so they tend to follow the longest possible path through space-time (‘geodesic’), which is also that path for which acceleration vanishes everywhere (hence free fall). This is called the principle of extremal ageing. Writing this mathematically gives an equation the solution to which is precisely the path followed by the falling body. Very simply put, the mathematical description for simple cases like the Earth (but not in more complicated cases!) ultimately depends on just two terms - one for time, and one for the radial coordinate. The former carries an additional factor of c^2, so it is much larger than any spatial effects. In that sense, time is the crucial thing here. Note that this is not necessarily true in more complicated spacetimes - just for some simple cases.
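To sketch what "the time term carries a factor of c^2" means, in the weak-field, slow-motion limit (signs and factor conventions vary between textbooks):

$$
c^2\,d\tau^2 \approx \left(1 + \frac{2\Phi}{c^2}\right)c^2\,dt^2 - \left(1 - \frac{2\Phi}{c^2}\right)\left(dx^2 + dy^2 + dz^2\right)
\quad\Rightarrow\quad
\tau \approx \int \left(1 + \frac{\Phi}{c^2} - \frac{v^2}{2c^2}\right) dt,
$$

so extremising the proper time is, to leading order, the same as extremising the familiar Newtonian action with Lagrangian v^2/2 − Φ. The potential enters through the time part of the metric, which is why free fall near the Earth is dominated by the time term.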
-
As I already explained in my previous post, there is no such thing as ‘gravitational potential of the universe’; the concept is meaningless. It’s very frustrating when something is being explained at length, and then goes ignored. Besides, gravity is nonlinear, so even in cases where potentials are meaningful, you cannot just add them linearly. And yes, GR reflects Mach’s principle explicitly, since in order to find solutions to the field equation, you must specify both local sources, as well as distant sources as boundary conditions. This has nothing to do with any potentials, it’s about initial and boundary conditions in a differential equation. In fact, the aforementioned asymptotic flatness is an example of this.
-
It’s more the other way around, in a sense. What we ordinarily experience as gravity in a scenario like being ‘attracted’ to the earth is almost exclusively due to time dilation, which can be considered ‘warping of time’. Curvature of space then produces tidal effects. Ultimately though you can’t neatly separate these.
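A back-of-the-envelope way to see this (weak, static field and slow motion; a sketch, not a derivation):

$$
g_{00} \approx -\left(1 + \frac{2\Phi}{c^2}\right), \qquad \frac{d^2 x^i}{dt^2} \approx -c^2\,\Gamma^i{}_{00} \approx -\partial_i \Phi,
$$

i.e. the everyday ‘attraction’ towards the Earth comes almost entirely from the gradient of the time-time part of the metric (the ‘warping of time’), while spatial curvature shows up mainly in the tidal corrections on top of this.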
-
Learning physics and math before astrophysics
Markus Hanke replied to pmourad's topic in Science Education
If you have high school algebra, your next step will be to teach yourself calculus. I recommend this: https://ocw.mit.edu/ans7870/resources/Strang/Edited/Calculus/Calculus.pdf It’s very readable, and has loads of exercises to work through. After that then I cannot recommend MIT’s Open Courseware highly enough (just Google it) - there’s a wealth of material there to teach yourself physics and math from the basics up to the most advanced. That should keep you busy for a couple of years -
Yes, good point. I don’t know the answer. I’m hoping that smoothness and continuity might somehow arise naturally for a proper choice of relations. But of course I can’t know this. So perhaps the idea isn’t completely worthless after all...thanks, studiot.

This is the most obvious approach towards a model of quantum gravity. In fact, it is actually straightforward(-ish) to formulate such a model. The problem is that the result turns out to be physically meaningless, so unfortunately this does not work.
-
It is tensor calculus, and differential geometry. Perhaps it would be better to start with Special Relativity first, which is simpler, and work your way up from there.
-
In general spacetimes, the concept of gravitational potential is replaced by the metric (you seem to have missed that on Wiki). In order to meaningfully define ‘gravitational potential’, the following conditions must hold for your spacetime:
1. It must be static
2. It must be asymptotically flat
3. It must be spherically symmetric
4. It must admit a time-like Killing vector field
These conditions are met only by a very few special spacetimes, and the cosmological FLRW spacetime is not one of them. An example of a spacetime where it does work is Schwarzschild.

Yes. This implies asymptotic flatness of the spacetime. It also implies path independence, ie the energy must be independent of the path taken to get to infinity (the value is a path integral), meaning it is a function of the change in r alone. So this definition requires all four of the above conditions to be true, just like I said.

What is the escape velocity of the universe? Where are you going to escape to? If asymptotic flatness does not hold, then no, there is no escape velocity at all, since there’s no meaningful notion of ‘escape’; gravity is non-negligible even at infinity. If it does hold, but any of (1)/(3)/(4) above are violated, then the escape velocity will explicitly depend on the precise trajectory and timing of the motion, in more or less complicated ways. Only if all four conditions are met is it a path-independent scalar, and thus related to a gravitational potential. Note that ordinary Newtonian single-body gravity already presupposes these four conditions, which is why the potential can be defined so neatly there.

Again, all this already assumes (1)-(4) to be true, because it only works if the differences are path-independent. Even something as simple as adding angular momentum will already prevent path-independence. Cosmological FLRW spacetime is not asymptotically flat, not spherically symmetric, and does not admit a time-like Killing field, so the notion of ‘gravitational potential’ is simply meaningless there.

Yes, this is more accurate than the original statement.
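To illustrate the Schwarzschild case mentioned above, here is a sketch of how the potential can be recovered there (the usual construction via the time-like Killing field ξ, normalised to unit norm at infinity; conventions differ between texts):

$$
\Phi = \tfrac{1}{2}\ln\!\left(-\xi^a\xi_a\right) = \tfrac{1}{2}\ln\!\left(1 - \frac{2GM}{rc^2}\right) \;\approx\; -\frac{GM}{rc^2} \quad \left(r \gg \frac{2GM}{c^2}\right),
$$

which is just the (dimensionless) Newtonian potential, and the familiar path-independent escape velocity v_esc = √(2GM/r) follows from it. In FLRW there is no time-like Killing field to begin with, so this construction has nothing to attach itself to.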
-
I’m pretty sure he didn’t actually say this, since metric expansion is itself a gravitational phenomenon. Can you provide an exact reference, so we can see the context? Metric expansion means simply that measurements of distances depend on when you make them - the results get bigger (on large enough scales) as you age into the future. There is no such thing as “gravitational potential” in general curved spacetimes - the concept only makes sense under some very special conditions, and certainly not for large regions of the universe. I seem to remember that this has been pointed out numerous times in past threads on here. The other issue of course is that the strong and weak interactions (and gravity itself!) are not invariant under rescaling; hence, metric expansion and rescaling are physically very different things. You can’t rescale a region with matter in it, and expect physics to still work the same.
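For what "distances depend on when you measure them" looks like in formulas (spatially flat FLRW, standard notation):

$$
ds^2 = -c^2\,dt^2 + a(t)^2\left(dx^2 + dy^2 + dz^2\right), \qquad d_{\text{proper}}(t) = a(t)\,\Delta x,
$$

so two comoving points at fixed coordinate separation Δx are measured to be further apart at later cosmic times simply because a(t) grows; nothing is being ‘rescaled’ locally, and bound systems held together by the (non scale-invariant) interactions do not stretch along with a(t).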
-
This is why I previously mentioned the fact that no vector model can ever capture all degrees of freedom involved in gravity, on fundamental grounds. You need at a minimum a rank-2 tensor. This has been known for a long time (it is even mentioned in some old texts such as MTW), so I don’t know why people still try, and then argue about it.

Good point! I have been pondering lately (being a monk has its advantages - I have time on my hands!) if it might not be possible to formulate GR in terms of Graph Theory, which ties in nicely with what you said above. Rough idea being to see what happens when you treat GR as a finite and initially discrete graph/network of events with given relations - without any recourse to geometry or manifolds, at least at first. I’m wondering how applying local constraints would affect the global structure of such a graph; and what happens if one lets the number of nodes increase. Ultimately I’m wondering if a sufficiently fine-grained graph/network with the right structure can approach some semblance of the 4D differential geometry machinery we are ordinarily using for GR. Hope this makes some sort of sense.

This is only an idea, haven’t begun any work on it - don’t know if it’s worth pursuing, or even if this makes sense. I’d need to teach myself graph theory first, so sorry for brutalising terminology. But I hope you get the drift - fundamentally I’m interested in whether the structures encapsulated in GR can arise from something other than geometric and topological considerations. To see if there’s another level to it behind the obvious. No such formalism seems to exist yet (at least I couldn’t find anything) - which probably means I’m missing something, and doesn’t bode well. How do you as a mathematician feel about such an idea?
-
I think this is a matter of internal consistency. We know from experiment and observation that nature obeys certain fundamental symmetries - for example, from observing and playing around with a large number of particle interactions, and even before making any specific models, we will eventually notice that all these interactions are subject to what’s called CPT symmetry. Any model of particle physics we now develop must therefore reflect this symmetry (the current Standard Model does this). As it turns out, CPT symmetry is intimately tied to a certain local symmetry of space time, called Lorentz invariance (the CPT theorem links the two, and Lorentz invariance has itself been experimentally confirmed to high accuracy). This symmetry is not compatible with a Euclidean geometry - you need something that has different signs in the time and space parts of the metric, making it non-Euclidean. So you need non-Euclidean space time for internal consistency, or else there would be a conflict between particle physics and macroscopic physics. Poincare could not have known this, since the necessary observational data was not yet available to him. The other thing is that the strong and weak interactions are not invariant under rescaling, so shrinking and expanding rulers are not even an option.

However, it should be noted that, given non-Euclidean space-time, you can describe gravity in ways that don’t use curvature - notably with a concept called torsion. Einstein himself tried this, but failed for technical reasons. Only in the 1960s was a functioning model along these lines developed - nowadays usually discussed under the name teleparallel gravity, with Møller’s tetrad formulation being one early version. Spacetime here is completely flat, and gravity is due to torsion alone. The physical predictions are the same as in standard GR, because they obey the same symmetries. Which, of course, further underlines my earlier point that the behind-the-scenes machinery of GR - such as curvature tensors - does not necessarily map into any element of reality. You can do away with curvature completely, and yet still obtain the same gravitational physics through other geometrical means. What both models share are again the underlying symmetries.
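For concreteness, the "different signs in the time and space parts" refers to the locally Lorentz-invariant line element of special relativity:

$$
ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2,
$$

which is preserved under Lorentz boosts but is not the metric of any Euclidean four-dimensional space (that would be c^2 dt^2 + dx^2 + dy^2 + dz^2). The relative minus sign between the time and space parts is precisely the non-Euclidean ingredient required for consistency with the observed particle-physics symmetries.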
-
I am using formalism in the mathematical sense; ie akin to the ‘language’ used to write down the model. Usually this will be the language of tensor calculus and differential geometry. You need to choose a formalism to write down your model, otherwise you cannot extract any predictions from it - just as you need to choose a language when you write a forum post here. What I am attempting to point out is that this choice is not unique. Most texts on GR will use tensor calculus as their formalism, so you will see it written in terms of the metric, plus a couple more tensor fields built from the metric. But there are other choices that use completely different objects, yet still arrive at the same physical predictions. To pick just one example, you can write down GR using the Penrose spinor formalism - the basic object is now a rank-4 spinor, and the field equations become a constraint on that spinor. Or you could use the ADM formalism, which uses conjugate momenta. And so on. They all describe the same physics, but using very different languages.

So my point is simply this - if the “machinery behind the observable” is not unique, and to some degree interchangeable, in what sense then can this formalism itself be ontologically ‘real’? It’s like language - you can say ‘table’ or ‘Tisch’ or โต๊ะ, but these are just conventions. What’s ontologically real is the object in your room which you can touch and bang your knee against, not the many different words. Or have you ever banged your knee on a word?

What I wish to do is carefully distinguish between the observables of a model, and its formalism. The observables directly map to elements of physical reality, so they are ‘real’ in that sense. For GR that would be the outcome of measurements taken on test particles with clocks and rulers - GR is simply a set of correlations between gravitational sources, and such outcomes. The same is not true for the specific formalism, though - there are different ways to obtain these correlations, and the computational devices employed in doing so do not uniquely map into anything in the real world. Only observables do.

This is just my current view on this matter, which may evolve and change as I continue learning and pondering. I do not claim that it absolutely can’t be read any other way. One notable problem with this view of mine is that there is at least one computational device that is shared by all formalisms which I am aware of - space-time. Does that mean that space-time is ontologically real, even if it can’t be observed? Is it possible to formulate GR without recourse to any concept of space-time? And what about things like the Aharonov-Bohm effect? What are the implications? I shall continue to ponder. To finish with a quote by George Box: “All models are wrong, but some are useful.”

P.S. What do all these disparate formalisms have in common, that enables them to represent the same physics? The answer is that they capture the same symmetries - local Lorentz invariance, and global diffeomorphism invariance. For example, both tensors and spinors are representations of the Lorentz group. So really, the most fundamental thing that models and reality have in common - why they can map into each other - are symmetries. The same is true of course in quantum physics.
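To make "same physics, different languages" concrete, here is the content that all those formalisms repackage, in its most common tensor-calculus dress:

$$
G_{\mu\nu} + \Lambda\,g_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu},
$$

where G_{μν} is built from the metric and its first two derivatives. In the Penrose formalism the same information is encoded in spinors such as the totally symmetric Weyl spinor Ψ_ABCD, and in ADM it becomes evolution and constraint equations for a spatial metric and its conjugate momentum - different objects entirely, identical observable correlations.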
-
As I mentioned earlier, in the context of GR gravity is defined as geodesic deviation. What it means is that GR tells us about the world lines of test particles - far from gravitational sources, initially parallel world lines remain approximately parallel; in the vicinity of sources initially parallel world lines will deviate in specific ways. GR allows us to calculate this deviation, ie the motion of test particles; in fact this is all it does, since geodesic deviation is precisely what ‘curvature’ means. It does not address the question as to why (in a fundamental ontological sense) the deviation occurs, it simply quantifies it. So it is purely descriptive in that sense, and no deeper mechanism is suggested or implied.

It is also important to remember that the mathematical structures employed in this description (manifolds, connections, metrics, geodesics) were known and existed long before Einstein, who simply put them to use for his model. They do not ‘belong’ to GR, but are just general mathematical entities used in many other contexts as well. It is also possible to use different mathematical tools to arrive at the same results (eg a Lagrangian instead of curvature tensors, or numerical methods). Given this fact, in what sense could the formalism of GR be anything more than instrumental? The only observable of the model is the motion of test particles, but the entities used to calculate that observed motion are not themselves observable or detectable in any way, and can to some extent even be substituted for different ones. I don’t know if that makes me an instrumentalist, but if it does then I’m ok with that label. I just think it’s dangerous to reify mathematical tools that don’t correspond to physical observables, especially not if we know already that more than one formalism is possible for a given model. You can point to a test particle falling, but you can’t point to a Riemann tensor. That doesn’t diminish its usefulness, but we shouldn’t make more of it than what it is.

A force is a vectorial quantity by definition. Vectors are rank-1 tensors; it can be formally shown that it is not possible to capture the necessary degrees of freedom exhibited by gravity by any kind of rank-1 object in general. You need at least a rank-2 tensor for this, hence the necessity for a metric theory such as GR. So no, force fields are not generally equivalent to space time curvature, on fundamental grounds. Only under very special circumstances (static, spherically symmetric vacuum that admits a time-like Killing field) can you describe gravity using a simple potential, and thus force.

Ok, but what exactly is meant by this? As explained, GR models the motion of bodies, so it accurately enough represents that aspect of reality. But do you mean to ask whether all unobservable mathematical entities employed in arriving at that observable result must necessarily also represent aspects of reality? For example semi-Riemannian manifolds, and curvature tensors?
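For reference, "gravity is geodesic deviation" in equation form (sign conventions differ between texts):

$$
\frac{D^2\xi^{\alpha}}{d\tau^2} = -R^{\alpha}{}_{\beta\gamma\delta}\,u^{\beta}\,\xi^{\gamma}\,u^{\delta},
$$

where ξ is the separation vector between neighbouring free-fall world lines with 4-velocity u. The Riemann tensor on the right-hand side is the ‘curvature’ being talked about, and the convergence or divergence of test-particle world lines on the left-hand side is the only place where it ever becomes observable.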
-
Yes @joigus, I lurk in the shadows and follow proceedings here whenever I get the opportunity. At present I live in the jungles of Thailand, having recently been ordained as a monk, and do not have access to anything other than an old mobile phone with spotty and slow internet access, so I’m not really in a position to participate in discussions. It’s just too slow and painful to type this way. I will return once I get access to better infrastructure - perhaps some time next year.

Satellites in orbit are in free fall - place an accelerometer into them, and it will show exactly zero at all times. No proper acceleration -> no force acting on them. And yet they don’t fly off into space, but remain gravitationally bound into their elliptical orbits. Clocks in them are also dilated wrt far-away reference clocks, which is also a gravitational effect. Thus, no force, but still gravity. Newtonian forces are simply bookkeeping devices, and as such they often work well - but only in the right context. Their nature is descriptive, but not ontological. They are not very physical either, given that they are assumed to act instantaneously across arbitrary distances.

The strong, weak, and EM interactions aren’t ‘forces’ in that sense at all, since they work in very different ways. They are only sometimes called ‘forces’ by convention, for historical reasons. They ultimately arise through the breaking of symmetries, with the particles involved being irreducible representations of symmetry groups.

Finally, it should be noted that physics makes models, that’s what it sets out to do - and as such it is always descriptive rather than ontologically irreducible. So, asking whether gravity “really is” A or B, or whether A or B are “true” is fairly meaningless, since both A and B are descriptions of reality, but not reality itself. Like maps of a territory. The correct question is thus whether models A and/or B are useful in describing gravity, and in what ways and under what circumstances they are useful. So - Newtonian gravity is sometimes useful, but GR is more generally useful, as it gives more accurate predictions for a larger domain. So for now the best answer to “what is gravity” that we have is a purely descriptive one: it’s geodesic deviation, and thus a geometric property of space time. To put it flippantly, it’s the failure of events to be causally related in a trivial manner. Future advancements may upend this picture in the high-energy domain, perhaps radically. We’ll see.

I’m sorry I can’t contribute much at the moment, but I’ll leave you with the above thoughts. I could have written much more, but it’s too much of a pain on a small mobile phone screen.
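As a hedged illustration of the satellite-clock statement (idealised circular geodesic orbit around a Schwarzschild mass, ignoring the Earth's rotation and oblateness):

$$
\frac{d\tau_{\text{orbit}}}{dt} = \sqrt{1 - \frac{3GM}{rc^2}} \;<\; 1,
$$

so the orbiting clock accumulates less time than a far-away reference clock, even though its accelerometer reads exactly zero throughout - no force, but unmistakably gravity.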
-
Dear All, I am going to take a hiatus from the forum from today.

As some of you might know, the natural sciences are not my only area of interest; in particular, I am committed to a form of spiritual practice as well, and have been living in a Buddhist monastery as a lay person for the past few years. I have made the decision to deepen this practice further by ordaining as a monk in the Theravadin Thai Forest tradition, and for various logistical and monastic-political reasons this should ideally happen at a traditional training monastery in Thailand. So tomorrow I will be departing for Thailand to seek ordination there. I think it doesn’t need pointing out that forest monks generally don’t spend a lot of time on Internet forums, so chances are that I will only get to check in here very occasionally, if at all. That being said, there are a lot of question marks and uncertainties, particularly in terms of immigration formalities, so it is possible that I need to come back here to Europe in a few weeks once my initial entry permit runs out, and make alternative arrangements from here (meaning I’ll have to find another place to ordain). I will only know once I get to the monastery and start dealing with the local immigration authorities (I see frustration and nightmares on the horizon!), but I’m willing to take that risk.

I have been debating whether it is useful to present my reasons for going this path - you have seen me here being on about physics and equations all the time, so this might appear strange to some of you. But I’ve decided not to, because when it comes down to it, I can’t really present a convincing rational argument - this decision simply didn’t come about as the result of reason. I will say only that I’ve seen and understood enough in the spiritual practice that I have already done in the last few years, to know that this is the right path for me. The argument is a phenomenological one, not the result of rationality, so it cannot be easily conveyed in a written post. Spirituality ultimately expresses itself in the kind of person you become by engaging in it, and that’s not something you can fake or wear as a mask. You also cannot reason yourself into the monastic life - that is far too weak a basis for anyone to be at peace with that form of life, never even mind to be able to derive any benefit from it. It needs to be a true conviction that arises somewhere deep within, and that cannot be verbally communicated to others.

I will add here that for me there has never been any contradiction between scientific endeavours, spiritual practice, and philosophical enquiry. Not only is there no contradiction, for me these are just aspects of the same underlying motivation to better understand the human condition; hence, if engaged with in the right way, they are complementary and inform each other. I have always felt strongly that it is necessary to achieve some kind of synthesis of these three things for us as a species to make any kind of real long-term progress, since each one in isolation can be misused for harmful and even destructive purposes, as history has sadly shown us all too often.

So anyway, thank you everyone for sharing in these discussions, and I hope I have been able to make some kind of contribution - no matter how small - to this forum. In case I’m not back here for a while, I wish all of you the very best, and hopefully we’ll cross paths again. Keep my account open, just in case
-
Yes, and we moved forward from there. We know a lot more now than we used to, so we won’t be going back to 1963.

Yes. And we can do much more than that - we can even probe the internal structure of the protons and neutrons themselves, and thus directly test the quark model.

In particle physics we do not speak of “certainties”, but instead deal with a quantity called statistical significance. This essentially tells us, given a sufficiently large statistical data set, how likely it is that an observed event is “real” (as opposed to being a statistical fluke of some kind). As for neutrinos, yes, we know these things with a very high degree of statistical confidence, way beyond the required threshold value. Note that the three neutrino flavours and their oscillations have little to do with mass, other than the fact that they need to have a non-vanishing rest mass in order to oscillate at all.

The various known fundamental particles have been found - and continue to be probed - with a large variety of different methods. I’m not sure what this has to do with protons, specifically.

Sure. Some examples that immediately spring to mind would be nuclear reactors, diagnostic equipment such as PET and MRI, quantum computers, and many more. Even your smartphone is likely to contain components that directly rely on some aspect of particle physics in order to function correctly. Also, the chemical properties of all the various elements are a direct result of particle physics and its laws.

Pair production is a simple consequence of quantum field theory, along with the usual conservation laws. There is little mystery here - you can even deduce some of the basic kinematics at play using semi-classical methods.

That’s because, energy levels being equal, what we call a proton is a composite quark-gluon system, whereas an electron is an elementary particle. You can read up about quantum chromodynamics, if you want to know more details.

Yes, which is precisely what General Relativity tells us will happen once certain conditions are present.

Not so! There is a lot we don’t know yet, and yes, there are some obvious shortfalls and problems in some of our models. Physics would be a very boring discipline if that were not so - these issues are what provide the impetus to do further research, and continuously develop new models, so this is a very positive thing. At the same time though, there is an awful lot we already know at confidence levels that are so high that for all intents and purposes they can be considered near-certainties. We have much more powerful and sensitive instruments at our disposal compared to 1963, so we are able to probe far deeper into the structures of reality. The quark model wasn’t fully developed and experimentally tested until the 1970s, so your textbook is missing a huge piece of the puzzle.
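To give a sense of the numbers behind "statistical significance" (a sketch using the usual one-sided Gaussian convention; 5σ is the customary discovery threshold in particle physics):

$$
p = 1 - \Phi(Z): \qquad p(3\sigma) \approx 1.3\times 10^{-3}, \qquad p(5\sigma) \approx 2.9\times 10^{-7},
$$

where Φ is the standard normal cumulative distribution. A 5σ result therefore means roughly a 1-in-3.5-million chance that a pure background fluctuation would mimic the signal at least this strongly.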
-
Only the first five particles in that table are actually elementary - the entire rest of the list are composite particles. There are also very many particles missing. Why do you go back to a book that is nearly 60 years old, and thus outdated? Why not refer to a more modern publication that reflects our current level of knowledge on this subject?

It isn’t. Neutrinos (of which there exist more than one kind) are fermions, and they have a small rest mass; photons are bosons, and massless. They are completely different.

Because this old information turned out to be both incomplete, and wrong in places. We know a lot more about particle physics now (from experiments and observation) than we did in the 1960s.

The main objection would be that they are simply not there. With modern particle accelerators, we can probe not only the nucleus as a whole, but also the internal structure of the proton and neutron, so we already know that there are no relativistic electrons to be found there.
-
No, but that doesn’t mean that within the fish blood cannot circulate. Likewise, the composite system “astronaut + photon” cannot move away from the central singularity (only towards it) - but that doesn’t necessarily mean there can’t be ordinary (i.e. respecting the laws of SR) relative motion between the photon and the astronaut’s eyes on a small enough local scale.
-
Heisenberg's uncertainty principle for dummies?
Markus Hanke replied to To_Mars_and_Beyond's topic in Quantum Theory
Well, even for a classical system there will be limitations due to the limited sensitivity of the measurement apparatus - e.g. you couldn’t weigh a grain of sand using a kitchen scale, since it’s not nearly sensitive enough. But that’s due to the apparatus, not due to anything inherent in the grain of sand. So that’s a different phenomenon than HUP. -
Correct, it is indeed, but that isn’t how such a surface is defined (that would be difficult, since all light cones have a light-like interior). The simplest formal definition I know of for any kind of boundary surface like this is by way of what kind of normal vector with respect to the local metric they admit. In the case of an event horizon, wrt the local Lorentzian metric, the normal vector at all points is a null vector, so this is a null (hyper-)surface. In fact, it can be shown that all event horizons are always null surfaces. If I remember correctly, Wald (General Relativity) formalises this by using the pullback of the metric, but tbh I don’t remember the details exactly. I’d have to find that in my notes first.

I won’t claim that this is wrong, because I am honestly not sure how this would play out. I had similar thoughts actually, which is why I mentioned the relative motion between photon and falling astronaut. Your general line of thought is not wrong, since both photon and astronaut are falling, so neither is increasing its r. Nonetheless, wrt the astronaut the photon must of course still propagate at exactly c, so I am unsure what form the relative motion between the two would need to take. I don’t see how the eyes of the astronaut could possibly “catch up” with the photon, while still preserving the usual local laws of SR. Perhaps the answer is obvious (lol), I just don’t see it right now. This is one of those questions that seem trivial at first glance, but if you really think about them, you’ll find a lot of little devils in the details.

Both the photon emitted from his boots, as well as the astronaut, can only fall along allowed geodesics in this region of spacetime, which means they both can only decrease their r-coordinates as they age into the future, wrt the central singularity. However, this does not necessarily preclude a relative motion between photon and helmet (and photon and boot) such that the astronaut might see something, so long as this relative motion is in accordance with the usual laws of SR - so the astronaut must determine the photon’s propagation velocity to be exactly c in his own frame. It should be possible to set this up accordingly - after all, a freely falling test particle in a region where tidal forces are negligible is locally inertial, i.e. it finds itself in a small local patch of Minkowski spacetime, irrespective of whether this is above or below a horizon surface. So in principle, so long as the BH is massive enough, the astronaut should be able to see his boots for a while at least, because otherwise he couldn’t be considered to be in inertial motion within a Minkowski patch.

That being said, if my years of looking into GR have taught me anything, then it is to be suspicious of what seems “intuitively obvious” - I’ve fallen on my nose often enough through this mistake. So perhaps I’m overlooking something here.
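As a sketch of what this looks like for the Schwarzschild horizon specifically (standard Schwarzschild coordinates; the general definition is more subtle): the horizon is the surface r = 2GM/c², whose normal is the gradient of r, with norm

$$
g^{\mu\nu}\,\partial_\mu r\,\partial_\nu r = g^{rr} = 1 - \frac{2GM}{rc^2} \;\longrightarrow\; 0 \quad \text{at } r = \frac{2GM}{c^2},
$$

so on the horizon the normal vector has zero norm, i.e. it is null - which is the precise sense in which the event horizon is a null (hyper-)surface.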
-
Heisenberg's uncertainty principle for dummies?
Markus Hanke replied to To_Mars_and_Beyond's topic in Quantum Theory
As has been pointed out by other posters here, this is called the measurement effect, which is not the same as the HUP. The fact that certain pairs of observables cannot be determined simultaneously with arbitrary precision is something that is intrinsic to the quantum nature of the system - it is not something that arises as an artefact of the measurement process. As swansont has stated, this is because these observables aren’t independent quantities, they are Fourier transforms of one another. In more technical terms, these pairs of observables do not commute, and any pair of non-commuting quantities is always subject to some uncertainty relation.

No, because what you are describing is a classical system, and one of the defining characteristics of classicality is precisely the fact that all observables always commute. This is not true in the case of quantum systems, though.

Yes, but to do so you need to decide on a choice of basis representation. So you can either determine the state function in position representation, or in momentum representation - and these are again Fourier transforms of one another, so the HUP still applies.
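For reference, the statement being described, in its standard (Robertson) form:

$$
[\hat{x},\hat{p}] = i\hbar \;\;\Rightarrow\;\; \Delta x\,\Delta p \ge \frac{\hbar}{2}, \qquad \Delta A\,\Delta B \ge \frac{1}{2}\,\bigl|\langle[\hat{A},\hat{B}]\rangle\bigr|,
$$

so any pair of non-commuting observables carries an irreducible trade-off of this kind, no matter how good the measuring apparatus is; for commuting (classical) observables the right-hand side vanishes and no such limit arises.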