KipIngram
Senior Members
Posts: 710
Joined
Last visited

About KipIngram
Birthday: 01/10/1963

Contact Methods
Website URL: http://www.kipingram.com/blog

Profile Information
Location: Houston, Texas
Interests: Photography, history, science, digital privacy, beer brewing, genealogy, astronomy, cycling, movies / books, ...
College Major/Degree: PhD - Engineering, The University of Texas at Austin, 1992
Favorite Area of Science: Physics
Occupation: Senior Engineer, IBM

KipIngram's Achievements
Molecule (6/13)
Reputation: 132
-
I'm not really familiar with the circumstances of this case, but I ran across some interesting statistics the other day. In 2015 the Washington Post estimated that there were 350 million firearms in the US. That same year, the Centers for Disease Control reported that approximately 36,000 individuals died from gunshot wounds. I can't help noting that this implies that 99.99% of all firearms in the US in 2015 were not used to kill someone. A better analysis would include non-fatal gunshot wounds as well, but I didn't see that data. At any rate, it seems clear that the vast majority of firearms in the US are not used for "nefarious purposes."

Based on this, I find the notion of a complete and total ban heavy-handed. It seems to me that solving any problem by producing 10,000 times the impact actually needed to solve the problem almost guarantees some form of unintended consequences. So my question is simple: do we really think that the only way to reduce the 36k annual fatality count is total disarmament? I think there must be more intelligent ways to attack this problem.

I'll share another anecdote - but I want to make clear that this is, in fact, just that: an anecdote. It's word of mouth from an IRC friend I regard as well read, but I have not checked his facts here - he can fall into hyperbole sometimes, so I am more inclined to accept the "spirit" of this story than the specific numbers. According to this friend, the US ranks 3rd in the world on murder rates, but if you remove New York, Chicago, Los Angeles, and Washington D.C. we drop to below 500th. All of these cities have strong gun control legislation compared to the national average. It seems unclear to me, therefore, that having laws against guns actually solves the gun violence problem (in fact, this anecdote implies that it may make it worse). After all, the people who actually commit the violence are breaking the law - they are bad people. Why would they hesitate to break a law about having a gun?

I also feel fairly sure that if we waved a magic wand today and made every gun in America vanish, they'd immediately begin re-appearing. And not just via illegal imports - if I were an enterprising Mafia don, the first thing I'd do in such a situation would be to hire gunsmiths, set up hidden gun shops, and start arming my troops. This technology is "somewhat common knowledge" and it's not rocket science - Pandora's box is open and I doubt it can ever be closed. A black market seems inevitable, and it will route guns primarily into the hands of people inclined to break the law.

So given that when a random person possesses a firearm, it's on the order of 10,000 times more likely that he or she will not commit violence with it than that they will, I think I like having plenty of armed good people in the population. The alternative distribution seems far worse to me.
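To spell out the arithmetic behind the 99.99% figure (treating, as a rough upper bound, each fatality as involving a distinct firearm):

$$\frac{36{,}000}{350{,}000{,}000} \approx 1.0 \times 10^{-4} = 0.01\%, \qquad 100\% - 0.01\% \approx 99.99\%,$$

which is also where the "roughly 10,000 times" ratio comes from, since $350{,}000{,}000 / 36{,}000 \approx 9{,}700$.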
-
Right - it was for the other angles (say, 30 degrees off from the preparation angle) where he said there would be a superposition. For example, say we prepare at angle 0, then measure at angle 30. He said we'd then get a superposition of the 30 degree and 210 degree states (though he didn't say it like that). Just that we might get "no photon," which would imply the measured system aligned with the measurement field, or "full photon," which would imply it was 180 degrees out from the measurement field. I hope I'm describing this well enough - if anything sounds off the problem is me, not Susskind.
-
Hi guys. First, studiot, he said to neglect everything else about the electron and consider only its spin - basically he said "imagine it's nailed down so it doesn't get away from us." Next, swansont, he did say that if we'd prepared the electron in a certain direction, then there was 0% probability of finding it at 180 degrees to that preparation. He said if we re-applied the same magnetic field we'd used to prepare it, we would never get a photon, and that if we applied a field at 180 degrees to the original, we would always get a photon. Then if the measurement field was at any other angle (not equal to or exactly opposed to the preparation field), there would be a probability of getting a photon, which in later lectures turned out to be [ 1 - cos(angle) ] / 2, where "angle" is the angle between the preparation and measurement fields.

I think the later lectures helped me - after I heard the first one I was imagining a particular simple preparation / measurement flow, and I don't think the full scope of the formalism was required to get the right answer for that one. I was able to avoid thinking of a superposition of actual electron states, and instead just think of a probabilistic emission of a photon. But I think now that more complex situations might require the whole standard shebang in order to come out right every time.
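For reference, that probability can be rewritten with the half-angle identity, which makes the two limiting cases above fall out immediately:

$$P(\text{photon}) = \frac{1 - \cos\theta}{2} = \sin^2\!\left(\frac{\theta}{2}\right),$$

so $P(0) = 0$ (never a photon when the measurement field matches the preparation field), $P(90^\circ) = \tfrac{1}{2}$, and $P(180^\circ) = 1$ (always a photon when the fields are opposed).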
-
Ok, so I've been watching the lectures Leonard Susskind gave at Stanford that are available on the internet. He described a superposition of states sort of along the lines below. For purposes of this discussion we're assuming the position of the electron is "pinned down" somehow so that all we have to consider is spin.

1) Prepare the electron by applying a strong magnetic field. This will align the electron's magnetic moment with that field. I don't care whether a photon is emitted in this step or not - this is the preparation phase.

2) Now remove the preparation field, and apply a measurement field at a different angle. Classical electromagnetic theory makes a prediction about how much energy should be radiated in this situation, but we do not see that amount of energy. Instead we either see a single photon that has a larger amount of energy, or we see no photon. In either case we presume after the measurement that the electron is now aligned with the measurement field. So if we saw a photon, we say that it was initially 180 degrees out of alignment with the measurement field, and if we don't see a photon we say that it was initially at 0 degrees. So it was in a superposition of the 180 degree and 0 degree states.

So this bothers me. I'd like to consider the small instant of time after we turn off the preparation field but before we turn on the measurement field. It seems clear to me that the electron is not in a superposition of states at this time - the measurement field doesn't yet exist to define what those superposed states would be. On the other hand, if I turn the preparation field back on again, I will never get a photon - so it seems clear to me that the electron is still aligned with the preparation field. Then, after measurement, it's aligned with the measurement field. The initial and final states of the electron seem completely clear - no "superposition" is required. The only thing that requires a probabilistic interpretation is whether or not a photon is emitted.

So, my question is this: why is it not adequate, when explaining this situation, to simply attach probability (that depends on the direction of the preparation field and the direction and strength of the measurement field) to whether or not a photon is emitted, while declaring the initial and final states of the electron to be fully specified by the directions of the two fields? Why does the framework "push the fuzziness" all the way to the physical state of the electron itself? It seems like it's then "escalating" this fuzzy element to the macroscopic level with some mechanism that gets us "cats in superposed states of alive and dead," and so on.

The spot in this where I see a possible weakness is saying that the electron's magnetic moment is "aligned with the preparation field" to start with - that implicitly specifies all three components of that direction, and may already be a misstep. But Susskind either said that or strongly created that image in my mind when he lectured through this stuff.

Ok, I'm going to reply to my own topic, based on a later lecture in the series. Is the resolution of my question above somehow related to this line of reasoning: in classical thinking, if a magnetic moment points in some direction (say the positive x axis), then it has NO COMPONENT in the y or z directions. But when the system is described using quantum states, the states that describe a y direction or a z direction are not orthogonal to the state that describes the positive x direction. Only the state that describes the negative x direction has that character.
So if such an electron (prepared +x) is allowed to participate in some series of events, and the y-axis component of spin is important in that series, then to get the right answer we must presume that the events happened both ways, with +y and -y, and then take out the probabilities at the very end when we actually perform a measurement. We can't get the right answer by saying "well, the electron was oriented +x, and therefore the y and z components were zero." I guess this matters because it's possible for all of those cases to interfere with each other as the system evolves unmeasured? And if we inserted an interim measurement to determine which, say, y case was in play, then we would no longer have any contribution of the other y case, but now we'd have to consider both possible x cases thereafter, whereas before that interim measurement we only had to consider the +x case. Am I vaguely on the right track here?
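Here's a small numerical check of both points - the non-orthogonality of the +x and +y spin states, and the (1 - cos θ)/2 flip probability. This is just my own sketch using numpy; the function name and the convention of writing states in the z-basis are mine, not anything from Susskind's lectures:

```python
# Spin-1/2 bookkeeping: states written as 2-component vectors in the z-basis.
import numpy as np

def up_state(theta, phi=0.0):
    """Spin-up state along the axis with polar angle theta and azimuth phi."""
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

plus_x  = up_state(np.pi / 2, 0.0)
minus_x = up_state(np.pi / 2, np.pi)
plus_y  = up_state(np.pi / 2, np.pi / 2)

# Only the opposite direction is orthogonal to +x; the y states are not.
print(abs(np.vdot(minus_x, plus_x)) ** 2)   # ~0.0
print(abs(np.vdot(plus_y,  plus_x)) ** 2)   # ~0.5

# "Photon" (flip) probability when the measurement axis is rotated by t
# from the preparation axis: it reproduces (1 - cos t) / 2.
prepared = up_state(0.0)
for t_deg in (0, 30, 90, 180):
    t = np.radians(t_deg)
    anti_aligned = up_state(t + np.pi)      # state opposite the measurement axis
    p = abs(np.vdot(anti_aligned, prepared)) ** 2
    print(t_deg, round(p, 4), round((1 - np.cos(t)) / 2, 4))
```

The two printed columns agree at every angle, and the 30 degree case gives about 0.067 - small, but not zero, which is the superposition I was describing above.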
-
Thanks, swansont. I thought so - just wanted to make sure I wasn't totally overlooking something subtle.
-
Often I encounter materials online that motivate the mental picture of quantum uncertainty by describing it primarily as a measurement error: "Electrons are small, so to 'see' them we have to use light of very small wavelength. But photons of such light have a lot of energy, and so necessarily disturb the momentum of the electron severely." Etc. I tend to find such descriptions very unsatisfying - they imply that the electron actually has both position and momentum, but that we are just unable to measure them both simultaneously because by making the measurements we disrupt the thing being observed. When I think about it, I tend to think about it in terms of the wave description of the electron (or whatever) and the unavoidable truths of Fourier theory. In other words, to describe an electron as having a well-defined position, we must use frequency components in the wave description that also describe a wide spectrum of possible momenta, and vice versa. It just can't be dodged. So my question is "Is there any sort of deep physical connection between these two ways of discussing uncertainty?" Or is the first way I described above just in the weeds from the jump, because it still tries to describe the electron as a localized particle, regardless? Thanks, Kip
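To make the Fourier point concrete, the Gaussian wave packet is the standard worked example (textbook material, just restated here):

$$\psi(x) \propto e^{-x^2 / 4\sigma^2} \;\Longleftrightarrow\; \tilde\psi(k) \propto e^{-\sigma^2 k^2},$$

which gives $\Delta x = \sigma$ and $\Delta k = 1/(2\sigma)$, so with $p = \hbar k$,

$$\Delta x \, \Delta p = \frac{\hbar}{2}.$$

Narrowing the position spread necessarily widens the momentum spread, with no reference to any measurement disturbance at all.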
-
Maybe it will help if I try to identify where I think my misunderstanding might be. After thinking about this, I believe I have assumed that it is necessary for a quantum of energy to reside in only one mode of a field - that if an entity is "many modes," as is necessary for it to be physically localized, it is of necessity many quanta. But this morning I'm thinking maybe that's wrong - maybe the two things have nothing to do with one another. Maybe a single quantum can still be a superposition of many modes, any one of which might be the one sampled by a momentum measurement, for instance. After all, it is possible to measure the position of a single particle - say an electron - and an electron is a quantum of the electron field. If we sharply measure its position, we have to call into service many different modes which we can superpose to produce that sharply localized position. But it's still just one quantum. Whereas if we sharply define its momentum, we may only need a small number of modes. So maybe my mis-thinking from the outset was to get it in my mind that each quantum had to correspond to just one modal component of the field. I think this is what swansont was getting at - even just one quantum can still be in a superposition of many modes. Maybe - still kind of wandering in the dark, but I'm trying.

Didn't see swansont's last reply when I posted my last one. Yes, I see what you're getting at. I think I'm falling into the beginner's trap of trying to view the quantum as something well-defined, perhaps well defined in a way that supports a precise position measurement, or a precise momentum measurement, but well-defined nonetheless. And of course that's just wrong - it's all of those things at once, subject to a probability distribution based on past events. But it's still just one quantum. When I asked if a quantum has to reside in a single mode, the answer is "no" - that would imply it actually *has* a precise momentum and a wholly undefined position. So I guess the way we associate a modal distribution to a quantum depends on what we know about it - if we've done a precise position measurement we now use a tightly localized collection of modes to describe it. Its position now has meaning because we've done that - but conversely its momentum is now very uncertain, because the frequencies we had to use in choosing those modes are diverse.
I believe I thought myself into this trap as a result of reading some of the early papers (e.g., Schrodinger's 1926 paper). Following the analogy between wave optics and geometric optics, it becomes tempting to try to attach some real, tangible nature to the wave function. But bearing in mind that the wave function itself is unobservable seems to help lead out of the problem. Maybe that's why modern treatments of the theory eschew that intuitive connection with classical mechanics and go straight for the clean axiomatic presentation.
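One way to state the "one quantum, many modes" point in field-theory language (standard material, written schematically here): a superposition of single-quantum modes is still a one-quantum state,

$$|\psi\rangle = \int \! dk \, \varphi(k) \, a^\dagger_k \, |0\rangle, \qquad \int \! dk \, |\varphi(k)|^2 = 1,$$

which is an eigenstate of the number operator with eigenvalue 1, $\hat N |\psi\rangle = |\psi\rangle$, no matter how many modes the profile $\varphi(k)$ spreads over. Localizing the quantum just means choosing a broad $\varphi(k)$.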
-
I'm not sure my question is coming across clearly here. I already did understand the fact that in the real world there's never a perfect, infinite sinusoidal mode, and that all real wave patterns are combinations of sinusoids such that they do not extend to infinity and so on. The math nuances of Fourier theory I think I already understand fairly well.

Let me try again. I've read many descriptions of experiments over the years (let's use the double slit experiment as a reference point), where mention is made of lowering the intensity of the beam until only single quanta move through the apparatus at a time. The interference pattern still appears, and the customary explanation is that even though the interaction at the screen is localized (a flash), the business going on at the slits is that a spread-out field wave passes through the slits, creates spherical wave fronts emerging from each slit, and those waves interfere with each other at the screen.

So I want to take this literally - that one quantum is released through the slits. My questions then are 1) how does that one quantum get spread across all of the components of a wave *packet*, so that it's spread out some for purposes of going through the slits but not spread out all the way to infinity the way a pure monochromatic beam would be, and 2) how do we get a localized effect at the screen, which would seem to require a very sharply localized wave packet and thus require many frequency components to be present?

I thought that a single quantum of energy could only be in one mode of the field - not spread out across a bunch of modes to get a packet. Maybe that isn't strictly correct, and that is my problem. Thanks, Kip
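Here's a toy numerical sketch I put together to make the picture I have in mind concrete (my own construction, not from any reference, and all the numbers are arbitrary): the detection probability on the screen comes from two-slit interference of a spread-out amplitude, but each detection is drawn one at a time as a single localized "flash," and the fringes build up from those individual hits.

```python
# One-quantum-at-a-time double slit: interference pattern from single detections.
import numpy as np

wavelength = 1.0                     # arbitrary units
k = 2 * np.pi / wavelength
slit_sep = 5.0                       # distance between the slits
L = 200.0                            # slit-to-screen distance
x = np.linspace(-60, 60, 2000)       # detector positions on the screen

# Path lengths from each slit to each screen position.
r1 = np.hypot(L, x - slit_sep / 2)
r2 = np.hypot(L, x + slit_sep / 2)

# Superpose the two outgoing waves, then square to get the detection probability.
psi = np.exp(1j * k * r1) / r1 + np.exp(1j * k * r2) / r2
prob = np.abs(psi) ** 2
prob /= prob.sum()

# Detect quanta one at a time: each detection is a single localized hit,
# sampled from the interference distribution.
rng = np.random.default_rng(0)
hits = rng.choice(x, size=5000, p=prob)

# Crude text histogram: the fringe structure emerges from the accumulated hits.
counts, edges = np.histogram(hits, bins=40)
for c, e in zip(counts, edges[:-1]):
    print(f"{e:7.1f} " + "#" * int(c // 10))
```

This doesn't answer my "how" questions, of course - it just shows what I mean by a single spread-out amplitude at the slits combined with one localized detection per quantum at the screen.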
-
Hi swansont. Yes, that makes sense, but some of the discussions here on QFT led me to believe that's exactly what QFT field modes were - fully space-filling, sinusoidal structures that change everywhere instantaneously when the mode gains or loses a quantum. I thought when a QFT field absorbed a quantum, that quantum was present everywhere in space per the shape of that particular mode. I admit I'm very weak in all of this still, though, and your comment furthers my feeling that I'm missing some important piece of all of this.
-
Hi Rob. Thanks for the reply. Since reading Hobson's paper "There are no particles, there are only fields" I have, in fact, been operating with that as a working assumption. So references to particles and fields aren't quite in my thought sphere right now. Hobson describes arguments that assert the mere existence of an entity we could properly call a "particle" is incompatible with the combination of quantum theory and special relativity. I find those arguments compelling, and am willing to operate for now on the presumption that everything is fields. Schrodinger's paper pursued the same idea: that matter is wave based from the outset, and then the Schrodinger equation arose from analogy with the Hamilton-Jacobi equation. Classical particle physics arises by direct analogy with the transition from true wave-based optics to (approximate) geometric optics. All of that seems to have striking physical content to me. I have done some reading on information approaches to quantum theory in the past, and there are some very interesting things going on there that I don't fully grasp yet. I'm certainly open to more information on that front. Anyway, our whole conception of "particles" arises from our day-to-day experience with macroscopic objects, and the work alluded to above shows how that behavior arises completely as a geometric optics like approximation of underlying wave essence. We simply do not have day-to-day experience that has shaped our intuition that directly deals with atomic-scale systems, so it's entirely explainable why we find particles so satisfying. But they don't appear to be "needed" as far as I can tell. Cheers, Kip
-
KipIngram started following Quanta and localized wave packets
-
Hi guys. I've been away for a while - busy with work and family and so on, and I also got a little weary of the political and social narrow-mindedness that comes up here sometimes. But the quest continues - I've still been prowling the internet for good papers on quantum theory and so on.

A few days ago I ran across Schrodinger's original 1926 paper, where he lays out quantum theory from ground zero, rather than via given axioms like most modern treatments use. I've found the connection with optics to be a VERY helpful mental image. So, if we start from the beginning and presume that matter is a wave phenomenon, and work through the development alluded to above, we wind up with Schrodinger's equation connected very clearly to classical mechanics. I see how uncertainty arises - building a wave packet with a sharply defined position requires many frequency components, and each of those has a different momentum, so the momentum becomes more and more fuzzy as we make the position more well-defined.

I'm left a little confused, though, trying to connect this to other stuff I've read. Specifically, if we have a localized wave packet, with many frequency components, doesn't that automatically mean that we have many quanta of energy? At least one for each frequency component? Is it even possible for a single quantum to have a localized position? It seems that a single quantum would have to reside in a single mode of the field, and that mode by definition is spread out over all of space.

Another angle on the same quandary. Let's say we send a single quantum of an electron field through the double slit apparatus. That quantum is spread out all over the place, and strikes the detector with an intensity given by the interference bands of the apparatus. So far so good. But when we get a flash in a specific spot on the screen, that seems to imply that there's a sharply localized event, which seems that it would require many quanta, associated with different modes so that they combine to give that very localized effect. So how did we get from one quantum to many quanta? I feel like there's something I'm missing here - but I know there are people here who can set me straight. Maybe the electron field passing through the apparatus has many quanta of energy, just all in that one mode?

I probably should open another topic for this next question, but I'll put it here anyway and see how things go. This "optical analogy" path that I read up on the last few days makes it very clear to me why electrons bound to atoms can only have specific energy levels - that comes right out of the solution to the equation. So that degree of "quantization" is making a lot of sense to me. But I think in the modern point of view quantization goes much further - it applies everywhere rather than just in isolated situations. What sort of thought process do I use to step on to that more advanced point of view?

I hope all of you guys have been doing well - sorry to have been so absent. Kip
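To show what I mean about position sharpening as frequency components are added, here's a little numpy sketch I wrote (my own illustration, with made-up parameters): a single normalized wavefunction built from more and more sinusoidal modes still carries a total probability of exactly 1 - "one particle's worth" - while its position spread shrinks.

```python
# A wave packet built from plane-wave modes on a ring of length L.
import numpy as np

L = 100.0
x = np.linspace(0, L, 4000, endpoint=False)
dx = x[1] - x[0]

def packet(n_modes, x0=50.0, sigma_k=0.5):
    """Superpose modes exp(i k_n x) with Gaussian weights, centered near x0."""
    psi = np.zeros_like(x, dtype=complex)
    for n in range(-n_modes, n_modes + 1):
        k = 2 * np.pi * n / L
        c = np.exp(-k**2 / (4 * sigma_k**2)) * np.exp(-1j * k * x0)
        psi += c * np.exp(1j * k * x)
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # total probability = 1
    return psi

for n_modes in (0, 2, 10, 50):
    psi = packet(n_modes)
    prob = np.abs(psi)**2 * dx
    mean = np.sum(x * prob)
    spread = np.sqrt(np.sum((x - mean)**2 * prob))
    print(2 * n_modes + 1, "modes -> position spread", round(spread, 2))
```

With a single mode the probability is spread uniformly over the whole ring; adding modes narrows the packet, but the normalization never changes. Whether that's the right way to carry the "one quantum" idea over to the field-theory picture is exactly what I'm asking about.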
-
Correct - the quotient of any two rational numbers (with a nonzero divisor) is always a rational number. Say we have r1 = a/b and r2 = c/d, where a..d are integers, b and d are nonzero (that's the definition of a rational number: a quotient of two integers with a nonzero denominator), and r2 is nonzero, so c is nonzero as well. Then r1/r2 = (a/b) / (c/d) = (a/b) * (d/c) = (ad)/(bc). The products ad and bc are both products of integers, and thus are integers, and bc is nonzero. The quotient of those two products is thus a rational number. The previous answers point out the misstep in your original reasoning.
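Written out compactly (the same argument as above, just in symbols):

$$\frac{r_1}{r_2} = \frac{a/b}{c/d} = \frac{a}{b}\cdot\frac{d}{c} = \frac{ad}{bc}, \qquad a, b, c, d \in \mathbb{Z},\ \ b, c, d \neq 0,$$

and since $ad, bc \in \mathbb{Z}$ with $bc \neq 0$, the result is again a ratio of integers, i.e. a rational number.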
-
Imagination is more important than knowledge
KipIngram replied to Itoero's topic in General Philosophy
Well, I think there's truth in those words, but caution is in order. On the one hand, you have people like Einstein, whose imagination leads them to things that turn out to be right. Then on the other hand you have people who uncork the most ridiculous things. I don't think Einstein's words should be taken to defend anything that someone coughs up, because some people's imagination leads them thoroughly out into the weeds. I have no doubt that Einstein's imagination tended to stay "in bounds" because he also had knowledge. So I absolutely do not think he was saying "imagination is all that matters." Knowledge is very important too. To say that a different way: imagination without knowledge doesn't get you there. Knowledge is a necessary prerequisite, but then having imagination as well makes all the difference in the world to how much you can contribute to new understanding.
-
It is a mess, and the software industry is as well. A large fraction of working programmers don't, in fact, have familiarity with what's really going on under the hood of the systems they work on, and also don't have familiarity with those "core" things I mentioned, like algorithm theory, basic data structure concepts, and so on. A lot of programming these days involves relying heavily on software libraries that hide all of that, and which have their own bugs and quirks that the programmer is likely unfamiliar with.

On top of that, a lot of the day-to-day work involves either modifying software written by someone else, or working in a very small niche of some large system in collaboration with hundreds or even thousands of other programmers. These hordes of programmers will generally have no idea what the others are doing, and wind up creating pieces that, when brought together, lead to more bugs or security vulnerabilities.

When evaluating the performance of programmers, management usually rewards speed of execution (which may later turn out to have led to bugs or vulnerabilities) rather than taking the time to develop the deep understanding required to do a better job. Said managers are often in no way technical themselves, and yet they are expected to lead and evaluate technical staff. It's no wonder we wind up with a world full of buggy, sloppy products that make very poor use of the underlying power of the hardware.
-
There are components of a (good) computer science education that don't change. Algorithm complexity theory, data structures, standard classic algorithms like searching, sorting, hashing, and so on - those things don't change as technology changes. Proper application of them depends on the underlying technology, and so that changes. You are right up to a point, though - languages come and go, hardware changes, and so on - the usefulness of knowledge of that sort is ephemeral.