Everything posted by Rob McEachern

  1. I suggest you look at this detailed lesson plan, for applying the scientific method to climate change, from the Fraser Institute: https://www.fraserinstitute.org/studies/understanding-climate-change-lesson-plans-classroom
  2. This article in today's Science News states that "Cave art suggests Neandertals were ancient humans’ mental equals": https://www.sciencenews.org/article/cave-art-suggests-neandertals-were-ancient-humans-mental-equals?tgt=nr
  3. If you would like to hear what leading physicists, philosophers and theologians have to say about this topic, I suggest you look at the extensive interviews, dealing with this subject, on the Closer to Truth website: https://www.closertotruth.com/search/site/nothing A good place to start is with "Levels of Nothing" ( https://www.closertotruth.com/articles/levels-nothing-robert-lawrence-kuhn ), which defines nine different levels/definitions of what might be meant by "nothing".
  4. This is also the explanation underlying Feynman's Interpretation of the Path Integral Formulation of Quantum Mechanics: https://en.wikipedia.org/wi... These problems in quantum theory all arise from attempting to extend the principle of superposition, from describing all the components of a single particle (as it was correctly applied in the derivation of Schrödinger's equation for a single free particle), to simultaneously describing all the components of every particle. The latter incorrectly treats the Fourier transforms being used to mathematically describe the superposition as having no constraints whatsoever on particles adhering to any localized trajectories - thereby producing the illusion of a theory that describes "quantum fluctuations" as if they were a real phenomenon (particles spontaneously disappearing from one trajectory and reappearing on another) rather than just the unconstrained theory's ability to perfectly describe/incorporate every source of non-local noise and every other modeling error. Unfortunately, since this error in applying superposition via Fourier transforms ended up producing the correct probability distribution, physicists, beginning in the 1920s and continuing to the present day, have completely failed to appreciate their error, and we have been stuck with all the "weird" interpretations of quantum theory ever since.
  5. The problem lies within the difference between the properties of the territory (reality) and the properties of the map (mathematics - specifically Fourier transforms) being used in an attempt to describe the territory. I am of the opinion that all the well known, seemingly peculiar properties of quantum theory are merely properties of the map and not properties of the territory itself. This begins with the assumption that quantum theory describes probabilities. It actually describes the availability of energy, capable of being absorbed by a detector; this turns out to be very highly correlated with probability. This happens because of the frequency-shift theorem pertaining to Fourier transforms: in the expression for the transform, multiplying the integrand by a complex exponential (evaluated at a single "frequency") is equivalent to shifting (tuning - as in turning a radio dial) the integrand to zero-frequency, and the integral then acts like a lowpass filter to remove all the "signal" not near this zero-frequency bin. Thus, the complete transform acts like a filterbank, successively tuning to every "bin". Subsequently computing the sum-of-the-squares of the real and imaginary parts, for each bin, then yields the (integrated) energy accumulated within each bin (AKA power spectrum). If this accumulated energy arrived in discrete quanta, all with the same energy per quantum, then the number of quanta accumulated in each bin is simply given by the ratio of the total accumulated energy to the energy per quantum. In other words, in the equi-quanta case, this mathematical description turns out to be identical to the description of a histogram (see the numerical sketch after this list). That is why this description yields only a probability distribution, and why all the experiments are done with monochromatic beams. If there is "white light", then there may be no single value for the energy per quantum within a single bin, to enable inferring the correct particle count from the accumulated energy. So, quantum theory never even attempts to track the actual motion (trajectory) of anything, either particle or wave; it just, literally, describes a set of detectors (a histogram) at some given positions in space and time, that accumulate the energy arriving at those positions in space and time - energy that enables an exact inference of particle counts (probability density) whenever the energy arrives in equal-energy quanta within each bin. The mathematical description is thus analogous to the process of a police officer attempting to catch a thief that is driving through a community with many roads, but with only one way out. Rather than attempting to follow the thief along every possible path through the community, the officer simply sits at the only place that every path must pass through (a single bin/exit), in order to ensure a detection. If there are multiple such exit-points, then multiple bins (detector locations), AKA a histogram, are required, to ensure that the probability of detection adds up to unity: every way out must pass through one detector or another.
  6. My point is that QM imposes no cutoffs, except in a completely ad hoc manner, because it cannot do so any other way: any physically relevant cutoff would depend on the specifics of the detection process used to experimentally detect, and thus count, anything.
  7. The answer lies in the word density. First, density is only proportional to number, so the theory only needs to track the probability density (not the actual number) of particles being detected at particular locations and times, but not the trajectory they took to get there, or their actual number. Second, and much more interesting, is how this all relates to so-called quantum fluctuations, renormalization, and the error behavior of superpositions of orthogonal functions. Addressing these issues will quickly lead beyond the scope of this thread’s topic, so I will keep this brief, so as not to incur the wrath of any of the moderators - if you wish to pursue the matter further, I would suggest starting a new topic devoted to these issues. Briefly, think of a Fourier series being fit to some function, such as the solution to a differential equation. As each term in the series is added, the least-squared error between the series and the curve being fit decreases monotonically, until it eventually arrives at zero - a perfect fit (see the short fitting sketch after this list). Which means that it will eventually (if you keep adding more terms to the series) fit any and all errors and not just some idealized model of the “correct answer”. In other words, it will continue adding in globally-spanning basis functions that decrease the total error, while constantly introducing fluctuating, local errors all over the place. In essence, it treats all errors, both errors in the observed data and errors in any supposed idealized particle model (like a Gaussian function used to specify a pulse that defines a particle’s location), as though they have actual physical significance and must therefore be incorporated into the correct answer. Hence, the series forces “quantum fluctuations” to occur, by instantaneously reducing what is being interpreted as the particle numbers at some points, while simultaneously increasing them at others, in order to systematically drive down the total error between the series and the curve being fit; all because the superposition of the orthogonal functions never demands that any particles remain on any trajectory whatsoever, in order to reduce the total error. It ends up being much easier (and more likely) for the method to drive the error down by constructing a solution that has supposed particles popping in and out of existence all over the place, in order to rid the solution of any and all non-local errors or noise.
  8. Are you familiar with TSPLIB? http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/
  9. We are talking about two different things. I am talking about the number of parameters required to specify the solution to a problem. You are talking about the number required to specify only the equations. The solution also depends upon the auxiliary conditions, like the initial and boundary conditions, in addition to the equations. A non-parametric solution does not require any correspondence between the number of parameters in the solution and the number in the initial conditions - think about the number of parameters in a best-fit line versus the number of *points* being fit (see the sketch after this list). The significance of this is that such a theory "loses track" of the number of particles it is supposed to be describing. Thus, it should come as no surprise that it cannot keep track of their detailed trajectories, when it cannot even keep track of their number. The number of particles that it actually does describe ends up being an artifact of the description itself, rather than a property of the entities being described. And that is why particles are excitations of a field, in such a theory. This is the direct consequence of exploiting the mathematical principle of superposition in order to formulate a non-parametric solution. It worked great, when Joseph Fourier first developed his technique to describe a temperature *field*, but when physicists in the early twentieth century tried to apply it to the tracking of *particles*, they ended up losing track of them - because that is the inevitable result of choosing to employ a mathematical, descriptive technique that is ill-suited to that purpose.
  10. I assume the first convolution was a square wave convolved with itself. In the frequency domain, convolution is equivalent to the multiplication of Fourier transforms. That means the operation does not add or subtract any frequencies; it merely alters the relative amplitudes of frequency components that already exist within the transforms (see the short sketch after this list). In your examples, it is suppressing the upper harmonics in the spectrum of the original square wave, and then the triangle wave. So the fundamental frequency, which is what you are mostly observing in the second convolution, is the same as the fundamental frequency in the original square wave - given by the reciprocal of that wave's period. These wiki entries may help: https://en.wikipedia.org/wiki/Convolution https://en.wikipedia.org/wiki/Triangle_wave
  11. QED does not play very well with gravity, and unlike EM, it is non-parametric; nobody uses wavefunctions in classical EM as they are used in QED.
  12. I do understand. What I am saying is that there are also less complicated objects that act... But a moderator has requested that I stop, so I shall.
  13. You (and many, many others) have assumed that the observer is the entity that has committed the omission. But it is possible that the observed is the cause of the omission, and there is consequently nothing else there capable of ever being reliably measured. Your assumption is quite reasonable in the classical realm. But that may not be the case in the quantum realm. My point is that all or most of the seeming weirdness of quantum phenomena may be due to that assumption - an omission which may be completely unmeasurable, or even undetectable, in the quantum realm, due to limited interaction duration, limited bandwidth and limited signal-to-noise ratio (the cause of the omission). In which case it is possible (and, I would argue, even likely in quantum scenarios) that there is only a single bit of information that can ever be reliably extracted, from any set of measurements, made on such objects.
  14. But a spin measurement can. If there are a number of skaters (an ensemble), all spinning one way or the other, and I ask you to give me a one-bit (one number) answer to the following question, you will indeed be able to describe the spin with a single number: If you look down on the skaters from above, do they all appear to be spinning clockwise (1) or anticlockwise (-1)? The number of numbers (components) required to describe spin is critically dependent on whether or not the description pertains to before, or after, an actual observation. One might say that the "collapse of the wavefunction" corresponds to the collapse of the number of numbers required by the description. Now if I asked you how fast they were each spinning, or to determine their average speed, your answer (assuming you could actually, reliably measure their speeds) may require either more numbers, more bits per number, or both.
  15. One has to be careful. The number of dimensions that are important may be those existing in the logical space being used to describe the physical space, rather than those of the physical space (including time) itself. Consider the situation in numerical analysis, of attempting to fit an equation to a curve or to a set of data points. One may choose to fit an equation with a fixed number of parameters (logical dimensions) or a non-fixed number. For example, if you choose to fit a straight line, then there are only two parameters required to describe the "best fit" line. But if you were to choose to fit a Fourier series or transform to the curve, there may be an infinite number of parameters required to specify the "best fit" (see the parametric-versus-non-parametric sketch after this list). Gravitational theory is a "parametric" theory, in that it requires only a fixed number of parameters to describe behaviors. But quantum theory is a "non-parametric" theory, requiring an infinite number of parameters (Fourier transforms) to describe behaviors. This is one of the reasons why it is so hard to make the two "play together", and it is also why it is so hard to find a common-sense interpretation for quantum theory - a non-parametric theory may be consistent with (sufficient to describe) virtually any behavior, and thus it may not be possible for it to eliminate any hypothetical cause for the behavior - anything goes - and weird interpretations and correlations (entanglement) may result from attempting to associate an incorrect "logical dimensionality" with the correct "physical dimensionality". The situation is further complicated by the fact that the physical dimensionality of an emitter may differ from the logical dimensionality of any emissions (observables) produced by the emitter. In other words, simply because an object has three physical, spatial dimensions does not necessitate that its observables must also possess three logical dimensions (three independent parameters).
  16. Only if observed at a single point. Gravity usually has a gradient. Consequently, gravity at the top and bottom of an elevator car will differ slightly (a rough estimate of the size of that difference appears after this list). But the same two points in an idealized, accelerating frame of reference will not exhibit that difference.
  17. Exactly. The same effect can often be produced by more than one cause. An acceleration implies the application of force, and forces applied to things change how they behave. Yes. The last sentence in the article states "These include fundamental questions for our understanding of the universe like the interplay of quantum correlations and dimensionality..." As I have described in other posts, the effect of "quantum correlations" can be produced by misinterpreting measurements made on a 1-dimensional object (a single bit of information) as being caused by measurements assumed to be made on the multiple vector components existing within a 3D object.
  18. Until about the year 1600 there was no difference. Then Galileo, Bacon and others created the difference. Although it is a bit of an over-simplification, you could say that philosophy prior to 1600 was mostly modeled on ancient Greek mathematics - you stated some premises, then derived, via deductive logic, some conclusions based on those premises. So Aristotle stated the premise that the cosmos is perfect, therefore the planets must move in perfect circles (I won't get into how epicycles fit into this idea), perfectly centered on the earth, with a moon that is a perfect sphere etc. But 2000 years later, Galileo discovered mountains on the moon, disproving that long-standing premise of perfection. That triggered rapid changes. Shortly thereafter, Bacon declared that science (he called it Natural History) needed to be based first and foremost on Inductive reasoning rather than deduction, because only induction, applied to actual observations, was likely to be able to ascertain the validity of the starting premises, thereby avoiding a repeat of the earlier problems resulting from dubious premises. This became the basis of the "scientific method". Bacon was also the person who first proposed massive "state funding" for this new enterprise - previously people like Galileo either had to be financially independent, or seek financial aid from wealthy patrons, a situation that did not change too much, until the mid-nineteenth century. Bacon was also mostly interested in what would be called "applied science" today, rather than basic research. He was interested in finding new ways to cause desirable effects - like finding a new medicine for curing a disease. Bacon also was of the opinion that, starting with Socrates, the Greek philosophers were responsible for a 2000 year delay in philosophical/scientific progress, because unlike the pre-Socratic philosophers, they had convinced subsequent generations of philosophers to focus almost exclusively on moral and social issues rather than Natural Philosophy, now known as Science. A number of present-day philosophers of science, have become concerned that physics, in particular, is reverting to the pre-Bacon model, of pulling dubious premises out of thin-air, deriving wonderful, elegant (fanciful?) conclusions based on such premises, and having too little concern for experimental validation of their premises.
  19. Exactly. It is generally true in the macroscopic realm that supposed independent components actually are independent, precisely because all the "naturally occurring" entities in that realm exhibit multiple bits of information. But "unnatural" macroscopic entities can be created with this single-bit-of-information property, analogous to the ability to create unnaturally occurring transuranic elements. If you do so, and measure their properties, they exhibit "weird" behaviors, just like quantum entities - because that is what they are - even though they are macroscopic - they have a severely limited (AKA quantized) information content. However, in the microscopic/quantum world, single-bit entities are common (that is why you only ever see spin-up or spin-down, etc.), but their behavior is unfamiliar. So most quantum experiments, unlike classical ones, end up being examples of a "specifically contrived experiment" as you have noted - experiments on objects with a severely limited information content. It is the small information content, not the small physical size, that drives the differences between classical and quantum behaviors. Bell et al. had the great misfortune of stumbling upon a theorem that only applies to those specifically contrived experiments that they never actually perform - it only applies to classical objects, not quantum ones, because the quantum experiments, done on photons and electrons etc., are all being performed on objects that behave as if they fail to observe Bell's most fundamental and usually unstated assumption - that the objects manifest enough bits of information to enable at least one unique, measured bit to be assigned to each member of each pair of entangled measurements: it is a logical impossibility to assign a unique bit to a pair of anything, when you only ever have one bit to begin with. Bell's theorem assumes that you can. That is the problem.
  20. With a single bit - up or down - per observation - bit values that will exhibit strange correlations, if you attempt to determine another value of that single bit by using an apparatus oriented in anything other than the only direction that is actually guaranteed to yield the correct bit value, in the presence of noise. This is what phase-encoded, one-time pads are all about. It gets even more interesting, when you realize that the mathematical description employs Fourier Transforms to describe wave functions, and those exact same equations (when the Born rule is employed) are mathematically identical to the description of a histogramming process - which directly measures the probability, with no need for phase components, much like measuring speed versus measuring velocity components. In other words, the histograms simply integrate the arrival of quantized energy. As long as each bin in the histogram only responds to quanta of a single energy per arrival (which may differ from bin to bin), then the ratio of total received energy divided by energy per quantum enables you to infer the number of arrivals, and thus the relative probability, independently of whether or not the quanta arrive as waves, particles, or wave-particle dualities (see the toy histogram simulation after this list). That is why it only works with equi-quanta experiments - monochromatic light versus white light, in the classical case. In the white light case, the histograms correctly measure the total energy, but the inference of the number of arriving particles is incorrect, because there is not a single correct value for energy arriving per quantum.
  21. Indeed it may be, if you wish to actually understand what may be happening. Yes. Simply because we have chosen to describe a thing as having three components does not mean that it actually has three components. To be specific, describing the thing via three components may be sufficient to perfectly match the observations, but it may not be necessary. For example, you could determine an object's speed by specifying the three components of its velocity vector and then using them to compute the speed. Or, you could just measure the speed - a single scalar component - and skip the entire three-component description (see the trivial arithmetic sketch after this list). In effect, this is what the Born rule accomplishes in quantum theory - computing a single scalar component (a probability) from the vector (or spinor) components, that were never actually necessary, but which are in fact sufficient. The problem comes if you ever try to actually make a one-to-one correlation between the assumed components and the actual attributes of the entity being described by those components (as Bell's theorem attempts to do); because any entity that manifests fewer than three bits of observable information will never exhibit the three unique bits of information that would be required to form a unique one-to-one correlation with the three components in the description - resulting in rather weird correlations, if you make the unfortunate decision to attempt to interpret them as being obtained from measurements of an entity that actually does exhibit three measurable and independent components.
  22. φ = 2cos(π/5) (a quick numerical check of this identity appears after this list)
  23. The video interviews on "Closer to Truth" give a number of well-known and Nobel Prize winning physicists' answers to this very question. Here is the summary, introducing the interviews, on the topic "Are the Laws of Nature Always Constant?": Summary: "The laws of nature or physics are assumed to be everywhere the same, on the far side of the universe as sure as on the far side of your house. Otherwise science itself could not succeed. But are these laws equally constant across time? Might the deep laws of physics change over eons of time? The implications would be profound." Here is the link to the video interviews: https://www.closertotruth.com/series/are-the-laws-nature-always-constant
  24. Exactly my point. Understanding what caused the previous uncertainty to suddenly cease to exist, provides the answer to the original question "What is uncertainty?"
  25. What "this" are you referring to? The issue I am attempting to point out is not the uncertainty in the measurements, but the uncertainty in any decision/behavior based upon those measurements, when it is possibly based upon a false assumption. There are many famous examples in physics, like Lord Kelvin's wildly incorrect estimate of the age of the earth - caused not by uncertainty in the measurements, but uncertainty in the meaning of the measurements. Such uncertainties in meaning, rather than uncertainties in the measurements, are at the heart of all the uncertainties regarding interpretations of quantum theory. But there is something much more subtle and interesting as well: are physical systems compelled, by the laws of nature, to respond to noise, *as if* it is not noise? Or can an entity ignore the noise, for example, by ignoring the least significant bits in any measurements that are, a priori, known to be highly likely to be corrupted by noise? Macroscopic communications systems do this all the time. What makes you assume that microscopic systems cannot do the same thing? Because that is indeed an assumption at the heart of the classical versus quantum interpretation problem. If quantum systems *act* as if the least significant bits of fields, forces, potentials etc., do not matter, then all observable behaviors will be quantized - not because the fields etc. are quantized, but because the set of behaviors induced by the fields in other entities, like purported particles, are discontinuous and restricted to a small number of possible behaviors. Since information is quantized, by definition, all interactions driven by the recovery of that information may also be discrete/quantized, even if the fields/forces that the information is being extracted from are not quantized, but continuous. In such a case, it is unlikely that a quantized field theory will ever be entirely successful, because the cause of the observed, quantized behavior does not lie within the fields per se, but within the restricted behavioral repertoire of the entities responding to the fields, just as in a classical communications system responding discontinuously to continuous electromagnetic field measurements; it does not matter if the actual voltage measurement corresponding to a bit value is equal to 1.176, the system is going to behave *as if* it was exactly equal to 1.000 (see the toy threshold-decoding sketch after this list). Quantum entities may do the same. Attributing quantized behaviors entirely to quantized fields is thus another source of uncertainty.
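Regarding item 5 above: a minimal numpy sketch of the frequency-shift reading of a single transform bin - multiplying by a complex exponential "tunes" the chosen bin down to zero frequency, and the sum then plays the role of the crude lowpass/integration step. The signal, bin index, amplitudes, and the unit energy-per-quantum are arbitrary illustrative assumptions, not anything from the original post.

```python
import numpy as np

N = 1024
n = np.arange(N)
k = 37                                              # bin we "tune" to (arbitrary choice)
x = 3.0 * np.exp(2j * np.pi * k * n / N) \
  + 0.5 * np.exp(2j * np.pi * 200 * n / N)          # two-tone test signal

shifted = x * np.exp(-2j * np.pi * k * n / N)       # heterodyne bin k down to zero frequency
bin_value = shifted.sum()                           # integrate: everything away from DC averages out
assert np.allclose(bin_value, np.fft.fft(x)[k])     # identical to the k-th DFT bin

# The squared magnitudes behave like energy accumulated per bin (a power spectrum);
# if that energy arrived in equal quanta of size E, the implied count per bin is power / E.
power = np.abs(np.fft.fft(x)) ** 2
E = 1.0                                             # assumed energy per quantum
counts = power / E
```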
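Regarding item 7: a short sketch, on synthetic data, of the claim that a truncated Fourier fit lowers the total squared error monotonically as terms are added, and therefore ends up fitting the noise along with the idealized pulse. The Gaussian pulse, noise level, and retained-term counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 512
t = np.linspace(0.0, 1.0, N, endpoint=False)
pulse = np.exp(-0.5 * ((t - 0.5) / 0.02) ** 2)      # idealized localized "particle" pulse
data = pulse + 0.05 * rng.standard_normal(N)        # pulse plus non-local noise

spectrum = np.fft.rfft(data)
prev_err = np.inf
for m in (2, 8, 32, 128, spectrum.size):            # number of low-order terms retained
    truncated = np.zeros_like(spectrum)
    truncated[:m] = spectrum[:m]
    fit = np.fft.irfft(truncated, n=N)              # partial Fourier sum
    err = np.sum((data - fit) ** 2)                 # total squared error of the partial sum
    assert err <= prev_err + 1e-9                   # monotone decrease (orthogonality/Parseval)
    prev_err = err
    print(f"{m:4d} terms kept -> total squared error {err:.4f}")
```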
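Regarding item 10: a sketch of the convolution-theorem point, assuming a simple synthetic square wave. Circular convolution is computed by multiplying FFTs; the dominant (fundamental) bin is the same before and after the convolution, and only the harmonic amplitudes change.

```python
import numpy as np

N, period = 1024, 128
t = np.arange(N)
square = np.sign(np.sin(2 * np.pi * t / period))    # square wave with a 128-sample period

spec = np.fft.fft(square)
convolved = np.fft.ifft(spec * spec).real / N       # circular convolution of the wave with itself

fund_square = np.argmax(np.abs(np.fft.rfft(square))[1:]) + 1     # skip the DC bin
fund_conv = np.argmax(np.abs(np.fft.rfft(convolved))[1:]) + 1
print(fund_square, fund_conv)   # same fundamental bin (N / period = 8) for both signals
```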
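Regarding items 9 and 15: a minimal sketch of the parametric versus non-parametric contrast, using made-up data. A best-fit line always needs exactly two parameters no matter how many points are fit, whereas a Fourier (trigonometric) description acquires one coefficient per frequency bin, so its parameter count grows with the data.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
y = 2.0 * x + 0.3 + 0.1 * rng.standard_normal(x.size)   # noisy straight-line data (200 points)

slope, intercept = np.polyfit(x, y, 1)        # parametric fit: always exactly 2 numbers
print("line parameters:", slope, intercept)

coeffs = np.fft.rfft(y)                       # non-parametric description: one complex
print("Fourier parameters:", coeffs.size)     # coefficient per bin (101 for 200 points)
```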
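Regarding item 16: a rough back-of-the-envelope estimate of how much g differs across an elevator car near Earth's surface, using the first-order gradient Δg ≈ 2GMh/R³; the 3 m car height is an assumed, illustrative figure.

```python
# First-order tidal difference in g between the top and bottom of an elevator car.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24       # Earth's mass, kg
R = 6.371e6        # Earth's radius, m
h = 3.0            # assumed height of the car, m

delta_g = 2 * G * M * h / R**3
print(f"Delta g across the car ~ {delta_g:.1e} m/s^2")   # about 9e-6 m/s^2
```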
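Regarding item 20: a toy simulation of the histogram reading - each bin just accumulates arriving energy, and dividing by the energy per quantum recovers the arrival count exactly only when every quantum in a bin carries the same energy ("monochromatic"); with a spread of energies ("white light") the inferred counts are wrong even though the total energy is right. The bin counts, energy ranges, and quantum size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_bins, E_quantum = 8, 2.0                            # assumed detector bins and quantum size

true_counts = rng.integers(50, 200, size=n_bins)      # actual arrivals per bin

# Monochromatic case: every quantum carries exactly E_quantum of energy.
mono_energy = true_counts * E_quantum                 # what each bin accumulates
assert np.array_equal(mono_energy / E_quantum, true_counts)   # exact count recovery

# "White light" case: quanta energies vary, so energy / E_quantum no longer equals the count.
white_energy = np.array([rng.uniform(1.0, 3.0, c).sum() for c in true_counts])
inferred = white_energy / E_quantum
print(true_counts)
print(inferred.round(1))                              # total energy is correct, counts are not
```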
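Regarding item 21: the trivial speed-versus-velocity arithmetic, just to make the "sufficient but not necessary" point concrete; the component values are arbitrary.

```python
import math

# Three components are sufficient to compute the speed...
vx, vy, vz = 3.0, 4.0, 12.0
speed_from_components = math.sqrt(vx**2 + vy**2 + vz**2)   # 13.0

# ...but a single scalar measurement carries the same one number, with no way to
# recover the three components from it.
speed_measured = 13.0
print(speed_from_components == speed_measured)             # True
```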
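Regarding item 22: a quick numerical check that 2cos(π/5) equals the golden ratio (1 + √5)/2.

```python
import math

phi_from_cosine = 2 * math.cos(math.pi / 5)
phi_classic = (1 + math.sqrt(5)) / 2
print(phi_from_cosine, phi_classic)                  # both ≈ 1.618033988749895
print(math.isclose(phi_from_cosine, phi_classic))    # True
```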
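Regarding item 25: a toy version of the communications analogy - a receiver snaps each noisy, continuous voltage to the nearest nominal level, so its behavior is discrete even though the field it responds to is continuous. The levels, noise amplitude, and sample count are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
levels = np.array([0.0, 1.0])                       # nominal symbol voltages
sent = rng.choice(levels, size=1000)
received = sent + 0.2 * rng.standard_normal(sent.size)   # e.g. 1.176 arrives instead of 1.000

# Behave *as if* each measurement were exactly one of the nominal levels:
decoded = levels[np.argmin(np.abs(received[:, None] - levels[None, :]), axis=1)]
print("distinct behaviors:", np.unique(decoded))    # only 0.0 and 1.0 ever occur
print("agreement with sent:", (decoded == sent).mean())
```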