Everything posted by Feynmanfan85
-
A photon has a mass of zero, yet it has to be made of something. We can debate what the substance of a photon is, but it is certainly not mass. My model views a photon as pure kinetic energy, and thus views the substance of a photon as energy. Do you have another view as to the primary substance of a photon?
-
@swansont Please see my responses below.

I think my point came across imprecisely: the point is that any particle must have energy in order to exist. For example, a stationary electron has mass, and therefore has a non-zero energy of mc^2. Similarly, a photon has an energy of hf. In general, all particles, and all systems, must have energy in order to exist, whether or not they have mass. I used the photon as an example only because it is difficult to argue that a photon is not pure energy. I assume that particles with mass are also comprised of energy, but that the energy in a massive particle "codes" for mass. Similarly, a particle with charge would have energy that codes for charge. So, for example, the energy within a stationary electron would code for both mass and charge.

When an electron-positron collision occurs, my model views the resultant photons as the result of the energy within the electron and positron "changing states", or informally, changing codes, thereby producing photons instead of the original electron-positron pair. My model views particle decay as the result of a similar process, except that no interaction is necessary: the codes spontaneously change, thereby causing particle decay.

"Processing" is an analogy I use to make my model intuitive. In my model, energy is quantized, just like charge, and comes in chunks of kh, where h is Planck's constant, and k is a constant I describe in my paper (I discuss k a bit more below). In a stationary massive particle, all of the energy codes for mass (the "mass chunks" I mentioned above). I assume that as time goes on, the codes contained in that mass energy spontaneously change. Eventually, the individual codes change so much that, collectively, the code generated produces a completely different particle. We can think of the codes changing as the "processing" that takes place over time.

When a particle has kinetic energy, these codes still change, but the kinetic energy within the particle also needs to get "processed". I assume that kinetic energy is a bit different from mass energy, in that instead of changing its code, kinetic energy causes the particle to move when it gets "processed". Every particle can be thought of as having a fixed capacity for processing energy. When it is stationary, it is at parity, and experiences no time-dilation, since it is "built" to process its own mass energy constantly. When it gains kinetic energy, its ability to process the additional energy becomes strained, causing its behaviors to be delayed. In short, we can think of mass energy changing codes as a clock, and as a particle gains kinetic energy, it "ticks" less often, since the mass codes get updated less often.

Energy is not relative in my model, and neither is time. Nonetheless, my model produces slow-ticking clocks, but this is a mechanical result of the presence of kinetic energy, not the result of a philosophical view on the nature of light or time.

My model does not imply a specific value for k, but I present equations relating it to Planck's constant h and the Compton wavelength of a particle in Section 3.5 of my paper. The most intuitive explanation of the physical significance of k is as follows: consider a light source with a frequency of f, and view that light source as generating waves, instead of discrete photons. Now consider a point in space in front of the source, say a point on a wall that the light source is shining on. The value 1/k is the amount of time it takes for the waves incident upon that point to impart the same energy as a single discrete photon of frequency f. That is, 1/k seconds.
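To make the clock analogy concrete, here is a minimal sketch in Python. The update rule is an assumption I'm making purely for illustration, not the formal treatment in Section 3 of my paper: a moving particle spends a fraction of its processing steps updating mass codes equal to the share of its total energy that is mass energy, which works out to 1/gamma, the familiar time-dilation factor.

```python
import math

# Minimal sketch of the "tick rate" idea. Assumption (mine, for
# illustration): the fraction of processing steps spent updating mass
# codes equals the share of total energy that is mass energy,
#   rate = E_mass / E_total = m c^2 / (gamma m c^2) = 1 / gamma,
# which is exactly the SR time-dilation factor.

def tick_rate(beta):
    """Fraction of processing steps spent updating mass codes for a
    particle moving at speed v = beta * c."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    e_mass = 1.0      # rest energy, in units of m c^2
    e_total = gamma   # total energy, in the same units
    return e_mass / e_total

for beta in (0.0, 0.5, 0.9, 0.99):
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    print(f"beta={beta:4.2f}  tick rate={tick_rate(beta):.4f}  1/gamma={1 / gamma:.4f}")
```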
-
All particles must have energy in order to exist. Even a stationary particle has mass energy. In contrast, a particle can have a momentum of zero and still exist. As such, momentum is not a reasonable candidate for an elementary physical substance, since it is not a necessary property of all particles. Energy, by contrast, is a reasonable candidate for an elementary physical substance, since it is a necessary property of all particles, and of all things generally. That is, there is literally nothing in this universe that exists with zero energy.

Moreover, electron-positron annihilation demonstrates that mass and energy are interchangeable substances. Similarly, photon-photon pair production shows that the energy of a photon can produce mass.

Sources:
https://en.wikipedia.org/wiki/Electron–positron_annihilation
https://www.slac.stanford.edu/exp/e144/e144.html

Further, there is no property of a photon other than its energy that can account for its substance, which it must have, since it is capable of interacting with other particles.

Source: https://en.wikipedia.org/wiki/Compton_scattering

As you note, a photon has momentum, but as I note, momentum is not a necessary property of all particles, and as such, it is not a reasonable candidate for the primary substance of a photon, or of any particle for that matter.

I briefly summarize my thesis below.

The core insight of the model comes from elementary particle decay. Particles decay spontaneously, without interacting with other particles. When a particle has kinetic energy, the amount of time it takes for this decay to occur increases. In special relativity, this is attributed to time-dilation, and the subjectivity of time.

Source: https://en.wikipedia.org/wiki/Particle_decay#Probability_of_survival_and_particle_lifetime

My model takes a different approach, one that views time like a processor that slows down when given a task that requires it to churn through a lot of information: in my model, the more energy a particle contains, the more information it contains, and as such, the more kinetic energy a particle has, the more time is required to "process" its behaviors, causing those behaviors to be slowed down relative to an identical stationary particle.

Specifically, I assume that the mass of an elementary particle comes in "chunks" of m = kh/c^2, where h is Planck's constant, and k is a constant of proportionality I describe in my paper (I actually derive this value, but for now let's just treat it as an assumption). Further, I assume that each little chunk of mass within an elementary particle contains a tiny bit of information about its properties, and together, all of the little chunks of mass in an elementary particle collectively "code" for the properties of the particle. For example, an electron will consist of some number of chunks of mass, each of which contains a code, and collectively, all of the chunks together code for the properties of a single electron.

When a particle is stationary, I assume that the individual codes contained within the chunks of mass spontaneously change over time, eventually causing the chunks of mass to collectively code for a different particle, causing particle decay. In short, I treat particle decay as a combinatorial game of sorts, where the codes contained within the individual chunks of mass spontaneously change over time, eventually causing the chunks to collectively produce a code that generates a completely different particle, or set of particles, causing decay.
I also assume that kinetic energy comes in chunks of kh, and that kinetic energy also contains information, this time about velocity (specifically, direction and magnitude). Finally, I assume that either the mass chunks within a particle change their codes, or the particle moves in the direction coded for by its kinetic energy, but never both at the same time. This means that the more kinetic energy a particle has, the less likely it is for the mass chunks within the particle to change their codes. Therefore, the more kinetic energy a particle has, the longer it is expected to survive, since it will take more time for the chunks to collectively produce a code that generates a different particle. (A toy simulation of this mechanism appears at the end of this post.)

A more technical version of these assumptions appears in Section 3 of my paper, where I show that they lead to the same equations given by the special theory of relativity. I also show that this line of thinking implies the equations for the Compton wavelength of a particle, the de Broglie wavelength of a particle, and the correct equations for the velocity and momentum of a particle, including the photon. However, because my model is not rooted in assumptions regarding the velocity of light, it allows for particles like the neutrino, which has non-zero mass and a velocity experimentally indistinguishable from c, a glaring exception to the special theory of relativity that everyone seems to simply gloss over.

Source: https://arxiv.org/abs/1307.0101

Finally, I also show that my model implies the correct equations for time-dilation due to gravity, and specifically, implies that gravity can be viewed as a force carried by a wave with a definite frequency and wavelength.

As for your question regarding the photon with an energy of 1 eV: my model would view 1 eV as the quantity of energy within the photon, which ignores the information contained within that energy. That is, energy is akin to information in my model, and saying a photon has an energy of 1 eV means that there is a certain amount of information contained within that energy that describes all of the properties of the photon.

In short, I treat elementary particles like automata, and assume that the energy of a particle contains the information that describes its behavior. Informally, the "code" for the particle is contained within its energy, which I treat as a physical substance. The more energy a particle has, the more information it contains about its own behavior.

https://www.researchgate.net/publication/323684258_A_Computational_Model_of_Time-Dilation
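As an illustrative sketch (not the formal treatment in Section 3 of my paper), the combinatorial game described above can be simulated directly. The toy assumptions here are mine: at each discrete step the particle either updates one mass-chunk code or moves, chosen in proportion to the counts of mass chunks versus kinetic-energy chunks, and the particle "decays" after a fixed number of code updates. The mean lifetime then dilates by E_total/E_mass, i.e., by gamma.

```python
import random

# Toy Monte Carlo of the chunk model (assumptions are illustrative, not
# taken verbatim from the paper): each step either updates a mass-chunk
# code or moves the particle, with probability proportional to the chunk
# counts; decay occurs after N_DECAY total code updates.

N_DECAY = 1000   # code updates needed before decay (arbitrary)
TRIALS = 2000

def mean_lifetime(n_mass, n_kinetic):
    """Average number of steps until decay for a particle made of
    n_mass mass chunks and n_kinetic kinetic-energy chunks."""
    p_update = n_mass / (n_mass + n_kinetic)  # chance a step updates a code
    total_steps = 0
    for _ in range(TRIALS):
        steps, updates = 0, 0
        while updates < N_DECAY:
            steps += 1
            if random.random() < p_update:
                updates += 1
        total_steps += steps
    return total_steps / TRIALS

t_rest = mean_lifetime(100, 0)     # stationary particle
t_fast = mean_lifetime(100, 100)   # kinetic energy equal to mass energy
print(t_fast / t_rest)             # ~2.0 = E_total / E_mass, i.e., gamma
```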
-
What is the substance of a photon? I think you'll be hard-pressed to say it isn't pure energy.

Please provide an explanation of how this is falsified, rather than simply asserting that it is. My understanding is that this is a science forum, not a courtroom.

The very nature of my model is that we can view energy as containing all of this information. If we do, then Einstein's equations for time-dilation follow. In fact, I've updated the work to show that it implies the correct equations for time-dilation due to gravity given by the general theory of relativity as well.

https://www.researchgate.net/publication/323684258_A_Computational_Model_of_Time-Dilation

Then the difference between my model and the special theory of relativity will be even smaller. The difference in measured wavelength would be smaller than one nanometer, which is approaching the sensitivity limits of a standard interferometer. As far as I am aware, no one has tested the velocity of light to the precision required to falsify the claims I make. My paper cites other relevant works in Section 5.
-
The two models imply equations that are generally identical, but the underlying assumptions are very different, which results in extremely small differences between the results the two models predict. SPR arrives at time-dilation by assuming that the velocity of light is constant in all frames of reference. I arrive at time-dilation by assuming that energy contains all of the information about a system.

For example, my model predicts slightly different equations for the Doppler effect. Under reasonable assumptions, the difference between the Doppler-shifted photon energy predicted by my model and that predicted by SPR is on the order of 10^-6 eV. Further, my model implies that while the actual velocity of light is always exactly c, the measured velocity of light can deviate from the exact value of c. I show that from an inertial frame like the Earth, the deviations from c could be so small that even a modern interferometer might struggle to detect them (i.e., it could require measuring sub-nanometer displacements).

In my analysis, I use concepts such as information entropy, Kolmogorov complexity, and computable functions. Specifically, in my model, all physical properties, such as momentum, are computable functions of some "basis" information contained within the system. For example, my model implies that there is some computable function that, when given the basis information of a particular system, will generate the momentum of the system as its output.
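For context, here is the standard SR formula for the longitudinal Doppler shift, which is the baseline against which the ~10^-6 eV deviation would have to be resolved. My model's own Doppler equation is in the paper; this sketch shows only the SR side.

```python
import math

def sr_doppler_energy(e_photon_ev, beta, approaching=True):
    """Standard SR longitudinal Doppler shift: energy of a photon of
    energy e_photon_ev (in eV) as seen from a frame moving at v = beta*c
    directly toward (or away from) the source."""
    factor = math.sqrt((1 + beta) / (1 - beta))
    return e_photon_ev * (factor if approaching else 1.0 / factor)

# A 2 eV photon seen from a frame approaching the source at 0.1c:
print(sr_doppler_energy(2.0, 0.1))  # ~2.21 eV; a 1e-6 eV deviation is
                                    # roughly one part in 2e6 of this value
```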
-
Hi All,

I have developed a model of time-dilation using information theory that is almost indistinguishable from the special theory of relativity, except at scales that can be measured by a device like a Mössbauer spectrometer or an interferometer. My academic background is in computer theory and graph theory, so while I am confident in the soundness of the mathematics I have developed, I would greatly appreciate the insights of professional physicists in navigating this topic in a manner that is respectful to the existing body of knowledge regarding SPR. So far I've found the physics community to be extremely helpful, and thankfully, no one has found any irresolvable problems with the concepts I've developed. I look forward to discussing!

Here is a working copy of the paper: Computational Model of TD 3-9.pdf

Charles