Mordred

Resident Experts
Everything posted by Mordred

  1. This isn't true in terms of measurement for our observable universe. That isn't the same as a center of the universe, which doesn't exist under the cosmological principle. Velocity is always relative to the observer, even in Newtonian physics. For our observable universe the center is Earth; however, an observer in some distant galaxy will have a different observable universe.
  2. Well, I believe everyone agrees we need tighter constraints on the impact possibilities; nothing is particularly conclusive at our stage of research.
  3. Speed of the objects, i.e. average velocity of galaxies, or the influence of the rate of expansion (i.e. recessional velocity)?
  4. Well, guesswork with known physics being applied. The simulations are very useful, as one can further use them to look for other evidence. For example, one simulation suggests we can find a significant portion of Theia below our crust, though if I recall at something on the order of 80 km. That addresses one other issue: "where are the remnants of Theia?"
  5. Well, in terms of debris one might be surprised at what is shown under simulations. This is one example. Paper here: https://arxiv.org/abs/2210.01814 NASA pop media coverage with the simulation: https://www.nasa.gov/solar-system/collision-may-have-formed-the-moon-in-mere-hours-simulations-reveal/ I've seen different simulations and searches; the results can vary greatly on how the debris gets treated.
  6. The reason standard candles are needed is that they must have a well understood, repeating process in order to determine what the emitted frequencies would be prior to any redshift. This relates directly to spectroscopy. This is also why the local group calibrations are necessary, as we can use other methods not involving luminosity (stellar parallax) as a means of verification; unfortunately those are impractical far field. The evolution history of our universe will also influence luminosity due to how the densities of matter, radiation and the cosmological constant evolve over time, though these factors are typically included in these papers. Using a galaxy as a distance measure is, to put it bluntly, too varied in its possible spectrum to be useful. The distance modulus sketched below shows how the calibrated absolute magnitude feeds into the luminosity distance.
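     As a minimal sketch of why that calibration matters: once a standard candle's absolute magnitude M is pinned down in the local group, its apparent magnitude m gives the luminosity distance through the standard distance modulus,
     \[m-M=5\log_{10}\!\left(\frac{d_L}{10\ \text{pc}}\right),\qquad d_L=\sqrt{\frac{L}{4\pi F}}\]
     so any systematic error in the calibration of M propagates directly into \(d_L\) and hence into the inferred expansion history.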
  7. The biggest problem with pop media coverage is that it tends to be very misleading. They never tell the full story and typically strive to drive reader interest with sensationalist claims. What the pop media coverage fails to mention is just how truly difficult it is to calibrate standard candles. Whenever more sensitive equipment is used, you will invariably encounter calibration issues, specifically environmental calibration such as local group light pollution, peculiar local group velocities, etc. A great deal of research has recently been published on using specific standard candles of our local group for benchmark calibration; this then gets applied to the luminosity distance relations. This is also being applied to the Hubble contention, where local group datasets give rise to a different Hubble constant than datasets using the CMB. A good example you may be familiar with was the calibration issue regarding the axis of evil from the first Planck dataset (dipole anisotropy). The other issue of course is that we're not clear on just how long it takes a galaxy to form in a much higher density past; the same goes for primordial black holes. Then there is the detail that the lookback time used to determine the age of the universe involves the cosmological parameters and the Hubble constant, so any variation between two datasets concerning those will give a different age for the universe, though the difference typically isn't too significant (the integral is sketched below). I've also seen later studies showing distance corrections to previously measured far field objects via filter calibration (filters for luminosity, both hardware and software) in regards to JWST. This paper I posted in another thread in this forum is a recent JWST study to determine the Hubble constant by the method I briefly described above; it doesn't find any need for new physics in regards to the Hubble parameter. Though this paper is about the Hubble contention, the studies it did on the Cepheids it uses also get applied to far field measurements. https://arxiv.org/abs/2408.06153
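     As a minimal sketch of that dependence, in a flat FLRW model the age of the universe follows from
     \[t_0=\int_0^\infty\frac{dz}{(1+z)H(z)},\qquad H(z)=H_0\sqrt{\Omega_m(1+z)^3+\Omega_r(1+z)^4+\Omega_\Lambda}\]
     so two datasets that prefer different values of \(H_0\) and the density parameters will return slightly different ages.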
  8. It's all good if you keep posting in a decent manner, such as you did on your return here: https://www.scienceforums.net/topic/134508-the-moon-earths-little-sister/ Then any neg rep points will quickly disappear.
  9. One significant difference: the Moon has no erosion while the Earth does. It's more likely the Earth was hit more frequently due to its higher gravity, but due to erosion the evidence has long been wiped out. +1 for the considerable improvement in thread quality.
  10. How did we go from renewable energy to conservation of energy in regards to the observable universe?
  11. No problem glad to help.
  12. Not quite; the router will have its own address, but so does each device downstream. The router will rebroadcast the data, including the IP address, which is contained in each data packet. The device that has the correct address will then pick up the data. If you do not know the IP address, the device should have a MAC number on it. You can use the old DOS ARP command with the MAC address to return the IP address, or change that IP address. There are utilities available that do the same thing, however I can't recall the name. (A small sketch of that lookup is below.)
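     As a minimal sketch of that lookup, assuming the machine has the arp utility on its PATH (the MAC address below is just a placeholder, and the output format of arp -a differs between Windows, Linux and macOS):

     import subprocess

     # Dump the ARP cache (the IP-to-MAC table) and keep lines matching a given MAC.
     target_mac = "aa-bb-cc-dd-ee-ff"  # placeholder MAC address

     result = subprocess.run(["arp", "-a"], capture_output=True, text=True)
     for line in result.stdout.splitlines():
         if target_mac.lower() in line.lower():
             print(line)  # the matching entry shows the corresponding IP address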
  13. Using what I mentioned above and graph A of the article posted by Studiot: let \(\sigma^\ast\) denote the excited quantum state of an atom. On the y axis of the graph assign \(\phi(E)\), on the x axis E. To the vertical center line of the amplitude peak assign \(E_R\), the peak energy of the amplitude. The full width of the peak at half maximum determines the lifetime of the resonance. Resonance is used for all particles under Breit-Wigner; further details here (a simplified treatment for ease of understanding): https://web2.ph.utexas.edu/~vadim/Classes/2019f/resonances.pdf A cross section is the entire graph rather than just the localized highest peak, the resonance. In terms of sound this would apply to the phonon. (Keep in mind this statement is extremely simplified, even though the later parts get complex.) A full cross section of an interaction would look like the below; as I already have this in LaTeX in another thread, my Nucleosynthesis thread, I will simply copy and paste from there. It includes further details on Breit-Wigner, and a small numerical sketch of the line shape and lifetime follows this post. Breit-Wigner cross section \[\sigma(E)=\frac{2J+1}{(2s_1+1)(2s_2+1)}\frac{4\pi}{k^2}\left[\frac{\Gamma^2/4}{(E-E_0)^2+\Gamma^2/4}\right]B_{in}B_{out}\] where E is the c.m. energy, J is the spin of the resonance, \((2s_1+1)(2s_2+1)\) is the number of polarization states of the two incident particles, k is the initial c.m. momentum, \(E_0\) is the c.m. energy at resonance, \(\Gamma\) is the full width at half maximum amplitude, and \(B_{in}\), \(B_{out}\) are the branching fractions into the initial and final states. For a narrow resonance the bracketed factor can be replaced by \[\frac{\pi\Gamma}{2}\delta(E-E_0)\] The production of point-like, spin-1/2 fermions in e+e− annihilation through a virtual photon at c.m. energy: \[e^+e^-\longrightarrow\gamma^\ast\longrightarrow f\bar{f}\] \[\frac{d\sigma}{d\Omega}=N_c\frac{\alpha^2}{4s}\beta\left[1+\cos^2\theta+(1-\beta^2)\sin^2\theta\right]Q^2_f\] where \(\beta=v/c\), \(\theta\) is the c.m. frame scattering angle, \(Q_f\) is the fermion charge, and the color factor \(N_c=1\) for charged leptons, \(N_c=3\) for quarks. If v = c (ultrarelativistic particles) then \[\sigma=N_cQ^2_f\frac{4\pi\alpha^2}{3s}=N_cQ^2_f\frac{86.8\ \text{nb}}{s\ (\text{GeV}^2)}\] Quark pair to quark pair: \[\frac{d\sigma}{d\Omega}(q\bar{q}\rightarrow \acute{q}\acute{\bar{q}})=\frac{\alpha^2_s}{9s}\frac{t^2+u^2}{s^2}\] and crossing symmetry gives \[\frac{d\sigma}{d\Omega}(q\acute{q}\rightarrow q\acute{q})=\frac{\alpha^2_s}{9s}\frac{s^2+u^2}{t^2}\]
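     As a minimal numerical sketch of the Breit-Wigner line shape named above (the resonance energy and width below are placeholders, not values from the article): the peak sits at \(E_0\), the full width at half maximum is \(\Gamma\), and the mean lifetime follows from \(\tau=\hbar/\Gamma\).

     import numpy as np

     HBAR = 6.582119569e-25  # reduced Planck constant in GeV*s

     def breit_wigner(E, E0, gamma):
         # Normalized Breit-Wigner (Lorentzian) line shape: peak at E0, FWHM = gamma.
         return (gamma / (2 * np.pi)) / ((E - E0) ** 2 + gamma ** 2 / 4)

     E0, gamma = 1.0, 0.1              # placeholder resonance parameters in GeV
     E = np.linspace(0.5, 1.5, 1001)
     shape = breit_wigner(E, E0, gamma)

     # Reading the full width at half maximum off the curve reproduces gamma,
     # and the mean lifetime of the resonance is hbar / gamma.
     above_half = E[shape >= shape.max() / 2]
     print("FWHM ~", above_half[-1] - above_half[0], "GeV")
     print("mean lifetime ~", HBAR / gamma, "s")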
  14. It may be best to add some important details, as this goes beyond the good example provided by Swansont. As dealing with this gets lengthy, I will include an article to supply the details behind eigenstates, which have no uncertainty. The article opens with the following key statement: "As we know, observables are associated to Hermitian operators. Given one such operator A we can use it to measure some property of the physical system, as represented by a state Ψ. If the state is in an eigenstate of the operator A, we have no uncertainty in the value of the observable, which coincides with the eigenvalue corresponding to the eigenstate. We only have uncertainty in the value of the observable if the physical state is not an eigenstate of A, but rather a superposition of various eigenstates with different eigenvalues." https://ocw.mit.edu/courses/8-05-quantum-physics-ii-fall-2013/005979fa741c3ea2e0430456b70caf93_MIT8_05F13_Chap_05.pdf In essence eigenstates have no uncertainty; however, this gets into the measurement axiom of QM, where the act of measurement produces a state, but any further measurement will produce a new state. Better described here: https://www.britannica.com/science/quantum-mechanics-physics/Axiomatic-approach For the OP this deals specifically with "observation" measurement, where the superposition of states is lost due to observation. However, I will leave that as a mental exercise with the article Studiot posted. Side note: graph A is a delta function that is localizable; you can readily determine the boundaries from graph A, whereas a sine wave is not localizable. In terms of a particle, the mean lifetime can be determined from graph A using Breit-Wigner distributions. The outside amplitudes, not the primary amplitude, would be considered resonances; however, as their width is equal to or greater than their amplitude, they would not be considered a resonant particle. (Just a little side note, taking advantage of the graph provided by Studiot.) A small numerical example of the eigenstate statement is below.
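     A minimal numerical sketch of the quoted statement, using a 2x2 Hermitian matrix as a stand-in observable (the matrix and states are placeholders): in an eigenstate the variance of the observable vanishes, while a superposition of eigenstates with different eigenvalues does not.

     import numpy as np

     # Placeholder Hermitian observable (the Pauli z matrix), eigenvalues +1 and -1.
     A = np.array([[1.0, 0.0],
                   [0.0, -1.0]])

     def variance(A, psi):
         # Return <A^2> - <A>^2 for a normalized state vector psi.
         psi = psi / np.linalg.norm(psi)
         mean = np.vdot(psi, A @ psi).real
         mean_sq = np.vdot(psi, A @ A @ psi).real
         return mean_sq - mean ** 2

     eigenstate = np.array([1.0, 0.0])                  # eigenvector of A
     superposition = np.array([1.0, 1.0]) / np.sqrt(2)  # equal mix of both eigenstates

     print(variance(A, eigenstate))     # 0.0 -> no uncertainty in the observable
     print(variance(A, superposition))  # 1.0 -> uncertainty from the superposition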
  15. Any Fourier transform pair will inherently have an uncertainty relation between position and momentum, or time and frequency. +1 for mentioning that. Cross posted with Swansont. Here is a decent article on the Fourier transform uncertainty principle: http://math.uchicago.edu/~may/REU2021/REUPapers/Dubey.pdf (A small numerical sketch is below.)
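     As a minimal numerical sketch of that trade-off (the pulse widths below are placeholders): the narrower a Gaussian pulse is in time, the broader its spectrum, so the product of the two RMS widths stays pinned near the Fourier limit of 1/(4*pi).

     import numpy as np

     def rms_widths(sigma_t, n=4096, dt=0.01):
         # RMS width of a Gaussian pulse in time and of its power spectrum in frequency.
         t = (np.arange(n) - n / 2) * dt
         pulse = np.exp(-t**2 / (2 * sigma_t**2))
         spectrum = np.abs(np.fft.fftshift(np.fft.fft(pulse)))**2
         f = np.fft.fftshift(np.fft.fftfreq(n, dt))
         prob_t = pulse**2 / np.sum(pulse**2)
         prob_f = spectrum / np.sum(spectrum)
         return np.sqrt(np.sum(prob_t * t**2)), np.sqrt(np.sum(prob_f * f**2))

     for sigma in (0.2, 0.5, 1.0):       # placeholder pulse widths
         dt_rms, df_rms = rms_widths(sigma)
         print(sigma, dt_rms * df_rms)   # each product comes out near 1/(4*pi) ~ 0.0796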
  16. Fairly common on forums, particularly those that allow speculation. You see the same thing on FB as well.
  17. How about simply thinking of the observer effect as any measurement? That is how QM describes the observer effect.
  18. I seem to recall that, so you're likely right on that.
  19. After running into the problem of my LaTeX getting changed to rich text format numerous times, I noticed a couple of repeatable and consistent causes. For example, if you have LaTeX in your post and try to insert an image or hyperlink to another website, this will more often than not cause the LaTeX instructions to drop, and from my end the post seems to switch entirely to RTF. I don't know if this is repairable, but I mention it for everyone's awareness. (010) The last switch occurred simply due to a thread merge. The LaTeX was placed in a separate post, but the merge operation switched it to RTF. This is what I had actually typed, minus the LaTeX command brackets: (010) Lol, that activated without the \[ command brackets on merge. Both lines were using pmatrix LaTeX format.
  20. One idea I was considering suggesting is simply dropping negative rep points but keeping the positive side. Negative rep tends to get a member's hackles up, which rather defeats the intent behind the rep system being a recognition of good post quality.
  21. Hence the usage of supercomputer runs like the Illustris and Millennium simulations, which when zoomed in reproduce each galaxy type. https://wwwmpa.mpa-garching.mpg.de/galform/virgo/millennium/ Even the NFW profile, though, requires using a PC, as its formula is a power law; though in that case one can factor it into a natural log function in the same manner as the scale factor for the FLRW metric. (The profile is written out below.)
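     For reference, the NFW profile mentioned above is the standard two-parameter form
     \[\rho(r)=\frac{\rho_0}{\frac{r}{r_s}\left(1+\frac{r}{r_s}\right)^2}\]
     where \(\rho_0\) is the characteristic density and \(r_s\) the scale radius; its logarithmic slope runs from -1 at small radii to -3 at large radii.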
  22. I was thinking more along the lines of bulging occurring during cooling, but that would require an early tidal locking prior to completely cooling. However, I agree with the rest; we have to see if the OP returns or not, but it's still a good topic for discussion.
  23. Yeah one of the hassles of how to describe something outside of the math lol. Leave that one for the metaphysics arguments give them something useful lol
  24. Sounds like what's being described is the CNB blackbody temperature decrease due to expansion, which means the mean kinetic energy is reduced along with the reduction in number density of neutrinos. That makes sense; the current CNB temperature from calculation is roughly 1.95 Kelvin, however in the past it was much higher (the relation is below). Yes, even with 3 generations of sterile neutrinos to replace DM you would need more than the baryonic particle count; as you recall, there is more DM than baryonic matter. That's the essential clincher against a sterile neutrino solution. One can show sterile neutrinos have a good match in mean lifetime and weak interaction with the non-relativistic (cold) characteristics. It's the sheer number required that's the main issue.
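     For reference, the standard relation behind that calculated value ties the neutrino background temperature to the photon temperature after electron-positron annihilation:
     \[T_\nu=\left(\frac{4}{11}\right)^{1/3}T_\gamma\approx 1.95\ \text{K today},\qquad T_\nu(z)=T_{\nu,0}(1+z)\]
     so the CNB was indeed much hotter in the past.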
  25. Actually, redshift applies to all particles, but I understand you're referring to photons as the means by which we measure an object's redshift. Yes, neutrinos do have a high momentum term; however, due to their weak interactions any scattering collisions are greatly reduced. We're both trying to get DanP to clarify which class of observer, i.e. applying the four momentum but recognizing observer effects on the particle's four momentum term. (The momentum redshift relation is below.)
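     As a short reminder of why redshift applies to massive particles as well as photons: in an expanding FLRW background the physical momentum of any freely moving particle scales as
     \[p(t)\propto\frac{1}{a(t)}\qquad\Rightarrow\qquad\frac{p_{\text{emit}}}{p_{\text{obs}}}=1+z\]
     which for photons reduces to the familiar wavelength redshift.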