Everything posted by Duda Jarek
-
Decay means going to a lower-energy and therefore more stable state - what normally stands in the way of such a natural process is some energy barrier ... so the higher the temperature (average energy), the easier the decay ... and the inner core of a neutron star reaches roughly the maximal temperatures available to standard matter ... so proton decay could be a kind of nature's failsafe against infinite densities ... About ultra-high-energy cosmic rays ... if the proton can decay, then for example: - during a fast gravitational collapse, the onset of this decay could be rapid, so that it would cause an explosion with much more energetic particles than a standard supernova, or maybe - during a slow collapse, the GeV-scale photons created in proton decay could, in such extreme conditions, destroy the internal structure of further neutrons (or proton-electron pairs), absorbing their energy and growing to astronomical energies ...
-
Particles are local energy minima, but there is always a lower-energy state than any particle - no particles at all. States prefer to de-excite to a lower-energy state, radiating the difference as photons (the deeper the local energy minimum, the higher the energy barrier and so the more difficult this de-excitation is). The higher the average energy (temperature), the easier these de-excitations/decays become (the shorter the expected lifetime). If baryon number doesn't have to be conserved, then matter, instead of creating an infinite energy-density state in a black hole, should decay into nothing, emitting its energy as photons ...
-
There is a hypothetical proton decay under consideration - usually into a positron and a neutral pion, which quickly decays into two photons. Such decays would allow standard matter to change completely into EM waves (proton + electron -> ~4 photons). This decay therefore leads to a more stable state, and the temperatures in a collapsing neutron star should make it easier - suggesting that a neutron star, instead of creating a mysterious matter state (a black hole), should 'evaporate' - turn its core into photons ... I've looked at a few papers and haven't found any that consider this type of consequence. If this process requires extreme conditions to be statistically important, it would happen practically only in the center, heating the star ... It doesn't contradict the Big Bang as a Big Bounce (it cannot change the difference between the amounts of matter and antimatter). Maybe it could explain the extremely high-energy cosmic rays? (Maybe at extremely high temperatures high-energy photons could themselves destroy the proton + electron structure, absorbing part of its energy...) What do you think about proton decay? If it were true - would black holes be created at all?
-
Ok, I've finally found the article you are referring to ... yes - that's how it should be done! ... http://www.pacificbiosciences.com/index.php I didn't realize that there is such a large difference in time scale between searching for and incorporating the new base ... in this case the dye indeed doesn't have to be activated - it can be always active, and we can use the fact that "it takes several milliseconds to incorporate it" ... It's simpler and better than my idea, but I still want to defend mine... As I imagine it, a simpler polymerase lets succeeding 'nucleotide carriers' try whether they fit the currently considered base. If the polymerase doesn't analyze the current base, these 'nucleotide carriers' are taken from the environment completely at random - a concentration ratio of 1:10 means that statistically one in 11 'draws' would try the first type ... I completely agree that this is simplified, but generally we can choose the concentrations to set the differences in search time as we want. It's a stochastic process - this time varies - that's why a few runs over a given strand would be required to increase precision. I agree that these concentration differences would increase the number of errors made, but we don't have to use the duplicated strands (for example when using a nanopore - the duplicates are on the other side). The problem is that the time required to incorporate a nucleotide is much larger, but it should be practically constant. "How do you measure movement on the DNA?" The forces read by the AFM should be small and smooth until the active movement of the polymerase to the next base - those steps should be seen as 'peaks' of force (along the DNA strand - the tip should be not on the bottom of the cantilever, but rather on its front).
-
In short, the polymerase cycle is: catch and insert the proper nucleoside triphosphate, then GO TO THE NEXT BASE, isn't it? This movement is active (uses ATP), so if the polymerase were attached, it would have to pull the DNA - an AFM observes single-atom interactions, so it should also observe the forces created during this 'pulling' step. To distinguish between bases we would only have to watch the time between succeeding steps - because of the large differences in concentrations of the 'nucleotide carriers', the time required to find the complementary base would differ between nucleotides, e.g. like 1:10:100:1000. Of course the accuracy wouldn't be perfect, so we would have to read a strand a few times. Probably this pulling would be too difficult for the polymerase and we would have to help it in a controlled way. For example, the ssDNA it works on could go through a nanopore - the AFM cantilever would be just behind the nanopore and the polymerase would work toward it. Thanks to this the polymerase would work on unfolded ssDNA and we could control the speed of releasing DNA through the nanopore by changing the electric field, to ensure optimal working conditions. Such nanopores are already working: http://www.physorg.com/news180531065.html I agree that dyes are more reliable, but it's difficult to use different dyes for different nucleotides, so in pyrosequencing it has to be done in a separate machine cycle. The perfect solution would be attaching different inactive dyes to the nucleoside triphosphates, so that when the polymerase catches one it breaks this connection as a side effect, releasing the dye and somehow activating it ... sounds great, but how to do it?? The only fast and simple way of distinguishing bases without drastically modifying the biochemical machinery (or, for example, attaching electrodes to it) that I can think of is modifying its speed by changing the 'nucleotide carrier' concentrations ...
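To illustrate the timing idea above, here is a minimal Monte Carlo sketch (my own toy model, not taken from any of the linked work): assume the search time for each base is exponentially distributed with a mean inversely proportional to its carrier concentration (ratios 1:10:100:1000), add a roughly constant incorporation time, and call each base from the waiting times averaged over a few repeated reads.
[code]
import math
import random

# Toy model: mean search time inversely proportional to carrier concentration.
# The 1:10:100:1000 ratios are the ones discussed above; time units are arbitrary.
MEAN_SEARCH = {'A': 1.0, 'C': 10.0, 'G': 100.0, 'T': 1000.0}
INCORPORATION = 5.0   # assumed roughly constant incorporation time per base

def observed_step_time(base):
    """One noisy step interval as the AFM would see it: exponential search + constant incorporation."""
    return random.expovariate(1.0 / MEAN_SEARCH[base]) + INCORPORATION

def call_base(times):
    """Call the base whose mean search time is closest (on a log scale) to the averaged observation."""
    avg_search = max(sum(times) / len(times) - INCORPORATION, 1e-6)
    return min(MEAN_SEARCH, key=lambda b: abs(math.log(avg_search) - math.log(MEAN_SEARCH[b])))

def sequence_estimate(strand, runs=5):
    """Average the step times over several runs of the same strand, then call each base."""
    return ''.join(call_base([observed_step_time(b) for _ in range(runs)]) for b in strand)

if __name__ == '__main__':
    truth = 'ACGTGGATTCCA'
    guess = sequence_estimate(truth, runs=5)
    accuracy = sum(a == b for a, b in zip(truth, guess)) / len(truth)
    print(truth, guess, f"accuracy = {accuracy:.2f}")
[/code]
With ratios this far apart, even a handful of runs separates the four bases on a log scale, which is the point of the repeated-read argument above; what the toy ignores is exactly what the post acknowledges - the polymerase's own error rate growing with the skewed concentrations.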
-
Yes - monitoring the insertion of nucleotides is more accurate, but it's difficult to make it fast - usually each base requires a separate cycle of macroscopic duration. So for such methods the only way to make them practical is to use massive parallelism, as you wrote ... but it's still slow and expensive ... We should also look for smarter methods that analyze base by base, for example while the DNA goes through a nanopore/protein/ribosome ... A polymerase naturally processes large portions of a chromosome in minutes/hours - if we could monitor this process, we would get faster and cheaper methods ... Measuring the position of the polymerase is in fact really difficult ... optical methods are probably not precise enough ... taking many snapshots with an electron microscope could damage it ... An alternative approach is attaching the polymerase to the cantilever of an atomic force microscope - it should 'feel' the single steps it makes ... so if we used large differences in concentrations (like 1:10:100:1000), the times between steps would very probably determine the base. Of course we would have to process a given strand a few times to get the required accuracy.
-
One of the reasons to introduce the concept of entanglement was the EPR experiment. A standard quantum measurement is a projection onto the eigenstate basis of some Hermitian operator (the Heisenberg uncertainty principle applies to these measurements). EPR uses an additional, qualitatively different type of information - we know that, because of angular momentum conservation, the produced photons have opposite spins. In a deterministic picture there would be no entanglement - there would just be one specific pair of photons created - 'hidden variables'. In this picture quantum mechanics is only a tool for estimating probabilities, and the concept of entanglement is essential for working with such uncertain situations, but it isn't directly physical. Bell's inequalities, however, made many people believe that we cannot use such a picture. Yet such moving 'macroscopic charged points' can be described deterministically very well - when we don't have full information, we could construct a probabilistic theory with 'hidden variables'. And if we knew that two pairs of rotating bodies had been created, such that we couldn't measure their parameters, we would still know that, e.g. because of angular momentum conservation, they have to rotate in opposite directions - so to work with such probabilities we would have to introduce some concept of entanglement ... So the question remains - would that theory have 'squares' like quantum mechanics, which make it contradict Bell inequalities? If not - how does this qualitative difference emerge while rescaling? Why can we describe rotating macroscopic points with 'hidden variables', but cannot do it with microscopic ones?
-
Quantum mechanics 'works' at the proton + electron scale. Let's enlarge it - imagine a proton rotating around a chloride anion ... up to two oppositely charged macroscopic bodies rotating in vacuum. We can easily idealize the last picture to make it deterministic (charged, non-colliding points). However, many people believe that Bell's inequalities say that the first picture just cannot be deterministic. So I have a question - how does this qualitative difference emerge while changing scale? At what scale would an analog of the EPR experiment start satisfying Bell's inequalities? I know - the problem is that it's difficult to construct such an analog. Let's try a thought experiment on such macroscopic rotating charged bodies, which are so far away that we can measure only some of their parameters, and so we can only work with a probabilistic theory describing their behavior. For simplicity we can idealize them as point objects that don't collide, so we can easily describe their behavior deterministically using some parameters which are 'hidden' from the (distant) observer. The question is: would such a probabilistic theory have the quantum-mechanical 'squares' which make it contradict Bell inequalities? If not - how would this change with scale? Personally I believe the answer is yes - for example thermodynamics among trajectories also gives these 'squares' (http://arxiv.org/abs/0910.2724), but I couldn't think of any concrete examples... How can one construct an analog of the EPR experiment for macroscopic-scale objects? Would it fulfill Bell inequalities?
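For reference, the 'squares' can be made concrete with the CHSH form of Bell's inequality (standard textbook material, my addition, not claimed in the post): any local hidden-variable model obeys
[math]|E(a,b)+E(a,b')+E(a',b)-E(a',b')|\le 2,[/math]
while for a spin-1/2 singlet pair quantum mechanics predicts the correlation [math]E_{QM}(a,b)=-\cos(\theta_a-\theta_b)[/math], coming from squared (Born-rule) amplitudes, which reaches [math]2\sqrt{2}[/math] for suitable angle choices. The question in the post is whether a suitably constrained macroscopic model could produce the same kind of correlations.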
-
Sanger sequencing is completely different - it cuts DNA into short pieces and uses electrophoresis. Pyrosequencing, which I've just read about, is a bit closer to what I'm thinking of - it sequentially adds nucleotides and watches whether they were used by the polymerase. The steps of such a sequence are quite long and therefore expensive. The idea is not to use such macroscopic-time sequences, but rather a natural process which runs many orders of magnitude faster. For example - somehow mount the polymerase on the cantilever of an atomic force microscope, so that it can 'watch' the speed of DNA processing. Now use different concentrations of the nucleotide 'carriers', so that the speed of the process depends on the current base. There should then be correlations between the base sequence and the forces observed by the microscope - processing a given sequence a few times this way, we should be able to fully determine the base sequence ... many orders of magnitude faster than with pyrosequencing. Eventually we could mount the ssDNA instead and optically watch the speed of the polymerase (for example by attaching to it something that produces light, like luciferase).
-
Some approaches are being considered to sequence DNA base by base - for example making it pass through a nanoscale hole and measuring its electric properties with nanoelectrodes. Unfortunately even theoretical simulations say that identifying bases this way is extremely difficult ... http://pubs.acs.org/doi/abs/10.1021/nl0601076 Maybe we could use nature's own ways of reading/working with DNA? For example, somehow mount a polymerase or ribosome and somehow monitor its state... I thought about using the speed of the process to get information about the currently processed base. For example, to process each successive base, a DNA polymerase has to get the corresponding nucleoside triphosphate from the environment - there are only four of them - and we can manipulate their concentrations. If we chose different concentrations for them, there would be correlations between the type of base and the time needed to process it - by watching many such processes we could determine the sequence. Is it doable? What do you think about such 'base by base' sequencing methods? How can we use the proteins developed by nature for this purpose?
-
When introducing a random walk on a given graph, we usually assume that for each vertex every outgoing edge has equal probability. Such a walk usually favors some paths over others. If we work on the space of all possible paths, we would like a uniform distribution among them, to maximize entropy. It turns out we can introduce a random walk which fulfills this condition: for every two vertices, each path of given length between them has the same probability. The probability of going from a to b in MERW is S_ab = (A_ab / lambda) (psi_b / psi_a), where A is the adjacency matrix, lambda its dominant eigenvalue and psi the corresponding eigenvector. The stationary probability distribution is then p_a proportional to psi_a^2. We can generalize the uniform distribution among paths to a Boltzmann distribution, and finally, taking the infinitesimal limit for such lattices covering R^n, we get behavior similar to quantum mechanics. This similarity can be understood as saying that QM is just a natural result of the four-dimensional nature of our world: http://arxiv.org/abs/0910.2724 In this paper further generalizations are made to a classical field of ellipsoids, with particles as its topological excitations. Their structure turns out to be very similar to that known from physics, with the same spin, charge, number of generations, mass gradation, decay modes, and electromagnetic and gravitational interactions. Here, for example, is the behavior of the simplest topological excitations of a direction field - spin 1/2 fermions: http://demonstrations.wolfram.com/SeparationOfTopologicalSingularities/ What do you think about it?
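A minimal numerical sketch of the formula above (my own illustration using numpy; the small graph is an arbitrary example): compute the dominant eigenpair of the adjacency matrix, build the MERW transition matrix, and check that the stationary distribution is proportional to psi^2.
[code]
import numpy as np

# Small connected undirected example graph (adjacency matrix).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

# Dominant eigenvalue/eigenvector of the symmetric adjacency matrix.
eigvals, eigvecs = np.linalg.eigh(A)
lam = eigvals[-1]                     # largest eigenvalue
psi = np.abs(eigvecs[:, -1])          # Perron eigenvector, taken positive

# MERW transition probabilities: S_ab = (A_ab / lambda) * (psi_b / psi_a)
S = (A / lam) * (psi[np.newaxis, :] / psi[:, np.newaxis])
assert np.allclose(S.sum(axis=1), 1.0)   # rows sum to 1: S is stochastic

# Stationary distribution of MERW is proportional to psi^2.
p = psi**2 / np.sum(psi**2)
assert np.allclose(p @ S, p)

print("lambda =", lam)
print("MERW stationary distribution:", p)
[/code]
The two assertions just verify the statements from the post: S is a proper stochastic matrix, and its stationary distribution is psi squared, normalized.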
-
Are particles made of topological singularities?
Duda Jarek replied to Duda Jarek's topic in Modern and Theoretical Physics
It turns out that rotational modes of the simplest energy density in such an ellipsoid field already create electromagnetic and gravitational interactions between such topological excitations. I've finished a paper about this whole idea of deterministic physics: http://arxiv.org/abs/0910.2724 -
Are particles made of topological singularities?
Duda Jarek replied to Duda Jarek's topic in Modern and Theoretical Physics
What do you mean? Looking at the demonstration - minimizing local rotations would make opposite (same) singularities attract (repel). Using the local curvature of the rotation axis we can define the E vector, and the B vector using some local curl. I'm not sure, but probably E_kin = Tr(sum_i d_i M * d_i M^T), where M is the (real) matrix field and d_i a directional derivative, or some modification of it, should lead to Maxwell's equations. For flat spacetime we assume that the time axes are aligned in one direction. The potential term should make given eigenvalues preferable - for example sum_{i=1..4} Tr(M^i - v_i)^2, where the v_i are sums of powers of eigenvalues. But a potential defined straightforwardly through the eigenvalues (l_i), sum_i (l_i - w_i)^2, looks more physical; in this representation we can write M = O diag(l_i) O^T, where O are orthogonal matrices (3 degrees of freedom in 3 dimensions). Now O corresponds to standard interactions like EM. The {l_i} are usually near {w_i} and change practically only near critical points (creating mass). These degrees of freedom should interact extremely weakly - mainly during particle creation/annihilation - they should have thermalized with the 2.7K EM noise over billions of years and store the (dark) energy needed for the cosmological constant.

About further interactions - I think the essential objects are 'spin curves' - a natural but underestimated consequence of the phase being defined practically everywhere (as in the quantum formulation of EM). This can for example be seen in magnetic flux quantization - it's just the number of such spin curves going through a given area. Taking this seriously, it's not surprising that opposite-spin fermions like to couple - through such spin curves. We can see it for nucleons, electrons in orbitals, Cooper pairs - they should create a spin loop. How to break such a loop? For example, to de-excite an excited electron which sits in such a couple, the simplest way is to twist the loop into a figure-eight shape and reconnect, creating two separate loops each containing one electron (fermion), which could then reconnect to create a lower-energy electron couple. Such a twist-and-reconnect process makes one of the fermions rotate its spin to the opposite one - changing spin by 1 - so we see selection rules ... which made us believe that photons are spin 1.

Going to baryons... In the rotation matrix O we can see U(1)*SU(2) symmetry ... but the topological nature of strong interactions is difficult to 'connect' with SU(3) ... Still, this ellipsoid model naturally gives higher topological excitations which are very similar to mesons/baryons ... with practically the same behavior ... with the natural neutrino < electron < meson < baryon mass gradation ... and which can naturally create nucleus-like constructions ... Practically the only difference is the spin of the Omega baryon - the quark model gives spin 3/2 and as a topological excitation it's clearly 1/2 ... but this 3/2 spin hasn't been confirmed experimentally (yet?). Pions would be Mobius-strip-like spin loops; kaons make a full, not half, internal rotation. Pions can decay by enlarging the loop - the charged part creates a muon, the other one a neutrino. The internal rotation of kaons should make them twist and reconnect, creating two/three pions. Long- and short-lived kaons can be explained by the internal rotation being made in one or the opposite direction. Baryons would be a spin curve going through a spin loop (which could be experimentally interpreted as a 2+1 quark structure). The loop and curve singularities use different axes - the spin curve looks electron-like and the loop meson-like (it produces pions).

Strangeness would make this loop perform some number of additional internal half-rotations. Its internal stress would make it twist and reconnect to release part of its internal rotation into a meson - most decay processes can be seen in this way. Two neutrons could reconnect their spin loops creating a figure-eight shape holding both of them together. With a proton it could reconnect their spin curves - a deuteron would be two attracting loops on one spin curve. Finally, in this way larger nuclei could be constructed - held together by interlacing spin loops.
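Collecting the formulas scattered through the post above into one display (same symbols as in the post, just cleaned-up notation):
[math]E_{kin}=\mathrm{Tr}\Big(\sum_i \partial_i M\,(\partial_i M)^T\Big),\qquad V(M)=\sum_i (l_i-w_i)^2,\qquad M=O\,\mathrm{diag}(l_i)\,O^T,[/math]
where M is the real symmetric matrix field, l_i its eigenvalues, w_i their energetically preferred values, and O the orthogonal (rotational) degrees of freedom identified above with EM-like interactions.
-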
Are particles made of topological singularities?
Duda Jarek replied to Duda Jarek's topic in Modern and Theoretical Physics
Look at the solutions of Schrodinger's equation for the hydrogen atom - there is an e^{i m phi} term (m - angular momentum projection along the z axis) - if we track the phase while making a loop around the axis, it rotates m times - in differential equation theory this is called a topological singularity, and in complex analysis its conservation appears as the argument principle: http://en.wikipedia.org/wiki/Argument_principle Generally, for any particle, while making a rotation around some axis, the spin says how the phase changes - during a full rotation the phase makes 'spin' rotations - so in this sense a particle is at least a topological singularity. In fact this underestimated property can lead to answers to many questions, like where - the mass of particles, - conservation properties, - gravity/GR, - the fact that the number of lepton/quark generations equals the number of spatial dimensions, - electron coupling (orbitals, Cooper pairs), - cutoffs in quantum field theories, - neutrino oscillations, and many others come from.

Let's start from the other side. Some time ago I considered a simple model - take a graph and consider the space of all paths on it. The assumption that all of them are equally probable leads to a new random walk on the graph, which maximizes entropy globally (MERW). It can also be defined by requiring that for given two vertices, all paths of given length between them are equally probable. The standard random walk - for each vertex, all edges equally probable - maximizes uncertainty only locally and usually gives smaller entropy. http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=PRLTAO000102000016160602000001 This model can be generalized so that paths are not equally probable, but follow a Boltzmann distribution for some given potential. Now if we cover R^3 with a lattice and take the continuous limit, we get that the Boltzmann distribution among paths gives nearly QM behavior - more precisely we get something like the Schrodinger equation, but without Wick rotation - the stationary-state probability density is the square of the dominant eigenfunction of the Hamiltonian, as in QM. Derivations are in the second section of http://arxiv.org/abs/0710.3861 In some ways this model is better than QM - for example, in real physics excited electrons aren't stable as in Schrodinger's equation, but drop to the ground state producing a photon, as in the picture without Wick's rotation. Anyway, in this MERW-based model an electron would follow some concrete trajectory around the nucleus, which would average out to the probability distribution as in QM. This simple model shows that there is no problem with the 'squares' which are believed to lead to contradictions for deterministic physics (Bell's inequalities) - they are a result of the 4D nature of our world - the square appears because trajectories from the past and from the future have to meet.

This simple model - a Boltzmann distribution among paths - is near real physics, but misses a few things: - there is no interference, - there is no energy conservation, - it is stochastic, not deterministic, - there is a single particle in a potential. But it can be expanded - into some classical field theory in which particles are special solutions (like the topological singularities suggested e.g. by the strong spin/charge conservation). To add interference they have to make some rotation of an internal degree of freedom. If the theory is based on some Hamiltonian, we get energy conservation, determinism and potentials (used for the Boltzmann distribution in the previous model).

To handle many particles, there are creation/annihilation operators which create a particle path between two points in spacetime and interact somehow (as in Feynman diagrams) - and so create the behavior known from quantum field theories, but this time everything happens not in some abstract and clearly nonphysical Fock space: these operators really do something in the classical field. The basic particles creating our world are spin 1/2 - while making a loop, the phase makes half a rotation - it changes a vector into the opposite one. So if we identify vectors with their opposites - use a field of directions instead - fermions appear naturally, as in the demonstration - in fact they are the simplest and so the most probable topological excitations for such a field, and so in our world. A simple and physical way to create a direction field is a field of symmetric matrices - which after diagonalization can be imagined as ellipsoids. To create topological singularities they should have distinguishable axes (different eigenvalues) - this should be energetically optimal. At critical points (like the middle of a tornado) they have to make some axes indistinguishable at an energy cost - creating the ground energy of the topological singularity - the particle's mass. Now one (of 3+1) axes has the strongest energetic tendency to align in one direction - creating local time arrows, which rotate toward the energy gradient, creating gravity/GR-like behavior. The other three axes create singularities - one of them creates one singularity, the others have enough degrees of freedom to create an additional one - connecting spin and charge in one particle - giving a family of solutions similar to the one known from physics, with the characteristic 3 as the number of generations of leptons/quarks. With time everything rotates, but not exactly around some eigenvector, giving neutrino oscillations. Is it clearer now?

Merged post follows: There is a nice animation of topological defects in 1D here: http://en.wikipedia.org/wiki/Topological_defect Thanks to the [math](\phi^2-1)^2[/math] potential, going from 1 to -1 costs some energy - these nontrivial, localized solutions are called (anti)solitons and this energy is their mass. Such a pair can annihilate and this energy is released as 'waves' (photons/nontopological excitations). My point is that in an analogous way in 3D, starting from what spin is, our physics occurs naturally.

I think I see how mesons and baryons appear as kind of the simplest topological excitations in the picture I've presented - at each point there is an ellipsoid (symmetric matrix) which energetically prefers to have all radii (eigenvalues) different (distinguishable). First of all, the singularity for spin requires making 2 dimensions indistinguishable, the one for charge requires 3 - this should explain why 'charges are heavier than spins'. We will see that the mass gradation neutrino - electron - meson - baryon is also natural. Spins, as the simplest, and so the most stable, should be somehow fundamental. As I wrote in the first post - for topological reasons two spins 'like' to pair and normally would annihilate, but they are usually stabilized by an additional property which has to be conserved - charge. And so an electron (muon, tau) would be a simple charge+spin combination - imagine a sphere such that one axis of the ellipsoids is always aiming at the center (charge singularity). Now the other two axes can make two spin-type singularities on this sphere.
And similarly for other spheres with the same center, until finally in the middle all three axes have to be indistinguishable. The choice of axis chooses the lepton. Now mesons - for now I think a meson is a simple spin loop (up+down spin) ... but while making the loop the phase makes a half rotation (as in a Mobius strip) - it tries to annihilate itself but cannot - and so creates a complicated and not too stable singularity in the middle. Zero-charge pions are extremely unstable (around 10^-17 s), but charge can stabilize them for a bit longer. The hardest ones are baryons - three spins creating some complicated pattern which therefore has to be difficult to decay - the solution could be that two of them make a spin loop and the third goes through its middle, preventing collapse and creating a large and 'heavy' singularity. Spin curves are directed, so there are two possibilities (a neutron isn't an antineutron). We believe we see up and down quarks because the two creating the loop are different from the third one. -
In quantum mechanics spin can be described as follows: while rotating around the spin axis, the phase rotates "spin" times – in mathematics this is the (Conley/Morse-type) index of a topological singularity, and its conservation can also be seen in the argument principle of complex analysis. So particles are at least topological singularities. I'll try to argue that this underestimated property can lead to explanations ranging from why fermions are extremely common particles up to the 'coincidence' that the number of lepton/quark generations is ... the number of spatial dimensions. I've made a simple demonstration which shows the qualitative behavior of the phase during the separation of topological singularities, as in particle decay or spontaneous creation of a particle-antiparticle pair: http://demonstrations.wolfram.com/SeparationOfTopologicalSingularities/

The other reason to imagine particles as a topological singularity, or a combination of a few of them, is the very strong property of spin/charge conservation. Generally, for these conservation properties it's important that some 'phase' is well defined almost everywhere – for example when two atoms are getting closer, the phases of their wavefunctions have to synchronize first. From this perspective, phases can be imagined as a continuous field of nonzero-length vectors – there is some nonzero vector at every point. The problem is at the center of a singularity – the phase cannot be continuous there. A solution is that the length of the vectors decreases to zero at such critical points. To justify this physically we can look at the Higgs mechanism – energy is minimal not for zero vectors, but for vectors of, say, a given length. So finally, the fields required to construct such topological singularities can be fields of vectors with almost the same length everywhere except some neighborhoods of the singularities, where they vanish in a continuous way. These vectors, necessarily away from the energetic minimum, could explain (sum up to?) the mass of the particle.

A topological singularity for charge doesn't have something like a 'spin axis' – it can be 'pointlike' (like a blurred Planck-scale ball). Spins are much more complicated – they are kind of two-dimensional – the singularity sits 'inside' the 2D plane orthogonal to the spin axis. Like the middle of a tornado – it's rather 'curve-like'. The first 'problem' is the construction of spin-1/2 particles – after going around the singularity, the phase makes only half a rotation – the vector becomes the opposite one. So if we forget about the arrows of the vectors – use a field of directions instead – spin-1/2 particles are allowed, as in the demonstration – in fact they are the simplest 'topological excitations' of such fields … and most of our fundamental particles have spin 1/2 … How can directions – 'vectors without arrows' – be physical? For example imagine a stress tensor – a symmetric matrix at each point – we can diagonalize it and imagine it as an ellipsoid at each point – the longest axis (dominant eigenvector) doesn't choose an 'arrow' – direction fields can also be natural in physics … and they naturally produce fermions … The emphasized axis - the eigenvector for the smallest or largest or negative eigenvalue - would have the strongest energetic preference to align in one direction - it would create the local time dimension, and its rotation toward energy would create gravity and GR-related effects.
One of the other three axes could create one type of singularity, and there would still remain enough degrees of freedom to create an additional one - combining the spin and charge singularities in one particle - which could explain why there are 3*3 types of leptons/quarks. Another 'problem' with spins is their behavior while moving the plane along the 'spin axis' direction – like looking at a tornado restricted to higher and higher 2D horizontal planes - the field should change continuously, and so should the critical point. We see that conservation doesn't allow it to just vanish – to do that, it has to meet an opposite spin. This problem occurs also in standard quantum mechanics – for example there are e^(i phi)-like terms in the basic solutions for the hydrogen atom – what happens to them 'outside the atom'? It strongly suggests that, against intuition, spin is not 'pointlike' but rather curve-like – it's a 'curve along its spin axis'. For example a couple of electrons could look like: a curve for spin up with the charge singularity somewhere in the middle, the same for spin down - connected at the end points, creating a kind of loop. Without the charges, which somehow energetically 'like' to connect with spin, the loop would annihilate and its momenta should create two photon-like excitations. Two 'spin curves' could reconnect, exchanging their parts and creating a complicated, dynamical structure of spin curves. Maybe that's why electrons like to pair in atomic orbitals, or as stable Cooper pairs (reconnections should create viscosity…). A Boltzmann distribution among trajectories gives something similar to QM, but without Wick's rotation: http://www.scienceforums.net/forum/showthread.php?t=36034 In some ways this model corresponds better to reality – in standard QM all energy levels of a well, like the one made by a nucleus, are stable, but in real physics they want to get to the ground state (producing a photon). Without Wick's rotation the eigenfunctions are still stable, but the smallest fluctuation makes them drop to the ground state. What this model misses is interference, but that can be added by some internal rotation of particles. Anyway, this simple model shows that there is no problem with connecting deterministic physics with the squares appearing in QM. It suggests that maybe a classical field theory would be sufficient … once we understand what creation/annihilation operators really do – what particles are … the strongest conservation principles – of spin and charge – suggest that they are just made of topological singularities… ? What do you think about it? I was told that this kind of idea has been considered, but I couldn't find any concrete papers. A discussion has started here: http://groups.google.com/group/sci.physics/browse_thread/thread/97f817eec4df9bc6#
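A minimal formula for the winding picture used above (standard notation, my addition): around a singularity of index s the phase field behaves like
[math]\psi(r,\varphi)=f(r)\,e^{i s\varphi},\qquad s=\frac{1}{2\pi}\oint \nabla(\arg\psi)\cdot d\ell,[/math]
with f(0)=0 so the field stays continuous at the center. For a single-valued complex field s must be an integer, while identifying opposite directions (a director field) also allows half-integer s - the spin-1/2 case discussed in this thread.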
-
Thanks for the paper. It's surprising to me that decay is faster at lower temperature ... Generally 2.7K looks much too small to have any essential influence on such processes ... I don't like this explanation, but the remaining way to cross such an energy barrier is some kind of tunneling ... About 'the temperature of the vacuum' (2.7K): - the existence of different interactions (weak, strong, gravitational), which should be carried by some fundamental excitations (modes), - the requirement of a quite large cosmological constant - an energy of the vacuum, strongly suggest that there is more than the observed EM modes - the microwave radiation. The rate at which a black body loses temperature by radiation suggests that this happens practically only through photons - EM modes - so interactions with these 'other modes' should be extremely weak. The idea of this topic was the only way I could think of to observe these 'other modes' directly ... but they are probably too weak ... The standard 293K temperature known from chemistry is stored in kinetic and EM energy, and these interact extremely weakly with the 'other modes' - the rate of thermalization between them should be extremely slow - they could probably thermalize over billions of years, but on the time scales we use, these temperatures - the chemical one (~293K) and that of the 'other modes' (~2.7K) - can be different.
-
I'm not talking about the fine-structure constant - it's a combination of fundamental physical constants. I'm also not talking about absorption like neutron capture - in those cases the energy barrier is crossed thanks to the energy of the captured particle. I'm talking about decay - there is a stable state and after some statistical time it spontaneously crosses the energy barrier which made it stable and drops to a lower stable state ... where does the energy required to cross the barrier come from? For me it's clearly a thermodynamical process ... this energy has to come from some thermal noise ...
-
Particle decay is clearly a statistical process. Generally speaking, particles are stable solutions of some physics (like a field theory) - they are local/global energy minima for given constraints like spin or charge. So from the energetic point of view, particle decay should be getting out of a local energy minimum by crossing some energy barrier and finally reaching a lower energy minimum - just as in thermodynamics (?) The energy required to cross such an energy barrier usually comes from thermal noise - in the case of particle decay some temperature of the vacuum would be required ... Generally the universe is built not only of particles, but can also carry different interactions - EM, weak, strong, gravitational. This possibility itself gives the vacuum a huge number of degrees of freedom - some fundamental excitations, which do not necessarily have nonzero mass, like photons ... and if there is any interaction between them, thermodynamics says that they should thermalize - their energy should equilibrate. We can measure the thermal noise of the EM part of these degrees of freedom - the 2.725K microwave background - but the degrees of freedom corresponding to the rest of the interactions (weak, strong, gravitational) have had billions of years to thermalize, so they should have a similar temperature. The EM part gives about 6*10^-5 of the vacuum energy required to obtain the expected cosmological constant; maybe the rest of the interactions carry the rest of it ... Anyway, we believe that this microwave background is cooling - so 'the temperature of the universe' should be too. Shouldn't it then become more difficult for particles to cross the energy barrier to get to a lower energy minimum? That would increase decay times ... We have experimental evidence that physical constants like e, G are unchanged with time, but is that so with decay times? Maybe radiometrically dated things are a bit younger than expected... A similar situation holds for example for excited electrons ...
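For reference, the thermal-activation picture invoked here is usually written as an Arrhenius/Kramers escape rate (a standard formula, my addition; whether it applies to particle decay is exactly the question of this thread):
[math]\Gamma\approx\Gamma_0\,e^{-E_b/k_B T},[/math]
so for a fixed barrier height E_b the lifetime 1/Gamma would grow exponentially as the effective temperature T of the relevant noise decreases.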
-
The important point is that their spin and charge parts are connected, but can behave separately - which strongly suggests that the fundamental blocks building our physics are the carriers of indivisible properties like charge or spin. Sometimes they create pairs to reduce energy, and finally particles are stable when they are in the state of the lowest possible energy, like the neutrino or the electron. A strong argument that this spin part is just a neutrino is muon decay: muon -> electron + electron antineutrino + muon neutrino. Isn't that just an exchange of the spin part to reach the lowest energy and so the stable state?
-
From http://www.lbl.gov/Science-Articles/Archive/ALS-spinons-holons.html - there is a nice graph there with two distinct peaks ...
-
From the abstract of the paper: "The spinon and holon branches are found to have energy scales of approx 0.43 and 1.3 eV". Spinons and holons undergo "separation into collective modes" ... but to behave as separate modes, don't they have to separate themselves? Imagine a string in a harmonic mode ... now it separates into two modes/strings ... doesn't that mean its atoms also separate? Ok - these amplitudes can be extremely small, so they stay 'in one particle' ... but behave separately. A neutrino is 'a pure (electron) spin' ... and so is the spin part of the electron ... They energetically prefer to stay together (modifying their structure a bit), but 'pure spin' has an unchangeable quantum number (spin) and extremely small energy - it has nothing to decay into - it should be stable (the neutrino). 'Pure charge' (the holon) interacts much more strongly and has larger energy - it should quickly 'catch' a neutrino (spontaneously created in a pair) - it should have a very short half-life. And we have the Majorana hypothesis - there are only two types of electron neutrinos ... adding the charge we get four possibilities, as in Dirac's equations ...
-
Recent experiments (http://www.nature.com/nphys/journal/v2/n6/abs/nphys316.html) confirmed theoretical results that electrons are not indivisible, as was believed - under some conditions it is energetically preferable for them to separate into their charge part (called a holon or chargon) and their spin part (spinon). I think it's a good time to discuss these results and their consequences. Thinking about 'pure spin' (the spinon) made me associate it with a low-energy electron neutrino, especially since I imagine particle properties which can only occur in integer multiplicities, like charge or spin, as topological singularities - such a separation ties the whole picture together. Another argument is for example muon (tau) decay - it looks as if an (electron) neutrino-antineutrino pair has been spontaneously created and the spin part of the muon (tau) was exchanged for the one with smaller energy, and so more stable. The other question is about the statistics of these (quasi?)particles. For me statistics is a result of spin - so spinons should clearly be fermions ... What do you think about it?
-
Data correction methods resistant to pessimistic cases
Duda Jarek replied to Duda Jarek's topic in Computer Science
The simulator of the correction process has just been published on Wolfram's page: http://demonstrations.wolfram.com/CorrectionTrees/ It shows that we finally have a near-Shannon-limit method working in nearly linear time for any noise level. For a given probability of bit damage (p_b), we choose the parameter p_d. The higher this parameter, the more redundancy we add and the easier it is to correct errors. We want to find the proper correction (the red path in the simulator). The main correction mechanism is that while we are expanding the proper correction everything is fine, but in each step of expanding a wrong correction we have probability p_d of realizing it is wrong. With p_d large enough, the number of corrections we have to check no longer grows exponentially. At each step the tree structure is known, and using it we choose the most probable leaf to expand.

I've realized that practical correction methods (not requiring exponential correction time) need a bit more redundancy than the theoretical (Shannon) limit. Redundancy allows us to reduce the number of corrections to consider. In practical correction methods we have to keep elongating corrections, and so we have to assume that the expected number of corrections up to a given moment is finite, which requires more redundancy than Shannon's limit (observe that block codes fulfill this assumption). This limit is calculated in the latest version of the paper (0902.0271). The basic correction algorithm (as in the simulator) works at a slightly worse limit (it needs an encoded file larger by at most 13%), but it can probably be improved. Finally, this new family of random trees has two phase transitions: for small p_d < p_d^0 the tree immediately expands exponentially; for p_d^0 < p_d < p_d^2 the tree generally has small width, but rare high error concentrations make its expected width infinite (like a long tail in a probability distribution); for p_d > p_d^2 it has finite expected width.

Error correction methods used today work practically only for very low noise (p_b < 0.01). The presented approach works well for any noise (p_b < 0.5). For small noise it needs an encoded file of practically the size given by Shannon's limit. The difference starts at large noise: it needs a file size at most twice the limit. A practical method for large noise gives a new way to increase the capacity of transmission lines and storage devices - for example, place two bits where we would normally place one - the cost is a large increase in noise, but now we can handle it. For extremely large noise we can no longer use ANS. Fig. 3 of the paper shows how to handle it. For example, if we have to increase the size of the file 100 times, we can encode each bit in 100 bits - encode 1 as 11...111 XOR 'hash value of the already encoded message', and the same for 0. Now, while creating the tree, each split will have a different number of corrected bits - a different weight.
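A toy illustration of the phase-transition claim above (my own simplified model, not the algorithm from the paper): treat the wrong corrections branching off one position as a branching process in which each wrong node is recognized as wrong with probability p_d and otherwise spawns two further wrong candidates. The expected subtree size is then finite only when 2*(1-p_d) < 1, and the simulation shows the average explored width blowing up as p_d approaches that threshold from above.
[code]
import random

def wrong_subtree_size(p_d, max_nodes=50000):
    """Toy model: number of wrong-correction nodes expanded from one wrong guess.
    Each wrong node is recognized as wrong with probability p_d; otherwise it
    branches into two further wrong candidates. Result is capped at max_nodes."""
    pending = 1        # wrong candidates waiting to be expanded
    visited = 0
    while pending and visited < max_nodes:
        pending -= 1
        visited += 1
        if random.random() > p_d:   # not caught yet -> two more wrong children
            pending += 2
    return visited

def average_size(p_d, trials=300):
    return sum(wrong_subtree_size(p_d) for _ in range(trials)) / trials

if __name__ == "__main__":
    # Subcritical (finite expected size) iff 2 * (1 - p_d) < 1, i.e. p_d > 0.5 in this toy.
    for p_d in (0.8, 0.6, 0.55, 0.5, 0.4):
        print(f"p_d = {p_d:4.2f}   avg explored wrong nodes ~ {average_size(p_d):9.1f}")
[/code]
In this toy model the threshold sits at p_d = 0.5; the thresholds p_d^0 and p_d^2 of the actual correction trees depend on the noise level and the weighting described in the paper, but the post describes the same kind of qualitative transition between finite and infinite expected width.
-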
Is LIGO just observing viscosity of the vacuum?
Duda Jarek posted a topic in Astronomy and Cosmology
In experiments like LIGO we want to observe extremely weak gravitational waves from sources millions of light years away - we are assuming that their amplitude decreases like 1/R. But over such a distance, even the slightest interactions with the vacuum and other objects on the way could diffuse/absorb them - and then the amplitude would decrease exponentially, making such observations completely hopeless. In November 2005 LIGO reached its design sensitivity: "At its conclusion, S5 had achieved an effective range of more than 15 Mpc for the four-kilometer interferometers, and seven Mpc for the two-kilometer interferometer." http://www.ligo.caltech.edu/~ll_news/s5_news/s5article.htm But over those 3.5 years its only success has been a non-detection: "During the intense blast of gamma rays, known as GRB070201, the 4-km and 2-km gravitational-wave interferometers at the Hanford facility were in science mode and collecting data. They did not, however, measure any gravitational waves in the aftermath of the burst. That non-detection was itself significant." http://mr.caltech.edu/media/Press_Releases/PR13084.html

What is the vacuum? It definitely isn't just 'empty space' - it is, for example, the medium for many different waves and particles. Nowadays many people believe that it can, for example, spontaneously create particle-antiparticle pairs... Modern cosmological models say that a cosmological constant is required - an additional energy density of ... this vacuum ... Anyway, even being only a medium for many kinds of interactions - there is at least some field there - it has many internal degrees of freedom (like the microwave radiation). We usually believe that they can interact with each other, so there should be thermalization - all of them should contain a similar amount of energy. In physics there are usually no perfect media - there are always at least some very, very small interactions... We observe more or less uniform 2.725K microwave radiation - believed to have been created at about 3000K and to have had its wavelength stretched by redshift in the expanding universe. But assume that the field of which the vacuum is built is not perfectly transparent - for example that such photons interact on average once per million years - that would already be enough for the thermalization process. So if the field of the vacuum is not perfectly transparent (there is interaction between the different interactions), its internal degrees of freedom should have temperature 2.725K. We observe only the electromagnetic degrees of freedom (according to Wikipedia: about 6*10^-5 of the total density of the universe), but we know well that there are more types of interactions (weak, strong, gravitational ...). And their energies probably sum up to the cosmological constant...

Returning to the question from the topic - general relativity says that the vacuum is a kind of fluid for gravitational waves. It is already a field - it has some internal structure ... and there is QED, QCD, etc. - I just don't believe we can assume that it's a perfect medium. For fluids this kind of friction - converting macroscopic energy into internal degrees of freedom - is called viscosity (try making waves on honey). If there is some extremely small viscosity of the vacuum (which has nonzero energy density/temperature), then multiplied over millions of light years it could essentially reduce the strength of the gravitational waves reaching Earth... They are already believed to be extremely weak... Do you think this is why LIGO's only success is a non-detection? If not - why is that?
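To make the attenuation argument explicit (the damping length lambda here is a hypothetical parameter of my illustration, not a measured quantity): a small per-distance loss on top of the usual geometric falloff would give
[math]h(R)\approx\frac{h_0}{R}\,e^{-R/\lambda},[/math]
so a damping length lambda that is enormous by laboratory standards but not much larger than the ~15 Mpc range quoted above would still suppress the strain exponentially - which is the scenario this post is asking about.
-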
Expansions of Schrodinger's cat thought experiment
Duda Jarek replied to Duda Jarek's topic in Quantum Theory
Yes, that is what I claim. But instant communication seems to be much less against our intuition than retrocausality ... which is clearly seen in Wheeler's delayed-choice experiment ... CPT conservation suggests that causality should be able to go in both causality cones ... If we accept that, instant communication is a piece of cake - send information back and then forward, or the other way round... This doesn't mean that physics is 'nonlocal' as Bell's inequality enthusiasts claim - if we think about physics as in any field theory (QED, the Standard Model, general relativity), four-dimensionally - it's absolutely local.

Merged post follows: If someone is interested, there are two more interpretations of QM in which one tries to understand QM four-dimensionally, as in CPT-conserving field theories - the 'transactional interpretation' and the 'theory of elementary waves'. I believe a new large discussion about it starts here: http://groups.google.com/group/sci.physics.electromag/browse_thread/thread/749d5a06be67485f/eac28a1f73a81aab?lnk=raot#eac28a1f73a81aab