Everything posted by Duda Jarek
-
Indeed, the main question here is whether baryon number is ultimately conserved. Violation of this number is required by: - hypothetical baryogenesis, producing more matter than anti-matter, - many particle models, like supersymmetric ones, - massless Hawking radiation - otherwise black holes would have to evaporate with baryons to conserve baryon number. On the other hand, there is a fundamental reason to conserve e.g. electric charge: the Gauss law says that the electric field of the whole Universe guards charge conservation. In other words, adding a single charge would mean changing the electric field of the whole Universe, proportionally to 1/r^2. We don't seem to have anything like that for baryon number (?) - a fundamental reason to conserve it. Indeed the search for such violation (proton decay) has failed so far, but it was performed in room-temperature water tanks. One question is whether the required conditions can be reached there: whether the energy needed to cross the barrier holding a baryon together can be spontaneously generated in room-temperature water - in other words, whether the Boltzmann distribution of the size of random fluctuations still behaves well for such huge energies. If baryon number is not ultimately conserved, its violation would rather require extreme conditions, like during the Big Bang (baryogenesis) ... or in the center of a neutron star, which would exceed all finite limits before reaching the infinite density required to start forming a black hole horizon and the central singularity. Such a "baryon burning phase" would release enormous energy (nearly complete matter -> energy conversion) - and we do observe sources of this kind, like gamma-ray bursts, for which "The means by which gamma-ray bursts convert energy into radiation remains poorly understood, and as of 2010 there was still no generally accepted model for how this process occurs (...) 
Particularly challenging is the need to explain the very high efficiencies that are inferred from some explosions: some gamma-ray bursts may convert as much as half (or more) of the explosion energy into gamma-rays." ( http://en.wikipedia.org/wiki/Gamma-ray_burst ) So we have something like a supernova explosion, but instead of exploding due to neutrinos (from e+p -> n), this time with gammas - can you think of a mechanism other than baryon decay for releasing such huge energy? NASA news from two days ago: http://www.nasa.gov/press/2014/october/nasa-s-nustar-telescope-discovers-shockingly-bright-dead-star/ about a 1-2 solar mass star with more than 10 million times the power of the Sun ... which is no longer considered a black hole! Where does this enormous energy come from? While fusion or p+e -> n converts less than 1% of matter into energy, baryon decay converts more than 99% - are there some intermediate possibilities?
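To put these conversion efficiencies in numbers, here is a minimal sketch via E = mc^2 (the 0.7% figure for hydrogen fusion and 99% for baryon decay are illustrative round values, my assumptions for the comparison):

```python
C = 2.998e8  # speed of light in m/s

def energy_released(mass_kg, efficiency):
    # E = efficiency * m * c^2: energy from converting a fraction of rest mass
    return efficiency * mass_kg * C**2

m = 1.0  # one kilogram of matter
fusion = energy_released(m, 0.007)  # H -> He fusion: ~0.7% of rest mass
decay = energy_released(m, 0.99)    # near-complete baryon decay to radiation
print(fusion, decay, decay / fusion)  # baryon decay yields ~140x more energy per kg
```

The ratio of the two efficiencies is what makes "baryon burning" so much more potent a source than any known nuclear process.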
-
Have you forgotten to add "in contrast to forming infinite density singularity in large matter concentrations"?
-
Not me - Hawking radiation does: gather lots of baryons into a black hole, wait until it evaporates (massless Hawking radiation), and there are that many baryons fewer in the universe. Also, if we believe in baryogenesis, which created more matter than anti-matter ... it also violated baryon number conservation.
-
After Stephen Hawking's "There are no black holes": http://www.nature.com/news/stephen-hawking-there-are-no-black-holes-1.14583 now from http://phys.org/news/2014-09-black-holes.html : "But now Mersini-Houghton describes an entirely new scenario. She and Hawking both agree that as a star collapses under its own gravity, it produces Hawking radiation. However, in her new work, Mersini-Houghton shows that by giving off this radiation, the star also sheds mass. So much so that as it shrinks it no longer has the density to become a black hole." Which is nearly exactly what I was saying: instead of a singularity growing in the center of a neutron star, it should rather immediately undergo some matter -> energy conversion (like evaporation through Hawking radiation, or in other words some proton decay) - releasing a huge amount of energy (finally released as gamma-ray bursts) and preventing the collapse.
-
The determinant is just a sum over all permutations of products - I don't see a problem here? The Cramer formula allows writing the inverse matrix as a rational expression of determinants - which seems sufficient ... Anyway, finding the determinant still seems to require an exponential number of terms ... But maybe there is a better way to just find the n-th power of a (Grassmann) matrix ... ?
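For concreteness, the permutation-sum definition can be sketched as follows (plain Python over a small numeric matrix of my choosing). Each of the n! permutations contributes one signed product, which is exactly why the naive expansion blows up while Gaussian elimination stays polynomial:

```python
import itertools

def det_by_permutations(A):
    # det(A) = sum over permutations p of sign(p) * prod_i A[i][p[i]]
    n = len(A)
    total = 0.0
    for p in itertools.permutations(range(n)):
        # sign of the permutation via inversion count
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        prod = 1.0
        for i in range(n):
            prod *= A[i][p[i]]
        total += (-1) ** inv * prod
    return total

A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
print(det_by_permutations(A))  # 8.0, assembled from 3! = 6 signed products
```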
-
Indeed you would need [math]g_i^{-1}[/math] for such a direct inverse, e.g. [math](AG)^{-1} = diag(g_i^{-1}) A^{-1}[/math]. However, I think having a quick (polynomial) way to find the determinant of such matrices should be sufficient (no inverse needed). I was fighting with Gauss elimination, but terms with all combinations can appear - their number grows exponentially ...
-
While determining the existence of an Euler cycle (going once through each edge) in a given graph is trivial, for a Hamiltonian cycle (going once through each vertex) it is an NP-complete problem (e.g. worth a million dollars). Denoting the adjacency matrix of this graph by [math]A[/math] and its number of vertices by [math]n[/math], the diagonal elements of [math]A^n[/math] count Hamiltonian cycles among others - the problem is that they also count all other closed walks of length n, those going more than once through some vertex. The question is whether we could somehow "subtract" those going multiple times through some vertex ... Grassmann variables (anticommuting), used in physics to work with fermions, seem perfect for this purpose. Assume we have [math]g_1,..., g_n[/math] Grassmann variables: [math]g_i g_j = - g_j g_i[/math], which also implies [math]g_i g_i = 0[/math]. So a product of [math]n[/math] such variables is nonzero iff it contains all indexes (vertices). Denote [math]G = diag(g_i)[/math], the diagonal nxn matrix made of these variables. It is now easy to see that: graph [math]A[/math] contains a Hamiltonian cycle iff [math]Tr((AG)^n) \neq 0 [/math]. Grassmann variables can be realized by matrices - so we could read this formula in terms of block matrices ... unfortunately the known realizations require e.g. [math]2^n[/math]-size matrices, which is not helpful - we get only an implication: if P != NP, then there is no polynomial-size matrix realization of Grassmann variables. So probably these realizations just require exponentially large matrices, which seems reasonable. We could easily find [math](AG)^{-1}[/math], so maybe there is a way to quickly find [math](1-AG)^{-1}=\sum_{i=0}^n (AG)^i[/math], which should be sufficient? Any thoughts?
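The overcounting that the Grassmann trick is meant to cancel is easy to exhibit numerically; a brute-force sketch on the complete graph K4 (my choice of example): [math]Tr(A^n)[/math] counts all closed n-walks, of which only a fraction are Hamiltonian cycles:

```python
import itertools
import numpy as np

def closed_walks(A, n):
    # Tr(A^n) = number of closed walks of length n (vertex repetitions allowed)
    return int(np.trace(np.linalg.matrix_power(A, n)))

def hamiltonian_cycles(A):
    # brute force over vertex orders: directed Hamiltonian cycles starting at vertex 0
    n = len(A)
    count = 0
    for perm in itertools.permutations(range(1, n)):
        cyc = (0,) + perm
        if all(A[cyc[i]][cyc[(i + 1) % n]] for i in range(n)):
            count += 1
    return count

A = np.ones((4, 4), dtype=int) - np.eye(4, dtype=int)  # complete graph K4
print(closed_walks(A, 4))     # 84 closed 4-walks in total
print(hamiltonian_cycles(A))  # only 6 directed Hamiltonian cycles from vertex 0
```

Of the 84 closed 4-walks, only 24 (each of the 6 cycles counted once per starting vertex) visit every vertex exactly once; the rest are the walks one would like to "subtract".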
-
Roamer, I think in most parliamentary open-list elections you choose among candidates from your district (?) - so they represent not only their parties, but also their regions and the people who voted for them. Up to single-member districts, where people choose only someone directly representing their region. E.g. in Germany (a mixed system) voters get a card with two lists: one of candidates and one of parties.
-
Indeed John, as I have mentioned, in some situations it is impossible to find a voting system satisfying seemingly basic requirements, as in the mentioned Arrow's or Holmstrom's theorems - mainly because of Condorcet cycles: when preferences are of the type A<B, B<C, C<A. This is partially solved in Borda-type systems: voters give point values to options, and finally the option with the highest number of points is chosen. Quantitatively defining an "optimality" function for apportionment is somewhat similar - we choose the apportionment having the best "optimality". As "it has been said that democracy is the worst form of government except all the others that have been tried" - for example, probably the biggest problem with dictatorship is finding the proper person and especially his successor - we still have to find the best voting methods for various situations, not only in politics. So how should we choose them?
-
A nonexistence of the optimal voting system can be proven in many situations; I wanted to propose a general discussion about choosing the best voting systems for various purposes and countries. Especially regarding the most interesting case - parliamentary elections: a territory is divided into districts in which people vote for local candidates (usually representing one of the parties), and we want to find a seat apportionment fulfilling two priorities: 1) the total number of seats of each party is proportional to its total number of votes, 2) locally, those having the majority of votes are chosen. Unfortunately these two priorities exclude each other - the systems in use are usually based on the first one (proportional representation, e.g. Holland, Portugal, Switzerland, Spain, Poland, Brazil) or the second (e.g. single-member districts - USA, Canada). As we would like to fulfill both priorities, there are also mixed systems (e.g. Germany), like: half of the seats are chosen by local majorities, half by proportional representation - which has some technical difficulties. There is also the more modern biproportional apportionment being developed to fulfill both priorities at once, but it is based on approximations. I think that in the age of computers we don't have to be satisfied with an approximation, as we can find the optimal apportionment - if only we quantitatively define what we mean by the best apportionment: define an "optimality" function, such that we search for the apportionment with its highest value. Then a computer can start with some approximation and search nearby apportionments to find the best one. As it is a difficult computational problem, after the voting statistics are announced, there could be e.g. a one-day period in which everybody could search for a better apportionment (with a higher "optimality" value), and finally the best one found would be chosen. So the question is how to define this "optimality" function. It should be some average (e.g. weighted arithmetic) of terms corresponding to penalties for both priorities: 1) minus the distance between the proportion of seats and the proportion of votes, e.g. the simple Gallagher index. We could also take a more complex distance to emphasize that accuracy is more essential for small parties (e.g. Kullback-Leibler). 2) e.g. the sum over districts of minus "the number of voters choosing a candidate with a larger number of votes than the winner of this district" - for single-member districts (it can be easily generalized). So it is a kind of count of people having a reason to complain, as their candidate got more votes than the winner - it is zero if the candidate with the majority has won. Many questions remain, like which weights, distances, functions in 2), and averages we should choose. E.g. the arithmetic average is more tolerant of compensation than the geometric average (e.g. is 3,0 better than 1,1?). Then, what kind of question should be asked - to motivate voters to come and to properly represent their choices: maybe a choice of a single candidate, maybe a few, or maybe some preferential system? What would be the best voting systems and why - especially for your countries? What do we mean by the best apportionment - how should we define the "optimality" function?
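A minimal sketch of the two penalty terms (the Gallagher index is the standard least-squares disproportionality formula; `complaint_penalty` is just my reading of priority 2, and all vote numbers are made up for illustration):

```python
import math

def gallagher_index(vote_shares, seat_shares):
    # sqrt(1/2 * sum of squared differences); shares as fractions, result in percentage points
    return math.sqrt(0.5 * sum(((v - s) * 100.0) ** 2
                               for v, s in zip(vote_shares, seat_shares)))

def complaint_penalty(district_votes, winner):
    # voters whose candidate got more votes than the declared winner of the district;
    # zero whenever the plurality candidate actually won
    return sum(v for v in district_votes if v > district_votes[winner])

print(gallagher_index([0.42, 0.33, 0.25], [0.50, 0.30, 0.20]))  # ~7.0 percentage points
print(complaint_penalty([5000, 7000, 3000], winner=0))          # 7000 voters can complain
```

The overall "optimality" of an apportionment would then be some weighted average of minus these penalties, summed over parties and districts respectively.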
-
Particles as wave packets - why they don't dissipate?
Duda Jarek replied to Duda Jarek's topic in Physics
Interesting - so why do e.g. cosmologists bother about what was happening before us, or astrophysicists about what is happening inside a star, where we will never be able to measure directly ... or about the solution of the Schrodinger equation for hydrogen, for which we cannot measure the whole wavefunction and can observe only its far consequence: the energy spectrum. Indeed, modern physics has lost objectivity - everybody has their own subjective physics ... which the real physics doesn't care about - the world just objectively works as it works ... -
So whichever of the two paths this photon chooses, it will change the momentum of the corresponding mirror - be "observed", as you say ... so how can we get interference? -
swansont, I am not talking about detecting the event by a subjective observer, but about what is objectively happening there ... physics still works without observers (e.g. millions of years ago). Delta1212, I am asking about something more concrete than probability: e.g. the energy or charge distribution. Can the energy of a single photon, or the charge of an elementary charge, dissipate? That is what would happen if you saw them as pure wave packets (without a mechanism preventing dissipation - making them solitons). -
Reflecting from a mirror means changing the momentum of the photon and so of the mirror - if you are saying that the photon literally goes both ways, does that mean it changed the momentum of both mirrors? By how much? As if a complete photon went both ways, or (as there was initially only a single photon) were there two "halves of a photon" (or of the charge, in electron interference)? And generally, if you want a particle/photon to follow a more complex trajectory, every change of direction needs a momentum transfer with something (the vacuum???) -
Even for the Mach-Zehnder interferometer we draw two classical trajectories, saying only that we don't know which one is chosen. Here the situation is even simpler - no interference. I think you are referring to Feynman path integrals? But their basic approximation is taking the classical trajectory and small variations around it (the van Vleck formula) - in QM, energy travels through slightly fuzzed classical trajectories. -
So imagine a single excited atom produces a single optical photon, which goes through a prism and is finally absorbed by another single atom - suggesting that the energy has traveled localized, along a concrete trajectory between them. If it were a plain wave packet, this energy should dissipate - especially after the prism. Don't we need some additional mechanism to hold this wave packet together - to make it maintain its shape (become a soliton)? -
Particles in quantum mechanics are often seen as wave packets - linear superpositions of plane waves summing to a localized excitation. But wave packets dissipate - for example, passing such a single photon through a prism, its different plane-wave components should choose different angles, so the photon would dissipate: its energy would spread over a growing area ... while we know that in reality its energy remains localized: it will finally be absorbed as a whole by e.g. a single atom. Analogously for other particles, like the electron - any momentum dependence while scattering would make such a wave packet dissipate (e.g. the indivisible elementary charge). How is this problem of dissipating particles solved? Aren't some additional (nonlinear?) mechanisms needed to hold particles together - to make these wave packets maintain their shapes, becoming so-called solitons?
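For the free (non-interacting) case the dissipation can be quantified: a Gaussian packet of initial width [math]\sigma_0[/math] spreads as [math]\sigma(t)=\sigma_0\sqrt{1+(\hbar t/2m\sigma_0^2)^2}[/math] (the standard free-Schrodinger dispersion result). A sketch for an electron initially localized to 1 angstrom (my assumed initial width):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg

def gaussian_width(sigma0, t, m=M_E):
    # standard free-particle dispersion of a Gaussian wave packet
    rate = HBAR * t / (2.0 * m * sigma0 ** 2)
    return sigma0 * math.sqrt(1.0 + rate ** 2)

sigma0 = 1e-10  # initial width: about one atomic radius
print(gaussian_width(sigma0, 1e-9))  # after 1 ns the packet is ~0.6 mm wide
```

So a freely evolving electron packet spreads by more than six orders of magnitude within a nanosecond - which is the dissipation problem the post asks about.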
-
The paradox of Hawking radiation - is matter infinitely compressible?
Duda Jarek replied to Duda Jarek's topic in Physics
I am not sure what you mean by "a point taking up all the density of the universe"? In an infinitesimal volume in the center of a neutron star there would be a relatively small mass, but infinitely compressed - the question GRT doesn't bother with is whether matter can indeed be infinitely compressed. Indeed the Big Bang is another suspicious assumption, especially as it would definitely exceed the condition of being inside an event horizon, which means the only direction anything could travel would be toward the center ... It is one of the reasons I prefer the Big Bounce scenario, in which we don't need a singularity ... but that is for a different discussion: http://www.scienceforums.net/topic/62644-what-about-2nd-law-of-thermodynamics-in-cyclic-universe-model/ -
By destruction of baryons I mean e.g. proton decay - that they turn mainly into gammas (nearly complete matter -> energy conversion). Such a huge explosion in the center should temporarily prevent collapse, and finally the high-energy gammas should leave the star in bursts. If proton decay is possible, at some extreme temperature below infinity it should become statistically essential - the neutron star should start "burning its baryons" in the center before starting to form an event horizon ... -
The event horizon has to evolve in a continuous way - it cannot just emerge at a nonzero radius. See for example: http://mathpages.com/rr/s7-02/7-02.htm -
Before transforming into a black hole it was a neutron star - I am asking about the starting moment of this transformation: when the event horizon has just appeared in the center of the neutron star. It then evolved to finally get beyond the star's surface - from that moment we can call it a black hole. As the radius of the event horizon is proportional to the mass inside, and mass is proportional to density times the third power of radius, the density of matter at the moment the event horizon starts in the center had to reach infinity first. But if baryons are destructible, they should not survive this infinite compression - they should be destroyed earlier, creating pressure inside and temporarily preventing the collapse ... -
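The "density had to reach infinity first" argument can be illustrated with the Schwarzschild radius [math]r_s = 2GM/c^2[/math]: the mean density needed to fit mass M inside its own horizon scales as 1/M^2, so it diverges as the would-be horizon shrinks toward the center (a rough order-of-magnitude sketch, mean density over the horizon volume only):

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg

def schwarzschild_radius(m):
    return 2.0 * G * m / C ** 2

def mean_horizon_density(m):
    # average density required to pack mass m inside its own Schwarzschild radius
    r = schwarzschild_radius(m)
    return m / ((4.0 / 3.0) * math.pi * r ** 3)

for m in [M_SUN, 1e-3 * M_SUN, 1e-6 * M_SUN]:
    print(schwarzschild_radius(m), mean_horizon_density(m))
# the smaller the enclosed mass, the higher the required density: rho ~ 1/M^2
```

For a solar mass the required mean density is already above nuclear density; for the tiny mass near the center it grows without bound, which is the compression the post argues baryons could not survive.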
Sure, Hawking radiation does not directly violate baryon number conservation, but it implies that destruction of baryons is possible. If so, it should start happening before reaching infinite density in the center of a neutron star, which is required to start forming the event horizon ... -
Swansont, a huge number of baryons form a star, which collapses ... and then "evaporates" into massless radiation. Lots of baryons in the beginning ... poof ... none at the end - how is that not baryon destruction? Maybe they just moved to an alternative dimension or something? MigL, so is baryon number ultimately conserved? Could more baryons than anti-baryons have been created in baryogenesis? Can baryons "evaporate" through Hawking radiation? -
I am not asking about some specific theory, but about reality. Black hole evaporation requires that baryons are destructible, while formation of the event horizon requires reaching infinite density in the center of a neutron star - requires that matter can be infinitely compressed without destruction of its baryons. A contradiction. -
Ok, it is not exactly a paradox, but a self-contradiction: if baryons are destructible, they should be destroyed before reaching the infinite density required to start forming the event horizon.