Everything posted by Duda Jarek

  1. To argue that MERW corresponds to physics better, observe that it is scale-free. GRW picks out a time scale - the one corresponding to a single jump. Note that all equations for GRW work not only for 0/1 adjacency matrices, but for any symmetric matrix with nonnegative terms: k_i = sum_j M_ij (including diagonal terms). We could apply it, for example, to M^2 to construct a GRW for a time scale twice as large, but there would be no direct correspondence between these two GRWs. The MERW for M^2, in contrast, is just the square of the MERW for M - as in physics, no time scale is singled out: P^t_ij = ((M^t)_ij / lambda^t) psi(j)/psi(i)
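A minimal sketch of this scale-freeness in Python/numpy (the small example graph is my own assumption, not from the post): build the MERW transition matrix from the dominant eigenpair of M and check that the MERW constructed directly for M^2 equals the square of the MERW for M.

```python
import numpy as np

def merw_transition(M, t=1):
    # MERW over t steps for a symmetric nonnegative M:
    # P^t_ij = (M^t)_ij / lambda^t * psi_j / psi_i,
    # with lambda, psi the dominant eigenvalue/eigenvector of M
    vals, vecs = np.linalg.eigh(M)
    lam, psi = vals[-1], np.abs(vecs[:, -1])
    Mt = np.linalg.matrix_power(M, t)
    return Mt / lam**t * np.outer(1.0 / psi, psi)

# adjacency matrix of a small irregular graph (a 4-cycle plus a self-loop at vertex 3)
M = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 1]], dtype=float)

P1 = merw_transition(M)
P2 = merw_transition(M, 2)
assert np.allclose(P1.sum(axis=1), 1.0)   # rows are probability distributions
assert np.allclose(P2, P1 @ P1)           # two steps = square of one step
# scale-freeness: the MERW built directly for M^2 is the same walk
assert np.allclose(merw_transition(np.linalg.matrix_power(M, 2)), P1 @ P1)
```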
  2. While thinking about a random walk on a graph, the standard approach is to make every possible edge equally probable - a kind of local entropy maximization. There is a new approach (MERW) which maximizes global entropy (of paths): for each pair of vertices, every path of a given length between them is equally probable. For a regular graph the two coincide, but in general they differ - MERW produces localization effects not seen in the standard random walk: http://arxiv.org/abs/0810.4113

This approach can be generalized to a random walk with a potential - something like discretized Euclidean path integrals. Taking the infinitesimal limit, we get p(x) = psi^2(x), where psi is the normalized eigenfunction corresponding to the ground state (E_0) of the corresponding Hamiltonian H = -1/2 laplacian + V. This equation is known - it follows immediately from the Feynman-Kac formula. But we also get an analytic formula for the propagator: K(x,y,t) = (<x|e^{-2tH}|y> / e^{-2tE_0}) psi(y)/psi(x). Usually we vary paths around the classical one to get some approximation - I haven't met non-approximated equations of this type (?). The derivation is in the second section of: http://arxiv.org/abs/0710.3861 Could we boldly say that, thanks to analytic continuation, we can use imaginary time and so get a solution to standard path integrals? Have you heard about this last equation?

Is physics local - do particles decide locally - or global - do they see the space of all trajectories and choose among them with some probability? OK, that was meant as a rhetorical question. A physicist should (?) answer that the key is interference: microscopically it is local, then the particle interferes with itself, with the environment... and, for example, a photon appears to go around a negative-refractive-index material. I wanted to emphasize that this question has to be deeply understood, especially when trying to discretize physics - for example: which random walk corresponds to physics better? It looks as if, to behave as in MERW, the particle would have to 'see' all possible trajectories... but maybe that could be the result of a macroscopic time step? Remember that an edge of such a graph corresponds to infinitely many paths... To translate this question into lattice field theories, we should also think about what the discrete Laplacian really should look like.
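To illustrate p(x) = psi^2(x) on a lattice, here is a small sketch. The specific weights M_ij = exp(-(V_i+V_j)/2) for nearest neighbours plus exp(-V_i) on the diagonal are an assumed discretization of the 'random walk with potential', and the harmonic V is just an example; the check itself - that psi^2 is the stationary density of the MERW transition matrix - is exact.

```python
import numpy as np

N = 21
x = np.linspace(-2.0, 2.0, N)
V = 0.5 * x**2                                  # example potential
M = np.diag(np.exp(-V))                         # assumed lattice weights
for i in range(N - 1):
    M[i, i + 1] = M[i + 1, i] = np.exp(-(V[i] + V[i + 1]) / 2)

vals, vecs = np.linalg.eigh(M)
lam, psi = vals[-1], np.abs(vecs[:, -1])        # dominant eigenpair
P = M / lam * np.outer(1.0 / psi, psi)          # MERW transition probabilities

rho = psi**2 / np.sum(psi**2)                   # candidate density p(x) = psi^2(x)
assert np.allclose(rho @ P, rho)                # psi^2 is stationary for MERW
```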
  3. I've just realized that Hamming codes and bit tripling are special (degenerate) cases of ANS-based data correction. In the previous post I argued that it would be beneficial if any two allowed states had Hamming distance at least 2. If we make this distance at least 3, we can unambiguously and instantly correct a single error, as in Hamming codes.

To get bit tripling from ANS, use states from 1000 to 1111: symbol '0' sits in state 1000, symbol '1' in state 1111 (Hamming distance 3), and the remaining six states hold the forbidden symbol. Each allowed symbol has only one appearance, so after decoding it, before the bit transfer, the state always drops to '1' and the three youngest bits are transferred from the input.

To get Hamming 4+3, use states from 10000000 to 11111111. We have 16 allowed symbols, '0000' to '1111', each with exactly one appearance - at the state 1*******, where the seven stars are the bits of its Hamming codeword - so any two of them have Hamming distance at least 3. After decoding, the state again drops to '1', and this '1' becomes the oldest bit after the bit transfer.

Because each allowed symbol has only one appearance, the state drops to '1' after every decode - it's a kind of degenerate case: all blocks are independent and we don't transfer any redundancy. Such a code can handle a large error density, like 1/7 for Hamming 4+3... but only as long as each block contains at most one error. In practice errors don't arrive with such regularity, and even at much smaller error densities Hamming loses a lot of data (about 16 bits per kilobyte at 0.01 error probability).

Let's think about the theoretical limit on the number of redundancy bits we have to add per bit of information, for an assumed statistical error distribution, to be able to fully correct the file. To find this threshold, consider a simpler-looking question: how much information is stored in such an uncertain bit? Take the simplest error model - each bit is switched independently with probability e (near zero) - so if we see '1', with probability 1-e it is really '1' and with probability e it is '0'. Knowing which of these two cases holds is worth h(e) = -e lg(e) - (1-e) lg(1-e) bits, and with that knowledge we would have a full bit, so such an uncertain bit is worth 1-h(e) bits. To transfer n real bits we therefore need at least n/(1-h(e)) of these uncertain bits - the theoretical limit for being able to read the message is (asymptotically) h(e)/(1-h(e)) additional bits of redundancy per bit of information. A perfect data correction coder for e=1/100 error probability would need only about 0.088 additional bits/bit to restore the message; Hamming 4+3 uses 0.75 additional bits/bit and still loses about 16 bits/kilobyte with the same error distribution.

Hamming assumes that every 7-bit block can arrive in 8 ways - correct, or with one of its 7 bits changed - and it uses the same amount of information to encode each of them, adding lg(8)=3 bits of redundancy per block; in that sense it is done optimally... but only if all 8 cases were equally probable under the error distribution. In practice the most probable case is no error, then one error... and, with much smaller probabilities, more errors - depending on how the error distribution of our medium looks. To move toward the perfect error correction coder, we have to break with the uniform distribution of cases used in Hamming and try to match the real error distribution probabilities.
If the intermediate state in ANS-based data correction can take many values, we transfer some redundancy between blocks - the 'blocks' become connected, so if more errors occur in one of them, we can use this connection to see that something is wrong and use unused redundancy from the succeeding blocks to correct it, relying on the assumption that, according to the error distribution, the succeeding blocks are most likely correct. We have huge freedom in choosing the ANS parameters to get closer to the assumed probabilistic model of the error distribution... closer to the perfect data correction coder.
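The threshold quoted above for e = 1/100 can be checked in a few lines (a sketch, using nothing beyond the formulas above):

```python
from math import log2

def h(e):                           # binary entropy of the bit-error probability
    return -e * log2(e) - (1 - e) * log2(1 - e)

e = 0.01
limit = h(e) / (1 - h(e))           # minimal redundancy: bits added per information bit
print(round(limit, 3))              # ~0.088, versus 3/4 = 0.75 for Hamming 4+3
```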
  4. I've just realized that we can use the huge freedom of choice in the ANS coding functions to improve latency - we can arrange things so that if the forbidden symbol occurs, we are sure that if there was only a single error, it was among the bits used to decode this symbol. We might still have to go back to previous symbols, but only if there were at least 2 errors among these bits - an order of magnitude less probable than before. Another advantage: if we try to verify a wrong correction by decoding further, a single error in a block will automatically tell us that the correction was wrong. There could be 2 errors, but they are much less probable, so we can check for them much later.

The trick is that the forbidden symbol usually dominates the coding tables, so we can arrange that whenever a given sequence of transferred bits yields an allowed symbol, every sequence differing in one bit (Hamming distance 1) yields the forbidden symbol. For the initialization we choose the numbers of appearances of the allowed symbols and have to place them somehow, for example: take an unplaced symbol, place it at a random unused position (using a list of unused positions), and place the forbidden symbol at every state differing in one of 'some' number of the last bits. This 'some' is a bit tricky - it has to work under the assumption that only allowed symbols were decoded previously, but it could have been any of them; if we are not compressing, they are all equally probable and this 'some' is -lg(p_i) plus or minus 1: plus for high states, minus for low ones. Some states should remain unused after this procedure; we can fill them with the forbidden symbol or continue the procedure, inserting more allowed symbols. A sketch of such an initialization is below.

This random initialization still leaves huge freedom of choice - we can use it to additionally encrypt the data, seeding the random generator with a key. If we want data correction only, we can exploit the fact that in this procedure many forbidden symbols get marked several times: the more multiple markings, the smaller the output file... with somewhat lower but comparable safety. So we could consciously choose some good schemes, maybe even ones using Hamming distance 2 (or greater), so that going back to a previous symbol would require 3 errors. For example, the 4+3 scheme seems perfect: we transfer on average 7 bits, and for every allowed symbol there are 7 forbidden ones. For some high states like 111******** (the stars are the transferred bits) we have to place 8 forbidden symbols, but for low ones like 10000****** we can place only six. Some forbidden states will be marked a few times, so we should carry out the whole procedure and possibly use a slightly smaller (or larger) number of allowed symbols.
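Here is the promised sketch of such an initialization - a toy version only: it ignores symbol probabilities and the subtlety about how many of the 'last' bits each state uses, and simply poisons the whole Hamming-1 neighbourhood of every placed allowed symbol. The 8-bit state size and 16 allowed symbols are just example parameters.

```python
import random

FORBIDDEN = -1

def build_table(n_state_bits, allowed_symbols, seed=0):
    # Toy placement: put each allowed symbol on a random unused state and mark
    # every state at Hamming distance 1 from it as forbidden, so a single flipped
    # bit among the bits that produced an allowed symbol is always detected.
    n = 1 << n_state_bits
    table = [None] * n
    free = list(range(n))
    random.Random(seed).shuffle(free)              # this freedom could act as a key
    for sym in allowed_symbols:
        while free:
            s = free.pop()
            if table[s] is None:
                table[s] = sym
                for b in range(n_state_bits):      # poison the Hamming-1 neighbourhood
                    if table[s ^ (1 << b)] is None:
                        table[s ^ (1 << b)] = FORBIDDEN
                break
    return [FORBIDDEN if v is None else v for v in table]

table = build_table(8, range(16))
# any single bit flip of a state that decodes to an allowed symbol hits the forbidden symbol
assert all(table[s ^ (1 << b)] == FORBIDDEN
           for s, v in enumerate(table) if v != FORBIDDEN
           for b in range(8))
```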
  5. We can use the entropy coding property of ANS to make the above process quicker and to distribute the redundancy really uniformly: instead of inserting a '1' symbol regularly to create an easily recognizable pattern, we can add a new symbol - the forbidden one. If it occurs, we know something went wrong, and the nearer it is, the more probable the error location.

Say we use symbols with some probability distribution (p_i), so we need on average H = -sum_i p_i lg p_i bits/symbol. For example, to encode bytes without compression we can treat them as 256 symbols with p_i = 1/256 (H = 8 bits/symbol). Our new symbol gets some chosen probability q; the nearer to 1 it is, the larger the redundancy density we add and the easier it is to correct errors. We have to rescale the remaining probabilities, p_i -> (1-q) p_i, so the size of the file increases r = (H - lg(1-q))/H times. Now if we get the forbidden symbol while decoding, we know that:
- with probability q, the first uncorrected error occurred among the bits used to decode the last symbol,
- with probability (1-q)q it occurred among the bits used to decode the previous symbol,
- with probability (1-q)^2 q the one before that, and so on.
The probability of the succeeding cases drops exponentially, especially if (1-q) is near 0 - but the number of required tries also grows exponentially. Observe, though, that for example the number of possible placements of 5 errors among 50 bits is only about 2 million - it can be checked in a moment.

Let's compare this to two well-known data correction methods: Hamming 4+3 (3 additional bits to store 4 bits) and tripling each bit (1+2). Take the simplest error model - each bit is switched with constant probability, say e = 1/100. The probability of at least 2 errors in a 7-bit block is 1 - (1-e)^7 - 7e(1-e)^6 =~ 0.2%; for a 3-bit block it's about 0.03%. So per kilobyte of data we irreversibly lose about 4*4 = 16 bits with Hamming 4+3 and about 2.4 bits with bit tripling (a short calculation is sketched below). Even for methods that look well protected, we lose a lot of data to the pessimistic cases.

For ANS-based data correction in the 4+3 case (r = 7/4), we add the forbidden symbol with probability q = 1 - 1/2^3 = 7/8, and each of the 2^4 = 16 symbols has probability 1/16 * 1/8 = 1/128. In practice ANS works best when the lg(p_i) are not natural numbers, so q should (not necessarily) be not exactly 7/8 but something around it. Now if the forbidden symbol occurs, with probability about 7/8 we only have to try switching one of the (about) 7 bits used to decode this symbol; with 8 times smaller probability we have to switch one of the 7 bits of the previous one... and with much smaller probability, depending on the error density model, we should try switching some pair of bits... even extremely pessimistic cases look correctable in reasonable time. For the 1+2 case (r = 3), the forbidden symbol has probability about 3/4, and '0' and '1' have 1/8 each; with probability 3/4 we only have to correct one of 3 bits... with probability 255/256, one of 12 bits...

There is one problem - in practice the coding/decoding tables should fit into cache, so we can use at most about a million states. While trying thousands of corrections, we could accidentally reach a correct-looking state with a wrong correction - a few bits would be 'corrected' the wrong way and we wouldn't even notice. To prevent this we can, for example, use two similar ANS stages - the first produces bytes and the second converts the output of the first into the final sequence. The second stage gets uniformly distributed bytes, but ANS itself creates some small perturbations, so it will still work fine.
Thanks to this, the number of effective states grows to the square of the initial one, reducing this probability by several orders of magnitude at the cost of roughly doubled time requirements. We could also use some checksum as a final confirmation.
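The per-kilobyte losses quoted above come from a short calculation like this (a sketch; it assumes a failed block costs all of its data bits):

```python
e = 0.01                      # probability that a single bit gets switched
info_bits = 8 * 1024          # one kilobyte of information

# a block is lost when it contains 2 or more errors (a single error is correctable)
p2_of_7 = 1 - (1 - e)**7 - 7 * e * (1 - e)**6     # Hamming 4+3 block, ~0.2%
p2_of_3 = 1 - (1 - e)**3 - 3 * e * (1 - e)**2     # tripled bit, ~0.03%

lost_hamming = (info_bits / 4) * p2_of_7 * 4      # 4 data bits per failed block, ~16
lost_tripling = info_bits * p2_of_3               # 1 data bit per failed block, ~2.4
print(round(lost_hamming, 1), round(lost_tripling, 1))
```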
  6. A first approximation of a free electron in a conductor is a plane wave. So shouldn't there be more analogies with optics? Remember that a single electron can go through two slits at the same time... Photons interact with local matter (electrons/photons), which results (in first approximation) in a complex coefficient n - the refractive index. Its imaginary part describes absorption, which corresponds to resistance in a conductor. Its real part corresponds to phase velocity/wavelength - is there an analogue in free electron behavior? Different conductors have different local structure, electron distributions etc., so maybe they differ in such a refractive index. If so, there should be more effects from optics - partial internal reflection, interference... - that we could use in practice.

I know - electrons, unlike photons, interact with each other, so electron waves should quickly lose their coherence. But maybe we could use such quantum effects over short distances in crystals? Or maybe in one dimension - imagine, for example, a long (-CH=CH-CH=CH-...) molecule. Its free electrons should behave like a one-dimensional plane wave. Now exchange the hydrogen for, say, fluorine (-CF=CF-): it should still be a good conductor, but the behavior of the electrons should be somehow different... shouldn't it have a different refractive index? If so, for example (-CF=CH-) should be intermediate.

What for? Imagine something like an anti-reflective coating from optics: http://en.wikipedia.org/wiki/Anti-reflective_coating Say a thick layer of higher-refractive-index material and a thin layer of lower. The destructive interference in the thin layer happens only from the anti-reflective side (the thin layer) - shouldn't that side reflect a smaller fraction of photons/electrons than the other side? If we chose the reflective layer for the dominant thermal energy of photons/electrons, shouldn't it spontaneously create a density gradient - for example to convert heat energy into electricity?
  7. A Maxwell's demon is something that spontaneously ('from nothing') creates a gradient of temperature/pressure/concentration - reducing entropy. It doesn't have to be perfect: if one side of a mirror were just a bit more likely to reflect photons, it would enforce a pressure gradient. Even the slightest pressure gradient it spontaneously created could be used to produce work (from energy stored in heat). For example, we could connect both parts to constantly equilibrate their pressures; through this connection the direction from higher to lower pressure would dominate, which we could use to produce work (from heat) - for example by placing there something like a water wheel made of mirrors.

I completely agree that we usually don't observe entropy reductions, but maybe that's because such reductions usually have extremely low efficiency, so they are imperceptible, shadowed by the general entropy increase? The 2nd law is a statistical, mathematical property of a model with assumed physics - but it has been proven only for extremely simplified models, and even for those an approximation was used: by introducing functions like pressure and temperature we automatically forget about microscopic correlations - it's a mean-field approximation. Maybe these ignored small-scale interactions could be used to reduce entropy... For example, thermodynamics assumes that energy quickly equilibrates with the environment... but we have e.g. ATP, which stores its energy in a much more stable form than the surrounding molecules and can be converted into work.

I apologize for the two-way mirror example - I now generally feel convinced that they work only because of the difference in the amount of light; the effect seen when looking at dark glasses could be explained, for example, by their curvature. When I was thinking about it, I had in mind the destructive interference from an anti-reflective coating. But let's look at such a coating: http://en.wikipedia.org/wiki/Anti-reflective_coating Say a thick layer of higher-refractive-index material and a thin layer of lower. The destructive interference in the thin layer happens only from the anti-reflective side (the thin layer) - shouldn't that side reflect a slightly smaller fraction of photons than the other side... creating a pressure gradient in a photon container and reducing entropy?
  8. Everybody has seen a two-way mirror - transparent from one side, reflective from the other... isn't that a Maxwell's demon for photons? OK, it's not perfect - it absorbs some photons, increasing its own heat, and emits thermal photons, so it can stay in thermal equilibrium with the environment. Take a container for photons (covered with mirrors) and place a two-way mirror, in thermal equilibrium with the photon gas inside, dividing the container into two parts. The density of photons on the reflective side should become larger than on the other - so would it reduce entropy?
  9. I was thinking about the 2nd law of thermodynamics and crystallization. During this process we get higher ordering (lower entropy), but the cost is the energy difference between the free and the bound molecule - this energy is usually just dispersed, increasing the overall temperature. But what if we didn't allow this energy to escape randomly, and instead stored it, for example, as chemical energy of some molecule like ATP? That led me to mechanisms that could allow organisms to feed directly on heat (not via thermal infrared).

Say we have two molecules (A, B) whose total energy when separated (E1) is larger than when they are bound (E2 < E1), with an energy barrier between the two states. When they are bound in solution, their thermal energy will occasionally exceed the barrier and they split (reducing the temperature!). But to bind back they not only have to reach the barrier, they also have to find each other in the solution - not very likely, so statistically the concentration of AB stays relatively small compared to the concentrations of the separated molecules. Now we need a catalyst which reduces the barrier and then uses the energy difference, for example, to bind ADP and phosphate: it catches all the required molecules, uses energy stored in its own structure to bring A and B close enough to reach the top of the barrier, then uses the energy they release to bind ADP + P and restore its own energy. I know - this enzyme would work in both directions, but the concentration of AB should be small enough that the wanted direction dominates. Is there any problem here?
  10. I see how to make the required nanodiodes for nanoantennas for thermal photons - they should use the fact that after absorbing a photon the electron is excited and only slowly equalizes this additional energy with its environment. So if we place something that needs a high-energy electron nearer one side of the antenna, it's more likely that the electron jumps over that threshold. The whole electricity generator would then look like: -conductor-threshold-antenna-conductor-threshold- and electrons would more likely go left. If the antennas are printed, the threshold could be just a narrowing.
  11. When I first came across a heat-to-sound article, it was written that it needs pure heat... but when I read the physorg article I linked, I finally saw that it uses a temperature gradient. But what about nanoantennas? They use heat energy - thermal infrared - to force the movement of electrons. The question is whether we can convert that into ordered movement - we would need diodes, something like a Maxwell's demon for electrons. I think it's possible, because temperature describes the average behavior of molecules, while their electrons behave completely differently - they are much faster, have different energies, move along a scaffolding made of molecules... There are two different thermodynamics there! Of course there are correspondences/interactions between them, but there is also some independence we might be able to use?

A simple counterexample to the 2nd law using thermal photons: imagine an empty tube whose internal surface is covered with a perfect mirror. Near one end place two separators - a reflective one at the end of the tube and a transparent one toward its middle - and put hot gas between the separators. It is isolated thermally, but it produces thermal photons. The only way a photon can escape is through the other end of the tube, so it would work like a jet engine - the photons carry momentum in one direction, so the tube gets momentum in the other - and we have a stream of photons we can use to create work somewhere else. This example uses the fact that although the kinetic energy of molecules behaves randomly, each molecule has a specific movement/oscillation whose energy can be converted into an ordered one - the electromagnetic oscillation of a photon. You will say the problem is the perfect mirrors, but they are just a perfect insulator for the thermodynamics of photons.
  12. I was recently interested by news that it's possible to draw energy from pure heat. I've read about two ways: use a sound resonator or absorb thermal infrared radiation: http://www.physorg.com/news100141616.html http://www.physorg.com/news137648388.html Another issue is, for example, that during spontaneous crystallization entropy goes in the 'forbidden' direction: http://www.garai-research.com/research%20statement/Entropy/Entropy.htm It would be nice to localize the simplifications behind a theory that looks as general as thermodynamics. One source of them may be the simplified physics behind the thermodynamical model: it corresponds to molecules, while we could say that their electrons live in a completely different world - on a scaffolding made of molecules - and their energies don't correspond straightforwardly; thermodynamics also usually ignores thermal radiation and its energy. But maybe there are deeper problems - thermodynamics usually ignores internal structure, so for example, of two states of the same energy one can be more easily accessible... What do you think about it?
  13. The standard approach to fighting viruses is to use antibodies which target some specific place on the surface, but the problem is that the capsid varies rapidly. What usually doesn't change is that the virus still targets the same molecules on the cell's surface - maybe we should try to use that. For example, create an empty liposome - water + phospholipid - carrying specific molecules, for example CD4 and some chemokine receptors for HIV. Now if the virus takes the bait, it will enter inside and lose its capsid - even if the liposome is later destroyed, the virus should no longer be a threat, or at least a much smaller one than it would be swimming around in its capsid. Eventually we could also put inside, for example, a reverse transcriptase inhibitor or some RNA-cutting enzyme. Imagine such a stealth liposome with CD4 - it should swim through the veins for a few hours catching viruses, then be consumed together with its content by the immune system - a perfect scenario. And remember that every HIV virus has some version of gp120, so it should take the bait... Update: I was just told on a different forum that research on something similar - using erythrocytes instead of liposomes - is already in progress: http://www.thescienceforum.com/viewtopic.php?p=140400
  14. They could also feed on heat in an indirect way: hot objects emit thermal infrared (a few micrometers)... We even want to use it at much lower temperatures, for example to power MP3 players: http://www.sciencemag.org/cgi/reprint/320/5883/1585.pdf Maybe some thermophiles have developed photosynthesis for these frequencies?
  15. About vibration absorption... myosin was only an example - its functions are too directed, too complicated to be reversed in practice. But imagine a protein connected to the cytoskeleton (for example at crossings of filaments) which catches ADP and phosphate. Now if the cell vibrates, the movement of the cytoskeleton is transferred to the protein, which can drive the binding of the molecules into ATP. I'm not saying it's simple, but it looks possible. And if so, mother nature is an extremely inventive creature - look how sophisticated the machinery constructed to use energy from light is.

About using heat - I agree that it looks even less probable. At first glance it seems to be against classical thermodynamics - converting pure heat into a different form of energy - but this theory is a strong simplification. For example, hot iron emits photons. Heat energy is random microscopic movement - noise. The trick is to use a resonance to gather surrounding frequencies and convert them into coherent movement - light, sound... Lately it was shown that this can be done - heat converted into sound, which can then be converted into electricity, for example with the piezoelectric effect: http://unews.utah.edu/p/?r=111907-2 The question is whether it can be done at the microscopic level, using proteins, at temperatures below 120C - for example a molecule which resonates so as to bind ADP and phosphate. If so, evolution should have found it.

We have had plenty of microbes deep in the earth for billions of years - there were/are some sources of chemical energy, but generally they are starving. Scientists have trouble explaining their extremely low metabolism: http://www.sciencemag.org/cgi/content/full/sci;276/5313/703 Psychrophiles also have extremely low metabolism, but that's because of the cold - all reactions are slowed down - not because of lack of energy, to which they usually have access. We are talking about thermophiles, which should have consumed most of the available chemical energy sources over the last billions of years, with new ones appearing extremely rarely. Remember that energy is needed not only for metabolism and reproduction... it's necessary to sustain the structure of the organism, to fight the increasing entropy - especially at high temperatures! Their life would be much easier if they could feed not only on chemical energy, especially when there is plenty of it in the heat and tectonic vibrations around them.
  16. But a water molecule is its own mirror reflection - the same molecule. In physics, taking the mirror reflection is called a P-transformation. This transformation isn't perfectly conserved, but the corrections are many orders of magnitude smaller than the thermal noise in biochemistry - they shouldn't alter biology. http://wikibin.org/articles/chiral-life-concept.html
  17. Thanks for the constructive arguments. I'm not saying that we should do it, but that there may be possibilities - and if that's true, somebody will eventually do it anyway! So I believe it should be discussed, to understand the dangers and possibilities... and I hoped I could find such a discussion here.
  18. Biology has many kinds of energy conversion to offer - for example solar into ATP and later glucose. We can already take whole organisms and e.g. burn them to gain energy (biofuels). But remember where natural gas (and other fossil fuels) comes from... biology knows these metabolic pathways! Maybe we could take, for example, a unicellular photosynthesizing organism and put the genes of the required proteins into it? Just make it work, then take a few dozen (hundred) generations of artificial selection to create cheap, efficient(?) living solar panels, from which we could just pump e.g. methane.

About different kinds of energy... remember that at the microscopic scale chemical reactions are reversible - the dominant direction depends on the parameters (like the H+ ATPase). We know we have mechanisms that produce heat using ATP. Now imagine one with changed parameters, so that it needs a higher ATP density than is available around it - above some temperature it should work in the opposite direction, turning ADP into ATP using heat! We have plenty of microbes a kilometer below the surface... what do they eat? Chemical energy of minerals? Those should be nearly exhausted... Maybe they can feed on geothermal energy? To check it, we could test whether water with e.g. Pyrolobus fumarii cools down faster than it should. If so, with a bit of artificial selection maybe we could produce natural gas from surpluses of thermal energy in a factory.

Another type of energy is vibration. Myosin can change ATP into movement. Again, with changed parameters it should be able to work in the opposite direction - attached to the cytoskeleton, it could produce energy from vibrations. What for? For example to actively absorb them, say to reduce turbulence in water - we should search for this in fish and aquatic mammals. Thanks to this we could produce active sound/vibration dampers which generate energy.
  19. OK - latency is not a good side of the scheme I've presented: simple errors can be corrected quickly, but large ones may need a lot of time... There is also the problem of losing a large block of data - with ANS it's a bit problematic, but we can actually restart decoding after it; unfortunately we of course lose its content. To protect against the scenario of losing whole packets, we can for example place the first, say, 100 bits as the first bit of the first 100 packets, the next 100 bits as their second bit, and so on (see the sketch below) - now we have to buffer these 100 packets before we can start decoding.

By blocking I meant placing information in completely independent blocks (e.g. 7-bit blocks in Hamming) - thanks to that we can easily ensure a short, constant latency, but we cannot 'transfer the surpluses of redundancy' to cope with fluctuations of the error density, because each block has independent redundancy. I agree that because of the variable latency it's rather impractical for telecommunication or memories, but it may be useful for example for archives, which just have to survive a long time. And maybe there are faster methods which allow such redundancy transfers? Thanks to them we could use a smaller amount of redundancy - not matched to the pessimistic error density, but only a bit above the average density, which is usually a few orders of magnitude smaller.
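A sketch of the packet interleaving mentioned above (the 100-packet figure and the erasure marking with None are just example choices):

```python
def interleave(bits, n_packets=100):
    # bit i goes to packet i % n_packets, so losing one whole packet leaves only
    # isolated single-bit gaps (one per n_packets bits) after deinterleaving
    return [bits[p::n_packets] for p in range(n_packets)]

def deinterleave(packets):
    out = [None] * sum(len(p) for p in packets)
    for p, packet in enumerate(packets):
        out[p::len(packets)] = packet
    return out

bits = [i % 2 for i in range(1000)]
packets = interleave(bits)
packets[7] = [None] * len(packets[7])          # one packet lost entirely
recovered = deinterleave(packets)
# the erasures are spread out: at most one unknown bit in every run of 100
assert all(sum(b is None for b in recovered[i:i + 100]) <= 1
           for i in range(0, 1000, 100))
```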
  20. But it can still happen... and it's slow, and maybe we could use less redundancy to achieve similar safety. We are adding a constant density of redundancy, but errors don't have to come at a constant density - it fluctuates, sometimes above average, sometimes below. If it's above, the errors can exceed the safe amount our redundancy can cope with; if below, we've placed more redundancy there than was required and waste some capacity. I'm saying that we could transfer these surpluses to help with the difficult cases! To do that we shouldn't separate the information into blocks. It has to be one stream that can tell us that something has just gone wrong - we don't see the pattern (redundancy) we placed there - so we have to try to fix the neighborhood of this point until the pattern emerges again as it should.
  21. You mean encoding polynomial coefficients by their values at more points than the degree? I agree that it's a great method, but a pessimistic local arrangement of errors still destroys the whole block, and it's very slow to decode. This standard approach takes a block of data and enlarges it to protect against some specific set of errors; we lose the whole block if we fall outside this set. I want to show that it's not the only way - that instead of using a small block to locate the errors inside it, we can potentially use all succeeding bits. So even if the errors form some pessimistic pattern, like clustering around a point, by investing a large amount of time we could still repair it.
  22. Standard data correction methods have some maximal number of errors they can correct. For example, Hamming (7,4) uses 3 additional checksum bits per 4 bits of information - it works fine when there is at most 1 error per 7-bit block. We use quite a lot of redundancy, but is it safe now? The problem is the pessimistic cases: if the expected error rate is 1/100 bits, it can still quite often happen that some 7-bit block gets 2 errors. I would like to propose a statistical approach to data correction which protects against such pessimistic cases; thanks to it we can, for example, reduce the redundancy while achieving similar safety.

The trick is to use a very precise coding - one in which any error makes the following decoded sequence look completely random (p(0)=p(1)=1/2). For example a block cipher which uses the previous block to calculate the following one - though there is a much better coding for this purpose, which I'll mention below. Now add some easily recognizable redundancy to the information - for example insert a '1' between each pair of digits. If while decoding a '0' appears in one of these places, it means there was an error earlier. Knowing the statistical characteristics of the expected errors, we can make a list of the most probable error patterns for such cases, ordered by probability - at the top of the list would be 'previous bit switched', ... and after a while 'two bits switched: ...'. This list can be very large. Now, knowing that an error appeared nearby, we go through the list position by position, apply the correction (switch the corresponding bits) and try to decode a further fixed number of bits (a few dozen). If everything is OK - only '1's appear on the selected positions - we can assume it was this error; if not, we try the next entry on the list. The list can be generated online, and given a large amount of time we could repair even a badly damaged transmission. While creating the list, we have to remember that errors can also appear in the succeeding bits. (A toy sketch of this detect-and-retry idea is below.)

Using block ciphers is a bit nasty - slow, with large blocks in which to locate errors. There is a new coding just ideal for the above purpose - Asymmetric Numeral Systems (ANS) - a new entropy coder with very nice properties for cryptography... and data correction: it's much faster than block ciphers and uses small blocks of various lengths. Here is a demonstration of it: http://demonstrations.wolfram.com/DataCompressionUsingAsymmetricNumeralSystems/ What do you think about it?
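Here is the toy sketch mentioned above. The chained SHA-256 keystream below merely stands in for the 'precise coding' (block cipher or ANS) - it is not the ANS-based scheme itself - but it shows the idea: a '1' marker after every data bit, detection when a marker breaks, and correction by trying the most recent single-bit flips first.

```python
import hashlib

def keystream_bit(history: bytes) -> int:
    # toy "precise coding": the next keystream bit depends on the whole history,
    # so any transmission error scrambles everything decoded afterwards
    return hashlib.sha256(history).digest()[0] & 1

def encode(data_bits):
    marked = []
    for b in data_bits:              # insert the recognizable redundancy: a '1' marker
        marked += [b, 1]
    out, hist = [], b""
    for b in marked:
        c = b ^ keystream_bit(hist)
        out.append(c)
        hist += bytes([c])
    return out

def decode(coded):
    out, hist = [], b""
    for c in coded:
        out.append(c ^ keystream_bit(hist))
        hist += bytes([c])
    return out

def markers_ok(decoded):
    return all(b == 1 for b in decoded[1::2])   # every second bit must be the '1' marker

def correct_single_error(coded):
    # most probable explanations first: single switched bits, most recent first
    if markers_ok(decode(coded)):
        return coded
    for i in reversed(range(len(coded))):
        trial = coded.copy()
        trial[i] ^= 1
        if markers_ok(decode(trial)):
            return trial
    return None                      # more errors: extend the list to bit pairs, etc.

data = [1, 0, 0, 1, 1, 0, 1, 0]
sent = encode(data)
received = sent.copy()
received[5] ^= 1                     # one transmission error
fixed = correct_single_error(received)
assert fixed == sent and decode(fixed)[0::2] == data
```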
  23. The SF writer Greg Bear (http://www.gregbear.com/blog/display.cfm?id=982) pointed out to me that we need viruses - the point is that we use some parts (e.g. the capsid) of REV (a retrovirus residing in our DNA) in some essential mechanisms, so we can't replace it with something neutral. But over these millions of years, these capsids have been optimized for our purposes. Maybe it's a good starting point for viruses to begin evolving again, but there is still a long way, counted in thousands to millions of years. Viruses need a friendly environment - cells - to evolve, and ours have quite good protection, much better than when viruses evolved last time. We could also think about transforming only, e.g., humans, and using the original bacterial flora, which could be compatible (after teaching the immune system)?

I've received a long letter from Steve Winter. One of the many things he mentioned was that "there was a study where a group fed some bacteria chiral food, and it eventually evolved the ability to eat the food". It's a big problem, but I think they should have much more trouble evolving interactions (like aggressiveness) with chiral organisms, and in a chiral ecosystem supported by us they should be dominated... and they usually die with the carrier. But the largest benefit of chiral life are the viruses - let's say we can manage the microorganisms, but elimination of viruses looks hopeless: http://virology.wordpress.com/ And the lack of them should slow down the evolution of bacteria, making the creation of a stable ecosystem easier.

What are the costs of such a project? Most of the cost is transforming a few cells of each needed species - I think the required technology should be standard in a few dozen years. Then we have to replace the seeds for a few fields, clone some cattle... and humans for adoption... The replacement process can be very slow. And the income... HEALTH... crop production... pests... maybe even to be or not to be for natural Martian life before terraforming.
  24. How to make such a prokaryote? A huge problem is creating the chiral enzymes - I'll sketch in a moment how I imagine that. Then take a solution of phospholipids - it will automatically create a bubble - fill the membrane with proteins, pump in DNA, ..., ATP... and voila. About the other parts: the cell should 'live' under specific, precise conditions, without most of them; then it should try to stabilize itself and rebuild what's needed (like the wall). This would give us time to do something to allow it to reproduce. Having these small factories, synthesis of the components will be simple.

But the real problem is with eukaryotes. I think we could use an original cell and just replace/add what we need... Most proteins work with symmetric molecules; the others we could block or leave alone - if we place the cell in good conditions and feed it (even artificially, e.g. with ATP), it should remain stable while we 'slowly' add chiral molecules, replace the DNA... and after some time/generations it will replace everything itself.

Here is a sketch of producing a (chiral) protein (or DNA) strand: prepare a surface with an oriented lattice of something that can adhere amino acids and release them easily (by light, electric current, pH, temperature...). Then 'just' print (like an inkjet printer) or lithograph (use different solutions of amino acids and specific light patterns to adhere them) the given patterns of amino acid strings, then use some catalysis to join the neighbours. We would get many copies, full of errors, at a time; for the selection process we can use something that only the correct ones would adhere to. Then we can slowly recreate a bottom-up customized ecosystem... But how to do it more effectively and stably? Maybe we need viruses? I have a discussion about it at http://www.scienceforums.net/forum/showthread.php?t=27078
  25. We can use a normal cell too, especially to transform a eukaryote. Most proteins work with symmetric molecules; the others we could block or leave alone - if we place the cell in good conditions and feed it (even artificially, e.g. with ATP), it should remain stable while we 'slowly' add chiral molecules, replace the DNA... and after some time/generations it will replace everything itself.