Everything posted by fredrik

  1. No doubt a broken display. A broken graphics board, inverter board, or cable would not give that symptom. It looks like something hit (i.e. applied pressure to) the display in the upper right corner. If the warranty hasn't expired, chances are the service people will suspect something hit the display, so you would have to be very convincing to have it accepted as a warranty repair. I've seen a lot of those and that's a clear case of a broken display. Unfortunately the displays can be a bit expensive, so depending on how old/expensive the laptop is I'd consider buying a new one, unless it has warranty left and you can convince the service people that you did not hit it, flex it, or drop it. Because that is what I would suspect if I saw that display. /Fredrik
  2. I've scanned a few papers, and it seems clear that most take various semi-classical and IMO fuzzy approaches, meaning it's probably not all that interesting. Anyway, I decided to make an experiment. I am going to try a simple computer simulation and find the maximum relative entropy vs the parameters I would conjecture are somewhat associated with "mass" or gravitating energy, and volume (event space volume). I might get back with my findings. The initial test for a specific prior distribution is that I seem to at least get the a*Volume^b shape on fitting, with high correlation, but I have yet to find out how a and b relate to mass/energy and the prior (a sketch of the kind of experiment I mean follows below). In the simulation I'm just using a simple Rnd statement. Does anyone happen to know what model is behind the Visual Basic Randomize routine? /Fredrik
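As a sketch of the kind of simulation meant above (my own illustration in Python, not the poster's actual Visual Basic code): estimate the entropy of an empirical distribution over an event space of varying volume V, then fit the conjectured a*V^b shape by regression in log-log space. The sample count, the uniform sampling, and the choice of volumes are assumptions for illustration only.

```python
# Minimal sketch, assuming a uniform prior over `volume` event-space cells.
# Estimate empirical entropy for several volumes, then fit S ~ a * V**b.
import numpy as np

rng = np.random.default_rng(0)

def empirical_entropy(volume, n_samples=10_000):
    """Shannon entropy (in nats) of an empirical histogram over `volume` cells."""
    samples = rng.integers(0, volume, size=n_samples)
    counts = np.bincount(samples, minlength=volume)
    p = counts[counts > 0] / n_samples
    return -np.sum(p * np.log(p))

volumes = np.array([4, 8, 16, 32, 64, 128, 256])
entropies = np.array([empirical_entropy(v) for v in volumes])

# Fit log S = log a + b log V. Note for a uniform prior S ~ log V exactly,
# so a power law in V is only a local approximation over this range.
b, log_a = np.polyfit(np.log(volumes), np.log(entropies), 1)
print(f"a = {np.exp(log_a):.3f}, b = {b:.3f}")
```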
  3. I apologize for the sidetrack, I'm aware that this thread was about gravitons, not QG in general, but to add a personal reflection here to keep the discussion going... I've always had a clear feeling something is incomplete with the path integral formulation. The basic idea is beautiful, but I can't help feeling something is missing. I've taken it as a task to find out what it is. I personally suspect the proper formulation should be one where the action to be minimized is simply related to a generalized total relative physical information entropy of the states (information divergence). The transition which is most likely is the one with minimal transitional entropy. But from my point of reasoning, this relative entropy is only well defined in the differential sense. This is also what requires the integral summing of small steps. I've been working on some ideas on this and I think it should be possible to evaluate the exact transition amplitude, instead of leaving the normalization constant for later. I think this is part of the problem and why it gets screwed up in QG. I think leaving the normalization outside of the integral is a mistake, because gravity hides in the constant. I'm making some progress and eventually I hope to come up with an expression that I can actually test. IMO, there are many things we take for granted that may or may not need to be revised. /Fredrik
  4. I think part of the logical problem is how to achieve a unique and unambiguous decomposition of the spacetime geometry into a background metric and a fluctuation, because you can't measure them separately. The problem is that there is no obvious static background metric. Thus the mathematical perturbation expansion has no full fundamental justification that I can see, at least (computational problems aside). The way I see it, this is somewhat similar to the idea that given an event space, the most natural prior would be that all events are equally likely - this is IMO an illusion. The only real justification I see for a prior is experience defining the prior. If there is no prior, one needs to build/create one from history. This is how I think the implications of a true relational model should work. I think we need a fundamental review of this. /Fredrik
  5. In addition to what Severian already wrote, I think one can view these problems from several perspectives: technical, mathematical, or philosophical and logical. General relativity and QM aren't directly in competition; rather, each on its own describes a different restricted domain of reality. But since there is only one reality, and we do not like the idea of decomposing reality into different domains with an unclear connection, we expect that ultimately we should find a unified theory that describes all of reality.

The technical point is that if we simply take these two theories and their sets of equations, and mathematically try to plug in and solve them assuming both hold (i.e. solving QM under the constraints implied by general relativity), it gives nonsensical results and infinities and is thus mathematically ill defined, indicating that something is wrong and we might need to find another way to calculate it.

The philosophical aspect acknowledges that QM and GR are from the start formulated on different platforms. For example, energy in QM is not defined the same way as energy in GR. Here there are fundamental problems that need to be solved before the equations from the different theories can even be compared properly. Thus in this view, it really isn't unexpected that the calculations made (if ignoring the fundamental problems) lead to problems. Of course, quantum mechanics has philosophical problems of its own. But these are often ignored from the technical point of view, because it's mathematically well defined and the computation schemes have been perfectly successful, which for some motivates ignoring the fundamental issues.

Another aspect is to see that 1) QM suggests that the state of matter is uncertain due to the nature of physical information exchanges, and 2) general relativity suggests that our reference frame (spacetime geometry) itself is not fixed; it's dynamical. We should then notice that a state of matter always has to be evaluated relative to a reference. General relativity suggests that this reference itself is dynamical. This leaves us with something very dynamical with uncertain structure. It seems we have not yet wrapped our heads around how to solve this and describe it with consistent, well behaved math. So not only is reality uncertain, our reference is uncertain too. This is why extreme care must be taken, because this is a bit unreal to imagine by normal intuition. We basically have no universal reference anywhere. Where do we start? /Fredrik
  6. Thanks for your comments! My original question was more general in nature; cancer was just an interesting example. Yes, I've also read that inflammation and some genes involved in it are linked to tumour development in already existing cancers, and that COX inhibitors in this special case seem to repress tumour growth. This is definitely interesting. I haven't looked up the details though. But I wonder if this is to be considered a special case or not. What about other systemic threats, like more dangerous bacterial and viral infections, where the competition is tighter and the body really needs to focus its full power to defeat the threat? If there are nothing but good sides to COX inhibitors from the point of view of healing the body, what are the past conditions that caused the body to develop such a defense? I've seen some scattered cases of special conditions in children (I don't remember exactly though) where NSAID drugs are advised against because using them may severely worsen the condition. I don't remember now, but it wasn't cancer; it was some viral or bacterial infection. So perhaps the cases where these drugs are bad are rare enough to be considered exceptions? What about possible feedback from the brain? A drug that relieves pain and makes you "feel better" - could that also influence the immune system in some kind of "placebo like" effect? Placebo isn't meant to imply it isn't real though; it's just to reflect that, from what I've seen, severe depression may drop your immune system's power too. Suggesting also the opposite: that the brain can perhaps sharpen the immune system? Anyone think that can be a factor? /Fredrik
  7. QM models the evolution, relative to your original initial information, all along. It does not take into account the fact that information may in fact change dynamically as part of the dynamical evolution. I see this as a logical reference problem. This problem is IMO quite analogous to the problem that led to general relativity. One cannot define a local reference and assume that it will have any universal validity as you move away from the conditions that defined it. The relation has to be transported along with your description. I think the same can be said of probability theory. As things evolve, so do our event spaces. And we must also model our own reference. This is how it gets spooky and everything floats. But this is no news; it was the same in GR. But GR constrained itself to basically spacetime stuff. I see it as a possible logical extension of the principles of relativity. /Fredrik
  8. I never liked the Bohm interpretation, for various reasons, and thus didn't spend that much energy defending it, so to speak. Of the original QM treatments I always liked the Copenhagen interpretation best, for the basic reason that it seems the least speculative. That said, I don't think it's complete by any means. For example, one major problem I see is that just because we can't tell does not imply that we can't learn to tell. And IMO, the potential for this is clearly immersed in the chaos. Some people look for "hidden structures", sort of in line with what I imagine was Bohm's desire... and I do not see that anything currently excludes this. The way I see it, looking for hidden structures is simply learning... the problem/mistake is IMO to think that you can make use of information that you do not possess, or that you can be influenced by information that has not reached you. I think we will come to understand this more in the future. These are aspects where I think QM is incomplete, and probably also responsible for some of the complications with regard to unification with general relativity. OTOH, a mistake of the Copenhagen-style thinking is that the information we have is somehow static, and the only way to incorporate new information leads to jerks or collapses of the wavefunction - this is IMO unsatisfactory and goes against intuition, and the reason, I think, is related to the mentioned issues. But if information is (for some reason) quantized, then at some level there may be an irreducible jerking going on. I personally think the truth is somewhere in between. I'd expect nature to act on each new piece of evidence continuously, and thus there is no need for collapses or jerks, except for possible sample-to-sample fluctuations, which would probably be smoothed out quickly for many significant systems. Our current model IMO does not fully reflect this. /Fredrik
  9. I think you should ask a vehicle engineer, but from the little I know of cars there are basically two philosophies on how to increase power. I think that for a given total cylinder volume (displacement) you'd expect to get more peak power with many smaller cylinders as compared to one big one, but I think you get lower torque at lower revs, and the fuel economy may suffer. I think this is the idea with turbo: it gives better fuel economy when you don't push the pedal, and it also leaves you the power when you need it. So turbocharging a V8 for a standard car (racing is another story) seems like an odd design. Normally the peak power comes at high revs, which are not normally reached during normal operation. Therefore I think modern engines are instead designed to give the best power and torque at normal revs. Higher revolutions also give higher noise and poorer comfort. If I buy a car I'm more interested in high torque at low revs, and here I think the turbo is better. /Fredrik
  10. I am not entirely sure about the correct historical thinking, but thermodynamics is one of those parts of physics that can be deduced from application of the ideas of optimal inference. Which means it's minimally speculative, and the "speculations" made are the optimally inferred ones, and thus not entirely ad hoc. Another, what I think is a nice, generalizing statement of the principle of maximum entropy:

"macrostates" - can be interpreted as our prior information, i.e. everything we know. Can be thought of as patterns and correlations in samples.
"microstates" - interpreted as the apparent degrees of freedom that we see. Can be thought of as samples.

Thus a generalized relative entropy can be thought of as a measure of "the degree of uncertainty left, respecting your information at hand". The maximum entropy principle thus becomes a principle of optimal inference: given our incomplete information, how do we optimally place our bets? It's a bit like game theory. We place our bets as per our updated prior. The point is not that we can expect to be right in any instance, but if we place the bets optimally and respond to the deviations, we will learn and evolve (=dynamics). And that's the next step: deviations update our priors, also as per some optimal update principle. (A toy sketch of the maximum entropy recipe follows below.)

If we consider our "information at hand" to be static and never changing, we have an absolute entropy, not a relative one. But this leads to the problem of the choice of this background prior information. With respect to a gas bottle we can accept a background prior, but when it comes to elementary and chaotic interactions this breaks down IMO. There is no motivation for a static background prior. It has to evolve to make sense. I think this can be introduced in a less speculative way. Instead of saying that we "assume", we can consult our prior experience with these microstates (which can be thought of as samples). And if they suggest the microstates are equally probable, then that is, to our knowledge, our prior distribution. So it need not be considered an assumption IMO; it's rather a best (optimally inferred) guess, unlike an arbitrary guess. Although statistical mechanics is old stuff, it's one of the more beautiful branches of physics, and nevertheless I think we have yet to see the full power of the implications of the underlying principles. /Fredrik
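A toy sketch of that recipe (my illustration, in the spirit of Jaynes' dice example; the six states, uniform prior, and mean constraint of 4.5 are assumptions for illustration): among all distributions over the "microstates" consistent with the "macrostate" information (a fixed mean), pick the one closest to the prior in relative entropy. The solution has the exponentially tilted form.

```python
# Minimal sketch: max-ent distribution over six microstates given a mean
# constraint, starting from a uniform prior. The max-ent solution is an
# exponential tilting p_i ~ q_i * exp(lam * x_i) of the prior q.
import numpy as np
from scipy.optimize import brentq

states = np.arange(1, 7)      # microstates (faces of a die)
prior = np.full(6, 1 / 6)     # prior: to our knowledge, equally probable
target_mean = 4.5             # macrostate: the only thing we "know"

def tilted(lam):
    """Exponentially tilted prior, normalized."""
    w = prior * np.exp(lam * states)
    return w / w.sum()

# Solve for the Lagrange multiplier matching the mean constraint.
lam = brentq(lambda l: tilted(l) @ states - target_mean, -5, 5)
p = tilted(lam)

kl = np.sum(p * np.log(p / prior))  # relative entropy D(p || prior)
print("max-ent distribution:", np.round(p, 3))
print("relative entropy vs prior:", round(kl, 4))
```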
  11. I missed this on first reading. The misspelling was completely unintentional; I didn't mean to confuse him with the sandals -- I haven't looked all over the place yet, but I found several papers that refer back to Bekenstein's 1981 paper (the one I failed to find for free download). Anyway, it seems from other comments that Bekenstein originally talks about the standard (absolute) Shannon-equivalent von Neumann entropy, which implies the use of a background prior, which I find a little hard to accept as something possibly truly fundamental. So my initial suspicion is that this bound either requires a restricted setting, or is a special case. But I may be wrong; that's why I wanted to read the original paper, to see the exact proof and, more importantly, the assumptions going into it. I'll keep looking. /Fredrik
  12. I can't speak with authority, but I can add my point of view to the discussion for anyone to make up their own mind; here are some further personal comments from my preferred perspective. I'd say it's because speed by definition is a sort of pattern of change. If you know there is a relative velocity, it really means that you have a particular repeating pattern of changes in your observations. I.e. there is an event space where the events keep drifting, but the drift has an identifiable pattern that is constant. This adds a constraint to the moving clock devices and defines a connection back to you, which can be interpreted as a probability - specifically a measure of the chance that you mix up the information of the resting and moving clocks with each other. This probability is often nonzero. I think this can be consistently understood if you accept that the clock device is not a classical absolute device. The clock device itself follows the same fuzzy laws as everything else, and there is an uncertainty in the identification of the clock device itself. So the information about the clock is twisted when it speeds, because the speeding itself can be given an interpretation in terms of information change. So your prior information is that the other clock is moving away. Moving away can be abstracted as a connection between two different event spaces, thus relating them. This analogy is really silly and incorrect! But to give a somewhat simpler picture, consider that you play dice. Playing with a speeding die can technically be interpreted as adding another die: you throw one die for the "speeding". But the dice aren't classical dice; the dice themselves are fuzzy and have evolved through experience or past data, defining your prior. So your current information "defines the dice" you play with. /Fredrik
  13. I think I understand your thinking and it's a good question. Clearly a system evolves even if no one has assigned a clock device, or is looking at the clock device, but in the abstract sense any system can define a "clock". But it's hard to compare a human with an elementary particle, since an elementary particle is presumably not as perceptive as a human, due to its limited complexity. The clock, from a human point of view, is merely a way to quantify and probe the evolution, and to define time units. But you are still onto something: at some level of chaos the concept of time gets pretty fuzzy. And there are speculations that in the Planck domain spacetime and dimensionality itself get all fuzzy. This is in fact consistent, or expected, if you consider time as a parametrization of random dynamics. One possible speculative branch of theories considers spacetime and its geometry as being statistical in nature, and thus fluctuating. At the macroscale this structure appears rigid and classical, but in certain domains it IS fuzzy. And at some point it's so fuzzy that the concepts of space and time do not make sense. But this is not really a problem! It's a feature. Because it can even, in theory, explain away spacetime, as well as possibly in reverse explain how dimensions are born. (But I am still working on this, so I pass on commenting.) /Fredrik
  14. Thanks everyone. I still didn't find the original paper, but OTOH I found newer ones that are sufficiently related. Thanks! /Fredrik
  15. I didn't find the paper I searched for, but I did find some other related papers by Bekenstein. I'll check them and see if I can get what I'm looking for. What I'm after is that there are different kinds of entropy definitions, and there are variations of the probabilistic approaches depending on the thinking, and I am curious what foundation/assumptions the Bekenstein bound rests on. I've got a distinct feeling that, similarly to how background metrics are sneaked in, there are background priors assumed, depending on the assumed information or knowledge concept. /Fredrik
  16. I searched around and found some papers for sale, but I figure it's weird that it's not available for free download. OTOH I didn't search arXiv for copies or maybe newer papers; I will do that. Good idea. I got the impression that there aren't many older papers there, so I didn't look there. I'll report back here if I find it myself. /Fredrik
  17. Does anyone know where I can find Berkenstein's original reasoning/derivation for the entropy bound? I think the paper is "Universal upper bound on the entropy-to-energy ratio for bounded systems" and it was published in Physical Review D in 1981. Is it available for free download somewhere? I'm sorry about the lame question, but I don't have access to those archives. Is the paper published elsewhere? /Fredrik
  18. This is a general physics thing, though, but I am intrigued to see how we will eventually cross these "uncrossable obstacles". Is it that we just haven't found the answer "yet", or are our questions flawed and therefore don't have a sensible answer? Ever since I started taking an interest in physics, and throughout my education, I have had the feeling that sometimes one can spend 5 minutes pulling out a question, and then 10 years trying to find the answer. At what point do you ask yourself if something is wrong with the question? The next step is to ask whether finding the answer and finding the question aren't almost the same thing. If that is so, perhaps an easy question with a hard answer can be transformed into a harder question with an easier answer, if we spend more than the traditional 5 minutes thinking about the question. This had little specifically to do with cosmology though. /Fredrik
  19. First, I think "simple" may be relative to your preferences, and second, I haven't considered it important yet to work out the details at this stage, but I can picture what I consider a fairly simple mechanism. A loose outline:

a) The first step is to acknowledge time as a parametrization of relative change. This can be formalized in terms of relative frequencies or probabilities of cyclic clock events, and an equiprobable evolution of the general state of information. Time evolution can be pictured accordingly.
b) If one then presumes space, and further identifies a geometry of space with our prior information, one comes up with the association that the distance metric can be thought of as a probabilistic transition probability.
c) Then, working out the relations (or prior connections) between momenta (taking those as the Fourier transforms) and the prior basic geometry, I think one will find the prior connections that define the time dilation between the basic spacetime transformations.

I haven't done this yet, but it's on the todo list to work out, and I definitely think it's possible, even though there are details to work out that are much harder than the above. If this proves to work like I think, the time dilation effects can be considered to originate from an abstracted relativity of information, and it's actually quite understandable, which is exactly what I think we need. I'd expect that one can show that our conditional probability of the clock events in the moving clock is lower than for the same clock at rest. This means, as per the construction, that moving clocks seem to run slower. The proof I picture would rely on a mathematical connection between the prior distributions of the moving and nonmoving frames. This would also be entangled with the concept of moving. Since time is devised from scratch, one cannot simply, without care, introduce the concept of velocity, as it relies on speed. That's why momenta via Fourier patterns can be exploited for a systematic introduction. The last thing that bugs me is the choice of this pattern. I'm looking for a systematic inference argument there. This is missing. I hope to work that out later, but my strategy is to not make leaps, and before I get there the dimensionality of spacetime needs to be defined too. /Fredrik
  20. Like Freethinker says, biomass is more than just proteins. As an example, the typical dry biomass composition of a yeast cell (not counting the water) is something like:

proteins ~ 45%
carbohydrates ~ 40%
RNA/DNA material ~ 7%
lipids ~ 3%
ash/minerals ~ 5%

All of the different compounds are necessary for different tasks within a cell. Some relate to structure and compartmentalization, some perform various catalytic functions, some are simply nutritional sources/pools, and some are genetic code. The exact biomass composition is also different for different cells. The study of each of the above compounds is interesting in different ways, and they all impact the function of the cell. /Fredrik
  21. I sense we are discussing different things at once here, which is confusing. "Corrupted" or not, the human brain follows the laws of nature as much as anything else. So I am not sure in what sense you think the brain is corrupting. It's probably not corrupting anything more than the electric field "corrupts" free electrons, or black holes "corrupt" spacetime. I think what you refer to as corruption is the fact that we make models that later prove not to stand up to the tests. But at that point we adapt. We run into conflicts, and we resolve them. I would not personally call that "diversion" a corruption. I'd call it evolution, or learning, or equilibration, and it's really part of the concept. Instead of focusing on the apparent corruption, I focus on the way nature *resolves* the "corruption". Because it does, and the idea is that we can describe a consistent logic for this! By the same token, we can of course be mistaken about this logic as well... but I can live with that, because regardless of the chance of transient failure, progress with time is unavoidable.

The human mind is clearly not perfect. We make mistakes and wrong guesses. But that doesn't mean we are useless - it's part of the game. Regardless of all the mistakes we make, we do make progress. The proof is that the successes dominate over the mistakes. Imperfection does not prevent us from progress; it's rather a requirement. I'm not sure if I understand you, but I definitely agree that somehow we always find details that bug us; something is missing and imperfect. This used to bug me too, but I kept thinking about it and have reached a resolution and can move on. That does not mean I have resolved the imperfection, but I have found a (for me) consistent way of handling it. I've turned it into an opportunity rather than a problem. I see no problem here. The changes we all "live" are powered by the imperfections themselves. So what you suggest to be a problem, I consider to be the key to understanding. The idea is that this will be quantified by models, of course. How imperfections power change can be given probabilistic interpretations.

So the human brain need not be perfect, and we do not need to be almighty, for this approach to make sense: upon receiving conflicting information, a human will revise their opinions as per a certain logic. We evaluate the confidence in the conflicting parties and thus find some "best" update. We are not very likely to revise our opinion based on poor evidence of low reliability. Such things are all accounted for by our brains. But what about stuff that has no brain? A particle's or subsystem's "opinion" is just its state. Unless it receives conflicting information, it will stay in this state. State includes all qualities, including relative motion etc. If a particle is exposed to a force that is in conflict with its current state, the state must be updated and respond to the force. Even a particle has various "evaluations" - gravity and inertia, for example. The gravitational force from a small mass on a large mass will leave only a minor impact, because of the relative significance of the small disturbance relative to the current state. I think we may understand a lot of this by focusing on the logic of "decision making", or the logic of resolving conflicting evidence, and I see how this can lead us also to the logic of physical interactions, but originating from more fundamental first principles - at least more fundamental than at present. In essence, all we can do is guess.

And how to be scientific when nothing is perfect? The method would be optimal inference. How to define optimal when we don't know the truth? Optimal relative to what we know. This is no weirder than when someone argues, in cases where their decisions have proven unlucky, that it was still the correct decision given the information that was at hand. What the "best guess" is, is relative to your prior, and deviations are used for corrections (a toy sketch of this update logic follows below). I've tried to describe the principles in words... so the concept exists as is, but in order to tell what the probability of a given electron transition in an atom is... we need math to find the number. But it's a tool. No need to confuse the equations with reality. The equations are how we, to our knowledge, can best describe this. But math is alive too. If we need new mathematical or logical formalisms, nothing stops us from creating them. No need to try to squeeze everything into old, given mathematical frameworks at all cost. /Fredrik
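A toy sketch of the "best guess relative to your prior, corrected by deviations" logic (my illustration, not anything from the post): a beta-binomial Bayesian update, where each observation shifts the estimate by an amount weighted by how much evidence is already held. The flat prior and the observation sequence are assumptions for illustration.

```python
# Minimal sketch: Bayesian update of a Beta(alpha, beta) belief about a
# coin. The "best guess" is the posterior mean; each deviation from it
# corrects the belief, and a strong prior barely moves on one sample -
# the "poor evidence of low reliability leaves a minor impact" point.

def update(alpha, beta, heads):
    """Incorporate one observation into a Beta(alpha, beta) belief."""
    return (alpha + 1, beta) if heads else (alpha, beta + 1)

alpha, beta = 1.0, 1.0          # flat prior: no experience yet
for obs in [True, True, False, True]:
    alpha, beta = update(alpha, beta, obs)
    mean = alpha / (alpha + beta)
    print(f"best guess after observing {obs}: P(heads) = {mean:.2f}")
```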
  22. I agree that there are problems/imperfections in interfacing reality with any mathematical model. I think it's in the very nature of everything that this is so. It IS fuzzy, and it is not perfect. But the amazing part is that in spite of the fuzz we can be amazingly successful. Lack of perfection does not, and never did, inhibit substantial progress. Asking for perfection is asking too much. Our only reasonable request is for optimal progress, constrained by the obvious incompleteness. Progress is also the key in evolution as well as in learning. Progress is just another word for "preferred change". An evolutionary process does not know exactly where it's coming from, or where it's going, but it knows the way forward relative to the current state. I have no principal problem merging evolution into the suggested abstract logic. Physical, cosmological and biological evolution can be thought of as "learning", or just "natural evolution" - different views of the same thing. Of course this is partly cloudy, and no one to my knowledge has so far claimed to have this crystal clear, nailed down and proven. But I would expect that "natural evolution" would follow from the dynamics implied by the ideas. I could well be way wrong. But I will walk in the direction of what I currently judge to be preferred change. To speculate: suppose our models can eventually show that there is a natural evolution, based on probability theory. And perhaps we could describe the logic of this evolution. Perhaps we can also show that the odds are that patterns appear spontaneously eventually. And this complexity may increase, and eventually, I figure, it would be a matter of definition what we label "life". I see this within reach. It's IMO not unexplainable. I think that all life needs to begin is the tendency of "natural evolution". Whatever the strange lifeforms look like after 13 billion years I have no clue. And this "natural evolution" and its logic is exactly what I have in mind, but first of all formulated in a generic and abstract setting. I would expect that it's the SAME underlying logical, or physical if you want, principles that are ultimately responsible for cosmic evolution as well as biological evolution. But this is definitely way off from "standard QM". It's speculative and not a working theory yet. But it's one of the possible ways forward for science, along with other candidates. And this, IMO at least, has the potential to possibly bridge the gap I think you are thinking about? People talked about fractals in some other thread, and I can picture, loosely, that the rules of inference are "fractal like". Look at cosmic evolution, look at biological evolution and learning sessions... do we find anything similar in the logic? Is it a coincidence? I don't think so, at least. /Fredrik
  23. Some of it, but far from all of it. A lot of it is not really resolved and belongs to the future. So if you feel bothered about certain things, I think you're not alone; it's not settled yet. The philosophy of the original minimalistic Copenhagen interpretation is compatible with the outline above. Either one gets a basic standard understanding (including the math) and then formulates one's own opinion, or one can try to dig up some of the old philosophy books. There is one non-mathematical book by one of the QM founders, "Physics and Philosophy - The Revolution in Modern Science" by Werner Heisenberg, which in plain English tries to elaborate the original philosophical considerations made back when QM was founded. Of course it's not that awfully modern, as the book is from 1958. But being written by one of the founders of QM it's a nice introductory book, and it contains, from what I recall, not a single formula. However, it does not treat the newer things and "optimal inference" methods... those are far more modern than QM. But take the QM foundations, combine them with the foundations of thermodynamics and Bayesian logic, do some further abstraction, and I think what I tried to write above will make sense. /Fredrik
  24. I'm not sure I got your point in those long posts, but I don't understand why you keep fearing that we don't need experiments. Experiments are nothing but observations - input. Without that we don't even have anything to discuss or disagree about. I think the question here is understanding how to interpret, and respond to, the input. We are looking for an invariant pattern here - as we always do - and the suggestion I talked about was that this pattern may in fact be the rules of optimal inference in a world of information exchange. Physical exchanges can be loosely thought of as different flavours of information exchange, because there are clearly different types of information statements. Mathematics isn't physics; it's a language derived from logic that we use to describe things that would otherwise be hard to quantify. Mathematics is the study of this language and its properties. Mathematics has an interesting philosophy and logical foundations of its own that IMO aren't quite the same as the philosophical and logical foundations of physics. Physics is the study of the workings of reality, or abstractions thereof, rather than the mathematical study of arbitrary abstractions. But a physicist who knows no mathematics would be crippled, because it's a tool we use. Of course, theoretical physics especially tends to be more abstract, but a physicist who works on a theory because it has nice geometric properties, or because it corresponds to a nice mathematical formalism, has IMO lost the focus. We must be able to use mathematics and abstract thinking without losing the focus on reality, and there I think a grain of philosophy is needed to paste it all together. And this is also where I think it's possible to gain some basic grasp without math; but to apply the ideas in real situations one needs to actually process data and compute predictions. But this doesn't mean reality breaks down with human understanding. If anything, I think the human is a result of these rules, applied through evolution. /Fredrik
  25. Here are some comments... a) If the time units are defined by cycles of the clock device, the rate of time is reflected by the rate of change in the clock device. But this is always measured against a reference, and is thus relative. The clock device is just a "sample system" which you observe and whose relative change you use to parametrize the rest of your observations. The parametrization of the relative evolution is time. b) I wonder, how do you _define_ "absolute universal time"? /Fredrik