Everything posted by fredrik
-
Most introductory books on QM tend to provide some setting that makes the introduction of QM plausible, but there exists no foolproof deduction of it. If one is looking for that, one is going to face disappointment: there is no mathematical proof of quantum mechanics.

How to make QM plausible depends on where you come from, but traditionally most QM students already know classical mechanics. At minimum Newton's mechanics, but often also the Lagrange and Hamilton formulations. Then the usual motivation is the historical development, the photoelectric effect and spectral lines for example. Many such experimental facts appear perplexing in a classical mechanics setting, and this surely provides the motivation for something new. Then the formalism is usually introduced tentatively or axiomatically, and it's shown that the new formalism reduces to classical mechanics in the respective limits. But there sure are things that are not really crystal clear.

From a pragmatic engineering point of view, one could argue that why the theory works is not important. But that depends on your perspective. To explain life, I assume you are not satisfied with just natural selection? I guess first it's reduced to molecular biology and chemistry, which reduces to physics, which reduces to fundamental laws. Are you seeking to understand the origin of laws? Several people are asking questions like what is the logic and origin of physical law. Indeed I think there is some essence of self-organisation that transcends nature at all scales, even down to the notion of physical law. I consider those questions to belong to a domain of foundations of science, logic and some philosophy, and while the meaning of the question may be partly interpretational, I think these are also simply questions which no one can answer yet. If this is what you are looking for, plain QM will not answer them.
You need to go beyond that, into the foundations of QM and the foundations of science, to refine the questions. /Fredrik
-
Here are some more personal comments... I wonder if you are confusing different kinds of uncertainty? There is the uncertainty that is due to quantum mechanics being indeterministic at the event level, with determinism recovered at the probability level. In that sense one can talk about uncertainty even in a classical probability distribution, say in thermodynamics. But this has nothing to do with QM specifically.

Then we have Heisenberg's uncertainty principle, which is a different story. It states a RELATION between how accurately we can know the answers to two different questions at a time. If we measure x and p, then the HUP does not say that we cannot know x exactly, or p exactly. It says that we cannot know both exactly at the same time. This is a different type of uncertainty! It is an uncertainty relation between different pieces of information. Specifically, it gives a relation between the standard deviations of the probability distributions of x and p. This originates from a postulated relation between two measurement operators, i.e. the two measures are related and this relation constrains their independence.

One of the core ideas of QM is that the state of a system, as observed by an observer, can be represented as a vector or wavefunction in a Hilbert space. Why this is so is not fundamentally clear, beyond the fact that it's a postulate of QM and QM has proven to be an extremely successful theory. As per Rovelli's relational QM interpretation, this wavefunction is to be thought of as a relation between the observer and the observed. Therefore there is no clear observer-invariant notion of the state of the system; the state of a system has meaning only relative to an observer. But of course, in a certain sense, the different views of the different observers must be "consistent". Just like the different time and length measurements made in different reference frames in relativity, while differing, are consistent. The consistency is recovered by the theory.
This is how I would try to illustrate the meaning of superposition. Classically, if an observer doesn't know exactly whether the state of the system relative to himself is A or B, but knows that it's one or the other, then the state S of the system can be said to be

S = A or B

where the "or" operator simply means that we make a classical superposition of the probability distributions of A and B, and renormalise:

P(A or B) ~ P(A) + P(B)

But this only makes sense if A and B are statistically independent. In QM this rule doesn't work, or rather the "or" operator has different properties. In QM one can also think in terms of S = A or B, but here "or" is what's called the superposition, and it does not follow the same rule for combining probabilities. And of course, in a measurement we get NEW information which may collapse A or B to A:

A or B -> A

The exact logic of WHY the superposition rule is there is something that IMO is not yet satisfactorily understood. Books on QM do not explain this, and hardly even argue in favour of it in any deeper sense. It's postulated, and the support is experimental. I am working on trying to find a way to show how the logic of QM is emergent as a result of self-organisation, but I'm not successful yet. I think there are reasons to believe there is a deeper reason for the superposition principle, one that can be understood in terms of the fitness of the observer. This remotely connects to your interest in biology, since there may be game theoretic angles to this, where one can argue that quantum logic is expected to develop in a world of random logic. There are several papers that find that quantum games are in a certain sense more efficient than classical games. But as far as I know, no one has satisfactorily "explained" this. The standard procedure is to postulate it, and then the motivation is experimental support for the constructed theory. So IMO the superposition principle is the OR operator of adding information.
The question is then why it has the properties it has, i.e. what is the logic of the quantum logic? Not sure if that helps though? I think sometimes it's easier to understand something once you realize that no one else really understands it 100% either. To understand what QM says and does, and to learn how to use it, is one thing. But to understand why it is plausible is much harder! /Fredrik
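To make the difference between the classical and the quantum "or" rule concrete, here is a minimal numerical sketch in Python with numpy. The Gaussian wave packets and all parameter values are just illustrative choices, not taken from any particular experiment; the point is only that superposing amplitudes produces an interference term that the classical rule for adding probabilities has no counterpart for.

```python
import numpy as np

# Discretized wavefunctions for two alternatives A and B on a 1-D grid (hbar = 1).
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

def packet(x0, k0):
    """Normalised Gaussian wave packet centred at x0 with momentum k0."""
    psi = np.exp(-(x - x0)**2 / 4.0) * np.exp(1j * k0 * x)
    return psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

psi_A = packet(-2.0, +3.0)
psi_B = packet(+2.0, -3.0)

# Classical "or": add the probability distributions and renormalise.
p_classical = 0.5 * np.abs(psi_A)**2 + 0.5 * np.abs(psi_B)**2

# Quantum "or": superpose the amplitudes first, THEN take |.|^2.
psi = (psi_A + psi_B) / np.sqrt(2.0)
p_quantum = np.abs(psi)**2 / (np.sum(np.abs(psi)**2) * dx)

# The difference is the interference (cross) term ~ 2 Re(psi_A* psi_B),
# which is absent from the classical combination rule.
interference = p_quantum - p_classical
print(np.max(np.abs(interference)))  # noticeably non-zero where the packets overlap
```

Both distributions integrate to one, yet they differ pointwise wherever the two packets overlap: that difference is exactly what the superposition principle adds on top of classical probability.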
-
There is an online preview of the very first chapter of Dirac's classic "The Principles of Quantum Mechanics", called "The principle of superposition" -- http://books.google.se/books?id=XehUpGiM6FIC&pg=PA1&dq=principles+of+quantum+mechanics&psp=1&source=gbs_toc_s&cad=1&sig=ACfU3U3u6e4LsWZuzUEz57qh4_1VtxMGSg If that doesn't work, just google "principles of quantum mechanics" and it's probably the first hit you get from Google Books. The point is that the first introductory chapter is in plain English, except in the last pages where he introduces the bra and ket notation. /Fredrik
-
Foodchain, considering your record of questions on QM, have you read Rovelli's Relational Quantum Mechanics?

Relational Quantum Mechanics
"I suggest that the common unease with taking quantum mechanics as a fundamental description of nature (the "measurement problem") could derive from the use of an incorrect notion, as the unease with the Lorentz transformations before Einstein derived from the notion of observer-independent time. I suggest that this incorrect notion is the notion of observer-independent state of a system (or observer-independent values of physical quantities). I reformulate the problem of the "interpretation of quantum mechanics" as the problem of deriving the formalism from a few simple physical postulates. I consider a reformulation of quantum mechanics in terms of information theory. All systems are assumed to be equivalent, there is no observer-observed distinction, and the theory describes only the information that systems have about each other; nevertheless, the theory is complete."
-- http://arxiv.org/abs/quant-ph/9609002

A large part of that paper is very conceptual, and you can read and probably appreciate much of it without math! I'd recommend it. Rovelli argues that the notion of "absolute physical state" is behind a lot of confusion. Instead he argues that the fundamental thing is relative states, where a state is a relation between observer and observed. Moreover, he argues that there are no absolute relations either. I don't agree with all his reasoning in that paper, but as a starter it's very good reading, and his emphasis on the relative nature of things is excellent IMO. I don't share his conclusion that QM is "complete" or satisfactory, but I think that's beyond the first point about relational notions that he makes. He doesn't really, as the abstract suggests, "derive the QM formalism"; at best he suggests a possible route towards such a goal. /Fredrik
-
At that age, perhaps some more hands-on experience is more enriching, to feed curiosity? There are various science kits at toy stores... chemistry boxes with sets of experiments, electronics sets, microscope sets with slides of bugs. I think I would have loved that stuff if I were 10. I recall that when my parents bought me felt tip pens for drawing, I found it much more fun to rip the pens open, take the interior out and put it in glasses of water to extract different coloured solutions. Maybe something like that will grow the motivation to learn more later. /Fredrik
-
It sure can be difficult to choose, and perhaps sometimes a sidetrack is necessary to gain perspective. I did it the other way around: I studied only physics and math, but I have studied a lot of biochemistry and molecular biology on my own. It was very enriching to add that on top of a very "dry" physics education. Perhaps you should just keep doing what you are doing, and maybe you can find your own angles to all this. Hard problems probably beg for unexpected solutions. Being broad in your interests and education is, I think, a very good thing these days, because it's hard to guess your future endeavours. /Fredrik
-
I think this is a good association, but I think it will be difficult to appreciate it without knowing the basics of standard QM first. By the basics I mean the basic structure, the supposed physical content of the axioms and idealisations QM rests on, and how this applies to real-world problems. The interesting associations here go beyond standard QM, in the sense of understanding how theories and laws themselves are selected.

What I don't find satisfactory about decoherence alone is that it uses a very complex picture (a lot of information) to explain a constrained picture, by information reduction. That, I think, is the easy part. A more interesting problem would be how to grow a complex picture from true ignorance, without relying on a background structure that has no inside justification, i.e. without a "map" of our ignorance. I think this map itself contains information, and the realistic scenario is that there is no such map. Then the question becomes: what actions would one expect from such an observer? I think the observer is gambling. And what kind of gamblers would be expected to survive and be likely to be observed? Somewhat rational decision makers, I'd say. That doesn't mean one wouldn't expect variation; variation would also be expected.

Foodchain, I would think that you might be interested in some of the QM + gravity things, and information physics, but that is even harder to read without the basics. I also have a hard time finding interesting papers on this, because these are open questions, and there are different lines of speculation out there, some more fundamental than others. I think standard QM isn't the best place to look for evolutionary ideas in physics; it's probably deeper. Some cosmological thinkers reflect over different universes where different laws apply, and ponder which are "viable" and which are not.
I take another view: I think you can make the same ponderings by, instead of considering a cosmological universe and its birth, considering the birth of an observer in an unknown environment. These may in many respects be two different views of the same thing. Which abstraction you prefer probably depends on personal history. /Fredrik
-
It is interesting to compare the formalisms of QM and classical mechanics: with seemingly small formal adjustments, introducing an i and Planck's constant, replacing the Poisson bracket with the commutator etc., you can get to QM. But the physical meaning of the various quantisation procedures is IMO not as straightforward. The mathematically heuristic methods may be nice and are often used, but for the philosophically minded they aren't always satisfactory.

As you may know, in QM one deals with the state vector (wave function). This represents the observer's information about the system. Measurement observables are special hermitian operators.

[math][A,H] := AH - HA[/math]

[math]\langle [A,H] \rangle := \langle \psi | AH - HA | \psi \rangle[/math]

The latter is to be interpreted as a scalar product in a complex vector space. The requirement of hermitian operators for observables guarantees that they have real eigenvalues (rather than complex ones).

http://en.wikipedia.org/wiki/Expectation_value_(quantum_mechanics)
http://en.wikipedia.org/wiki/Bra-ket_notation

Note that since we are talking about operators here, in general AB ≠ BA. For example, consider momentum in the position representation. Then

P = [math]-i\hbar \frac{\partial}{\partial x}[/math]

[math]\langle [P,H] \rangle = \langle \psi | -i\hbar \frac{\partial}{\partial x}H + i\hbar H \frac{\partial}{\partial x} | \psi \rangle = \langle \psi | -i\hbar \frac{\partial H}{\partial x}| \psi \rangle[/math]

Furthermore, in the position representation the scalar product looks like

[math]\langle \psi | Q | \psi \rangle = \int \psi^{*}(x)Q\psi(x)\,d^3x[/math]

So the basics here are linear spaces, vectors and operators. More specifically, one sees it as a vector space of functions; that's why you get into harmonics and Fourier bases for many systems, as a basis for the space of state functions. Note that above, A and H are not real-valued functions, they are operators acting on [math]\psi[/math]. /Fredrik
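A minimal finite-dimensional sketch of these definitions, using 2x2 spin matrices as toy operators (the choices of H, A and the state vector here are arbitrary illustrations, not tied to any particular physical system):

```python
import numpy as np

# Two-level (spin-1/2) sketch: observables are 2x2 Hermitian matrices,
# the state |psi> is a normalised complex vector (hbar = 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H = sz   # a toy Hamiltonian
A = sx   # a toy observable

psi = np.array([1.0, 1.0j]) / np.sqrt(2.0)   # some normalised state

def expect(op, psi):
    """<psi| op |psi>, a scalar product in the complex vector space."""
    return np.vdot(psi, op @ psi)   # np.vdot conjugates the first argument

# Hermitian observables have real expectation values ...
print(expect(A, psi).imag)   # ~0

# ... and in general operators do not commute: AB != BA.
comm = A @ H - H @ A         # the commutator [A, H]
print(np.allclose(comm, np.zeros((2, 2))))  # False: sx and sz do not commute

# The commutator of two Hermitian operators is anti-Hermitian, so its
# expectation value is purely imaginary (which is why -i/hbar * <[A,H]> is real).
print(expect(comm, psi))
```

The same `expect` pattern is the discrete analogue of the integral scalar product above: the operator acts on the state vector, and the bra supplies the conjugation.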
-
In standard QM the expectation value (denoted by <>) of an operator A varies with time as

[math]\frac{d\langle A\rangle}{dt} = \langle \frac{\partial A}{\partial t}\rangle - \frac{i}{\hbar} \langle [A,H]\rangle[/math]

So for an operator without explicit time dependence we have

[math]\frac{d \langle A\rangle}{dt} = -\frac{i}{\hbar} \langle[A,H]\rangle[/math]

So if A commutes with the Hamiltonian operator, then its expectation value is conserved. /Fredrik
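The conservation statement can be checked numerically on a two-level toy system (the choice of spin matrices and initial state is purely illustrative, hbar = 1): an operator commuting with H has a constant expectation value under time evolution, while a non-commuting one oscillates.

```python
import numpy as np

# Toy check: if [A, H] = 0 then <A> is constant in time, otherwise not.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H = sz
psi0 = np.array([1.0, 1.0]) / np.sqrt(2.0)  # initial state

def U(t):
    """Time-evolution operator exp(-iHt), built from the eigendecomposition of H."""
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T

def expect(op, psi):
    return np.vdot(psi, op @ psi).real

ts = np.linspace(0.0, 5.0, 200)
ez = [expect(sz, U(t) @ psi0) for t in ts]  # [sz, H] = 0  -> conserved
ex = [expect(sx, U(t) @ psi0) for t in ts]  # [sx, H] != 0 -> oscillates

print(np.ptp(ez))  # ~0: no variation at all
print(np.ptp(ex))  # order 1: for this state <sx> = cos(2t)
```

`np.ptp` (peak-to-peak) gives the total variation of each expectation value over the time window, which makes the conserved/non-conserved distinction visible in one number.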
-
I agree that you can probably take several different perspectives on this, interesting in different ways. What is a law, and what is time? And how can those things not be entangled with what a symmetry is? And suppose we know what a law is and what time is; how do we then know that this law won't need to change in the future? I'd say we don't. We are just guessing. But the "physics of guessing" is sufficiently interesting IMO. /Fredrik

It's exactly this that Smolin has reflected over in various places: the nature of physical law, and how do you consistently distinguish the notion of physical law from initial conditions? Or are they to be treated on the same level? IMO the simple answer is yes, but the difference is that the confidence (inertia) in physical law is higher, so relatively speaking the initial condition is the low-inertia part, seen from an information perspective. I think you can make a lot of ponderings about this, and I don't think it should be trivialized. Noether's theorem is nice, but it's a pretty simple thing, and it lives in a context that is not explained. /Fredrik
-
Foodchain, I think you often ask interesting questions! Swansont mentions symmetries as a technical explanation for conservation laws. This is the standard view in the standard formalisms, in classical mechanics as well as QM. Look up Noether's theorem: http://en.wikipedia.org/wiki/Noether's_theorem

But one may continue to ask where the symmetries come from, and what the "physical basis" for them is. I think your question aims to go further. From your question, I think you are probing the concept of "emergent symmetries", which different people have been elaborating in different ways. Smolin has been reflecting over that in the context of quantum gravity, for example. I for one think this is a good type of reflection. From the information/observer point of view, you would certainly ask this: symmetries or not, the pragmatic question is, what does the process of discovering a symmetry look like, from a physical point of view? And does ignorance of the symmetry affect the actions of the observer? I think it does.

In the standard case of a symmetry implying a conserved quantity, one starts with some model, some Lagrangian or something. But how is this Lagrangian form arrived at? I think that step must not be trivialized. I don't think there is a simple answer, but philosophically speaking I personally see symmetries and conserved quantities as emergent from the observer's microstructure during an evolutionary process. This would yield only something like a subjective symmetry. As for objective symmetries, which all observers in a region would agree upon, the observers need to interact to "equilibrate" at this level. I think there is a selective pressure towards this. So symmetries represent equilibrium; broken symmetries break the equilibrium.

Edit: The implicit association here, from a game theory view, is that there is a COST to being in disagreement with your environment. This is the basis for selection. So, in a sense, I agree with your association with the environment. To get a deeper understanding of emergent symmetries, the multidirectional interaction and feedback with the environment must be considered. But even here there are different ways. I think the standard decoherence view is not enough; one has to add on top of it a further issue, the limited information capacity of the observer. That's my *personal opinion* at least, I could be wrong. As for mainstream answers, I'm not sure this is a mainstream question, unless of course you are content with Noether's theorem. But I see a meaning in your question beyond that, to which there currently is no standard answer that I know of. /Fredrik
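As a concrete illustration of the standard Noether statement itself (not of the emergence question above): a 2-D Lagrangian L = m|v|²/2 - V(|r|) is invariant under rotations, so the angular momentum Lz = m(x·vy - y·vx) is the conserved charge. A small numerical sketch, with an arbitrary illustrative orbit in a V = -1/r potential and all constants set to 1:

```python
import numpy as np

def accel(r):
    """Acceleration from the central potential V = -1/r (m = 1)."""
    d = np.linalg.norm(r)
    return -r / d**3

r = np.array([1.0, 0.0])   # illustrative initial position
v = np.array([0.0, 0.8])   # illustrative initial velocity
dt = 1e-3
Lz0 = r[0] * v[1] - r[1] * v[0]   # Noether charge of rotational symmetry

# Leapfrog (kick-drift-kick) integration of the equations of motion.
for _ in range(20000):
    v = v + 0.5 * dt * accel(r)
    r = r + dt * v
    v = v + 0.5 * dt * accel(r)

Lz = r[0] * v[1] - r[1] * v[0]
print(abs(Lz - Lz0))   # ~0: the symmetry's conserved charge survives the evolution
```

The kicks change v parallel to r and the drifts change r parallel to v, so each step leaves Lz unchanged up to rounding; breaking the rotational symmetry (e.g. making V depend on direction) would break the conservation.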
-
"So what will you do if string theory is wrong?"
fredrik replied to ajb's topic in Modern and Theoretical Physics
I agree that if you see string theory as "applied mathematics" then there is nothing that prevents mathematical interest in it, whether it makes sense from the point of view of physics or not. I do not personally find the basic string theory strategy, as I know it, to be rational; it is too speculative in methodology. This is just my personal view though. The fact that fundamental problems are ignored, and the risks of this ignorance are ignored, is what I can't accept as a rational choice.

The question "should we keep researching string theory" is somewhat meaningless unless put in a larger context. In the larger context there is possibly a sea of theories, or a sea of seeds of new theories, that could also be funded. So I think the only rational way is to develop a strategy that can rate the candidates. Most probably string theory would be a player here even in my world, but some of the past domination of string theory, as I have perceived it, is not rational relative to my own thinking. Let's also reflect over how much money and brainpower has been denied to other physics threads IF string theory is wrong. A good strategy should still be able to defend that, in the sense of: it WAS the wisest decision we could make based on the info at hand. OTOH I guess that is exactly how the community does argue... the problem is that that argumentation is biased and relative too. There is no objective rationality. So what to do? I'll keep playing my game, everyone else keeps playing theirs, and we'll all see what the collective dynamical effects are. It will be enlightening regardless of the outcome. /Fredrik -
Just to finish this thread: the better algorithms identify the so-called X and B points within each cardiac cycle in the dZ/dt curve, which works for normal hearts, the time difference being LVET. It seems that, apart from the formula itself, the variability of blood resistivity with for example haematocrit and other variables is another issue that is usually ignored in the standard formula for stroke volume (SV), but which seems hard to correct for by non-invasive methods.

[math]SV = \rho_{blood} \cdot (L/Z_0)^2 \cdot LVET \cdot |dZ/dt|_{max}[/math]

L = distance between sensing electrodes. So Z0, dZ/dt and LVET are measured and computed from the electrode voltage. The other parameters are assumed constant. /Fredrik
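As a sketch, the formula can be coded up directly. The numbers in the example call below are illustrative placeholders, not patient data, and the fixed blood resistivity is itself one of the assumptions discussed above (a value around 135 ohm·cm is commonly quoted, but it varies with haematocrit):

```python
def stroke_volume(rho_blood, L, Z0, lvet, dzdt_max):
    """Kubicek-type stroke volume estimate, the formula quoted above.

    rho_blood : blood resistivity [ohm*cm] (assumed constant; varies with haematocrit)
    L         : distance between sensing electrodes [cm]
    Z0        : base thoracic impedance [ohm]
    lvet      : left ventricular ejection time [s]
    dzdt_max  : |dZ/dt|_max within the cardiac cycle [ohm/s]
    returns   : stroke volume [cm^3 = mL]
    """
    return rho_blood * (L / Z0) ** 2 * lvet * dzdt_max

# Illustrative numbers only:
sv = stroke_volume(rho_blood=135.0, L=30.0, Z0=25.0, lvet=0.30, dzdt_max=1.5)
print(sv)  # 87.48 mL with these made-up inputs
```

Since SV scales with (L/Z0)² and linearly with both LVET and |dZ/dt|max, errors in the assumed-constant parameters propagate directly into the estimate, which is exactly why the fixed-resistivity assumption matters.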
-
I see this from a physical point of view, but I think we may underestimate what can happen in complex systems in general. I have been very impressed by something as simple as a living yeast cell. You cannot "talk to a cell", but you can observe it, disturb it and see how it reacts, and be amazed that it "seems" there is a reason for everything, because there is a logic to the cell's responses that is clearly distinguishable from a random response. I played with the thought of what I would do if I were a yeast cell, and I really don't think I could do any better. So in some sense, I'm not sure my "specific intelligence" is higher than a yeast cell's. Neither do I think there is anything "divine" about intelligence whatsoever. We humans have mapped the genome of cells, mapped a lot of the enzymes and the reactions taking place in a cell, and if we are given that description and ask ourselves how the cell should behave in order to survive and reproduce, it seems the cell is doing a great job. It seems the origin of "intelligence" somehow lies in the laws of nature, by some intrinsic ability of self-organisation and selection. Surely the human brain is fantastic and impressive, no doubt about that. But somehow, so is the very structure of nature. Not only animals, but also plants and single-celled life, and even smaller physical systems, are pretty amazing. /Fredrik
-
Some papers are harder to read than others, in various ways. Some start right ahead arguing towards a particular goal, and implicitly assume that the reader is already tuned in and agrees on the relevant questions that should be asked. Other papers are technically difficult and work on an abstraction level that is also assumed to be known to the reader. Many papers seem to belong to larger research programs, and they don't outline the research program in each paper. For an amateur not knowing everything, it comes out as cryptic. The starting point and choice of reasoning seem to differ a lot.

I'm not sure if you see it from the AI view, but I am trying to merge information and physics more tightly, and to connect physical interactions with information processing, with a focus on observability. I see great AI couplings to fundamental physics, although I'm not a computer guy. The reason is that if you take QM seriously, there is a layer of information processing that covers all of physics: whatever we learn about nature and the laws of nature, this information is acquired, and the acquisition process certainly has to put constraints on things. In a "true AI", the brain or computing device of the AI must evolve too, selected by fitness. My take on that is that all measures are relative, and that even measures can be assigned physical properties like information. This may provide a way to unify the evolution of matter and life, as we normally call evolution, with a kind of evolution of the laws of physics. It could IMHO really be one coherent line of reasoning for all this. CDT does not attempt any of this as far as I can see. So if you want anything like the above, I'd keep looking. I have seen Lee Smolin express sentiments towards what I interpret as this direction in various papers, but nothing seems mature yet IMO. /Fredrik

About readability, I personally see it in two steps: you can read a paper and try to understand what the author's intentions and sentiments are, and acquire an understanding of that; the next step is, from that stance, to try to understand the logic of the tentative suggestions outlined. I am personally in the process of reading up on a few ideas: Rovelli's thinking, Penrose's thinking and some others. Often you can share the questions, see the general direction of the research and appreciate it, and still not understand, or disagree with, specific suggestions. But since these things are to a large extent open questions, I personally try to focus on the general ideas and intentions, because they are more fundamental than the specific ideas, since the specific ideas are born out of these intentions, though in a slightly random process, as is often the case with learning. If one traces this down one level, it seems to get clearer. Just reading how a particular person describes the problems is often more enlightening than listening to speculative solutions without having the context. The solution is, I think, easier to guess from the formulation of the problem than the other way around. And the formulation of the problem also determines my further motivation: if I don't agree on the questions, I would not be particularly interested in trying to answer them. /Fredrik
-
Here is a fairly readable CDT paper. If I am not mistaken(?) there was another thread around here somewhere on this, where Martin was involved?

The Emergence of Spacetime or Quantum Gravity on Your Desktop
"Is there an approach to quantum gravity which is conceptually simple, relies on very few fundamental physical principles and ingredients, emphasizes geometric (as opposed to algebraic) properties, comes with a definite numerical approximation scheme, and produces robust results, which go beyond showing mere internal consistency of the formalism? The answer is a resounding yes: it is the attempt to construct a nonperturbative theory of quantum gravity, valid on all scales, with the technique of so-called Causal Dynamical Triangulations. Despite its conceptual simplicity, the results obtained up to now are far from trivial. Most remarkable at this stage is perhaps the fully dynamical emergence of a classical background (and solution to the Einstein equations) from a nonperturbative sum over geometries, without putting in any preferred geometric background at the outset. In addition, there is concrete evidence for the presence of a fractal spacetime foam on Planckian distance scales. The availability of a computational framework provides built-in reality checks of the approach, whose importance can hardly be overestimated."
-- http://arxiv.org/PS_cache/arxiv/pdf/0711/0711.0273v2.pdf

I read it and was not convinced that it was something for ME to spend more time on. I like some of the basic motivation, but they refer to a kind of universality that I find somewhat ambiguous and not satisfactory. Perhaps I just don't get it, so I'd recommend reading it yourself. I also have issues with their choice of action. If you are to build something from nothing, you need some guide... and in a sense the action formulation is such a guide. But I sense that a lot of information gets hardcoded when the action measure is chosen. /Fredrik

From their conclusions section:

"Starting point (left of the dashed line) is the regularized form of the sum over geometries in terms of causal triangulations, which in itself is unphysical. By taking the continuum limit of this formulation (achieved by fine-tuning the bare cosmological constant to its critical value [20]), one arrives at a continuum theory of quantum gravity."

"By construction, if such a limit exists, the resulting continuum theory will not depend on many of the arbitrarily chosen regularization details, for example, the precise geometry of the building blocks and the details of the gluing rules. This implies a certain robustness of Planck-scale physics, as a consequence of the property of universality"

They seem to think that they are doing a fair sampling of the space of possible spacetimes, in some sense. What I fail to understand is how they know it's fair. IMO it looks like a guess, on a level similar to most equiprobability hypotheses that are often made: it's argued that, given no observable difference, equiprobability is the natural choice. It sounds very nice, but IMO it seems more to be something in line with: given that you don't know, any guess is as good as any other. Which is true. But it's a guess nevertheless, which means it's also subject to revision. But I suspect Martin can argue more in favour of the paper than I can. /Fredrik
-
I don't know of any papers from the top of my head but this seems to be in line with my own personal preference. I think that in a certain sense there does not need to be a contradiction between continous vs discrete structures. But I like to think of the "enumeration of possibilities" as something that ought to be described as a physical process - not a theoretical endeavour. Since I prefer to think in terms of information and learning, to START with a continous index of possibilities just seems wrong, or at best like a speculative hidden structure. My guiding principle is to imagine how a real observer, doing real interactions can LEARN about this continuum. And I suspect that the observers bounded complexity is simply unable to relate to an arbitrarily complex environment. Instead the environment must be "mapped" and indexed from the inside so to speak. In this way of thinking starting out by assuming a massive manifold is a very non-trivial way of injecting information the backdoor. I don't like it. I'm currently thinking about this myself, but is still looking. I am leaning towards the idea that the laws of physics are dynamical and self-assembled in a manner similar to learning. I don't find it sensible to consider universal degrees of freedom, I only find sense in considering observable degrees of freedom, and in that sense it also means the the number of degrees of freedom are dynamical, and can possibly by built from a simple elementa of information by some self-organising logic. Unlike the cellular automata idea, where there is some large microstructure wherein there is self-organisation, I think the microstrucutre itself is dynamical, and there is no global objective microstructure. Or rather, I see no reason to assume there is one. I do not ban it, but if it exists, it is will be emergent. I started reading up on LQG, but took a break and is now reading penrose book. I am not sure I like it. 
I found that I may not completely agree on rovelli's reasoning so my motivation for further reading his stuff dropped temporarily. As for fractal, I can relate to that if you associate the recursive definition with the self-organisation. But I doubt this rule is deterministic, I think it will be imperfect fractal. But possible perfect on average of something like that. Something like a fractal guided random walk. /Fredrik Something that has become clear is that different people see the concept of a theory in different ways. I think at least many who work with continuum models doesn't necessarily think there is a physical correspondence to the continuum. The continuum lives in the model, without mandatory physical correspondence. And that not everything in the model is observable. Some choose not to be disturbed by this, but I am disturbed. I see it as a redundancy in the model. It is in this sense, that there might be a discrete model that gives the same predictions and thus really isn't in contradiction. But with some other possibly relative, benefits. What I personally want is to take the theory more seriously. Because ultimately the theory is living in the physical world, one way or the other. It's just that on human level, vs a particle accelerator it might be that we have been able to get away with this. If you consider the "fitness" of a theory, as a viable entitry that is evolving, and possible competing with other theories, it should be suggestive that a theory with alot of redundancy or ghost degrees of freedom is possibly less fit, and is thus less likely to adapt, if you consider the adaption to be a physical process (think computation, which means there is a relation between computing time and complexity) This is also similar to my own preference for such theories. 
However, I think that since the degrees of freedom are dynamical, there may temporarily be a reason for "ghost degrees of freedom", but with time these should dissipate as their support is lost. During that time I would not expect "conservation of probability". /Fredrik
-
I've got limited knowledge of human physiology. Given that there must be calibrations for individuals etc., does anyone know the typical variability of the LVET (left ventricular ejection time) as a function of the relevant variables, assuming normal heart function? I presume LVET = f(HR, ...?) with some calibration parameters, where HR = heart rate.

I'm working on a bioimpedance application to make a non-invasive estimate of cardiac output, and the standard prescription is to put in a typical fixed LVET. But I am doubtful how sensible that is. Should I at minimum include a linear correction vs bpm? There will be simultaneous ECG measurement too. Any "typical" relations?

/Fredrik

In the unlikely event that someone else is interested, I found this old paper: "Heart rate--left ventricular ejection time relations. Variations during postural change and cardiovascular challenges", Br Heart J. 1976 December; 38(12): 1332–1338. -- http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=483176

It suggests a linear correction against bpm, with LVET dropping as bpm increases, where the constant furthermore depends on the type of activity. I think the solution is to replace the constant with a dynamical beat-to-beat evaluation of the parameter, computed from some of the available signals, possibly heart sound. /Fredrik
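For what it's worth, the linear correction discussed above can be sketched like this. The default coefficients (intercept ~413 ms, slope ~1.7 ms per bpm) are illustrative Weissler-type regression values for resting adults, not taken from the paper cited; they would need per-subject and per-activity recalibration before any real use:

```python
def lvet_estimate(hr_bpm, intercept_ms=413.0, slope_ms_per_bpm=1.7):
    """Linear LVET-vs-heart-rate correction.

    The default coefficients are illustrative assumptions only
    (Weissler-style values for resting adults); recalibrate them
    against your own ECG / heart sound data for each subject and
    activity type before relying on the estimate.
    """
    return intercept_ms - slope_ms_per_bpm * hr_bpm

# At 60 bpm this toy model gives roughly 311 ms
print(round(lvet_estimate(60), 1))
```

A beat-to-beat version would simply re-evaluate this with the instantaneous heart rate derived from the RR interval, possibly letting the intercept drift with posture or activity.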
-
If strings are vibrating all the time, then...?
fredrik replied to browndn's topic in Modern and Theoretical Physics
I'm certainly no string expert either, but it seems you're talking about "branes", like a generalisation of strings? As far as I know, I'm with ajb that if you just say "string" it refers to a 1-dimensional extended object (embedded in some background space of varying dimensionality).

A p-brane is the generalisation of a string to a p-dimensional object: a 0-brane is a point, a 1-brane is a 1-D string, a 2-brane is a surface (a 2D membrane), and so on. Like a string, these higher-dimensional objects can also be thought of as embedded in higher-dimensional spaces. But of course, if you replace the string with a membrane, I figure the constraints on the external dimensionality are expected to differ.

If we take the normal string idea of replacing points with strings, the logical continuation is to repeat the induction and ask what it means if we inflate the string to a plane, and so on. And are there brane theories related to ordinary string theories? (By somehow trading internal degrees of freedom for "space-time" degrees of freedom?)

p-brane -> (p+1)-brane

String theorists refer to part of the relations between different theories involving p-branes of different p:s as p-dualities.

A very *naive* consideration: you can picture a highly energetic oscillating string. Depending on the situation, it might not be entirely trivial to distinguish this object from a not-so-energetically vibrating membrane, because you might not be able to distinguish the oscillations of a lower-dimensional object from an extended object, if you imagine a string with the same mass or total inertia as the membrane. The plane in which you have dynamics might effectively map out another dimension. And if you try to formulate a real measurement problem of determining whether you've got a plane or a string, it seems one wouldn't expect it to be easy.

/Fredrik -
What kind of math is involved in Biology/Micro
fredrik replied to Marconis's topic in Science Education
One field that I personally find very interesting is the modelling of organisms. In theory the behaviour of an organism is a massive system of coupled chemical reactions, which are regulated by enzymes and various cellular structures, which in turn are traced down to gene expression levels. But it soon becomes obvious that writing down that massive problem as some kind of simple initial value problem simply doesn't work, for a number of reasons. Just because you seem to understand the chemistry of every single molecule of an organism doesn't mean you understand the whole thing. It's simply too complex, and complexity does limit you, so more clever modelling is needed.

Here a mix of microbiologists and computer scientists are doing a lot of modelling, where one tries to capture the behaviour of life as optimisation problems, optimising growth rates, survival and so on, given the constraints defined by the stoichiometry of the chemical network, which is reasonably well mapped out for many organisms these days. This way, it seems, one can accomplish a reasonable theoretical model of the organism at a high level, without detailed knowledge of the details. One focuses on what nature tries to accomplish, and trusts that nature will find a way; HOW it's done doesn't matter to the higher-level behaviour.

This is not unlike statistical mechanics. Clearly Newton's equations for billions of molecules would make no sense, so we do statistics. Statistics can be done on behavioural systems too. We don't know the path of every single molecule in a gas, and we don't need to. Similarly, one doesn't need to model the dynamics of gene transcription and translation with differential equations. All we need to know is what the purpose is, and conclude that it's pretty likely that a way to accomplish that purpose in the best way is found sooner or later. I suspect this field will grow in importance. 
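As a caricature of this "optimise growth under stoichiometric constraints" idea (flux balance analysis in the literature): treat growth as a linear objective under capacity constraints. The numbers and pathway yields below are made up; real flux balance analysis solves a linear program (maximise c·v subject to S·v = 0 and flux bounds) over the full stoichiometric matrix, whereas in this single-resource toy a greedy allocation happens to be optimal:

```python
def max_biomass(uptake_cap, pathways):
    """Toy 'optimise growth' model: one nutrient, several pathways.

    pathways: list of (biomass_yield_per_unit_nutrient, flux_capacity).
    With a single shared resource, feeding the highest-yield pathway
    first (fractional-knapsack style) is provably optimal; real FBA
    needs a full LP solver over the stoichiometric matrix.
    """
    biomass, remaining = 0.0, uptake_cap
    for yield_, cap in sorted(pathways, key=lambda p: -p[0]):
        use = min(cap, remaining)       # spend nutrient on this pathway
        biomass += yield_ * use
        remaining -= use
        if remaining <= 0:
            break
    return biomass

# Made-up network: uptake capped at 10 units; an efficient pathway
# (yield 1.0, capacity 6) and an overflow pathway (yield 0.5).
print(max_biomass(10.0, [(1.0, 6.0), (0.5, 20.0)]))  # 6 + 0.5*4 = 8.0
```

The point of the sketch is exactly the one made above: the model never says HOW the cell routes the flux mechanistically, only what the constraints and the objective are.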
If you can make computer simulations of cellular and organism responses, one can imagine how that will boost progress in testing medicines, by narrowing down the options that need real, laborious in vivo testing. And I suspect some numerical analysis and basic modelling skills are probably more important there than classical biology skills. Still, that's not advanced math or numerical analysis; it's quite basic. It's probably more important to learn how to "model". How do you go about making a model of anything? That's something physics students are trained in: "problem solving". But then comes the problem that it must also be a computable model. A model that can't be used to compute something in finite time is also effectively worthless. So maybe it's not the math itself, but how to apply it in useful ways to real-life problems.

I think all branches of science can learn from each other, and isolating one discipline from the others seems old-fashioned to me. I originally studied physics and math only. To be honest, I thought the bio stuff was crap and a waste of time. I liked chemistry, but just because of the smoke and fire. But I have since studied it on my own and found that the combined knowledge from various fields means more than any of them alone, so I was very wrong.

Life is complex and hard to predict, something biologists have known for a long time, though perhaps not so systematically. But it seems easy for theorists to flip over to the other end, think of the world as a system of axioms, and give the evil eye to anything that is ill defined.

I didn't like bio; I thought it was just about remembering names for flowers and insects, and I really didn't get the point. But I changed my mind. Perhaps you similarly never got the point of math, in which case everything is hard. Numbers and their relations are all over nature.

/Fredrik -
Let me pose a question. Is that a piece of toast stuck on page 138? I can't help it, but I just got this vision of someone having breakfast at the xerox machine and I can't get it off my mind. /Fredrik
-
As an outsider judging from the apparent progress of the last 50 years or so, that might seem to be the case. However, first of all, science to me isn't a job, so I couldn't care less about the things a professional HAS TO care about in order to stay in business. I'm a lucky fool. But from my subjective POV, I am not going to work at the LHC, and I also currently have enough problems to solve to keep ME personally busy for a few more years.

I am interested not only in "models of physics" as in static modelling; I'm interested in the modelling itself, as a dynamical phenomenon. I personally feel that an analysis of this is needed. And deep inside, I think this may also unravel some of the keys to solving some of the current conceptual problems in physics. I have a feeling that a lot of the "observations" of these issues, i.e. the logic of the scientific methods and modelling used in physics, are rarely analysed because it's supposedly meaningless. Here I see a lot of DATA that is simply not analysed. This data will keep at least happy morons like me busy until the LHC or the next generation of accelerators starts to produce surprises.

I think the future of physics will be very exciting. I expect the new physics to come out of a new layer of abstractions that also renews the scientific method. /Fredrik
-
Radioactive Decay is Causeless?
fredrik replied to foofighter's topic in Modern and Theoretical Physics
I don't know what John's real view is, but to me there is no implication between acknowledging a lack of cause and the permanence of that conclusion. I.e., that there is no explanation does not imply that there can't be one in a possible "future". The meaning of this is actually quite deep. To me, the trick is not to exclude the unexpected. So I see no contradiction at all between learning and acknowledging your incompleteness. By the same token that our knowledge of causes is incomplete, our knowledge of our own incompleteness is incomplete. This, OTOH, doesn't mean we will overcome this incompleteness; it just means we don't know. It does not mean we can never know.

That statement makes no sense to me. I'm perfectly comfortable with this, and I consider myself very philosophically minded.

/Fredrik -
Some quick initial comments. Yes, that's loosely speaking close to what I personally think. I should still add that these are my personal expectations; there may be others who think differently. But given that this is a "discussion" I'll continue.

That's an excellent question, and it's where the current formulation of QM is IMHO not making complete sense. In QM, one can start by postulating the existence of operators, which correspond to measurements, which correspond to projecting information. But the point you raise here is: how much information can be encapsulated by a particular observer? In normal QM there is no limit to this, as long as the measurements are compatible. This raises the issue of information capacity! This is a very important point. And IMO, this is most certainly also part of the cause of divergences. But the divergences aren't "real"; neither should they IMO need to be removed by ad hoc tricks. They simply shouldn't be there in the first place. What I'm saying is that MAYBE (no one knows yet) this is related to the excess information capacity in our models.

The way I envision this in principle is that the measurement apparatus itself encapsulates the information. In fact, what I wrote in the other thread is that I think the emergence of incompatible observables can be understood as related to this information issue. I sure can't prove this yet, but I'm working on it.

IMO it's worse than that, since we don't even know what that percentage is. But the good part is that I think this might make sense anyway. The nice part is to explain why the concepts still stick together and actually make sense, while we at the same time suggest that there are no solid references anywhere. This task is very similar to the task Einstein faced. But instead of talking about 4D "spacetime events", we are talking about distinguishable events with no fixed prior structure of dimensionality. 
And the task, IMO at least, is to understand the mathematical connections here that may suggest some expectations about what the elementary structures in this world are, i.e., what can we say of them? This is an open question; no dense assumptions are involved except the principles laid out. And what are the principal interactions that we can expect to be distinguished between these structures?

So, what I'm suggesting is that the measurement apparatus is, loosely speaking, part of the encapsulated information. So the encapsulation of information is taken seriously.

But where do you start? It's easy to get the feeling, in this sea of uncertainty, that "anything goes". What is the resolution to this "paradox", because clearly it's not the case that anything goes? IMO, you can say anything goes, but anything does NOT go with equal plausibility. So what could create stability and order out of this complete chaos is the generation of a relationally and observer-relatively defined plausibility (think probability) measure. I think this can be done, and the challenge is to find out how. This will of course be a mathematical formalism, but I don't think it can be overstated that the core problem is not technically a "mathematical problem"; it's a physical problem. Mathematics and logic are just the best human language we have to describe it with.

/Fredrik
-
The most interesting parallel, and also challenge, I personally see between physics and biology is to unify the biological evolution of life-supporting structures, later on cellular-based life, and ultimately multicellular organisms which may even grow brains. IMO, I think of this as starting at a very fundamental level, at which the principles of physics and biology etc. are necessarily the same.

I've got a university physics & math education, but not a formal bio one. However, I've studied some of that on my own, and it was very enriching to me. One of the most fascinating perspectives, IMO, is to try to understand the inside view. In biology, for example, how does "life" look from the POV of a single cell? What do the everyday problems look like? I found that the inside view is possibly quite similar to the everyday issues of a human. I need to do a lot of things: find food so I can generate free energy; sometimes make choices (which choice is most beneficial to me?); fight problems; repair the cellular structure; get rid of toxic waste and by-products; and maintain functioning logistics to make sure all of these tasks are working. If any of this fails, I will waste valuable energy, and possibly even die. So in a way a cell is an "observer" of its own environment, whose task is to survive. Survival takes many forms: many complex regulations, and of course reproduction.

It is this "inside view" that I think is the realistic view, and in this inside view, I think the "logic" takes its cleanest form! I think many of the "strange" things in QM, and the strange logic, arise because of this.

Perhaps something like that, but what I think is at least one of the important keys is that while we can, by reducing information, understand the inside view relative to the big view, that really isn't a completely satisfactory treatment! To make an interesting parallel: in GR, one considers "curved spacetime". 
One can understand this curvature if one considers the world embedded in higher dimensions. But the interesting part is to try to define the internally experienced curvature without using external structures. There is an analogy here to QM: intrinsic information vs the external bird's-eye-view information. The analogy certainly isn't clean, but I see it at least. And from the intuitive point of view, and since the problem of quantum gravity is exactly how to unify QM and GR, I find this choice of analogy particularly nice.

The usual idea of how to keep unitarity is to imagine a bigger information context, in which the originally non-unitary deviations are absorbed. But I think care should be taken when this is done, because one cannot just increase the information capacity like that. It's not physical, IMO at least. /Fredrik