Everything posted by timo
-
the physical basis of computer simulation
timo replied to Ricky's topic in Modern and Theoretical Physics
Hi Ricky, both the applications and the potential problems of computer simulations are numerous, so it's a bit hard to give you a definite statement about what you're asking (plus, I don't know your scientific background).

Finding definitions for the macroscopic quantities that also apply to the finite systems simulated on a computer is in many cases not a big deal. Some cases are obvious, like defining the density as the number of particles divided by the volume. Some take more sophisticated methods, like the entropy (I don't know the method off the top of my head) or the chemical potential (sometimes evaluated via the Widom insertion method). The method I use is essentially to measure average values of something and construct the macroscopic properties from them. For instance, statistical physics tells you that the heat capacity of a system is equal to something like <E²>-<E>² (modulo some factors of volume or so), where <...> denotes the average over a long time (more precisely: over the statistical ensemble). So if I measure the average squared energy and the average energy over some time, and hope that the time was long enough to avoid the huge number of potential pitfalls, then I have extracted a thermodynamic property from the system.
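As an illustration (a minimal sketch, not actual analysis code; the file name, the units, and the prefactor convention are my assumptions), the fluctuation formula could be evaluated like this in Python:

[code]
import numpy as np

kB = 1.0   # Boltzmann constant in simulation units (assumption)
T  = 1.0   # temperature of the simulated ensemble (assumption)

# hypothetical file with one measured energy value per line, recorded during the run
energies = np.loadtxt("energy_samples.dat")

mean_E  = energies.mean()        # <E>
mean_E2 = (energies**2).mean()   # <E^2>

# heat capacity from energy fluctuations: C = (<E^2> - <E>^2) / (kB T^2)
heat_capacity = (mean_E2 - mean_E**2) / (kB * T**2)
print(heat_capacity)
[/code]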
However, all these methods assume that your small system behaves like a real system to at least some controllable extent. This implies the following:

1. The interactions in the simulation (called the "force field" in MD simulations) and the bead unit, e.g. whether H2O is represented via three atoms, a single molecule, or is completely integrated out ("implicit solvent" in MD language), must be chosen appropriately. I don't think there are systematic rules telling you which level of detail is needed. Some people (e.g. our group) make claims from very abstract models (which we justify by believing in universality), but most experimentalists don't seem to like (or even understand) that. Still, a lot of effort is put into developing more abstract ("coarse-grained") models, either from the bottom up (by taking a more detailed system and integrating out some degrees of freedom) or by defining a more abstract system ad hoc and modifying it until it shows the desired behavior (top down). From what I hear from colleagues, some people, especially in the more biological fields, do not like coarse-grained simulations at all, since it is not guaranteed that an effect they show is really an effect that would exist in nature. In that context, the term "atomistic simulations", where each atom is assigned one object in the simulation, is sometimes mentioned. I am not a big fan of this attitude, because I feel "atomistic simulations" are just a different -arbitrary- layer of abstraction; I could just as well claim that they still completely ignore everything we know about quantum mechanics. But since that's a discussion among people more competent than me, I don't think I'm in a position to make claims about this issue.

2. The system you simulate needs to be large enough to show what you want to see. Imagine you were simulating a vibrating string by taking a piece of string, laying it across the simulation box, and using periodic boundary conditions. You might think that due to the periodic boundary conditions you had successfully mimicked an infinitely long string. That is, however, only partly true. If you describe the deviations of the string from its mean position via a Fourier transformation, you'll notice that only a certain quantized set of wavelengths can appear. Most importantly, wavelengths larger than the simulation box cannot be present. But the large-wavelength modes may be just the ones that matter most in a real vibrating string (since they have the lowest energy per amplitude). So what you have done is to systematically exclude the most important signal you might have been looking for. I highlighted the word "systematically" in the previous sentence because it gives rise to a very powerful method called "finite-size scaling": if you can find a scaling law which tells you how big an error you make at a given system size (those predictions come as proportionality relations; if you had an exact equation, you'd already be done, of course), then you can run your simulation at several system sizes and use this relation to extrapolate your results to the sought-for infinite system (see the sketch at the end of this post).

A more philosophical aspect (but one which feels very real if you work in the field) is related to the amount of data you can handle while still making a proper analysis, and the insight you can gain from that. Assume I were able to run a full QM-based (even Standard Model-based, if you want) simulation of red blood cells migrating through a vein. What do I learn from that, then? Sure, I can measure the drift velocity. But I could have measured that in an experiment as well, where I know that my system is realistic. The example is very exaggerated, of course. But as soon as the data and conclusions you extract from your simulations are limited to "we see the same thing as in experiment", you might wonder why you did the computer simulation in the first place.

Your questions are very broad and I don't know where you are coming from (in the academic sense, not where you live). But I hope that you find the points above interesting.
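Here is a minimal sketch of the finite-size scaling extrapolation mentioned above, assuming (purely for illustration) a scaling law of the form A(L) = A_inf + c/L; the system sizes and measured values are made-up placeholders:

[code]
import numpy as np

L_values = np.array([16, 32, 64, 128])          # simulated system sizes (placeholders)
results  = np.array([0.91, 0.87, 0.85, 0.84])   # observable measured at each size (placeholders)

# Fit A(L) = A_inf + c/L as a straight line in the variable 1/L;
# the intercept is the extrapolated infinite-system value A_inf.
c, A_inf = np.polyfit(1.0 / L_values, results, 1)
print("extrapolated infinite-size value:", A_inf)
[/code]
-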
Do you know that or do you just assume that because it is intuitive, Spyman?
-
I'm afraid no one can answer that for you except the person who gave you the ambiguous term. I would interpret it the way Spyman did (as [math] \frac{48}{2y}[/math]), but that's just my aesthetic taste, not because I know of a rule that says it surely cannot mean [math]\frac{48}{2}y[/math].
-
What's the physical condition for the plane being in "level flight at some height"?
-
Regardless of the quality of your idea or how interesting it is: it is not appropriate for Wikipedia. Wikipedia is not supposed to be a collection of thoughts that its authors find interesting; it is supposed to reflect mainstream knowledge, which an original idea of yours (by definition) is not. Quality of an idea is not necessarily a criterion, either (think of fascism).
-
By "the black screen" you mean the console? In Linux, you'd mark the text, then move to the location you want it to appear, and press the middle mouse button to insert it there. If that doesn't work for Windows, I can imagine that there's ways to configure it to behave that way or plugins that can be downloaded.
-
I ME: QED is a QFT. So QED and QFT are not different as in "car and bike" but as in "car and vehicle".
-
the physical basis of computer simulation
timo replied to Ricky's topic in Modern and Theoretical Physics
It is in fact absolutely not clear, and only when the simulated data agree with experimentally measured results can you extrapolate a bit and claim that other features you get in the simulation (which might not be measurable in experiment) might also be a proper representation of reality. There are a lot of potential pitfalls in computer simulations, and most claims made in them rest on intuition or hand-waving. Your question about the ergodic hypothesis (some people insist on calling it the "quasi-ergodic hypothesis", btw) is very broad; it's not quite clear what you are asking. Note that the ergodic hypothesis has its roots in statistical physics, and the problems associated with it ("how long is an infinite amount of time?", for example) are not special properties of computer simulations but general problems. -
"Nature" discusses Leonov's theories
timo replied to Leonov's topic in Modern and Theoretical Physics
I am not quite sure what the text you linked is supposed to tell me; it would probably make more sense in the context of the journal. But the only exotic physics it talks about is Supersymmetry, which is really mainstream and not at all a recent idea. The only one talking about "superunification" is you, in the comments section. Your rambling about not being properly cited is thus a bit embarrassing; I think you should try to delete your comment if you want to be taken seriously. -
early on, Gravity was radically non-Newtonian ??
timo replied to Widdekind's topic in Astronomy and Cosmology
No one describes the universe as a whole via a Schwarzschild metric, neither in its current state nor in the early universe. -
Agreed, except for the "so you are only looking at one dimension"-part which I either didn't understand or which you mis-wrote (the degrees of freedom for the 4-momenta of a 1->2 decay surely are 6-4=2 dimensions). But I thought talking about only a single jump in energy levels would make the issue easier to understand and focus on the relevant part.
-
I once shared a flat with a woman who had invented telepathy. But they stole the idea out of her head using her very own invention. So she tried to re-obtain her results, which turned out to be a problematic endeavor because they kept trying to steal her new results, too (I never quite got that part, since I thought they already knew everything). So she was in a constant struggle of writing down important scientific results and flushing them down the toilet or spreading them in trash cans all over town so that they wouldn't find them. The nights in the flat were full of the activity of writing things down and flushing them down the toilet. And apparently, they even broke into the flat from time to time to steal my food from the fridge.

Bottom line: I don't have any advice for you as long as your only problem is not being taken seriously on the Internet. I'm usually not taken as seriously as I'd like, either (as with this post, for example - the story above is not made up). But as soon as you start finding out that the physics professors you visit are all blind to the greatness of your idea, or you find out they are all part of a conspiracy, or you feel that they try to harm you, or you realize that the energy you put into promoting your idea has a serious impact on your social life or your job, then from my experience I'd say that you should not be ashamed to talk about it with a medical doctor.
-
First of all, I am not very familiar with the history of physics. So the statements that follow are meant to help you understand how the reasoning might work, not to reproduce what physicists actually thought 80 years ago (oh, and I didn't even try to listen to the podcast, in case that matters).

I don't think it is obvious at all, because there is, as far as I can see, at least one crucial piece of information missing: namely, that the decay is always between the same two energy levels. So let's assume this. Let's also assume that the particles that are created have no other means of storing energy than their mass (which is fixed) and their momentum (i.e. kinetic energy, which is variable).

Both decay products, the electron/positron and the nucleus after the decay, each have three degrees of freedom: their momenta in the x-, y- and z-directions ("degree of freedom" means an independent variable that could -at least up to this point- in principle have any value). This amounts to six degrees of freedom. However, when one assumes that energy and momentum are conserved, there are four equations (one for energy, one for each component of momentum) constraining the degrees of freedom. Only two degrees of freedom are left, then. They are the direction of the momentum of one of the particles (it doesn't matter which, but let's say we look at the electron). That implies that the magnitude of the electron's momentum is not variable, but must have some fixed value. That means that the kinetic energy of the electron must have a fixed value. And that means that the total energy of the electron must have a fixed value (the formula at the end of this post makes this explicit).

When you have three outgoing particles, you initially have nine degrees of freedom, but still only four constraints. That means that the result of something decaying into three particles has much more variety (five degrees of freedom) than that of something decaying into only two particles. In particular, the energy of the electron is not fixed but can vary. I would expect his argument went in this direction.

The mass of a particle does not put a limit on the kinetic energy it can have. In fact, in highly energetic beta decays you can even completely ignore the mass of the electron/positron and get reasonable results (this is sometimes called the "relativistic limit").
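To make the "fixed value" statement concrete, here is the standard kinematic result for the two-particle case (in units with c=1; the symbols are mine, not from the podcast): if a parent nucleus of mass M decays at rest into an electron of mass [math]m_e[/math] and a daughter nucleus of mass [math]m_N[/math], then energy and momentum conservation force

[math]E_e = \frac{M^2 + m_e^2 - m_N^2}{2M}[/math]

i.e. the electron's energy is completely determined by the three masses, exactly as the degrees-of-freedom counting above says.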
-
And what kind of help do you need exactly, i.e. where are you stuck?
-
How about plotting the function and looking at the graph?
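In case a concrete starting point helps, a minimal sketch in Python (the function and plotting range are placeholders; substitute your own):

[code]
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-5, 5, 500)   # sampling range; adjust to your region of interest
y = x**3 - 2*x + 1            # placeholder: replace with your actual function

plt.plot(x, y)
plt.axhline(0, color="gray")  # the y=0 line makes roots easy to spot
plt.show()
[/code]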
-
I have no idea what you are asking (and pardon me for not reading the link you posted; you didn't even explain why you posted it). The Newton-Raphson algorithm is supposed to do the following (please note that what I say in the following is not strictly correct, but I hope you get the idea): Assume you have a function f(x). The Newton-Raphson algorithm attempts to find an x such that f(x)=0. This is done by starting with an initial guess for x, [math]x_0[/math]. Then, you apply the Newton-Raphson step [math]x_1 = x_0 - \frac{f(x_0)}{ f'(x_0) }[/math] to get a new guess [math]x_1[/math], which hopefully is a better guess than [math]x_0[/math]. Now, to further improve this guess, you perform a Newton-Raphson step on this guess again, i.e. [math]x_2 = x_1 - \frac{f(x_1)}{ f'(x_1) } [/math]. By repeating this procedure, you successively produce better estimates for the real x. If this didn't help you, then please be a bit more detailed about what your mathematical background is and what you are actually asking.
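If it helps, here is a minimal sketch of that procedure in Python; the function f, its derivative df, and the starting guess are placeholders to replace with your own problem:

[code]
def newton_raphson(f, df, x0, tol=1e-12, max_steps=100):
    """Return an approximate root of f, starting from the guess x0."""
    x = x0
    for _ in range(max_steps):
        x_new = x - f(x) / df(x)   # the Newton-Raphson step described above
        if abs(x_new - x) < tol:   # stop once the guesses barely change
            return x_new
        x = x_new
    return x  # may not have converged; treat the result with care

# Example: f(x) = x^2 - 2 has the root sqrt(2) = 1.4142...
print(newton_raphson(lambda x: x*x - 2, lambda x: 2*x, x0=1.0))
[/code]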
-
The obvious choice seems to be attending a university course. Or, if that is not possible, getting a proper textbook (ideally one with exercises in it) and learning from that. In case you are already attending a QM course at university: prepare for the lectures at home, go to the lectures, do your homework, and ask the tutor if things are not clear or you get ideas beyond the course material. There's nothing that makes a QM lecture conceptually different from any other physics lecture.
-
It's not quite clear what you are asking for; stating what you actually want to do and writing in complete sentences might help a lot. But one promising "first thing to find" would be the derivative of your function, since it explicitly appears in the iteration scheme and is readily written down by hand.
-
Partial Differential Equations of the second order
timo replied to mooeypoo's topic in Homework Help
I'm afraid I can't tell you how to solve this; I just happen to know what the solutions look like. But maybe it helps you to know that the equation you're showing frequently appears in many physical scenarios as a "wave equation" (one possible set of solutions then being plane waves).
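For what it's worth, here is what the plane-wave remark means, assuming (my guess at the notation, since I'm not copying your equation) the standard form [math]\partial_t^2 u = c^2 \partial_x^2 u[/math]: the ansatz [math]u(x,t) = A\sin(kx - \omega t)[/math] gives [math]\partial_t^2 u = -\omega^2 u[/math] and [math]\partial_x^2 u = -k^2 u[/math], so it solves the equation whenever [math]\omega = ck[/math].
-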
Unless you can define what the speed of an electron around the nucleus is supposed to be (note that the electron is better described as a cloud around the nucleus than as a point rotating around it), you can't even say that.
-
sysD, the images are crappy in this regard because they lack the y=0 line. The trough at x=3 is exactly the negative of the peak at x=1. So after squaring the function, they look exactly the same. Note also Swansont's previous statement that
-
The first part is more trivial than you probably expect it. For the 2nd part of the question (Au=0 and Av=0), your start is indeed promising. Keep in mind what "linear" means for that one.
-
You're showing an interesting parallel to the bloodiest part of European history since World War 2. However, almost all people outside of Serbia would draw a completely different conclusion from this comparison. Speaking of "idiots" taking it to the streets just for democracy, despite an OK overall standard of living: the people of the German Democratic Republic come to my mind. And saying that the people are "happy, fed, and satisfied" is a strange statement about a country in a state of civil war.
-
Yep. Nope. Twin A would see Earth time appear to run slow during both periods: the impact of the time dilation factor gamma does not depend on whether two objects approach each other or move apart. Note that twin A would see Earth jump forward in time during the short period of turning around. Drawing an x-over-t diagram in the frame of Earth shows the symmetry; you've defined your symmetric scenario in this frame, after all. Drawing x-over-t diagrams in the frame of either twin is probably out of your reach because of the part of the trip where the twins turn around (which I expect to be rather tricky for that purpose).
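For reference, the standard formula, with v the relative speed and c the speed of light:

[math]\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}[/math]

Only the square of v enters, so gamma is the same whether the distance between the two objects grows or shrinks.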
-
I appreciate your intent to help interested people understand basic Quantum Mechanics. But I think you have a different notion of "educated understanding" than me or -more importantly- most other people. Trying to explain something when no one actively asked for it raises the bar on the minimum quality of the explanation: the explanation now must be able to compete with standard sources like textbooks or Wikipedia, because otherwise it is just noise.

"The wave function is the time evolution of a system derived from Schroedinger's equation given the Hamiltonian (the Hamiltonian describes the energy of the system) of a system" is essentially wrong. The wave function is a description of the state, the QM equivalent of "the particle is at x and has momentum p". It is the Schroedinger equation that determines the time evolution of this state (and thus the time evolution of the wave function).
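To put the correction into one (standard) formula: the wave function [math]\psi[/math] describes the state, and the Schroedinger equation, with the Hamiltonian [math]\hat{H}[/math], dictates its time evolution:

[math]i\hbar \frac{\partial \psi}{\partial t} = \hat{H}\psi[/math]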