Everything posted by timo
-
The matrix is the Lorentz transformation [math]L = \left( \begin{array}{cc} \gamma & -\beta \gamma \\ -\beta \gamma & \gamma \end{array} \right)[/math], with beta being the velocity of the moving frame in units of the speed of light (i.e. beta=0 is non-moving, beta=0.5 is moving at half the speed of light and beta=1 is light-speed) and [math]\gamma = \frac{1}{\sqrt{1-\beta ^2}}[/math] the Lorentz factor (but the name is irrelevant for the calculation). An example where a non-invertible matrix would spoil my statement: [math] \left( \begin{array}{c} 1 \\ 0 \end{array} \right) \neq \left( \begin{array}{c} 2 \\ 0 \end{array} \right)[/math] but [math] \left( \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right)\left( \begin{array}{c} 1 \\ 0 \end{array} \right) = \left( \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right) \left( \begin{array}{c} 2 \\ 0 \end{array} \right)[/math]. In the frame of O, O' moves on a world-line given by [math]P_2(x) = (x, \beta x)[/math]. The laser moves on a world-line given by [math]L_+(x) = (x, 1-x)[/math]. They meet at [math]I = \left(\frac{1}{\beta +1 }, \frac{\beta}{\beta+1} \right)[/math]. In the coordinates of the moving frame, [math] I' = LI = \dots = \frac{1}{1+\beta} \left( \frac 1\gamma, 0 \right)[/math]. That is, the time the moving observer measures between passing through the origin (time coordinate equals zero) and being hit by the laser light is [math] \frac{1}{\gamma(1+\beta)}[/math].
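The calculation above is easy to check numerically. A minimal sketch (plain Python; all names are my own choices) applies the Lorentz matrix to the meeting event and confirms the measured time:

```python
import math

def lorentz(beta):
    """2x2 Lorentz boost matrix in units with c = 1."""
    g = 1.0 / math.sqrt(1.0 - beta**2)
    return [[g, -beta * g], [-beta * g, g]]

def apply(M, v):
    """Multiply a 2x2 matrix with a 2-component (t, x) vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

beta = 0.5
gamma = 1.0 / math.sqrt(1.0 - beta**2)

# Meeting event of O' and the laser in O's coordinates
I = [1.0 / (1.0 + beta), beta / (1.0 + beta)]

# The same event in the moving frame: time 1/(gamma*(1+beta)), position 0
Ip = apply(lorentz(beta), I)
```

The space coordinate of `Ip` comes out as zero, as it should: in its own frame, O' sits at the spatial origin.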
-
Assuming I understood the setup correctly (I reduced it to 1D as Swansont suggested, but that should not have an impact other than on the nastiness of the calculation): @1) O will "see" the laser strikes on O as simultaneous (to be precise: by "see as simultaneous" I mean "assign the same time coordinate to both events"). O' will see the laser strikes on O as simultaneous, too. @2) O will see the laser strikes on O' as non-simultaneous. O' will also see the laser strikes on O' as non-simultaneous. @3) So in that sense O and O' agree. This is not exactly surprising: A coordinate tuple (let's just call it a vector for simplicity) [math]p=(t, \vec x)[/math] describing an event (like the time and position at which a laser is emitted or hits something) transforms with an invertible matrix L from one coordinate system to another: p' = Lp. So for two events with the coordinates [math]p_1[/math] and [math]p_2[/math], say the impacts of two of the lasers, the relations [math]p_1 = p_2 \Rightarrow p_1' = Lp_1 = Lp_2 = p_2'[/math] and [math]p_1 \neq p_2 \Rightarrow p_1' = Lp_1 \neq Lp_2 = p_2'[/math] hold. Note that equality of coordinates means equality of both the space and the time coordinates. I guess I have no objections.
-
I guessed so. It's just that your post could be interpreted as saying that states are fully described by a single parameter. Btw: You seem to use the terms "state function" and "equation of state" interchangeably. Typically, the terms are not presented as being two different names for the same thing - but I don't really get the difference. To quote from Atkins: Also, I think I have only met the term "equation of state" in the context of the "caloric equation of state" and the "thermal equation of state" (translations from German by me; maybe the terms do not even exist in English). Perhaps the difference in terms is historic, in that thermodynamics was first a purely empirical/experimental field ( -> equation of state ) and only later put on a solid theoretical footing ( -> state functions ). Concerning the original question: Perhaps it is easier to understand what a state function is by trying to understand what is not a state function. Take a nuclear power plant. Assume you have some water in state L (liquid). Then the water is vaporized. The vapor drives a turbine, generating electrical energy. By doing so, it cools down. Then, maybe after some additional cooling, it is put back into the reservoir (I just assume there is a water reservoir in power plants - I'm not an engineer) and is again in state L. By looking at the water in the reservoir you cannot tell how much electrical energy was produced with this water. So the electrical energy produced is not a state function. Or, as an even simpler although not typically thermodynamical example: I am at my computer now and I was there five hours ago. But that does not determine how many kilometers I walked within these five hours -> The distance I traveled is not a state function of my position.
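Path dependence of this kind can be demonstrated with an ideal gas: take two different paths between the same initial and final state and compare the work done along each. A minimal sketch (plain Python; the variable names and the arbitrary units are my own choices, not from any particular textbook):

```python
import math

# Ideal gas, p = nRT / V; work in arbitrary units with nRT = 1
nRT = 1.0
V1, V2 = 1.0, 2.0  # same start and end volume for both paths, same final T

# Path A: isothermal expansion, W = nRT * ln(V2/V1)
W_A = nRT * math.log(V2 / V1)

# Path B: isobaric expansion at p1 = nRT/V1 (gas heats up),
# then isochoric cooling back down to the original temperature.
# Only the isobaric leg does work: W = p1 * (V2 - V1)
p1 = nRT / V1
W_B = p1 * (V2 - V1)

# Same initial state, same final state - but W_A != W_B,
# so "work done" cannot be a function of the state alone.
```

Both paths connect the same two equilibrium states, yet the work differs (ln 2 versus 1 in these units), which is exactly the sense in which work, like the electrical energy in the power-plant example, is not a state function.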
-
You usually cannot describe the state of a system with a single value. The (macro-) state of an ideal gas can be described by the volume, the number of particles and the total kinetic energy, for example. None of these three parameters can be omitted.
-
I think that according to current knowledge (both on elementary particles and on bound states) the answer is "no". Electrons don't decay. Protons don't do so in any measurable way. Even if they did, I am not sure that would imply that all nuclei decay. Free neutrons do decay, but iron nuclei (which are said to contain neutrons) don't.
-
n=4 makes more sense but I still doubt it. In fact, I have a good idea what went wrong this time so just go through your calculation step by step again. I had my username changed by asking an admin in the chat.
-
Your units don't even match. Btw, to have more than one letter in the exponent, enclose the exponent in curly brackets, i.e. 10^{-12} instead of 10^-12.
-
With "magnitude of" I meant the absolute value of the binding energy, a positive value. Seems like I picked the wrong word. You can formally use the other equation and set one of the ns to infinity. That will give you the same result except that it is harder to write down correctly (how do you divide something by infinity? Is infinity even a sensible natural number? ...) and seems completely unmotivated to me.
-
Isn't the ionization energy simply the magnitude of the binding energy? The binding energy goes with the so-called principal quantum number n as something/n² (probably like your 2nd equation). So yes, you can get n from that and then work out all states belonging to that n.
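For a hydrogen-like atom, the relation between the principal quantum number and the ionization energy is easy to sketch numerically. The following assumes the textbook value of roughly 13.6 eV for the magnitude of the hydrogen ground-state binding energy (function names are my own):

```python
RYDBERG_EV = 13.6  # approximate hydrogen ground-state binding energy magnitude, in eV

def binding_energy(n):
    """Binding energy of level n in eV (negative: bound state)."""
    return -RYDBERG_EV / n**2

def n_from_ionization(e_ion_ev):
    """Invert E_ion = RYDBERG_EV / n^2 to recover the principal quantum number."""
    return round((RYDBERG_EV / e_ion_ev) ** 0.5)

# e.g. an ionization energy of 3.4 eV points to n = 2,
# from which one can then enumerate the states belonging to that n
```

The ionization energy is just the magnitude of `binding_energy(n)`, which is the point made above.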
-
I didn't expect that the books had been translated into English. I also liked the books (can't remember why, though) and they had a very good reputation among my fellow students for exam preparation. But looking at it, I don't know whether it introduces too many concepts and terms too fast for a complete beginner. Considering that I seem to see a lot of book recommendation threads, I wonder whether it would make sense to compile an sfn recommendation list (of serious textbooks) with a short comment about each book and put it up as a sticky. But then, people could just as well browse the user reviews on Amazon.
-
I do have something like an idea: http://www.scienceforums.net/forum/showthread.php?t=47739 . You made a sign error in the denominator when taking the derivative d/dv (1/(1-exp(v))), btw.
-
EDIT: After writing the following paragraphs I noticed that the expressions you wrote do not fit what I said. Maybe your expressions are wrong, or there is another obscuring step happening - but maybe there is something else called a partition function that is different from the partition function in statistical mechanics. I only know partition functions from statistical mechanics. There, the reason why the logs and the derivatives appear is that the partition function is a sum over exponential functions which have products of the observables and their conjugates (under a Legendre transformation) in their exponent. Since that probably does not help you much, let's give an example: Take a system at temperature [math] T = \frac{1}{k\beta} [/math] (the point of this equation is to define beta, which I will use later) which can have energies E. The number of possibilities for the system to have energy E is the density of states g(E). The partition function Z is [math] Z = \int_{-\infty}^{\infty} \, dE \ g(E) e^{-\beta E} [/math]. The probability that the system has an energy between [math]E_1[/math] and [math]E_2[/math] is [math]\frac 1Z \int_{E_1}^{E_2} \, dE \ g(E) e^{-\beta E}[/math], so in that sense the partition function is just a normalization constant for computing probabilities. The expectation value [math] \left< X \right> [/math] of some observable X then is [math] \left< X \right> = \frac 1Z \int_{-\infty}^{\infty} \, dE \ X(E) g(E) e^{-\beta E} [/math], i.e. the value of the observable at all energies, weighted by the probability of encountering that energy (this requires the observable to be a function of energy, of course).
Now, here comes the trick: If I want to compute the average energy of the system, the calculation can be simplified. By definition, [math] \left< E \right> = \frac 1Z \int_{-\infty}^{\infty} \, dE \ g(E) E e^{-\beta E}[/math]. Since taking a derivative of the exponential function with respect to beta would bring a factor of E in front of it, this can be rewritten as [math] = \frac 1Z \int_{-\infty}^{\infty} \, dE \ g(E) \frac{-\partial e^{-\beta E}}{\partial \beta} [/math]. Pulling the derivative in front of the integral: [math]= \frac {-1}{Z} \frac{\partial}{\partial \beta} \int_{-\infty}^{\infty} \, dE \ g(E) e^{-\beta E} [/math]. The integral of course just equals Z, so [math]= \frac {-1}{Z} \frac{\partial Z}{\partial \beta}[/math]. And using the chain rule backwards, this can be rewritten as [math] = - \frac{\partial \ln Z}{\partial \beta} [/math]. And since you are probably more used to temperatures, with [math]\frac{d\beta}{dT} = -\frac{1}{kT^2}[/math] this rewrites as [math] = k T^2 \frac{\partial \ln Z}{\partial T}[/math]. So to answer your questions from that: - The derivative appears when eliminating the variable to average over under the integral. - The logarithm appears when absorbing the normalization constant into the derivative. Of course, taking E as the variable and [math]\beta[/math] as its conjugate was just an example. You can have any other pair of variables related in this way, and the last step (rewriting into variables that do not appear as a simple product XY in the exponent) might obscure the result in favor of more experiment-related variables. But the basic scheme is always the one above.
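The identity [math]\left< E \right> = -\partial \ln Z / \partial \beta[/math] can be verified numerically for a small discrete system, where the integral over g(E) becomes a plain sum over levels. A minimal sketch (plain Python; the three energy levels and the value of beta are arbitrary choices of mine):

```python
import math

energies = [0.0, 1.0, 2.5]  # arbitrary discrete levels; the sum replaces the integral
beta = 0.7

def Z(b):
    """Partition function as a sum over Boltzmann factors."""
    return sum(math.exp(-b * E) for E in energies)

# Direct probability-weighted average of the energy
avg_E = sum(E * math.exp(-beta * E) for E in energies) / Z(beta)

# The trick: -d(ln Z)/d(beta), here via a central finite difference
h = 1e-6
avg_E_trick = -(math.log(Z(beta + h)) - math.log(Z(beta - h))) / (2 * h)

# avg_E and avg_E_trick agree to within the finite-difference error
```

The two routes agree to numerical precision, which is exactly the point of pulling the derivative and the logarithm out of the sum.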
-
You probably meant "mass", not "matter". In some contexts you can probably think of mass as non-kinetic energy.
-
That is probably not what Martin meant. "Information" is an abstract (arguably even non-physical) term. It is definitely not an object. You need objects to encode the information in. The speed at which the information can then be transported depends on the object it is encoded in. If you encode it in light (say, with the Morse alphabet), then it can travel at the speed of light. If you carve it into a rock, then it will pretty much not travel at all. There is no reason to call a cosmic gamma ray "information" while denying the same attribute to a book. Irrelevant to the point but potentially interesting: Even if information could only travel at c (say, via light pulses), you would still have the whole past light cone as the observable past, unless you additionally deny it the ability to change direction.
-
Just put ants on the balloon and be happy that your stars can even move in space now.
-
"Fusion" usually means "nuclear fusion", not chemical bonding as in Q2. But even in nuclear fusion energy is conserved.
-
What are inhomogenous, disordered and partially ordered systems?
timo replied to seriously disabled's topic in Physics
It's definitely not what I said. And I don't think it is true, either. I do not think there is a formal physical definition of "order" and "disorder" without at least some system-dependent context. What I said is that I think the section probably encompasses stuff like polymers which are arguably less ordered than a crystal lattice. It also makes kind of sense to keep the section descriptions a bit vague. The journal's aim is to publish new discoveries, after all. And keep in mind that it was just a guess of mine. -
What are inhomogenous, disordered and partially ordered systems?
timo replied to seriously disabled's topic in Physics
Without knowing it: Phys Rev B is solid state physics (very ordered systems) and material sciences (any materials?). I'd guess that this section then is for what goes into the so-called "soft matter" direction, say polymer physics, glasses, maybe even fluids. -
Bombing planes in order to "reduce CO2 emissions!!!" at least seems to make more sense than doing it because "those infidels let their women run around without veils!" or "we want to have our own state!".
-
What did you expect to see by 2010 which isn't here yet?
timo replied to bascule's topic in The Lounge
Phi: You get >100 miles per hour with standard trains (ICE or TGV), which are actually the competitors of the Transrapid (that's what you are talking about, right?). That's what seemed to be favored in Germany. I doubt that oil companies or airlines had much to do with that. I am no engineer or expert on Transrapid technology. But as so often: Qualitative statements like "the tracks are only energized on the sections the train is currently traveling over" don't help much. As soon as you plug in numbers, the energy consumption can still exceed that of a train by any amount. In this case, I think the Transrapid's energy consumption is higher than, but in the same order of magnitude as, that of an ICE. EDIT: To bring that back to the topic: I think many ideas that sound great in theory are just not so great in practice. Things that come to my mind and which I would have expected by 2010 had you asked me 20 years ago: maglev trains, fusion and holographic crystals. All those ideas are old and we've been told they will soon be standard. None of them has made it to widespread real-world application. I am even somewhat surprised that people still expect to see them. Ok: no one still expects holographic crystals as the standard for data storage, considering that "could store a TERAbyte of data" doesn't leave people breathless anymore. -
It's bad style. I would be super embarrassed to tell a journal that I am withdrawing a paper because another journal has already accepted it. I assume you wrote it alone; otherwise a co-author could have taken care of the publication process. If none of the profs wants to take the time to help you, then ask the people at lunch (assuming you go to lunch together) or some post-docs in your institute for advice. I would imagine that as long as you have a university affiliation and your paper is at least connected to some recent research papers, you do not need to worry too much about the paper being accepted (perhaps not in a high-impact journal, though). One sometimes reads about the high rejection rates of scientific journals. But take into account that these journals probably get a lot of papers submitted by people who could be residents of the sfn speculations forum, and that these people produce new papers at 2-10 times the frequency of a normal scientist.
-
The assumption that the floor is flat and perpendicular to a plummet?
-
You don't seem to fit parameters or make first approximations.
-
Any idea of how nature works is promising until you start to plug in numbers.
-
Two magnets in the same place with opposite orientation. Alternatively, two magnets with opposite orientation close to one another and then seen from a large distance. That sounds like "no", but it's the same "no" as for electric fields. For real materials, things might be a bit more complex, because "close" then means the typical distance between atoms, where matter usually behaves a bit differently than on the meter scale.
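The "seen from a large distance" part can be made quantitative: the on-axis field of a single dipole falls off as 1/r³, while the field of two opposite dipoles a small distance apart falls off faster (as 1/r⁴), so from far away the pair looks more and more like "no magnet". A minimal sketch (plain Python, constants dropped; names and values my own):

```python
def dipole_field(m, x):
    """On-axis field of a point dipole of moment m, up to a constant prefactor."""
    return 2.0 * m / x**3

def pair_field(m, d, x):
    """Two opposite dipoles, separated by d along the axis, seen at distance x."""
    return dipole_field(m, x - d / 2) + dipole_field(-m, x + d / 2)

m, d = 1.0, 0.01  # dipole moment and separation (arbitrary units)

# Compare the pair to a single dipole, near and far
ratio_near = pair_field(m, d, 1.0) / dipole_field(m, 1.0)
ratio_far = pair_field(m, d, 10.0) / dipole_field(m, 10.0)

# ratio_far << ratio_near: relative to a single magnet,
# the opposed pair fades away at large distance
```

The ratio shrinks roughly as d/x, so the cancellation is never exact at finite distance, which is why this is the same kind of "no" as for two opposite electric charges.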