

Everything posted by D H
-
You missed the point. The relativity theory developed mainly by Lorentz and Poincare did have a preferred reference frame, an aether frame. This was the frame in which light truly propagated at c. By making length contraction and time dilation axiomatic, light appears to have the same velocity in all frames. This theory was mathematically sound and agreed with observation.

To experimentally distinguish special relativity from Lorentz ether theory, the two theories must at some point predict different outcomes for some experiment. That is an impossibility: the two theories are mathematically equivalent. Special relativity says there is no preferred reference frame. LET says there is one, but you can never find it.

While the two theories are mathematically identical, they are very different conceptually. One has seemingly ad hoc and rather complex axioms (length contraction and time dilation) and predicates a metaphysical device as its basis. The other has a simple, aesthetically pleasing axiom (the laws of physics are the same in all inertial frames) and another simple axiom that is dictated by reality (the speed of light is the same for all inertial observers). Even though LET preceded special relativity, the physics world has almost unanimously settled on special relativity as the better explanation. Occam's scalpel most certainly applies here.

That's right. However, that is how we look at things in hindsight. At the time, many viewed it as evidence of an experiment badly done. More experimentation was needed (and a lot more experimentation was done).

http://www.mathpages.com/rr/s8-08/8-08.htm
http://en.wikipedia.org/wiki/Relativity_priority_dispute
-
It does, however, necessitate something mathematically indistinguishable from relativity. Lorentz and Poincare attempted to keep classical physics alive with what is now called Lorentz ether theory. LET is completely indistinguishable from special relativity in terms of predicted outcomes, but is very distinct from special relativity in terms of axioms. Length contraction and time dilation are axiomatic in LET, as is an ether that defines an absolute reference frame. Making length contraction and time dilation axiomatic is incredibly ad hoc. The big problem with the theory was the axiomatic absolute reference frame. There is no way to detect it. None.

The MM experiments did nothing of the sort. Maxwell's equations predicted the constancy of c. Michelson and Morley were diehard classical Newtonian physicists. They knew Maxwell was wrong, and set out to prove it. The MM experiment was a failure, probably the most important failed experiment in all of physics. They, and others, continued to attempt to truly prove Maxwell wrong after the failure of the MM experiment.

Actually, special relativity was accepted amazingly quickly. By the time Einstein published his paper, experimentalists had shown rather conclusively that the null results from the MM experiment were correct (the universe is not Newtonian) and had shown that Maxwell was right (light is but a small part of the electromagnetic spectrum). Einstein was a nobody in 1905, but Lorentz and Poincare were the heavy hitters of the time. The reason Einstein was not cited for special relativity in his Nobel prize was a problem of attribution much more than a problem of acceptance.
-
Help! Solving Problems Using The Normal Distribution
D H replied to JamesWatson's topic in Homework Help
This is the "we help you solve your homework problems" section, not the "we do your homework for you" section. You need to show some work before anyone will help you.
-
No. Freedom of expression, in my mind, is all about protecting the rights of truly offensive people to say truly offensive things. While freedom of expression does have limits, such as prohibitions against directly inciting violence, directly causing harm (falsely shouting fire in a crowded theater), libeling someone, ..., those limits IMHO need to be very specific and very narrow. My personal "come to Jesus moment" with regard to freedom of expression was when the Westboro Baptist Church came to Houston for the sole purpose of getting publicity in the wake of a tragedy. Did I truly despise everything they had to say, and in public? Yes. Did they have the right to do what they did? Absolutely. Regarding speech that hurts someone's tender sensibilities: Man up, you wimps.
-
Since 13 is an unlucky number, I'll add a few more hurdles:

14. Religion. How many civilizations have arisen, only to throttle any scientific advances? See this thread.
15. Peak oil. How many civilizations have arisen, only to consume all resources and send themselves back to the stone age? See multiple threads.
16. Been there, done that. How many civilizations have arisen, sent the equivalent of people to their moon, but then stopped because they have been there and done that?
17. Cyber space. How many civilizations have arisen and then stagnated because they found their equivalent of the internet far cooler than real life?

Completely off-topic: What's with the green background on indents?
-
Correction: That is Webb's Solution 22, not 51. (He only has 50 solutions, after all.) I prefer a variant of #50: They aren't there. My variant: If they are out there, they are way out there. We are, for all practical purposes, alone. IMHO, intelligent life is extremely rare, and when it does manage to beat the odds and form on some planet it is extremely short-lived.

What makes the Earth so special? Nothing in particular. What makes the winner of the $200 million lottery jackpot special? Nothing. He just got lucky. How did Earth get lucky?

We have one Sun, not two or more. Single stars are not the norm. Most star systems occur in multiples. What are the odds of an Earth-like planet maintaining a stable orbit in the habitable zone of a multi-star system for billions of years?

Our Sun is rather stable. Even amongst main sequence stars we have observed stars with much greater variability than the Sun. Until we know more about how stars behave, the relative stability of our Sun might be completely normal -- or we might be incredibly lucky.

Other nearby stars haven't wiped us out. A nearby supernova, or a not-so-nearby gamma ray burst, could spell instant death for a planet. The long-term gravitational perturbations by a nearby star could spell a drawn-out death for life on a planet. Stars much closer to the galactic core than our Sun are much more likely to suffer these catastrophic and long-term perturbations than are stars further from the core.

We have metals, and we have metals heavier than iron. Even if civilization had formed, without metals civilization could not have advanced beyond little huts and villages. First generation stars didn't produce much "metal" (elements heavier than helium), period. Metallicity in general drops with distance from the galactic core. Our Sun is at a distance that balances the catastrophic effects of proximity to the core against the lack of metallicity far from the core.

Jupiter didn't go kamikaze. Most of the exoplanets discovered to date are hot Jupiters. While that does not mean that most planetary systems have hot Jupiters (it might just mean that hot Jupiters are a whole lot easier to find), that some systems have hot Jupiters does reduce the likelihood that Earth-like planets will form.

We have a big honkin' Moon. How unlikely is that? (The answer is, we don't know; but it seems incredibly fluky.) Some conjecture that the Moon was essential for the formation and advancement of life (I'll try to dig up some references).

Complex life formed. Based on a sample size of one (the Earth), it may not be all that unlikely for simple life to form given the proper conditions. Based on that same sample size of one, the formation of complex life is a different issue. Life remained simple on the Earth for the majority of the time that life has existed.

Life didn't commit suicide. It tried. It tried to do so multiple times, perhaps. Snowball Earth resulted from life making the sky clear. The Permian-Triassic extinction may have been the result of life being a bit over-exuberant during the Permian.

We have fuel. The Earth had its Carboniferous and Permian periods, where life was a lot more productive than it is now. Imagine a planet where life managed to just barely survive but still managed to form a truly intelligent species. How would this intelligent life move into our equivalent of a modern age without sufficient hydrocarbons?

Intelligent life formed. While near-intelligent life and haphazard use of tools has independently arisen multiple times (apes, octopi, parrots, dolphins, maybe some dinosaurs), Homo habilis pretty much stands alone in its complex use of tools.

Communicative life formed. Homo sapiens talks. A species with a brain much bigger than ours that cannot formulate complex thoughts or communicate them to others is not truly intelligent. One of the biggest hurdles in any large-scale engineering endeavor such as getting people to the Moon is communication.

Life didn't kill us. It tried. Multiple times. The Antonine Plague, the Plague of Cyprian, the Black Death, the Third Pandemic, the Spanish flu, just to name a few.

We didn't kill ourselves. We've tried. Multiple times. Modern technology has made it far easier to do so. Perhaps some of us will succeed the next time around.

With all of those hurdles, it is not surprising to me that intelligent life is extremely rare.
-
The use of scare quotes in my post was implied. I made that use explicit. Thanks.
-
Baseball has rules against performance enhancing drugs. Beauty pageants don't have rules against beauty-enhancing operations. How could they? The pageant paid for her boob job. Those botox/latex/thioglycolate laden "beauties" are about as natural as the Golden Gate Bridge.
-
How Many People Here Use "Loose" When They Mean "Lose"?
D H replied to jimmydasaint's topic in The Lounge
In short, it's a contraction.

Note to self: Always make intentional mistakes when posting in a thread on grammar. That way the inadvertent mistakes come off as intentional.

There's nothing wrong with "A writer must not shift their point of view." If using "they" in the singular was good enough for Shakespeare, it's good enough for me. If that rubs you the wrong way, try "Writers must not shift their point of view." I like either a lot better than "A writer must not shift his/her point of view" (or whatever is correct now).
-
You really do need to start by writing the differential equation. Since you have not done that and since this is not homework, here it is:

[math]\frac{d^2x}{dt^2} = -\frac{G(m_1+m_2)}{|x|^3}\,x[/math]

where x is the position of one particle relative to the other, and [math]m_1[/math] and [math]m_2[/math] are the masses of the two particles.
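To make that concrete, here is a minimal sketch of integrating this equation numerically with the semi-implicit (symplectic) Euler scheme. This is purely illustrative: the gravitational parameter mu, the step size dt, and the initial conditions are made-up values, not anything from this thread.

[code]
// Minimal two-body sketch: integrates d^2x/dt^2 = -mu * x / |x|^3
// with semi-implicit (symplectic) Euler. Illustrative values only.
#include <array>
#include <cmath>
#include <cstdio>

using Vec3 = std::array<double, 3>;

// Relative acceleration: a = -mu * x / |x|^3, where mu = G*(m1 + m2).
Vec3 accel(const Vec3& x, double mu) {
    double r = std::sqrt(x[0]*x[0] + x[1]*x[1] + x[2]*x[2]);
    double k = -mu / (r*r*r);
    return {k*x[0], k*x[1], k*x[2]};
}

int main() {
    const double mu = 1.0;        // G*(m1 + m2), normalized units (assumed)
    const double dt = 1.0e-3;     // integration step size (assumed)
    Vec3 x = {1.0, 0.0, 0.0};     // initial relative position
    Vec3 v = {0.0, 1.0, 0.0};     // nonzero normal velocity: a circular orbit here
    for (int n = 0; n < 10000; ++n) {
        Vec3 a = accel(x, mu);
        for (int i = 0; i < 3; ++i) v[i] += a[i] * dt;  // kick: velocity first
        for (int i = 0; i < 3; ++i) x[i] += v[i] * dt;  // drift: then position
    }
    std::printf("x = (%g, %g, %g)\n", x[0], x[1], x[2]);
}
[/code]

A real application would use a better integrator (RK4 or a higher-order symplectic method); this is just the skeleton.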
-
As has been said many times in this thread, you need to write a differential equation. Since this appears to be homework, I am not going to tell you what the answer is until you make some kind of attempt at doing so.
-
Start with a non-zero normal component to the velocity and you get Kepler's laws. Nice and simple. Start with a zero normal component and you get something quite different and quite a bit uglier. Start with zero velocity, period (or a zero normal component and an axial velocity smaller than escape velocity), and you get something truly ugly. Think of it this way: At some point the particles will collide (or pass through one another). At that instant the gravitational acceleration is instantaneously infinite. That's an improper integral at best. The resulting 1D differential equation does have an analytic solution, but it is quite ugly.
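For reference, here is the standard parametric (cycloid) form of that ugly analytic solution for the simplest such case, release from rest at separation [math]r_0[/math]. The notation is mine: [math]\mu = G(m_1+m_2)[/math], with the parameter [math]\eta[/math] running from 0 to [math]\pi[/math]:

[math]r = \frac{r_0}{2}\left(1 + \cos\eta\right), \qquad t = \sqrt{\frac{r_0^3}{8\mu}}\,\left(\eta + \sin\eta\right)[/math]

There is no closed-form expression for r as a function of t. The collision occurs at [math]\eta = \pi[/math], giving a free-fall time of [math]t = \frac{\pi}{2}\sqrt{r_0^3/(2\mu)}[/math].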
-
This problem is why calculus was invented. You picked a singular case. Much easier is to start with an initial relative velocity vector that has a non-zero component normal to the initial relative position vector. The first step is to write a differential equation. Do you need help doing this?
-
Do we all experience time the same way?
D H replied to endless_dark's topic in Psychiatry and Psychology
How humans perceive time has nothing to do with relativity. This thread should go into psychology, which is exactly where I sent it. Time seems to pass very differently when one is in the zone in some physical activity versus when one is completely zoned out in a boring class, and when one is eight years old versus eighty.
-
You are in college. There are lots of people there who use computers to do exactly what you are talking about. Rather than asking us what a good language to learn would be, ask them. Ask your advisor. Go snooping around in labs and ask the RAs what they are using. Find out who is publishing papers in this domain at your school and ask those authors. Don't ask us. All you will do is provoke religious wars and get a bunch of wrongheaded answers.

Absolutely.

That is an insult. I am more than willing to learn new concepts. I try to learn a new language every year or so. I can easily list twenty-plus languages that I have learned over the last thirty years. That count does not include several completely unmemorable languages whose names I can no longer remember.

... and sloppily, and more or less by yourself. You are in academia, and you are in a computer science department. Sloppiness and working in very small teams are almost a given. Some elements of computer science used to address issues of reliability, maintainability, understandability, verifiability, traceability, cost, and above all, developing and sustaining a large set of knowledgeable workers. Some computer science departments still do concern themselves with such issues. Many no longer do; those boring, real-world concerns are now addressed by a rather new discipline, software engineering.

The above gets at what the real issue is. It is a modern version of Snow's Two Cultures. Bascule and I are from two very, very different cultures. I worry about reliability and all that crap. Bascule worries about slapping crap out quickly. Both concerns are crap, but hey, it's the crap we have to worry about. I worry about competing with other companies. Bascule worries about competing with others in academia. These are very different kinds of pressure that lead to very different world views. Different cultures. I worry about being able to hire scientists and engineers who have a rather limited concept of computer programming, computer science, and software engineering. Bascule worries about keeping up with the state of the art in computer science. Again, very different cultures.

I was a part of the 1980-1987 AI revival. I learned several AI languages and made some inroads into applying AI to NASA (one of my programs helped keep a Shuttle flight flying after a major on-board failure). I was also taken in by the AI winter that followed that revival. One reason for that AI winter was that most scientists and engineers could not grok Lisp, rule-based reasoning, or, heaven forbid, backward chaining. The people coming out of schools who could understand logic programming could do Blocks World just fine but were for the most part completely incompetent when it came to real-world applications. There never was a sufficient mass of people who could bridge the different cultures and produce success stories. AI went into a massive decline because of a lack of success.

Academic computer science and real-world science and engineering are very, very different cultures. Bascule comes from the former while I come from the latter.
-
This is nonsense. Thread moved to pseudoscience.
-
Probably not. Bascule hates useful languages. He has been fully indoctrinated.

A few questions to help you narrow things down, ecoli:

1. What domain are you interested in? I do not mean stochastic modeling; that is far too broad. I mean something like atmospheric modeling, chemical modeling, biological systems, ...
2. Academia or industry?
3. Do one, maybe two, languages dominate in that field? If so, you know what you eventually need to learn.

It would also behoove you to learn something of the art of computer programming. Scientists and engineers for the most part are quite lousy at programming because they have either learned it on their own or have learned it with the aid of other (equally inept) scientists and engineers.
-
Surely you jest. Those gross statistics include several programs that are at best peripheral to scientific computing. Haskell even beat C/C++ on a few of those, particularly those that are heavy on threads. When you look at the benchmarks that are not peripheral to scientific computing, you get a completely different picture. Take one that is near and dear to me: the n-body problem. C++ beats Haskell by nearly a factor of 3, Ruby by a factor of 58, and Python by a factor of 73. To boot, this is using the very simple symplectic Euler integration scheme with spherical gravity models. If you look at the Haskell and Ruby and Pascal code, it looks, gasp, procedural. How much more procedural would the Haskell code look if you had to use a non-simplistic integration scheme and spherical harmonics to represent gravity? How much slower would they be compared to a language suited to scientific computing?

Exactly, and you are not going to fight that culture. Scientists and engineers who never touch a line of code in their lives think procedurally. Scientists and engineers who program are first and foremost scientists and engineers. My employers over the past 30 years have uniformly found that it is generally a bad idea to hire computer scientists for anything but computer science type work because computer science majors, for the most part, are incapable of thinking like a physical scientist or engineer. It is a cultural thing.
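For those unfamiliar with it, the symplectic Euler scheme in that benchmark amounts to nothing more than the ordinary Euler step with the velocity update applied before the position update (the step notation below is mine):

[math]v_{n+1} = v_n + a(x_n)\,\Delta t, \qquad x_{n+1} = x_n + v_{n+1}\,\Delta t[/math]

It is only first-order accurate, just like ordinary Euler, but it approximately conserves the system's energy over long integrations, which makes it far better behaved on orbital problems.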
-
Exactly. Scientists and engineers, at least the ones I work with, do not think functionally. They tend to think procedurally -- even the ones who have never touched a line of code.

Speed is important because we do some very computationally intensive calculations. Just imagine watching the nightly news, where the weather forecaster says "Our new Ruby-based atmospheric model finally churned out an answer. We had a 50% chance of rain six months ago."

Popularity is important because we have been using computers to solve problems for fifty years. We have a lot of existing solutions, some written a long, long time ago. Switching to a different language is an extremely expensive undertaking. Doing so is only justified when (a) the new language offers a *lot* of improvements and (b) it is almost impossible to hire skilled people with knowledge of the archaic language used in the legacy systems.

Most of the FORTRAN code I encounter (e.g., atmospheric models) is anything but modern Fortran. There is a lot of FORTRAN IV code out there in the scientific world. Admit knowledge of FORTRAN (FORTRAN became Fortran with Fortran 90) and you might well be the stuckee in interfacing with (or, horrors, maintaining) that incredibly poorly written FORTRAN code.
-
Complex analysis is in a way a lot easier than real analysis. Many colleges teach complex analysis before real analysis for this very reason.
-
Modern computer languages SUCK when it comes to performance and when it comes to mathematical descriptions of physical processes such as the atmosphere. Trying to force scientists and engineers who model physical systems to think functionally is just wrong. That is not how they think. They think procedurally. FORTRAN was written for the scientific community and has evolved with continual oversight by the scientific community because it works well with the way they think. I personally do not like FORTRAN. However, in my opinion, it is better suited to how engineers and scientists, and particularly atmospheric scientists, think than any "modern" language.
-
That's the same boat I'm in. "Modern" computer languages seem to have forgotten that some of us still use computers for computing. I've never used C# (we don't do windows), and while Java is nice, it makes Matlab look fast.

My experience: Implement the same algorithm in Fortran, C, C++, Matlab, Java, and Python. Fortran and C are about the same, with Fortran just a tad faster. C++, if you are careful, and if you cache a lot of things as pointers, can be almost as fast as C (but making that happen makes the code darned ugly). Without paying attention to speed, I find C++ to be about half as fast as C. Matlab and Java are, in my experience, well over an order of magnitude slower than C, and can be much, much worse. Python is just pathetic.

An order of magnitude or more increase in computation time means that the overnight ten-thousand-case Monte Carlo simulation would take a week or more to accomplish, or (more likely) that pathetically weak arguments would have to be given for reducing the number of cases to a few hundred. The week-long machine learning analysis I once did, spread over a boatload of machines to boot: Forget it.

Efficiency is usually the last thing I worry about in scientific computing. That nasty slowdown caused by using C++ in lieu of C or Fortran can usually be mitigated by hacking at a small portion of a very small number of computationally expensive algorithms. I'll take C++ over C any day. There is no hacking around an order-of-magnitude or more slowdown.
-
Repeating the same falsehood over and over does not make it true.
-
I learned Lisp twenty-plus years ago. It changed my life, literally. That I knew Lisp, could teach others how to use it properly, and could apply it to solve some rather complex problems were motivating factors in my then-employer moving me to my current locale. (That I brought Symbolics Lisp machine #2 with me as my dowry didn't hurt ...) Lisp teaches you to think differently, and better. The best thing to do is to learn a bunch of very different languages.

I do not particularly like Python (I loathe Python), even though you can do neat things with it ... My bias is a bit personal. A coworker has attuned me to accessibility issues. I have a few Python modules that I just modified to add open/close braces as stupid comments because the Python developers reject requests to adopt braces as "overmydeadbody". I'd dump those Python modules for a more sane language if I possibly could. I absolutely loathe the whitespace indentation in Python. It's one of those coffee-stains-on-the-tray-tables issues for me.
-
I wouldn't call GPS an application of GR per se. GPS is essentially a space-age application of the techniques used by surveyors for hundreds of years. We would still be able to build a GPS system if the universe obeyed Newtonian mechanics rather than general relativity.

If that's the justification, then why not focus the research monies on those potential applications directly and be done with it? How does the economic value from these spin-offs compare to the amount of money spent?

Another justification is politics: "Nyah, nyah. My collider is bigger than your collider!" These political reasons were part of what motivated the moon race (and what continues to motivate spending on space research to some extent). The LHC is tiny compared to the size of Eurasia. Why not make a huge collider that more-or-less hugs the coast of Eurasia? Even better, make a collider that encircles the Earth! The engineering spin-offs from building the oceanic segments alone would be immense. Then again, so would the costs. At some point the law of diminishing returns will kick in. Whether we are past the point of diminishing returns with the LHC remains to be seen.

We don't know what they will find and how it will change our world. It's already been built, so the best thing to do is to use it. Stopping now would be the equivalent of throwing all the money spent to date into a big hole in the ground, much like the US did with the SSC.