Everything posted by DrRocket
-
Yes and no. It is possible to formulate much of physics in a coordinate-free manner. That is the crux of Einstein's search for a description that is "generally covariant" and his success with the formulation of general relativity in terms of tensor fields on a manifold. It is also the heart of a geometrical treatment of special relativity in terms of the Minkowski metric and invariants, which can be applied in relativistic quantum theory or in the use of generalized coordinates in Hamiltonian or Lagrangian formulations of classical mechanics. However, in order to correlate theory with actual measurements you have to impose at least a local reference frame at some juncture and crunch real numbers as reflected in that frame. Since physics is ultimately judged by the ability to predict the outcome of experiments, this cannot be avoided. An agile mind can handle both situations and switch back and forth with alacrity. The scientifically illiterate can handle neither.
-
"Graviton" is the name given to the hypothetical carrier of the gravitational force in a quantum theory of gravity. Unfortunately, no theory of quantum gravity currently exists.
-
The guts of the matter is in your first statement, that spacetime is a pseudo-Riemannian manifold. From that assertion one can proceed. The "equivalence principle", strong or weak, was, as noted by IMe, useful to Einstein in a philosophical way as he struggled to formulate general relativity. But, like "general covariance", the meaning is more than a bit murky until the formulation in terms of pseudo-Riemannian geometry is presented. The principle served its purpose, but it is now really extraneous except as it pertains to the history of the development of general relativity and its pedagogical value to some people.

Things quite often become sticky and obscure when one tries to turn the physicist's approach to physics into something formal and axiomatic. The axiomatization of physics, proposed as a problem by Hilbert in 1900, remains open. Since Hilbert proposed the problem the situation has gotten farther from, not nearer to, resolution, with the advent of both relativity and quantum mechanics.

Einstein's approach to divining the laws of nature was not axiomatic, but rather relied on a deep, nearly mystical, insight into how natural laws ought to be formulated. It was not universally successful -- witness his opposition to modern quantum theory and his failure to unify general relativity and electrodynamics. But when it worked, it worked very well indeed. This insight was the essence of Einstein's genius. Despite being relatively (pun intended) weak in mathematics, his physical intuition produced spectacular physics that has since been refined and polished by others. The refining and polishing provides clarity and obviates any logical requirement to follow Einstein's original twists and turns to reach a modern understanding of his theories.
-
Understanding how Leibniz notation can be justified as a ratio
DrRocket replied to hobz's topic in Analysis and Calculus
Oh, you can do it. The real question involves the context, and whether or not you would find a good reason to do such a thing. In the context of this thread it is quite safe to ignore xcthulhu and all of nonstandard analysis. One can do lots of things in mathematics. Some of those things turn out to be very productive. Some don't. Non-standard analysis has not turned out to be particularly productive. But some of its adherents, particularly within the logic community, just enjoy flogging a dead horse. Non-standard analysis is just not appropriate for those trying to learn basic calculus. Those who understand calculus don't need non-standard analysis either. It has become a solution in search of a problem. -
No. The frequency of a photon is essentially the energy of the photon: [math]E=h \nu[/math]. A frequency can thereby be associated with a single photon.
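As a quick worked example (the 2.0 eV figure is just an illustrative number, not from the post above): a photon carrying 2.0 eV of energy has frequency

[math]\nu = \frac{E}{h} = \frac{3.2 \times 10^{-19} \ \mathrm{J}}{6.626 \times 10^{-34} \ \mathrm{J \cdot s}} \approx 4.8 \times 10^{14} \ \mathrm{Hz}[/math]

which lies in the visible (red-orange) part of the spectrum.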
-
Understanding how Leibniz notation can be justified as a ratio
DrRocket replied to hobz's topic in Analysis and Calculus
A couple of decades +. -
Frame of Reference as Subject in Subjective Idealism
DrRocket replied to owl's topic in General Philosophy
General relativity does not in general admit spacelike slices. Such slices, commonly found in discussions of cosmology, are dependent on additional assumptions of homogeneity and isotropy. Those assumptions, the "cosmological principle", permit a decomposition as a one-parameter foliation of spacetime by spacelike hypersurfaces and give rise to global notions of "time" and "space" that are useful in cosmology. However, in general, curved spacetimes need not admit any such decomposition and the very notions of "time" and "space" are only local. There is no universal "reference frame" even for a given (local) observer.

The notion of a reference frame comes from special relativity, not general relativity. Special relativity itself is seen to be the localization of general relativity: GR on the tangent space at a point. Ultimately in GR one is forced to rely on those quantities that are invariant under the metric tensor, which is locally the Minkowski metric of special relativity. So the basic invariant is the "spacetime interval" (locally), which translates to arc length in the large, and for timelike curves that arc length is just proper time (times c, which is why it is convenient to choose units so that c=1).

What owl fails, repeatedly, to grasp is that general relativity embodies the ontology of spacetime, and that no non-mathematical discussion is possible, since the ontology is dependent on an understanding of pseudo-Riemannian manifolds. -
Physics, Computer Science, or Computer Engineering
DrRocket replied to tatertotaggie's topic in Science Education
You need to decide what you want to do. At TI they have a very true adage: "More than two objectives is no objectives." -
Understanding how Leibniz notation can be justified as a ratio
DrRocket replied to hobz's topic in Analysis and Calculus
Brouwer did some good work. Then he turned to intuitionism. Not that vast. I know quite a few mathematicians, having taught at three major universities and associated with people from several more. I knew one guy who specialized in non-standard analysis. He was let go. Words are simply not available to describe how little I care what the British government, or any other government, regards as mathematical proof. That said, Appel and Haken's proof of the four color theorem by reduction to a finite number of cases that were checked by computer has my admiration. However, the admirable aspect is the reduction to a finite number of cases, not the computer evaluation of the cases. -
Understanding how Leibniz notation can be justified as a ratio
DrRocket replied to hobz's topic in Analysis and Calculus
I gave you a couple of examples earlier, but you seem to be looking for some really stupid blunder based on looking at derivatives as a ratio. I don't make such blunders and so don't have a ready example. The real problem is the failure to recognize what a derivative really is.

The whole point of differential calculus is to use simple functions, linear functions, to study a much larger class of functions and to reach some deep conclusions about them. In that sense the derivative at a point is not really a number, but rather a linear operator. This notion generalizes easily to functions of several variables and to functions of a complex variable as well. In one dimension this is hidden by the fact that linear functions are just multiplication by a number.

The more general, and useful, idea is as follows. The basic idea is that you are trying to approximate a somewhat arbitrary function, near some given point, with a linear function. This is what you are doing in the case of a function of one variable, with the simplification that a linear function of one variable is just multiplication by a number -- the derivative of the function at the point in question.

The generalization to several variables goes as follows: Let [math]f: \mathbb R^n \to \mathbb R^m[/math] and [math]x_0 \in \mathbb R^n[/math]. Then [math]f[/math] is said to be differentiable at [math]x_0[/math] if there exists a linear function [math]D:\mathbb R^n \to \mathbb R^m[/math] such that

[math]f(x_0 + x) = f(x_0) + D(x) + o(x)[/math]

where

[math]\displaystyle \lim_{\|x \|\to 0} \frac {\| o(x) \|}{\| x \|} =0[/math]

Here if [math]x \in \mathbb R^k[/math], [math]x=(x_1,...,x_k)[/math], then [math] \|x \|= \sqrt {x_1^2+...+x_k^2}[/math]. If such a linear function [math]D[/math] exists it is called the derivative of [math]f[/math] at [math]x_0[/math]. [math]\|x \|[/math] is called the norm of [math]x[/math] and is a generalization of absolute value, or length.

The idea is that [math]D[/math] is the best linear approximation to [math]f[/math] near [math]x_0[/math], and the error [math]o(x)[/math] goes to 0 faster than linearly as [math]x[/math] goes to zero as a vector.

This generalizes easily to functions on complex vector spaces and to infinite-dimensional Banach spaces as well, which is useful in calculus of variations and partial differential equations. It is also central to the development of differential forms, which is the setting in which expressions like [math] dx[/math] and more generally [math]df[/math] are really formulated in a useful way. This has essentially nothing to do with non-standard analysis or "infinitesimals", but is crucial to differential geometry and modern mathematical physics.

Derivatives are approximating linear functions, not ratios.
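To make the "best linear approximation" idea concrete, here is a small numerical sketch (the function f and the point x_0 are my own choices, purely for illustration): compute the Jacobian matrix D of a map from R^2 to R^2 at a point and check that the error o(x) shrinks faster than ||x||.

[code]
import numpy as np

# Illustrative function f: R^2 -> R^2 (chosen only for this example)
def f(v):
    x, y = v
    return np.array([x**2 + y, np.sin(x) * y])

def jacobian(v):
    # The linear map D (here a 2x2 matrix) that is the derivative of f at v
    x, y = v
    return np.array([[2.0 * x,        1.0],
                     [np.cos(x) * y,  np.sin(x)]])

x0 = np.array([1.0, 2.0])
D = jacobian(x0)

# o(x) = f(x0 + x) - f(x0) - D(x) should vanish faster than ||x||
for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    x = eps * np.array([1.0, -1.0])          # approach 0 along a fixed direction
    o = f(x0 + x) - f(x0) - D @ x
    print(eps, np.linalg.norm(o) / np.linalg.norm(x))

# The printed ratios ||o(x)|| / ||x|| decrease roughly in proportion to eps,
# which is exactly the defining condition in the post above.
[/code]

The same check works verbatim in one dimension, where D is just multiplication by f'(x_0).
-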
Understanding how Leibniz notation can be justified as a ratio
DrRocket replied to hobz's topic in Analysis and Calculus
How about you show me one instance, with all terms clearly defined, in which it does work. -
Understanding how Leibniz notation can be justified as a ratio
DrRocket replied to hobz's topic in Analysis and Calculus
No, it is not a ratio. There is that "limit" thing. A "ratio of infinities" can be literally anything. So can a ratio of "infinitesimals". Try treating dy/dx as a ratio of two things that you can't even define and then proving 1) that a differentiable function is necessarily continuous or 2) the mean value theorem. The mean value theorem is at the heart of calculus and allows you to prove the "fundamental theorem of calculus".

Abraham Robinson's reasoning is just fine, but you have no idea what that means. The whole construction of the non-standard real numbers relies on the axiom of choice and the construction of objects called ultrafilters. Basically you need both the ordinary real numbers plus a lot of machinery before you can construct the non-standard reals.

By "no traction" I mean that non-standard analysis has not received much acceptance as a useful method in the mathematical community. For a short time there was a theorem in operator theory proved by non-standard techniques, and it took a bit of work by Paul Halmos to find a standard proof. Non-standard analysis is simply mostly ignored. This is not unusual. Quite often new ideas and techniques pique some early interest but then fade into oblivion when they don't live up to early promise.

Yeah, you can find examples of people on the fringe who use non-standard analysis, and who have even written elementary calculus texts based on the non-standard real numbers, but they do the student a great disservice since the student is then not prepared to follow the mainstream texts using standard techniques. If you are going to pursue non-standard analysis you should first attain a solid grasp of standard analysis. It is not a replacement.
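For reference, the standard limit argument for item 1) needs no ratio of undefined quantities. If [math]f[/math] is differentiable at [math]x_0[/math], then

[math]\displaystyle \lim_{h \to 0} \left( f(x_0+h) - f(x_0) \right) = \lim_{h \to 0} \frac{f(x_0+h)-f(x_0)}{h} \cdot h = f'(x_0) \cdot 0 = 0[/math]

so [math]f(x_0+h) \to f(x_0)[/math], which is continuity at [math]x_0[/math]. Every step is justified by the limit laws; the "ratio" appears only inside a limit whose existence is the hypothesis.
-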
Understanding how Leibniz notation can be justified as a ratio
DrRocket replied to hobz's topic in Analysis and Calculus
Unfortunately, in the real numbers there is no such thing as an "infinitesimal", so neither dy nor dx has any meaning that makes dy/dx a ratio.

[math] \frac {dy}{dx}(x_0)= \displaystyle \lim_{h \to 0} \frac {y(x_0 +h) - y(x_0)}{h}[/math]

There are ways to make sense of dy or dx (as the induced map on the tangent bundle) but they are a bit beyond introductory calculus. Similarly there are some unconventional ways to make sense of infinitesimals (using nonstandard analysis, based on the nonstandard real numbers, which requires knowledge of ultrafilters), but that is again well beyond calculus and has not gained much traction.

Thinking of dy/dx as a ratio is a sometimes useful crutch, but it is not correct and can get you into trouble on occasion. When used properly it is just a shortcut for reasoning involving the chain rule.

There are several ways to look at the dx in an integral, but at the level of calculus it is just a reminder that in a Riemann sum the multiplier for the value of the function at a point in a partition is just the length of the sub-interval. This will make more sense when you learn about measure theory or differential forms.
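To illustrate the last remark, a minimal sketch (the integrand and partition are my own choices): in a Riemann sum the "dx" is literally the width of each sub-interval.

[code]
import numpy as np

# Approximate the integral of x^2 over [0, 1] with a left-endpoint Riemann sum.
# Here "dx" is nothing mysterious: it is just the length of each sub-interval.
n = 1000
dx = 1.0 / n                            # width of every sub-interval
left_points = np.arange(n) * dx         # left endpoint of each sub-interval
riemann_sum = np.sum(left_points**2 * dx)

print(riemann_sum)   # about 0.3328, approaching the exact value 1/3 as n grows
[/code]
-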
We have previously limited the consideration to closed intervals that are symmetric about 0, hence nested closed intervals. When you take the union of an infinite number of closed intervals, the least upper bound need not be included in the union. See the example based on decimal approximations to pi, but take the intervals to be closed. Then the union of the [-q_n,q_n] is (-pi,pi). You can take open intervals (-q_n,q_n) with each q_n rational for which the union is (-pi,pi), which is not in the given class of sets, so T3 is not a topology.
-
Take a union of closed intervals. The upper bound of the union will be the least upper bound, if it exists, of the intervals in question. That least upper bound may or may not be in the union. The open interval (-r,r) can be realized as a union of intervals [-q_n,q_n], with each q_n rational, by choosing q_n converging to r, where r could be any real number, rational or irrational. If you like, let r be pi and q_n the decimal approximation of pi to n decimal places.
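A small sketch of that example (the truncation scheme is just my own way of producing the q_n): each q_n is strictly below pi, so pi itself never lands in any [-q_n, q_n], yet the union of those intervals sweeps out all of (-pi, pi).

[code]
import math

# q_n = pi truncated to n decimal places; each q_n is rational and q_n < pi
for n in range(1, 7):
    q_n = math.floor(math.pi * 10**n) / 10**n
    print(n, q_n, q_n < math.pi)    # always True: pi is irrational, so truncation loses something

# Consequence: the union of the closed intervals [-q_n, q_n] over all n
# is the OPEN interval (-pi, pi), since pi exceeds every q_n.
[/code]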
-
You are correct, I did not read closely enough. Brain fart. Unions of disjoint intervals are not intervals, and I neglected to observe that, in looking at symmetric intervals, there are no disjoint unions. Unions of symmetric open intervals about the origin are again intervals of that same form. So T2 is a topology. So are T9 and T10. The others are ruled out because unions of closed intervals can be open intervals and because irrationals can be arbitrarily approximated by rationals and vice versa.
-
A topology must be closed under finite intersections and arbitrary unions. Since the union of two intervals need not be an interval, NONE of the collections of sets that you listed are topologies. Are you asking which are bases for a topology?
-
The circuit is linear. Write and solve equations for loop or nodal currents (not a mixture of both) using Kirchhoff's laws. There will be several simultaneous linear equations to be solved. If this is for a physics class, use nodal currents, since the loop current method is not usually taught in physics courses. But in any case, more information is needed. Both the voltage source and the battery voltage could be anything as the circuit is given.
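As a purely generic illustration (the actual circuit in the thread is underspecified, so every component value below is invented): once the loop equations from Kirchhoff's voltage law are written down, they are just a small linear system.

[code]
import numpy as np

# Hypothetical two-loop resistive circuit, all values assumed for illustration:
# source V1 and resistor R1 in loop 1, resistor R2 shared between the loops,
# resistor R3 and source V2 in loop 2.  Mesh equations A @ i = v:
#   (R1 + R2) * i1 - R2 * i2        = V1
#   -R2 * i1      + (R2 + R3) * i2  = -V2
R1, R2, R3 = 100.0, 220.0, 330.0   # ohms (assumed)
V1, V2 = 9.0, 5.0                  # volts (assumed)

A = np.array([[R1 + R2, -R2],
              [-R2,      R2 + R3]])
v = np.array([V1, -V2])

loop_currents = np.linalg.solve(A, v)   # i1, i2 in amperes
print(loop_currents)
[/code]

The nodal-current version is set up the same way, just with Kirchhoff's current law at each node instead of the voltage law around each loop.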
-
Not necessarily. There is a momentum change, but dp/dt need not exist in the usual sense of calculus, since the photon does not exist until it is created, and does not accelerate. So p is a step function and dp/dt exists only in the sense of a Schwartz distribution -- the "Dirac delta". The quantum world is not Newtonian. Neither the electron nor the photon is a "little marble".
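In symbols, a sketch of what that means: if the photon's momentum jumps from 0 to [math]p_0[/math] at the emission time [math]t_0[/math], then

[math]p(t) = p_0 \, \theta(t-t_0) \qquad \Rightarrow \qquad \frac{dp}{dt} = p_0 \, \delta(t-t_0)[/math]

where [math]\theta[/math] is the Heaviside step function and [math]\delta[/math] is the Dirac delta, a distribution rather than an ordinary function.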
-
Of course.
-
The electron changes energy states and a photon of the frequency corresponding to the energy difference is emitted.
-
What "problem" ? What "force" ? Photons are never at rest. The expected speed of a photon is always c. Classically the speed is always c. An electron is an elementary particle. So far as is known it contains no other particles. Photons are not "in" the electron, except insofar as the energy of an emitted photon is reflected in a change of the energy of the emitting electron.
-
It rather depends on who is doing the understanding. General relativity is a pretty good model of gravitation. It has some limitations, and a better theory may eventually be developed, but for now it does provide a high level of understanding.
-
Implicit in your search for a "mechanism" is the assumption that this mechanism is describable in terms of something that you find familiar, presumably the usual Newtonian view of the universe. But the whole point of relativity is that the Newtonian perspective is simply wrong except as a low-speed local approximation to that which is the actual reality. There is no universal notion of "where" or "when", so you are basically screwed in your search. Yes, it is counter-intuitive when your intuition is based on the Newtonian model. That intuition is worthless in this setting. Physics is not philosophy, ordinary words are no substitute for the actual language of the subject, and that language is mathematics.
-
The most fundamental aspect of relativity, both special and general, is that the metric that determines the spacetime interval is the same for all observers; it is invariant. In special relativity this means that

[math]c^2 \Delta t^2- \Delta x^2- \Delta y^2-\Delta z^2[/math]

is the same in all inertial reference frames. In general relativity it means that the Lorentzian metric (aka inner product), which is locally

[math]<(t_1,x_1,y_1,z_1),(t_2,x_2,y_2,z_2)>=c^2t_1t_2-x_1x_2-y_1y_2-z_1z_2[/math]

is the same for all observers.

A path in spacetime for a particle or body is called its world line. The length of that (timelike) world line is the time (multiplied by c), called proper time, experienced by the body. Length is determined using the Lorentzian metric. It is a fact that a geodesic path in the Lorentzian geometry of spacetime has a length, proper time, that is a maximum among all timelike curves joining two given end points. It is also a result of general relativity that a body in free fall has a world line that is a geodesic.

So, suppose that two twins start and later meet again at coincident points in spacetime. One twin (the "stay at home" twin) remains in free fall (say, at the South Pole of the Earth). The other uses a rocket to break out of free fall, travel to a nearby star and return. The "stay at home" twin has followed a geodesic spacetime path, and therefore has experienced the maximum possible proper time. The traveling twin has followed a non-geodesic path, and is therefore younger.

There is no "when" this occurred. There is no "where" this occurred either. General relativity does not allow the comparison of clocks at different spatial points -- "time here" vs "time there" loses meaning. There are approximations, but only approximations, that permit a sort of comparison between separated clocks (approximating general relativity locally with special relativity).

There is no "mechanism" causing clocks to run differently. What GR provides is an entirely different notion of the very nature of time from the Newtonian ideal, and even something fundamentally different from special relativity. Special relativity is merely a local approximation (really the linear approximation on the tangent space) to general relativity.

Unfortunately there is no adequate way to describe all of this without use of differential geometry. For a complete explanation see Gravitation by Misner, Thorne and Wheeler.
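As a concrete illustration using only the special-relativistic (local) approximation mentioned above, with numbers chosen purely for simplicity: suppose the traveling twin moves at [math]v = 0.8c[/math] relative to the stay-at-home twin, and the round trip takes 10 years of the stay-at-home twin's proper time. The traveler's proper time is then

[math]\tau = \int \sqrt{1 - \frac{v^2}{c^2}} \, dt = 10 \ \mathrm{yr} \times \sqrt{1-0.8^2} = 6 \ \mathrm{yr}[/math]

so the traveler returns 4 years younger. In the full general-relativistic picture the same comparison is made by comparing the Lorentzian arc lengths (proper times) of the two world lines between the two meeting events, exactly as described above.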