Everything posted by Xerxes

  1. We can. This is because of the continuous isomorphism (homeomorphism) [math]U \simeq R^n[/math]. Or if you prefer, our manifold is locally indistinguishable from an open subset of [math]R^n[/math]. Yes, but a [math]C^0[/math] function is by definition a continuous function, and [math]C^1[/math] subsumes [math]C^0[/math], as I said. Yes we do - see above. If it is of class [math]C^{\infty}[/math] all functions (including coordinate functions) are continuous - no corners! Roughly speaking we are working in [math]R^n[/math], or something that "looks very like it", namely the open subset of [math]M[/math] where the homeomorphism [math]U \simeq R^n[/math] holds. It's standard (see below). I'm afraid I cannot parse this. Look, suppose that [math]f(x)=y[/math]. Then I can write [math]\frac{dy}{dx}=\frac{d(f(x))}{dx}[/math]. But the "x" in the "numerator" MUST be the same as the "x" in the denominator, so I introduce no ambiguity by writing [math]\frac{df}{dx}[/math]. This is standard. The superscripts in [math]x^1,x^2,....,x^n[/math] are just tracking indices - they do not imply a natural order. I may have [math]x=x^1,\,y=x^2,\,z=x^3[/math] or equally I may have [math]x=x^2,\,y=x^3,\,z=x^1[/math]. It doesn't matter. Well, you need to be careful. If I write, say, [math]\frac{d}{dx}(m)[/math] I really mean [math]\frac{d(m)}{dx}[/math], and this is not what you meant. What you write has no meaning. In terms of notation, if you wanted to specify a point of application, you could write [math]\frac{df}{dx}\Big|_m[/math] for [math]m \in U[/math]. The input for any functional is, by definition, a vector. The output is a Real number. What you wrote (sorry, I lost it in transcription) is not a functional. In my last post I gave you 2 functionals - [math]df[/math] and [math]dx^j[/math]. Please check that they are mappings from a vector space to the Real numbers. Oh yes. Good. Sort of, but your reasoning escapes me. If on the LHS of the above you mean [math]f(\varphi^{-1}):\varphi(U) \to \mathbb{R}[/math] or [math]f\circ \varphi^{-1}:\varphi(U) \to \mathbb{R}[/math] (they mean the same), and since [math](\varphi^{-1} \circ \varphi)U= U[/math], then how does your composite function differ from [math]f:U \to \mathbb{R}[/math] (which I gave as a definition)?
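To make the notational point concrete, here is a minimal sympy sketch; the function [math]f[/math] and the evaluation point are invented for illustration:
[code]
import sympy as sp

# If f(x) = y, then dy/dx and df/dx denote the same thing.
x = sp.symbols('x')
f = x**3 + 2*x           # a hypothetical smooth function

dfdx = sp.diff(f, x)     # df/dx, again a function of x
print(dfdx)              # 3*x**2 + 2

# Evaluated at a point of application m (here x = 1), df/dx is a plain Real number:
print(dfdx.subs(x, 1))   # 5
[/code]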
  2. Yes. In fact these are called differential operators, and are closely related to the directional derivative. They are also the closest we can get, in an arbitrary manifold, to the notion of a directed line segment that is used to define vectors in Euclidean space. Anyway, recall I wrote the property of linear independence for these bad boys as [math]\frac{\partial}{\partial x^j}x^k = \begin{cases}1 \quad j=k\\0\quad j \ne k \end{cases}[/math] Yes the K. delta is a tensor - it's called a "numerical tensor", a rather special case. Anyway, from the above, the following is immediate... If I accept these differential operators as a basis for [math]T_mM[/math] then I can write an arbitrary tangent vector as [math]v=\sum\nolimits_{j=1}^n \alpha^j \frac{\partial}{\partial x^j}[/math] so that [math]v(x^k) = \alpha^k[/math], which is unique to this vector. Anyway..... Suppose the point [math]m \in M[/math] and the space [math]C_m^{\infty}[/math] of all smooth functions [math] M \to \mathbb{R}[/math] at [math]m[/math]. Recall I defined the tangent space at [math]m[/math] as the space of mappings [math]T_mM:C_m^{\infty} \to \mathbb{R}[/math] so that [math]v(f) \in \mathbb{R}[/math]. For the mapping [math]f:M \to \mathbb{R}[/math] I now define the differential [math]df:T_mM \to \mathbb{R}[/math]. This is sometimes called the pushforward - see my post http://www.scienceforums.net/topic/93098-pushing-pulling-and-dualing/ I insist on a numerical identity [math]df(v)= v(f)[/math] for any [math]f \in C_m^{\infty}[/math] and any [math]v \in T_mM[/math]. To see why we care, let me replace the arbitrary function [math]f[/math] by the coordinate functions [math]x^j[/math] so that [math]dx^j(v)=v(x^j)[/math]. I now replace the vector [math]v \in T_mM[/math] by the basis vectors [math]\frac{\partial}{\partial x^k}[/math] so that [math]dx^j(\frac{\partial}{\partial x^k})=\frac{\partial}{\partial x^k}(x^j)[/math]. We know that the RHS is [math]\frac{\partial}{\partial x^k}(x^j)= \delta ^j_k[/math], so the LHS implies that the [math]dx^j[/math] are linearly independent. But since the basis for [math]T_mM[/math] is already complete, we have to say that the [math]dx^j[/math] are a basis for another but related vector space. This is called the dual space and is written [math]T^*_mM[/math]. Note the existence of the dual space is thus a mathematical inevitability, not a mere whim. PS Note this is not a unique situation in mathematics. Consider the space of eigenvectors - the eigenspace - obtained by the action of an operator on a vector space.
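A minimal sympy sketch of the identity [math]df(v)=v(f)[/math] and the pairing [math]dx^j(\frac{\partial}{\partial x^k})=\delta^j_k[/math]; the chart, the function and the components are invented for illustration:
[code]
import sympy as sp

# Coordinates on a hypothetical 3-dimensional chart.
x1, x2, x3 = sp.symbols('x1 x2 x3')
coords = [x1, x2, x3]

# A smooth function f in C^infinity_m (invented for illustration).
f = x1**2 * x3 + sp.sin(x2)

# A tangent vector v = sum_k alpha^k d/dx^k with components alpha = (2, 0, -1).
alpha = [2, 0, -1]
def v(g):
    """Apply the tangent vector v to the smooth function g."""
    return sum(a * sp.diff(g, c) for a, c in zip(alpha, coords))

# The defining identity df(v) = v(f): a single Real-valued expression.
print(v(f))                                   # 4*x1*x3 - x1**2

# dx^j applied to d/dx^k gives the Kronecker delta:
delta = sp.Matrix(3, 3, lambda j, k: sp.diff(coords[j], coords[k]))
print(delta)                                  # the 3x3 identity matrix
[/code]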
  3. OK, time to "man up" all readers. First the boring bit - notation. One says that a function is of class [math]C^0[/math] if it is continuous. One says it is of class [math]C^1[/math] if it is differentiable to order 1. One says it is of class [math]C^{\infty}[/math] if it is differentiable to all imaginable orders, in which case one says it is a "smooth function". I denote the space of all Real [math]C^{\infty}[/math] functions at the point [math]m \in M[/math] by [math]C^{\infty}_m[/math]. So recall from elementary calculus that, given a [math]C^1[/math] function [math]f:\mathbb{R} \to \mathbb{R}[/math] with [math]a \in \mathbb{R}[/math], then [math]\frac{df}{dx}\Big|_a[/math] is a Real number. Recall also that this can be interpreted as the slope of the tangent to the curve [math]y=f(x)[/math] at [math]a[/math]. Using this I make the following definition: For any point [math]m \in U \subsetneq M[/math] with coordinates (functions) [math]x^1,x^2,....,x^n[/math], I say a tangent vector at the point [math]m \in U \subsetneq M[/math] is an object that maps [math]C^{\infty}_m \to \mathbb{R}[/math] so that, for any [math]f \in C^{\infty}_m[/math], and since [math]m = \{x^1,x^2,...,x^n\}[/math], we may write [math]v(f)=\alpha^1\frac{\partial}{\partial x^1}f + \alpha^2\frac{\partial}{\partial x^2}f+....+\alpha^n\frac{\partial}{\partial x^n}f[/math] for scalars [math]\alpha^j[/math]. Or more succinctly [math]v(f)= \sum\nolimits^n_{j=1} \alpha^j\frac{\partial}{\partial x^j}f \in \mathbb{R}[/math]. As an illustration, recall the mapping (homeomorphism) [math]h:U \to R^n[/math] where [math]h(m)=(u^1,u^2,....,u^n)\in R^n[/math] and the projections [math]\pi_1((u^1,u^2,....,u^n))=u^1 \in \mathbb{R}[/math] and so on. Recall also I defined the coordinate functions in [math]U \subsetneq M[/math] by [math]x^j= \pi_j \circ h[/math], so the [math]x^j[/math] really are functions. So I have that [math]\frac{\partial}{\partial x^j}x^k= \delta^k_j[/math] where [math]\delta^k_j = \begin{cases}1\quad j=k\\0\quad j \ne k\end{cases}[/math]. So in fact, since this establishes linear independence, we may take the [math]\frac{\partial}{\partial x^j}[/math] to be a basis for a tangent vector space. At the point [math]m \in U \subsetneq M[/math] one calls this [math]T_mM[/math]. Good luck!
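To see that the [math]x^j[/math] really are functions, here is a tiny Python sketch of the composite [math]x^j = \pi_j \circ h[/math], with a hypothetical chart map and point:
[code]
# x^j = pi_j . h: coordinate functions are projections composed with the chart map h.

def h(m):
    """Hypothetical homeomorphism h: U -> R^3; the point and its image are invented."""
    images = {'m0': (1.0, -2.0, 0.5)}
    return images[m]

def pi(j):
    """Projection pi_j: (u^1, ..., u^n) -> u^j (j is 1-indexed)."""
    return lambda tup: tup[j - 1]

def x(j):
    """Coordinate function x^j = pi_j composed with h, a Real-valued function on U."""
    return lambda m: pi(j)(h(m))

print(x(1)('m0'), x(2)('m0'), x(3)('m0'))     # 1.0 -2.0 0.5
[/code]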
  4. I am very sorry to hear that. I cannot at present see how to proceed without a lot of it. Differential geometry - yes even in the bastard version that physicists use - involves a lot of partial derivatives. I have been quiet here recently as I have been working overseas. Home tomorrow, when I will try to work out a strategy
  5. OK, good. We have both worked hard to arrive at a very simple conclusion: if a point in our manifold "lives" jointly in 2 different "regions", then it is entitled to 2 different coordinate representations, and these must be related by a coordinate transformation. I will say this to our nearly 1000 lurkers: you have seen an example of rigorous mathematics at work, far from the hand waving of my simple (but true) statement above. wtf, I had planned to say more about the finer points of differentiable manifolds, but on reflection have decided to try and get back to the matter at hand - tensors in the context of differential geometry, since geodief stated his interest was started by an attempt to understand the General Theory. I will say no more tonight as I collided with a bottle of wine earlier, causing serious (but temporary) brain damage
  6. Wow wtf, so much to respond to. Sorry for the delay but we have been without power until now. The best I can do for now is to re-iterate my earlier post in a slightly different form. I am aware that, on forums such as this, it is considered a hanging offence to disagree with the sacred Wiki, so let us say I have confused you. Specifically FORGET the term "transition function". But I am quite willing to use the Wiki notation, as you say you prefer it...... So. We have 3 quite different mappings in operation here. The first is our homeomorphism: given some open set [math]U \subsetneq M[/math], we have [math]\varphi:U \to R^n[/math]. Being a homeomorphism it is by definition invertible. Suppose there exist 2 such open sets, say [math]U_\alpha,\,\,U_\beta[/math] with [math]U_\alpha \cap U_\beta \ne \O[/math]. In fact suppose the point [math]m \in U_\alpha \cap U_\beta[/math], so that [math]\varphi_\alpha:U_\alpha \to V \subseteq R^n[/math] and [math]\varphi_\beta:U_\beta \to W \subseteq R^n[/math]. So the composite function [math]\varphi_\beta \circ \varphi_\alpha^{-1} \equiv \tau_{\alpha,\beta}:V \to W[/math] with [math]V,\,W \subseteq R^n[/math]. One calls this an "induced mapping" (but no, [math]\varphi_\alpha^{-1}[/math] is not a pullback, it's a simple inverse). Your Wiki calls this a transition, I do not. So let's forget the term. But note that single points in [math]V,\,\,W[/math] are Real n-tuples, say [math](\alpha^1,\alpha^2,....\alpha^n)[/math] and [math](\beta^1,\beta^2,....,\beta^n)[/math], so that the image [math]\tau_{\alpha,\beta}((\alpha^1,\alpha^2,....,\alpha^n))= (\beta^1,\beta^2,....\beta^n)[/math]. The second mapping I defined as: for the point [math]m \in U_\alpha [/math], say, the image under [math]\varphi_\alpha[/math] is the n-tuple [math](\alpha^1,\alpha^2,....,\alpha^n)[/math]; likewise [math]\varphi_\beta(m)= (\beta^1,\beta^2,....,\beta^n)[/math]. Then there always exist projections [math]\pi_\alpha^1((\alpha^1,\alpha^2,....,\alpha^n))= \alpha^1[/math] and so on, likewise for the images under [math]\pi_\beta^j[/math] of the n-tuple [math](\beta^1,\beta^2,....,\beta^n)[/math]. Note that since the [math]\alpha^j[/math], say, are Real numbers, this is a mapping [math]R^n \to R[/math]. So the composite mapping (function) [math]\pi_\alpha^j \circ \varphi_\alpha \equiv x^j[/math] is a Real-valued mapping (function) [math]U_\alpha \to R[/math]; the n images under this mapping of [math]m \in U_\alpha[/math] are simply the set [math]\{\alpha^1,\alpha^2,....,\alpha^n\}[/math], and the images under the corresponding mapping of [math]m \in U_\beta[/math] are the set [math]\{\beta^1,\beta^2,....,\beta^n\}[/math], so that [math]x^j(m) = \alpha^j[/math] and [math]x'^k(m) = \beta^k[/math]. The [math]x^j,\,x'^k[/math] are coordinate functions, or simply coordinates. The coordinate transformations I referred to are simply mappings [math]\{x^1,x^2,....,x^n\} \to \{x'^1,x'^2,....,x'^n\}[/math]; they map (sets of) coordinates (functions) to (sets of) coordinates (functions) if and only if they refer to the same point in the intersection of 2 open sets. This mapping is multivariate - that is, it is NOT simply the case that, say, [math]f^1(x^1)=x'^1[/math]; rather [math]f^1(x^1,x^2,....,x^n)=x'^1[/math]. Note that the argument of [math]f^j[/math] is a set, not a tuple, appearances to the contrary. I hope this helps. It also seems I may have confused you slightly with my index notation - but first see if the above clarifies anything at all. P.S. I am generally very careful with my notation.
In particular I will always be careful to distinguish a tuple from a set
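For concreteness, here is a small Python sketch of an induced mapping [math]\tau_{\alpha,\beta} = \varphi_\beta \circ \varphi_\alpha^{-1}[/math], using two standard angle charts on the unit circle (an invented example, not taken from the discussion above):
[code]
import math

# Two overlapping charts on the unit circle:
# phi_alpha measures the angle in (-pi, pi), phi_beta the angle in (0, 2*pi).

def phi_alpha(p):
    """Chart on U_alpha = circle minus the point (-1, 0)."""
    return math.atan2(p[1], p[0])

def phi_alpha_inv(t):
    return (math.cos(t), math.sin(t))

def phi_beta(p):
    """Chart on U_beta = circle minus the point (1, 0)."""
    t = math.atan2(p[1], p[0])
    return t if t > 0 else t + 2 * math.pi

# The induced mapping tau = phi_beta after phi_alpha^{-1}, between subsets of R^1:
tau = lambda t: phi_beta(phi_alpha_inv(t))

print(tau(math.pi / 2))    # 1.5707...  (pi/2: same value on the upper overlap)
print(tau(-math.pi / 2))   # 4.7123...  (3*pi/2: shifted on the lower overlap)
[/code]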
  7. We will get to that in due course (and soon). It has to do with the difference between Euclidean geometry (algebra) and non-Euclidean geometry (diff. geom.) I will make it so, I hope. Again in due course You are not only permitted, you are encouraged to do so. You do. In my defence, I explicitly said in an earlier post that manifolds with the discrete and indiscrete topologies, being respectively not connected and not Hausdorff, were of no interest to us. But yes, I should have reiterated it. Sorry. No, you are quite right to correct me if I am unclear or wrong. I welcome it Take your time - the incline increases from here on!
  8. So you don't like my use of the term "transitive". I can live with being wrong about that. Let's move on to the really interesting stuff, closer to the spirit of the OP (remember it?) The connectedness property mandates that, for every [math]m \in M[/math], there exist at least 2 overlapping coordinate neighbourhoods containing [math]m[/math]. I write [math] m \in U \cap U'[/math]. So suppose the coordinates (functions) in [math]U[/math] are [math]\{x^1,x^2,....,x^n\}[/math] and those in [math]U'[/math] are [math]\{x'^1,x'^2,....,x'^n\}[/math]; since these are equally valid coordinates for our point, we must assume a functional relation between these 2 sets of coordinates. For full generality I write [math]f^1(x^1,x^2,....,x^n)= x'^1[/math] [math]f^2(x^1,x^2,....,x^n) = x'^2[/math] .............................. [math]f^n(x^1,x^2,....,x^n)= x'^n[/math] Or compactly [math]f^j(x^k)=x'^j[/math]. But since the numerical value of each [math]x'^j[/math] is completely determined by the [math]f^j[/math], it is customary to write this as [math]x'^j= x'^j(x^k)[/math], as ugly as it seems at first sight*. This is the coordinate transformation [math]U \to U'[/math]. And assuming an inverse, we will have quite simply [math]x^k=x^k(x'^j)[/math] for [math]U' \to U[/math]. Notice I have been careful up to this point to talk in the most general terms (with the 2 exceptions above). Later I will restrict my comments to a particular class of manifolds. * Ugly it may be, but it simplifies notation in the calculus.
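A minimal sympy sketch of such a transformation, using the familiar Cartesian-to-polar example on the punctured plane (my choice of example, purely for illustration):
[code]
import sympy as sp

# Unprimed coordinates (x^1, x^2) = (x, y), primed (x'^1, x'^2) = (r, theta).
x, y = sp.symbols('x y', real=True)
r_s, t_s = sp.symbols('r theta', positive=True)

# The transformation x'^j = x'^j(x^1, x^2): each primed coordinate depends on
# ALL the unprimed coordinates, exactly as in f^j(x^1, ..., x^n) = x'^j.
r = sp.sqrt(x**2 + y**2)
theta = sp.atan2(y, x)

# The inverse transformation x^k = x^k(x'^1, x'^2):
x_back = r_s * sp.cos(t_s)
y_back = r_s * sp.sin(t_s)

# The round trip on the overlap recovers the coordinate, as the inverse demands:
print(sp.simplify(r.subs({x: x_back, y: y_back})))   # r
[/code]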
  9. Let me just note, by way of clarification, that the Hausdorff property I just referred to is NOT transitive. Specifically, if I have 3 points [math]x,\,y,\,z[/math] with [math]x \ne y[/math] and [math]y \ne z[/math] by the Hausdorff property I gave, and writing [math]U_x[/math] for some open set containing [math]x[/math] etc., then by definition [math]U_x \cap U_y = \O[/math] and [math]U_y \cap U_z = \O[/math], but this does NOT imply that [math]U_x \cap U_z = \O[/math]. But of course if I want [math]x \ne z[/math] then I must find new open sets, say [math]V_x,\,V_z[/math], such that [math]V_x \cap V_z = \O[/math]. Clearly [math]x \in V_x,\, x \in U_x[/math] but then [math]V_x \ne U_x[/math]. As a consequence, for the point [math]m \in M[/math] (our manifold) with coordinates [math]x^1,x^2,....,x^n[/math], we may extend these coordinates to an open set [math]U_m \subsetneq M[/math]. Then [math]U_m[/math] is called a coordinate neighbourhood (of [math]m[/math]). Or just a neighbourhood.
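Here is a tiny numerical sketch of this failure of transitivity, assuming the real line with its usual topology and three invented points:
[code]
# Points x = 0, y = 1, z = 0.5. Open intervals written as (left, right) pairs.
U_x = (-0.6, 0.6)   # contains x, disjoint from U_y
U_y = (0.9, 1.1)    # contains y
U_z = (0.4, 0.8)    # contains z, disjoint from U_y

def disjoint(a, b):
    """True when two open intervals do not intersect."""
    return a[1] <= b[0] or b[1] <= a[0]

print(disjoint(U_x, U_y), disjoint(U_y, U_z))   # True True
print(disjoint(U_x, U_z))                       # False - no transitivity!

# To separate x from z we must choose NEW open sets:
V_x, V_z = (-0.2, 0.2), (0.3, 0.7)
print(disjoint(V_x, V_z))                       # True
[/code]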
  10. Yes, this is true. Yes, but every set is also closed. Yes, but they are also closed, and they are the only elements in this topology. I had rather hoped I wouldn't have to get into the finer points of topology, but I see now this is unavoidable - and not just as a consequence of the above. If we want a "nice" manifold, we prefer that it be connected and have a sensible separation property, say the so-called Hausdorff property. As to the first, I will assert - I am not alone in this! - that a topological space is connected if and only if the only sets that are both open and closed are the empty set and the space itself. The discrete topology clearly fails this test. I will further assert that, if a topological space [math]M[/math] has the Hausdorff property and there exist open sets [math]U,\,\,V[/math] with, say, [math]x \in U,\,\,y \in V[/math], then I may say that [math]x \ne y[/math] if and only if [math]U \cap V = \O[/math]. The indiscrete (or concrete) topology fails this test. So these 2 topologies, while they undeniably exist, will be of no interest to us. Be careful. Euclidean space has a metric, so do spheres and tori. We do not have one so far - so we do not have a geometry i.e. a shape. Point taken, I will try to be as intuitive as I can (though it's not really in my nature) Later.....
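A small Python sketch of the connectedness criterion on a finite point set (the set and the topologies are invented for illustration):
[code]
from itertools import combinations

def clopen_sets(S, T):
    """Return the sets in T that are both open (in T) and closed (complement in T)."""
    S = frozenset(S)
    T = {frozenset(t) for t in T}
    return [set(A) for A in T if (S - A) in T]

S = {1, 2, 3}
discrete = [set(c) for r in range(len(S) + 1) for c in combinations(S, r)]
indiscrete = [set(), S]

print(len(clopen_sets(S, discrete)))   # 8: every subset is clopen -> not connected
print(clopen_sets(S, indiscrete))      # only the empty set and S -> passes the test
[/code]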
  11. OK, so let's talk a bit about tensors in differential geometry. Recall that, in normal usage, differential geometry is the study of (possibly) non-Euclidean geometry without reference to any sort of surrounding - or embedding - space. First it is useful to know what is a manifold. No. Even firster, we need to know what is a topological space. Right. Suppose [math]S[/math] a point set - a set of abstract points. The powerset [math]\mathcal{P}(S)[/math] is simply the set formed from all possible subsets of [math]S[/math]. It is a set whose members (elements) are themselves sets. Note by the definition of a subset, the empty set [math]\O[/math] and [math]S[/math] itself are included as elements in [math]\mathcal{P}(S)[/math]. So a topology [math]T[/math] is defined on [math]S[/math] whenever [math]T[/math] is a subset of [math]\mathcal{P}(S)[/math] (that is, a set of subsets of [math]S[/math]) and the following are true
1. Arbitrary (possibly infinite) unions of elements in [math]T[/math] are in [math]T[/math]
2. Finite intersections of elements in [math]T[/math] are in [math]T[/math]
3. [math]S \in T[/math]
4. [math] \O \in T[/math]
The indivisible pairing [math]S,T[/math] is called a topological space. Note that [math]T[/math] is not uniquely defined - there are many different subsets of the powerset that qualify. Now often one doesn't care too much which particular topology is used for any particular set, and one simply says "X is a topological space". I shall do that here. Finally, elements of [math]T[/math] are called the open sets in the topological space, and the complements in [math]S,T[/math] of elements in [math]T[/math] are called closed. Ouch, this is already over-long, so briefly: a manifold [math]M[/math] is a topological space for which there exists a continuous mapping from any open set in [math]M[/math] to an open subset of some [math]R^n[/math] which has a continuous inverse. This mapping is called a homeomorphism (it's not a typo!), so that when [math]U \subseteq M[/math] one writes [math]h:U \to R^n[/math] for this, and [math]n[/math] is taken as the dimension of the manifold. Since [math]R^n \equiv R \times R \times R \times.....[/math] the homeomorphic image of [math]m \in U \subseteq M[/math] is, say, [math]h(m)= (u^1,u^2,....,u^n)[/math], a Real n-tuple. And really finally, one defines projections on each n-tuple [math]\pi_j:(u^1,u^2,....,u^n)\to u^j[/math], a Real number. So the composite function is defined to be [math]\pi_j \circ h = x^j:U \to \mathbb{R}[/math]. Elements in the set [math]\{x^k\}[/math] are called the coordinates of [math]m[/math]. They are functions, by construction
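For the curious, here is a minimal Python sketch that checks the four axioms on a finite candidate topology (for a finite [math]T[/math], closure under pairwise unions and intersections suffices; the sets are invented for illustration):
[code]
def is_topology(S, T):
    """Check the four axioms on a finite candidate topology T over the point set S."""
    S = frozenset(S)
    T = {frozenset(t) for t in T}
    if S not in T or frozenset() not in T:           # axioms 3 and 4
        return False
    for A in T:                                      # pairwise closure suffices
        for B in T:                                  # when T is finite
            if (A | B) not in T or (A & B) not in T: # axioms 1 and 2
                return False
    return True

S = {1, 2, 3}
print(is_topology(S, [set(), {1}, {1, 2}, S]))   # True: a topology on S
print(is_topology(S, [set(), {1}, {2}, S]))      # False: {1} union {2} is missing
[/code]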
  12. The answer is yes, under certain circumstances. The conventional way to define the Riemann definite integral of a function [math]f(x)[/math] over a closed interval [math][a,b][/math] is to divide this interval into a number of non-overlapping intervals [math][x_0,x_1),[x_1,x_2),....,[x_k,x_{k+1}),....,[x_{n-1},x_n][/math] where [math]a \equiv x_0 <x_1 <.....<x_n \equiv b[/math]. You form the so-called Riemann sum [math]\sum\nolimits_{k=0}^{n-1} f(\xi_k)(x_{k+1}-x_k)[/math] where [math]\xi_k[/math] denotes a point in the interval [math][x_k,x_{k+1})[/math]. Now let the number of intervals increase without bound, so that [math]x_{k+1}-x_k \to 0[/math]; then, provided the limit of the Riemann sum exists, this goes over to the integral [math]\int\nolimits_a^b f(x)\,dx[/math]
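A minimal Python sketch of the construction, taking [math]\xi_k[/math] to be the left endpoint of each subinterval and an invented integrand:
[code]
def riemann_sum(f, a, b, n):
    """Riemann sum of f over [a, b] with n equal subintervals, xi_k = left endpoint."""
    dx = (b - a) / n
    return sum(f(a + k * dx) * dx for k in range(n))

# As n grows, the sum approaches the integral of f(x) = x**2 over [0, 1] (exactly 1/3):
for n in (10, 100, 1000, 100000):
    print(n, riemann_sum(lambda x: x * x, 0.0, 1.0, n))
[/code]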
  13. Yes, you seem to be getting there. But since I am finding the reply/quote facility here extremely irritating to use (why can I not get ascii text to wrap?), I will reply to your substantive questions as follows......
1. Rank of a tensor. I use the usual notation that the rank of a tensor is equal to the number of vector spaces that enter into the tensor (outer) product. Note the somewhat confusing fact.... If [math]V[/math] is a vector space, then so is [math]V \otimes V[/math], and, since elements in a vector space are obviously vectors, then the tensor [math]v \otimes w[/math] is a vector!!
2. Dual spaces. The question of "what are they for?" may be answered for you in the following
3. Prove the relation between the action of a dual vector (aka linear functional) and the inner product. First note that, since by assumption [math]V[/math] and [math]V^*[/math] are linear spaces, it will suffice to work on basis vectors. Suppose the subset [math]\{e_j\}[/math] is an orthonormal basis for [math]V[/math]. Further suppose that [math]\{\epsilon^k\}[/math] is an arbitrary subset of [math]V^*[/math]. Then [math]\{\epsilon^k\}[/math] will be a basis for [math]V^*[/math] if and only if [math]\epsilon^k(e_j)= \delta^k_j[/math] where [math]\delta^k_j = \begin{cases}1 \quad j=k\\0 \quad j \ne k\end{cases}[/math]. Now note that if [math]g(v,w)[/math] defines an inner product on [math]V[/math], then the basis vectors are orthonormal if and only if [math]g(e_j,e_k)=\delta_{jk}[/math]. This brings the action of a dual basis on vector space bases and the inner product of basis vectors into register. Extending by linearity, this must be true for all vectors and their duals
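A small numpy sketch of point 3, assuming the standard orthonormal basis of [math]R^3[/math]:
[code]
import numpy as np

# The standard orthonormal basis e_j of V = R^3 (an illustrative choice).
E = np.eye(3)     # row j is the basis vector e_j

# The dual basis epsilon^k, realized as row covectors acting by the dot product.
Eps = np.eye(3)   # epsilon^k(v) = Eps[k] @ v

# epsilon^k(e_j) = delta^k_j:
print(np.array([[Eps[k] @ E[j] for j in range(3)] for k in range(3)]))

# And on the orthonormal basis the inner product gives g(e_j, e_k) = delta_jk,
# bringing the two pairings "into register":
print(np.array([[E[j] @ E[k] for k in range(3)] for j in range(3)]))
[/code]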
  14. Hi wtf. I would be willing to bet you know as much physics and engineering as I do, but let's see if I can give some insight...... Physics and engineering would be unthinkable without a metric, although this causes no problems to a mathematician. Specifically, a vector space is called a "metric space" if it has an inner product defined. Now an inner product is defined as a bilinear, real-valued mapping [math]b:V \times V \to \mathbb{R}[/math] (with certain obvious restrictions imposed), that is [math]b(v,w) \in \mathbb{R}[/math] where [math]v,\,w \in V[/math]. In the case that our vector space is defined over the Reals, we have that [math]b(v,w)=b(w,v)[/math]. Turn to the dual space, with [math]\varphi \in V^*[/math]. This means that for any [math]\varphi \in V^*[/math] and any [math]v \in V[/math] we have [math]\varphi(v) \in \mathbb{R}[/math]. In the case of a metric space there always exists, for each [math]v \in V[/math], some particular [math]\varphi_v[/math] with [math]\varphi_v(w) = b(v,w) \in \mathbb{R}[/math] for all [math]w \in V[/math]. And likewise, by the symmetry above, there exists a [math]\phi_w(v) =b(w,v) = b(v,w)[/math]. But writing [math]\varphi_v(w)\,\phi_w(v)[/math] as their product, we see this is just [math]\varphi_v \otimes \phi_w(w,v)[/math], so that [math]\varphi_v \otimes \phi_w \in V^* \otimes V^*[/math]. And if we expand our dual vectors as, say, [math]\varphi_v=\sum\nolimits_j \alpha_j \epsilon^j[/math] and [math] \phi_w = \sum\nolimits_k \beta_k \epsilon^k[/math], then as before we may write [math]\varphi_v \otimes \phi_w = \sum\nolimits_{jk} g_{jk} \epsilon ^j \otimes \epsilon^k[/math]; then, dropping all reference to the basis vectors, we have the components [math]\alpha_j \beta_k= g_{jk}[/math]. Therefore the [math]g_{jk}[/math] are called the components of a type (0,2) metric tensor. It is important in General Relativity (to say the least!!)
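Here is a minimal numpy sketch of the pairing [math]\varphi_v(w)=b(v,w)[/math]; the matrix [math]G[/math] and the vectors are invented for illustration:
[code]
import numpy as np

# An inner product b(v, w) = v^T G w on R^3, with a hypothetical symmetric G:
G = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 3.0]])
b = lambda v, w: v @ G @ w

# The metric attaches to each vector v the dual vector phi_v = b(v, .):
v = np.array([1.0, 2.0, 0.0])
phi_v = lambda w: b(v, w)

w = np.array([0.0, 1.0, 1.0])
print(phi_v(w), b(v, w))   # the same Real number, by construction

# The entries g_jk = G[j, k] are the components of the type (0,2) metric tensor.
[/code]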
  15. Actually, that is not what you said. In any case, I cannot parse the new claim that "there is more than one space associated with some field". What does this mean?
  16. Well, I may as well finish off my boring little tutorial. Recall I said that a type (0,2) tensor takes the mathematical form [math]\varphi \otimes \phi[/math] and is an element in the space of linear mappings [math]V^* \otimes V^*: V \times V \to \mathbb{R}[/math]. In fact there is no restriction on the "size" of the space thereby created; we may have, say, [math]V^* \otimes V^* \otimes V^* \otimes V^* \otimes.......[/math] for any finite number of dual spaces, provided only that they act on exactly the same number of spaces that enter into the Cartesian product. Using the shorthand I alluded to earlier, we may have, say, [math]A_{ijklmn}[/math] as a type (0,6) tensor. Now note that we may define the dual space of a dual space as [math](V^*)^* \equiv V^{**}[/math]. And in the case that these are finite-dimensional vector spaces, one may, by a somewhat tortuous argument, assert that [math]V^{**} = V[/math] (I cheated rather - they are not identical, but they are said to be "naturally isomorphic", so can be treated as the same). So we may have [math]V \otimes V:V^* \times V^* \to \mathbb{R}[/math] with exactly the same construction as before, so that, again in shorthand, [math]A^{jk}[/math] are the scalar components of a type (2,0) tensor. Furthermore, we can "mix and match"; we may have mixed tensors of the form [math]V^* \otimes V: V \times V^* \to \mathbb{R}[/math], once again with shorthand [math]T^j_k[/math], and so on to higher ranks. I close this sermon with 3 remarks that may (or may not) be of interest.....
1. Tensors have their own algebra, which is mostly intuitive when one realizes, as studiot hinted at, that every tensor has a representation as a matrix, with one exception......
2. ....this being tensor contraction. I will say no more than that this operation is equivalent to taking the scalar product of a vector and its dual (see the sketch below).
3. The algebra of tensors and that of tensor fields turn out to be identical, so physicists frequently talk of "a tensor" when in reality they are talking of a tensor field
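A tiny numpy sketch of remark 2, with invented components:
[code]
import numpy as np

# A mixed type (1,1) tensor T^j_k = v^j phi_k built from a vector and a dual vector:
v   = np.array([1.0, 2.0, 3.0])
phi = np.array([4.0, 0.0, -1.0])
T = np.outer(v, phi)          # T[j, k] = v[j] * phi[k]

# Contraction sets j = k and sums - the trace - returning the scalar pairing phi(v):
print(np.trace(T), phi @ v)   # both 1.0
[/code]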
  17. Well, I really cannot see that any of the above has very much to do with the topic at hand. geordieff What follows will certainly raise some questions for you - do please ask them, and I will do my best to give the simplest possible answers. First suppose a vector space [math]V[/math] with [math]v \in V[/math]. Then to any such space we may associate another vector space - called the dual space [math]V^*[/math] - which is the vector space of all linear mappings [math]V \to \mathbb{R}[/math], that is [math]V^*:V \to \mathbb{R}[/math]. Obviously then, for [math]\varphi \in V^*[/math], [math]\varphi(v) = \alpha \in \mathbb{R}[/math]. So the tensor (or direct) product of two dual spaces is written as the bilinear mapping [math]V^*\otimes V^*:V \times V\to \mathbb{R}[/math], where elements in [math]V \times V[/math] are the ordered pairs (of vectors) [math](v,w)[/math], so that, for [math]\varphi,\,\,\phi \in V^*[/math], by definition, [math]\varphi \otimes \phi(v,w)=\varphi(v)\phi(w)[/math]. The object [math]\varphi \otimes \phi[/math] is called a TENSOR. In fact it is a rank 2, type (0,2) tensor. Written in full, this is [math]\varphi \otimes \phi = (\sum\nolimits_j A_j \epsilon^j)\otimes (\sum\nolimits_k B_k \epsilon^k) = \sum\nolimits_{jk}A_j B_k \epsilon^j \otimes \epsilon^k[/math] which we can write as [math]\sum\nolimits_{jk}C_{jk} \epsilon^j \otimes \epsilon^k[/math] where the [math]A,\,B,\,C[/math] are scalars and the set [math]\{\epsilon^i\}[/math] are basis vectors for [math]V^*[/math]. The scalars [math]C_{jk}[/math] have a natural representation as an [math]n \times n [/math] matrix, where [math]n[/math] is the dimension of these dual spaces i.e. the cardinality of the set [math]\{\epsilon^i\}[/math]. Most physicists (and some mathematicians) refer to this tensor by its scalar components i.e. [math]C_{jk}[/math]. There is more - much more. Aren't you glad you asked!!
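A minimal numpy sketch of the defining identity [math]\varphi \otimes \phi(v,w)=\varphi(v)\phi(w)[/math] and the component matrix [math]C_{jk}=A_jB_k[/math], with invented components:
[code]
import numpy as np

# Two dual vectors with invented components A_j and B_k:
A = np.array([1.0, -2.0, 0.5])
B = np.array([3.0, 0.0, 1.0])
varphi = lambda v: A @ v      # varphi(v), a Real number
phi    = lambda w: B @ w      # phi(w), a Real number

# The components C_jk = A_j * B_k form an n x n matrix (the outer product):
C = np.outer(A, B)

# The defining identity: (varphi tensor phi)(v, w) = varphi(v) * phi(w).
v = np.array([2.0, 0.0, 1.0])
w = np.array([0.0, 1.0, 4.0])
print(v @ C @ w, varphi(v) * phi(w))   # equal: 10.0 10.0
[/code]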
  18. Unfortunately not.
1. Tensors are defined quite independently of manifolds; your understanding of manifolds seems shaky
2. Tensors are essentially multilinear maps from the Cartesian product of vector spaces to the Reals (see the sketch below)
3. As such, tensors do not have "units" - they "live" in tensor spaces, which have dimensions
4. Physicists (and some mathematicians) refer to tensors by their scalar components. This is justified because it is frequently desirable to work in a coordinate-free environment, but can be misleading.
If you would like to know more - and if your linear algebra is up to it - I can explain in grisly detail
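A minimal numpy sketch of point 2 - linearity in each slot separately - with an invented component matrix:
[code]
import numpy as np

# A type (0,2) tensor T(v, w) = v^T G w on R^2, with hypothetical components G:
G = np.array([[2.0, 1.0],
              [1.0, 3.0]])
T = lambda v, w: v @ G @ w

# Multilinearity check: T is linear in the first slot (and likewise in the second).
v, v2, w = np.array([1.0, 2.0]), np.array([0.0, 1.0]), np.array([3.0, -1.0])
a = 5.0
print(np.isclose(T(a * v + v2, w), a * T(v, w) + T(v2, w)))   # True
[/code]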
  19. Sure. And they will only work if engineers fully understand the mathematics. Stalemate. I am done here
  20. Lost me bro. The [math]\alpha^k[/math] are each scalars - numbers. What do you mean by "a scale axis" for a number? How can a number be parallel to anything? Do you disagree with my illustration of a tangent vector? In Mickey Mouse terms, it simply says that there exists a subset of the vector space called basis vectors such that the scalars tell you how "far" in each "direction" an individual vector "points" in the "direction" of each basis vector
  21. But a (tangent) vector is written as, say, [math]v_p=\sum\nolimits_k \alpha^k \frac{\partial}{\partial x^k}[/math] where the [math]\alpha^k[/math] are scalars. These give "magnitude" to the "direction" along each of the coordinates. This is a standard definition - open a good text. The idiosyncratic format of your response to me makes it impossible for me to say more
  22. Yes, although in the abstract sense the "point" IS just the local coordinates. This is not the best definition of a field - it relies heavily on the infamous Axiom of Choice - but yes, it will do. Here we disagree. Coordinates are a property of manifolds. If you are thinking of, say, a vector field, then sure, each field element must be described relative to something. These are called "basis vectors", not coordinates. But, for a vector tangent to a manifold (this makes sense even for boring manifolds like [math]R^n[/math]), these basis vectors are derived from the coordinates of the manifold in question. For differentiable manifolds these are called directional derivatives (or differential operators) and take the form [math]\frac{\partial}{\partial x^k}[/math] where the [math]x^k[/math] are the coordinates at the point the vector is applied. Yes, but the point at which this scalar is applied has the same coordinates as ever. Here I disagree - see above. I covered this in an earlier post. Coordinate transformations refer to points. No it isn't. A chart consists of a subset of the manifold together with its continuous and invertible mapping to an open subset of [math]R^n[/math]
  23. Although I am aware this discussion has drifted away from most things that interest me, for accuracy, let me correct myself. What I said earlier is false for the types of manifold that are useful in applications. These are the so-called "connected" manifolds, which have no "holes" or "gaps". This means that for any such manifold [math]M[/math] and any point [math]m \in M[/math] there exist at least two associated coordinate systems. Call them [math]x^1,x^2,.....,x^n[/math] and [math]\overline{x}^1, \overline{x}^2,....,\overline{x}^n[/math]. So my self-quote above is not generally true. Then in relativistic applications - in fact in general - since the single point [math]m \in M[/math] is assumed to have some sort of "reality", there must exist a functional relationship between these two sets of coordinates. This is called a coordinate transformation. More abstractly.....for a manifold with the so-called Hausdorff property*, it makes little sense to distinguish between the point [math]m \in M[/math] and the coordinates [math]x^1, x^2, ....,x^n[/math]. In other words, the point IS the coordinate set. *Roughly speaking, the Hausdorff property says that if, for any 2 points [math]p,\,\,q \in M[/math], it is impossible to assign distinct coordinates, we must assume that [math]p=q[/math]