
Posted (edited)

I must say that this use of the term Hausdorff is quite different from what I've learned about the term. In my understanding, asking if that property is transitive is meaningless.

 

A topological space is Hausdorff if it separates points by open sets. That is, given any two points [math]x, y[/math], there are open sets [math]U_x, U_y[/math] with [math]x \in U_x[/math], [math]y \in U_y[/math], and [math]U_x \cap U_y = \emptyset[/math].

 

For example the real numbers with the usual topology are Hausdorff; the reals with the discrete topology are Hausdorff; and the reals with the indiscrete topology are not Hausdorff.
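
To make the separation concrete (a standard check, nothing beyond the definition above): for distinct reals [math]x < y[/math] put [math]d = \frac{y-x}{2}[/math] and take [math]U_x = (x-d, x+d)[/math], [math]U_y = (y-d, y+d)[/math]. These are open, contain [math]x[/math] and [math]y[/math] respectively, and are disjoint, which is exactly the separation the definition asks for.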

 

I confess I have no idea what it means for the Hausdorff property to be transitive. It's not a binary relation. It's a predicate on topological spaces. Given a topological space, it's either Hausdorff or not. It would be like asking if the property of being a prime number is transitive. It's meaningless to ask the question because being prime is a predicate (true or false about any individual) and not a binary relation.

 

Given a pair of points, they are either separated by open sets or not. Of course for each pair of points you have to find a new pair of open sets, which is what I think you are saying.

 

Historical note. Felix Hausdorff was a German mathematician in the first half of the twentieth century. In 1942 he and his family were ordered by Hitler to report to a camp. Rather than comply, Hausdorff and his wife and sister-in-law committed suicide. https://en.wikipedia.org/wiki/Felix_Hausdorff

Edited by wtf
Posted

So you don't like my use of the term "transitive". I can live with being wrong about that.

 

Let's move on to the really interesting stuff, closer to the spirit of the OP (remember it?)

 

The connectedness property mandates that, for every [math]m \in M[/math], there exist at least 2 overlapping coordinate neighbourhoods containing [math]m[/math]. I write [math] m \in U \cap U'[/math].

 

So suppose the coordinates (functions) in [math]U[/math] are [math]\{x^1,x^2,....,x^n\}[/math] and those in [math]U'[/math] are [math]\{x'^1,x'^2,....,x'^n\}[/math]; since these are equally valid coordinates for our point, we must assume a functional relation between these 2 sets of coordinates.

 

For full generality I write

 

[math]f^1(x^1,x^2,....,x^n)= x'^1[/math]

[math]f^2(x^1,x^2,....,x^n) = x'^2[/math]

..............................

[math]f^n(x^1,x^2,....,x^n)= x'^n[/math]

 

Or compactly [math]f^j(x^k)=x'^j[/math].

 

But since the numerical value of each [math]x'^j[/math] is completely determined by the [math]f^j[/math], it is customary to write this as [math]x'^j= x'^j(x^k)[/math], as ugly as it seems at first sight*.

 

This is the coordinate transformation [math]U \to U'[/math]. And assuming an inverse, we will have quite simply [math]x^k=x^k(x'^h)[/math] for [math]U' \to U[/math]
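
A standard concrete instance (Cartesian and polar coordinates on a suitable open subset of the plane, purely for illustration): with [math]\{x^1, x^2\}[/math] Cartesian and [math]\{x'^1, x'^2\}[/math] polar, the transformation reads [math]x'^1 = f^1(x^1,x^2) = \sqrt{(x^1)^2+(x^2)^2}[/math] and [math]x'^2 = f^2(x^1,x^2) = \arctan(x^2/x^1)[/math], with inverse [math]x^1 = x'^1 \cos x'^2[/math], [math]x^2 = x'^1 \sin x'^2[/math]. Each new coordinate genuinely depends on both old ones, which is the point of the notation [math]x'^j = x'^j(x^k)[/math].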

 

Notice I have been careful up to this point to talk in the most general terms (with the 2 exceptions above). Later I will restrict my comments to a particular class of manifolds

 

 

* Ugly it may be, but it simplifies notation in the calculus.

Posted (edited)

So you don't like my use of the term "transitive".

You are using the term in a highly nonstandard way and your exposition is unclear on that point.

 

 

Let's move on to the really interesting stuff, closer to the spirit of the OP (remember it?)

Very much so. I'm interested in why differential geometers and physicists are so interested in using dual spaces in tensor products when the algebraic definition says nothing about them. The current exposition of differential geometry is very interesting to me but not particularly relevant (yet) to tensor products.

 

The connectedness property mandates that, for every [math]m \in M[/math], there exist at least 2 overlapping coordinate neighbourhoods containing [math]m[/math]. I write [math] m \in U \cap U'[/math].

I hope I may be permitted to post corrections to imprecise statements, in the spirit of trying to understand what you're saying. The indiscrete topology is connected but each point is in exactly one open set. Perhaps you need the Hausdorff property. Again not being picky for the sake of being picky, but for my own understanding. And frankly to be of assistance with your exposition. If you're murky you're murky, I gotta call it out because others will be confused too.

 

I'm still digesting the rest of your post.

Edited by wtf
Posted (edited)

Are you talking about the transition maps? I'm working through that now. The Wiki page is helpful. https://en.wikipedia.org/wiki/Manifold

 

ps ... Quibbles aside I'm perfectly willing to stipulate that the topological spaces aren't too weird. Wiki says they should be second countable and Hausdorff. Second countable simply means there's a countable base. For example in the reals with the usual topology, every open set is a union of intervals with rational centers and radii. There are only countably many of those so the reals are second countable.

 

Interestingly Wiki allows manifolds to be disconnected. I don't think it makes a huge difference at the moment. I can imagine that the two branches of the graph of 1/x are a reasonable disconnected manifold.

Edited by wtf
Posted

I'm interested in why differential geometers and physicists are so interested in using dual spaces in tensor products when the algebraic definition says nothing about them.

We will get to that in due course (and soon). It has to do with the difference between Euclidean geometry (algebra) and non-Euclidean geometry (diff. geom.)

The current exposition of differential geometry is very interesting to me but not particularly relevant (yet) to tensor products.

I will make it so, I hope. Again in due course

 

 

I hope I may be permitted to post corrections to imprecise statements,

You are not only permitted, you are encouraged to do so.

The indiscrete topology is connected but each point is in exactly one open set. Perhaps you need the Hausdorff property.

You do. In my defence, I explicitly said in an earlier post that manifolds with the discrete and indiscrete topologies, being respectively not connected and not Hausdorff, were of no interest to us. But yes, I should have reiterated it. Sorry.

 

Again not being picky for the sake of being picky, but for my own understanding. And frankly to be of assistance with your exposition.

No, you are quite right to correct me if I am unclear or wrong. I welcome it

 

I'm still digesting the rest of your post.

Take your time - the incline increases from here on!
Posted (edited)

I'm replying to your post #27 which said ...

 

Let's move on to the really interesting stuff ...

I commented on the first half earlier. Now to the rest of it.

 

First there's a big picture, which is that if we have a manifold [math]M[/math] and a point [math]m \in M[/math], then we may have two (or more) open sets [math]U, U' \subset M[/math] with [math]m \in U \cap U'[/math]. So [math]m[/math] has two different coordinate representations, and we can go up one and down the other to map the coordinate representations to each other.

 

My notation in what follows is based on this excellent Wiki article, which I've found enlightening.

 

https://en.wikipedia.org/wiki/Atlas_(topology)#Transition_maps

 

The notation is based on this picture.

 

[Screenshot: the transition-map diagram from the Wikipedia article on atlases]

 

We have two open sets [math]U_\alpha, U_\beta \subset M[/math] with corresponding coordinate maps [math]\varphi_\alpha : U_\alpha \rightarrow \mathbb R^n[/math] and [math]\varphi_\beta : U_\beta \rightarrow \mathbb R^n[/math]. I prefer the alpha/beta notation so I'll work with that.

 

Also, as I understand it the coordinate maps in general are called charts; and the collection of all the charts for all the open sets in the manifold is called an atlas.

 

If [math]m \in U_\alpha \cap U_\beta[/math] then we have two distinct coordinate representations for [math]m[/math], and we can define a transition map [math]\tau_{\alpha, \beta} : \mathbb R^n \rightarrow \mathbb R^n[/math] by starting with the coordinate representation of [math]m[/math] with respect to [math]U_\alpha[/math], pulling back (is that the correct use of the term?) along [math]\varphi_\alpha^{-1}[/math], then pushing forward (again, is this the correct usage or do pullbacks and pushforwards refer to something else?) along [math]\varphi_\beta[/math].

 

So we define [math]\tau_{\alpha, \beta} = \varphi_\beta \varphi^{-1}_\alpha [/math]. Likewise we define the transition map going the other way, [math]\tau_{\beta, \alpha} = \varphi_\alpha \varphi^{-1}_\beta [/math].
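
A minimal example to check the bookkeeping (the standard angle charts on the circle, nothing specific to the discussion above): let [math]M = S^1[/math], let [math]\varphi_\alpha[/math] assign each point its angle in [math](0, 2\pi)[/math] measured from a fixed reference point, and let [math]\varphi_\beta[/math] assign the angle in [math](-\pi, \pi)[/math] from the same reference point. On the overlap, [math]\tau_{\alpha,\beta}(\theta) = \theta[/math] for [math]\theta \in (0,\pi)[/math] and [math]\tau_{\alpha,\beta}(\theta) = \theta - 2\pi[/math] for [math]\theta \in (\pi, 2\pi)[/math]; both pieces are smooth maps between open subsets of [math]\mathbb R[/math].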

 

I found it helpful to work through this before tackling your notation.

 

 

So suppose the coordinates (functions) in [math]U[/math] are [math]\{x^1,x^2,....,x^n\}[/math] and those in [math]U'[/math] are [math]\{x'^1,x'^2,....,x'^n\}[/math]; since these are equally valid coordinates for our point, we must assume a functional relation between these 2 sets of coordinates.

Now I feel equipped to understand this.

 

We have [math]m \in U_\alpha \cap U_\beta[/math]. Then I can write

 

[math]\varphi_\alpha(m) = (\alpha^i)[/math] and [math]\varphi_\beta(m) = (\beta^i)[/math], where the index in both cases runs from [math]1[/math] to the [math]n[/math] in [math]\mathbb R^n[/math]. I don't think we talked about the fact that the dimension is the same all over but that seems to be part of the nature of manifolds.

 

Question: You notated your ordered n-tuple with set braces rather than tuple-parens. Is this an oversight or a feature? I can't tell. I'll assume you meant parens to indicate an ordered [math]n[/math]-tuple.

 

Also you referred to the coordinates as functions, and you did that earlier as well. I'm a little unclear on what you mean. Certainly for example [math]\alpha^i = \pi_i \varphi_\alpha(m)[/math], in other words the [math]i[/math]-th coordinate with respect to [math]\varphi_\alpha[/math] is the [math]i[/math]-th projection map composed on [math]\varphi_\alpha[/math].

 

Are you identifying each coordinate with its respective projection map? That's perfectly sensible. You probably said that earlier.

 

 

 

For full generality I write

 

[math]f^1(x^1,x^2,....,x^n)= x'^1[/math]

[math]f^2(x^1,x^2,....,x^n) = x'^2[/math]

..............................

[math]f^n(x^1,x^2,....,x^n)= x'^n[/math]

Aha. This took me a while to sort out. What is [math]f^i[/math]? Putting all this in my notation, we have

 

[math]f^i(\alpha^1, \alpha^2, \dots, \alpha^n) = \beta^i[/math].

 

So we seem to be starting with the [math]\alpha[/math]-coordinates of [math]m[/math], using the transfer map [math]\tau_{\alpha,\beta}[/math] to get to the corresponding [math]\beta[/math]-coordinates; then taking the [math]i[/math]-th coordinate via the [math]i[/math]-th projection map.

 

Therefore we must have [math]f^i = \pi_i \tau_{\alpha,\beta} = \pi_i \varphi_\beta \varphi_\alpha^{-1}[/math].

 

As far as I can tell this is the equation that relates your notation to mine. Have I got this right?

 

 

 

 

Or compactly [math]f^j(x^k)=x'^j[/math].

I understand that. But note that it's ambiguous. Does [math]f^j[/math] act on the real number [math]x^k[/math]? No, actually it acts on the [math]n[/math]-tuple [math](x^k)_{k=1}^n[/math]. So if we are pedants (and that's a good thing to be when we are first learning a subject!) it is proper to write [math]f^j((x^k)_{k=1}^n)[/math]. Whenever we see [math]f^j(x^k)[/math] we have to remember that we are feeding an [math]n[/math]-tuple into [math]f^j[/math], and not a real number.

 

But since the numerical value of each [math]x'^j[/math] is completely determined by the [math]f^j[/math], it is customary to write this as [math]x'^j= x'^j(x^k)[/math], as ugly as it seems at first sight*.

This is very interesting. Let me say this back to you. [math]m[/math] has [math]\beta[/math]-coordinates [math](\beta^i)[/math]. And now what I think you are saying is that we are going to identify the coordinate [math]\beta^i[/math] with the map [math]f^i = \pi_i \varphi_\beta \varphi_\alpha^{-1}[/math]. Is that right? We identify each [math]\beta[/math]-coordinate with the process that led us to it! Very self-referential :)

 

This is what I understand you to be saying, please confirm.

 

ADDENDUM: No I no longer understand this. [math]f^i[/math] doesn't play favorites with some particular [math]\beta^i[/math]. It makes sense to say that [math]f^i[/math] maps [math]\varphi_\alpha(m)[/math] to the [math]i[/math]-th coordinate of [math]\varphi_\beta(m)[/math]. But it's a different [math]f^i[/math] for each [math]m[/math].

 

I think I am confused. I should sort this out before I post but I'll just throw this out there.

 

This is the coordinate transformation [math]U \to U'[/math]. And assuming an inverse, we will have quite simply [math]x^k=x^k(x'^h)[/math] for [math]U' \to U[/math]

Ok I had to think about this. Two points.

 

* Each [math]f^i[/math] is a map from [math]\mathbb R^n[/math] to the reals. It inputs an [math]n[/math]-tuple that is the [math]\alpha[/math]-representation of a point [math]m[/math]; and outputs a single real number, the [math]i[/math]-th coordinate of the [math]\beta[/math]-representation of [math]m[/math].

 

So the only way to make sense of what you wrote is that the collection of all the [math]f^i[/math] 's are the coordinate transformations.

 

Actually what I understood from the Wiki article is that the transfer maps were the coordinate transformations. So maybe I'm confused on this point. Can you clarify?

 

* There's actually a little swindle going on with [math]\varphi_\alpha[/math]. At first it was a map from [math]U[/math] to some open subset of [math]\mathbb R^n[/math]. But in order to pull back along [math]\varphi_\alpha^{-1}[/math] we have to restrict the domain to the image [math]\varphi_\alpha(U_\alpha \cap U_\beta)[/math]. So we don't really have a map from [math]U[/math] to [math]U'[/math] in your notation; but only from their intersection to itself.

 

Can you clarify?

 

Notice I have been careful up to this point to talk in the most general terms (with the 2 exceptions above). Later I will restrict my comments to a particular class of manifolds

It doesn't seem to matter at this point what the topological conditions are. It's all I can do to chase the symbols.

 

 

* Ugly it may be, but it simplifies notation in the calculus.

I think I'm with you so far. Just the questions as indicated. Two key questions:

 

* How the transition maps can be said to be from [math]U[/math] to [math]U'[/math] when in fact they're only defined from the [math]\alpha[/math] and [math]\beta[/math] images, respectively, of the intersection. I'm just a little puzzled on this.

 

* Your notation [math]x'^j= x'^j(x^k)[/math]. First I thought I understood it and now I've convinced myself [math]x'^j[/math] depends on [math]m[/math].

 

* And now that I think about it, the transition maps are from Euclidean space to itself, they're not defined on the manifold.

 

I'm more confused now than when I started working all this out.

Edited by wtf
Posted

I think I understand what you're saying. In my notation, you are using [math]\beta^i[/math] as both the value of the [math]i[/math]-th coordinate of the [math]\beta[/math]-representation of some point [math]m \in U_\alpha \cap U_\beta[/math]; and also as the function [math]\pi_i \varphi_\beta \varphi_\alpha^{-1}[/math] that maps the [math]\alpha[/math]-representation of some point [math]m[/math] to the [math]i[/math]-th coordinate of the [math]\beta[/math]-representation of [math]m[/math].

 

That's how I'm understanding this. You're taking the [math]i[/math]-th coordinate to be both the function and the specific value for a given [math]m[/math]. It's a little bit subtle. The REAL NUMBER [math]\beta^i[/math] changes as a function of [math]m[/math]; but the FUNCTION [math]\beta^i[/math] does not.

 

Is that right? I want to make sure I'm nailing down this formalism.

 

Secondly I believe that you are a little confusing or inaccurate when you say the transfer maps (without the extra projection at the end) go from [math]U[/math] to [math]U'[/math]. Rather the transition maps go from [math]\varphi_\alpha(U_\alpha \cap U_\beta)[/math] to [math]\varphi_\beta(U_\alpha \cap U_\beta)[/math] and back.

 

Since the charts are homeomorphisms so are the transfer maps in both directions. And I've read ahead on Wiki and a couple of DiffGeo texts I've found, and I see that if the transfer maps are differentiable or smooth then we call the manifold differentiable or smooth. That makes sense. We already know how to do calculus on Euclidean space.

 

So I'm a little confused again ... the charts themselves don't have to be differentiable or smooth as long as the transfer maps (on the restricted domain) are. Is that correct? So for example the charts could have corners outside the areas of overlap? Perhaps you can help me understand that point.

Posted (edited)

Wow wtf, so much to respond to. Sorry for the delay but we have been without power until now. The best I can do for now is to re-iterate my earlier post in a slightly different form

 

I am aware that, on forums such as this it is considered a hanging offence to disagree with the sacred Wiki, so let us say I have confused you. Specifically FORGET the term "transition function". But I am quite willing to use the Wiki notation, as you say you prefer it......

 

So.

 

We have 3 quite different mappings in operation here. The first is our homeomorphism: given some open set [math]U \subsetneq M[/math], we have [math]\varphi:U \to R^n[/math]. Being a homeomorphism it is by definition invertible.

 

Suppose there exist 2 such open sets, say [math]U_\alpha,\,\,U_\beta[/math] with [math]U_\alpha \cap U_\beta \ne \emptyset[/math]. In fact suppose the point [math]m \in U_\alpha \cap U_\beta[/math], so that [math]\varphi_\alpha:U_\alpha \to V \subseteq R^n[/math] and [math]\varphi_\beta:U_\beta \to W \subseteq R^n[/math].

 

So the composite function [math]\varphi_\beta \circ \varphi_\alpha^{-1} \equiv \tau_{\alpha,\beta}:V \to W[/math]. One calls this an "induced mapping" (but no, [math]\varphi_\alpha^{-1}[/math] is not a pullback, it's a simple inverse)

 

Your Wiki calls this a transition, I do not. So let's forget the term.

 

But note that single points in [math]V,\,\,W[/math] are Real n-tuples, say [math](\alpha^1,\alpha^2,....,\alpha^n)[/math] and [math](\beta^1,\beta^2,....,\beta^n)[/math], so that the image [math]\tau_{\alpha,\beta}((\alpha^1,\alpha^2,....,\alpha^n))= (\beta^1,\beta^2,....,\beta^n)[/math]

 

So the second mapping I defined as: for the point [math]m \in U_\alpha [/math], say, the image under [math]\varphi_\alpha[/math] is the n-tuple [math](\alpha^1,\alpha^2,....,\alpha^n)[/math]; likewise [math]\varphi_\beta(m)= (\beta^1,\beta^2,....,\beta^n)[/math]. Then there always exist projections [math]\pi_\alpha^1(\alpha^1,\alpha^2,....,\alpha^n)= \alpha^1[/math] and so on, likewise for the images under [math]\pi_\beta^j[/math] of the n-tuple [math](\beta^1,\beta^2,....,\beta^n)[/math].

 

Note that since the [math]\alpha^j[/math], say, are Real numbers this is a mapping [math]R^n \to R[/math].

 

So the composite mapping (function) [math]\pi_\alpha^j \circ \varphi_\alpha \equiv x^j[/math] is a Real-valued mapping (function) [math]U_\alpha \to R[/math]; the n images under these mappings of [math]m \in U_\alpha[/math] are simply the set [math]\{\alpha^1,\alpha^2,....,\alpha^n\}[/math], and likewise the images of [math]m \in U_\beta[/math] under the [math]x'^k \equiv \pi_\beta^k \circ \varphi_\beta[/math] are the set [math]\{\beta^1,\beta^2,....,\beta^n\}[/math], so that [math]x^j(m) = \alpha^j[/math] and [math]x'^k(m) = \beta^k[/math]

 

The [math]x^j,\,x'^k[/math] are coordinate functions, or simply coordinates
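
For a one-dimensional illustration (the upper half of the unit circle, just to see these definitions in action): take [math]U = \{(a,b) \in S^1 : b > 0\}[/math] with the homeomorphism [math]\varphi(a,b) = a \in (-1,1) \subset \mathbb R[/math]. Here [math]n = 1[/math] and the single coordinate function is [math]x^1 = \pi^1 \circ \varphi[/math], so [math]x^1(m)[/math] is simply the first Cartesian coordinate of the point [math]m[/math] on the circle.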

 

The coordinate transformations I referred to are simply mappings from [math]\{x^1,x^2,....,x^n\} \to \{x'^1,x'^2,....,x'^n\}[/math], they map (sets of) coordinates (functions) to (sets of) coordinates (functions) if and only if they refer to the same point in the intersection of 2 open sets. This mapping is multivariate - that is, it is NOT simply the case that say [math]f^1(x^1)=x'^1[/math] rather [math]f^1(x^1,x^2,....,x^n)=x'^1[/math].

 

Note that the argument of [math]f^j[/math] is a set, not a tuple, appearances to the contrary

 

I hope this helps.

 

It also seems I may have confused you slightly with my index notation - but first see if the above clarifies anything at all.

 

P.S I am generally very careful with my notation. In particular I will always be careful to distinguish a tuple from a set

Edited by Xerxes
Posted

Sorry for the delay but we have been without power until now. The best I can do for now is to re-iterate my earlier post in a slightly different form

Sorry about your power loss but the recent pace is fine for me. It might have been me who pulled the plug :)

 

 

I am aware that, on forums such as this it is considered a hanging offence to disagree with the sacred Wiki, so let us say I have confused you. Specifically FORGET the term "transition function".

I'm just grasping at straws to follow your posts. FWIW here is a screen shot from Introduction to Differential Geometry by Robbin and Salamon. This is from page 59 of this pdf. https://people.math.ethz.ch/~salamon/PREPRINTS/diffgeo.pdf

 

[Screenshot: the definition of transition maps, from page 59 of Robbin and Salamon, Introduction to Differential Geometry]

 

They use the term transition map exactly as I've used it. But no matter, we can call them something else. But it's clear what they are, you are in agreement even if you prefer to use a different name.

 

 

We have 3 quite different mappings in operation here. The first is our homeomorphism: given some open set [math]U \subsetneq M[/math], we have [math]\varphi:U \to R^n[/math]. Being a homeomorphism it is by definition invertible.

 

Suppose there exist 2 such open sets, say [math]U_\alpha,\,\,U_\beta[/math] with [math]U_\alpha \cap U_\beta \ne \emptyset[/math]. In fact suppose the point [math]m \in U_\alpha \cap U_\beta[/math], so that [math]\varphi_\alpha:U_\alpha \to V \subseteq R^n[/math] and [math]\varphi_\beta:U_\beta \to W \subseteq R^n[/math].

 

So the composite function [math]\varphi_\beta \circ \varphi_\alpha^{-1} \equiv \tau_{\alpha,\beta}:V \to W[/math]. One calls this an "induced mapping" (but no, [math]\varphi_\alpha^{-1}[/math] is not a pullback, it's a simple inverse)

 

Your Wiki calls this a transition, I do not. So let's forget the term.

Ok. I agree with all your notation so far. As I say it took me the duration of your power outage for all this to become clear so feel free to pretend the power's out as I work to absorb subsequent posts.

 

But note that single points in [math]V,\,\,W[/math] are Real n-tuples, say [math](\alpha^1,\alpha^2,....,\alpha^n)[/math] and [math](\beta^1,\beta^2,....,\beta^n)[/math], so that the image [math]\tau_{\alpha,\beta}((\alpha^1,\alpha^2,....,\alpha^n))= (\beta^1,\beta^2,....,\beta^n)[/math]

Yes, entirely clear.

 

So the second mapping I defined as: for the point [math]m \in U_\alpha [/math], say, the image under [math]\varphi_\alpha[/math] is the n-tuple [math](\alpha^1,\alpha^2,....,\alpha^n)[/math]; likewise [math]\varphi_\beta(m)= (\beta^1,\beta^2,....,\beta^n)[/math]. Then there always exist projections [math]\pi_\alpha^1(\alpha^1,\alpha^2,....,\alpha^n)= \alpha^1[/math] and so on, likewise for the images under [math]\pi_\beta^j[/math] of the n-tuple [math](\beta^1,\beta^2,....,\beta^n)[/math].

Perfectly clear.

 

Note that since the [math]\alpha^j[/math], say, are Real numbers this is a mapping [math]R^n \to R[/math].

 

So the composite mapping (function) [math]\pi_\alpha^j \circ \varphi_\alpha \equiv x^j[/math] is a Real-valued mapping (function) [math]U_\alpha \to R[/math]; the n images under these mappings of [math]m \in U_\alpha[/math] are simply the set [math]\{\alpha^1,\alpha^2,....,\alpha^n\}[/math], and likewise the images of [math]m \in U_\beta[/math] under the [math]x'^k \equiv \pi_\beta^k \circ \varphi_\beta[/math] are the set [math]\{\beta^1,\beta^2,....,\beta^n\}[/math], so that [math]x^j(m) = \alpha^j[/math] and [math]x'^k(m) = \beta^k[/math]

Yes.

 

The [math]x^j,\,x'^k[/math] are coordinate functions, or simply coordinates

Ok so we are identifying the coordinates with the projection mappings composed on the charts that produce them.

 

The coordinate transformations I referred to are simply mappings from [math]\{x^1,x^2,....,x^n\} \to \{x'^1,x'^2,....,x'^n\}[/math], they map (sets of) coordinates (functions) to (sets of) coordinates (functions) if and only if they refer to the same point in the intersection of 2 open sets. This mapping is multivariate - that is, it is NOT simply the case that say [math]f^1(x^1)=x'^1[/math] rather [math]f^1(x^1,x^2,....,x^n)=x'^1[/math].

Yes this is clear to me.

 

Note that the argument of [math]f^j[/math] is a set, not a tuple, appearances to the contrary

I take this to mean that [math]\{f^j\}_{j=1}^n[/math] is a set of maps where [math]f^j = \pi_j \varphi_\beta \varphi_\alpha^{-1}[/math], is that right?

 

I hope this helps.

Yes very much.

 

It also seems I may have confused you slightly with my index notation - but first see if the above clarifies anything at all.

Yes much better. Of course the couple of days I spent working through this in my own mind helped a lot too.

 

P.S I am generally very careful with my notation.

Maybe I should leave that remark alone :) Let me just say that I sometimes find it productive to work through points of murkiness in your exposition. I'm ready for the next step and do feel free to take this as slowly as you like. Also if you have any particular text you find helpful feel free to recommend it. There are so many different books out there.

Posted

OK, good. We have both worked hard to arrive at a very simple conclusion: if a point in our manifold "lives" jointly in 2 different "regions", then it is entitled to 2 different coordinate representations, and these must be related by a coordinate transformation.

 

I will say this to our nearly 1000 lurkers: you have seen an example of rigorous mathematics at work, far from the hand waving of my simple (but true) statement above.

 

wtf. I had planned to say more about the finer points of differentiable manifolds, but on reflection have decided to try and get back to the matter at hand - tensors in the context of differential geometry, since geodief stated his interest was started by an attempt to understand the General Theory.

 

I will say no more tonight as I collided with a bottle of wine earlier, causing serious (but temporary) brain damage

Posted (edited)

wtf. I had planned to say more about the finer points of differentiable manifolds, but on reflection have decided to try and get back to the matter at hand - tensors in the context of differential geometry, since geodief stated his interest was started by an attempt to understand the General Theory.

Thanks Xerxes for all your patience.

 

That is actually my interest too so this direction is perfect for me. My goal is to understand tensors in differential geometry and relativity at a very simple level, but sufficient to understand the connection between them and the tensor product as defined in abstract algebra.

 

In fact lately I've been finding DiffGeo texts online and flipping to their discussion of tensors. Sometimes it's similar to what I've seen and other times it's different. It's all vaguely related but I think it will all come together for me if I can see an actual tensor in action. And if it's the famous metric tensor of relativity, I'll learn some physics too. That's a great agenda.

 

That's what I meant the other day when I said I hoped we didn't have to slog through the calculus part. I don't want to have to do matrices of partial derivatives and the implicit function theorem and all that jazz, even if it's the heart of the subject. I just want to know what the metric tensor in relativity is and be able to relate it to the tensor product. Partial derivatives make my eyes glaze over even though I've taken multivariable calculus and could explain and compute them if I had to.

 

Along the way, maybe I'll figure out where the duals come from. Because with or without the duals you get the same tensor product; but the duals are regarded as important in relativity. That's the part I'm missing ... why we care about the duals when they're not needed in the definition of tensor product.

 

 

 

I will say no more tonight as I collided with a bottle of wine earlier, causing serious (but temporary) brain damage

Was that collision between the glass container and your skull? Or of the wine molecules with your brain cells? Or did you use the latter to mitigate the effects of the former?

Edited by wtf
Posted

Partial derivatives make my eyes glaze over

I am very sorry to hear that. I cannot at present see how to proceed without a lot of it. Differential geometry - yes even in the bastard version that physicists use - involves a lot of partial derivatives.

 

I have been quiet here recently as I have been working overseas. Home tomorrow, when I will try to work out a strategy

Posted (edited)

I am very sorry to hear that. I cannot at present see how to proceed without a lot of it. Differential geometry - yes even in the bastard version that physicists use - involves a lot of partial derivatives.

 

I have been quiet here recently as I have been working overseas. Home tomorrow, when I will try to work out a strategy

I'm perfectly happy to have some "character building opportunities" as they say :) Partial differentiate away. No hurry on anything.

 

ps -- In case I'm being too oblique ... just write whatever you want and I'll work through it.

Edited by wtf
Posted

I'm perfectly happy to have some "character building opportunities" as they say :) Partial differentiate away.

OK, time to "man up" all readers.

 

First the boring bit - notation. One says that a function is of class [math]C^0[/math] if it is continuous. One says it is of class [math]C^1[/math] if it is differentiable to order 1. One says it is of class [math]C^{\infty}[/math] if it is differentiable to all imaginable orders, in which case one says it is a "smooth function". I denote the space of all Real [math]C^{\infty}[/math] functions at the point [math]m \in M[/math] by [math]C^{\infty}_m[/math]
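
A quick illustration of the hierarchy (standard examples, just for orientation): [math]f(x) = |x|[/math] is [math]C^0[/math] but not [math]C^1[/math]; [math]f(x) = x|x|[/math] is [math]C^1[/math] but not [math]C^2[/math]; polynomials, [math]\sin[/math] and [math]\exp[/math] are [math]C^{\infty}[/math], i.e. smooth.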

 

So recall from elementary calculus that, given a [math]C^1[/math] function [math]f:\mathbb{R} \to \mathbb{R}[/math] with [math]a \in \mathbb{R}[/math] then [math]\frac{df}{da}[/math] is a Real number.

 

Recall also that this can be interpreted as the slope of the tangent to the curve [math]f(a)[/math] vs [math]a[/math].

 

Using this I make the following definition:

 

For any point [math]m \in U \subsetneq M[/math] with coordinates (functions) [math]x^1,x^2,....,x^n[/math] then I say a tangent vector at the point [math]m \in U \subsetneq M[/math] is an object that maps [math]C^{\infty}_m \to \mathbb{R}[/math] so that, for any [math]f \in C^{\infty}_m[/math] and since [math]m = \{x^1,x^2,...,x^n\}[/math] we may write [math]v=\frac{\partial}{\partial x^1}f + \frac{\partial}{\partial x^2}f+....+\frac{\partial}{\partial x^n}f[/math].

 

Or more succinctly as [math]v= \sum\nolimits^n_{j=1} \frac{\partial}{\partial x^j}f \in \mathbb{R}[/math].

 

As an illustration, recall the mapping (homeomorphism) [math]h:U \to R^n[/math] where [math]h(m)=(u^1,u^2,....,u^n)\in R^n[/math] and the projections [math]\pi_1((u^1,u^2,....,u^n))=u^1 \in \mathbb{R}[/math] and so on. Recall also I defined the coordinate functions in [math]U \subsetneq M[/math] by [math]x^j= \pi_j \circ h[/math] so the [math]x^j[/math] really are functions.

 

So I may have that [math]\frac{\partial}{\partial x^j}x^k= \delta^k_j[/math] where [math]\delta^k_j = \begin{cases}1\quad j=k\\0\quad j \ne k\end{cases}[/math].

 

So in fact, since this defines linear independence, we may take the [math]\frac{\partial}{\partial x^h}[/math] to be a basis for a tangent vector space. At the point [math]m \in U \subsetneq M[/math] one calls this [math]T_mM[/math]
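
As a sanity check in the simplest possible setting (take [math]M = \mathbb{R}^2[/math] with the identity chart, so the coordinate functions are the usual [math]x^1, x^2[/math]): then [math]\frac{\partial}{\partial x^1}x^1 = 1[/math], [math]\frac{\partial}{\partial x^1}x^2 = 0[/math], and similarly for [math]\frac{\partial}{\partial x^2}[/math], which is exactly the [math]\delta^k_j[/math] statement above. No combination [math]a\frac{\partial}{\partial x^1} + b\frac{\partial}{\partial x^2}[/math] with [math]a, b[/math] not both zero can send both [math]x^1[/math] and [math]x^2[/math] to zero, so the two operators are indeed linearly independent.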

 

Good luck!

Posted (edited)

Good luck!

<Star Trek computer voice> Working ...

 

Actually I read through it and it looks pretty straightforward. I'll work through it step by step but I didn't see anything I didn't understand. The tangent space is an n-dimensional vector space spanned by the partials. I understand that, I just need practice with the symbology.

 

I see at the end you bring in the Kronecker delta. This is something I'm familiar with as a notational shorthand in algebra. I've heard that it's a tensor but at the moment I don't understand why. I can see that by the time I work through your post I'll understand that. This seems like a fruitful direction for me at least.

Edited by wtf
Posted

The tangent space is an n-dimensional vector space spanned by the partials.

Yes. In fact these are called differential operators, and are closely related to the directional derivative. They are also the closest we can get, in an arbitrary manifold, to the notion of a directed line segment that is used to define vectors in Euclidean space.

 

Anyway, recall I wrote the property of linear independence for these bad boys as [math]\frac{\partial}{\partial x^j}x^k = \begin{cases}1 \quad j=k\\0\quad j \ne k \end{cases}[/math]

 

Yes the K. delta is a tensor - it's called a "numerical tensor", a rather special case.

 

Anyway, from the above, the following is immediate...

 

If I accept these differential operators as a basis for [math]T_mM[/math] then I can write an arbitrary tangent vector as [math]v=\sum\nolimits_{j=1}^n \alpha^j \frac{\partial}{\partial x^j}[/math] so that

 

[math]v(x^j) = \alpha^j[/math] which is unique to this vector.
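
A quick numerical check of that last statement (taking [math]n = 2[/math] just for concreteness): if [math]v = 3\frac{\partial}{\partial x^1} - 2\frac{\partial}{\partial x^2}[/math] then [math]v(x^1) = 3 \cdot 1 - 2 \cdot 0 = 3[/math] and [math]v(x^2) = -2[/math], so applying [math]v[/math] to the coordinate functions reads off its components.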

 

Anyway.....

 

Suppose the point [math]m \in M[/math] and the space [math]C_m^{\infty}[/math] of all smooth functions [math] M \to \mathbb{R}[/math] at [math]m[/math].

 

Recall I defined the tangent space at [math]m[/math] as the space of mappings [math]T_mM:C_m^{\infty} \to \mathbb{R}[/math] so that [math]v(f) \in \mathbb{R}[/math]

 

For the mapping [math]f:M \to \mathbb{R}[/math] I now define the differential [math]df:T_mM \to \mathbb{R}[/math]. This is sometimes called the pushforward - see my post http://www.scienceforums.net/topic/93098-pushing-pulling-and-dualing/

 

I insist on a numerical identity [math]df(v)= v(f)[/math] for some [math]f \in C_m^{\infty}[/math] and any [math]v \in T_mM[/math]

 

To see why we care, let me replace the arbitrary function [math]f[/math] by the coordinate functions [math]x^j[/math] so that [math]dx^j(v)=v(x^j)[/math]

 

I now replace the vector [math]v \in T_mM[/math] by the basis vectors [math]\frac{\partial}{\partial x^k}[/math] so that

 

[math]dx^j(\frac{\partial}{\partial x^k})=\frac{\partial}{\partial x^k}(x^j)[/math]

 

So we know that the RHS is [math]\frac{\partial}{\partial x^k}(x^j)= \delta ^j_k[/math], so that the LHS implies that [math]dx^j[/math] and [math]\frac{\partial}{\partial x^k}[/math] are linearly independent.

 

But since the basis for [math]T_mM[/math] is already complete, we have to say that the [math]dx^j[/math] are a basis for another but related vector space.

 

This is called the dual space and is written [math]T^*_mM[/math].
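
To see the duality concretely (continuing the [math]n = 2[/math] example above): with [math]v = 3\frac{\partial}{\partial x^1} - 2\frac{\partial}{\partial x^2}[/math] we get [math]dx^1(v) = v(x^1) = 3[/math] and [math]dx^2(v) = v(x^2) = -2[/math], so the [math]dx^j[/math] are exactly the functionals that read off the components of a tangent vector; a general element of [math]T^*_mM[/math] is a linear combination of them.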

 

Note the existence of the dual space is thus a mathematical inevitability, not a mere whim

 

PS Note this is not a unique situation in mathematics. Consider the space of eigenvectors - the eigenspace - obtained by the action of an operator on a vector space.

Posted

You're two posts ahead of me FYI. I haven't worked through the earlier one yet. Been a little busy with other things.

 

 

Note the existence of the dual space is thus a mathematical inevitability, not a mere whim

I'm thinking that you are intending this remark as a response to my questions about why dual spaces creep into tensor products, but I don't think you are understanding my question then. Of course I understand what dual spaces are. But in the algebraic definition of tensor products, duals NEVER show up; while in diffGeo/physics discussions, they ALWAYS show up. That's the gap I'm trying to bridge. Apparently no algebraist has ever set foot in the same room as a differential geometer, else there would be a clear and simple explanation of this expositional mismatch somewhere.

 

I hope to get through the earlier post today or tomorrow or the day after.

Posted (edited)

Ok now that I'm going through this I'm completely confused by where all this is taking place. We don't know how to take derivatives on a manifold yet but your notation is assuming that we can.

 

First the boring bit - notation. One says that a function is of class [math]C^0[/math] if it is continuous. One says it is of class [math]C^1[/math] if it is differentiable to order 1.

Picky refinement, my understanding is that a [math]C^1[/math] function has a continuous derivative. There are functions with derivatives that fail to be continuous on one or more (even infinitely many) points. http://math.stackexchange.com/questions/292275/discontinuous-derivative/292380#292380

 

One says it is of class [math]C^{\infty}[/math] if it is differentiable to all imaginable orders

More pickiness, this is a trivial point but of course you mean for all positive integer orders. Then it's no longer a function of someone's imagination. I was thinking fractional derivatives, who knows what else.

 

in which case one says it is a "smooth function". I denote the space of all Real [math]C^{\infty}[/math] functions at the point [math]m \in M[/math] by [math]C^{\infty}_m[/math]

Ok here is an expositional problem that confuses me. This is not pickiness, I'm genuinely confused. We've been letting [math]M[/math] stand for a manifold. But we don't know how to differentiate a function on a manifold. In fact you said that the charts are only homeomorphisms, so for all we know our manifold [math]M[/math] is so full of corners it can't be differentiated at all. In order to get past this point I have to either assume we've defined differentiability on a manifold somehow, or else that we're working in [math]\mathbb R^n[/math]. I hope you will clarify this point.

 

 

So recall from elementary calculus that, given a [math]C^1[/math] function [math]f:\mathbb{R} \to \mathbb{R}[/math] with [math]a \in \mathbb{R}[/math] then [math]\frac{df}{da}[/math] is a Real number.

Little point of notational confusion. I'd believe [math]\frac{df}{dx}\biggr\rvert_{x=a}[/math] or [math]\frac{df}{dx}(a)[/math] but I'm not sure about your notation. Is that a typo or a standard notation?

 

Recall also that this can be interpreted as the slope of the tangent to the curve [math]f(a)[/math] vs [math]a[/math].

Yes.

 

Using this I make the following definition:

 

For any point [math]m \in U \subsetneq M[/math] with coordinates (functions) [math]x^1,x^2,....,x^n[/math] then I say a tangent vector at the point [math]m \in U \subsetneq M[/math] is an object that maps [math]C^{\infty}_m \to \mathbb{R}[/math]

Now you see I have the [math]M[/math] problem in spades. I see you talking about tangent vectors to a point on a manifold but I have no idea how to define differentiability on a manifold. Rather than look it up I thought I'd just ask.

 

Of course if we're in [math]\mathbb R^n[/math] this is clear.

 

This is still an interesting point of view even if I imagine that we are talking about Euclidean space and not manifolds. We're fixing a point and letting the functions vary. If we are in single-variable calculus, we can let [math]x = 1[/math] for example, and then [math]\frac{df}{dx}(1) : C^\infty_1 \to \mathbb R[/math] is a function that inputs [math]x^2[/math] and outputs [math]2[/math], inputs [math]x^3[/math] and outputs [math]3[/math], inputs [math]e^x[/math] and outputs [math]e[/math], and so forth.

 

You see I'm still bothered by your notation. Did you really want me to write [math]\frac{df}{d1}[/math] as you indicated earlier? I have a hard time believing that but I'll wait for your verdict.

 

It's clear to me that by the linearity of the derivative, [math]\frac{df}{dx}(1)[/math] is a linear functional on [math]C^\infty_1[/math]. But the domain is the real numbers, not some arbitrary one-dimensional manifold that I don't know how to take derivatives on. For one thing don't we need an algebraic and metric structure of some sort so that we can add and subtract vectors and take limits?

 

So I do sort of see where you're going with this. But I'm totally confused about how we lift the differentiable structure of [math]\mathbb R^n[/math] to [math]M[/math].

 

so that, for any [math]f \in C^{\infty}_m[/math] and since [math]m = \{x^1,x^2,...,x^n\}[/math] we may write [math]v=\frac{\partial}{\partial x^1}f + \frac{\partial}{\partial x^2}f+....+\frac{\partial}{\partial x^n}f[/math].

 

Or more succinctly as [math]v= \sum\nolimits^n_{j=1} \frac{\partial}{\partial x^j}f \in \mathbb{R}[/math].

Ok I believe this. Or maybe not. First, you are using those set brackets again and I do not for the life of me see how that can make any sense. There's no order to sets so how do you know which coordinate function goes with which coordinate? Secondly of course there is the manifold problem again, I don't know how to define a differentiable function on a manifold.

 

Now if I forget manifolds and pretend we're in [math]\mathbb R^n[/math] then I suppose we could define the functional [math]v=\frac{\partial}{\partial x^1}(m)f + \frac{\partial}{\partial x^2}(m)f+....+\frac{\partial}{\partial x^n}(m)f[/math]. I would almost believe this notation as I have written it.

 

This particular functional is defined at the point [math]m[/math]. However I see that you've left that part out and you're defining this functional for all points? But then it's not defined correctly. I don't know what is the input to the functional.

 

Can you clarify please?

 

Well like I say it's more or less clear what you're thinking but I'm lost on the points I've indicated.

 

ps -- Ah ... slight glimmer ... since [math]m[/math] itself has coordinates, we can break up the partials as acting on each coordinate separately, and we'll end up with some Kronecker-fu leading to the rest of your exposition. Is that the right intuition?

 

I'll push on.

 

(Later edit) ...

 

I can see a way to define differentiability.

 

If [math]M[/math] is a manifold and [math]U \subset M[/math] is an open set, and if [math]\varphi : U \to \mathbb R^n[/math] is a chart, and [math]f : U \to \mathbb R[/math] is a function, then we would naturally look at [math]f \varphi^{-1} : \varphi(U) \to \mathbb R[/math].

 

If [math]f \varphi^{-1}[/math] is smooth then (since [math]\varphi(U) \subset \mathbb R^n[/math]) we can take the partials with respect to the coordinate functions and then I think the rest of your notation works.

 

Is that right?
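
As a concrete check of this reading (a standard example, assuming the definition above is the right one): take [math]M = S^1[/math], [math]U[/math] the upper semicircle, [math]\varphi(a,b) = a[/math], and [math]f(a,b) = b[/math] the height function. Then [math]f \varphi^{-1}(t) = \sqrt{1-t^2}[/math] on [math](-1,1)[/math], which is smooth, so [math]f[/math] would count as smooth on [math]U[/math] under this definition.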

Edited by wtf
Posted

I'm completely confused by where all this is taking place. We don't know how to take derivatives on a manifold yet but your notation is assuming that we can.

We can. This is because of the continuous isomorphism (homeomorphism) [math]U \simeq R^n[/math]. Or if you prefer, our manifold is locally indistinguishable from an open subset of [math]R^n[/math]

 

 

Picky refinement, my understanding is that a [math]C^1[/math] function has a continuous derivative.

Yes, but a [math]C^0[/math] function is by definition a continuous function, and [math]C^1[/math] subsumes [math]C^0[/math]. As I said

 

 

I'm genuinely confused. We've been letting [math]M[/math] stand for a manifold. But we don't know how to differentiate a function on a manifold.

Yes we do - see above

for all we know our manifold [math]M[/math] is so full of corners it can't be differentiated at all.

If it is of class [math]C^{\infty}[/math] all functions (including coordinate functions) are continuous - no corners!

In order to get past this point I have to either assume we've defined differentiability on a manifold somehow, or else that we're working in [math]\mathbb R^n[/math].

Roughly speaking we are working in [math]R^n[/math], or something that "looks very like it", namely the open subset of [math]M[/math] where the homeomorphism [math]U \simeq R^n[/math] holds.

 

 

 

Little point of notational confusion. I'd believe [math]\frac{df}{dx}\biggr\rvert_{x=a}[/math] or [math]\frac{df}{dx}(a)[/math] but I'm not sure about your notation. Is that a typo or a standard notation?

It's standard (see below)

 

 

You see I'm still bothered by your notation. Did you really want me to write [math]\frac{df}{d1}[/math] as you indicated earlier? I have a hard time believing that but I'll wait for your verdict.

 

It's clear to me that by the linearity of the derivative, [math]\frac{df}{dx}(1)[/math] is a linear functional on [math]C^\infty_1[/math].

I'm afraid I cannot parse this.

 

Look, suppose that [math]f(x)=y[/math]. Then I can write [math]\frac{dy}{dx}=\frac{d(f(x))}{dx}[/math]. But the "x" in the "numerator" MUST be the same as the "x" in the denominator, so I introduce no ambiguity by writing [math]\frac{df}{dx}[/math]. This is standard

 

you are using those set brackets again and I do not for the life of me see how that can make any sense. There's no order to sets so how do you know which coordinate function goes with which coordinate?

The superscripts in [math]x^1,x^2,....,x^n[/math] are just tracking indices - they do not imply a natural order. I may have [math]x=x^1,\,y=x^2,\,z=x^3[/math] or equally I may have [math]x=x^2,\,y=x^3,\,z=x^1[/math]. It doesn't matter

Now if I forget manifolds and pretend we're in [math]\mathbb R^n[/math] then I suppose we could define the functional [math]v=\frac{\partial}{\partial x^1}(m)f + \frac{\partial}{\partial x^2}(m)f+....+\frac{\partial}{\partial x^n}(m)f[/math]. I would almost believe this notation as I have written it.

Well, you need to be careful. If I write, say, [math]\frac{d}{dx}(m)[/math] I really mean [math]\frac{d(m)}{dx}[/math], and this is not what you meant. What you write has no meaning. In terms of notation, if you wanted to specify a point of application you could write [math]\frac{df}{dx}|_m[/math] for [math]m \in U[/math]

 

This particular functional is defined at the point [math]m[/math]. However I see that you've left that part out and you're defining this functional for all points? But then it's not defined correctly. I don't know what is the input to the functional.

The input for any functional is, by definition, a vector. The output is a Real number. What you wrote (sorry, I lost it in transcription) is not a functional.

 

In my last post I gave you 2 functionals - [math]df[/math] and [math]dx^j[/math]. Please check that they are mappings from a vector space to the Real numbers

 

 

 

ps -- Ah ... slight glimmer ... since [math]m[/math] itself has coordinates, we can break up the partials as acting on each coordinate separately, and we'll end up with some Kronecker-fu leading to the rest of your exposition. Is that the right intuition?

Oh yes. Good.

 

 

If [math]M[/math] is a manifold and [math]U \subset M[/math] is an open set, and if [math]\varphi : U \to \mathbb R^n[/math] is a chart, and [math]f : U \to \mathbb R[/math] is a function, then we would naturally look at [math]f \varphi^{-1} : \varphi(U) \to \mathbb R[/math].

 

If [math]f \varphi^{-1}[/math] is smooth then (since [math]\varphi(U) \subset \mathbb R^n[/math]) we can take the partials with respect to the coordinate functions and then I think the rest of your notation works.

 

Is that right?

Sort of, but your reasoning escapes me. If on the LHS of the above you mean [math]f(\varphi^{-1}):\varphi(U) \to \mathbb{R}[/math] or [math]f\circ \varphi^{-1}:\varphi(U) \to \mathbb{R}[/math] (they mean the same) and since [math](\varphi^{-1} \circ \varphi)U= U[/math], then how does your composite function differ from [math]f:U \to \mathbb{R}[/math] (which I gave as a definition)?
Posted

So, in spite of a sudden lack of interest, I will continue talking to myself, as I hate loose ends.

 

Recall I gave you in post#27 that, for overlapping open sets [math]U[/math] and [math]U'[/math], we will have on [math]U \cap U'[/math] the coordinate transformations [math]x'^j=x'^j(x^k)[/math]. Notice I am here treating the [math]x'^j[/math] as functions, and the [math]x^k[/math] as arguments

 

Suppose some point [math]m \in U \cap U'[/math] and a vector space [math]T_mM[/math] defined over this point.

 

Recall also I said in post#41 that for any [math]v \in T_mM[/math] that [math]v(x^j)=\alpha^j[/math] which are called the components of [math]v= \alpha^j \frac{\partial}{\partial x^j}[/math].

 

Likewise I must have that [math]v=\alpha'^k \frac{\partial}{\partial x'^k}[/math]. We may assume these are equal, since our vector [math]v[/math] is a Real Thing

 

 

Since [math]\alpha^j=v(x^j)[/math] and [math]\alpha'^k=v(x'^k)[/math], we must have that [math]\alpha'^k= \alpha^j\frac{\partial x'^k}{\partial x^j}[/math].

 

This is the transformation law for the components of a tangent vector, also known (by virtue of the above) as a type (1,0) tensor.
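
A worked instance (Cartesian to polar on a suitable open subset of the plane, as a sketch only): with [math]x'^1 = r = \sqrt{(x^1)^2+(x^2)^2}[/math] and [math]x'^2 = \theta[/math], the law [math]\alpha'^k = \alpha^j\frac{\partial x'^k}{\partial x^j}[/math] gives [math]\alpha'^1 = \alpha^1\frac{x^1}{r} + \alpha^2\frac{x^2}{r} = \alpha^1\cos\theta + \alpha^2\sin\theta[/math] and [math]\alpha'^2 = \frac{-\alpha^1\sin\theta + \alpha^2\cos\theta}{r}[/math]. The law itself comes from applying [math]v[/math] to [math]x'^k = x'^k(x^j)[/math] and using the chain rule.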

 

It is no work at all to extract the transformation laws for higher rank tensors, and very little to extract those for type (0,n) tensors.

 

PS I do wish that members would not ask questions where either they are not equipped to understand the answers, or have no real interest in the subject they raise

Posted (edited)

No lack of interest. I'm working through your posts. I've been busy with other things and you're four posts ahead of me now but I intend to catch up.

 

However you're wrong about differentiability. If I map the graph of the Weierstrass function to the reals by vertical projection, I have a homeomorphism but no possible differentiable structure on the graph because the graph has no derivative at any point. I'll get busy on my next post (which I've drafted but not yet cleaned up) and elaborate on this point.

 

https://en.wikipedia.org/wiki/Weierstrass_function

 

Well never mind I'll just put this bit up here.

 

[attached image]

 

Now the point is that if the map [math]f \varphi^{-1} : \mathbb R^n \to \mathbb R[/math] happens to be differentiable (or smooth, etc.) then we say that [math]f[/math] is differentiable. Also we need the transition maps to be smooth as well. We talked about them a while back. You can confirm all this in volume one of Spivak's DiffGeo book. I'll add that working through your posts has enabled me to make sense of parts of Spivak; and reading parts of Spivak has enabled me to make sense of your posts. So I am making progress and finding this valuable.

 

You need to define differentiability this way. Mere homeomorphism is not enough, surely you agree with this point but perhaps forgot? Plenty of continuous functions aren't differentiable. Remember that almost all continuous functions are just like the Weierstrass function, differentiable nowhere.

 

Likewise your definition of [math]C^1[/math] is wrong, you need the function to be continuously differentiable and not just differentiable. There are functions that are differentiable but whose derivative is not continuous, and such functions are not [math]C^1[/math]. It is of course my curse in life that my ability to be picky and precise exceeds my ability to understand math, and I'm right about these two points despite being ignorant of differential geometry.

 

I will see if I can focus some attention this week on catching up with your last four posts.

 

 

"PS I do wish that members would not ask questions where either they they are not equipped to understand the answers, or have no real interest in the subject they raise ..."

 

Sorry was that for me? I'm paddling as fast as I can. If it's for someone else, personally I welcome any and all posts. This isn't the Royal Society and I'm sure I for one would benefit from trying to understand and respond to any questions about this material at any level.

Edited by wtf
Posted

PS I do wish that members would not ask questions where either they are not equipped to understand the answers, or have no real interest in the subject they raise

Would you like me to find someone to dust off your ivory tower? :) You are a clever guy but your intellectual aloofness and the implicit self-aggrandising I get from the quoted post, does you no favours.

 

One doesn't know that one might not understand the answer until one asks. Even though one may not understand completely, it may add a useful piece or two in the jigsaw puzzle for them. At the very least, it gives a person an indication how far they've got to go learning before they can understand and can put signposts in the road ahead for them. Besides, an answer may not prove useful to the questioner but will to someone else that is capable, now or in the future; It's never wasted.

 

I read all your posts in this thread and don't have a clue about most of it but they give me a sense of the scale of what is necessary to be learnt in order to understand this subject. It sets the stage for me, if not the details just yet. With increasing exposure, one becomes familiar with the unfamiliar.

Posted

However you're wrong about differentiability. If I map the graph of the Weierstrass function to the reals by vertical projection, I have a homeomorphism but no possible differentiable structure on the graph because the graph has no derivative at any point.

Yes, but at no point did I assert that a continuous function needs to be differentiable. Rather I asserted the converse - a differentiable function must be continuous.

 

Likewise your definition of [math]C^1[/math] is wrong, you need the function to be continuously differentiable and not just differentiable.

Maybe I did not make myself clear. I said that the [math]C^k[/math] property for a function "subsumes" the [math]C^0[/math] property. If we attach the obvious meaning to the [math]C[/math] in [math]C^k[/math] we will say that a [math]C^0[/math] function is continuous to order zero, a [math]C^1[/math] function is continuous to order one..... a [math]C^k[/math] function is continuous to order [math]k[/math]

 

I am sorry if my language was not sufficiently clear.

Posted

PS I do wish that members would not ask questions where either they are not equipped to understand the answers or have no real interest in the subject they raise

I take this to heart and plead guilty. As my philosophy prof once said: The spirit is willing but the flesh is weak. I have the math skills but my interest is drifting. The good news is that your posts enabled me to read parts of Spivak (*) and reading Spivak enabled me to understand parts of your posts. Learning has taken place and this has been valuable. You've moved me from point A to point B and I am appreciative.

 

I have not given up. I'm going far more slowly than I thought I would. I'll post specific questions if I have any. For the record you have no obligation to post anything. I regret encouraging any expectations that have led to disappointment. No one is more disappointed than me.

 

 

(*) Michael Spivak, A Comprehensive Introduction to Differential Geometry, Volume I, Third Edition. PDF here.

 

 

Now, all that said ... I have four specific comments, all peripheral to the main line of your exposition. Regarding the main line of your exposition, I pretty much understand all of it, but not well enough to turn it around and say something meaningful in response. The concepts are in my head but can't yet get back out. You should not be discouraged by that. Your words are making a difference.

 

Question 1) Definition of differentiable structure on [math]U[/math]

 

You wrote:

 

Yes, but at no point did I assert that a continuous function needs to be differentiable. Rather I asserted the converse - a differentiable function must be continuous.

First I stipulate that this issue is unimportant and if we never reach agreement on it, I'm fine with that.

 

However this remark was in response to my pointing out that you need the map [math]f \varphi^{-1} : \varphi(U) \to \mathbb R[/math] to be differentiable in order to define the differentiability of [math]f[/math]. It's the only possible thing that can make sense. And yes of course by [math]f \varphi^{-1}[/math] I mean [math]f \circ \varphi^{-1}[/math], sorry if that wasn't clear earlier.

 

For whatever reason you seem to have forgotten this. It's true that we think of [math]U[/math] as having a differentiable structure. But we have to define it as I've indicated. I verified this in Spivak. Homeomorphism can't be enough because there's no differentiability on an arbitrary manifold till we induce it.

 

Your not agreeing with this puzzles me. And your specific response about differentiable implying continuous doesn't apply to that at all.

 

As I say no matter on this issue but wanted to register my puzzlement.

 

* Question 2) Definition of [math]C^1[/math]

 

In response to this issue you wrote:

 

Maybe I did not make myself clear. I said that the [math]C^k[/math] property for a function "subsumes" the [math]C^0[/math] property. If we attach the obvious meaning to the [math]C[/math] in [math]C^k[/math] we will say that a [math]C^0[/math] function is continuous to order zero, a [math]C^1[/math] function is continuous to order one..... a [math]C^k[/math] function is continuous to order [math]k[/math]

 

I am sorry if my language was not sufficiently clear.

I apologize but you are still not clear. What does subsume mean? You can't mean subset, because the inclusions go the other way. If a function is [math]n[/math]-times continuously differentiable then it's certainly [math]n-1[/math]-times. So [math]C^n \subset C^{n-1}[/math]. So subsume doesn't mean subset.

 

Of course it does mean that a [math]C^n[/math] function is continuous. Differentiable functions are continuous, we all agree on that (is this what you were saying earlier?) So you are saying that a [math]C^n[/math] function must be continuous. Agreed, of course. That's "subsumed."

 

However this seems to be missing the point. The point is that there exists a differentiable function whose derivative is not continuous.
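
The standard witness for this, if I recall it correctly: [math]g(x) = x^2\sin(1/x)[/math] for [math]x \ne 0[/math] with [math]g(0) = 0[/math] is differentiable everywhere ([math]g'(0) = 0[/math] by the limit definition), but for [math]x \ne 0[/math] we have [math]g'(x) = 2x\sin(1/x) - \cos(1/x)[/math], which has no limit as [math]x \to 0[/math]. So [math]g[/math] is differentiable but not [math]C^1[/math].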

 

Therefore it's not good enough to say that [math]C^1[/math] is all the differentiable functions. It's all the differentiable functions whose derivative is continuous. There's no way I can fit "subsumes" into this.

 

Again like I say, trivial point, not important, we can move on. But I wanted to be as clear as I could about my own understanding, since like any beginner I must be picky.

 

* Question 3) The notation [math]\frac{df}{da}[/math]

 

Earlier you wrote:

 

So recall from elementary calculus that, given a [math]C^1[/math] function [math]f:\mathbb{R} \to \mathbb{R}[/math] with [math]a \in \mathbb{R}[/math] then [math]\frac{df}{da}[/math] is a Real number.

I have never seen this notation. [math]a[/math] is a constant. I asked about this earlier and did not understand your response. If [math]a = \pi[/math] would you write [math]\frac{df}{d\pi}[/math]? I would write [math]\frac{df(a)}{dx}[/math] or [math]\frac{df}{dx}(a)[/math], which you seem to think are radically different. Or even [math]\frac{df}{dx} \bigg\rvert_{x = a}[/math]. I'm confused on this minor point of notation.

 

* Question 4) The real thing I want to know

 

After glancing through Spivak I realized that I am never going to know much about differential geometry. Perhaps looking at Spivak was a mistake :)

 

I'm trying to refocus my search for the clue or explanation "like I'm 5" that will relate tensors in engineering, differential geometry, and relativity, to what I know about the tensor product of modules over a commutative ring in abstract algebra.

 

What I seek, which perhaps may not be possible, is the 21 words or less -- or these days, 140 characters or less -- explanations of:

 

- How a tensor describes the stresses on a bolt on a bridge; and

 

- How a tensor describes the gravitational forces on a photon passing a massive body; and

 

- Why some components of these tensors are vectors in a vector space; and why others are covectors (aka functionals) in the dual space.

 

And I want this short and sweet so that I can understand it. Like I say, maybe an impossible dream. No royal road to tensors.

 

Ok that is everything I know tonight.

Posted

Ha! So I am fired, in the nicest possible way! *wink*

 

Do not feel bad, wtf. Differential geometry is a hard subject, as you would see if you had all 5 volumes of Michael Spivak's work.

 

I do not pretend to have his depth of knowledge - I merely took a college course. Moreover his reputation as a teacher is extremely high, whereas mine is ....... (do NOT insert comment here!)

 

Regarding applications, all I can say is that I am neither an engineer nor a physicist, so as far as bridge bolts etc. you would need to ask somebody else.

 

On the other hand, it is not possible to study differential geometry without at some point encountering tensor fields, especially metric fields and the curvature fields that arise from them. These are the principal objects of interest in the General Theory of Gravitation.

 

If I offered to give guidance on this subject, it would be strictly as an outsider, an amateur.
