Xerxes
Everything posted by Xerxes
-
Is what just a template.....? Coordinates do not "alter their position", whatever that means. Let me try to explain. A manifold - for that is what spacetime is - is equipped at every point with a unique set of coordinates that completely describes the position of all events at that point. In "moving" these events to a new point, a new set of coordinates must (according to covariant or gauge theories) describe the same events. This is the reason that coordinate transformations play such a prominent role in field theories - you need to know how coordinates change as you "move" from one point to another. Having established that, using the differential calculus, you can forget about coordinates and get on with your life.
-
Why are you assuming that "spacetime is nothing"? Why shouldn't spacetime have a structure? In truth, your question has a venerable history; having ditched the so-called aether, Einstein, following his introduction of a tensor field description of gravitation, wondered exactly the same thing. He concluded, in his case, as e.g. Laplace had done long before him, that fields exist even in the absence of a gravitational source. In the case of a scalar theory of gravitation, this statement takes the rather simple form of [math]\nabla^2 \phi =0[/math], where [math]\phi[/math] is the scalar field and [math]\nabla^2 \phi= \nabla\cdot(\nabla \phi)[/math], i.e. the divergence of the gradient of the scalar field. In the field theory of gravitation, this reads [math]R_{\mu \nu}=0[/math], i.e. in the absence of a source spacetime is not curved. But note this crucial point - the curvature [math]R_{\mu \nu}[/math] is derived from second order partial derivatives of the metric field [math]g_{\mu \nu}[/math], which mandates, by the property of second order derivatives, ONLY that if [math]R_{\mu \nu}=0[/math] then [math]g_{\mu \nu}= \text{constant}[/math]. In other words, space is ALWAYS "filled" by the metric field. Einstein called this a form of aether. Likewise for the electromagnetic field.
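A quick numerical illustration of the scalar case (a sketch; the sample field [math]\phi = 1/r[/math], the evaluation point, and the step size are my own choices): the potential of a point source satisfies Laplace's equation everywhere away from the source, which a central-difference Laplacian confirms.

```python
import math

def phi(x, y, z):
    """Scalar potential of a point source at the origin: phi = 1/r."""
    return 1.0 / math.sqrt(x*x + y*y + z*z)

def laplacian(f, p, h=1e-3):
    """Central-difference estimate of the Laplacian of f at point p."""
    total = 0.0
    for i in range(3):
        q_plus = list(p); q_plus[i] += h
        q_minus = list(p); q_minus[i] -= h
        total += (f(*q_plus) - 2.0 * f(*p) + f(*q_minus)) / h**2
    return total

# Away from the source, the field satisfies Laplace's equation: result is ~0.
print(abs(laplacian(phi, [1.0, 2.0, 2.0])))
```

The field is nonzero everywhere, yet its Laplacian vanishes away from the source - the "space filled by the field" point in miniature.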
-
defining composite functions and inverses
Xerxes replied to SFNQuestions's topic in Analysis and Calculus
No, it is not valid (or only trivially). First some terminology.....

The domain of a function is the set of all those elements that the function acts upon. Each element in the set is called an argument of the function. The codomain - or range - of a function is the set of all elements that are the "output" of the function. So for any particular argument, the element in the codomain is called the image of the argument under the function.

Rule 1 for functions.....No element in the domain may have multiple images in the codomain.
Rule 2 for functions.....Functions are composed right-to-left.
Rule 3 for functions.....Functions can be composed if and only if the codomain of a function is the domain of the function that follows it (i.e. as written, is on the left).
Rule 4 for functions.....For some image in the codomain, the pre-image set is all those elements in the domain that "generate" this image.

Notice that, although images are always single elements, pre-images are sets - although they may be sets with a single member, in which case the function is said to have an inverse - not otherwise. Look closely at what you wrote above and check how many of these rules are violated.
-
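The rules above can be sketched on small finite sets (the sets, the dictionary maps, and the `compose` helper are my own, purely illustrative):

```python
def compose(g, f):
    """Right-to-left composition: (g . f)(x) = g(f(x)).

    Valid only when every output of f is a legal input of g,
    i.e. the codomain of f is the domain of g (Rules 2 and 3)."""
    return lambda x: g(f(x))

# f: A -> B and g: B -> C given as dictionaries on small finite sets
A = {1, 2, 3}
f = {1: 'a', 2: 'a', 3: 'b'}.get   # each argument has exactly one image (Rule 1)
g = {'a': 10, 'b': 20}.get

gf = compose(g, f)                 # codomain of f is the domain of g (Rule 3)
print([gf(x) for x in sorted(A)])  # [10, 10, 20]

# Pre-image of 'a' under f: a SET, possibly with several members (Rule 4)
preimage = {x for x in A if f(x) == 'a'}
print(preimage)                    # {1, 2}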
I said no such thing. Read again, paying particular attention to word ordering.
-
I don't understand the question. [math]x^3[/math] is a number, and is odd or even according to whether [math]x[/math] is odd or even. The function [math]f: \mathbb{R} \to \mathbb{R}, x \mapsto x^n[/math] is odd or even according to whether [math]n[/math] is odd or even.
-
I am not quite sure what's going on here - perhaps some are conflating the notion of odd/even numbers with that of odd/even functions. Here's a (sort of) definition: A function is odd if, for every element in its domain, it reverses the sign in the codomain (sometimes called the "range"), i.e. [math]f(-x)=-f(x)[/math]. Example: if [math]f(x) = 2x[/math] for all Real [math]x[/math], then obviously [math]f(x)+f(-x)=0[/math]. Conversely, a function is even if, for every element in the domain, the sign is preserved in the codomain, i.e. [math]f(-x)=f(x)[/math]. Example: if [math]f(x)=x^2[/math] for all Real [math]x[/math], then obviously [math]f(x)-f(-x)=0[/math]. What's hard about that?
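These definitions can be checked numerically (a sketch; the sample points and tolerance are my own choices):

```python
def is_even(f, samples):
    """f is even when f(-x) == f(x) for every sampled x."""
    return all(abs(f(-x) - f(x)) < 1e-12 for x in samples)

def is_odd(f, samples):
    """f is odd when f(-x) == -f(x) for every sampled x."""
    return all(abs(f(-x) + f(x)) < 1e-12 for x in samples)

xs = [0.5 * k for k in range(-10, 11)]

# f(x) = 2x reverses sign: odd, not even
print(is_odd(lambda x: 2 * x, xs), is_even(lambda x: 2 * x, xs))

# f(x) = x^2 preserves sign of output: even, not odd
print(is_even(lambda x: x ** 2, xs), is_odd(lambda x: x ** 2, xs))
```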
-
Though Mordred's logic is compelling, I am left with an uncomfortable feeling. Namely, granted that 2 clocks with shared coordinates, with respect to which they are equally at rest (or co-moving), function identically. And yet the conclusion seems to be that their relative accumulated time (i.e. their relative "ages") depends upon their respective kinematic histories. And given that these histories may not always be accessible, are we to conclude that elapsed time, i.e. age, is a fiction? Sorry if this is a foolish question - my excuse is that I am not a physicist.
-
some questions about graphing equations and expressions ??
Xerxes replied to bimbo36's topic in Mathematics
Well, let's see if I can help. First, do not confuse expressions with equations - equations, as the name suggests, always include an "equals", whereas expressions never do. So [math]2x + 1[/math] is an expression, and it cannot be graphed. To see why, suppose a graph is a line in the plane; then you need 2 numbers (say) to specify each point in this plane. But [math]y = x^2[/math] is an equation, and it can be graphed. In mathematics, we try to make life easy by regarding [math]y=x^2[/math] as an identity, and by suggesting there is a function such that [math]f(x)=y=x^2[/math]. This looks like fancy math-speak, right? But all it is saying is that for any object [math]x[/math] there is another object [math]y = x^2[/math] given by our function. One calls the [math]x[/math] in [math]f(x)[/math] the independent variable, and the [math]y=x^2[/math] the dependent variable. Which is merely to say that the output of our function depends on its input. So, by convention, the independent variable is plotted on a horizontal axis in a 2-D plane, and the dependent variable is plotted on the vertical axis. As to whether the same function can give rise to more than one graph, the answer may be yes - that is why it is helpful to specify what sort of object the independent variable is (this is called the domain of our function), and equally what sort of object the dependent variable is (this is called the codomain). If these are carefully specified, there need only be 1 graph per function. As an example consider again our function [math]f(x)=y=x^2[/math]. When [math]x[/math] is some whole number, the graph is "ascending", but when [math]x = \frac{1}{a}[/math] for some whole number [math]a[/math] the graph is "descending". Can you see why?
-
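Tabulating [math]y = x^2[/math] shows the contrast the post ends with (a sketch; the sample inputs are my own choices):

```python
# y = x^2 tabulated at whole-number inputs: squares grow faster than the inputs
whole = [1, 2, 3, 4]
print([x ** 2 for x in whole])       # [1, 4, 9, 16]

# At x = 1/a for whole a, the squares are SMALLER than the inputs
fractions = [1 / a for a in whole]
print([x ** 2 for x in fractions])
print(all(x ** 2 < x for x in fractions if x < 1))  # True
```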
OK. To simplify notation, consider a function [math]f:X \to Y[/math], where [math]X[/math] is called the domain, and [math]Y[/math] is called the codomain. If for any [math]y \in Y[/math] there is at most one [math]x \in X[/math] such that [math]y =f(x)[/math], one calls this an injection. If for any [math]y \in Y[/math] there is at least one [math]x \in X[/math] such that [math]f(x) =y[/math], one calls this a surjection. Roll these together, and you have the bijection: for any [math]y \in Y[/math] there is at least one and at most one [math]x \in X[/math] such that [math]f(x) =y[/math]. Now, no function, on sets or otherwise, is allowed to have multiple images in [math]Y[/math] (these being the result of applying our function to its domain, here [math]X[/math]). But, by our surjection above, the preimage [math]f^{-1}(y)[/math] MUST be a non-empty set, say [math]f^{-1}(y) =\{x_1,x_2,...,x_n\}\subseteq X[/math]. The preimage of our injection is a set also. So, by my above, for the injection [math]f:X \to Y[/math], either [math] f^{-1}(y) = \emptyset[/math] or [math]f^{-1}(y) = \{x\}[/math]. This is called a "singleton set". For the bijection we will therefore have that [math]f^{-1}(y) \ne \emptyset,\,\,\, f^{-1}(y) = \{x\}[/math]. In this particular circumstance it is customary to ever so slightly abuse the notation, equate [math]\{x\} = x[/math], and by a further slight abuse call [math]f^{-1}[/math] the inverse of [math]f[/math].
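On finite sets these pre-image definitions can be checked directly (a sketch; the sets and helper names are my own):

```python
def preimage(f, domain, y):
    """The pre-image f^{-1}(y): all domain elements mapping to y."""
    return {x for x in domain if f(x) == y}

def is_injective(f, domain, codomain):
    # injection: at most one pre-image element for every y
    return all(len(preimage(f, domain, y)) <= 1 for y in codomain)

def is_surjective(f, domain, codomain):
    # surjection: at least one pre-image element for every y
    return all(len(preimage(f, domain, y)) >= 1 for y in codomain)

X, Y = {1, 2, 3}, {'a', 'b', 'c'}
f = {1: 'a', 2: 'b', 3: 'c'}.get             # a bijection
print(is_injective(f, X, Y), is_surjective(f, X, Y))  # True True
print(preimage(f, X, 'b'))   # {2} -- a singleton set, so f is invertible
```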
-
It is not clear what you mean here. If by "taking the same value" you mean that for each [math]x[/math] in the domain there is more than one [math]f(x)[/math] in the codomain, then we do not have a function at all! If you mean that more than one [math]x[/math] in the domain has the same image in the codomain, then we may still have a function, and even a surjection, but not an injection, and therefore not a bijection. But yes, we may assume that a bijection admits of an inverse. (Even though the expression as you wrote it here is not a function - it is just that, an expression.) Why is [math]f(x) = x^7+x^5[/math] a bijection? Well, note that for [math]f(x)=x^n[/math] with [math]n[/math] an odd integer, the sign in the domain is preserved in the codomain. But when [math]n[/math] is even, arguments of both signs have positively signed images in the codomain (easy proof by induction on [math]n[/math]). Hence only in the former case can there be a bijection - this will apply equally to the function in question here. What the image of the inverse is, I cannot say - the usual methods seem to give rubbish answers!
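Although there is no tidy closed form for the inverse, it can be computed numerically, since [math]f(x) = x^7 + x^5[/math] is strictly increasing (its derivative [math]7x^6 + 5x^4[/math] is never negative). A bisection sketch (bracket, tolerance, and helper names are my own choices):

```python
def f(x):
    return x ** 7 + x ** 5

def inverse(y, lo=-10.0, hi=10.0, tol=1e-12):
    """Bisection inverse of the strictly increasing f on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

x = inverse(2.0)       # solve x^7 + x^5 = 2; x = 1 since 1 + 1 = 2
print(round(x, 6))     # 1.0
print(round(f(x), 6))  # 2.0
```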
-
Yes, it is called "category theory". Here, one defines a category of objects Set which includes all sets. This category is not, for the reason you have been given, a set itself. Similarly, one may define (for example) the category of rings Rng, the category of fields Fld, and so on. The important point about category theory is that it extracts only those properties of mappings between sets, rings, fields etc. that they have in common. Which leads to the (to me at least) nice abstraction that the exact nature of the objects in a category is of less importance than the mappings between them. Even more interesting is the fact that one can define a mapping between categories, called a functor, and a lot more besides. It leads to some very nice mathematics which is difficult to get one's head around, especially if one is classically trained. I know of only a handful of people who dare to use this theory in applications. The excellent John Baez is one of them. Look him up.
-
Yes. I denied this in another thread, so here is the simple argument I used to convince myself I was wrong. Suppose a proposition [math]P(x)[/math] and that the set [math]A[/math] is such that [math]P(x)[/math] is true for all [math]x\in A[/math]. Then unless our proposition is vacuous, there must be some other set for which the proposition is false. Call this set the complement [math]A^c[/math]. Then there must exist a third set [math]S=A \cup A^c[/math]. But since [math]A[/math] is entirely arbitrary, unless I specify in advance that [math]S \supsetneq A[/math] is well-defined, I may have that [math]A^c = U \setminus A \Rightarrow S=U[/math], the universal "set" whose existence we all abhor as much as Bertie did. Moreover, like any other set, it is a subset of itself!
-
So, if I am ignorant, confused or merely mad I am welcome to start a new thread in open forum. And not otherwise. This is a strange ethos, and one to which I cannot subscribe. Sorry.
-
And they say that time travel is impossible? Anyway, my apologies - I found it on my hard drive, forgetting why I had put it there (comes from drinking fermented barley juice on a Saturday night). Just ignore me. PS by edit. Actually, in spite of my apology, I am more than a little cross - on 2 occasions I have tried to inject a little life into a moribund subforum, and met with something approaching hostility (I exclude wtf, btw). But don't worry - I shall not try again
-
This is not as frivolous as the title suggests, but since this subforum seems to have gone into hibernation, try this:
1. Take any number at all with more than 1 digit and add the digits.
2. If the result has more than one digit, add again.
3. Now subtract this single-digit number from your starting number (not added).
4. Add the resulting digits again.
5. I bet it will be the number 9.
Can you see why? Once you do, you will see that the restriction in (1) to more than a single digit is not required. PS If anyone here asks "what is the point of this thread, do I have a question?", the question is already posed, and the "point" is to illustrate an important point in mathematics
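The steps above can be sketched directly (helper names are my own; the underlying fact is that a number minus its digit sum is always a multiple of 9):

```python
def digit_sum(n):
    """Repeatedly add decimal digits until a single digit remains (steps 1-2)."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

def trick(n):
    """Steps 3-4: subtract n's repeated digit sum from n, then add digits again."""
    return digit_sum(n - digit_sum(n))

# n - digit_sum(n) is divisible by 9, so the final digit sum is always 9
print([trick(n) for n in (38, 254, 2017, 999999)])  # [9, 9, 9, 9]
```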
-
Since, by your own admission, you do not know any differential geometry, you seem to have failed to grasp that the article you read is a special case of the more general construction I gave. At the risk of attracting more criticism from other members here, I will try to explain. First suppose that [math]M,\,\,N[/math] are manifolds with [math]p \in M,\,\,\,q \in N[/math]. Let us denote the class (it's not a set!) of all Real-valued functions [math]f: M \to \mathbb{R}[/math] at [math]p \in M[/math] by [math]F^{\infty}_p[/math]. Then the vector space of all vectors tangent at the point [math]p[/math], denoted by [math]T_pM[/math], is the set of all mappings [math] v:F^{\infty}_p \to \mathbb{R}[/math] (strictly, those mappings that are linear and satisfy the Leibniz rule). Now if I have a smooth mapping [math]\phi:M \to N[/math] then, exactly as before, I will have the pushforward [math]\phi_*:T_pM \to T_q N[/math] as a mapping of maps onto maps. Why your Wiki article says the pushforward is equivalent to the differential of the mapping [math]\phi:M \to N[/math] requires a greater knowledge of differential geometry, which my recent experience here demotivates me from attempting to explain. There could have been a more tactful way to phrase that! But whatever, I am now truly done here
-
Yes, but I deny that the constructions I "failed" to explain are restricted to differential geometry. As to the first, I had assumed it would be those members interested in pure mathematics. As to the second, how on Earth would I know? You're kidding, right? How does anyone explain (in the way you demand) tangent spaces on differential topological manifolds without first going into point-set topology? And a lot more besides. I can do it (mod my "expository deficiencies"), but it would take a whole book! My final word: If it is the consensus that this thread is rubbish, so be it. There is no point my insisting otherwise. Apologies for wasting forum disc space. Ben
-
Thank you for your candour. The fact that this seems to me a curious thing to say in a mathematics forum is of course neither here nor there.
-
No, I don't have a question, and I thought my "point" was clear. Assuming fun is allowed here.... Anyway. Recall that for any vector space [math]V[/math] its dual [math]V^*[/math] is also a vector space defined over the same field [math]\mathbb{F}[/math]; let us agree to call the ensemble of all such vector spaces the category F-Vec. Then the mapping [math]^*:\text{F-Vec} \to \text{F-Vec},\,\,V \mapsto V^*[/math] is referred to as a functor. Moreover, it is an endofunctor. Even better, since we saw the mapping [math]^*:(f:V \to W) \mapsto [f^*:W^* \to V^*][/math], it is called a contravariant functor. So if we adopt the rather outmoded convention that "ordinary" vectors are contravariant and that dual vectors are covariant, the contravariant functor here maps contravariant vector spaces onto covariant vector spaces. Whereas the identity functor [math]Id_{\text{F-Vec}}[/math], which is obviously also an endofunctor, maps covariant vector spaces onto covariant vector spaces, and likewise maps contravariant spaces onto contravariant spaces. Which I think is rather nice. Since nobody seems especially interested in this thread, I will leave it there (but there are questions that I haven't addressed)
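The contravariance can be demonstrated concretely: the dual functor acts on a map by precomposition, and it reverses composition, [math](g \cdot f)^* = f^* \cdot g^*[/math]. A minimal sketch (the particular maps and the functional `h` are my own choices):

```python
def dual(g):
    """Action of the * functor on a map: precomposition (arrows reverse)."""
    return lambda h: (lambda v: h(g(v)))

def f(u): return (u, u)            # f: R -> R^2
def g(v): return v[0] + 2 * v[1]   # g: R^2 -> R
def gf(u): return g(f(u))          # g . f : R -> R

h = lambda t: 10 * t               # a functional (dual vector) on R

# Contravariance: (g . f)* = f* . g* -- both give the same functional
print(dual(gf)(h)(1.0), dual(f)(dual(g)(h))(1.0))  # 30.0 30.0
```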
-
Well, you guys are no fun! Am I boring you? Then try this..... Recall that, given [math]g: V \to W[/math] as a linear operator on vector spaces, we found [math]g_*: L(U,V) \to L(U,W)[/math] as the linear operator that maps [math]f \in L(U,V)[/math] onto [math]g \cdot f \in L(U,W)[/math], and called it the push-forward of [math]g[/math]. In fact let's make that a definition: [math] g_*(f) = g \cdot f[/math] defines the push-forward. This construction arose because we were treating the space [math]U[/math] as a fixed domain. We are, of course, free to treat [math]U[/math] as a fixed codomain, like this. This seems to make sense; certainly domains and codomains come into register correctly, and we easily see that [math]h \in L(W,U),\, h \cdot g \in L(V,U)[/math]. Using our earlier result, we might try to write the operator [math]L(g,U): L(W,U) \to L(V,U), h \mapsto h \cdot g[/math], but something looks wrong; [math]g[/math] is going "backwards"! Nothing daunted, let's adopt the convention [math]L(g,U) \equiv g^*[/math]. (We will see this choice is no accident.) Looking up at my diagram, I can picture this as pulling the "tail" of the h-arrow back along the g-arrow onto the composite arrow, and accordingly (using the same linguistic laxity as before), call [math]g^*[/math] the pull-back of [math]g[/math], and make the definition: [math]g^*(h) = h \cdot g[/math] defines the pullback. (Compare with the pushforward.) This looks weird, right? But it all makes beautiful sense when we consider the following special case of the above, where I have assumed that [math]\mathbb{F}[/math] is the base field for the vector spaces [math]V,W[/math]. As before, the composition makes sense, and I now have [math]\phi \in L(W, \mathbb{F}),\, \phi \cdot g \in L(V, \mathbb{F})[/math], and the pullback [math]g^*: L(W,\mathbb{F}) \to L(V,\mathbb{F})[/math]. But, hey, lookee here....
[math]L(U, \mathbb{F})[/math] (say) is the vector space of all linear maps [math]U \to \mathbb{F}[/math], which defines the dual vector space, so we quite simply have that [math] L(W,\mathbb{F}) = W^*,\,L(V, \mathbb{F}) = V^*[/math], the dual vector spaces. Putting this all together I find that, for [math]g: V \to W[/math] I will have [math]g^*:W^* \to V^*[/math] as my pullback. I say this is just about as nice as it possibly could be. I have one further trick up my sleeve.........
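The pullback-on-duals construction above can be sketched in a few lines (the particular [math]g[/math] and the covector [math]h[/math] are my own illustrative choices): [math]g^*[/math] sends a functional on [math]W[/math] to a functional on [math]V[/math] by precomposition.

```python
def pullback(g):
    """g*: sends a functional h on W to h . g, a functional on V."""
    return lambda h: (lambda v: h(g(v)))

# g: R^2 -> R^3, a linear map between V = R^2 and W = R^3
def g(v):
    x, y = v
    return (x + y, 2 * x, 3 * y)

def h(w):          # h = the covector (1, 1, 1) acting on W = R^3
    return sum(w)

h_pulled = pullback(g)(h)    # an element of V* = (R^2)*
print(h_pulled((1.0, 2.0)))  # h(g(1, 2)) = h(3, 2, 6) = 11.0
```

Note that the arrow really has reversed: [math]g[/math] goes [math]V \to W[/math], but [math]g^*[/math] goes [math]W^* \to V^*[/math].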
-
This is just so much fun. We will suppose that [math]U,V,W[/math] are vector spaces, and that we have linear operators (aka transformations) [math]f:U \to V,\,g:V \to W[/math]. Then we have the composition [math]g \cdot f: U \to W[/math] as shown here. (Remember we compose operators or functions reading right to left.) Notice the rudimentary (but critical) fact that this only makes sense because the codomain of [math]f[/math] is the domain of [math]g[/math]. Now, it is a classical result from operator theory that the set of all operators [math]U \to V[/math] is a vector space (you can take my word for it, or try to argue it for yourself). Let's call the vector space of all such operators [math]L(U,V)[/math], etc. Then I will have that [math]f \in L(U,V),\, g\in L(V,W),\,g \cdot f \in L(U,W)[/math] are vectors in these spaces. The question naturally arises: what are the linear operators that act on these spaces? Specifically, what is the operator that maps [math]f \in L(U,V)[/math] onto [math]g \cdot f \in L(U,W)[/math]? By noticing that here the "[math]U[/math]" is a fixed domain, and that [math]g: V \to W[/math], we may suggest the notation [math]L(U,g): L(U,V) \to L(U,W)[/math]. But, for reasons which I hope to make clear, I will use a perfectly standard alternative notation: [math]L(U,g) \equiv g_*:L(U,V) \to L(U,W),\, g_*(f) = g \cdot f[/math]. Now, looking up at my diagram, I can think of this as "pushing" the tip of the f-arrow along the g-arrow to become the composite arrow. Accordingly, I will call this the push-forward along [math]g[/math], or, by an abuse of language, the push-forward of [math]g[/math]. So, no real shocks here, right? Ah, just wait, the fun is yet to begin, but this post is already over-long, so I'll leave you to digest this for a while.........
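The push-forward definition [math]g_*(f) = g \cdot f[/math] can be sketched directly (the particular maps are my own illustrative choices):

```python
def pushforward(g):
    """g_*: sends f in L(U, V) to g . f in L(U, W) (postcomposition)."""
    return lambda f: (lambda u: g(f(u)))

# f: R -> R^2 and g: R^2 -> R^3, so g_*(f): R -> R^3
def f(u):
    return (u, 2 * u)

def g(v):
    x, y = v
    return (x + y, x, y)

gf = pushforward(g)(f)
print(gf(1.0))  # g(f(1)) = g(1, 2) = (3.0, 1.0, 2.0)
```

The domains line up exactly as the post insists: the codomain of `f` is the domain of `g`, and `g` is "pushed" along without reversing any arrows.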
-
What is the minimum number of properties possessed by members of a set?
Xerxes replied to studiot's topic in Mathematics
wtf, I thank you for your forbearance and for the above interesting and informative post. I also apologize for doubting that your knowledge of the subject exceeds mine. Sorry! Exactly so. At my college, set theory was introduced as "something you need to be aware of", then dismissed as "dry, not to say arid". We were pointed to a nice little book by Paul Halmos (yellow, as I recall), which I read and enjoyed but which was unequivocally titled "Naive Set Theory". But just you wait - one of these days I'll best you!! *wink*
-
What is the minimum number of properties possessed by members of a set?
Xerxes replied to studiot's topic in Mathematics
wtf, I am baffled by your response. I hope the following doesn't come across as aggressive. In this context, he did not. In fact I used quite standard set-builder notation. Since for any object or set the assertion that [math]x \in x[/math] makes no sense, neither does its negation, so I cannot see your point. You can of course have [math]P(x)[/math] as the proposition that [math]x \ne x[/math], which will define the empty set. On the contrary, YOUR construction may allow the existence of some "universal set", which does indeed lead to the Russell paradox, which I state correctly here...... "There can exist no set that contains its complement as a proper subset". Or maybe you thought my use of the term "universe of objects" implied I was referring to a universal set. I was not, and there was nothing to suggest that I was. Great Heavens, you don't say! I hope you are not offering this as a definition?
-
What is the minimum number of properties possessed by members of a set?
Xerxes replied to studiot's topic in Mathematics
Well I do, though I rather think his diversion into equivalence relations was not well-motivated - I'll explain why I think this in a later post (if I get time). Anyway. Suppose that [math]x[/math] is an object in the universe of all possible objects. Say that [math]P[/math] is a proposition, and say that [math]P(x)[/math] means the proposition is true for any such [math]x[/math]. One writes [math]X=\{x:P(x)\}[/math] to denote the set of objects for which the proposition is true. Obviously this has non-trivial content for a very large number of sets. But in the case of a set formed by an arbitrary selection of just one element from a (possibly infinite) number of other sets, what can we say about our proposition [math]P[/math]? Only that [math]P(x)[/math] means that [math]x\in X[/math] if and only if [math]x \in X[/math]. Hardly a surprise, you will agree, so I say that the proposition [math]P[/math] is trivial in this case.
-
What is the minimum number of properties possessed by members of a set?
Xerxes replied to studiot's topic in Mathematics
I'll put the kettle on for them. Do they take sugar?