Xerxes
Everything posted by Xerxes
-
This thread attempts to address (albeit in a somewhat oblique way) some issues in elementary set theory that have emerged here. I promise you will enjoy it, but first let me freely confess I have posted this elsewhere. I dare say it is against the "rules", but then......

Before the fun starts, we have to do a bit of homework. The cardinality of a set is simply its number of elements. If [math]S[/math] is a set, then its powerset [math]\mathcal{P}(S)[/math] is simply the set formed from all its subsets. Thus, bearing in mind that [math]S[/math] and [math]\emptyset[/math] are always subsets of [math]S[/math] ([math]\emptyset[/math] is the empty set, btw), then, when [math]S=\{a,b,c\}[/math], we will have [math]\mathcal{P}(S) = \{\{a\},\{b\},\{c\},\{a,b\},\{a,c\}, \{b,c\}, \{a,b,c\},\emptyset\}[/math]. Note that the elements of the powerset are subsets of the set - this will be important. Note also that I am abusing notation slightly - conflating sets with elements - but this is standard, and the context is clear enough, I trust.

There is a theorem of Cantor that says the cardinality of [math]\mathcal{P}(S)[/math] is always strictly greater than the cardinality of [math]S[/math]. Let's see...... In the example above this is not hard to check: the cardinality of [math]S[/math] is 3, that of [math]\mathcal{P}(S)[/math] is [math]8 = 2^3[/math]. In fact, using all available fingers and toes, and those of our wives and girlfriends (assuming they are different), it is not hard to see that, in general, for any set [math]S[/math] of finite cardinality [math]n[/math], the powerset [math]\mathcal{P}(S)[/math] has cardinality [math]2^n[/math].

But what if the cardinality of our set is infinite? What do we make of the assertion, say, that [math]2^{\infty} > \infty[/math]? Surely this is madness? One last thing, of MAJOR importance (sorry for shouting). A set is said to be countable iff it can be put in bijection with a subset of the natural (i.e. "counting") numbers [math]\mathbb{N}[/math].
Since, from the above, [math]\mathbb{N}[/math] is always a subset of itself, and is infinite, we have the brain-curdling expression "countably infinite". (That's yet another reason to love mathmen.)

Let's assume our set [math]S[/math] is countably infinite in this sense. Let's also assume that [math]\mathcal{P}(S)[/math] is countably infinite in the same sense, that is, in accord with intuition, [math]2^{\infty} = \infty[/math]. We will find a contradiction.

So, since [math]S[/math] is countable, we can index each and every element by an element of [math]\mathbb{N}[/math] (this is due to our bijection). Call the [math]n[/math]-th element [math]s_n \,\,\,\,(n \in \mathbb{N})[/math]. (Note that the choice of ordering is arbitrary, but, having chosen, we had jolly well better stick with it.) Assuming that [math]\mathcal{P}(S)[/math] is also countably infinite, then to each element here we can also assign an index. So let's write a list of these elements (subsets of [math]S[/math], remember) and call the [math]n[/math]-th member of this list [math]l_n[/math].

Now form the set [math]D[/math] by the rule that [math]s_n \in D[/math] iff [math]s_n \in l_n[/math]. Now [math]D[/math] is obviously a subset of [math]S[/math], hence an element of [math]\mathcal{P}(S)[/math], so it is eligible to be in our list. Let's call this list element [math]l_p[/math], so by our rule we have that [math]s_p \in D[/math] iff [math]s_p \in l_p[/math]. But hey! [math]l_p = D[/math], so we arrive at the breath-taking conclusion: [math]s_p \in D[/math] iff [math]s_p \in D[/math]. Wow! Let's write to mother about that.

But wait.... For every subset of [math]S[/math] (that is, every element of [math]\mathcal{P}(S)[/math]) I can find its complement; in particular, there is the element of [math]\mathcal{P}(S)[/math] comprising those elements of [math]S[/math] that are not in [math]D[/math]. Let's call it [math]D^c[/math].
Under the assumption that our powerset is countable, [math]D^c[/math] must appear somewhere in our list; call it the [math]q[/math]-th member, and apply the same rule as before: [math]s_q \in D[/math] iff [math]s_q \in l_q[/math]. But [math]l_q = D^c[/math], so we conclude that [math]s_q \in D[/math] iff [math]s_q \not\in D[/math]. This is clearly nuts, so the assumption that the powerset is countable must be false. Cantor's Theorem (for the countable case, at least) is thereby proved. A really sweet result, wouldn't you say?
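For the finite case, at least, the [math]2^n[/math] count is easy to check by machine. Here is a minimal Python sketch (the helper name powerset is my own, not anything standard):

```python
from itertools import combinations

def powerset(s):
    """Return the set of all subsets of s, each as a frozenset."""
    items = list(s)
    return {frozenset(c)
            for n in range(len(items) + 1)
            for c in combinations(items, n)}

S = {"a", "b", "c"}
P = powerset(S)

# |P(S)| = 2^|S|, and both S itself and the empty set are members.
assert len(P) == 2 ** len(S)            # 8 = 2^3
assert frozenset() in P
assert frozenset(S) in P
```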
-
Sorry, but I can make no sense of these last two posts. It might help to point out a coupla things: 1. The objects that enter into the set operations union and intersection MUST be sets; they cannot be elements (in topology we stretch the point slightly and informally refer to the sets in a topological space as "elements", but strictly this is wrong). Likewise the result of these operations on sets is again a set, never an element. (This is called a "closed" operation, BTW.) 2. As my tutor was fond of saying: "since notation is arbitrary, there is no good reason not to use the same as everyone else". Try it...... There is obviously some confusion about simple set theory here. If you would like a brief tutorial, just ask; I am sure some of us here can help out with that.
-
I'm afraid I don't recognize the qualifier "against" in this context. Hmm, how can I put this to you kindly? Your understanding of set theory needs a little work. Bear with me, and please pay close attention to the way I use notation, which is standard but arbitrary. Define a set called FRUIT, and include as its elements all apples, all oranges, all pears etc. (notice the plural here). Let's write [math]F=\{A,O,P,....\}[/math], so that the union [math]A\cup O\cup P \cup..... = F[/math]. So now we have the subsets {apples}, {oranges}, {pears}, so that any chosen apple, say, is an element of the subset {apples} of the set FRUIT. One writes [math] a \in A \subsetneq F[/math]. Notice that no apple can also be an orange (as far as I am aware), so the intersection is empty, i.e. [math]A \cap O = \emptyset[/math]; this is called being "disjoint", so that the set FRUIT is the union of disjoint sets, what I called the "disjoint union". Of course one may always introduce a new constraint, say size, in which case the union need not be disjoint, since the intersection of subsets may not be empty.
-
Hmm, well, the structure that forgetful functors "forget" is algebraic, so unless our sets are algebraic, I'm not sure it applies. Two classic examples: the functor that sends the category of vector spaces to the category of abelian groups "forgets" the algebraic operation of scalar multiplication by the field over which our vector spaces are defined. Likewise, the forgetful functor that sends the category of groups to their underlying sets "forgets" the group binary operation. Although the OP was not very clear, even less so his/her follow-ups, I suspect the "new" operation being described is the disjoint union of sets. That is, say we have a set of all colours and a set of all objects; then their disjoint union is the set {{red, car}, {red, dress}, {blue, car}, ...., {white, car}, ...., {blue, moon}, ....}. Contrast this with the "ordinary" union of sets: in the disjoint union we do NOT forget the origin of each element entering into the union, whereas in the ordinary union we do. In a certain sense this is opposite to the forgetful functor!
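A common way to model a disjoint union that remembers each element's origin is to tag every element with the name of its source set. A minimal Python sketch (the tagging scheme and the helper name disjoint_union are my own choices, not anything standard):

```python
def disjoint_union(**tagged_sets):
    """Disjoint union: pair each element with the name of its source set."""
    return {(tag, x) for tag, s in tagged_sets.items() for x in s}

colours = {"red", "blue"}
objects = {"car", "dress"}

du = disjoint_union(colour=colours, obj=objects)
assert ("colour", "red") in du and ("obj", "car") in du

# When the sets share an element, the disjoint union keeps both copies,
# distinguished by their tags; the ordinary union collapses them:
du2 = disjoint_union(a={"x", "y"}, b={"y", "z"})
assert len(du2) == 4                       # ("a","y") and ("b","y") are distinct
assert len({"x", "y"} | {"y", "z"}) == 3   # ordinary union forgets the origin
```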
-
Yes, but that is not all! Try this party trick, amaze your friends.... 1. Take any number at all with more than 1 digit and add the digits. 2. If the result has more than one digit, add again. 3. Now subtract this single-digit number from your starting number (the original number, not the sum). 4. Add the resulting digits again. 5. I bet it will be the number 9. Can you see why? Once you do, you will see that the restriction in (1) to more than a single digit is not required. Hint in white follows (no peeking!!) It is simply modulo 9 arithmetic, where 0 = 9 mod 9. As it seems you peeked, your forfeit is to explain, using modulo 9 arithmetic, why this works. There are any number of such "mind reading" tricks one can devise once you see what's going on.
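For the skeptical, the trick is easy to brute-force in Python (a quick sketch of my own; the helper name digit_root is made up):

```python
def digit_root(n):
    """Repeatedly sum the digits of n until a single digit remains."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

# The trick: n ≡ digit_root(n) (mod 9), so n - digit_root(n) is a
# positive multiple of 9, and the digit root of any such number is 9.
for n in range(10, 10000):
    assert digit_root(n - digit_root(n)) == 9
```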
-
This is a very bold assertion; can you justify it? For a counterexample, any fundamental particle such as, say, the electron is zero-dimensional. Are these not "real" physical objects? Anyhoo, as this is a Math forum, I had assumed that the objects under discussion were mathematical objects, hence my seemingly inappropriate last post. But just to nit-pick: area is a number; it is 1-dimensional. You probably meant the surface itself, as I tried to show in my previous (and obviously failed) post.
-
Now, beware assigning a "hierarchy" to dimensions. In mathspeak, the number that defines a dimension is cardinal, not ordinal. Exists in what sense? You mean that 2-dimensional objects don't exist? No. Look, and I am sorry to be boringly technical (you may well feel it proves your point!). Consider a line of finite length, as lines are usually understood. We will assume that this line can be infinitely sub-divided. Then we will assume that each element in this sub-division corresponds to a real number. Thus our line is a "segment" of the real line [math]\mathbb{R}^1[/math]. We now ask: how many real numbers do I need to uniquely identify a point on this line? The answer is, of course, 1. Let us call this line 1-dimensional, by virtue of this fact. Let's take our line segment and join it head-to-tail. We instantly recognize this as a circle. Obviously, the same applies; any point on this circle can be uniquely described by a single real number, and accordingly I will call this geometric object 1-dimensional. Mathmen use the symbol [math]S^1[/math] for this character, and call it the 1-sphere. Now let's try and think about the "2-dimensional line", or 2-line. What can this mean (if anything)? Well, using the above, we may assume that this is the "line" that requires two numbers to uniquely describe a point. From which we infer that the "2-line" is the plane. We may also infer, from the above, that the 2-sphere [math]S^2[/math] is, in some weird and abstract sense, a head-to-tail "joining" of a part of this plane. The 2-sphere is merely some sort of jazzed-up plane; that is, it knows nothing about the area/volume it may or may not enclose, any more than does the 2-plane. The same applies to any n-sphere. With a grinding of gears, let's now consider the area enclosed by the 1-sphere as defined above. Intuition tells us, in this case quite correctly, that it is part of the "2-line", i.e. the plane.
This part of the plane is usually referred to as the "disk" [math]D^2[/math]. It is, in fact, the 2-ball. Similarly, the "area" enclosed by the 2-sphere is a part of the 3-plane, and so on. Obviously, this latter "area" is the volume (as it is normally understood) enclosed by the 2-sphere, from which we conclude that, provided we are allowed to think of area as a 2-volume, then, as a generalization, the n-sphere encloses an (n + 1)-volume. But the next thing we have to think about is whether or not, for any "n-volume", I need to have an enclosing (n - 1)-space. Well, it is largely a matter of definition; as a geometric object, the (n + 1)-volume enclosed by the n-sphere may or may not include the n-sphere. If it does, one says that the (n + 1)-ball is closed. Otherwise, one says that the (n + 1)-ball is open. Intuition says exactly this: a set is closed iff it includes its boundary, and open otherwise (nota bene: this is not the topologist's definition). So the boundary of an n-ball [math]D^n[/math] is precisely the (n - 1)-sphere that encloses it. A concise notation might be [math]\partial D^n = S^{n-1}[/math], where [math]\partial[/math] denotes the boundary.
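The "one number suffices" claim for [math]S^1[/math] is easy to see concretely: a single angle parametrizes the whole circle, and every parametrized point lands exactly on the boundary of the disk. A quick Python sketch (my own illustration, not from the thread):

```python
import math

def circle_point(t):
    """Map one real number t to a point on the unit circle S^1."""
    return (math.cos(t), math.sin(t))

# One parameter reaches points all round S^1, and each such point
# satisfies x^2 + y^2 = 1, i.e. it lies on the boundary of the 2-ball D^2.
for k in range(100):
    x, y = circle_point(2 * math.pi * k / 100)
    assert abs(x * x + y * y - 1) < 1e-12

# A point strictly inside the open 2-ball has x^2 + y^2 < 1:
assert 0.5 ** 2 + 0.5 ** 2 < 1
```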
-
I agree - "observers" don't come into it. Try this. You will often see the uncertainty principle explained from what has become known as the "realist" perspective: namely, that if you want to look at an electron, say, then you have to bombard it with a photon, which changes its energy, position, etc. That's fine as an illustration, but that's all it is. So, it was proposed in, oh I dunno, 1920-something by de Broglie that elementary particles like, say, the electron had "wave-like" properties. And someone else (M. Born?) found that the square of the wavefunction is the probability of finding our electron at any place at any time; it's called its "probability density". Now the wavefunction is a measure of the electron's energy - different energy, different wavefunction. (Technically it's an energy eigenfunction.) It therefore follows that if you know the energy, you know the wavefunction, and if you know that then, by taking the square, you have a probability density plot. But that's all you have! You don't know precisely where it is. Let's now say you do know where it is. What does your probability density plot look like now? It's a flat line with only a single P = 1 peak, the location. The only way we know of to get a plot like that from a "wave" is to take a whole lot of different waves, each representing a different energy, such that "peaks and troughs" destructively interfere except in one place, where the interference is constructive, generating the single peak representing exact location. But then you don't know which wave, which energy eigenvalue, "belongs" to the electron. So the uncertainty principle doesn't say "we can't find out experimentally", or in other words "observers collapse the wavefunction"; it says these two properties - energy and location - cannot, as a matter of principle, both be known simultaneously to the same level of precision.
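The "many waves make a peak" picture can be checked numerically: superpose waves of many different wavenumbers and you get near-total cancellation everywhere except at one spot. A toy Python sketch (plain cosines only; this is a caricature, not quantum mechanics):

```python
import math

def packet(x, n_waves=200):
    """Superpose n_waves cosines of different wavenumbers at position x."""
    return sum(math.cos(k * x) for k in range(1, n_waves + 1)) / n_waves

# Constructive interference at x = 0: every cosine contributes +1.
assert abs(packet(0.0) - 1.0) < 1e-12

# Away from x = 0 the many waves destructively interfere; the sum is tiny.
assert abs(packet(1.0)) < 0.05
assert abs(packet(2.5)) < 0.05
```

The price of that sharp peak is exactly the point of the post: the packet is built from 200 different wavenumbers, so no single "energy" belongs to it.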
-
This is a shameless copy/paste of a thread I started on another forum, where I didn't get too much help. See if you guys can do better: Red Alert - I am NOT a physicist. Start paste. Dashed if I see what's going on here! The mathematics is not especially exotic, but I cannot get the full picture. As I am working from a mathematics, not a physics, text, I will lay it out roughly as I find it. So. We start, it seems, with a vector space [math]\mathcal{A}[/math] of 1-forms [math]A[/math] called "potentials". Is it not the case that the existence of a potential implies the existence of a physical field? I say "physical field" as I am having some trouble relating this to the abstract math definition - a commutative ring in which every nonzero element has a multiplicative inverse, say. Anyhoo, I am invited to consider the set of all linear automorphisms [math]\text{Aut}(\mathcal{A}): \mathcal{A \to A}[/math]. It is easy enough to see this is a group under the usual axioms, so set [math]\text{Aut}\mathcal{A} \equiv G \subseteq GL(\mathcal{A})[/math], which is evidently a (matrix) Lie group thereby. This is apparently called the gauge (transformation) group. Now for some [math]g \in G[/math], define the [math]g[/math]-orbit of some [math]A \in \mathcal{A}[/math] to be all [math]A',\,\,A''[/math] that can be "[math]g[/math]-reached" from [math]A,\,\,A'[/math], respectively; in other words, the sequence [math]g(A),\,\,g(g(A)),\,\,g(g(g(A)))[/math] is defined. Call this orbit [math]A^g[/math], and note, from the group law, that any [math]A \in \mathcal{A}[/math] occupies at least one, and at most one, orbit. Thus we have the partition [math]\mathcal{A}/G[/math], whose elements are the orbits [math]A^g[/math] (all [math]A[/math] in the same orbit being identified). Call this "gauge equivalence". Now it seems I must consider the orbit bundle [math]\mathcal{A}(G, \mathcal{A}/G)[/math]. Here I start to unravel slightly.
By the definition of a bundle, I will require that [math]\mathcal{A}[/math] is the total manifold; no sweat, any vector space (within reason) is a manifold. I will also require that [math]\mathcal{A}/G[/math] is the "base manifold". Umm. [math]\mathcal{A},\,\, G[/math] are manifolds (they are - recall that [math]G[/math] is a Lie group); does this imply the quotient is likewise? [math]G[/math] is the structure group for the total manifold, btw. I am now invited to think of the orbit bundle as a principal bundle, meaning the fibres [math]A^g \simeq G[/math], the structure group. Will it suffice to note that this congruence is induced by the fact that each orbit [math]A^g[/math] is uniquely determined by [math]g \in G[/math]? Anyway, it seems that, under this circumstance, I may call the principal orbit bundle the bundle of Yang-Mills connection 1-forms on the principal bundle [math]P(G,M)[/math], where I suppose I am now to assume that the base manifold [math]M[/math] is Minkowski spacetime, and that the structure group is again a Lie group (same one? Dunno)?? I'm sorry, but this is confusing me. Now I want to ask if the connection bundle is trivial, i.e. admits of global sections, but this is already far too long a post, so I will leave it. Suffice it to say that a chap called Gribov pops in somewhere around here. Any other take on this would be most welcome - but keep it simple enough for a simpleton!! End paste
-
Lost me there! f is a function on, say, X, which, evaluated at some x in X, yields the expression f(x). But f(x) is not a function; it's the value of f at x, an element of the codomain of the function f. Likewise sin(x) is the evaluation of the function sine at some x in X, and so on. So what do you mean by ".....not the actual function"?
-
Thanks for that. But the first two on your list can be expressed as a Taylor series (I'm not too sure about the others). These (the Taylors, that is) are polynomials, surely?
-
So, here's something that's puzzled me off and on for while. Suppose [math]1x+2x^2+3x^3[/math] is a polynomial of degree 3. What is [math]0x+1x^2+2x^3 = x^2+2x^3[/math]? Is this a polynomial of degree 3? Then how about [math]0x^2+1x^3= x^3[/math]? What would we call that? Is it still technically a polynomial of degree 3? Or would one better argue backwards: any expression can be written as a polynomial of some degree? If so (and I'm not suggesting it is so) isn't it rather the case that some polynomial of some degree is a generalization of any expression?
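One convention settles the question above: the degree is the index of the highest nonzero coefficient, whatever zero coefficients appear elsewhere. A small Python sketch of that convention (the helper name degree is my own):

```python
def degree(coeffs):
    """Degree of a polynomial given as [c0, c1, c2, ...], where c_k is
    the coefficient of x^k: the index of the last nonzero coefficient."""
    for k in range(len(coeffs) - 1, -1, -1):
        if coeffs[k] != 0:
            return k
    return None  # the zero polynomial; its degree is left undefined here

assert degree([0, 1, 2, 3]) == 3   # 1x + 2x^2 + 3x^3
assert degree([0, 0, 1, 2]) == 3   # x^2 + 2x^3: still degree 3
assert degree([0, 0, 0, 1]) == 3   # x^3 alone: also degree 3
```

On this convention all three expressions in the post are polynomials of degree 3, since only the leading (highest nonzero) term counts.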
-
Cantor's diagonalization argument showed, oh ages ago, that this is not true. Your argument fails, as you are assuming n is in N. As N is, by definition, infinite and countable, this is circular.

Me too! I'm positive, in fact! Let's see: each n in N has a decimal expansion as n = n.0. Yay!

1/7 is infinite? Right.

Do you know of one that doesn't involve a countable set? Aren't all countable sets isomorphic to N?

No it's not! See that 3 on the "end"? That tells you it halts at some point. Maybe you mean 3333....?
-
Ah, OK, we are at cross purposes, it seems. You are talking about a plane as a two-dimensional surface embedded in (or as a slice of) a 3-space. In which case I agree with you. I was talking about the 2-plane as a "free" construction, i.e. with no reference to an embedding space, in which case any point on the plane is referenced by two numbers only.
-
So your "board" is two dimensional, right? Are your "points" on the board, or not? I'm confused. Ugh! This makes no sense to me. Even worse. Sorry, but what on earth are you talking about?
-
Huh? Lost me there.
-
Are you sure? I can define a plane, or rather the plane, using only 2 coordinates. Any plane defined with more than 2 coordinates is what? A hyperplane, maybe? Dunno. But I do know that, as commonly understood, a plane is a (flat) two-dimensional surface, wherein any point on the plane can be referenced by two (real or complex) numbers from the coordinate set. PS, by edit: There are other, non-planar, surfaces where any point can also be referenced by two and only two numbers (this harks back to another thread). Anyone?
-
Well, straight away you have a semantic problem. Axioms, by their very nature, are not "derived" from anything. And what does 2(2) signify? Huh? Once again, you misunderstand the meaning of the word "axiom" - axioms are, by definition, not derivable from anything else, i.e. stand-alone. No, the Planck constant is just that - a constant that makes certain equations in science fit the data. This is almost the opposite of being an axiom. I'm guessing that some purists might even call it a fudge (I read that some did in the early 1900s).
-
So, I have amply demonstrated my dimness more than once on this forum. Let me compound it..... Just for the laugh, I tried to get into the ChatRoom - nothing, a blank window. With a bit of fiddling I got some gross text and a lot of unwelcome computery-type stuff, no "chats", and no "buttons" like send, view or delete or whatever. What am I doing wrong?
-
Now that's unkind. You chose to ignore my Pauline conversion, and my apology to you and Dave. Ah well, maybe I deserved it.
-
Yes, thank you (I had more or less come to that conclusion myself), but for one small point. The identity operator [math]I[/math] satisfies [math]I(v) = v[/math] for all [math]v \in V[/math]. Confusingly, this is not the additive identity of the vector space of all linear maps [math]V \rightarrow W[/math]. That identity is, as you rightly showed, and as HoI originally claimed, the zero operator [math]0_{L(V,W)}[/math], the zero vector in L(V,W), which satisfies [math]0_{L(V,W)}(v) = 0_W[/math] for all [math]v \in V[/math]. My slight niggle is that you did call this the identity function, which it isn't. (This result is no more than we should expect, as the vector space operation is addition.) Apologies to both of you.
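The distinction is easy to see with matrices standing in for linear maps. A throwaway Python sketch (2×2 matrices as nested lists; nothing here is standard library beyond lists):

```python
def apply(M, v):
    """Apply a 2x2 matrix M (nested lists) to a 2-vector v."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def add(M, N):
    """Add two 2x2 matrices entrywise (the vector space operation)."""
    return [[M[i][j] + N[i][j] for j in range(2)] for i in range(2)]

I = [[1, 0], [0, 1]]       # identity operator: I(v) = v
Z = [[0, 0], [0, 0]]       # zero operator: the additive identity of L(V, W)
L = [[2, 3], [5, 7]]       # an arbitrary linear map
v = [4, -1]

assert apply(I, v) == v        # I fixes every vector...
assert add(L, Z) == L          # ...but Z is the identity for addition
assert apply(Z, v) == [0, 0]   # Z sends every v to the zero vector in W
assert add(L, I) != L          # I is NOT the additive identity
```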
-
Now I am confused. Every vector space admits of an identity, right? Let [math]L(V, W)[/math] be the space of all linear maps [math]V \rightarrow W[/math], with [math]I_L[/math] defined by: for all [math]L_i \in L(V,W),\,\, L_i + I_L = I_L + L_i = L_i[/math]. This is our identity on L(V,W), right? Suppose [math]V \cap W = \emptyset[/math]. But [math]I_L \in L(V,W)[/math], so, as you say, for all [math]L_i \in L(V,W), L_i: V \rightarrow W[/math], including [math]I_L[/math]. So what is the action of [math]I_L[/math] on [math]V[/math]? I haven't a clue; does anyone? Am I being dumb here? (Tactful answers only, please!)
-
I wonder if there's not a slight slip-o'-the-tongue here. For: let L be the set of all linear transformations V --> W. Then L will be a vector space (we know that it is) if, among other things, for each [math]L_m \in L[/math] there is some [math]L_n \in L[/math] s.t. [math]L_m + L_n = 0_L[/math]; one says that [math]0_L[/math] is the identity on L, i.e. the identity operator. But the identity operator [math]0_L[/math] sends each [math]v \in V[/math] to itself, and not to the zero vector in W, surely? Am I mad?