Everything posted by matt grime

  1. It would depend on the function. sqrt(x) on the non-negative reals (i.e. including zero) is not differentiable; on the strictly positive reals it is differentiable at every point. You simply show that, for any given point of the domain, the function is differentiable there (a short limit computation is sketched after this list). Unless you have a specific example in mind, there is nothing more that anyone can say to you. alext87's post is absolutely wrong.
  2. Define "random". Why is a random set finite, and from what sample space are they "randomly" choosing these sets? ("r-permutation" is also not something I would consider common terminology.) "Since it [the set] can vary randomly, let n denote the number of elements in the set" is a very strange sentence indeed.
  3. The leading term in ANY Taylor series can be, and usually is, defined to be f(0); it is NOT f(0)x^0/0!, which would lead to the issue of defining 0^0 directly. It is ontologically unnecessary to define 0^0 in order to define either Taylor series or the exponential.
  4. But the definition of e^x in that form adopts the convention that 0^0=1; it is just a convention that is useful there, and there is no universally agreed convention. Jeez, you make even the simplest things far too complicated. Note that 0^0 is usually defined to be 1 since it is an empty product, and since it is the cardinality of the set of functions from the empty set to itself, agreeing with m^n being the cardinality of the set of functions from {1,..,n} to {1,..,m}.
  5. Here is a simple reason why 0^0 is not defined ANALYTICALLY. Consider the function x^0 for all x not zero. This is identically 1, hence 0 is a removable singularity and I may define f(x)=x^0=1 for all x as a perfectly good continuous function. Now consider the function g(x) = 0^x for all strictly positive x. This is identically 0, so g has a "removable singularity" too, so we define g(0)=0 and get a perfectly good continuous function. Hence there is no "best choice" for 0^0 in analytic terms (the two limits are written out after this list). There are good combinatorial reasons for declaring 0^0 to be 1, and we, as a convention, often define empty products and unions in such fashion, but it is just a convention. Thus I doubt you can actually prove 0^0=1 on its own without more of a context.
  6. The reason you cite as "proof" is exactly why it is convenient and universally accepted that 0! is DEFINED to be 1. If it weren't, then we would have to make several exceptional cases for things like permutations. n! is the number of ways of arranging n objects, or n*(n-1)*...*1, etc. None of those descriptions defines 0!, and I need not define 0! to give any of the things above. Indeed I can define your P(n,r) without any reference to 0! (a formula is written out after this list); however, it is a convenience to extend the factorial to include 0. Your argument a priori assumes 0! "exists" and satisfies those relations. How do you know that?
  7. 0!=1 and 0^0=1 are merely conventions. The first is universal; the second isn't.
  8. Oooh, Johnny gets to learn about dyadic expansions; christ, this is going to be tedious. Still, it comes as no shock to know he thinks real numbers and decimal expansions are the same thing. Question for the credulous and stupid: why is it hard to believe that a small collection of real numbers have exactly two decimal representations, yet you're perfectly happy to accept that there are an INFINITE number of representations of the form a/b for EVERY rational number? The mind boggles, it really does. The discerning reader may have realized I've just snapped in the "wtf is Johnny5 prattling on about now" sense.
  9. He means that he really doesn't understand what the buggery it is you're doing with this, apart from grasping the wrong end of the stick and making really hard (and public) work out of really very trivial things.
  10. Johnny, try doing this for numbers in the RSA range, where (probably a speeded-up version of) Euclid's algorithm is useful. Or you could, for once, attempt to explain what it is you're hoping to achieve.
  11. Sorry for a bit of necromancy, but this post has an obvious and useful answer. The answer is that the shorthand you're using to get your answer disguises the real maths. What you're actually doing is saying: dy/dx = x/y, therefore y dy/dx = x, hence if we integrate both sides with respect to x we retain equality (up to adding a constant). You do not actually split it as a fraction: int y(dy/dx)dx = int x dx. (Also, your definite integrals magically become indefinite, by the way.) Now we apply the notion that integration is anti-differentiation, since we know that y^2 differentiates to 2y dy/dx (the chain of steps is written out after this list).
  12. For pity's sake, have you still not found out what the Euclidean algorithm is? Here, from your favourite Wolfram: http://mathworld.wolfram.com/EuclideanAlgorithm.html or how about the search results from Google for "euclidean algorithm"? http://www.google.co.uk/search?hl=en&q=euclidean+algorithm&btnG=Google+Search&meta= Do some thinking on your own!
  13. "In order to answer the question, one must first know the definition of greatest common divisor (also called greatest common factor)." No, one must not. I can get a computer to do Euclid's algorithm, and it does not know what the hcf/gcd is. Knowing what something "is" is definitely good, but is absolutely of no interest in actually applying a simple algorithm. What were you hoping to achieve with that post?
  14. Heh, the OP is probably talking about the Hausdorff distance between finite sets of points, at which point it makes sense to talk of mins and maxes (the definition is written out after this list).
  15. Standard means with respect to the standard basis (1,0) and (0,1).
  16. Euclid's algorithm doesn't require you to know the prime factors of either of the numbers, and that is its beauty. Where are you going with this?
  17. Yes, 4 vectors in 3 dimensions must be linearly dependent, but once more I am completely mystified as to what your point is.
  18. Johnny, the naturals are not a field, so they cannot form the underlying scalars of any vector space. I haven't checked what the vectors are, but I don't need to, since I can tell that the question is asking this: suppose that u, v, w, z are 4 vectors in ANY space, for now. Suppose further that each pair of vectors, {u,v}, {u,w}, etc., is an independent set. Is the set {u,v,w,z} NECESSARILY independent? The answer is no, as the above example shows: we almost certainly have 4 vectors that are pairwise linearly independent, but since they lie in a 3-d space, they must collectively be dependent (a concrete instance is given after this list). It is not a good idea to check dependence by attempting to write one specific vector as a combination of the other three. Let us show this by taking a deliberately stupid example: suppose we have the vectors a, 2a, 3a and b, where a and b are linearly independent (in some space); then although we know these are obviously dependent, we cannot write b as a combination of the vectors a, 2a, 3a.
  19. Domain of discourse? Bugger, you're not a computer scientist, are you? If you understood what the highest common factor was (you posted something about it but seemed to miss the key things), then it would be obvious that it makes sense to talk about the highest common factor of any pair of integers, that Euclid's algorithm can be used to calculate it, and that the hcf of x and y is the same as the hcf of -x and y, of x and -y, and of -x and -y. I have no idea why you want to discuss the Euclidean algorithm at all, to be honest.
  20. http://mathworld.wolfram.com/EuclideanAlgorithm.html uses Z....
  21. Mess up the latex? Who? Anyway, I am perfectly sure that the Euclidean algorithm involves negative integers, and I really don't understand why it's important: it's just showing that the integers are a PID; after all, if x and y are negative integers, then (x,y) = (-x,-y). Right, I think my patience is up on this at the minute (vicious toothache), and I don't really see why you'd invoke the "associativity of multiplication" (completely bleeding obvious).
  22. Why do you think that Euclid's algorithm must be something that Euclid defined? Just because it has his name attached means nothing: Pythagoras didn't invent the Pythagorean triples, and Galois would have no idea about Galois cohomology. If you insist on reading the Elements as a modern textbook in number theory, or Newton as a learning tool for vector spaces, you're not going to get very far.
  23. In some order: 1 is not a prime; the modern definition (and I know you like your old ones) is that a prime is not allowed to be a unit (i.e. multiplicatively invertible). Euclid's algorithm gives a constructive way to write the highest common factor of two integers a and b as an integral combination of them; that is to say, if d = hcf(a,b), then there are integers s and t such that as + bt = d (a sketch of the computation follows this list). A map between two "sets with structure" (e.g. a binary operation or two) is an isomorphism if it is a bijection on the underlying sets and it preserves the structure. So, for example, two groups H and G are isomorphic if there is a map f: H --> G such that f(x*y) = f(x)*f(y), f(x^{-1}) = f(x)^{-1}, and f is a bijection on the underlying sets. I use * to denote the generic composition of elements in G and H. So isos send inverses to inverses and identities to identities. Thus if f is an iso between two fields it must preserve both the + and the * of the field, and it must send 0 to 0 and 1 to 1, and so on. In particular, f(2) = f(1+1) = f(1) + f(1) = 2f(1) = 2, since f(1) = 1.
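The sketches below are my own illustrations, not part of the original posts. First, for post 1, the limit computation that shows sqrt(x) is differentiable at every a > 0 but has no (finite) derivative at 0:

    f'(a) = \lim_{h \to 0} \frac{\sqrt{a+h} - \sqrt{a}}{h}
          = \lim_{h \to 0} \frac{1}{\sqrt{a+h} + \sqrt{a}}
          = \frac{1}{2\sqrt{a}} \quad (a > 0),

    \frac{\sqrt{0+h} - \sqrt{0}}{h} = \frac{1}{\sqrt{h}} \to \infty \quad (h \to 0^+).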
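For post 5, the two limits that make the analytic case against a preferred value of 0^0:

    \lim_{x \to 0} x^0 = \lim_{x \to 0} 1 = 1, \qquad \lim_{x \to 0^+} 0^x = \lim_{x \to 0^+} 0 = 0,

so the two natural continuity arguments disagree.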
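For post 6, one way to write P(n,r), the number of ordered selections of r objects from n, without mentioning 0! at all; the familiar n!/(n-r)! form only needs the convention 0! = 1 when r = n:

    P(n,r) = n(n-1)(n-2)\cdots(n-r+1) = \prod_{k=0}^{r-1} (n-k).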
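For post 11, the chain of steps with the shorthand removed (C absorbs the constants from both sides):

    \frac{dy}{dx} = \frac{x}{y}
    \;\Rightarrow\; y\,\frac{dy}{dx} = x
    \;\Rightarrow\; \int y\,\frac{dy}{dx}\,dx = \int x\,dx
    \;\Rightarrow\; \tfrac{1}{2}y^2 = \tfrac{1}{2}x^2 + C,

using that \frac{d}{dx}\left(\tfrac{1}{2}y^2\right) = y\,\frac{dy}{dx}.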
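For post 13, a minimal sketch of Euclid's algorithm in Python; the function name is my own, and the point is that the code only ever takes remainders and never consults a definition of the gcd:

    def euclid_gcd(a, b):
        # Repeatedly replace (a, b) by (b, a mod b); when b hits 0, a is the hcf.
        a, b = abs(a), abs(b)
        while b != 0:
            a, b = b, a % b
        return a

    # For example, euclid_gcd(252, 198) returns 18.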
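For post 14, the definition being alluded to: for finite point sets A and B in a metric space with distance d,

    d_H(A,B) = \max\left\{ \max_{a \in A} \min_{b \in B} d(a,b),\; \max_{b \in B} \min_{a \in A} d(a,b) \right\},

and the maxes and mins are attained precisely because the sets are finite (for general sets one needs sup and inf).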
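For post 18, a concrete instance: in R^3 take

    u = (1,0,0), \quad v = (0,1,0), \quad w = (0,0,1), \quad z = (1,1,1).

Every pair is linearly independent, yet z = u + v + w, so the four vectors are collectively dependent. And in the a, 2a, 3a, b example, b can never equal \alpha a + \beta(2a) + \gamma(3a), because the right-hand side is always a multiple of a.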
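For post 23, a sketch of the extended version of Euclid's algorithm in Python, which produces the integers s and t with as + bt = hcf(a,b); again the function name is my own, and non-negative inputs are assumed for simplicity:

    def extended_gcd(a, b):
        # Invariant: old_r == a*old_s + b*old_t and r == a*s + b*t.
        old_r, r = a, b
        old_s, s = 1, 0
        old_t, t = 0, 1
        while r != 0:
            q = old_r // r
            old_r, r = r, old_r - q * r
            old_s, s = s, old_s - q * s
            old_t, t = t, old_t - q * t
        return old_r, old_s, old_t  # (hcf, s, t)

    # For example, extended_gcd(240, 46) returns (2, -9, 47), and 240*(-9) + 46*47 == 2.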