

Posted

agghh i hate this kind of maths :P mainly because i am really bad at it

 

"this kind" being of course the proving of limits using deltas and epsilons :rolleyes:

 

can anybody help me or give some advice with this stuff?

 

Cheers

 

Sarah

Posted

Do you have a specific example in mind? I ask because solving such a problem might be more helpful to you, as opposed to just working out a random epsilon-delta proof.

Posted

umm ok just like proving the limit theorems for example. ie. lim (f(x)g(x)) = LM as x->a

if lim f(x) = L as x->a, and lim g(x) = M as x->a

 

those sorts of things....

 

btw i like your quote Dapthar :)

Posted
  Quote
umm ok just like proving the limit theorems for example. ie. lim (f(x)g(x)) = LM as x->a

if lim f(x) = L as x->a, and lim g(x) = M as x->a
Sure. Apparently the LaTeX module is still down, so I'll just have to write in plain text.

 

Assume that as x->a, lim f(x) = L and lim g(x) = M. We wish to show the following:

 

As x-> a, lim (f(x)*g(x)) = LM.

 

First, we need to translate the above condition into epsilons and deltas. (I use e for epsilon, and d for delta).

 

Thus, as x->a becomes |x-a| < d, and

 

lim (f(x)*g(x)) = LM becomes |f(x)*g(x) - LM| < e.

 

Now, given any e > 0, we want to find a d > 0 such that

 

(1) |x-a| < d implies |f(x)*g(x) - LM| < e

 

Keep the above in mind, as it is our goal.

 

Why? If we can do this, we have proven that, given any error e, we can find an x close enough to a such that we can make the difference between f(x)*g(x) and LM smaller than e. This means that 'when x equals a', f(a)*g(a) = LM.
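To keep the target visible in one sentence (nothing new here, just (1) restated): the whole theorem we are proving is that for every e > 0 there exists a d > 0 such that |x-a| < d implies |f(x)*g(x) - LM| < e. Everything below is about actually producing such a d from the two d's we are handed for f and g.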

 

We also know that as x->a, lim f(x) = L and lim g(x) = M. Let's translate this to e's and d's.

 

First, let's deal with as x->a, lim f(x) = L.

 

This translates to, for any e1 > 0, there exists a d1 > 0 such that:

 

(2) |x-a| < d1 implies |f(x) - L| < e1

 

Now, let's deal with as x->a, lim g(x) = M.

 

Also, for any e2 > 0, there exists a d2 > 0 such that:

 

(3) |x-a| < d2 implies |g(x) - M| < e2

 

Now, we will use these in a little bit, so remember them.

 

We want to make |f(x)*g(x) - LM| less than e, as (1) requires, so we're going to use a standard mathematical trick (as one of my professors used to say, "When you're an undergraduate they're tricks, but when you're a graduate student, they're techniques"): adding and subtracting a quantity.

 

So, we get that

 

|f(x)*g(x) - LM| = |f(x)*g(x) - L*g(x) + L*g(x) - LM|

 

Rearranging this a bit, and factoring, we get

 

(4) |f(x)*g(x) - L*g(x) + L*g(x) - LM| = |g(x)*(f(x) - L) + L*(g(x)-M)|

 

Now, recall (2) and (3). They tell us something about |f(x)-L| and |g(x)-M|, specifically, they tell us 'epsilon and delta information' about |f(x)-L| and |g(x)-M|. So, we'd like to use these to learn 'epsilon and delta information' about (4). Thus, we make use of the triangle inequality.

 

Remember that the triangle inequality says that, for all a and b:

 

|a + b| =< |a| + |b| ( =< is 'less than or equal to')

 

So, let's use the triangle inequality to simplify (4). Here our a = g(x)*(f(x) - L), and our b = L*(g(x)-M). So, we get the following:

 

|g(x)*(f(x) - L) + L*(g(x)-M)| =< |g(x)*(f(x) - L)| + |L*(g(x)-M)|

 

Simplifying, we get:

 

|g(x)*(f(x) - L)| + |L*(g(x)-M)| = |g(x)||f(x) - L| + |L||g(x)-M|
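Collecting the last few displays into one line (nothing new here, just the work above chained together):

|f(x)*g(x) - LM| =< |g(x)||f(x) - L| + |L||g(x)-M|

So the job from here is to make each term on the right small: (2) controls |f(x) - L|, (3) controls |g(x)-M|, and we will also need to say something about the stray |g(x)| factor.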

 

We can take two approaches from here, the more intuitive, longer one, or the less intuitive, shorter approach. I take the former.

 

We will now use (2) and (3). From these, we know that we can pick any positive e1 and e2, and there exists a positive d1 and d2 such that

 

|x-a| < d1 implies |f(x) - L| < e1

 

|x-a| < d2 implies |g(x) - M| < e2

 

Our first guess might be to pick e1 = e2 = e, so let's do that, and call the associated deltas d1 and d2. Note that for both conditions to be true, |x-a| has to be smaller than both d1 and d2, so let d3 = min(d1, d2).

 

But wait, we didn't say anything about |g(x)| yet, and if we don't, |g(x)| will still be in our final answer, and we definitely don't want that.

 

Intuitively, we want |g(x)| to 'behave like' |L|. We'll see why this is helpful in a minute.

 

Recall that:

 

|x-a| < d2 implies |g(x) - M| < e2

 

Where we pick e2, and get a d2. So, let's pick e2 = 1. Then we get a d2 such that:

 

|x-a| < d2 implies |g(x) - M| < 1

 

Let's call this d2 by the name d4, and let d5=min(d4,d3) (this ensures that all the conditions we've set up so far will hold when |x-a| < d5).

 

Recall the reverse triangle inequality. For all a1 and b1:

 

|a1| - |b1| =< |a1 - b1|

 

Let's apply this with a1 = g(x) and b1 = M.

 

Thus, we get:

 

|g(x)| - |M| =< |g(x) - M| < 1

 

Therefore,

 

|g(x)| - |M| < 1

 

Rearranging a bit, we get

 

|g(x)| < |M| + 1.
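To summarize this little detour in one line: we now have a d4 > 0 such that

|x-a| < d4 implies |g(x)| < |M| + 1

so near a, the stray |g(x)| factor can be replaced by the constant |M| + 1.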

 

 

Now, it's time to put it all together. Thus:

 

|x-a| < d5 implies that

 

 

|f(x)*g(x) - LM| =< |g(x)||f(x) - L| + |L||g(x)-M| < (|M| + 1) * e1 + |L|*e2 = (|M| + 1) * e + |L|*e

 

Thus,

 

|x-a| < d5 implies that

 

 

|f(x)*g(x) - LM|< e*(|M| + 1 + |L|)

 

 

Now, this is almost what we wanted; however, we originally set out to prove that we could find a d such that

 

|x-a| < d implies that

 

 

|f(x)*g(x) - LM|< e

 

However, this isn't a big problem. We can see that instead of picking e1 = e2 = e, if we had just picked e1 = e/(2*(|M|+1)) and e2 = e/(2*|L|), we would be given a d6 and d7 such that:

 

 

|x - a| < min(d6, d7) implies that (|M| + 1) * e1 + |L|*e2 = (|M| + 1) * e /(2*(|M|+1)) + |L|*e /(2*|L|) = e/2 + e/2 = e.

 

So, we just 'go back' and make this change. Now, if we set d = min(d4, d6, d7), we get that:

 

|x-a| < d implies that |f(x)*g(x) - LM|< e

 

Which is what we originally wanted.
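For reference, here is the whole argument compressed into a single recipe. Given e > 0: use (2) with e1 = e/(2*(|M|+1)) to get d6, use (3) with e2 = e/(2*|L|) to get d7, use (3) once more with e2 = 1 to get d4, and set d = min(d4, d6, d7). Then |x-a| < d gives

|f(x)*g(x) - LM| =< |g(x)||f(x) - L| + |L||g(x)-M| < (|M|+1)*e1 + |L|*e2 = e/2 + e/2 = e

(One small caveat not spelled out above: if L = 0, the expression e/(2*|L|) doesn't make sense, but in that case the |L||g(x)-M| term is just 0, so any choice of e2 works and the bound is e/2 + 0 < e.)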

 

 

If any part of this explanation was unclear, feel free to ask.

 

  Sarahisme said:
btw i like your quote Dapthar :)
Thanks.
Posted

Latex module?

 

"Now, let's deal with as x->a, lim g(x) = M.

 

Also, for any e2 > 0, there exists a d2 > 0 such that:

 

(3) |x-a| < d1 implies |g(x) - M| < e2"

 

 

is that meant to be |x-a| < d2?

 

because down here you have...

|x-a| < d1 implies |f(x) - L| < e1

 

|x-a| < d2 implies |g(x) - M| < e2

 

you're probably right of course :P i'm just, yeah, not sure if i know what's going on there? :)

 

and...

Our first guess might be to pick e1 = e1 = e, so let's do tha

 

is that meant to be e1=e2=e?

 

lol it's all so complicated, but i think i am getting there.....

i tried this method with another proof, but it involves continuity so i got a bit stuck.

"Use the formal definition of limit twice to prove that if f is continuous at L and if lim g(x) = L, x->c, then lim f(g(x)) = f(L), x->c."

Posted
  Sarahisme said:
Latex model?? lol
LaTeX is the language used to display mathematical symbols in everything from bulletin boards to research papers. However, I guess it sounds a bit odd when mentioned out of context.

 

  Quote
"Now' date=' let's deal with as x->a, lim g(x) = M.

 

Also, for any e2 > 0, there exists a d2 > 0 such that:

 

(3) |x-a| < d1 implies |g(x) - M| < e2"

 

 

is that meant to be |x-a| < d2?
Yup, you're right, it should be d2. Mistake on my part.

 

  Quote

Our first guess might be to pick e1 = e1 = e, so let's do tha

 

is that meant to be e1=e2=e?
Right again, it should be e1=e2=e. I'll edit my earlier post to correct these errors.

 

  Quote
i tried this method with another proof, but it involves continuity so i got a bit stuck.

"Use the formal definition of limit twice to prove that if f is continuous at L and if lim g(x) = L, x->c, then lim f(g(x)) = f(L), x->c."

You just need to translate 'f is continuous at L' into an epsilon-delta condition.

 

Whenever anyone says that a function h(x) is continuous at a point b, it is the exact same thing as saying that as x->b, lim h(x) = h(b), i.e., the limit is what you expect it to be.
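A rough sketch of how the two definitions chain together for that problem (just an outline; the details are for you to fill in): continuity of f at L says that for any e > 0 there is a d1 > 0 such that |y - L| < d1 implies |f(y) - f(L)| < e. Now feed that d1 into the limit definition for g: since lim g(x) = L as x->c, there is a d > 0 such that |x - c| < d implies |g(x) - L| < d1. Putting the two together, |x - c| < d implies |f(g(x)) - f(L)| < e, which is exactly lim f(g(x)) = f(L) as x->c.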

Posted

Dapthar, your discussion of epsilon/delta proofs was extremely good. I did not have time to inspect it carefully, but when I want to, it is something that will hold my attention. I could see your careful use of logic. ;)

 

Regards

Posted

I found that it didn't really make sense when I did it, but looking back on it after a year (and after you've had chance to pick up on all of the little hints), it gets a lot easier. I had the same kind of problems - I think more or less everyone does.

Posted

Sarah, there is a diagram which helps in understanding the epsilon/delta definition of 'limit' in any calculus text. Have you seen it?

Posted
  Quote
Dapthar, your discussion of epsilon/delta proofs was extremely good. I did not have time to inspect it carefully, but when I want to, it is something that will hold my attention. I could see your careful use of logic. ;)

Thanks. I try to write proofs that mimic the thought process one goes through when working out the problem. It usually ends up a bit longer than a normal proof, but hopefully it's also a bit clearer.

 

  Sarahisme said:
k thanks i'll look into it a bit more, though this stuff still makes no sense to me :P but i'll try :)
Well, if you have any more questions, feel free to ask.
Posted

lol ok i got another question..... i understand your proof now but i still can't do this question....

 

"Use the relevant formal definition to prove that:

x->1-, lim 1/(x-1) = -infinity"

Posted
  Quote
lol ok i got another question..... i understand your proof now but i still can't do this question....

 

"Use the relevant formal definition to prove that:

x->1-, lim 1/(x-1) = -infinity"

 

Can you state this in words please? I want to have a go at it.

 

"The limit as x approaches one from the left, of one divided by x minus one equals negative infinity" <---- is that right ??

Posted
  Quote
lol ok i got another question..... i understand your proof now but i still can't do this question....

 

"Use the relevant formal definition to prove that:

x->1-, lim 1/(x-1) = -infinity"
As before, you just have to translate the conditions into epsilon-delta statements.

 

After you translate the conditions, you get the following. (Again, I use e and d for epsilon and delta)

 

For all M < 0, there exists a d > 0 such that

 

0 < 1 - x < d implies 1/(x-1) < M (The lack of absolute value symbols is purposeful; since x approaches 1 from the left, we only care about x < 1, i.e., 1 - x > 0.)

 

Since we have a definite function, we can solve for the specific d. Let's work with the 1/(x-1) < M expression.

 

If we are given any M < 0, we want to find a d > 0 such that 0 < 1 - x < d forces 1/(x-1) < M.

 

However,

 

1/(x-1) < M implies that 1/M < x - 1.

 

Multiplying both sides by -1, we get that

 

-1/M > 1 - x, thus if we let d = -1/M, we're done, since each step above is reversible for x < 1.

 

Thus, given any M < 0, there exists a d > 0 such that 0 < 1 - x < d implies 1/(x-1) < M.
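As a sanity check, here is the recipe with actual numbers plugged in: take M = -1000. Then d = -1/M = 1/1000, and any x with 0 < 1 - x < 1/1000, say x = 0.9995, gives 1/(x-1) = -2000 < -1000, as required. Since M was arbitrary, the same recipe pushes 1/(x-1) below any negative bound you name, which is exactly what 'the limit from the left is negative infinity' means.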

Posted
  Quote
As before, you just have to translate the conditions into epsilon-delta statements.

 

After you translate the conditions, you get the following. (Again, I use e and d for epsilon and delta)

 

For all M < 0, there exists a d > 0 such that

 

1 - x < d implies 1/(x-1) < M (The lack of absolute value symbols is purposeful.)

 

Since we have a definite function, we can solve for the specific d. Let's work with the 1/(x-1) < M expression.

 

If we are given any M < 0, we want to find an d such that 1 - x < d.

 

However,

 

1/(x-1) < M implies that 1/M < x - 1.

 

Multiplying both sides by -1, we get that

 

-1/M > 1 - x, thus if we let d = -1/M, we're done.

 

Thus, now given any M < 0, there exists a d > 0 such that 1 - x < d implies 1/(x-1) < M.

 

How does this show that the limit is negative infinity?

 

I don't even see the infinity symbol.

 

Regards

 

PS: Nice work again by the way. I'm going to follow this argument eventually.

Posted
  Quote
How does this show that the limit is negative infinity?

 

I don't even see the infinity symbol.

 

Regards

It's one of the big secrets in Mathematics: formal proofs almost never deal with infinity directly. Note that the proof says "for all M < 0", i.e., I can choose M to be -1, -10, or -1 000 000, and the same recipe for d works every time, thus 'in the limit, we go to negative infinity'. Infinite limits 'basically' follow the same pattern as 'regular' limits, where for any error e we can provide a d such that if |x-a| < d then |f(x) - L| < e, so 'at a, f(x) equals L'; except here, instead of getting within e of a finite L, we show that we can push f(x) below any bound M, i.e., 'we can get arbitrarily close to negative infinity'.

 

  Johnny5 said:
PS: Nice work again by the way. I'm going to follow this argument eventually.
Thanks.
Posted
  Quote
... for any error e, we can provide a d such that if |x-a| < d then |f(x) - L| < e, so 'at a, f(x) equals L'...

 

 

Is this the exact definition of limit that you see in calculus books?

Posted

Pretty much. You might see it written like this:

 

\forall \epsilon > 0 \exists \delta > 0 \text{ such that } | x-a | < \delta \Rightarrow |f(x) - L | < \epsilon

 

(using universal quantifiers). Translated into English, this reads:

 

"For any epsilon > 0, there exists a delta greater than zero such that..." and the rest is the same.

Posted
  Johnny5 said:
Is this the exact definition of limit that you see in calculus books?
dave's right. In addition, the formal definition for:

 

\lim_{x\to a}f(x) = -\infty is

 

\forall L < 0 \exists \delta > 0 such that |x - a| < \delta \implies f(x) < L
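For the one-sided version that Sarah's problem uses (x approaching from the left), the same idea reads, roughly:

\lim_{x\to a^-}f(x) = -\infty means \forall M < 0 \exists \delta > 0 such that a - \delta < x < a \implies f(x) < M

(writing M for the bound, as in the 1/(x-1) proof above), which for a = 1 is exactly the condition used there.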

Posted
  Quote
Pretty much. You might see it written like this:

 

\forall \epsilon > 0 \exists \delta > 0 \text{ such that } | x-a | < \delta \Rightarrow |f(x) - L | < \epsilon

 

(using universal quantifiers). Translated into English, this reads:

 

"For any epsilon > 0, there exists a delta greater than zero such that..." and the rest is the same.[/quote']

 

 

That's it, that's the one. Symbol for symbol. It was good to write "such that" as well, Dave.

 

Ok I have a question about that Dave.

 

In first order logic, there is a difference between writing

 

\forall \epsilon \exists \delta

versus

\exists \delta \forall \epsilon

Can you explain it to me rapidly?

 

I know I am being a pest, but thank you.

Posted

Well, the first one says that no matter what epsilon you choose, you can always find a delta (which is allowed to depend on that epsilon). The second one says that there is one single delta that works for every epsilon at once.
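A tiny example of the difference, away from limits entirely: over the real numbers, \forall x \exists y : y > x is true (given any x, take y = x + 1), while \exists y \forall x : y > x is false (no single y is bigger than every real number). Swapping the quantifiers changes whether the thing you pick is allowed to depend on the other variable.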

 

I've heard a really good analogy of this in my Foundations lectures, but I can't seem to remember it offhand. Something to do with brothers and sisters. Perhaps I'll post it later if I can remember it.

Posted
  Quote
Well, the first one says that no matter what epsilon you choose, you can always find a delta. The second one is saying that for one specific delta, there are a load of epsilons.


 

Yes I know that answer... I was thinking more along the lines of whether or not epsilon is a function of delta.

 

You know, in the one case yes, and in the other no, which links somehow to the meaning of 'function.'

 

I never did understand the definition of 'function.'

Posted

Well, when it comes to deltas and epsilons, we don't think of them as functions so much as one quantity depending on another; for example, convergence of a sequence:

 

\forall \epsilon > 0 \exists N \in \mathbb{N} \text{ such that } | a_n - a | < \epsilon \forall n \geq N

 

In this case, our N will depend on epsilon; often it's written N(\epsilon). I suppose you can consider it as a function if you wanted.
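A concrete instance of that dependence (not from the thread, just as an illustration): take a_n = 1/n, which converges to a = 0. Given \epsilon > 0, choosing N(\epsilon) to be any natural number greater than 1/\epsilon works, since n \geq N gives |a_n - 0| = 1/n \leq 1/N < \epsilon. The smaller the \epsilon, the larger the N you're forced to take, which is exactly the sense in which N depends on \epsilon.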
