TheLivingMartyr
Senior Member
Posts: 45

About TheLivingMartyr
Birthday: 04/12/1995

Profile Information
Location: Nottinghamshire, UK
College Major/Degree: At Sixth Form College
Favorite Area of Science: Particle physics, relativity, quantum field theory
Retained: Lepton

TheLivingMartyr's Achievements
Quark (2/13)
Reputation: 11
-
Oh dammit, on my 5th line of TeX the RHS should be negative, sorry! OK, thanks everyone! I wasn't sure if I'd made a mistake; thanks for clearing that up!
-
So we have three possible antiderivatives for this function, sin(x)cos(x): [math] \frac{sin^2(x)}{2} [/math] [math] \frac{-cos^2(x)}{2} [/math] [math] \frac{-cos(2x)}{4} [/math] and these expressions satisfy the following identities: [math] \frac{1}{2} - \frac{cos^2(x)}{2} = \frac{sin^2(x)}{2} [/math] [math] \frac{-cos(2x)}{4} - \frac{1}{4} = \frac{-cos^2(x)}{2} [/math] [math] \frac{sin^2(x)}{2} - \frac{1}{4} = \frac{-cos(2x)}{4} [/math] Is this something to do with the constant of integration? And if so, how do you know which one to use when calculating a definite integral? Sorry ajb, my reply was an inb4; it just took ages to type.
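A quick check with a definite integral shows why the choice doesn't matter. Evaluating [math] \int_0^{\pi/2} sin(x) cos(x) dx [/math] with each of the three antiderivatives: [math] \left[ \frac{sin^2(x)}{2} \right]_0^{\pi/2} = \frac{1}{2} - 0 = \frac{1}{2} [/math] [math] \left[ \frac{-cos^2(x)}{2} \right]_0^{\pi/2} = 0 - \left( -\frac{1}{2} \right) = \frac{1}{2} [/math] [math] \left[ \frac{-cos(2x)}{4} \right]_0^{\pi/2} = \frac{1}{4} - \left( -\frac{1}{4} \right) = \frac{1}{2} [/math] All three give the same value, because the constant offsets between them cancel when you subtract the antiderivative's values at the two limits, so any of the three works for a definite integral.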
-
If two expressions are exactly equivalent, are their integrals exactly equivalent? I was trying to work out, without using integration by parts (trying to avoid infinite series and all that), [math] \int sin(x) cos(x) dx [/math] So naturally I consulted my double angle formulae and saw that [math] sin(2x) = 2 sin(x) cos(x) [/math] which obviously implies that [math] sin(x) cos(x) = \tfrac{1}{2} sin(2x) [/math]

The integral of the RHS is an easy one, so I just did [math] \frac{1}{2} \int sin(2x) dx = -\tfrac{1}{4} cos(2x) + c [/math] and so assumed that [math] \int sin(x) cos(x) dx = -\tfrac{1}{4} cos(2x) + c [/math]

But then I checked Wikipedia, and a couple of integral calculators for good measure, and they tell me the actual integral is [math] -\tfrac{1}{2} cos^2(x) [/math] and since [math] -\tfrac{1}{4} cos(2x) \neq -\tfrac{1}{2} cos^2(x) [/math] I'm now a bit stumped as to why my integral is wrong.

All of the below are confirmed to be correct, [math] sin(2x) = 2 sin(x) cos(x) [/math] [math] \int sin(2x) dx = -\tfrac{1}{2} cos(2x) + c [/math] by the same sources which told me the integrals were different! For God's sake, you can even go on one of those graph plotters and ask it to plot the integrals of sin(x)cos(x) and (1/2)sin(2x), and it plots the same graph twice! I'm tearing my hair out here; can somebody please tell me if I'm just missing something obvious, or if some of my sources are incorrect?
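The resolution is that the two answers differ only by a constant, which the double angle formula makes explicit: [math] cos(2x) = 2cos^2(x) - 1 [/math] so [math] -\tfrac{1}{4} cos(2x) = -\tfrac{1}{2} cos^2(x) + \tfrac{1}{4} [/math] Both expressions are therefore valid antiderivatives of sin(x)cos(x); the extra 1/4 is simply absorbed into the constant of integration. A numerical graph plotter fixes the constant by starting its area sum at the same point for both inputs, which is why it draws identical curves.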
-
Trying to integrate a function
TheLivingMartyr replied to TheLivingMartyr's topic in Analysis and Calculus
So, when you say that many integrals cannot be expressed in terms of elementary functions, are you suggesting that there are other ways of expressing integrals? Or are many functions just such that they cannot be "expressed" at all, in any way other than as "the integral of another function"? Sorry, calculus just interests me so much! Taylor series seem to come up a lot in expressing these complicated integrals! Hmm, thank you anyway, this will give me lots to mull over
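A classic illustration of a non-elementary integral being "expressed" another way: [math] e^{-x^2} [/math] has no elementary antiderivative, but integrating its Taylor series term by term gives [math] \int_0^x e^{-t^2} dt = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{n!(2n+1)} = x - \frac{x^3}{3} + \frac{x^5}{10} - \cdots [/math] The series converges for every x, so it pins the function down completely even though no finite combination of elementary functions equals it.
-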
Trying to integrate a function
TheLivingMartyr replied to TheLivingMartyr's topic in Analysis and Calculus
Well, the aim of this post was to learn something about integration, and I now realise why this function can't be integrated to give another elementary function: the integrand clearly isn't set up as a product where some substitution would leave you with du as a scaled form of dx. Sorry for the basic terminology, but I understand why it can't be integrated now! Now, although I'm perfectly aware I'm probably getting ahead of myself, what is this "Elliptic Integral" supposed to achieve?
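For what it's worth, an "elliptic integral" doesn't achieve an elementary answer; it is a named, well-studied standard form, such as the incomplete elliptic integral of the first kind, [math] F(\phi, k) = \int_0^{\phi} \frac{d\theta}{\sqrt{1 - k^2 sin^2(\theta)}} [/math] Reducing an integral like this one to elliptic form just means expressing it in terms of these catalogued functions, whose values and properties are tabulated, rather than in terms of elementary functions.
-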
I've been trying to integrate this function by substitution, and it doesn't seem to be getting me to the correct place. I'm not sure I fully understand how to use substitution. [math] y = \int\sqrt{x^3 - 1} dx [/math] I've only ever dealt with substitutions where you end up with [math] du = a dx [/math] where a is a constant, but if I make the substitution [math] u = x^3 - 1 [/math] then I end up with [math] du = 3x^2 dx [/math] And you can't just slap [math] 3x^2 [/math] back into the integrand. Can someone integrate this and tell me what you need to do?
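To see concretely where the substitution stalls: with [math] u = x^3 - 1 [/math] you have [math] x = (u + 1)^{1/3} [/math] so [math] dx = \frac{du}{3x^2} = \frac{du}{3(u + 1)^{2/3}} [/math] and the integral becomes [math] \int \sqrt{x^3 - 1} dx = \int \frac{\sqrt{u}}{3(u + 1)^{2/3}} du [/math] which is no simpler than the original. The leftover x-dependence can't be cancelled away, which is the hallmark of an integrand with no elementary antiderivative; this particular one leads to elliptic integrals.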
-
Hmm, I see what you mean about my first question: you'd be looking at a completely different thing if you were in a position to substitute variables, not a differential equation... I understand what the total derivative does when variables depend on other variables, but, and I'm proposing a totally improvised idea here that I haven't tried to do anything with mathematically yet, why can't you differentiate something with respect to all variables, even when those variables don't rely on each other? (Just a thought; there's probably some topological reason why not.) Something like [math] z = f(w, x, y) [/math] [math] \frac{dz}{dw\,dx\,dy} = ...? [/math] looks pretty stupid, but I don't see why it couldn't work...
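There is a standard object that tracks how f changes as all of its independent variables vary, though it is a sum rather than a single ratio like dz/(dw dx dy): the total differential, [math] dz = \frac{\partial z}{\partial w} dw + \frac{\partial z}{\partial x} dx + \frac{\partial z}{\partial y} dy [/math] Each term records the first-order change in z caused by one variable on its own, and the increments dw, dx, dy stay independent because nothing constrains them to move together.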
-
I've been looking at some partial differential equation solving recently, and things like the total derivative come up quite often. Let's say you can constrain y as a function of x; then [math] \frac{d}{dx} z(x,y) = \frac{\partial z}{\partial x} + \frac{\partial z}{\partial y} \frac{dy}{dx} [/math] Now I understand that this obviously works, it can be shown via the chain rule, but I have two little intuitive problems with it: 1. If you know y as a function of x, why can't you simply substitute in the function of x, and then just have an ordinary differential equation in z(x)? 2. Surely the "Total Derivative" ought to show how the function varies as all variables vary, so why can't you have a total derivative when all the variables are independent (something like "the derivative of z with respect to x and y")?
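As a concrete check of the formula, take [math] z(x, y) = x^2 y [/math] with the constraint [math] y = sin(x) [/math] The total derivative gives [math] \frac{dz}{dx} = \frac{\partial z}{\partial x} + \frac{\partial z}{\partial y} \frac{dy}{dx} = 2xy + x^2 cos(x) = 2x sin(x) + x^2 cos(x) [/math] Substituting first instead, as in question 1, gives [math] z = x^2 sin(x) [/math] and an ordinary derivative, [math] \frac{dz}{dx} = 2x sin(x) + x^2 cos(x) [/math] The two routes agree: substituting first is perfectly valid when you have y(x) explicitly, and the total derivative is the version that works before, or without, doing the substitution.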
-
Integration of polynomials from first principles
TheLivingMartyr replied to TheLivingMartyr's topic in Analysis and Calculus
I've worked out a less messy method to prove that the antiderivative (i.e. following the differentiation process backwards) will yield the area when bounds are applied. It relies on having first proved that dy/dx is the rate of change of y with respect to x, but allow me that assumption. It would also be nice if I could have a diagram to go with this, but try and imagine it XD. Here goes.

Let A be the sum of the signed areas under y = f(x) between a fixed point a on the x-axis and a variable point x; the height of the curve at x is f(x). Now increment x by a small amount dx. We can say that A has consequently increased by a small change dA. We can approximate this dA by a rectangle of area f(x)dx, such that [math] dA \approx f(x)dx [/math] It therefore follows that [math] \frac{dA}{dx} \approx f(x) [/math] It can be observed that as dx decreases, the rectangle f(x)dx becomes closer to the true value of dA. We therefore say that [math] \lim_{dx\to0} \frac{dA}{dx} = f(x) [/math] Since it can be shown that [math] \lim_{dx\to0} \frac{dy}{dx} [/math] expresses the rate of change of y with respect to x, i.e. is the derivative of y with respect to x, this means that dA/dx is the derivative of the area with respect to x. If the derivative of the area is equal to the original function f(x), then it follows that applying the inverse process of differentiation to f(x) yields the equation for the area: [math] \int dA = \int f(x)dx [/math] [math] A = \int f(x)dx [/math] So, for example, following the opposite steps to differentiation for a polynomial gives [math] f(x) = kx^n [/math] [math] f'(x) = nkx^{n-1} [/math] [math] F(x) = \frac{kx^{n+1}}{n+1} [/math] Well, I think that proof covers pretty much everything. It's a bit messy at the moment, but I think it follows a good logical route. If there's anything I've missed, please tell me.
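As a quick sanity check of the argument on a specific function: take f(x) = x² with the fixed point a = 0. Then [math] A(x) = \int_0^x t^2 dt = \frac{x^3}{3} [/math] and differentiating the area does recover the original function, [math] \frac{dA}{dx} = x^2 = f(x) [/math] exactly as the rectangle argument predicts.
-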
Integration of polynomials from first principles
TheLivingMartyr replied to TheLivingMartyr's topic in Analysis and Calculus
So you mean that to prove the general rule for kx^m I would have to use a Taylor power series, rather than a summation of n^m? I'm just trying to figure out how else to go about proving it, because that's the route I followed to prove kx^2. Thanks though, I would have been on a wild goose chase otherwise XD -
Hello all, I can now differentiate or integrate most functions using the rules that I have learnt, so I set myself the challenge of trying to derive the known formulae: [math] \frac{d}{dx} kx^m = mkx^{m-1} [/math] [math] \int_{0}^{a} kx^m dx = \frac{ka^{m+1}}{m+1} [/math] from first principles (and I don't mean just doing the reverse of the derivative for the integral; I mean actually summing rectangles of width tending to zero), without any external help except for binomial expansions and one summation identity. Anyway, I managed to do it fine for differentiation, and I did it for integration for the special case kx^2, but this relied on me being given the identity: [math] \sum_{r=0}^{n-1}r^2 = \tfrac{1}{6}(n-1)n(2n-1) [/math] I can't find the general rule anywhere for the summation: [math] \sum_{r=0}^{n-1}r^m [/math] If anyone knows the general summation, it would be a great help XD I can manage all the rest fine; I just don't think I'll have the willpower to work out that summation inductively. Thank you!
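For reference, closed forms for this sum are given by Faulhaber's formula, which involves Bernoulli numbers and gets messy quickly. The good news is that the first-principles derivation only needs the leading term, [math] \sum_{r=0}^{n-1} r^m = \frac{n^{m+1}}{m+1} + O(n^m) [/math] because the lower-order terms die off in the limit of the Riemann sum: [math] \int_0^a kx^m dx = \lim_{n\to\infty} \sum_{r=0}^{n-1} k \left( \frac{ar}{n} \right)^m \frac{a}{n} = \lim_{n\to\infty} \frac{ka^{m+1}}{n^{m+1}} \sum_{r=0}^{n-1} r^m = \frac{ka^{m+1}}{m+1} [/math] This gives the target formula for every positive integer m; the special case m = 2 matches the (1/6)(n-1)n(2n-1) identity, whose leading term is n³/3.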
-
I was listening to the news yesterday morning, and I heard a short mention that the gang down at the LHC had discovered a new particle, the Chi_b(3P), as they call it, which is composed of a bottom-antibottom pair. It is supposedly involved in the nuclear force somehow, and I have always had an extremely deep interest in particle physics and fundamental forces, so I'd be excited to know how this particle is involved. If anyone knows anything about it, or has any links to experimental documents, I'd be very interested, and I'm sure the rest of the forum would be too.
-
Wait a minute... oh dear, it seems the function [math] 4x^2 + \frac{1}{x^2} [/math] is never going to cross the x-axis. I see what I did wrong now: I forgot about the constant of integration earlier on in the question. I'll redo that and see if I get any better results. Thanks for your help, I realised my folly! The constant turned out to be -4, so I'll rewrite the equation. Solve: [math] 4x^2 + \frac{1}{x^2} - 4 = 0 [/math] Here I can see that there would be values of x where y would be zero. So again, I'm wondering if it is mathematically acceptable to multiply by x², i.e. change it to [math] 4x^4 + 1 - 4x^2 = 0 [/math] or would that be losing some of the solutions? Sorry, I've just never had to solve things like this before.
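Multiplying through by x² is safe here: x = 0 is already excluded from the original equation (1/x² is undefined there), so no solutions are gained or lost. Better still, the resulting quartic is a perfect square: [math] 4x^4 - 4x^2 + 1 = (2x^2 - 1)^2 = 0 [/math] [math] x^2 = \frac{1}{2} [/math] [math] x = \pm\frac{1}{\sqrt{2}} = \pm\frac{\sqrt{2}}{2} [/math] so the corrected function touches the x-axis at exactly two points.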
-
I'm doing my integration homework, and I can do the integration fine, but one of the functions I've got now is [math] 4x^2 + \frac{1}{x^2} [/math] It asks me to find where this function crosses the x-axis, i.e. to solve it. But seeing as there are positive and negative powers of x, I don't know how to go about solving it. I thought maybe you could do the following: [math] 4x^2 + \frac{1}{x^2} = 0 [/math] [math] 4x^4 + 1 = 0 [/math] [math] (2x^2 + 1)^2 - 4x^2 = 0 [/math] so [math] -2x^2 + 1 = 0 [/math] [math] 2x^2 - 1 = 0 [/math] [math] (\sqrt{2}x + 1)(\sqrt{2}x - 1) = 0 [/math] [math] x = \frac{\sqrt{2}}{2} \quad or \quad x = -\frac{\sqrt{2}}{2} [/math] I'm not sure if you can do it this way? But if you can, could you tell me whether or not my answer is correct, and if you can't do it this way, could someone tell me how you are supposed to solve such an equation. Thanks
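A worked note on why this attempt stalls: [math] 4x^4 + 1 \geq 1 [/math] for every real x, so [math] 4x^4 + 1 = 0 [/math] has no real solutions at all, matching the realisation in the follow-up above that the uncorrected function never crosses the x-axis. The completing-the-square step also doesn't give [math] -2x^2 + 1 = 0 [/math] the difference of two squares factorises as [math] (2x^2 + 1)^2 - (2x)^2 = (2x^2 - 2x + 1)(2x^2 + 2x + 1) = 0 [/math] and both quadratic factors have negative discriminant, confirming there are no real roots.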
-
Can you make a concept of the 5th dimension?
TheLivingMartyr replied to R A J A's topic in Classical Physics
But wait, according to general relativity, one can represent gravity as a curvature of spacetime into another, inaccessible dimension. Obviously, the three spatial dimensions would need a fourth hyperspatial dimension to curve into, and the one temporal dimension would need a second hypertemporal dimension to bend into. So 4-dimensional spacetime would be situated in 6-dimensional hyperspacetime, into which it can curve and bend. Is this not one of the modern understandings of the effect of mass, energy, and velocity on space and time?