Jim Kata

Members
  • Posts: 10
  1. String theory was discovered from the study of Regge trajectories in the scattering of mesons and baryons. Veneziano came up with a beta function amplitude to explain this phenomenon, and Nambu and Goto showed that such an amplitude can be obtained from a bosonic string theory. Very crudely, the scattering of four open string tachyons is given at tree level by the S-matrix amplitude: [math] S(k_1 ;k_2 ;k_3 ;k_4 ) \propto B( - \alpha 's - 1, - \alpha 't - 1) + B( - \alpha 's - 1, - \alpha 'u - 1) + B( - \alpha 't - 1, - \alpha 'u - 1) [/math] where the Euler beta function is given by: [math] B\left( { - \alpha 'x - 1, - \alpha 'y - 1} \right) = \frac{{\Gamma \left( { - \alpha 'x - 1} \right)\Gamma \left( { - \alpha 'y - 1} \right)}} {{\Gamma \left( { - \alpha 'x - \alpha 'y - 2} \right)}} [/math] and gamma is the Euler gamma function: [math] \Gamma (x) = \int\limits_0^\infty {e^{ - u} } u^{x - 1} du [/math] Using the conventions found in Polchinski's book, [math] s + t + u = - \frac{4} {{\alpha '}} [/math]

Now the Regge limit is [math]s \to \infty[/math] with t fixed. Using the Stirling approximation of the gamma function for large s, [math] \Gamma \left( {( - \alpha 's - 2) + 1} \right) \approx ( - \alpha 's - 2)^{( - \alpha 's - 2)} \exp \left( {\alpha 's + 2} \right)\left( {2\pi ( - \alpha 's - 2)} \right)^{1/2} [/math] the beta function for the s channel is approximately [math] B( - \alpha 's - 1, - \alpha 't - 1) \approx \frac{{\left( { - \alpha 's - 2} \right)^{ - \alpha 's - 2} \exp ( - \alpha 't - 1)(2\pi ( - \alpha 's - 2))^{1/2} }} {{\left( { - \alpha 's - \alpha 't - 3} \right)^{ - \alpha 's - \alpha 't - 3} (2\pi ( - \alpha 's - \alpha 't - 3))^{1/2} }}\Gamma ( - \alpha 't - 1) [/math] Doing a little algebra, [math] \frac{{\left( { - \alpha 's - 2} \right)^{ - \alpha 's - 2} }} {{\left( { - \alpha 's - \alpha 't - 3} \right)^{ - \alpha 's - \alpha 't - 3} }} = \left( {1 + \frac{{\alpha 't + 1}} {{\alpha 's + 2}}} \right)^{\alpha 's + 2} \left( { - \alpha 's - \alpha 't - 3} \right)^{\alpha 't + 1} [/math] and [math] \mathop {\lim }\limits_{s \to \infty } \left( {1 + \frac{{\alpha 't + 1}} {{\alpha 's + 2}}} \right)^{\alpha 's + 2} \left( { - \alpha 's - \alpha 't - 3} \right)^{\alpha 't + 1} = \exp (\alpha 't + 1)\left( { - \alpha 's - \alpha 't - 3} \right)^{\alpha 't + 1} [/math] So [math] B( - \alpha 's - 1, - \alpha 't - 1) \approx \mathop {\lim }\limits_{s \to \infty } \left( { - \alpha 's - \alpha 't - 3} \right)^{\alpha 't + 1} \left( {\frac{{ - \alpha 's - 2}} {{ - \alpha 's - \alpha 't - 3}}} \right)^{1/2} \Gamma \left( { - \alpha 't - 1} \right) [/math] Using L'Hopital's rule you can show the square root term approaches 1 in the limit, so for the s channel you're just left with: [math] B( - \alpha 's - 1, - \alpha 't - 1) \approx \left( { - \alpha 's} \right)^{\alpha 't + 1} \Gamma \left( { - \alpha 't - 1} \right) [/math]

I can do a similar thing for the t channel and show I get [math] B( - \alpha 's - 1, - \alpha 'u - 1) \approx \left( {\alpha 's} \right)^{\alpha 't + 1} \Gamma \left( { - \alpha 't - 1} \right) [/math] but for the u channel I get a mess. According to Polchinski's book the answer should be something like: [math] S(k_1 ;k_2 ;k_3 ;k_4 ) \propto s^{\alpha 't + 1} \Gamma \left( { - \alpha 't - 1} \right) [/math] Any ideas or proofs of this would be greatly appreciated.
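As a quick numerical sanity check of the key limit step above, here is a Python sketch; c stands in for alpha'*t + 1 and x for alpha'*s + 2, with purely illustrative values:

[code]
# Sketch: verify (1 + c/x)^x -> exp(c) as x -> infinity, the limit used to
# reduce the s-channel beta function. c stands in for alpha'*t + 1 and x for
# alpha'*s + 2; the numbers are illustrative, not from the post.
import math

c = 1.7
for x in (1e2, 1e4, 1e6):
    print(f"x = {x:.0e}: (1 + c/x)^x = {(1 + c / x) ** x:.8f}, exp(c) = {math.exp(c):.8f}")
# the middle column converges to exp(c) as x grows
[/code]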
  2. Alright, let me legitimately try to answer your question. You know that the integral [math]\int\limits_a^b {f'(s)ds = } f(b) - f(a)[/math], yes? And you know that the derivative of a constant is always zero, right? [math]\frac{d}{{dx}}C = 0[/math]. So if you are doing an indefinite integral, you can only determine the antiderivative up to a constant. That is, all functions [math] f(x) + C [/math] will yield the same derivative, namely [math]f'(x)[/math]. Now, pertaining to your question, another way you can write [math] \int\limits_a^x {f'(s)ds} = f(x) - f(a)[/math] (1), although less rigorous, is [math]f(x)=\int {f'(x)dx}+C[/math]. This is an indefinite integral; there are no definite bounds of integration. Like I said, equation (1) is the same thing as the integral I just wrote. How? Because [math]\int\limits_a^x {f'(s)ds = } f(x) - f(a)[/math], so [math]f(x) = \int\limits_a^x {f'(s)ds} + f(a)[/math]. The reason mathematicians like to write it like equation (1) is that it already satisfies initial conditions, in that [math]C[/math] is already chosen. As for your question about replacing [math]b[/math] with [math]x[/math], the answer is quite simple. When you write [math]b[/math] you are talking about a specific number, namely [math]b[/math]. When I replace [math]b[/math] with [math]x[/math] I am talking about any possible real number. Think about it the same way as you did for definite integrals, except that now [math]b[/math] is not given to you; it can be any real number, [math]x[/math]. You said you wanted to see some examples, so here are a few (sorry, I only know physics). A basic one: [math]a = -g[/math] where [math]g = 9.8\frac{m}{{s^2 }}[/math], so [math]\int\limits_{v_0 }^v {dv} = - \int\limits_0^t g{dt}[/math] gives [math]v - v_0 = - gt[/math], which implies [math] v = - gt + v_0 [/math]. See how in this method I didn't have to plug in initial values in order to find [math]C[/math]; that is done by the limits of integration on my velocity integral. You could integrate this again, [math]\int\limits_{x_0 }^x {dx} = \int\limits_0^t {( - gt + v_0 )dt}[/math], which gives you [math]x = - \frac{1}{2}gt^2 + v_0 t + x_0[/math], the equation for the free-fall motion of an object. Here's a more elaborate integral: [math] \vec E(\vec r,t) = \frac{1} {{4\pi \varepsilon _0 }}\int {\left[ {\frac{{\rho (\vec r',t_r )}} {{\left| {\vec r - \vec r'} \right|^3 }}(\vec r - \vec r') + \frac{{\dot \rho (\vec r',t_r )}} {{c\left| {\vec r - \vec r'} \right|^2 }}(\vec r - \vec r') - \frac{{\dot {\vec J}(\vec r',t_r )}} {{c^2 \left| {\vec r - \vec r'} \right|}}} \right]dv'} [/math] Evaluating this gives the electric field of an arbitrary charge and current distribution. There are tons and tons more examples of how integrals are used, but you should read up on them on your own.
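If it helps, the free-fall example above can be checked symbolically; here is a minimal sketch using sympy (the symbol names are mine):

[code]
# Sketch: reproduce v = v0 - g*t and x = x0 + v0*t - g*t**2/2 by
# integrating with definite limits, exactly as in the post above.
import sympy as sp

t, s, v0, x0, g = sp.symbols('t s v0 x0 g', positive=True)

v = v0 + sp.integrate(-g, (s, 0, t))            # integrate dv = -g dt from 0 to t
x = x0 + sp.integrate(v.subs(t, s), (s, 0, t))  # integrate dx = v dt from 0 to t

print(v)  # -g*t + v0
print(x)  # -g*t**2/2 + t*v0 + x0
[/code]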
  3. I believe I can prove from my previous post that [math] [J^{\alpha \beta } ,D^{\mu \tau } ] = 0 [/math]; tell me if you think my argument works. First, just by looking at equations (13) and (16) in my previous post, it should be obvious that they would be consistent if [math] [J^{\alpha \beta } ,D^{\mu \tau } ] = 0 [/math]. Here's a sketch of a proof; I'm sorry if it is not very rigorous. It should be noted that, according to the commutation relation (12) in my previous post, the commutator of [math]J^{\alpha \beta }[/math] and [math]P^\rho[/math] is a function of [math]P[/math]. Because of this fact and the fact that [math][D^{\alpha \beta } ,P^\rho ] = 0[/math], it is true that [math][D^{\alpha \beta } ,[J^{\mu \tau } ,P^\rho ]] = 0[/math]. Now, using the Jacobi identity we have: [math] [D^{\alpha \beta } ,[J^{\mu \tau } ,P^\rho ]] + [J^{\mu \tau } ,[P^\rho ,D^{\alpha \beta } ]] + [P^\rho ,[D^{\alpha \beta } ,J^{\mu \tau } ]] = 0 [/math]. The first term is zero for the reasons mentioned above, and the second term in the Jacobi identity is also zero since [math][P^\rho ,D^{\alpha \beta } ] = 0[/math], as mentioned in my previous post. This leaves the identity [math][P^\rho ,[D^{\alpha \beta } ,J^{\mu \tau } ]] = 0[/math]. Now there is a theorem that says: a linear operator that commutes with each of a complete set of commuting observables is a function of those observables. Using this theorem, it follows that [math] [D^{\alpha \beta } ,J^{\mu \tau } ] = f^{\alpha \beta \mu \tau } (P) [/math]. Since [math]D^{\alpha \beta }[/math] commutes with [math]P^\rho[/math], looking at the equation in the previous sentence it is true that [math][D^{\alpha \beta } ,[J^{\mu \tau } ,D^{\alpha \beta } ]] = 0[/math]. Let's consider the set of eigenvectors of [math]J^{\mu\tau}[/math]. So [math] J^{\mu \tau } \left| {\mathop m\limits^{\mu \tau } } \right\rangle = \lambda ^{\mu \tau } (m)\left| {\mathop m\limits^{\mu \tau } } \right\rangle [/math], where [math]m[/math] labels the eigenvectors of any one of the operators [math]J^{\mu \tau }[/math], and the labeling doesn't have to be the same for each of the operators. I am also implicitly assuming that the eigenvalues of these operators are discrete and finite, which for a particular gauge can be shown. Taking the expectation value of the double commutator [math] [D^{\alpha \beta } ,[J^{\mu \tau } ,D^{\alpha \beta } ]] = 0 [/math] in the state [math]\left| {\mathop m\limits^{\mu \tau } } \right\rangle[/math], you get [math] \mathop \sum \limits_{m'} \left( {\lambda ^{\mu \tau } (m') - \lambda ^{\mu \tau } (m)} \right)\left| {d_{m'm}^{\alpha \beta (\mu \tau )} } \right|^2 = 0 [/math] (2.1), where [math] d_{m'm}^{\alpha \beta (\mu \tau )} = \left\langle {\mathop {m'}\limits^{\mu \tau } } \right|D^{\alpha \beta } \left| {\mathop m\limits^{\mu \tau } } \right\rangle [/math]. If there existed an [math]m[/math] for which [math] \lambda ^{\mu \tau } (m') \ne \lambda ^{\mu \tau } (m) [/math] with [math]d_{m'm}^{\alpha \beta (\mu \tau )}[/math] also not equal to zero, one could choose the [math]m[/math] that makes [math] \lambda ^{\mu \tau } (m)[/math] the smallest such eigenvalue; this can be done since [math]m[/math] ranges over a finite set. In that case the left-hand side of equation (2.1) would be strictly positive, contradicting equation (2.1). From this it can be concluded that [math]d_{m'm}^{\alpha \beta (\mu \tau )}[/math] must vanish for all [math]m'[/math], [math]m[/math] for which [math]\lambda ^{\mu \tau } (m') \ne \lambda ^{\mu \tau } (m)[/math].
This proves that the eigenvectors of [math]J^{\mu \tau }[/math] also diagonalize [math]D^{\alpha\beta}[/math], and hence [math][J^{\alpha \beta } ,D^{\mu \tau } ] = 0[/math], which means [math]D^{\alpha\beta}[/math] acts like an internal symmetry.
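Since the proof leans on the Jacobi identity, here is a trivial numerical sanity check of that identity for arbitrary matrices; this is my own sketch, not part of the proof:

[code]
# Sketch: the Jacobi identity [A,[B,C]] + [B,[C,A]] + [C,[A,B]] = 0
# holds for any square matrices; check it with random 4x4 matrices.
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))

def comm(X, Y):
    return X @ Y - Y @ X

residual = comm(A, comm(B, C)) + comm(B, comm(C, A)) + comm(C, comm(A, B))
print(np.abs(residual).max())  # ~1e-15, i.e. zero to machine precision
[/code]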
  4. This is why I only eat rocks.
  5. Just to be an even bigger dick, the generalization of the fundamental theorem of calculus is known as Stokes' theorem. It says: let [math] \partial V [/math] be the closed p-dimensional boundary of a (p+1)-dimensional surface [math] V [/math], and let [math] \sigma [/math] be a p-form defined throughout [math] V [/math]. Then [math] \int\limits_V {{\mathbf{d}}\sigma } = \int\limits_{\partial V} \sigma [/math] The integral of the p-form [math] \sigma [/math] over the boundary [math] \partial V[/math] equals the integral of the (p+1)-form [math] {\mathbf{d}}\sigma [/math] over the interior [math]V[/math], where [math] \sigma = \frac{1} {{p!}}\sigma _{i_1 i_2 \cdots i_p } (x^1 , \cdots ,x^n ){\mathbf{d}}x^{i_1 } \wedge \cdots \wedge {\mathbf{d}}x^{i_p } [/math] and [math] {\mathbf{d}}\sigma = \frac{1} {{p!}}{\mathbf{d}}\sigma _{i_1 i_2 \cdots i_p } (x^1 , \cdots ,x^n ) \wedge {\mathbf{d}}x^{i_1 } \wedge \cdots \wedge {\mathbf{d}}x^{i_p } [/math] with the wedge operator [math] \wedge [/math] defined on 1-forms by [math] {\mathbf{A}} \wedge {\mathbf{B}} = {\mathbf{A}} \otimes {\mathbf{B}} - {\mathbf{B}} \otimes {\mathbf{A}} [/math], [math] \otimes [/math] being the tensor product. The fundamental theorem of calculus that you are learning about is a corollary of Stokes' theorem. Namely: the integral of a gradient [math] {\mathbf{d}}f [/math] along a curve [math] P(x) [/math] from [math] P(a) [/math] to [math]P(b)[/math] is [math] \int {{\mathbf{d}}f = \int\limits_a^b {\left\langle {{\mathbf{d}}f,dP/dx} \right\rangle dx = \int\limits_{P(a)}^{P(b)} {\frac{{df}} {{dx}}dx = f(P(b)) - f(P(a))} } } [/math] This is also called the fundamental theorem of line integrals, where in your case the curve is just the x-axis.
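To make the fundamental theorem of line integrals concrete, here is a small numerical sketch; the scalar field f and the curve P are arbitrary choices of mine:

[code]
# Sketch: the line integral of df along a curve P(t) equals
# f(P(1)) - f(P(0)). The field and the curve are arbitrary examples.
import numpy as np
from scipy.integrate import quad

def f(x, y):
    return x**2 * y + np.sin(y)

def P(t):
    return np.cos(t), np.sin(2 * t)

def integrand(t):
    x, y = P(t)
    dxdt, dydt = -np.sin(t), 2 * np.cos(2 * t)
    # <df, dP/dt> = (df/dx) x'(t) + (df/dy) y'(t)
    return (2 * x * y) * dxdt + (x**2 + np.cos(y)) * dydt

line_integral, _ = quad(integrand, 0.0, 1.0)
print(line_integral)              # matches the endpoint difference below
print(f(*P(1.0)) - f(*P(0.0)))
[/code]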
  6. I agree with most of these recommendations, but I have a few to add.

At the undergraduate mechanics level, I believe Marion is the standard. A great book on waves and optics is the one titled simply Waves from the Berkeley series of textbooks, if you can find it; all the books in the Berkeley series are great. Another great source for classical physics is anything by Landau and Lifshitz, especially The Classical Theory of Fields. Russian authors are the kings of all forms of writing, not just literature. A nice little book on symplectic mechanics is Mathematical Methods of Classical Mechanics by V.I. Arnold.

I feel like I have to mention a book on condensed matter since w=f(z) didn't. For undergraduates Kittel is pretty standard, but I would skip this book and go straight to Solid State Physics by Ashcroft and Mermin. It's a little older, but still the best. You could always supplement it with A Quantum Approach to Condensed Matter Physics by Taylor and Heinonen.

I don't see how you can even mention relativity and not mention Gravitation by Misner, Thorne, and Wheeler. Granted, it may be 35 years old, but it is still a tome of knowledge.

As far as QFT is concerned, DO NOT BUY PESKIN & SCHROEDER. I don't care if it is the standard used in grad schools. This book, in my opinion, SUCKS! I much prefer Weinberg's treatment of the subject. Another great book on QFT is Itzykson and Zuber, but it is a little older and will probably have to be supplemented with a more current text. I've heard Zee's book has some interesting applications to condensed matter, but I haven't had a chance to look at it.

If you want to look at an alternative to string theory such as loop quantum gravity, check out John Baez's book Gauge Fields, Knots, and Gravity or Carlo Rovelli's book Quantum Gravity. An online copy of Rovelli's book can be found at http://www.cpt.univ-mrs.fr/~rovelli/book.pdf

Well, there you go. If you actually read, and understood, all this shit, you'd be a professional physicist.
  7. Let me preface this by saying I am not a professional mathematician, and I do not know that much about Lie algebras. In this question I will be using the strong Einstein summation convention, with summation over Greek indices running over 0, 1, 2, 3. My question involves comparing and contrasting the Lie algebra of the Poincare group with that of the group [math] GL(4,\mathbb{R}) [/math] with translations [math] \mathbb{R}^4 [/math]. In special relativity, coordinate transformations are given by [math] {\mathbf{\bar x}} = \Lambda {\mathbf{x}} + {\mathbf{a}} [/math] where [math] \eta = \Lambda ^T \eta \Lambda [/math] and [math] \eta _{\alpha \beta } = \left\{ {\begin{array}{*{20}c} {0} & {{\text{if }}\alpha \ne \beta } \\ { - 1} & {{\text{if }}\alpha = \beta = 0} \\ {1} & {{\text{otherwise}}} \\ \end{array} } \right. [/math] A unitary representation of this transformation may be written as [math] U\left( {\Lambda ,{\mathbf{a}}} \right) [/math]. Infinitesimally, [math] U\left( {1 + \omega ,\varepsilon } \right) = 1 + i\frac{1} {2}\omega _{\alpha \beta } J^{\alpha \beta } + i\varepsilon _\rho P^\rho [/math] where [math] \omega _{\beta \alpha } = - \omega _{\alpha \beta } [/math] and [math] J^{\beta \alpha } = - J^{\alpha \beta } [/math]. It can be worked out that [math] U(\Lambda ,{\mathbf{a}})U(1 + \omega ,\varepsilon )U^{ - 1} (\Lambda ,{\mathbf{a}}) [/math] will give two equations: [math] U(\Lambda ,{\mathbf{a}})J^{\alpha \beta } U^{ - 1} (\Lambda ,{\mathbf{a}}) = \Lambda _\mu ^\alpha \Lambda _\tau ^\beta (J^{\mu \tau } - a^\mu P^\tau + a^\tau P^\mu ) [/math] (1) and [math] U(\Lambda ,{\mathbf{a}})P^\rho U^{ - 1} (\Lambda ,{\mathbf{a}}) = \Lambda _\mu ^\rho P^\mu [/math] (2) where [math] \Lambda _\mu ^\tau = \eta _{\mu \sigma } \Lambda ^\sigma _\gamma \eta ^{\gamma \tau } [/math]. Taking [math] U\left( {\Lambda ,{\mathbf{a}}} \right) [/math] and [math] U^{ - 1} \left( {\Lambda ,{\mathbf{a}}} \right) [/math] as infinitesimals, from equation (1) you obtain two commutation relations, namely: [math] i\left[ {J^{\alpha \beta } ,J^{\mu \tau } } \right] = \eta ^{\beta \mu } J^{\alpha \tau } - \eta ^{\alpha \mu } J^{\beta \tau } - \eta ^{\tau \alpha } J^{\mu \beta } + \eta ^{\tau \beta } J^{\mu \alpha } [/math] (3) and [math] i\left[ {P^\rho ,J^{\mu \tau } } \right] = \eta ^{\rho \mu } P^\tau - \eta ^{\rho \tau } P^\mu [/math] (4) Doing a similar procedure with equation (2), you obtain two more, namely: [math] i\left[ {J^{\mu \tau } ,P^\rho } \right] = - \eta ^{\rho \mu } P^\tau + \eta ^{\rho \tau } P^\mu [/math] (5) which is consistent with equation (4), and [math] \left[ {P^\alpha ,P^\beta } \right] = 0 [/math] (6)

Now recreate this entire process, except this time using [math] GL(4,\mathbb{R}) [/math] instead of [math] O(3,1) [/math]. A general coordinate change is given by [math] {\mathbf{\bar x}} = {\mathbf{Ax}} + {\mathbf{a}} [/math] where [math] {\mathbf{A}} \in GL(4,\mathbb{R}) [/math] and [math] {\mathbf{a}} \in \mathbb{R}^4 [/math]. It has the property [math] {\mathbf{\bar g}} = {\mathbf{A}}^T {\mathbf{gA}} [/math] where [math] {\mathbf{g}}[/math] is the symmetric bilinear form on the space. Its unitary representation is [math] U\left( {{\mathbf{A}},{\mathbf{a}}} \right) [/math]. Infinitesimally, [math] U\left( {1 + \xi ,\varepsilon } \right) = 1 + i\xi _{\alpha \beta } A^{\alpha \beta } + i\varepsilon _\rho P^\rho [/math] where [math] \xi _{\alpha \beta } = g_{\alpha \mu } \xi ^\mu {}_\beta [/math] and [math] \varepsilon _\rho = g_{\rho \mu } \varepsilon ^\mu [/math].
Decomposing into antisymmetric and symmetric parts, [math] \xi _{\alpha \beta } = \xi _{[\alpha ,\beta ]} + \xi _{\{ \alpha ,\beta \} } [/math] and [math] A^{\alpha \beta } = \frac{1} {2}(J^{\alpha \beta } + D^{\alpha \beta } ) [/math] where [math] \xi _{[\alpha ,\beta ]} = \frac{1} {2}(\xi _{\alpha \beta } - \xi _{\beta \alpha } ) [/math], [math] \xi _{\{ \alpha ,\beta \} } = \frac{1} {2}(\xi _{\alpha \beta } + \xi _{\beta \alpha } ) [/math], [math] J^{\alpha \beta } = A^{\alpha \beta } - A^{\beta \alpha } [/math], and [math] D^{\alpha \beta } = A^{\alpha \beta } + A^{\beta \alpha } [/math], you obtain [math] U(1 + \xi ,\varepsilon ) = 1 + \frac{1} {2}i\xi _{[\alpha ,\beta ]} J^{\alpha \beta } + \frac{1} {2}i\xi _{\{ \alpha ,\beta \} } D^{\alpha \beta } + i\varepsilon _\rho P^\rho [/math] (7) Now, doing the same procedure as for the Poincare group, three equations are obtained from [math] U({\mathbf{A}},{\mathbf{a}})U(1 + \xi ,\varepsilon )U^{ - 1} ({\mathbf{A}},{\mathbf{a}}) [/math], namely: [math] U(A,a)J^{\alpha \beta } U^{ - 1} (A,a) = A^{ - 1\alpha } _\mu A^{ - 1\beta } _\tau (J^{\mu \tau } + a^\mu P^\tau - a^\tau P^\mu ) [/math] (8) [math] U(A,a)D^{\alpha \beta } U^{ - 1} (A,a) = A^{ - 1\alpha } _\mu A^{ - 1\beta } _\tau (D^{\mu \tau } - a^\mu P^\tau - a^\tau P^\mu ) [/math] (9) [math] U(A,a)P^\rho U^{ - 1} (A,a) = A^{ - 1\rho } _\mu P^\mu [/math] (10) Taking [math] U({\mathbf{A}},{\mathbf{a}}) [/math] and [math] U^{ - 1} ({\mathbf{A}},{\mathbf{a}}) [/math] as infinitesimals, from equation (8) you obtain three commutation relations, namely: [math] i\left[ {J^{\alpha \beta } ,J^{\mu \tau } } \right] = g ^{\beta \mu } J^{\alpha \tau } - g ^{\alpha \mu } J^{\beta \tau } - g ^{\tau \alpha } J^{\mu \beta } + g ^{\tau \beta } J^{\mu \alpha } [/math] (11) which is the equivalent of equation (3) for a general metric, [math] i\left[ {P^\rho ,J^{\mu \tau } } \right] = g ^{\rho \mu } P^\tau - g ^{\rho \tau } P^\mu [/math] (12) which is the equivalent of equation (4) for a general metric, and [math] i[J^{\alpha \beta } ,D^{\mu \tau } ] = g^{\beta \mu } J^{\alpha \tau } + g^{\beta \tau } J^{\alpha \mu } - g^{\alpha \tau } J^{\beta \mu } - g^{\alpha \mu } J^{\beta \tau } [/math] (13) Doing the same procedure with equation (9), you obtain: [math] i[D^{\alpha \beta } ,D^{\mu \tau } ] = g^{\beta \mu } D^{\alpha \tau } + g^{\beta \tau } D^{\alpha \mu } + g^{\alpha \tau } D^{\beta \mu } + g^{\alpha \mu } D^{\beta \tau } [/math] (14) [math] i[P^\rho ,D^{\mu \tau } ] = - g^{\rho \mu } P^\tau - g^{\rho \tau } P^\mu [/math] (15) [math] i[J^{\alpha \beta } ,D^{\mu \tau } ] = g^{\beta \mu } D^{\alpha \tau } + g^{\beta \tau } D^{\alpha \mu } - g^{\alpha \tau } D^{\beta \mu } - g^{\alpha \mu } D^{\beta \tau } [/math] (16) and doing the same procedure with equation (10) you obtain [math] i\left[ {J^{\mu \tau},P^\rho } \right] = -g ^{\rho \mu } P^\tau + g ^{\rho \tau } P^\mu [/math] (17) which is consistent with equation (12), [math] i[P^\rho ,D^{\mu \tau } ] = g^{\rho \mu } P^\tau + g^{\rho \tau } P^\mu [/math] (18) and [math] \left[ {P^\alpha ,P^\beta } \right] = 0 [/math] Looking at equations (15) and (18) led me to conclude that [math] [P^\rho ,D^{\mu \tau } ] = 0[/math], and looking at equation (14) I also had to conclude that [math] [D^{\alpha \beta } ,D^{\mu \tau } ] = 0[/math]. Now, equations (13) and (16) do not appear to be consistent. The only way I can think of in which they would be consistent is if [math] [J^{\alpha \beta } ,D^{\mu \tau } ] = 0[/math]. Is there an easy way to prove that, maybe using the Jacobi identity? Am I going about this all wrong?
What is going on? Why do I get these inconsistencies in the Lie algebras? I know the answer is probably simple.
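For what it's worth, equation (3) can be checked numerically in the vector (defining) representation, where the generators are proportional to i(eta^{am} delta^b_n - eta^{bm} delta^a_n); since the overall sign of the generators is convention-dependent, this sketch of mine tries both signs and reports which one closes into (3):

[code]
# Sketch: check that (J^{ab})^m_n = s*i*(eta^{am} d^b_n - eta^{bm} d^a_n)
# satisfies equation (3) for one choice of the overall sign s.
import itertools
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def J(a, b, s):
    M = np.zeros((4, 4), dtype=complex)
    for m in range(4):
        for n in range(4):
            M[m, n] = s * 1j * (eta[a, m] * (b == n) - eta[b, m] * (a == n))
    return M

for s in (+1, -1):
    worst = 0.0
    for a, b, mu, tau in itertools.product(range(4), repeat=4):
        lhs = 1j * (J(a, b, s) @ J(mu, tau, s) - J(mu, tau, s) @ J(a, b, s))
        rhs = (eta[b, mu] * J(a, tau, s) - eta[a, mu] * J(b, tau, s)
               - eta[tau, a] * J(mu, b, s) + eta[tau, b] * J(mu, a, s))
        worst = max(worst, np.abs(lhs - rhs).max())
    print(f"s = {s:+d}: max residual = {worst:.2e}")  # vanishes for one sign
[/code]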
  8. Sorry in advance for the length of this question. Can someone show me the solution to the integral [math] \int\limits_{ - \infty }^\infty {\prod\limits_r {d\xi _r } } \exp ( - 1/2\sum\limits_{r,s} {K_{rs} } \xi _r \xi _s + V(\xi )) [/math] where [math] K_{rs} [/math] is positive definite, non-singular, and symmetric, and [math] V(\xi ) [/math] is analytic? The reason I ask is that in quantum field theory you encounter integrals like these in the Feynman path integral formalism. I want to show, through a path integral, that for the photon propagator you get [math] - i\Delta '_{\mu \tau } (x,y) = \left\langle {T\{ A_\mu (x)A_\tau (y)\} } \right\rangle _0 [/math] where [math] \Delta ' = \Delta [1 - \Pi ^* \Delta ]^{ - 1} [/math] with [math] \Delta _{\mu x,\tau y} = (2\pi )^{ - 4} \int {d^4 } q\frac{{\eta _{\mu \tau } }}{{q^2 - i\varepsilon }}e^{iq \cdot (x - y)} [/math] in the Feynman gauge, and [math] i(2\pi )^4 \Pi ^{*\rho \sigma } (q) [/math] equal to the sum of the one-particle-irreducible graphs (with two external photon lines). I would like to see two things. One is that [math] \int\limits_{ - \infty }^\infty {\prod\limits_r {d\xi _r } } \exp ( - 1/2\sum\limits_{r,s} {K_{rs} } \xi _r \xi _s + V(\xi ))= \exp (Z) [/math], where Z is the sum of connected diagrams; that is, I would like to see how the disconnected diagrams exponentiate. Also, I would like someone to show me perturbatively, through a path integral, why the tadpole diagrams in QED disappear. I know this integral can be solved, since [math] \int {\prod\limits_r {d\xi _r } } \xi _{r_1 } \xi _{r_2 } \ldots \xi _{r_{2n} } \exp ( - 1/2\sum\limits_{r,s} {K_{rs} } \xi _r \xi _s ) = [Det(\frac{K}{{2\pi }})]^{ - \frac{1}{2}} \sum\limits_{\scriptstyle pairings \hfill \atop \scriptstyle r_1 \ldots r_{2n} \hfill} {\prod\limits_{pairs} {K^{ - 1} } } [/math] with the sum being over all ways of pairing the indices [math] r_1 \ldots r_{2n} [/math], two pairings being considered the same if they differ only in the order of the pairs or in the order of the indices within a pair. So, for example, [math] \int {\prod\limits_r {d\xi _r } } \xi _{s_{_1 } } \xi _{s_2 } \exp ( - 1/2\sum\limits_{rs} {K_{rs} \xi _r } \xi _s ) = [Det(\frac{K}{{2\pi }})]^{ - \frac{1}{2}} [(K^{ - 1} )_{s_1 s_2 } ] [/math]. What I do not understand is when, in quantum field theory, they say [math] \Delta '(q) = \Delta (q) + \Delta (q)\Pi ^* (q)\Delta (q) + \Delta (q)\Pi ^* (q)\Delta (q)\Pi ^* (q)\Delta (q) + \cdots [/math]. Can this be shown directly from the path integral expression [math] \frac{{\int {\prod\limits_r {d\xi _r \xi _\mu \xi _\tau \exp [ - 1/2\sum\limits_{rs} {K_{rs} \xi _r \xi _s + V(\xi )]} } } }}{{\int {\prod\limits_r {d\xi _r \exp [ - 1/2\sum\limits_{rs} {K_{rs} \xi _r \xi _s + V(\xi )]} } } }} [/math] and the perturbative properties of QED, not Furry's theorem? Here's the sketch I see: let [math] I_{\mu \tau } = \frac{{\int {\prod\limits_r {d\xi _r \xi _\mu \xi _\tau \exp [ - 1/2\sum\limits_{rs} {K_{rs} \xi _r \xi _s + V(\xi )]} } } }}{{\int {\prod\limits_r {d\xi _r \exp [ - 1/2\sum\limits_{rs} {K_{rs} \xi _r \xi _s + V(\xi )]} } } }} [/math] so [math] I_{\mu \tau } = \frac{{(\Delta _{\mu \tau } + \Delta _{\mu \rho } \Pi ^{\rho \sigma } \Delta _{\sigma \tau } + \Delta _{\mu \rho } \Pi ^{\rho \sigma } \Delta _{\sigma \lambda } \Pi ^{\lambda \delta } \Delta _{\delta \tau } + \cdots )\exp (Z)}}{{\exp (Z)}} [/math] while somewhere along the line showing that the sum of the diagrams with one external photon line disappears.
I think that can be shown from the trace properties of the Dirac matrices.
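The free (V = 0) pairing identity quoted above is easy to verify by Monte Carlo, since the normalized weight exp(-1/2 xi^T K xi) is just a Gaussian with covariance K^{-1}; here is a small sketch with a random positive-definite K of my own choosing:

[code]
# Sketch: under the normalized Gaussian measure ~ exp(-1/2 xi^T K xi),
# <xi_1 xi_2> = (K^{-1})_{12}, and the four-point function obeys Wick
# pairing: <xi_1 xi_2 xi_3 xi_4> = C12*C34 + C13*C24 + C14*C23.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
K = A @ A.T + 4 * np.eye(4)      # positive definite, symmetric (arbitrary)
C = np.linalg.inv(K)             # the "propagator" K^{-1}

xi = rng.multivariate_normal(np.zeros(4), C, size=2_000_000)

# two-point function vs (K^{-1})_{12}
print(np.mean(xi[:, 0] * xi[:, 1]), C[0, 1])
# four-point function vs the sum over pairings
print(np.mean(xi[:, 0] * xi[:, 1] * xi[:, 2] * xi[:, 3]),
      C[0, 1] * C[2, 3] + C[0, 2] * C[1, 3] + C[0, 3] * C[1, 2])
# the columns agree to Monte Carlo accuracy
[/code]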
  9. We all know that a photon has no rest frame, and so it travels at the speed of light. My question is, what is happening at the event horizon of a black hole? Isn't the event horizon the surface from inside which even light cannot escape? At the event horizon, is the light in a rest frame?
  10. I cannot derive the Fermi-Walker transport equation to save my life. Help! The Fermi-Walker transport equation is [math] \frac{{d\hat e_\alpha }} {{d\tau }} = - \Omega \bullet \hat e_\alpha [/math] where [math] \Omega ^{\mu \nu } = u^\mu a^\nu - a^\mu u^\nu [/math] with u and a being the proper velocity and acceleration. I can derive the equation for Thomas precession, [math] A_T = I + \frac{{\gamma ^2 }} {{\gamma + 1}}(\vec v \times \delta \vec v)S + (\gamma ^2 \delta \vec v_\parallel + \gamma \delta \vec v_ \bot )K [/math] with [math] S_1 = \left( {\begin{array}{*{20}c} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & { - 1} \\ 0 & 0 & 1 & 0 \\ \end{array} } \right) [/math], [math] c = 1 [/math], and so on. I'm drunk and tired, but looking at these equations, the Fermi-Walker transport looks similar to Thomas precession except for a factor of [math] \frac{{\gamma ^2 }} {{\gamma + 1}} [/math] and some other differences. What is the connection between these formulas, and how do I derive the formula for Fermi-Walker transport? Please do not quote Misner chapter 6, chapter 8, or chapter 13.
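Not a derivation, but a consistency check: because Omega^{mu nu} is antisymmetric, Fermi-Walker transport preserves inner products. The sketch below (my own; it assumes eta = diag(-1,1,1,1), c = 1, and unit proper acceleration) integrates the transport equation along a uniformly accelerated worldline and checks that the tetrad stays orthonormal:

[code]
# Sketch: integrate d e/d tau = -Omega . e for a uniformly accelerated
# observer and check that the tetrad stays orthonormal (e^T eta e = eta),
# which follows from the antisymmetry of Omega^{mu nu} = u^mu a^nu - a^mu u^nu.
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # assumed signature; c = 1
g = 1.0                                # proper acceleration (arbitrary)

def omega_mixed(tau):
    # hyperbolic motion: u = (cosh, sinh, 0, 0), a = du/dtau
    u = np.array([np.cosh(g * tau), np.sinh(g * tau), 0.0, 0.0])
    a = g * np.array([np.sinh(g * tau), np.cosh(g * tau), 0.0, 0.0])
    Om = np.outer(u, a) - np.outer(a, u)   # Omega^{mu nu}, antisymmetric
    return Om @ eta                         # Omega^mu_nu, acting on vectors

e = np.eye(4)                # initial tetrad; columns are the basis vectors
tau, dtau = 0.0, 1e-3
for _ in range(1000):        # midpoint rule for d e/d tau = -Omega . e
    k1 = -omega_mixed(tau) @ e
    k2 = -omega_mixed(tau + dtau / 2) @ (e + dtau / 2 * k1)
    e = e + dtau * k2
    tau += dtau

# deviation from eta stays at the integrator's accuracy (~1e-6 or better)
print(np.abs(e.T @ eta @ e - eta).max())
[/code]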
  11. I know this has been answered a million times, but I am using MathType and I want to know how to post equations on forums. I know how to convert MathType to LaTeX, but I don't know how to get the LaTeX to show up in HTML.