fermion Posted August 21, 2007 The Lagrangian density for a self-interacting real scalar field in four space-time dimensions is:

[math]\mathcal{L} = \frac{1}{2}\left[(\partial_\mu\phi)(\partial^\mu\phi) - m^2\phi^2\right] + \frac{1}{4}\lambda\,\phi^4[/math]

where [math]\phi[/math] is the real scalar field, [math]m[/math] the mass, [math]\lambda[/math] the quartic self-coupling constant, and [math]\partial_\mu\phi[/math] the derivative of the field with respect to the space-time coordinate [math]x^\mu[/math], with [math]\mu = 0,1,2,3[/math]. If [math]\lambda = 0[/math], we have a free scalar field. With [math]\lambda > 0[/math], we can scale it away with a simple transformation: [math]\psi = \sqrt{\lambda}\,\phi[/math]. Substituting [math]\phi = \psi/\sqrt{\lambda}[/math] into [math]\mathcal{L}[/math] gives:

[math]\mathcal{L} = \frac{1}{\lambda}\left\{\frac{1}{2}\left[(\partial_\mu\psi)(\partial^\mu\psi) - m^2\psi^2\right] + \frac{1}{4}\psi^4\right\}[/math]

Expressed in terms of the new scaled field [math]\psi[/math], the factor [math]1/\lambda[/math] is an overall numerical constant multiplying the Lagrangian density, which has no effect on the classical equations of motion. In other words, [math]\lambda[/math] is scaled away. Does that mean that all non-zero values of [math]\lambda[/math] are equivalent? Is this interpretation correct? Is this a well-known result?
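As a quick sanity check of the algebra above, here is a symbolic computation with sympy (the symbol `dphi` is my own shorthand standing in for the derivative term, since the Lorentz index structure plays no role in the scaling):

```python
import sympy as sp

# Symbols for the coupling, mass, fields and their derivative stand-ins.
lam, m = sp.symbols('lam m', positive=True)
phi, dphi, psi, dpsi = sp.symbols('phi dphi psi dpsi')

# Lagrangian density as written in the opening post.
L = sp.Rational(1, 2) * (dphi**2 - m**2 * phi**2) + sp.Rational(1, 4) * lam * phi**4

# Substitute phi = psi / sqrt(lam), i.e. psi = sqrt(lam) * phi.
L_scaled = L.subs({phi: psi / sp.sqrt(lam), dphi: dpsi / sp.sqrt(lam)})

# The coupling should factor out as an overall 1/lam.
expected = (sp.Rational(1, 2) * (dpsi**2 - m**2 * psi**2)
            + sp.Rational(1, 4) * psi**4) / lam
print(sp.simplify(L_scaled - expected))  # 0: the rescaled Lagrangian matches
```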
timo Posted August 21, 2007 That's a boring and an interesting result at the same time. The boring part is that you're effectively saying that f(x) and f(y=2x) are equivalent. The interesting part is that it seems to show where and how the quantization conditions come into play and set a scale for the physics (well, that's at least what I think; it's not something I ever thought about before, and I have no authoritative third-party opinion to cite). In other words: I think the point you're looking for is the quantization condition. [math]\phi[/math] and [math]\psi[/math] cannot obey the same quantization condition, because the commutator of two [math]\psi[/math]-dependent operators (where "-dependent" shall mean "expressed in terms of [math]\psi[/math]"; the two fields are of course just multiples of each other) will differ by a factor of [math]\lambda[/math] from that of the [math]\phi[/math]-dependent ones. I'd have to read up a bit on it, but I think that's what it boils down to: as soon as you fix the quantization condition for your field, you cannot (or at least should not) arbitrarily rescale it anymore.
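This point can be made explicit with the equal-time commutation relations. A sketch, assuming the standard canonical condition is imposed on [math]\phi[/math]:

```latex
% Canonical quantization of phi:
[\phi(t,\mathbf{x}),\, \dot\phi(t,\mathbf{y})] = i\,\delta^3(\mathbf{x}-\mathbf{y})
% The rescaled field psi = sqrt(lambda) * phi then obeys
[\psi(t,\mathbf{x}),\, \dot\psi(t,\mathbf{y})]
  = \lambda\,[\phi(t,\mathbf{x}),\, \dot\phi(t,\mathbf{y})]
  = i\lambda\,\delta^3(\mathbf{x}-\mathbf{y})
```

So [math]\phi[/math] and [math]\psi[/math] can only both be canonically normalized if [math]\lambda = 1[/math]; fixing the quantization condition breaks the freedom to rescale.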
ajb Posted August 28, 2007 This is an interesting question. Classically, I think that this rescaling is absolutely fine. As Atheist says, quantum mechanically it won't work. The reason is renormalisation and how we interpret the Lagrangian you wrote down. If [math]\lambda[/math] is interpreted as the "unrenormalised" or "bare" coupling (i.e. no quantum corrections), then it is formally infinite. (This is the multiplicative renormalisation scheme.) This is ok, as you cannot measure it. You should also note that the mass [math]m[/math] is also formally infinite once we include quantum corrections to it, and we also need to renormalise the field. Now, as infinity is not a number, we cannot divide by it, and so we cannot perform the rescaling that you suggest. The other way to proceed is to interpret [math]\lambda[/math] as the renormalised coupling; to counteract the divergences we then need to add counterterms. I imagine that the difficulty in rescaling will show up as a difficulty in choosing the counterterms: the counterterms are proportional to terms in the original Lagrangian and so will have a [math]\lambda[/math] dependence. This must be the case, as the multiplicative and counterterm schemes are equivalent. Either way, I don't think that the rescaling is useful; otherwise we would do it from the start.
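In the counterterm language described above, the renormalised Lagrangian is schematically (keeping the sign and 1/4 normalization of the opening post):

```latex
\mathcal{L} = \tfrac{1}{2}(\partial_\mu\phi)(\partial^\mu\phi)
            - \tfrac{1}{2}m^2\phi^2 + \tfrac{1}{4}\lambda\,\phi^4
            % counterterms, one per term of the original Lagrangian:
            + \tfrac{1}{2}\delta_Z\,(\partial_\mu\phi)(\partial^\mu\phi)
            - \tfrac{1}{2}\delta_m\,\phi^2 + \tfrac{1}{4}\delta_\lambda\,\phi^4
```

The coefficients [math]\delta_Z, \delta_m, \delta_\lambda[/math] are fixed order by order in perturbation theory, so each carries explicit [math]\lambda[/math] dependence that a single overall rescaling of the field cannot remove.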
BenTheMan Posted August 28, 2007 "In other words lambda is scaled away. Does that mean that all non-zero values of lambda are equivalent? Is this interpretation correct? Is this a well known result???" The Higgs self-coupling [math]\lambda[/math] is related to the vacuum expectation value of the Higgs field, so I would say that there are REAL consequences. Ahh, but no: if you redefine the field, you still get the same vev. The Yukawa sector may make trouble for you, but I think, again, you'll get [math]\lambda[/math]s in all the right places to compensate. The thing is that you have to do it uniformly: if you rescale the interaction term, you have to rescale all of the other terms, too. My guess is that you'd get factors of [math]\lambda[/math] in all the right places (i.e. the propagator) to make up for the [math]\lambda[/math] you took away in the interaction. Finally, you want a perturbative series in the coupling, so you probably want [math]\lambda[/math] to be much less than [math]4\pi[/math]; otherwise, the series doesn't converge. Again, I suspect that you'll get [math]\lambda[/math]s in all the right places so that everything works out to be the same. This is an EXCELLENT question for a midterm. "Now, as infinity is not a number we cannot divide by it and so we cannot perform the rescaling that you suggest." Ahh, you mathematicians. Isn't this the easy way to do renormalization?
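On the convergence point: even in a zero-dimensional toy version of the [math]\phi^4[/math] theory (an illustration only, not the field theory itself), the perturbative series is asymptotic rather than convergent. Expanding [math]Z(\lambda) = \frac{1}{\sqrt{2\pi}}\int dx\, e^{-x^2/2 - \lambda x^4/4}[/math] in powers of [math]\lambda[/math] and using the Gaussian moment [math]\langle x^{4n}\rangle = (4n-1)!![/math] gives terms that shrink at first and then grow factorially:

```python
import math

def double_factorial(k):
    """k!! for odd k (returns 1 for k <= 1)."""
    result = 1
    while k > 1:
        result *= k
        k -= 2
    return result

def series_term(n, lam):
    """n-th term of the perturbative expansion of the toy integral Z(lam)."""
    return (-lam / 4.0) ** n / math.factorial(n) * double_factorial(4 * n - 1)

lam = 0.1
terms = [abs(series_term(n, lam)) for n in range(30)]
smallest = terms.index(min(terms))

# The magnitudes decrease for the first few orders, then blow up:
# truncating near the smallest term is the best the series can do.
print("smallest term at order", smallest)
print("term 0:", terms[0], " term 29:", terms[29])
```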
fermion Posted September 4, 2007 (Author) Thank you for three useful replies. I think I agree that quantization is the key issue here, as all three replies focused on it. However, there may be a subtle weakness in the quantization argument. Before I get to that, I would suggest BenTheMan not worry about the Yukawa sector: there are no fermions in this model. (With fermions, the scaling is still possible, but it involves a simultaneous rescaling of the fermion fields as well.) We can also avoid the complications of the vacuum expectation value of [math]\phi[/math] (both classical and renormalised) by choosing the sign of [math]m^2[/math] appropriately so that the vacuum expectation value is zero. Back to the quantization argument. Typically the free field [math]\phi[/math] (with [math]\lambda = 0[/math]) is expanded in terms of (momentum-dependent) creation and annihilation operators, and these are assumed to satisfy the standard canonical quantization rules. (See the book by Peskin and Schroeder, for example.) All is well at this point. The next step is to turn [math]\lambda[/math] on. Now, at the Lagrangian level, the interacting field [math]\phi[/math] can no longer be expanded in terms of free-field creation and annihilation operators. At this level, therefore, it is impossible to assert that quantization sets a well-defined scale for [math]\lambda[/math], because we don't even know what we are quantizing. The statement could still be true, but it is a leap of faith, not a rigorous argument. But there is a way out. We turn [math]\lambda[/math] on, but we keep it small (as small as possible, assuming that small values of [math]\lambda[/math] are stable and do not grow too big under renormalization; this is another leap of faith, but it sounds plausible). With this small [math]\lambda[/math], we intend to do perturbative renormalization. To this end, we assume that the expansion of [math]\phi[/math] in terms of creation and annihilation operators is still valid, and we quantize it as we did in the free-field case.
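For reference, the free-field expansion being invoked here is (in the conventions of Peskin and Schroeder, chapter 2):

```latex
\phi(\mathbf{x}) = \int \frac{d^3p}{(2\pi)^3}\,
    \frac{1}{\sqrt{2E_{\mathbf{p}}}}
    \left( a_{\mathbf{p}}\, e^{i\mathbf{p}\cdot\mathbf{x}}
         + a_{\mathbf{p}}^{\dagger}\, e^{-i\mathbf{p}\cdot\mathbf{x}} \right),
\qquad
[a_{\mathbf{p}},\, a_{\mathbf{q}}^{\dagger}]
  = (2\pi)^3\,\delta^3(\mathbf{p}-\mathbf{q})
```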
Based on this free-field quantization condition, we get the usual divergent renormalization counterterms for the mass (quadratically divergent) and for [math]\lambda[/math] (logarithmically divergent), which are handled within a well-defined and self-consistent renormalization prescription. This of course prevents the rescaling of [math]\lambda[/math], as pointed out in two of the replies. The weakness in this argument is in the assumption "...but we keep it small...". Obviously the assumption that [math]\lambda[/math] remains small is contrary to the statement that [math]\lambda[/math] can be scaled away: if the scaling were possible, there would be no difference between [math]\lambda = 0.001[/math] and [math]\lambda = 100[/math]. Only if the argument in the above paragraph could be made without relying on the renormalized [math]\lambda[/math] remaining small when the bare coupling is small would I be convinced.
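Whether a small [math]\lambda[/math] stays small can be checked against the standard one-loop renormalization group equation. A sketch (note the standard result below uses the [math]-\frac{\lambda}{4!}\phi^4[/math] normalization of the interaction, which differs from the 1/4 normalization in this thread by a constant rescaling of [math]\lambda[/math]):

```python
import math

# One-loop running of the phi^4 coupling:
#   d(lam)/d(ln mu) = 3 * lam^2 / (16 * pi^2)
# This ODE has the exact solution implemented here.
def lam_running(lam0, log_mu_ratio):
    """Coupling at scale mu, given lam0 at mu0 and log_mu_ratio = ln(mu/mu0)."""
    denom = 1.0 - 3.0 * lam0 * log_mu_ratio / (16.0 * math.pi ** 2)
    return lam0 / denom

lam0 = 0.001  # small coupling at the reference scale
for t in (0.0, 1e4, 5e4):
    print("ln(mu/mu0) =", t, " lambda =", lam_running(lam0, t))
# lambda grows with the scale and hits a Landau pole at
# ln(mu/mu0) = 16*pi^2 / (3*lam0) ~ 5.3e4, so "small stays small"
# only over a limited range of scales.
```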