
Seeking the Help of Mathematicians (Lie Algebras, Invariants, Group Theory) to Complete a Derivation



Hello internet. My name is Kyle. I am a third-year electrical engineering student at the University of British Columbia (in Canada), and I am looking for the help of those of you who are mathematically trained or so inclined, specifically within the fields of Group Theory, Lie Algebras, and Invariants. I have stumbled across a very interesting problem that I would like to either solve or see solved, but I have lately realized that it could take me years' worth of research to do so, while someone who is already an expert in the aforementioned fields might be able to solve it very quickly. That is not to say that I am giving up on trying; what I am proposing to solve I have already partially completed. Many people who read this post and the corresponding problem may find it very controversial, and so I also invite those people to try and prove me wrong ;) .

 

There has been a new theory proposed recently within the field of Quantum Mechanics that you may or may not be aware of; it is known as “Tetryonics”. For anyone rolling their eyes right now: it was published in a peer-reviewed journal, the International Journal of Scientific and Research Publications, Volume 4, Issue 5, May 2014, and a quick Google Scholar search for “Tetryonics” will pull up the relevant paper. I believe I have devised a way to mathematically derive this theory, prove it to be correct, and henceforth supersede all others.

 

For those of you unaware of the existence of this new theory or its intricacies, I will not give a detailed explanation here; doing so would simply take too long. Simply defining my current problem and my strategy for solving it is already going to take up a lot of space. The following is a link to a YouTube video, created by the theory's author Kelvin Abraham, that gives a good overview and introduction to the theory: https://www.youtube.com/watch?v=0p_NyNfXd7k. The video is very long (2 hours), but I highly recommend it to anyone interested in the field of Quantum Mechanics. Unfortunately, it is probably not possible to critique, improve, or maybe even understand the majority of the rest of this post without being familiar with the basics covered in that video, or without a mathematical background. I am going to breeze through many advanced concepts in Differential Geometry, because once again rigorously defining everything would take too much space. From after the next paragraph onward, I am going to assume that you have watched the video, that you have all or most of a bachelor's degree in mathematics, and that you know a thing or two about statistics, quantum mechanics, and differential geometry (or calculus on manifolds).

 

However, in order to whet the appetite of anyone unfamiliar with the theory, I will now try to summarize the basics of Tetryonics in as few words as possible. In a nutshell, Tetryonics reformulates all concepts in physics in terms of geometry. Single quanta of energy are proposed to possess a literal 2d geometry with real dimensions. The tessellation of this 2d geometry creates familiar concepts regarding the energy of particles and systems, the overlap of these 2d tessellations creates forces, and 3d standing waves comprised of these 2d energy geometries create Matter. I believe I have found a way to mathematically derive and prove the theory of Tetryonics. Doing so involves the following steps:

 

1.) Show that energy can and should possess geometry.

 

2.) Prove that this geometry should be a regular tessellation of the plane.

 

3.) Prove that of the three possible regular tessellations: triangles, squares and hexagons, triangles should be used because they create the Standard Normal Distribution.

 

4.) Prove that the 2d triangular geometry of energy can be used to create 3d models of Matter.

 

5.) Prove that of the five possible forms of 3d Matter, Tetrahedrons are the ones that should be used to model individual quanta of Matter.

 

As I wrote before I have already partially completed this derivation. Steps 3.) and 4.) are mostly complete, though I do not think what I have come up with so far for them could actually be classified as a mathematical proof. Furthermore I have what I think is a very good strategy for completing step 1.), but my expertise in the fields required to carry out this strategy is sorely lacking, basically non-existent. Steps 2.) and 5.) I have no idea how to complete.

 

I am going to now explain my “proofs” of steps 3.) and 4.) in order for anyone interested in lending a hand to get a better feel for what I am trying to do here.

 

Tetryonics proposes that what we normally think of as the Standard Normal Distribution is in fact only an approximation to the true Standard Normal Distribution as created by Tetryonic Geometry.

 

In Tetryonics the square fractal geometries of triangular energy quanta create square energy levels, looking at any picture of a KEM field you can see that it “approximates” the familiar bell curve of the standard normal distribution.

 

[Attached image: a KEM field, whose columns of quanta approximate a bell curve]

 

Take any KEM field and, starting from the center and heading toward either the left or right edge, iteratively add up the total number of quanta in each successive vertical column and divide that by the total number of quanta in the entire geometry. You will find, unsurprisingly, that this “approximates” the standard normal distribution, and that it is most accurate near the center. What is less obvious is that if you take the integer numbers of quanta in each vertical column as elements of a probability distribution, this distribution is in fact a general normal deviate. The distribution is the set of integers X = {1, 2, 3, …, N−2, N−1, N, N−1, N−2, …, 3, 2, 1}. Proving this is actually very easy; it merely requires rescaling the elements of the distribution. This proof comprises part of step 3.); the other part of step 3.) is the justification as to why this is useful.

 

The following is a quote from Wikipedia, and it is something so commonly known in statistics that I don't think it requires a more reputable source: “if X is a general normal deviate, then Z = (X − μ)/σ will have a standard normal distribution”, where μ and σ are the average and standard deviation of the elements of X respectively. Knowing this, the following algorithm can be implemented in Excel or your favorite computational mathematics software for any distribution X with maximum value N, and it achieves the same end result.

 

1.) Compute the average of X, μ.

2.) Compute the standard deviation of X, σ.

3.) Rescale all entries of X by subtracting μ from them and dividing by σ.

4.) Calculate the average and standard deviation of these rescaled entries.

 

You will find that for any N you will calculate an average value of approximately 0, and a standard deviation of 1. I say approximately because there is some error, but in Excel for an N of only 10, I calculate an average value of -1.51925E-16, and a standard deviation of 1. I applied the same algorithm to the algebraic calculation of the standard deviation of the rescaled elements of X and got an equation filled with composite sums, there is a picture attached to this post that shows what I got.
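As a sketch, the algorithm above can be checked in a few lines of Python (standard library only). I am assuming the population standard deviation (`pstdev`, matching Excel's STDEVP) was used, since that is the formula for which the rescaled standard deviation comes out to exactly 1:

```python
from statistics import fmean, pstdev

def tetryonic_distribution(N):
    """The column counts X = {1, 2, ..., N-1, N, N-1, ..., 2, 1}."""
    return list(range(1, N + 1)) + list(range(N - 1, 0, -1))

def standardize(X):
    """Rescale each element: Z = (x - mu) / sigma."""
    mu, sigma = fmean(X), pstdev(X)
    return [(x - mu) / sigma for x in X]

Z = standardize(tetryonic_distribution(10))
print(fmean(Z))   # ~0, up to floating-point round-off (order 1e-16)
print(pstdev(Z))  # ~1
```

Note that standardizing any finite list this way yields a mean of 0 and a population standard deviation of 1 by construction, which is why the result holds for every N; the tiny nonzero mean is pure floating-point error.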

 

[Attached image: the algebraic expression, full of composite sums, for the standard deviation of the rescaled elements of X]

 

The innermost sums, ∑(2i) + N, are the result of summing all the elements of X in order to calculate μ.

In order to “prove” that Tetryonic geometry creates the normal distribution, I take the limit of this expression as N goes to infinity and show that it equals one. I have not yet been able to do this using algebra; trying to simplify the expression by hand is so complicated as to be nearly impossible. All of the computer algebra programs I have tried so far have reported divergence to infinity, despite always giving a calculated value of 1 for any fixed, finite N. I believe this is because they do not actually evaluate the composite sums but try to approximate them, and these composite sums are difficult to eliminate through algebraic manipulation. However, if this algorithm converges to one numerically, then there has to be a closed-form algebraic solution to the limit of the expression as N goes to infinity; I simply have not yet found it.

 

As I have written before, verifying my results is actually quite easy and I recommend doing it yourself but for now I will move on to explaining why this result is important, and useful to the larger derivation.

 

Basically, this result shows how the geometry of energy itself results in the perception, and literal measurement, of “uncertainty” in the energy, location, and momentum (or state) of particles. A KEM field is used in Tetryonics to model the kinetic energy and momentum of a particle. Because quanta of energy have a literal geometry with real dimensions, and each particle is situated within its KEM field, the KEM field acts like a “buffer”. That is the best way that I understand and can explain it. The interaction of a particle's KEM field with the particle's environment is, most of the time, more significant than the interaction of the electrostatic fields created by the particle's charged fascia. As a result, depending on what portion and how much of a particle's KEM field you are able to interact with and measure at any given time, you will measure uncertainty in the state of that particle. The state of the particle will be normally distributed through space, because Tetryonic geometry is normally distributed. This is how the physical geometry of energy creates what is measured as a probability distribution over the different aspects of the state of a particle. It is now pertinent to try to answer a logical question.

 

Why does Tetryonic geometry only approximate what has come to be accepted as the standard normal distribution?

 

There are two possible explanations. The first is that what is measured as the standard normal distribution in experiments has simply been interpolated incorrectly. Though it may be impossible in practice, if you were to sample the distribution of a particle with far greater spatial accuracy, you might in fact measure a more triangular distribution. The second is that what we measure may appear to be curved simply because of noise. A KEM field grows and increases in energy through the addition of ever larger odd numbers of quanta, meaning that the interference of KEM fields at high energy levels should produce distributions with sharp curvature, because the net change in energy between high energy levels is large; the net change in energy between low energy levels is small, producing distributions with lower curvature. These two scenarios correspond to the characteristic central tip and the edges of the bell curve. Basically, we may never in fact be measuring the energy or momentum of just one particle, but rather the superposition of multiple KEM fields of varying energies.

 

The implications of the idea that Tetryonic geometry is normally distributed are very important. It shows how we can reformulate the laws of physics to be “continuous across all length scales”. Things no longer suddenly become “strange” when we enter the quantum realm; everything is still determinate. The fact that energy can superimpose on itself, making it impossible for us to observe a quantum system without altering it, together with the geometry that energy itself possesses, simply makes working on such length scales with large degrees of accuracy difficult.

 

That concludes step 3.), what I have so far is definitely not perfect, but I think all of the pieces are there. Now for step 4.). I will start step 4.) by first providing an explanation and mechanism as to why and how we are unable to actually measure the true 3d geometry of energy and this will lead into a (partially complete) mathematical description as to how we can model 3d Matter using surfaces created out of 2d energy.

 

The basics of the explanation are very simple: any “glowing” object that is sufficiently small will appear to be a point source, regardless of its actual geometry. A helpful analogy is this: imagine we are trying to create an image of a sports car by throwing rubber balls at it and making notes about the balls' trajectories as they bounce off. Keeping with the rules of Quantum Mechanics, large balls (particles of long wavelength and low kinetic energy) travel very slowly, and because of their large wavelength or size they cannot bounce off every crevice and curve of the car; they end up giving us a “lumpy” image. In fact, if the balls are large enough, the car might even look spherical. Smaller balls (particles of shorter wavelength and higher energy) are able to bounce off smaller features of the car's surface and give us a better image. However, as we decrease the wavelength or size of our balls, and subsequently are required to increase their energy, we eventually run the risk of smashing the car to pieces, and we will be unable to image the car at all. Through this analogy we can see that, because of the nature of trying to work with the smallest bits of energy and Matter, our “resolution of measurement” is fundamentally limited: sufficiently small objects will always appear spherical, or, upon impact with wavelengths small enough to hit their tiny faces, will be obliterated. A more mathematical description of this phenomenon, one that can be related to 2d geometries, involves something called the Gauss-Bonnet Theorem and a related mathematical construct known as the Gauss Map. I will start by explaining the Gauss-Bonnet Theorem (GBT). Simply put, the GBT says that the total curvature of any closed orientable surface without any holes in it is 4π. The actual mathematical formulation is:

 

∬_S K dA + ∮_∂S k_g ds = 2π χ(S)

 

For anyone who does not know what that equation means, it is simply this: the total curvature over a surface, plus the total line curvature around any excluded regions on the surface (imagine a sphere with a hole cut out of one side), is equal to 2π multiplied by a constant, characteristic of the type of surface, called the surface's “Euler number” χ. For closed orientable surfaces without any holes in them (excluding anything like a “donut”, or torus, and any incomplete surface), such as spheres or, in our specific case, the Platonic solids, χ = 2 and the total is therefore 4π.
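For polyhedra there is a handy discrete version of this fact, Descartes' angle-defect theorem: all of a polyhedron's curvature is concentrated at its vertices, and the angle defects there sum to exactly 4π. A small sketch checking this for a few Platonic solids (the representation of each solid as a list of vertex descriptions is my own, for illustration):

```python
import math

def total_angle_defect(vertices):
    """Sum of vertex angle defects: each vertex contributes 2*pi minus
    the angles of the face corners meeting there (Descartes' theorem)."""
    return sum(2 * math.pi - n * corner_angle for n, corner_angle in vertices)

# Each solid: one (faces_meeting, corner_angle) pair per vertex
tetrahedron = [(3, math.pi / 3)] * 4    # 4 vertices, 3 equilateral-triangle corners each
cube        = [(3, math.pi / 2)] * 8    # 8 vertices, 3 square corners each
icosahedron = [(5, math.pi / 3)] * 12   # 12 vertices, 5 triangle corners each

for solid in (tetrahedron, cube, icosahedron):
    assert math.isclose(total_angle_defect(solid), 4 * math.pi)
```

Every convex polyhedron (χ = 2) passes this check, which is the discrete counterpart of the 4π total curvature claimed above.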

 

The value of the total curvature is not important; the fact that the total curvature is conserved between different surfaces of the same Euler number is, as this guarantees that we can define (at least one) isomorphism between them, or more specifically between points with a particular value of curvature on each surface. The isomorphism that is guaranteed to exist is called the Gauss Map.

 

The simplest way to visualize how this works is to imagine a Tetrahedron, and a normal vector placed upon it of infinite length. Now let this normal vector “wander continuously” across every point on the Tetrahedron. As you do this the tip of the normal vector at infinity will trace out a sphere of infinite radius. The trick is to imagine the normal vector bending continuously over every edge and point.

 

The flow of ideas behind this derivation becomes a bit rocky at this point, but as with step 3.) I think I have all the necessary pieces, just not a good order to present them in. I will now describe a few things from the field of Differential Geometry (without rigorous definition), as these are important to how the Gauss Map actually functions and how we can use it. Anyone who has gone through an undergraduate science or engineering program will be familiar with how a surface can be defined as having a range in 3d space and a domain in the 2d plane. In Differential Geometry, the domain in the 2d plane is more rigorously defined as a collection of open sets, each associated with a map onto part of the surface. These maps and their associated open sets are called “coordinate patches”, and the (minimal) set of coordinate patches that it takes to cover a surface is called the surface's “atlas”.

 

The number of possible coordinate patches you could define that would completely cover any given surface is essentially infinite, and so you may also define something called a “transition map”, an isomorphism between different coordinate patches. This allows you to reparametrize a surface in terms of a different 2d domain while still producing the exact same surface. An example of this for parametric spheres is to let θ and ϑ range between 6π and 8π instead of between 0 and 2π; this works, obviously, because trig functions are periodic. Specifically, in the case of the Gauss Map, we can reparametrize one surface into a completely different surface by defining isomorphisms between the 2d coordinate patches that define each separate surface. By composing these isomorphisms together you can construct the Gauss Map. Attached to this post is a crude picture of what the Gauss Map looks like.
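The periodicity argument behind that transition map is easy to check numerically. This sketch shifts θ's domain by 6π and confirms that the standard spherical parametrization traces out the same points:

```python
import math

def sphere_point(rho, theta, phi):
    """Spherical parametrization (rho sin(phi) cos(theta), rho sin(phi) sin(theta), rho cos(phi))."""
    return (rho * math.sin(phi) * math.cos(theta),
            rho * math.sin(phi) * math.sin(theta),
            rho * math.cos(phi))

# Transition map: shift theta by 6*pi -- a different 2d domain, the same surface
theta, phi = 1.234, 0.5
p = sphere_point(1.0, theta, phi)
q = sphere_point(1.0, theta + 6 * math.pi, phi)
assert all(math.isclose(a, b, abs_tol=1e-9) for a, b in zip(p, q))
```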

 

[Attached image: a crude sketch of the Gauss Map]

 

So how is this relevant to using 2d geometry to model Matter? I am eventually going to get there, but by a very odd, roundabout route. We first need to do something that will at first seem very strange: we are going to transform a sinusoidal wave into a triangular one. Doing this is very simple if all we want to do is conserve the total energy in the wave; coincidentally, the way that we equate the total area under the curve over two cycles of the wave will conserve curvature for us (and construct the Gauss Map). To conserve the total energy of a sinusoidal wave of amplitude A while transforming it into a triangular wave of amplitude B, we integrate the root mean square of the sinusoidal and triangular waves simultaneously over two cycles (4π) and equate the results. Imagine that these waves' energy were quantized; what this allows us to do is define a linear transformation, or simple conversion factor, between our triangular and sinusoidal energy quanta, or more specifically their amplitudes. We could do this for any aspect of their geometry: simply change the unknown in the integral from the amplitude of the wave to whatever you wish. For now let's just assume that Matter is in fact a standing wave defining a surface, or a wave restricted to propagate only across a surface. Then our model of 3d Matter would almost be complete; all that we would need to do is define how the 2d geometry becomes a surface. Unfortunately the derivation stumbles here, as I do not know how this actually physically happens, but it is very simple to envision the mathematical construction of such a thing for a sphere. For a sphere we simply use its polar parametrization, (ρ sin(ϑ)cos(θ), ρ sin(ϑ)sin(θ), ρ cos(ϑ)), and by letting θ and ϑ range from zero to 2π we are able to “wrap” a sinusoidal wave around the sphere (with some overlap). By associating the “information” or energy (momentum, mass, etc.; you could conserve whatever you want) of the wave with curvature, or in this case conveniently directly with the domain of our 2d coordinate patches, or the cycles of the wave, we can define the Gauss Map between spheres and Tetrahedrons in such a way that not only is the total curvature of each surface conserved, but also the total energy (or information: mass, momentum, etc., whatever you want really). In order to completely define the Gauss Map, however, we would also need the transformation that takes 2d triangular waves and wraps them around a Tetrahedron. This is once again a point where the derivation stumbles; I do not actually have that right now, though it is obviously not an impossible task. You could do it with 10 piecewise functions, six for the edges and four for the faces; not impossible, simply irritating, and so I have not yet bothered.
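As a sketch of the conversion factor described above: over whole cycles the RMS of a sinusoid of amplitude A is A/√2, and the RMS of a triangle wave of amplitude B is B/√3, so equating them gives B = A·√(3/2) ≈ 1.225·A. The midpoint-rule integration below is my own crude stand-in for the integral, purely to illustrate the idea:

```python
import math

def rms(f, period, n=20000):
    """Root mean square of f over one period, via the midpoint rule."""
    total = sum(f(period * (k + 0.5) / n) ** 2 for k in range(n))
    return math.sqrt(total / n)

def sine(t, A=1.0):
    return A * math.sin(t)

def triangle(t, B=1.0):
    """Triangle wave of amplitude B and period 2*pi."""
    u = (t / (2 * math.pi)) % 1.0
    return B * (4 * abs(u - 0.5) - 1)

rms_sine = rms(sine, 2 * math.pi)      # -> A / sqrt(2)
rms_tri = rms(triangle, 2 * math.pi)   # -> B / sqrt(3)

# Triangle amplitude whose RMS matches a sine of amplitude A = 1:
B = rms_sine * math.sqrt(3)            # sqrt(3/2), about 1.2247
```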

 

Completely defining the Gauss Map completely defines the math for our structural model of 3d Matter using 2d waves, and for how any sufficiently small Platonic solid can be mistaken for a sphere (more about this in a second). There are some conceptual stumbling blocks, though, that unfortunately have no foundation in the standard model and can only be explained using Tetryonics.

 

Waves are technically neutral, how does this create particles with net charge? Even if a standing wave can have a net charge, why and how can we measure it around a particle?

 

The only way to demonstrate how this is possible is by using Tetryonics, the very thing we are trying to derive, so this is again a huge problem with this part of the derivation. Basically, it comes back to the way in which Tetryonics defines energy quanta: they are double-sided triangular energy “coins”. The orientation of each coin comprising a face of the Tetrahedron determines the net charge of the entire Matter quantum. There are two different net-neutral configurations of Tetryons, one positive and one negative; you'll have to go back and look at Kelvin's materials to get a good view of how this works. This creates the structure, but does not necessarily explain how we can measure the charge. I am a little bit unsure of this one myself; currently I have no explanation other than that is just how it works, and for now we simply define it that way. An important thing to keep in mind, however, is that in Tetryonics electric fields do not propagate throughout all of space. Energy, with its real geometry and finite dimensions, also comprises electrostatic fields, meaning that electric fields have finite size; for now we simply say that you can sense the charge of a wave if you are REALLY close to it.

 

In the standard model, technically different aspects of an EM wave don’t exist at the same time or place, they induce each other, or in mathematical terms are conjugates reciprocating each other through an action (in this case propagation).

 

Once again there is unfortunately no explanation for this in the standard model; Tetryonics simply shows that this idea is wrong. Or at least it shows that it is possible to construct an alternate model that creates the same results, both theoretically and experimentally, through an entirely different mechanism, and since we can't ever get an accurate peek at what is occurring at the quantum level, it is probably impossible to disprove experimentally.

 

 

As shown in the picture above, once again, energy is a triangular coin with a literal geometry and real dimensions. These triangular coins, when comprising the diamond geometry of a photon, radiate outward, increasing in size and in wavelength (coincidentally producing an alternate mechanism for redshift). When you look at one half of the diamond geometry, you can see that the individual quanta within it create a triangular wave of alternating electric and magnetic potential. As this wave expands and propagates outward, without the quanta actually changing or oscillating, we, measuring at a fixed point, observe an oscillation as the wave passes us. The oscillation is not measured as being triangular for the same reason the bell curve is not actually curved: the curve arises out of noise at the quantum level.

 

How does energy physically fold into becoming 3d Matter?

 

I have no idea. Matter quanta are held together by the interactions of the magnetic dipoles of each quantum, and larger particles (like protons) are held together by the combined interaction of these Matter quanta's magnetic dipoles as well as the interactions of the electric charges on each of the faces. For very large 3d geometries, and things like atoms, it is actually not just a combination of attractive forces but repulsive ones as well that gives these larger geometries their structure. Matter is created out of the tessellation of Tetrahedrons covered in differing poles of electric and magnetic energy, just as macro-scale objects are created out of the tessellation of atoms. While this provides an explanation as to how Matter stays together, it does not explain why it forms in the first place. As I understand things, parts of the theory of Tetryonics are still works in progress. Kelvin may have an answer, but I have not yet seen it.

 

As a bit of an interesting aside, and to summarize and further explain the idea of Matter being created by a standing wave across a surface, I am now going to delve back into the formulation of the transformation between a triangular and a sinusoidal wave. As I explained earlier, to define this all we do is equate the integral of the root mean square of each wave to create a conversion factor between some aspect of each wave's geometry. I would like to demonstrate what happens to the units of the electric and magnetic fields under a few reparametrizations: one that Kelvin discovered, and a few that might be well known to anyone familiar with the theories of special and general relativity.

 

By integrating the magnetic field over an area we get magnetic flux, since right now we are only interested in what happens to the units we will just assume that when we integrate something we are talking about integrating the root mean square. Magnetic flux has the units of:

 

T·m² = V·s = ((kg·m²)/(A·s³))·(s) = (kg·m²)/(C·s)

 

This has units of planck quanta per coulomb. Now when we integrate the electric field over an area to get electric flux, we see that electric flux has units of:

 

(N·m²)/C = (((kg·m)/s²)·m²)/C = (kg·m³)/(C·s²) = ((kg·m²)/(C·s))·(m/s)
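This unit bookkeeping can be verified mechanically. The sketch below represents each unit as a dict of exponents over the base set (kg, m, s, C), using coulombs rather than amperes as the charge base; the names and representation are my own, purely for illustration:

```python
def mul(a, b):
    """Multiply two units given as {base_symbol: exponent} dicts."""
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v != 0}

# Derived units as exponent dicts over (kg, m, s, C)
m2      = {'m': 2}
volt    = {'kg': 1, 'm': 2, 's': -2, 'C': -1}   # V = kg*m^2/(A*s^3), with A = C/s
tesla   = {'kg': 1, 's': -1, 'C': -1}           # T = V*s/m^2
newton  = {'kg': 1, 'm': 1, 's': -2}
c_units = {'m': 1, 's': -1}                     # units of the speed of light

magnetic_flux = mul(tesla, m2)                   # T*m^2 = kg*m^2/(C*s)
electric_flux = mul(mul(newton, m2), {'C': -1})  # N*m^2/C = kg*m^3/(C*s^2)

# Electric flux = magnetic flux * (m/s), the factor replaced by c in the text
assert electric_flux == mul(magnetic_flux, c_units)
```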

 

At the last step, what I did was factor out a meters-per-second, and since we are talking about energy, which only moves at one speed, I replaced that meters-per-second with the speed of light. So this becomes planck quanta per coulomb, multiplied by the speed of light. Now, in our transition map, since we want to conserve all of the information of the wave, we are going to add the results of these two integrals together and conserve their sum. Doing so, we get what I like to call the fundamental theorem for the reparametrization of energy; this is what we use to change the shape of energy (hopefully) without breaking physics.

 

E + B = ((kg·m²)/(C·s))_E·c + ((kg·m²)/(C·s))_B = Conserved

 

We now take the inverse seconds to signify frequency.

 

((f·kg·m²)/C)_E·c + ((f·kg·m²)/C)_B

 

Now obviously since Einstein’s time everybody knows that this is true.

 

E=kg*c^2

 

Energy has the units of mass times the speed of light squared, but now we are going to do something very interesting that Kelvin came up with. Those of you familiar with general relativity may have worked in a relativistically normalized coordinate system before: basically, you take the unit of distance to be the distance light travels in one second, and the unit of time to be seconds. Interestingly enough, you can, if you wish, take c² to mean two oppositely traveling beams of light emanating from a central point, defining the diameter of a circle, or radial area per second. So now we reparametrize mass as a measure of energy per area, or energy per radial second.

 

E/c^2 =kg

 

And our sum now becomes this.

 

((f·(E/m²)·m²)/C)_E·c + ((f·(E/m²)·m²)/C)_B

 

This equation is full of subtlety. We are not going to cancel the area terms even though their values are equal. Why? Because they mean different things, and cancelling them obscures what is happening here. The inverse area under the energy is the measure of the amount of 2d area we have that is going to be wrapped around our surface; it is the area of each energy quantum in the 2d wave. The area that is not in the brackets underneath the energy is the measure of the surface's surface area. If both of these areas, and subsequently the frequency, are changing, we have an expanding EM wave; if they are constant, we have Matter. There is a bit of a stumbling block here too, not so much in how this works but simply in understanding it. A quantum of Matter is a Tetrahedron, comprised of two diamond-shaped energy geometries that are restrained to exist as a standing wave on a surface. However, an expanding photon in Tetryonics is not an expanding Tetrahedron; it is two perpendicular expanding diamonds. If you were to project the quanta in these diamonds onto a surface you would get an octahedron. The confusion arises from the Gauss Map itself, since technically it creates isomorphisms between all closed orientable surfaces without holes in them. But since, as part of our derivation, we have shown that only triangles are a viable 2d geometry for energy, we are then restricted to using only the Platonic solids, because these are the only closed orientable surfaces with regular faces, and they coincidentally coincide with each of the possible ways of tiling a sphere with equilateral triangles.

 

I should make a quick remark about the speed-of-light term. This actually does have an explanation in terms of both Tetryonics and the standard model; in each case it is the root property of energy that creates the Lorentz transform. For those of you who do not know what that is: basically, in the standard model it is thought that the magnetic field is actually a Doppler-shifted electric field. A spherical charge travelling very fast will emanate an electric field; though this charge is very small, it still has finite dimensions, and the speed of light, though large, is also finite, meaning that you would detect the electric field from the front of the charge slightly before the electric field emanated from the back. I don't understand the rest of the specifics, but it is this Doppler-shifted electric field that actually is the magnetic field. In Tetryonics, when you watch or read through the part about kinetic energy, you will see how momentum is associated with the electric field only; the electric field propagates and essentially drags the magnetic field behind it. And so, in the case of our equation, we can see why we have a speed of light associated with the root mean square of the electric field integrated over an area. There is a more mathematical description as to why that c term is there too: we are working in a relativistically normalized coordinate system, and by doing that and saying that our energy geometries travel at the speed of light, we are also saying they travel along unit-speed curves, a requirement of the machinery of the Gauss-Bonnet Theorem.

 

It is also important to clarify what seems like a fudging of the meaning of electric flux here. We seem to be integrating over a 2d area parallel to the propagation of EM energy and defining that as flux; that's not the case at all. The diamond geometries <> expand outward laterally, toward the edges of the page < >, but you also have to imagine them travelling toward your face, OUT of the page. A true photon is actually three perpendicular diamond-shaped geometries, each expanding in two directions, < left, right >, and propagating perpendicular to the diamond. So this concludes step 4.); it is a bit of a jumbled mess, but as I have already written, I think all of the necessary pieces of a complete proof are here.

 

So far we have a pretty good idea of how to do steps 3.) and 4.). As I said before, I have no idea how to do step 5.); for step 1.) I think I have a good strategy but not enough knowledge to find what I think I should be looking for; and step 2.) is probably closely related to step 1.).

 

So now for my strategy for completing step 1.). The people best suited to the task are those familiar with the field of Group Theory, but more specifically Lie Algebras and the study of Invariants. Some people's intuition may already be telling them what mine is telling me. Noether's theorem provides a way to test theories in physics by checking whether they obey laws of conservation, and to derive models of physical systems based upon the quantities that they conserve. It also provides a path for the mathematical derivation of the existence of conservation laws. What I am interested in is whether it can be used to somehow associate a conservation of energy with a conservation of geometry. I am currently in the process of learning Group Theory and eventually Lie Algebras, but it is slow going; it could take me years to reach the level of knowledge I would probably need, and that is why I am here asking other people for help, or advice.

 

The general justification for why this approach might work is quite simple. If you have a linear system of momentum and you rotate it, it behaves the same way; it is invariant under rotation, and this (somehow) implies that a conservation of radial momentum exists. Similar arguments exist for other spatially dependent conservation laws. What I mean by that, and am assuming, is that the degree of freedom that you have in your system with respect to invariance can basically be tacked on to the units of the system itself to provide the quantity that is conserved. In this case, rotation can be redefined in terms of radians, which are a measure of rotation but have units of length per length, hence the invariance of the system under rotation. But if you tack this redundant length onto the units of mass times velocity, kilogram meters per second, you get mass times area per second, or radial momentum, our system's conserved property. So imagine a light source. You can translate it in any direction, giving a redundant unit of length; you can rotate it in any direction, giving another redundant unit of length; together both of these redundancies can be taken to mean area. And no matter how you combine these spatial operations, the light source is always going to emit light in exactly the same way. So what do you think? Is there anyone out there who can help me?

 
