
Posted

There are some common ideas, like using a grid and rules. States are updated step by step.

But there's a difference: Edward Fredkin uses deterministic rules and nokton theory uses some kind of probabilities.

 

There have been a lot of attempts to develop theories like this (Stephen Wolfram is perhaps the most high profile). As far as I am aware none of them have achieved anything significant.

Posted

 

Can you explain more?

 

Unfortunately the builders pulled out my line along with the phone and internet a couple of days ago.

 

I have now got a temporary fix in place and I see that things have moved on a bit.

 

Yes but I need to know where to start.

In other words I don't know your level of mathematical knowledge.

 

Do you know the difference between a function and an operator, or that whilst the solution to a function equation is a value, the solution to an operator equation is a function?

 

If not I will start by explaining this.

Posted

Consider this function, which was first enunciated by Gauss well before any quantum or atomic theory.

 

[math]f(t) = \frac{1}{{\sqrt {2\pi } }}\frac{1}{\tau }{e^{\left( { - \frac{{{t^2}}}{{2{\tau ^2}}}} \right)}}[/math]

 

Now take the Fourier transform

 

[math]g(w) = \frac{1}{{2\pi \tau }}\int\limits_{ - \infty }^\infty {{e^{\left( { - \frac{{{t^2}}}{{2{\tau ^2}}}} \right)}}} {e^{ - iwt}}dt[/math]

Completing the square and executing some algebra leads to

[math] = \frac{{{e^{\left( { - \frac{{{\tau ^2}{w^2}}}{2}} \right)}}}}{{\sqrt {2\pi } }}\frac{1}{{\sqrt {2\pi } \tau }}\int\limits_{ - \infty }^\infty {{e^{\left( { - \frac{{{{\left( {t + i{\tau ^2}w} \right)}^2}}}{{2{\tau ^2}}}} \right)}}} dt[/math]

 

The integral on the right can be shown to equal one by complex integration so

 

[math]g(w) = \frac{1}{{\sqrt {2\pi } }}{e^{\left( { - \frac{{{\tau ^2}{w^2}}}{2}} \right)}}[/math]

 

Which is of the same form (in w) as the original function in t.
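This transform pair is easy to check numerically. Here is a minimal sketch; the value of τ and the integration grid are illustrative choices of mine, not from the derivation.

```python
import numpy as np

# Sketch: numerically verify that the Fourier transform of the Gaussian
# f(t) = (1/(sqrt(2*pi)*tau)) * exp(-t**2/(2*tau**2))
# matches the closed form g(w) = (1/sqrt(2*pi)) * exp(-tau**2*w**2/2).
# tau and the grid are illustrative, not from the post.
tau = 0.7
t = np.linspace(-40.0, 40.0, 100001)
dt = t[1] - t[0]
f = np.exp(-t**2 / (2 * tau**2)) / (np.sqrt(2 * np.pi) * tau)

def g_numeric(w):
    # g(w) = (1/sqrt(2*pi)) * integral of f(t)*exp(-i*w*t) dt, by Riemann sum
    return np.sum(f * np.exp(-1j * w * t)) * dt / np.sqrt(2 * np.pi)

def g_closed(w):
    # the closed form derived above
    return np.exp(-tau**2 * w**2 / 2) / np.sqrt(2 * np.pi)

print(abs(g_numeric(1.3) - g_closed(1.3)))  # tiny: numerical error only
```

The printed difference is limited only by the discretisation of the integral, so it is far below any physically meaningful scale.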

 

This is, of course, the normal or Gaussian distribution in statistics.

 

The spread or uncertainty for each is

 

[math]\Delta t = \tau [/math] and [math]\Delta w = 1/\tau [/math]

 

leading to

[math]\Delta w\Delta t = 1[/math]

 

Which is an uncertainty theorem.
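The product of the two spreads can also be checked numerically by computing the RMS widths of the two Gaussian profiles. A sketch (again, τ and the grids are my illustrative choices):

```python
import numpy as np

# Sketch: check delta_t * delta_w = 1 by computing RMS widths of the
# Gaussian profiles f(t) and g(w) from the derivation above.
# tau and the grids are illustrative values, not from the post.

def rms_width(x, profile):
    dx = x[1] - x[0]
    p = profile / (profile.sum() * dx)        # normalise to a unit-area density
    mean = (x * p).sum() * dx
    return np.sqrt((((x - mean) ** 2) * p).sum() * dx)

tau = 1.7
t = np.linspace(-60.0, 60.0, 200001)
w = np.linspace(-60.0, 60.0, 200001)
f_profile = np.exp(-t**2 / (2 * tau**2))      # shape of f(t); prefactors cancel
g_profile = np.exp(-tau**2 * w**2 / 2)        # shape of g(w)

delta_t = rms_width(t, f_profile)             # should come out as tau
delta_w = rms_width(w, g_profile)             # should come out as 1/tau
print(delta_t * delta_w)                      # approximately 1
```

Changing τ reshuffles the two widths but leaves their product fixed at 1, which is exactly the trade-off the theorem expresses.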

 

A physicist will tell you that if t is time and w is the frequency of an electrical impulse, then the above pair tells us that the narrower an electrical impulse, the greater the spread of its frequency components.

 

She might also say that in classical wave theory wave number k and position x are similarly related so

 

[math]\Delta k\Delta x = 1[/math]


 

 

Posted

Thank you, now I understand the origin of the Heisenberg inequality.

I see also that the Heisenberg inequality is: [attached image]

My question is: does the same kind of inequality exist for this "new nokton theory"?

 

Posted (edited)

Doesn't the very nature of these 'noktons' contradict the uncertainty principle?

 

In the uncertainty principle the values taken by the two operators are allowed to vary continuously, but the value taken (or assumed) by one affects the value allowable for the other. Since their deltas have an inverse relationship, we can say that the larger one delta is, the smaller the other, but there is (in mathematical theory) no upper or lower limit to either.

 

Discretisation of the values (quantisation in physics) introduces lower (and by implication upper) limits; whether this changes things is currently the subject of some debate.

 

Is reality discrete or continuous?

 

Professor Shahn Majid of London University has published an interesting book, collecting thoughts from many famous scientists and mathematicians, on this matter.

 

On Space and Time

 

Shahn Majid

 

Cambridge University Press

 

It is interesting reading.

 

Edit: a couple of interesting points about the version of the uncertainty theorem in my last post.

 

 

The theorem above is purely numerical and has no units, whereas the Heisenberg theorem has units.

 

My presentation was unusual and constructed to avoid quantum theory for demonstration purposes.

An interpretation of the theorem that is often given is that it applies to processes that involve the composition/convolution of two operators, say AB.

If the order is important, that is if AB is not equal to BA, then (AB - BA) is not zero and the relation can be derived from this.
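A minimal illustration of a nonzero commutator, using two small matrices as stand-ins for non-commuting operators (these particular matrices are illustrative choices, not the position and momentum operators themselves):

```python
import numpy as np

# Sketch: for these two matrices, AB != BA, so the commutator AB - BA
# is nonzero. The matrices are illustrative stand-ins for a pair of
# non-commuting operators, not operators taken from the thread.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
B = np.array([[1.0, 0.0],
              [0.0, -1.0]])

commutator = A @ B - B @ A
print(commutator)  # nonzero entries: the order of application matters
```

Applying A then B gives a different result from applying B then A, which is the algebraic fact the uncertainty relation is built on.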

Physically this means that if you first fix the momentum and then measure the position where this occurs, you will obtain a different result from the one you get if you first fix the position and measure the momentum at that position.

This is where the confusion arises leading to the misunderstanding that it is only a measurement issue and not inherent in the theory.

Edited by studiot
