
Posted (edited)

What's the maximum possible processor speed? Surely there comes a point when nHz is indistinguishable from (n+1)Hz.

Edited by ydoaPs
Posted

What's the maximum possible processor speed? Surely there comes a point when nHz is indistinguishable from (n+1)Hz.

We reached that point many years ago.

 

The development, however, is not to go from n Hz to (n+1) Hz.

We go from n Hz to 2n Hz.

 

I'm not sure if there actually is a theoretical upper limit.

Posted

I think the theoretical limit is that one cycle can be no shorter than the time it takes a photon to travel between the two furthest points on the processor. The maximum frequency would be the inverse of this.

 

Although, due to the materials of construction and the need to communicate with other parts of the computer, it will never achieve this full potential.
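A quick back-of-envelope sketch of that limit. The die diagonal is an assumed example value, and the vacuum speed of light is used even though signals in silicon propagate more slowly, so this is an optimistic ceiling:

```python
# Upper bound on clock frequency if one cycle must allow a light-speed
# signal to cross the die. The ~2 cm die diagonal is an assumed example.
C = 299_792_458.0  # speed of light in vacuum, m/s

def max_clock_hz(die_diagonal_m: float) -> float:
    """Frequency whose period equals the light transit time across the die."""
    transit_time = die_diagonal_m / C
    return 1.0 / transit_time

print(f"{max_clock_hz(0.02) / 1e9:.1f} GHz")  # prints 15.0 GHz
```

Real chips fall well short of this, both because electrical signals travel at a fraction of c and because a useful operation takes far longer than one wire traversal.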

Posted

http://www.nature.com/nature/journal/v406/n6799/full/4061047a0.html

 

Enjoy.

 

The closing remark of the paper:

 

If, as seems highly unlikely, it is possible to extrapolate the exponential progress of Moore's law into the future, then it will take only 250 years to make up the 40 orders of magnitude in performance between current computers that perform 10^10 operations per second on 10^10 bits and our 1-kg ultimate laptop that performs 10^51 operations per second on 10^31 bits.
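The 10^51 figure can be reproduced from the Margolus-Levitin theorem, which caps a system of average energy E at 2E/(πħ) operations per second; taking the entire rest energy E = mc² of 1 kg (the paper's "ultimate laptop" assumption) gives about 5.4 × 10^50:

```python
# Margolus-Levitin bound: a system with average energy E performs at most
# 2E / (pi * hbar) elementary operations per second.
import math

HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s
C = 299_792_458.0         # speed of light, m/s

def max_ops_per_second(mass_kg: float) -> float:
    """Ultimate ops/s for a system whose entire rest energy E = m c^2 is used."""
    energy = mass_kg * C ** 2
    return 2.0 * energy / (math.pi * HBAR)

# For the 1-kg "ultimate laptop": about 5.4e50 ops/s, i.e. ~10^51.
print(f"{max_ops_per_second(1.0):.2e}")
```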
Posted

The maximum rate at which information can be processed is governed by the Bekenstein bound, which I believe was first investigated from a black hole perspective. It sets the limit based on a finite volume and finite energy physical system.

 

Now, the "engineering limit" must be set by the physical properties of the materials we use. So silicon will only get us so far. I don't know if any technologies based on carbon nanotubes and graphene could out preform silicon. Any thoughts anyone?

Posted

The maximum rate at which information can be processed is governed by the Bekenstein bound, which I believe was first investigated from a black hole perspective. It sets the limit based on a finite volume and finite energy physical system.

 

Now, the "engineering limit" must be set by the physical properties of the materials we use. So silicon will only get us so far. I don't know if any technologies based on carbon nanotubes and graphene could out preform silicon. Any thoughts anyone?

 

From sporadic conversations with my graphene colleagues, I'd have to say no time soon, and probably not... More likely are things like InSb, but it's rather expensive and there are not as many fabrication processes available... One of the big problems is that we lean so heavily towards Si-based systems that anything new would have to integrate with them.

Posted

http://www.nature.com/nature/journal/v406/n6799/full/4061047a0.html

 

Enjoy.

 

The closing remark of the paper:

If, as seems highly unlikely, it is possible to extrapolate the exponential progress of Moore's law into the future, then it will take only 250 years to make up the 40 orders of magnitude in performance between current computers that perform 10^10 operations per second on 10^10 bits and our 1-kg ultimate laptop that performs 10^51 operations per second on 10^31 bits.

Would using multiple cores increase the overall speed even if the individual processors max out?

Posted

Only if a program is designed to take advantage of them. Not all algorithms are suited to being split up into chunks and processed in parallel. There are newer programming languages and tools designed to make this easier, but they have a ways to go.
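This is usually quantified with Amdahl's law (not named in the thread, but it's the standard formula): if a fraction p of a program can run in parallel, n cores give at best a speedup of 1 / ((1 − p) + p/n), so the serial remainder caps the gain no matter how many cores you add:

```python
# Amdahl's law: overall speedup is limited by the serial fraction of a program.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even with 95% of the work parallelizable, returns diminish fast;
# the speedup can never exceed 1/0.05 = 20x.
for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 2))
```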

Posted

It allows more processes to be done simultaneously, giving greater throughput. The trick with this question is that complete operations can take multiple clock cycles, so unless the subcomponents of an operation were somehow distributed, the higher-frequency processor would finish first. No processors can distribute the subcomponents of an operation, as these are most often serially dependent. GPUs are a good example of all this: they are specifically designed to distribute a high number of the same operation across a high number of parallel cores at the same time!
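A toy illustration of the serial-dependency point (plain Python, hypothetical examples): an element-wise operation distributes trivially because each output depends only on its own input, while a straightforward running-total loop cannot be overlapped, since each iteration needs the previous one's result:

```python
# Data-parallel work: every output depends only on its own input, so the
# iterations could all run at once on parallel cores (GPU-style).
def elementwise_square(xs):
    return [x * x for x in xs]

# Serially dependent work: the straightforward loop carries a dependency
# from one iteration to the next, so extra cores cannot overlap the steps.
def running_sum(xs):
    out, acc = [], 0
    for x in xs:
        acc += x          # needs the previous iteration's acc
        out.append(acc)
    return out

print(elementwise_square([1, 2, 3, 4]))  # [1, 4, 9, 16]
print(running_sum([1, 2, 3, 4]))         # [1, 3, 6, 10]
```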

 

And what Cap' said :D

Posted

Would using multiple cores increase the overall speed even if the individual processors max out?

 

In the situation described in the paper I posted, I'd say no. Making the device larger would allow it to be faster. I believe it was a 1-liter volume device they were discussing.

  • 2 weeks later...
Posted (edited)

 

I'm not sure if there actually is a theoretical upper limit.

 

Actually, the limit is bounded by the speed of light. So if components connected to the processor are 30 cm away, the limit is 1 GHz. If the components are 10 cm away, the limit is 3 GHz.
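As a sanity check on those round numbers: light travels about 30 cm per nanosecond, so those distances correspond to roughly 1 GHz and 3 GHz if one clock period must cover the one-way trip (vacuum speed of light assumed):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def light_limit_ghz(distance_m: float) -> float:
    """Clock rate whose period equals the one-way light travel time."""
    return (C / distance_m) / 1e9

print(round(light_limit_ghz(0.30), 2))  # ~1 GHz for 30 cm
print(round(light_limit_ghz(0.10), 2))  # ~3 GHz for 10 cm
```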

 

It seems like there should be at least one more variable, such as the number of ports. Admittedly, this answer was posted on answers.com; I posted it mainly to raise the issue, because I knew there were limitations.

 

I've been checking up on it and reading reports of 5, 7, even 10 GHz, probably involving some rather odd motherboard configurations, though the sources were not all that trustworthy.

Edited by Realitycheck
Posted

Realitycheck, the motherboard typically runs at a very different frequency to the processor (typically down around 133 MHz, I believe); the processor will run faster than this by a set multiplier.
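The relationship is just core clock = bus frequency × multiplier. A sketch with assumed example figures (a 133 MHz front-side bus and a 20× multiplier, giving a 2.66 GHz core clock):

```python
# Core clock = front-side bus frequency * multiplier.
# Both figures below are assumed examples, not any specific chip's spec.
FSB_MHZ = 133.0
MULTIPLIER = 20

core_clock_ghz = FSB_MHZ * MULTIPLIER / 1000.0
print(f"{core_clock_ghz:.2f} GHz")  # 2.66 GHz
```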
