
Posted (edited)

I read a report regarding this a few years ago. I don't remember many of the details (and it was confidential anyway). But I think they reckoned that the limit for silicon transistors would be reached in 10 years or less. We are already seeing various scaling problems. For example, power density is the reason that clock speeds of PCs stopped increasing a decade or so ago. There are problems with leakage current, supply voltages approaching threshold voltages, switching speeds, etc.
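A back-of-the-envelope way to see the power-density wall (the textbook CMOS dynamic-power relation, not anything from that report):

P_{dyn} \approx \alpha \, C \, V_{dd}^{2} \, f

where \alpha is the activity factor, C the switched capacitance, V_{dd} the supply voltage and f the clock frequency. Under classic Dennard scaling, C and V_{dd} shrank along with the transistors, so f could rise while power per unit area stayed roughly constant. Once V_{dd} stopped scaling (it is now close to the threshold voltage), any further increase in f raises the power density directly - hence the clock-speed plateau of the mid-2000s.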

 

But the thing about Moore's Law (apart from the fact that it is just an historical observation) is that it is about the number of transistors on a device. This has been achieved in two ways: smaller transistors, but also larger dies. In the future, designs will move to 3D (stacked) technology to keep increasing the number of devices. This also has the benefit of allowing much greater bandwidth to memory (look up the Hybrid Memory Cube, for example).

 

At the same time, people are looking at alternative materials to replace silicon.

Edited by Strange
Posted

Moore's law has been dead for two decades. It stated "double the number of transistors and the clock speed every year". Intel has rewritten it as "double the number of transistors every 18 months" to claim that it still holds.

 

I was in the business in the 4 µm era, during the upper paleomonolithic (when VLSI still meant dense - we have had ULSI for a long time since, and seemingly that naming series was abandoned for lack of hype). Already then, and ever since, many people got famous for a quarter of an hour by predicting the end of progress, including at 1 µm "because of fundamental physical limits" - limits that have all been smashed.

 

A few hard nuts at present:

- Oxide thickness can't really go down any further - and didn't I read that it has in fact stopped shrinking recently?

- Lithography can't stay at hard UV indefinitely. Last time I checked, they used a 193 nm (ArF) vacuum wavelength, with high-index immersion, short-focal-length optics and diffraction-correcting masks, to reach line widths around 34 nm. At some point that game will be over and a different source will be needed - but if needed, electron beams have been available for decades. (See the rough Rayleigh estimate after this list.)

- The supply voltage can't go down much further. It's already below one bandgap, which means the MOSFETs are never properly on nor properly off.
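To put rough numbers on the lithography point (the standard Rayleigh estimate; the k_1 and NA values below are illustrative choices, not quoted from anywhere):

CD \approx k_1 \, \frac{\lambda}{NA}

With \lambda = 193 nm (ArF), water immersion pushing the numerical aperture to about NA \approx 1.35, and an aggressive k_1 \approx 0.27, this gives CD \approx 0.27 \times 193 / 1.35 \approx 39 nm half-pitch - roughly the practical limit of single-exposure 193 nm immersion. Anything finer needs multiple patterning or a shorter-wavelength source (EUV, or indeed electron beams).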

 

So clock speeds don't improve, power consumption improves too slowly, and CPUs gain speed only through more parallelism - and that's the real problem, because this kind of parallelism brings nothing to existing software.
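That limitation can be quantified with Amdahl's law (a textbook formula, not tied to any particular CPU): if a fraction p of a program's run time can be spread over N cores, the overall speedup is

S(N) = \frac{1}{(1 - p) + p/N}

So a binary whose hot path is, say, half serial can never run more than twice as fast, however many cores the chip offers - which is exactly why extra cores bring so little to existing sequential software.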

 

At present the Core line is an absolute dead end. For my applications, it has made zero point nothing progress since the Core 2.

 

The ways out don't need better semiconductor processes:

- Develop processors that exploit the present vector or parallel hardware when running existing sequential binaries. By far the best solution.

- Develop some magic software that rewrites existing sequential binaries (ignoring the source) to exploit the present parallel hardware.

- Less good: have compilers that parallelize the binary even when the source isn't written for it. At least new software would run faster. Most programmers have done strictly nothing about this, despite SSE dating back to the Pentium III and multicore to the Pentium 4 - see the sketch after this list.

- More probably, smartphones or supercomputers will promote processors that are incompatible with the 8086 lineage and run more efficiently. The OS and applications written for them will be new and, hopefully, designed for parallelism.
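To illustrate the SSE point above, here is a minimal sketch in C (the function names are mine, purely for illustration): the first loop is what naive source code gives you, the second processes four floats per instruction using the SSE intrinsics that have been available since the Pentium III.

#include <xmmintrin.h>  /* SSE intrinsics, around since the Pentium III */

/* Plain scalar sum: one addition per iteration. */
float sum_scalar(const float *a, int n)
{
    float s = 0.0f;
    for (int i = 0; i < n; ++i)
        s += a[i];
    return s;
}

/* SSE sum: four additions per iteration. Assumes n is a multiple of 4
   and a is 16-byte aligned, just to keep the sketch short. */
float sum_sse(const float *a, int n)
{
    __m128 acc = _mm_setzero_ps();
    for (int i = 0; i < n; i += 4)
        acc = _mm_add_ps(acc, _mm_load_ps(a + i));

    float tmp[4];
    _mm_storeu_ps(tmp, acc);
    return tmp[0] + tmp[1] + tmp[2] + tmp[3];
}

A good autovectorising compiler can produce the second version from the first, but only if the programmer lets it (right flags, no aliasing problems) - which is precisely what most existing binaries never benefited from.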

 

As for semiconductor processes, the way out of the limits may be 3D - but what manufacturers currently call 3D is only stacking. Unless, of course, something new works better than ever-smaller silicon, be it graphene or something else.

Posted

Moore's law has been dead for two decades. It stated "double the number of transistors and the clock speed every year".

 

Citation needed.

 

From Wikipedia:

Moore's "1965 paper described a doubling every year in the number of components per integrated circuit". No mention of clock speed.

https://en.wikipedia.org/wiki/Moore's_law

 

You may be thinking of Dennard Scaling.

https://en.wikipedia.org/wiki/Dennard_scaling

Posted

Look into a Silicon Valley start-up called Soft Machines, Enthalpy.

They have produced a test chip that seems to validate their claims, namely that it can divide a single thread into virtual threads running in parallel without any rewriting of the code.

Soft Machines claims their processor rapidly translates binary code into its own native VISC (variable instruction set computing) code.

It does this not just within one core, as superscalar processors have been doing since the 80s, but across multiple cores.

Whether it can really do this without rewriting single-threaded code remains to be seen, but it will be a big deal if it does.

Posted

[Answering my claim: Moore's law stated "double the number of transistors and the clock speed every year"]

 

Citation needed.

 

That doesn't need any citation, because I know it from memory with perfect accuracy, as does anyone from that time.

 

Fact is that Intel and the others have rewritten Moore's law afterwards. That they spread their lie over Wikipedia and everywhere doesn't make it true.

 

And you know what? I have no intention to justify myself for telling the truth. If you believe any lie just because it's repeated, that's your problem.

Look into a Silicon Valley start-up called Soft Machines, Enthalpy.

 

Thanks MigL! I'll definitely have a look.

Posted (edited)

 

That doesn't need any citation, because I know it from memory with perfect accuracy, as does anyone from that time.

 

Fact is that Intel and the others have rewritten Moore's law afterwards. That they spread their lie over Wikipedia and everywhere doesn't make it true.

 

And you know what? I have no intention to justify myself for telling the truth. If you believe any lie just because it's repeated, that's your problem.

 

Memory is notoriously unreliable (and I was only 8 when the paper was published in 1965*) so I thought I would look it up. The only reference I can find to clock speed in the original paper is the rather general:

"In fact, shrinking dimensions on an integrated structure makes it possible to operate the structure at higher speed for the same power per unit area"

http://www.cs.utexas.edu/~fussell/courses/cs352h/papers/moore.pdf

 

(But presumably, as part of your conspiracy theory, Intel have made the University of Texas put up a fake version.)

 

* I am very impressed that you have such a clear memory 50 years on.

Edited by Strange
