Enthalpy Posted November 2, 2015

On this diagram of a Skylake die, I show what is useful to me. Intel won't be delighted with it, and neither will AMD, but this is what I really need. Maybe a third player gets inspiration from it?

I don't need the graphics processor at all, nor the display controller. I don't need three of the four cores. I don't need three-quarters of the 256-bit Avx. I don't need hyperthreading nor the Sse and Avx instruction set extensions, so most of the sequencer can drop out; that part isn't marked on the diagram, but it costs much area, power, and cycle time. How much L1 and L2 do I need? Do I need an L3? Unclear to me. The Usb links do not need to be on the Cpu. But I do need a good Dram and Pci-E interface.

So of the execution area and power, I use 1/10th to 1/30th. The rest of the die consumes far less and can stay as is.

The Core 2 already added much hardware that is useless to me, but it did accelerate my applications a lot. Since then, progress has been minimal, maybe 25%. Could that possibly be a reason why customers don't buy new computers any more?

For new applications, a few software houses and programmers sometimes make software that uses several cores and the vector instructions. Better: compilers are beginning to automatically vectorize source code that is vectorial in nature but written sequentially - excellent, necessary, and it is starting to work. Fine. But I want to accelerate my old applications. I don't have the source, and the authors have moved on to other activities - there is no way these programs get rewritten or even recompiled. Sorry for that.

That's why I suggested that some magic software (a difficult task!) should take existing binaries and vectorize them. It's not always possible, it's a hard nut (artificial intelligence, maybe), it may sometimes produce wrong binaries, but it would take advantage of the available Sse and Avx.

The much better way would be for the processor manufacturer to improve the sequencer to run several loop passes in parallel. This too is difficult, for sure, since the sequencer doesn't have as much time as an optimizer to make smart choices, but it would apply to binaries that can't be modified. The Skylake has hardware for four 64b mul-acc in just one core, and a better sequencer could take advantage of it. Even better, this would work on an OS that isn't aware of the Avx registers.

Hey, is anybody there?
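For what it's worth, the kind of loop in question looks like this - a minimal sketch (function name and data invented for illustration), written sequentially but with independent passes, which is exactly the shape that an auto-vectorizing compiler, or the smarter sequencer suggested above, could spread over the Sse/Avx lanes:

```cpp
// Minimal sketch: a sequentially written loop with independent iterations.
// A compiler at -O3 (gcc or clang with -mavx2, for instance) can map it to
// SSE/AVX instructions; a smarter sequencer could likewise run several
// passes of the loop in parallel on an existing, unmodified binary.
#include <cstddef>

void saxpy(float a, const float* x, float* y, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];   // each pass is independent: a SIMD candidate
}
```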
Ophiolite Posted November 4, 2015

None, as long as the computer is up to the task it's meant for. Why would you get a Ferrari to go grocery shopping? To impress the neighbours.
Enthalpy Posted November 23, 2015

Intel's Knights Landing is the new thing for supercomputing. One socket carries 1152 multiplier-accumulators for 64b floats running at 1.3GHz, to crunch 3TFlops on 64b. That's 3 times its predecessor and much more than a usual Core Cpu. Better: the new toy accesses the whole Dram directly, not through a Core Cpu, and Intel stresses its ability to run an OS directly.
https://software.intel.com/en-us/articles/what-disclosures-has-intel-made-about-knights-landing

In other words, one can make a Pc of it, far better than with its predecessor, and software would use the new component a bit more easily than the predecessor. This shines new light on the question "Why do people need faster computers?" - which, for the new component, becomes: "How many people would buy a Pc built around the Knights Landing?"

----------

My first answer: I still want the many single-task operations of a Pc to run quickly on the machine, but I don't want two separate main Dram in the machine, so a separate Core Cpu is excluded; hence please put within the Knights Landing some good single-task ability. Maybe one tile that accelerates to 3GHz when the others do nothing. Or one Core on the chip that accesses the unique Dram properly and slows down a lot when most of the chip runs.

----------

Finite elements are an obvious consumer of compute power in a professional Pc. 3D fields (static electromagnetism, stress...) run better and better on a Core, but 3D+time is still very heavy: fluid dynamics, convection - with natural convection being the worst. Finite-element codes usually run together with Cad programs, which themselves demand processing power, but less as linear algebra and more as if-then-else binaries. Fine: the Knights Landing handles them better than a Gpu does, as it runs the x86-64 instruction set and has a good sequencer.

Then you have many scientific applications that may run inefficiently on a vector Cpu with slow Dram, like simulating molecular collisions or folding proteins... These fit better on chips designed with one compute unit per very wide Dram access, as I describe elsewhere.

What would need a different design are databases. Presently they run on standard Pc, which are incredibly inefficient on such binaries. What is needed is agility on unpredictable branches and on small accesses anywhere in the whole Dram - for which nothing has improved in the past 20 years. Many programming techniques of artificial intelligence have the same needs and are expected to spread now. Web servers have similar needs too.

For databases and AI, but in fact for most applications, we need the Dram latency to improve a lot, not the compute capacity. This need has not been stressed recently because neither the OS, video games, nor common applications depend on it heavily, but databases do, and they are getting increasingly important.
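For reference, the 3TFlops figure is consistent with counting each multiplier-accumulator as two floating-point operations (one multiply plus one add) per cycle:

1152 mul-acc × 1.3 GHz × 2 flops per mul-acc ≈ 3.0 TFlops on 64b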
metacogitans Posted November 25, 2015

On 9/1/2015 at 6:50 AM, silverghoul1 said:
For reasons other than gaming, what are the main points that an average person could use a strong computer for certain subjects (business/school)

Being able to have 15 different programs open and 20 different windows in your internet browser matters; the laptop I'm using right now can't handle it.

Put it this way, though: the desktop computer my friend and I built on a budget of $700 in 2009 can still play games. The RAM is dated; I only have 2 gigs of DDR3 (back when I built it, DDR3 had just come out, and my friend sold it to me as being as fast as 4 gigs of DDR2). It's rocking an AMD Phenom II quad-core at 3.2GHz, which is still a great processor today, really. Also a 1GB I-don't-know-what video card that seems decent, I guess. Cheapest mobo on Newegg at the time - it has had a few quirks over the years but still works, more or less. Cheapest case on Newegg - it's basically in pieces at the moment; I should have invested in a better case. I went through 5-6 power supplies until my friend pointed out that I kept buying a scam-brand power supply over and over, which is why they kept blowing up and shooting sparks out the back of my computer; I then bought a decent-brand power supply for a few extra bucks and it works just fine.

All in all, my desktop has given me so much more than it's worth over the years, and I love it. And I'm the type of person who never usually spends that much money on something. Remember, time is money: the extra money you spend building a decent rig pays for itself in the time you save not waiting for programs to load or having stuff lag and freeze up all the time. If you have a little extra money to spend on building a decent computer, I highly recommend it.

Also, speaking of "time is money", Windows 8 has the most time-consuming GUI to navigate; it actually acts as a deterrent to getting stuff done on your computer, because you feel an aversion to the clunky interface and only use it as much as necessary. You can of course go to the Windows 8 app store and install user-made add-ons reverting the UI to a more traditional Windows UI, but doing that takes - you guessed it - time; about an hour, I'm guessing, just to get all the touch-screen features out of the UI and make it tolerable again. Then an update probably breaks it every month and you have to do it all over. You can't win - they wanted to put a nail in the whole "freeware, net neutrality, actually having control over your computer" thing, and they're being real jack-offs about it.
Klaynos Posted November 25, 2015

When processing large data sets I tend to push as much data as possible into RAM and manually parallelise the processing to use all but 1 or 2 of the cores, depending on what else I want to do with the computer. I find RAM and disk read/write speed are what hold up my desktop processing.
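In code, that pattern looks roughly like this - a minimal C++ sketch (the per-element work is just a placeholder):

```cpp
// Sketch: process a large in-RAM data set on all but two cores.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <thread>
#include <vector>

void process(std::vector<double>& data) {
    unsigned hw = std::thread::hardware_concurrency();   // may report 0
    unsigned workers = hw > 2 ? hw - 2 : 1;              // leave 2 cores free
    std::size_t chunk = (data.size() + workers - 1) / workers;
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w)
        pool.emplace_back([&, w] {
            std::size_t lo = w * chunk;
            std::size_t hi = std::min(lo + chunk, data.size());
            for (std::size_t i = lo; i < hi; ++i)
                data[i] = std::sqrt(data[i]);            // placeholder work
        });
    for (auto& t : pool) t.join();                       // wait for all slices
}
```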
DevilSolution Posted November 25, 2015

On 9/1/2015 at 6:50 AM, silverghoul1 said:
For reasons other than gaming, what are the main points that an average person could use a strong computer for certain subjects (business/school)

To put it simply, you would want a fast computer for a specialised task. This could be an array of things: creating CGI, physics simulations, chemical simulations, neurological simulations, number sequencing / pattern finding, AI, etc.

Another reason normal people who aren't programmers buy high-spec machines is that they are quick and easy to use. You can load up 3 web browsers with 20 tabs each while opening 2 word documents, listening to music, and running a virtual machine with a test version of an OS you like... etc.

On 11/2/2015 and 11/23/2015, Enthalpy said:
[the two posts above, quoted in full]

What are you two actually discussing? What the best technology is, or whether we need it?

Something I'm not sure you mentioned, or at least not fully, is the ability to tap into your GPU. With Nvidia cards you get access to all the micro-processors a specific card has; this is called CUDA. It uses a form of grid notation, and each ALU has limited capacity, but it can make number crunching extremely easy. I don't know what frequency each processor runs at, but firstly there are so many of them that they stack up to something huge, and secondly they are a resource secondary to your CPU and its L1, L2, and L3 caches, which used in combination greatly enhance computational ability.
Enthalpy Posted November 27, 2015

Gpu do bring processing power, but the usual quad-core Cpu are slowly catching up, and the Knights Landing offers the same floating-point capability. Gpu have very serious drawbacks:
- They are difficult to program! If you process signals, the functions exist already, fine - but if you program anything exotic, you're on your own.
- Their caches are tiny and difficult to use.
- Many don't offer a generic instruction set.
- Most are very slow in double precision.
- They access the main Dram slowly.
- They don't fit multitask parallelism, yet many applications are parallel through multitasking.

Consequently, the Knights Landing fits the needs of many users far better. Running the same instruction set as the Core is a further advantage, and if it can run the existing OS, even better.

----------

In two separate messages I described:
- What I need: a single Core, without Avx nor Sse, that executes more instructions per second;
- What the Knights Landing can bring to other users.

Instead of L1, L2, L3, which are quite difficult to use properly and rarely feed the Cpu enough, a good Dram would be better. As I suggested, this means spreading the cores among separate, smaller Drams, which requires redesigning both the processors and the applications.
John Cuthber Posted November 27, 2015

I am reading this thread on a 900 MHz Pentium computer running Windows XP. It works OK, so I don't need a powerful computer for this.
My colleagues who do computational fluid dynamics on 24-core machines, with enough throughput that they need specialist cooling, get tired of waiting 2 or 3 days for the computer to do the arithmetic for them.
I rather suspect that the power you need depends on the job you are seeking to do.
Enthalpy Posted February 3, 2016

Flash memory chips improve quickly. A single chip interface can toggle at 333MHz over 8 bits of width, even at 128Gbit = 16GByte size
http://www.micron.com/products/nand-flash/mlc-nand
and Usb sticks transfer >200MB/s.

That is, 8 or 16 chips (which can begin at 256GByte) deliver a throughput of roughly 2.7 or 5.3GByte/s that neither Usb 3.0 nor Sata/6000 can handle. A recent wide Pci-E (32GByte/s for x16 v4.0) can still carry the data, but it won't cope forever.

It's time to define a parallel interface for Flash storage. Maybe the disks must be modules plugged on the mobo like Dram modules are, or if possible have a wide cable to the mobo. Fibre optics won't help much. It looks like the bus must connect to the Cpu directly.

Maybe Flash chips on the video card would be useful to load textures faster, but the OS must be aware of them.

On 11/27/2015, John Cuthber said:
[...] on 24 core machines [...] waiting 2 or 3 days [...]

If using the Avx256 properly, this machine at 2.2GHz would bring 422GFlops, but the Knights Landing 3TFlops, doing the job in 1/3 day - if the software fits the hardware. Your colleagues look like a customer group for the new toy.

On 11/27/2015, John Cuthber said:
[...] I rather suspect that the power you need depends on the job you are seeking to do.

If I sometimes wanted to scoff, I'd add: "and on how recent your software is".
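The arithmetic behind those figures, taking one byte per transfer at face value:

8 chips × 333 MB/s ≈ 2.7 GByte/s, and 16 chips × 333 MB/s ≈ 5.3 GByte/s - both beyond Usb 3.0 and Sata/6000.

And for the 24-core machine, assuming each core completes one 4-wide 64b fused multiply-add per cycle:

24 cores × 2.2 GHz × 8 flops per cycle ≈ 422 GFlops, about 1/7th of the Knights Landing's 3 TFlops - hence 2-3 days shrinking to roughly 1/3 day.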
EdEarl Posted February 3, 2016

On 9/1/2015 at 6:50 AM, silverghoul1 said:
For reasons other than gaming, what are the main points that an average person could use a strong computer for certain subjects (business/school)

Program run times depend on many things, and business/school computing projects are varied. For example, suppose you must manage a project with thousands of people and hundreds of sub-projects. Project management software may help schedule work and keep things organized; however, this kind of software can require a powerful computer. Even a spreadsheet may become complex and large enough to require a powerful computer. From the information you have given, there is no rational way to advise you whether to get a basic computer or a monster.

Since a powerful computer can do simple tasks quickly, the only reason for not getting one is available funds. Thus, my recommendation is to get as powerful a computer as you can afford, accepting that you may sometimes have to wait for complex programs to run, which may be annoying but is rarely of critical importance. Often a slower computer with lots of memory can run programs almost as fast as a faster computer with similar memory, so I tend to buy computers with lots of main memory that are not the fastest.

I hope this helps.
petrushka.googol Posted March 11, 2016

Depends on the application - using a supercomputer to play tic-tac-toe is overkill. I have used my computer in a peer-to-peer network for the SETI program. For meteorological applications, supercomputers are a must.
Enthalpy Posted November 30, 2019

Software could help restore movie master records that are lost, broken, or too badly damaged. How much processing power it needs, I don't know; nor whether it's already been done.

Take as many copies as possible, and the master too if available. Digitize everything. By comparing pictures from different copies, remove all the scratches and dust, and if needed the parts completely lost in some copy. Compare successive pictures too, especially if only one copy is available; this is easier for nearly-static scenes but heavy for characters and fast motion. If the many copies have a lower resolution than the master had, processing can recover some of that too.

==========

I'd like the same for sound records. Removing noise shots from one copy should work better than averaging many copies.

The potential public is smaller, but for musicians some records are invaluable, and master records are lost while many copies exist, say as discs. From Heifetz's beginnings, and more generally from the early twentieth century, I've heard only badly damaged copies, but many copies exist. Even better, we could hear how a composer played his own pieces. Records exist from Eugène Ysaÿe and Béla Bartók, for instance.

Marc Schaefer, aka Enthalpy
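As a toy illustration of the picture-comparison step (assuming the digitized copies are already aligned frame by frame and pixel by pixel - in practice the hard part): a per-pixel median across the copies out-votes a scratch or dust speck that appears in only one of them.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// copies[c] holds copy c of the same frame: grayscale, identical dimensions,
// already registered so pixel px means the same spot in every copy.
std::vector<uint8_t> restoreFrame(const std::vector<std::vector<uint8_t>>& copies) {
    std::vector<uint8_t> out(copies[0].size());
    std::vector<uint8_t> samples(copies.size());
    for (std::size_t px = 0; px < out.size(); ++px) {
        for (std::size_t c = 0; c < copies.size(); ++c)
            samples[c] = copies[c][px];
        auto mid = samples.begin() + samples.size() / 2;
        std::nth_element(samples.begin(), mid, samples.end());
        out[px] = *mid;   // the median rejects a defect present in one copy
    }
    return out;
}
```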
mistermack Posted November 30, 2019

I've often wondered if it would be possible not to repair but to reconstruct video and stills using a powerful computer. If you amassed a huge repository of digitised images of good focus and definition, and broke them down into small squares with the digital characteristics of colour tone and image map encoded, then you could take an old cherished image that is small, grainy, and out of focus and break it down into similar squares. Then you search the database for a match for each of the squares and assemble them into a totally new image - one that has nothing from the original, but which is bigger and clearer, and looks exactly like what the original would have looked like if it had been taken with top-quality equipment.

To simplify it a bit: say one square has a hand at a certain angle; your computer finds a modern high-quality match for it and fits it to that square. Like a mosaic, it builds up a reconstruction of what the original should have looked like. Maybe there's something out there already. I'm just musing and daydreaming about what I'd like to see.

To add to that, I'm picturing something like a police photoFIT picture, only not done from memory but by comparison with the original, using a gigantic database of stock originals. A much more sophisticated and accurate version of this, done on a powerful computer:
https://en.wikipedia.org/wiki/Facial_composite

[the post showed two composite images here: one drawn by an artist, one computer-generated]

Either way, a composite still has the defect of being done from memory. If you had an original photo, you could probably recreate it to such a high standard that the human eye couldn't tell it wasn't an original.
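The matching step could start out as simply as this sketch (brute force, grayscale, invented names; a real system would need an approximate nearest-neighbour index and a far better similarity measure than raw pixel differences):

```cpp
#include <cstddef>
#include <cstdint>
#include <limits>
#include <vector>

// One fixed-size square of pixels, flattened row by row (grayscale here).
// All patches, query and database alike, are assumed to be the same size.
struct Patch { std::vector<uint8_t> pixels; };

// Brute-force nearest neighbour by sum of squared differences.
std::size_t bestMatch(const Patch& query, const std::vector<Patch>& db) {
    std::size_t best = 0;
    long long bestCost = std::numeric_limits<long long>::max();
    for (std::size_t i = 0; i < db.size(); ++i) {
        long long cost = 0;
        for (std::size_t p = 0; p < query.pixels.size(); ++p) {
            long long d = static_cast<long long>(query.pixels[p]) - db[i].pixels[p];
            cost += d * d;
        }
        if (cost < bestCost) { bestCost = cost; best = i; }
    }
    return best;   // index of the patch whose high-quality twin gets pasted in
}
```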
SagarS21 Posted January 17, 2020

On 9/1/2015 at 6:50 AM, silverghoul1 said:
For reasons other than gaming, what are the main points that an average person could use a strong computer for certain subjects (business/school)

I think it depends entirely on your requirements, such as large program compilations or graphics and video production.
MigL Posted January 19, 2020

It all depends on the data you need to process.

For large-scale video processing, where huge numbers of simple operations are needed, you want multiple simple processing units running in parallel. In a case like that, even an AMD Threadripper with 32 cores is underpowered, and 64/128/512/1024 NVidia graphics cards in parallel, with thousands of simple processing elements each, will outperform it. That is how modern supercomputers are built.

For gameplay, on the other hand, which requires complex operations for which parallel coding is difficult, a single or dual core at 5 GHz is probably best.

To access the internet, I would recommend as much memory as possible, allowing multiple tabs to be opened; but even 10-year-old technology, like a 1st/2nd-generation i5 at 2 GHz, is probably overkill.
Rajnish Kaushik Posted January 29, 2020

There can be many reasons: heavy programming tools, games, and so on. I need a powerful PC for animation and game development. I also need some power when I host a web server on my local machine. Beyond gaming, many companies provide powerful machines to developers to reduce the compile time of code, so more time is spent on coding than on waiting for compilation.
mariyajonsan Posted February 21, 2020

Nowadays everyone wants their computer and internet connection to be fast. If your computer is kept up to date you rarely face such problems; to improve speed, clear out unwanted files and caches, and your computer will stay quick and reliable.
MigL Posted March 1, 2020

In case anyone else wants one...
AMD has just introduced the Threadripper 3990X, built using 7 um (Zen 2) technology. It comes with 64 hardware cores (128 logical cores), runs at 2.9-3.4 GHz, and the recommended minimum RAM complement is 128 GB. For multithreaded tasks, such as video rendering and image manipulation, it is easily the fastest CPU available. As a matter of fact, it outperforms Intel equivalents in productivity and all content creation, but still lags slightly in gameplay.
The price (at intro) is US$3990 (holy cr*p!), but that is still less than half the price of Intel server (Xeon) chips with half the core count.
John Cuthber Posted March 1, 2020

34 minutes ago, MigL said:
In case anyone else wants one... AMD has just introduced the Threadripper 3990X, built using 7 um (Zen 2) technology.

Should that be 7 nm?
MigL Posted March 2, 2020

You're absolutely right, John.
Considering that my first computer was a Sinclair ZX-81, built from a kit, the senility excuse isn't a large stretch.
Markus Hanke Posted March 2, 2020

3 hours ago, MigL said:
Considering that my first computer was a Sinclair ZX-81, built from a kit

Same here! That was a great machine back in the day - happy memories.
John Cuthber Posted March 2, 2020

5 hours ago, MigL said:
Considering that my first computer was a Sinclair ZX-81, built from a kit, the senility excuse isn't a large stretch.

That makes 3 of us.
Trurl Posted March 2, 2020

The OP is a young person asking why we need a fast computer. I was in graphics school in 1997, and the computers were state of the art at the time; storage was the main limitation. If you made a graphic in Photoshop, you had no media to back up your file. Thirty people printing at a time made the network crawl.

It used to be that you had to buy a computer every year because yours went out of date. Now I have 12 cores. I think the problem is programming those cores so they work as one. But I am more interested in using less processing power to solve tasks.

From what I know of programming, it is more complex to program for multiple cores. Does anyone know any good sites on C++ and Python that discuss this?

Many cores made the Sega Saturn and PlayStation 3 hard to program for. Games were delayed for the PS3, while Xbox 360 games were smoother. So even with the seven cores of the Cell processor, if you cannot program it efficiently it does not get used. On the other hand, programmers complained the Nintendo Wii wasn't powerful enough.
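On the C++ side, one modern starting point is the C++17 parallel algorithms, which let the standard library spread a loop over all cores. A minimal sketch (it assumes a toolchain with parallel-STL support; with gcc that typically means linking TBB):

```cpp
// C++17 parallel algorithms: the library distributes work across cores.
// Assumed build line: g++ -std=c++17 -O2 demo.cpp -ltbb
#include <algorithm>
#include <cmath>
#include <execution>
#include <vector>

int main() {
    std::vector<double> v(10'000'000, 2.0);
    std::for_each(std::execution::par, v.begin(), v.end(),
                  [](double& x) { x = std::sqrt(x); });  // one lambda, all cores
}
```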
akyle32 Posted May 27, 2020

If you're into animation or media that requires a ton of rendering, you definitely need a fast computer.