
AtomicMaster


Everything posted by AtomicMaster

  1. You do skip the step where you first have to encode the number as two's complement, and yes, processors are fast at it, but they're operations you don't have to do at all with trits. Also, Alex (that's me) <3 SSE (SSE2 and SSE3), except when he has to reverse something that uses them, which can be a real pain...
  2. The logic totally applies to AMD as well. For example, a 7750 clocks its memory at 1124 MHz and its core at 800 MHz with a gig of VRAM, pushing 819 GFLOPS of single precision. A 6990, in contrast, clocks its GPUs at 830-880 MHz and its memory at 1350 MHz with 4 gigs of VRAM, and rocks out 5.1 to 5.4 TFLOPS of single precision (a quick sketch of where those peak numbers come from is after this list). Even a 7790 only does 1.79 TFLOPS. Again, the best thing one can do is consult specs, comparative reports (like PassMark or Tom's Hardware) and reviews from people who know graphics hardware (like, say, John Carmack if he were to ever do one)
  3. Shrug, I just described it in a way that is easy to program, though
  4. Sorry, that is entirely not true: do NOT shop based on how high the model number is; that is as bad a performance benchmark as the amount of RAM. An NVidia GeForce 7900 performs much better than, say, a GeForce 8600; burst doesn't know what the model number means, and bigger doesn't mean better (I mean, they claim it here in Texas, but I digress). For graphics card makers, the model numbers represent roughly the same thing (and it's still true with NVidia, just with 3-digit numbers now, as well as them running 2 different model lines for the same architecture). The first digit of the model indicates the generation, and thus the architecture, of the GPU; for example, 400/500 series cards were Fermi-based, whereas the 600/700 series is Kepler-based. NVidia put the lower-end, general computing lines in the lower-numbered series (400s and 600s), with the higher-power, higher-end computing lines being the 500 and 700 series respectively for those architectures. Furthermore, developments of the cores and their optimizations cause the dies and internal structures to be updated, allowing higher clock speeds and better performance (through optimization), which is indicated in the second digit. For example, it makes sense that a 550 card doesn't perform as well as a 570 card, but at the same time a 480 outperforms a 560 just because of the optimizations in the corresponding GPU/design/spec/driver iteration, and a 590 performs on par with a 770 (a tiny sketch of this digit convention follows the list). AMD is roughly the same, just with extra digits and an occasional X2 to watch out for (as NVidia also has) marking dual-GPU cards. Select the manufacturer based on application/OS/personal preference. From there, the best thing you can do is find and research the hardware you can or are willing to afford, read reviews from reputable sites (Tom's Hardware, for example) that are least likely to have a bias, and based on that information, those opinions, and that data, make your own informed decision.
  5. Don't forget about Setun and ternary computers, which cut the number of half-adder operations by a factor of about 1.5 for ordinary additions and negated the need for any conversions when it came to negative numbers, also naturally covering almost all of binary logic as well as adding the ability to do ternary (see the balanced-ternary sketch after this list). Fun times
  6. I think it would be easier to list languages that you should not bother learning, really, either because they are poor languages or because they are just old and shouldn't be used by anyone. Other than that, every language has its perks and applications, weirdnesses and frustrations, pluses and minuses. I have learned and used over 30, and I code in about 15 in the course of work and my daily interactions with technology (more if I count languages that rotate in and out), and I have no intention of stopping learning the next language I may have to learn for whatever purpose... A better question, perhaps, is what programming language is best for doing [task]..?
  7. Of course there is a way to compare images, otherwise how would you gauge different compression algorithms? http://www.rimtengg.com/iscet/proceedings/pdfs/image%20proc/101.pdf As far as fairly simple to implement compression techniques go: Easy technique 1: (255,255,255,255) (red, green, blue, opacity) takes 32 bits per pixel to store; drop opacity and you are down to 24 bits; now cut the palette in half and that's 21 bits. Not quite as easy technique 2: take note of all the values actually used in the image, cut out colors that are close to each other (say within 1 or 2 or 3 values, e.g. 128, 129, 130, 131, 132 — quality comes into effect here) and assign them the median value, paying attention to some hues more than others since our eyes recognize some tones better than others (we're most sensitive to green, by the way). Now create a map of colors and store them in a table (or just an array), then readdress every pixel with its corresponding index in the table (64 colors, without opacity, though you could implement limited levels of opacity, is addressable in 6 bits/pixel plus the table size). Also make sure you note the index size so that you can decode your image correctly... A rough sketch of technique 2 is after this list.
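
On the single-precision figures in post 2: here is a minimal sketch of the usual peak-throughput estimate, assuming each stream processor retires one fused multiply-add (two floating-point ops) per cycle. The shader counts below are the published ones for those cards; this is a back-of-the-envelope calculation, not a measured benchmark.

    # Theoretical peak single-precision throughput: stream processors x clock x 2
    # (one fused multiply-add, i.e. two floating-point ops, per shader per cycle).

    def peak_sp_gflops(stream_processors, core_clock_mhz):
        """Theoretical peak single-precision GFLOPS (2 ops per shader per cycle)."""
        return stream_processors * core_clock_mhz * 2 / 1000.0

    cards = {
        "HD 7750": (512, 800),   # ~819 GFLOPS
        "HD 6990": (3072, 830),  # ~5.1 TFLOPS at the base 830 MHz clock
        "HD 7790": (896, 1000),  # ~1.79 TFLOPS
    }

    for name, (sps, clock) in cards.items():
        print(f"{name}: {peak_sp_gflops(sps, clock):.0f} GFLOPS")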
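
On the naming convention in post 4: a tiny illustrative sketch of reading a plain three-digit GeForce number as generation plus performance tier. It deliberately ignores suffixes like Ti and the dual-GPU parts mentioned above, and decode_geforce is just a made-up helper name for illustration.

    # Read a plain three-digit GeForce model number per the convention in post 4:
    # first digit = generation (architecture), remaining digits = tier within it.

    def decode_geforce(model_number):
        """Split e.g. 560 into (generation, performance tier)."""
        s = str(model_number)
        return int(s[0]), int(s[1:])

    print(decode_geforce(480))  # (4, 80): high tier of the Fermi generation
    print(decode_geforce(560))  # (5, 60): mid tier of the Fermi refresh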
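
On the ternary machines in posts 1 and 5: a minimal sketch of balanced ternary (digits -1, 0, +1), the system Setun used. The point is that negation is a per-trit sign flip, so there is no separate encoding step the way there is with two's complement; the function names are just illustrative.

    # Balanced ternary: each digit (trit) is -1, 0, or +1.
    # Negation needs no special encoding step like two's complement does:
    # you just flip the sign of every trit.

    def to_balanced_ternary(n):
        """Return the balanced-ternary trits of n, least significant first."""
        if n == 0:
            return [0]
        trits = []
        while n != 0:
            r = n % 3
            if r == 2:          # a digit of 2 becomes -1 with a carry into the next trit
                trits.append(-1)
                n = n // 3 + 1
            else:
                trits.append(r)
                n //= 3
        return trits

    def from_balanced_ternary(trits):
        """Inverse of to_balanced_ternary."""
        return sum(t * 3**i for i, t in enumerate(trits))

    def negate(trits):
        """Negation is a per-trit sign flip; no 'invert and add one' step."""
        return [-t for t in trits]

    if __name__ == "__main__":
        for n in (7, -7, 42):
            trits = to_balanced_ternary(n)
            assert from_balanced_ternary(trits) == n
            assert from_balanced_ternary(negate(trits)) == -n
            print(n, trits)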
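
On technique 2 in post 7: a rough sketch of the merge-nearby-colors-and-index idea, assuming the alpha channel has already been dropped (technique 1). The per-channel bucketing and the quality parameter are illustrative simplifications; real palettizers (median cut, octree) pick the table more cleverly, and you would merge green values less aggressively since that is where our eyes are sharpest.

    # Merge nearby color values, build a color table, then store each pixel as an
    # index into that table.
    import math

    def quantize_channel(v, quality):
        """Snap a 0-255 channel value to the middle of its bucket of width `quality`."""
        bucket = v // quality
        return min(255, bucket * quality + quality // 2)

    def palettize(pixels, quality=4):
        """
        pixels: list of (r, g, b) tuples (opacity already dropped).
        Returns (palette, indices): the color table and each pixel's index into it.
        """
        palette = []   # the color table
        lookup = {}    # quantized color -> index in the table
        indices = []
        for r, g, b in pixels:
            key = (quantize_channel(r, quality),
                   quantize_channel(g, quality),
                   quantize_channel(b, quality))
            if key not in lookup:
                lookup[key] = len(palette)
                palette.append(key)
            indices.append(lookup[key])
        return palette, indices

    if __name__ == "__main__":
        img = [(128, 64, 200), (129, 65, 201), (131, 66, 199), (10, 250, 30)]
        palette, indices = palettize(img, quality=4)
        bits_per_index = max(1, math.ceil(math.log2(len(palette))))
        print("palette:", palette)
        print("indices:", indices)
        print(f"{bits_per_index} bits/pixel plus the table, vs 24 bits/pixel raw")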