Everything posted by SarK0Y

  1. Exactly. As time passes, it becomes a 100% threat, and no one knows how to eliminate it: any solution is too costly. Besides, Chernobyl sits in a fairly seismically stable zone, whereas Fukushima and the other Japanese nuclear plants have to deal with an immense number of quakes.
  2. @5worlds Theoretically, hardened electronics may withstand higher levels of radiation than have occurred there. But practically, bots are rather expensive things; they still need to be maintained by humans, and they're almost incapable of autonomous work, so more or less tricky operations become too laggy. Look at the top of the DARPA Robotics Challenge: you can shield as much as you'd like or are able to, but the practical, ground-level side has always been in question. Far too crazily expensive.
  3. Fiveworlds, look at Fukushima: have you ever seen many robots out there? And what about the dust spreading all around? Besides, such an approach creates too high a probability of toxic materials propagating through the groundwater, and concrete cracks over time; the cracking may not take all that long, either.
  4. Comparators/filters can be of different kinds. He's got his answer.
  5. Analogue computation sums continuous functions into an output; meanwhile, you need to filter out any noise. Thereby, filters are the comparators of analogue computation.
  6. At the asm level, loops use jXX instructions, so they're just a variation of IFs. The only situation in which loops can be avoided is when the trip count is a constant, so the loop can be fully unrolled; see the sketch below.
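     A minimal sketch of that constant-count case (my own illustration, in C): with the trip count known at compile time, the body is written out straight-line, so no compare/jXX is emitted for loop control.

        #include <stdio.h>

        /* Trip count known at compile time: the loop is written out
           straight-line, so no jXX is needed for loop control. */
        static int sum4(const int *a)
        {
            return a[0] + a[1] + a[2] + a[3];
        }

        int main(void)
        {
            int a[4] = {1, 2, 3, 4};
            printf("%d\n", sum4(a));  /* prints 10 */
            return 0;
        }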
  7. All programs mostly use precomputed constants with the needed precision. Frankly, John, I don't understand what you're arguing for. We use precomputed values to boost code -- 'tis rather routine practice -- and we use IF-reducing approaches as well. But fully IF-less code is an extremely limited thing; mostly it's just for fun.
  8. John, it's unfair: if you use all the available virtual space, then yes, there's no way to go beyond it. But for an 8-bit machine that's 2^8 addresses, and for a 64-bit one, 2^64 -- not so juicy to initialize that much every time. Yes, you can avoid jXX ops in code entirely, but then you need the "call" op as an alternative to jXX. In short, you have no way to abandon IFs as a class.
  9. Sensei, what's funny with lookup tables is that there's a very real need to test the indices against proper bounds. How do you test them without IFs? One branchless way is sketched below.
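     For illustration, a minimal branchless sketch (my own assumption, not code from the thread): with a power-of-two table size, a logical AND forces any index into range, so no jXX is needed for the bounds test -- at the price of silently wrapping bad indices.

        #include <stdio.h>
        #include <stdint.h>

        #define TABLE_SIZE 8  /* power of two, so masking works */

        static const int table[TABLE_SIZE] = {10, 11, 12, 13, 14, 15, 16, 17};

        /* Branchless lookup: the mask forces any index into [0, TABLE_SIZE-1].
           An out-of-range index wraps instead of faulting -- safe, but silently
           wrong, which is exactly the trade-off under discussion. */
        static int lookup(uint32_t i)
        {
            return table[i & (TABLE_SIZE - 1)];
        }

        int main(void)
        {
            printf("%d %d\n", lookup(3), lookup(1000));  /* 13, then 1000 & 7 == 0 -> 10 */
            return 0;
        }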
  10. John, I'm not sure what you mean or what the reason for your laughter is. Expressions built from logical and arithmetic operations on variables (and on pointers in particular) definitely can substitute for IFs. Another question is that this way mostly produces a lot of "dead" runs, so performance suffers badly; only in some cases might that substitution pay off. It's much more efficient to reduce the number of cmp ops. For instance, since conditional jumps don't clobber the flags,

         cmp %rdx, %rax
         jXX
         jYY
         jZZ

      is a really significant speed-up in comparison with

         cmp %rdx, %rax
         jXX
         cmp %rdx, %rax
         jYY
         cmp %rdx, %rax
         jZZ
  11. John, I was playing with different approaches to avoid IFs as much as possible. A precomputed table of function pointers isn't anything new, and it's a really efficient thing in some cases (a sketch follows this post). But abnormal indices have always been the curse of that approach. Yes, John, we're all quite sure the computer deals with "0"s and "1"s; meanwhile, "bad" indices can call wrong addresses, and I trust you won't argue that such a situation is anything but deeply bad for security and stability. In fact, it's possible to avoid IFs entirely; practically, it mostly has zero or a (very) negative effect on performance. Here I gave some hints on such techniques; meanwhile, two remarks are needed: 1. your IF-free code at the high level can become very IF-stuffed in the asm output, thanks to the compiler; 2. for the sake of speed-up, you're better off using a logical AND than a multiplication. Significantly more efficient techniques are placed here. Function pointers are a good instrument for self-modifying an algorithm on the fly, which makes it possible to minimize "dead" blocks. To return to precomputed matrices, I can add yet another point: they're very bad in the case of your example, because you take a short task and turn it not just into a mindblowing waste of memory but into a waste of time as well. E.g., a deposit of $1k makes a 100x1000 matrix to account for every cent. Banks' data centers would be completely ruined by such a method.
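     As an illustration of that function-pointer technique, a minimal sketch (my own, not the code from the links above): the data itself selects the next handler, so the hot path dispatches through the table instead of branching on an IF.

        #include <stdio.h>

        typedef int (*step_fn)(int);

        static int step_even(int x) { return x / 2; }
        static int step_odd(int x)  { return 3 * x + 1; }

        /* Dispatch table indexed by the low bit of x: the algorithm picks
           its next step through the table, with no IF on even/odd. */
        static const step_fn step[2] = { step_even, step_odd };

        int main(void)
        {
            int x = 27;
            while (x != 1)
                x = step[x & 1](x);  /* x & 1 selects the handler -- no IF */
            printf("reached %d\n", x);
            return 0;
        }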
  12. John, some IFs can indeed be avoided with function pointers. However, such a method isn't clear for fractions. Sensei was right about the memory penalty, but fractions make the situation much worse: you have a 100-by-1000 matrix, and what are you going to do if the program calls matrix[10.4][94.2]?
  13. Actually, the practical cost of an algorithm (the steps to resolve a problem) is hardware dependent: you can have a slower processor (in terms of frequency), but it can spin an algorithm faster because of larger/better cache/memory/3OE (out-of-order execution). For instance, if an app falls short on memory, the machine starts swapping and I/O becomes the bottleneck there.
  14. The number of steps can easily be converted to time. Actually, for each algorithm we want to know how long it takes to resolve a given problem at width n (where width could mean the number of elements or bits of precision); a rough conversion is sketched below.
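     A rough model of that conversion (my own illustration), in LaTeX: with S(n) the step count, IPC the instructions retired per cycle, and f the clock frequency,

        % time to resolve a problem of width n (rough model)
        T(n) \approx \frac{S(n)}{\mathrm{IPC} \cdot f}

     which is exactly why the hardware factors from the previous post (cache, memory, out-of-order execution) move the real running time even when S(n) is fixed.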
  15. Take real-life problems and you'll see the flesh and blood of such methods. For instance, you have a polynomial with complex roots, and you have an algorithm that solves polynomials with real roots. What can be done? You can represent P(x + i*y) == F(x, y) + i*T(x, y); in other words, the problem becomes:

      T(x, y) == 0
      F(x, y) == 0

      So we get a problem in real unknowns, solve it, and then turn the real roots back into complex form. A worked example follows this post.
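     A worked example of that split for the simplest case (my own illustration), in LaTeX: take P(z) = z^2 + 1 and substitute z = x + i y.

        % P(x + iy) = (x + iy)^2 + 1 = (x^2 - y^2 + 1) + i(2xy)
        \begin{aligned}
        T(x, y) &= 2xy = 0, \\
        F(x, y) &= x^2 - y^2 + 1 = 0
        \end{aligned}

     Solving the real system: T = 0 forces x = 0 or y = 0; y = 0 leaves x^2 + 1 = 0 with no real solution, so x = 0 and y = ±1, which turns back into the complex roots z = ±i.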
  16. Well, enlighten me on it, if you're so knowledgeable. Please. However, I prefer a real test drive.
  17. @Strange The compiler just applies templates to convert pseudo-code into machine instructions; which template gets picked is up to the programmer. If you're solving a standard task, the compiler can be quite enough; specific tricks run beyond the capability of compilers. For instance, if you want efficient 3OE (out-of-order execution) optimization, you must write a deeply asmed algorithm. HPC without asming exists because of the idea of cutting corners on funding, but in fact it's surrogate HPC. Actually, you can compile my fastsort (for floats) and try to outperform it with a purely C-written algorithm. Once you make freaky tricks to speed things up, they become not so rare; plus, one freaky trick at the C level makes different asm output for different hardware/compilers/(compile-time options).
  18. Asm code is quite portable if you deal with CPUs sharing the same instruction set. Second point: if someone cannot write asm code faster than a compiler does, it's just a matter of the programmer's weak skills/knowledge. And third, you're right: high-level programming is only for easy/fast development. For standard code, compilers are the best choice; for really true HPC, you must deal mostly with pure asming. Plus, look at intrinsics: they're a pure admission of the impossibility of fully abstracting away the hardware level. Have you ever encountered situations where the compiler does something wrong? Full portability is a theoretical assumption: if you run something specific, you have to keep in mind the probability of bugs due to the compiler/hardware/users/(3rd-party libs).
  19. Efficient QC is just a myth. The speed of computation is tightly tied to power consumption; in the best case, consumed power grows as the cube of speed. As for QC, such machines are severely vulnerable to electromagnetic noise, so they easily become gambling apparatus.
  20. send(to, from, count)
      register short *to, *from;
      register count;
      {
          register n = (count + 7) / 8;  /* rounds up; count must be > 0 */
          switch (count % 8) {
          case 0: do { *to = *from++;    /* 'to' is a memory-mapped device
                                            register, deliberately not
                                            incremented */
          case 7:      *to = *from++;
          case 6:      *to = *from++;
          case 5:      *to = *from++;
          case 4:      *to = *from++;
          case 3:      *to = *from++;
          case 2:      *to = *from++;
          case 1:      *to = *from++;
                  } while (--n > 0);
          }
      }

      http://www.lysator.liu.se/c/duffs-device.html

      Funny, but useless: such tricks depend heavily on the compiler. In practice, real HPC is based on many tricks/methods to boost algorithms. However, it's better to implement any freaky code in pure asm, because that's much more efficient and even more portable/compatible. Actually, I've seen many claims about how efficient compilers can be, but I haven't seen such efficiency in practice. That IF-reducing strategy is, first of all, based on function pointers, so the algorithm changes itself on the fly. I cannot imagine compilers implementing that kind of thing automatically; perhaps Skynet would be OK at it.
  21. IF-reducing techniques have always been on the very edge of HPC. You can get an implementation here: http://alg0z.blogspot.ru/2014/07/fastsortfsort-no-if-vs-qsort.html -- it's a short post about it, and there's a link there to a really working algorithm.
  22. Hi there, my friends. Here I'd like to discuss the possible and best techniques for the subject. First and foremost, I'd like to share my humble approach to protecting buffers. Code: https://sourceforge.net/projects/dasofi Description: http://alg0z.blogspot.ru/2014/10/dabofi.html Perhaps the description seems too short, but I hope the code is more verbose. Thanks a lot in advance for your contribution.
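     To seed the discussion, a minimal guard-byte sketch of one common way to detect overflows (my own illustration; this is not the technique from dasofi -- see the links above for that):

        #include <stdio.h>
        #include <string.h>

        #define BUF_LEN 16
        #define CANARY  0x5A  /* arbitrary guard value */

        /* A guard byte sits right after the usable area; overflowing
           data[] clobbers it, and a later check catches that. */
        struct guarded_buf {
            char data[BUF_LEN];
            unsigned char canary;
        };

        static void gb_init(struct guarded_buf *b)
        {
            memset(b->data, 0, BUF_LEN);
            b->canary = CANARY;
        }

        static int gb_corrupted(const struct guarded_buf *b)
        {
            return b->canary != CANARY;  /* 1 if data[] overflowed */
        }

        int main(void)
        {
            struct guarded_buf b;
            gb_init(&b);
            memset(b.data, 'A', BUF_LEN + 1);  /* deliberate one-byte overflow */
            printf("corrupted: %d\n", gb_corrupted(&b));  /* prints 1 */
            return 0;
        }

     Obviously this only detects contiguous overruns after the fact; real protections (guard pages, compiler canaries) are stronger, but the idea is the same.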
  23. Hi there, my friends, I need your contribution to benchmark this sorting algorithm (http://alg0z.blogspot.ru/2014/07/fastsortfsort-no-if-vs-qsort.html). Thanks a lot in advance.