DevilSolution Posted November 11, 2013

So, let's say I had the Windows XP source code and was able to compile it. Let's also say I dropped a keylogger in there, and also a backdoor to do as I wish. What would be able to detect a kernel-level program? One that's embedded in the system all the way from source code to ISO. You know exactly what I'm saying: how do we trust our vendors, and especially vendors hiding their code?? And for black-market software, how can we reverse engineer such a program? Hex editing? Line by line? What about code obscurity?
AtomicMaster Posted November 14, 2013

"So, lets say i had windows xp source code and was able to compile it. Lets also say i dropped a keylogger in there and also a backdoor to do as i wish."

Why would you go through all the trouble of doing this? Windows XP already comes with numerous backdoors as well as built-in keyloggers, and its security is such that you can pretty much do as you wish with XP machines already.

"What would be able to detect a kernel level program? one thats specifically embedded in the system from code to ISO."

What you fail to realize here is that you don't actually have to embed this code into the system image. You can't write code that would look, feel, or work any differently (or have some special "undetectability") compared to what current advanced loggers and backdoors already do. For example, look at bootkits...

"You know exactly what im saying, how do we trust our vendors and especially vendors hiding their code??"

Rule #1: trust no one! Open or closed source matters little; it is not any simpler to audit the ~6-8 million lines of the Linux kernel. And it's not just software. How do you trust Ford that all the welds on your car are correct and everything is put together properly on the assembly line? How do you trust your cell network provider not to forward your every call through China, where it gets recorded? You simply cannot trust anything. That said, you still rely on vendors all the time, so you have to test, verify claims, and always approach everything you use with caution. One thing about open systems: however difficult it may be, they are at least open to auditing.

"And for blackmarket software, how can we reverse engineer such a programming. Hex editing? Line by line? what about code obscurity?"

"Black-box", maybe? There is a field called reverse engineering, lovingly called reversing by the people who are in it.
There are many types of code obscurity, and thus many solutions; white papers come out annually for both sides of that. How to reverse is a different problem from trust and auditing. One can audit by reversing, but it can be extremely complex and very time-consuming, and that is before you can even see the executing code. Sometimes you can't see the code at all, so you have to guess from what you see happen, without any other information (it's a bit like particle physics, where you often cannot see effects directly but can extrapolate backwards from models and chains of events; think Higgs boson, for example). I'm going to leave it there for now. Let's see where this turns.
Strange Posted November 14, 2013

"in a paper entitled "Reflections on Trusting Trust", Ken Thompson, co-author of UNIX, recounted a story of how he created a version of the C compiler that, when presented with the source code for the "login" program, would automatically compile in a backdoor to allow him entry to the system. This is only half the story, though. In order to hide this trojan horse, Ken also added to this version of "cc" the ability to recognize if it was recompiling itself to make sure that the newly compiled C compiler contained both the "login" backdoor, and the code to insert both trojans into a newly compiled C compiler. In this way, the source code for the C compiler would never show that these trojans existed."

http://www.win.tue.nl/~aeb/linux/hh/thompson/trust.html
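Thompson's trick can be sketched as a toy in Python. Everything here is invented for illustration (`compile_source` stands in for a real cc, and the "binary" is just transformed text); the point is only the two pattern matches, one on the login program and one on the compiler itself.

```python
# Toy illustration of Thompson's "trusting trust" attack.
# compile_source() stands in for a real compiler: it "compiles" by
# returning the source unchanged, except that it recognises two
# special targets and quietly alters their output.

BACKDOOR = '\nif user == "ken": grant_access()  # injected backdoor\n'
SELF_PROP = '\n# (self-propagating injection logic re-inserted here)\n'

def compile_source(source: str) -> str:
    out = source
    if "def login(" in source:
        # Target 1: the login program gets a hidden backdoor.
        out += BACKDOOR
    if "def compile_source(" in source:
        # Target 2: a recompiled compiler gets the tricks re-inserted,
        # so the compiler's *source* never shows either one.
        out += SELF_PROP
    return out

clean_login = "def login(user, password): ..."
binary = compile_source(clean_login)
assert "injected backdoor" in binary   # backdoor present in the output
assert "backdoor" not in clean_login   # but absent from the source
```

Because the second branch re-inserts the injection logic whenever the compiler compiles itself, auditing the compiler's own source reveals nothing, which is exactly Thompson's point.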
EdEarl Posted November 14, 2013

(quoting Strange's "Reflections on Trusting Trust" excerpt above)

I expect more than one paranoid programmer will now start writing machine code to detect whether their compilers and login programs are infected.
AtomicMaster Posted November 14, 2013 (edited)

Yeah, but a debug build, or any program with a login page compiled with this compiler, would differ from the output of other compilers. If I compile my code with this compiler and also with gcc and the Intel compiler, this one will produce a significantly bigger program, and when I am debugging my code it will be very evident that these unintended events are happening (because when I fuzz my code and it breaks, the first break lands right after you click login). And oh noes: the assembly from gcc and Intel will look similar, while this special C compiler's output will differ significantly. And if any student ever peruses the optimization stages of the compiler for a project, the insertion will also be blatantly evident.

And if you ask "why can't you just insert this into the gcc and Intel compilers?": I suppose you could, but you would have to be very smart about it, as code is severely audited prior to patching, and gcc's self-tests do things like check the end size of executables; developers would be very interested in any such difference...

Edited November 14, 2013 by AtomicMaster
Sensei Posted November 14, 2013

(quoting Strange's "Reflections on Trusting Trust" excerpt above)

That can be childishly easy to do, and it works on any platform, in any language: just add some code to the regular startup module. It's attached to the beginning of every compiled executable, and after initializing things it calls the main() function.
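The startup-module point can be mimicked loosely in Python with a wrapper around the entry point. `evil_startup` below is a made-up stand-in for a tampered C runtime startup routine (crt0); the analogy is loose, since real startup code is linked in at build time rather than applied as a decorator.

```python
# Analogy for the C-runtime startup trick described above: every
# compiled program's real entry point is a startup routine that
# initialises things and then calls main(). Anything hiding in that
# routine runs in every program, while main()'s source stays clean.

events = []

def evil_startup(main):
    """Stand-in for a tampered startup routine attached to every binary."""
    def entry(*args):
        events.append("hidden code ran before main")  # smuggled payload
        return main(*args)                            # then normal startup
    return entry

@evil_startup
def main():
    events.append("main ran")
    return 0

main()
assert events == ["hidden code ran before main", "main ran"]
```

Nothing in `main`'s own body hints at the payload, which is why auditing application source alone cannot rule this class of attack out.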
DevilSolution Posted November 16, 2013 (Author, edited)

(quoting AtomicMaster's reply above)

There's a lot less work involved if you can circulate a seemingly legit copy of Windows with your backdoors and keyloggers built in. To break into a machine on a peer-to-peer basis you have to scan for IPs, check the OS, check ports, check the services running on those ports, check whether you have a vuln for that version of that service, check whether your payload works with the privileges of the logged-in user, etc. etc.; the list is as long as a piece of string. Sometimes access may be instant, via a quick hash table or kiddie script, but in other cases it can be almost impossible. The OP describes a scenario where you get automatic access to any machine that installs your modified OS. I think you see the danger here: you could essentially have unauthorized access to an array of systems without even needing to scan a single IP address. I'm not overly paranoid about these issues, but it seems fairly obvious that anyone running pirate software is automatically at risk.

Also, how would you go about building your own kernel? Would you need specific layers of abstraction built for you? I definitely wouldn't be able to write an OSI stack for the network card in assembly. Is there some *safe* point from which you can build knowledge of these features? And, to that extent, how do you trust even the compiler you're using to build the kernel itself?
Is there some very, very basic system from which to start building and understanding?

Edited November 16, 2013 by DevilSolution
Enthalpy Posted November 18, 2013

There are many complete files in Windows XP that you can replace at will without Windows noticing: quite important files, run every time with zero checking. So you don't really need to decompile and modify files.
DevilSolution Posted November 18, 2013 (Author, edited)

(quoting Enthalpy above)

What? How are you going to get those files onto someone else's OS? I wasn't really talking about reverse engineering the whole of the Windows XP source; I meant if you already had it.

Edited November 18, 2013 by DevilSolution
AtomicMaster Posted November 18, 2013

"Theres alot less work involved if you could circulate a seemingly legit copy of windows with your built in backdoors and keyloggers"

Sorry, historical malware data shows this to be false many, many times over. Viruses like Alureon (a remarkable bootkit that still had millions of machines infected this time last year), massive spreaders like Conficker, and insanity like Stuxnet (which spread a remarkable number of ways, and was in the wild for over a year before it was detected): there is plenty of data showing that it is a lot easier to spread the types of malware I am talking about than to spread a seemingly legitimate copy of Windows. During the Conficker.B outbreak, the number of newly infected hosts fluctuated between 75 and 140 thousand per hour, and that continued for a great number of days! To spread that many Windows XP copies (assuming only 300 MB each) in that kind of time, you would need a nice, brisk 90-gigabit pipe, and your Tier 1 ISP, I assure you, would ask questions about what that data is...

As for what is involved, I can assure you that compiling Windows is not as simple as issuing a make or pressing a "Compile" button. You would have to go through hundreds of thousands of lines of code before you could even take a gander at where to put the rootkit, then actually compile it, then QA a full OS, and that's before the logistics of distribution and the fact that you will be discovered as soon as anyone looks at the md5 sum of your ISO (if you choose to distribute it as such). In the end, I am just saying there are a LOT easier ways to distribute the same code...

"I definitely wouldnt be able to build an OSI model for the network card in assembly"

Why would I do this?
"In the case of breaking into machine on a peer to peer basis you would have to scan for IP's, check OS, check ports, check services running on them ports..."

Breaking this into pieces and answering each one. There is no difficulty in scanning for IPs: it takes less than a minute to scan the most popular 1024 ports across 65535 hosts (that's a /16, a medium-large corporate intranet, just for a size comparison), and that's beside the point of why you would do something like that... You don't have to scan, either; you can just sit and listen for services, some of which are quite chatty. Checking ports and the services running on them is trivial, especially when you know exactly what you are looking for; even scanning and identifying every port and every service on a machine takes some seconds, maybe a minute? You know exactly what version(s) of what service(s) you have a 0-day for in these situations. You also know exactly what privilege that service runs as (user, system, other), and you have a privilege escalation ready (if you need one) before you even connect or distribute these kinds of things. Your payload just works, regardless of machine, user, etc.; if it doesn't, you go find a vuln where it always does and use that one... The term "script kiddie" doesn't come from being able to write a simple script to take over a machine; it comes from complete cluelessness about what the security script/program does, and so simply executing whatever program you saw or downloaded that was promised to attack, in an attempt to take over a machine, like a kid. Hence script kiddie. No idea where you are going with the hash table...
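The "takes some seconds, maybe a minute" claim is easy to see with a minimal connect() scan. This is the crudest possible form; real scanners like nmap use half-open SYN scans, timing tricks, and service fingerprinting. The host and port ranges are whatever you pass in, and you should only scan machines you own.

```python
import socket

def scan(host: str, ports, timeout: float = 0.5) -> list:
    """Minimal TCP connect() scan: a port counts as open if a full
    connection succeeds within the timeout."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or timed out
    return open_ports

# e.g. scan("127.0.0.1", range(1, 1025))
```

Banner grabbing (the "chatty services" point) is one `recv()` away from this: many daemons announce their name and version as soon as you connect, which is how an attacker matches a service to a known vuln without any exploit work at all.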
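The md5 remark in the same post is worth making concrete: a modified ISO is caught the moment anyone compares its digest against the published one. The byte strings below are placeholders; the hash names are standard `hashlib` algorithms. Note that md5 collisions are practical today, so a careful distributor would publish sha256 sums instead.

```python
import hashlib

def verify_image(data: bytes, expected_hex: str, algo: str = "md5") -> bool:
    """Compare a downloaded image's digest against the published one.
    md5 matches the post; sha256 is the better modern choice."""
    return hashlib.new(algo, data).hexdigest() == expected_hex

official = b"...pretend these are the vendor's ISO bytes..."
tampered = official + b"\x90backdoor"
good = hashlib.md5(official).hexdigest()

assert verify_image(official, good)        # pristine image checks out
assert not verify_image(tampered, good)    # any added payload is caught
```

Of course this only pushes the trust problem one level up: you have to get the expected digest from a channel you trust more than the download itself.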
DevilSolution Posted November 18, 2013 (Author, edited)

"Why would i do this?"

Because it's the only sure-fire way to ensure security.

(quoting AtomicMaster's point-by-point breakdown above)

I'm not sure why you went into detail over each aspect of the attacks I used as an example of why that process is longer. Don't forget the ISP plays a big role in these kinds of attacks; I know because I've had a warning. And then you go through each item I listed, and each one requires some application or knowledge.
At the top level you develop your own vulns, but that requires finding exploits in vendor software and the like, an arduous business. You can collect vulns in a database like Metasploit or OpenVAS (which is open source) and use methods of tracking new ones on the market, but you either have to be a pen tester or a pro hacker to really do damage. The point is that taking computers down like this takes a lot of practice, expertise, and programming knowledge. My example requires only basic knowledge, if you had access to such things.

(quoting AtomicMaster's malware-distribution argument above)

Rootkits / bootkits are harder to distribute than you make out, and again a lot more knowledge is required than simply getting hold of some source code and recompiling with new code added. There's a lot less hassle as well, considering that once it's built you can leave it dormant until you want to fuck with the system. The OP was specifically asking about detecting that level of sophistication and whether it's actually possible. This seems to be a big issue: http://en.wikipedia.org/wiki/Blackhole_exploit_kit

Bootkits / rootkits / malware etc. require incompetence of the user. Everyone's after a cheap buck and next to nobody is willing to pay retail for an OS like Windows, so the OP still makes perfect sense. As I've also made clear in the following posts, I'm also referring to pirate software and vendors.

"The term script kiddie doesn't come from being able to write a simple script to take over a machine..."

I said kiddie script in that EXACT context; if you re-read, I said it may be instant access through use of a kiddie script or hash table. A hash table is brute force's big brother: the time trade-off makes breaking a password a LOT faster, so access is quicker.

"Sorry"

No need to apologise. I do appreciate your input, but please refer to my questions and not my statements...

Edited November 18, 2013 by DevilSolution
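For what it's worth, the "hash table" idea above can be sketched: precompute digest-to-password mappings once, so that each stolen hash becomes a dictionary lookup instead of a fresh brute-force run. The wordlist below is invented for illustration; real attacks use enormous lists or rainbow tables (compressed precomputed tables).

```python
import hashlib

def build_table(wordlist):
    """Precompute digest -> password once; every later lookup is O(1).
    Trading memory for time like this is why precomputed tables beat
    live brute force."""
    return {hashlib.md5(w.encode()).hexdigest(): w for w in wordlist}

def crack(stolen_digest, table):
    """Instant lookup: no hashing at all at attack time."""
    return table.get(stolen_digest)

table = build_table(["letmein", "hunter2", "password1"])  # toy wordlist
stolen = hashlib.md5(b"hunter2").hexdigest()
assert crack(stolen, table) == "hunter2"
```

Per-user salts defeat this entirely: the table would have to be rebuilt for every salt, which is just live brute force again. That is why unsalted md5 password stores are considered broken.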
AtomicMaster Posted November 19, 2013

"Because its the only sure fire way to ensure security."

Firstly, OSI is just a conceptual way to represent a conceptually separated model of a network. Nobody needs to build an OSI model for a network card, and if that need arose, surely nobody would do it in assembly; there's no need. Secondly, writing something in assembly doesn't magically make it fast or secure; most of the time it just makes it needlessly complex. Certainly not undoable.

"I'm not sure why you went into detail over each aspect of the attack's that i used as an example of why the process is longer."

The process is shorter. A competent dev can fully implement multiple attacks a day (3-4), starting from nothing more than the description given in a typical CVE.

"Dont forget the ISP plays a big role in these kinds of attacks, i know because ive had a warning."

Lol, let's pretend I am an attacker. If I'm in a pinch and need to infect many machines at the same time, I'm not infecting 100,000 machines from my computer; that would be stupid. What I am doing is going to a newly published list of stolen credit cards and renting a botnet. I am doing this from my car, in the parking lot of some coffee shop, or in some neighborhood with open wifi. Then, from another coffee shop at another time, I fire in code for the botnet; the code includes my worm. I do this from a laptop with a live CD running from a thumb drive, and with a wifi card that I destroy immediately after use (the card and thumb drive, not the laptop). If I am not in such a hurry, I just MITM a coffee-shop wifi and inject code into [fill in your favorite social media site here] that downloads my virus the next time you download anything from the web, and I tell it to cache for 10 years while I'm at it. All my virus has to do is replicate over local media; as long as it stays out of detector range and doesn't do weird things on the computer, it will be able to spread wide before it is detected.
So, what ISP are you talking about? Even if I were theoretically stupid enough to upload the first round of worms from my own home, transferring, say, 3 megabytes of data (a horribly bloated worm) to even 100 computers found with a single Shodan query still puts me at a 100:1 ratio of infections compared to your method, and that's at generation 1 (and no ISP will wonder about 300 MB of upload). After generation 1, this spreads in a progression determined by how I chose to propagate the virus, without involving my connection at all, while you have to rely on other people seeding, or seed copies yourself... yeah...

"then you go through each item i listed and each requires some application or knowledge."

And digging through 50 million lines of code requires no knowledge or any special build environments or applications at all? How do you imagine this works: you just hit a "Compile" button?

"At the top level you develop your own vulns but that requires finding exploits in vendor software and the sort, an arduous business."

In 2012 there were 1765 CVEs with a CVSS score of 7-10 (that's High, usually meaning code execution), of which 1675 were network-access vulnerabilities, 1634 required no credentials, and 1466 had low or medium complexity... That's still over 4 a day discovered. For a person with the level of knowledge needed to put a keylogger and a backdoor into 50 million lines of Windows code, and to build the OS, it won't take very long to find a fully exploitable vulnerability: days, maybe? I mean, if they have the Windows code, they can just run static analysis on it; I am sure that will bring up plenty of problems and places to look into.

"You can collect vulns in a databse like on metasploit or openVAS (which is open source) and use methods of tracking new ones on the market but you either have to be a pen tester or pro hacker to really do damage."

Or you can be a curious person who likes to break things (i.e.
a geek or a hacker, which are fairly synonymous terms) with no obligations, time on your hands, and access to the internet, i.e. almost any college student.

"Point is to take computers down like this takes alot of practice, expertise and programming knowledge. My example requires basic knowledge if you had access to such things."

You must be joking...

"Rootkits / bootkits are harder to distribute than you make out and again alot more knowledge is required than simply getting hold of some source code and recompiling with new code added..."

Oh yes, I totally forgot about the "Add Invisible Back Door" checkbox in the Visual Studio project parameters...

"The OP was specifically asking about detecting that level of sophistication and whether its actually possible."

I am saying that this level of sophistication doesn't require compiling and distributing an OS. This happens in the real world (examples were given), and since we know about these things, there is a way to detect them, usually when they are activated and do bad things; malware detection, reversing, and identification are reactionary sciences by nature. One can significantly limit such problems just by using open systems.

"Bootkits / rootkits / malware etc require incompetence of the user"

Nope, sorry, that is incorrect. With the sophistication of modern malware, no user interaction is required at all; you simply will not know when or how you were infected, even following fairly strict security practices.

"so the OP still makes perfect sense"

It is already being done, just not by a lone attacker who gained access to 50,000,000 lines of code and now wants to distribute their own ISO; that's just silly. Both US- and Chinese-based companies have included backdoors in their products, but that's the actual developers being told by the state to do something.
It's just needlessly complex, unrealistic for a non-corporate or non-government aggressor, and clearly not even the approach someone with hundreds of millions of dollars of funding would take, as Stuxnet and Duqu clearly showed.

"kiddie script"

"Kiddie script" as a term makes no sense: it is not a security term, and not even a term Google can find. "Script kiddie", on the other hand, is both a security term and makes sense.

"instant access through use of a kiddie script or hash table, hash table table is brute forces big brother"

Again, no idea what a kiddie script is. I have a very good idea of what a hash table is, but no idea of its relevance to the topic. What are you doing that requires "speeding up" via a hash table?
DevilSolution Posted November 19, 2013 (Author, edited)

You're misdirecting the thread away from analysing embedded code from source, and the threat it poses, towards which method is best for building a botnet or such. I know the process fairly well; I'm no expert, but I've dabbled. I'm more concerned with programming, and systems programming / high performance specifically. This is what I'm concerned with: writing kernel code, and the dangers that can come with any OS that you yourself didn't program. As regards OSI, that's the model by which all computers communicate, so on any networked machine that's where trafficked data will be processed.

P.S. A script kiddie uses a kiddie script. The context I used it in was that a *kiddie script* gives you instant access; essentially anyone can use that script to gain instant access. And again, even after asking in the last post, I'd prefer you to answer my questions rather than debate single phrases I've used; if you don't know the answer, then no worries.

Edited November 19, 2013 by DevilSolution
Enthalpy Posted November 19, 2013

"What? how you gona get those files onto someone elses OS?"

["These files" = the complete files of Windows XP that you can replace at will without Windows noticing: quite important files, run every time with zero check.]

You can replace these system files by hand on XP, with zero check by the OS, even from a user session without administrator rights. They have the capacity to do anything to the complete software installation, especially to run a remote control, and no antivirus will block them. XP does NOT check them, which is a BIG disappointment to me.
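The check Enthalpy wishes the OS performed can be sketched as a baseline-and-verify pass over a set of files. (XP did ship a Windows File Protection feature covering some system files; the complaint here is that the coverage had gaps and was easy to get around. Modern systems use signed catalogs instead.) The paths are whatever you pass in; nothing below is a real Windows path.

```python
import hashlib
from pathlib import Path

def snapshot(paths) -> dict:
    """Record a trusted baseline digest for each file."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}

def changed(baseline: dict) -> list:
    """Re-hash and report any file that no longer matches the baseline."""
    return [path for path, digest in baseline.items()
            if hashlib.sha256(Path(path).read_bytes()).hexdigest() != digest]
```

The catch, in keeping with the thread's theme: the checker and its baseline must themselves live somewhere the attacker can't rewrite, or the replaced file simply gets a replaced digest to match.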
AtomicMaster Posted November 20, 2013

I am only arguing the silliness and needless complexity of the suggestion in your original question. As to the original question itself, as far back as the first response I said: you have to test, verify claims, and always approach everything you use with caution; one thing about open systems is that, however difficult it may be, they are at least open to auditing.

There are tools for static and dynamic analysis which can, to some extent, examine source code for bugs, and such systems could potentially use heuristics and watch for certain actions to hunt for secret backdoors, but those would be fairly easy to bypass. There are sandboxing methods for software, but those are reactionary: they look at what the software does in order to classify it. No such systems are in play for monitoring running full-blown OSes (well, ish), and again, they would be very reactionary. There is currently nothing, to my knowledge, that can examine a full system and tell whether there is hidden code in it meant to provide a secret backdoor. Every approach I can think of, both with and without the actual source code, can be bypassed. Unfortunately, the context-free grammar of programming languages makes things easy to hide, and the systemic nature of software such as an OS makes some techniques I am thinking of not even applicable.

The problem you propose is very involved and difficult, perhaps unsolvable. As in math, a simpler problem that closely resembles the original can be a way to find out how to solve the original. So, to try to come up with some answers, let's make this a lot simpler. Let's say that what you have is a website, a modern one. How do you build javascript that detects whether the other code on the page is malicious?
Looking at the different parties involved in the thought experiment: On one hand, as a system user, I am oblivious to the fact this code exists. On another hand, as someone trying to break this system, I have everything in front of me, including the protection mechanism, so I win by default. Or do I? On the third hand, if I am the one writing this, I have no way to know whether the original code provided by the website developers already has backdoors built in, and if it does, how can I, and should I even try to, detect those holes? On the fourth hand, if I am the website, I have no real way to say definitively whether I can trust the security introduced by this code that verifies my code, and whether it doesn't itself inject new ways of getting owned.

So as far as I can see, there are four separate parties: a black box; a black box operating inside a black box to make sure the outer black box is indeed a black box; a user who relies on the black box being a black box; and someone who thinks one of the black boxes is actually a puzzle box with a treat inside. None of these parties can trust the others, and yet they have to depend on each other for this system to work correctly and safely. The first step to the solution, and feel free to correct me here, is to solve, or at least simplify or even fully dismiss, the trust problem. Let's start there?
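The "heuristics to hunt for secret backdoors" idea from the post above, and the reason it is fairly easy to bypass, can both be shown with a toy: grep-level patterns over source text. The patterns are invented for illustration; real analyzers work on parsed ASTs and data flow rather than regexes, and a backdoor author simply writes around whatever pattern list the tool ships with.

```python
import re

# Toy static-analysis pass: grep-level heuristics over source text.
RED_FLAGS = {
    "hard-coded secret compare": re.compile(r'==\s*["\'][0-9a-f]{8,}["\']'),
    "raw exec of external data": re.compile(r"\bexec\s*\(|\bsystem\s*\("),
}

def audit(source: str) -> list:
    """Return the names of any red-flag patterns found in the source."""
    return [name for name, pattern in RED_FLAGS.items()
            if pattern.search(source)]

assert audit('if token == "deadbeefcafe": admin()') == ["hard-coded secret compare"]
assert audit("print('hello')") == []
```

A backdoor that builds its magic token at runtime, or hides behind an innocent-looking off-by-one, sails straight past this, which is the post's point: detection is reactionary, and the defender's pattern list is always one step behind.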
DevilSolution Posted November 20, 2013 (Author, edited)

(quoting AtomicMaster's black-box thought experiment above)

I like the analogy a lot; it explains the issue rather well. I suppose trust matters when you have something to hide: maybe you use online banking, hold confidential data, or simply prefer privacy. The trust works on multiple levels. Primarily you give your full trust to the OS you run; secondly, you trust the vendor software that has low-level access, like drivers. After that, you put your trust in whichever software you choose to run, whether a web browser, a game, or antivirus software. The lack of trust comes from knowing that certain loopholes exist and can be exploited at any stage of the above list. Most scary for me is that everyone needs an OS, though.
You can choose between an array of web browsers, or roll your own GET functions and parse the returned HTML yourself if you wanted; that's fairly trivial (in other words, there's a lot of this software, so if one specific browser is especially vulnerable you probably just got unlucky in using it). Each driver is specifically tailored by the vendor, who has already been paid for their contribution, and the same applies to all commercial third-party software, really. We're essentially left with pirated software (not excluding a fresh OS rip from The Pirate Bay) and vulnerabilities in third-party software that further escalate access into the OS, etc. I mean, if you break the commercial chain at any point, you're at risk, right? Also, at the core of what I'm curious about: if you built your own kernel and OS (let's say we keep the graphics libs etc. but re-write the networking modules), who would be able to exploit it? Edited November 20, 2013 by DevilSolution
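The "roll your own GET functions and parse the HTML" idea really is the easy part, which is worth seeing concretely. A minimal sketch, assuming only the Python standard library: the fetch helper is shown for completeness, but the demo parses a hard-coded snippet, since everything that makes a real browser hard begins after the GET returns.

```python
# A hand-rolled "GET and parse" sketch. Fetching is one call;
# parsing real-world pages (malformed markup, scripts, layout)
# is where browser complexity actually lives.
from html.parser import HTMLParser
from urllib.request import urlopen


class LinkCollector(HTMLParser):
    """Collect href attributes from <a> tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def parse_links(html):
    # Naive parse: just pull out the link targets.
    parser = LinkCollector()
    parser.feed(html)
    return parser.links


def fetch_links(url):
    # The raw GET: one line. Everything hard happens after this.
    with urlopen(url) as response:
        return parse_links(response.read().decode("utf-8", errors="replace"))


print(parse_links('<p>hi <a href="/a">a</a> <a href="/b">b</a></p>'))
# prints ['/a', '/b']
```

This is fine for a toy, but it ignores TLS details, JavaScript, CSS, redirects, and broken markup, which is exactly the gap between "trivial" and a modern browser.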
AtomicMaster Posted November 20, 2013 Posted November 20, 2013

Also, at the core of what I'm curious about: if you built your own kernel and OS (let's say we keep the graphics libs etc. but re-write the networking modules), who would be able to exploit it?

It may be more difficult to exploit remotely, without physical access to the device and thus without any useful debugging ability, but it is still possible with some effort. When your networking code segfaults because of my packet, I get direct feedback (or a lack of any response) about it happening, so with a significant amount of trial and error I could potentially fully exploit even such a system. It becomes easier if I can get a copy of your system running locally, or if I can get remote feedback (a crash dump, perhaps) or remote debugging ability from your system. On top of this, if you run other people's software on top of your OS, the weird machines I build inside that software when I exploit it will still work as they did before, and on other platforms. I should say that there is no such thing as completely secure software of any moderate or high complexity... at least none yet.

I suppose trust comes if you have something to hide, maybe you use online banking, have confidential data or simply prefer privacy.

Trust is part of any interaction you have with any person or object; it is implicit and thus easily given. If you are in the middle of a desert, you see a chair, and you come over and sit down, you automatically trust that the engineers who designed the chair designed it so it doesn't kill you, that the people who produced the parts followed the engineering and material specifications, and that the people who put the chair together did their job. You don't make a conscious decision to trust all these people and processes you don't know; your trust is implied by the action itself.
The trust works on multiple levels: primarily you give your full trust to the OS you run; secondly you trust the vendor software that has low-level access, like drivers. After that you put your trust in whichever software you choose to run, whether it be a web browser, a game, or anti-virus software.

There are many levels of trust missing from this list, and many, many more parties involved.

The lack of trust comes from knowing that certain loopholes exist

The lack of trust may also come from simply choosing not to trust. I should throw in another term here: reliance. For example, I may not trust the ISP I use with my data, and yet I have no choice but to rely on them to transfer it. Or I can trust a network, but I may not be able to rely on that network to transfer my data.

Most scary for me is that everyone needs an OS, though. You can choose between an array of web browsers, or roll your own GET functions and parse the returned HTML yourself if you wanted; that's fairly trivial. Each driver is specifically tailored by the vendor, who has already been paid for their contribution, and the same applies to all commercial third-party software. We're essentially left with pirated software and vulnerabilities in third-party software that further escalate access into the OS. I mean, if you break the commercial chain at any point, you're at risk, right?

You have a choice of an OS, and there are plenty of choices there, just like there are in browsers.
You don't choose a browser for how well it parses a GET request, and I can tell you that writing a browser is no trivial task; modern browsers are extremely complex. Not quite as complex as an operating system, but still complex enough to have teams of hundreds of developers, testers, and QA people working on them full time, putting in thousands and thousands of man-hours every month.

I have a few machines with a multitude of operating systems, installed and virtualized; some I trust more than others, none I trust completely. Some are completely free, including all the software I have chosen to run on them, none of which is pirated; others are not (not all of the software I use is free as in freedom, and not all of it is free as in free beer).

As to the last part, it is fair to say that there are often more vulnerabilities in piratable software (regardless of whether you run a pirated or a legitimate copy) than there are in open and free software. This is mainly due to release cycles and the vendors' ability to adapt to modern-day security... a topic for a different discussion, perhaps.
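The remote "trial and error" loop described above, where a timeout or reset is the attacker's only signal that a packet did something interesting to the target, can be sketched as a toy probe loop. This is purely illustrative, not a real fuzzer (real tools do vastly more, with coverage feedback and smarter mutation), and any host/port it would be pointed at is a placeholder.

```python
# Toy illustration of remote "trial and error": mutate a known-good
# input, send it, and treat silence as a hint that something may have
# broken on the far end. The only feedback is the connection behavior.
import random
import socket


def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip a few random bytes in a known-good input (length preserved)."""
    data = bytearray(seed)
    for _ in range(rng.randint(1, 4)):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)


def probe(host: str, port: int, payload: bytes, timeout: float = 2.0) -> bool:
    """Send one payload; True if the service answered, False on silence."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(payload)
            return bool(sock.recv(1))
    except OSError:
        return False  # refused, reset, or timed out: our only feedback


def fuzz(host: str, port: int, seed: bytes, rounds: int = 100) -> list:
    """Collect payloads that drew no response: candidate crash inputs."""
    rng = random.Random(1)  # deterministic, so runs are reproducible
    suspicious = []
    for _ in range(rounds):
        payload = mutate(seed, rng)
        if not probe(host, port, payload):
            suspicious.append(payload)  # maybe a crash, maybe just noise
    return suspicious


# mutate preserves length, so a crash-triggering input stays well-formed
# enough to reach the parsing code we are poking at.
print(len(mutate(b"GET / HTTP/1.0\r\n\r\n", random.Random(7))))
```

The point of the sketch is how little the attacker needs: no source, no debugger, just the ability to send packets and notice which ones make the target go quiet. A local copy of the target, as noted above, turns this blind loop into ordinary debugging.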