
Aeternus

Senior Members
  • Posts

    349
  • Joined

  • Last visited

Everything posted by Aeternus

  1. Not sure exactly what you mean there. If you mean why you can't use setRequestProperty() again, it is because the Content-Disposition and Content-Type headers are specifically for the file data, not for the HTTP request itself. A Content-Type header has already been set in the HTTP headers to inform the server that the data being sent is multipart/form-data and of that form, so if another Content-Type declaration for the file were defined alongside those headers, it would be extremely confusing and the server wouldn't know which to use. You'll note that these headers come after the boundary. This is A) so they are not assumed to be HTTP headers and B) because multiple files/file fields can be sent, each requiring its own Content-Disposition, Content-Type etc. fields, with each file (or indeed plain POST field) separated by the boundary string.

If you are asking why the method used is POST, it is simply because a GET request would be problematic: data is sent as part of the URL and there is generally a limit on the length of a URL (e.g. http://www.google.com/?s=booga&b=cheese etc.), so sending file data this way wouldn't work and would simply be impractical. The POST method appends the data to be sent after the HTTP request headers and can in theory be as big as you wish, making it the prime choice for sending data this way. There is also a PUT method, which is extremely similar to POST but is specifically for uploading files to a server (i.e. you actually PUT to a file, so PUT /aeternus/cheese.html would cause the server to write to a file at %HTTP_ROOT%/aeternus/cheese.html, although you would usually have some sort of security involved, and Apache for example allows you to specify a script to handle PUT requests in a variety of situations (per directory, whole server, per location etc.)).

PUT isn't used here as it isn't what you want in this instance: A) it isn't widely used as far as I know, B) it uploads to a specific location and would require server configuration to get a single script to handle it, whereas POST allows a normal PHP script to handle it with little configuration, like you are doing, and C) POST allows multiple fields to be used along with files etc. with multipart/form-data, so it is more suitable for this application.

PS - Heh, I'm not a computer science student YET; I start at Swansea University in late September doing MEng Computing (the 4 year MEng version of the basic 3 year Computer Science course). Link.
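To make the layout described above concrete, here is a minimal sketch of the shape of a multipart/form-data body (the boundary, field name, filename and contents are made-up example values, not from the post): each part carries its own Content-Disposition/Content-Type headers after a boundary line, and the closing boundary gets a trailing "--".

```java
// Sketch only: illustrates the multipart/form-data layout described above.
// Boundary, field name, filename and data are all made-up example values.
public class MultipartSketch {
    static String buildBody(String boundary, String field, String filename, String data) {
        return "--" + boundary + "\r\n"                    // boundary line: leading "--"
             + "Content-Disposition: form-data; name=\"" + field
             + "\"; filename=\"" + filename + "\"\r\n"     // per-part headers, not HTTP headers
             + "Content-Type: text/plain\r\n"
             + "\r\n"                                      // blank line separates headers from data
             + data + "\r\n"
             + "--" + boundary + "--\r\n";                 // closing boundary: trailing "--"
    }

    public static void main(String[] args) {
        System.out.print(buildBody("----3245234", "userfile", "test.txt", "Some text in this file"));
    }
}
```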
  2. HTTP headers are, but those were headers specifically for the file data and so weren't to be set with the HTTP headers via setRequestProperty(). I think the reason it is \r\n instead of \n is because Windows uses both a carriage return and a newline for a line break, for some strange reason, so it keeps it happy. The reason some were just \r and \r\n\r is that println() prints a line, so it appends a newline character anyway; by the time the data actually gets sent you will have \r\n and \r\n\r\n respectively in the raw stream. In regards to Ethereal, it's pretty simple really: capture on the network device you use for net access, do whatever you want to watch (e.g. upload the file with Firefox or similar), stop capturing, and then it'll list all packets that went through that device while it was capturing. A lot of this will be random network traffic, but it lists the protocol being used for each packet, and HTTP is one of the ones it recognises, so if you click the protocol column header a couple of times it will order the packets by protocol and you can scroll down to find the HTTP packets. Once you've done that, click on one and it will be displayed in the bottom window, offering drop-downs that you can select. Select the HTTP headers one or the data one and it will list them, along with the raw data in hex and in plain text, in another bottom window.
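One caveat worth checking rather than assuming: println() actually appends the platform line separator (System.lineSeparator()), which is \n on Unix-style systems but \r\n on Windows. A quick way to see exactly what goes down the wire is to point the PrintStream at an in-memory buffer (a standalone check, not part of the upload code):

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

// Capture what println() actually writes instead of guessing.
public class PrintlnDemo {
    public static void main(String[] args) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        PrintStream ps = new PrintStream(buf, true);
        ps.println("Content-Type: image/jpeg\r");  // trailing \r plus println's own terminator
        String raw = buf.toString();
        // The captured bytes end with \r followed by the platform line separator.
        System.out.println(raw.endsWith("\r" + System.lineSeparator()));  // prints true
    }
}
```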
  3. I'm by no means a Java expert, but after a little fiddling around with Ethereal, watching how Firefox sends multipart form data, I found that you had to specify a boundary for the file contents and should also specify Content-Disposition and Content-Type fields specifically for the POST data, in addition to the normal HTTP headers. I was stumped for a bit as it wasn't working, but I found the boundary needed a leading "--". Here's the working version -

import java.io.*;
import java.net.*;

public class test {
    public static void main(String[] ad) {
        /* Init Variables */
        String filename = "PATH_TO_FILE_HERE";
        String ContentDis = "Content-Disposition: form-data; name=\"userfile\"; filename=\"" + filename + "\"";
        String ContentType = "Content-Type: image/jpeg"; /* Change this, or calculate it */
        try {
            /* Setup URL Object */
            URL u = new URL("URLHERE");
            URLConnection uc = u.openConnection();
            uc.setUseCaches(false);
            uc.setDoOutput(true);
            uc.setRequestProperty("Content-Type", "multipart/form-data; boundary=\"--3245234--------\"");

            /* Open File, Setup Output Stream (Raw HTTP Request) and write to it */
            PrintStream ps = new PrintStream(uc.getOutputStream());
            File pic = new File(filename);
            FileInputStream fis = new FileInputStream(pic);
            ps.println("----3245234--------\r");  /* Boundary, requires the leading "--" */
            ps.println(ContentDis + "\r");        /* Content-Disposition (info on content) */
            ps.println(ContentType + "\r\n\r");   /* Content-Type of the file */
            for (int i = fis.read(); i != -1; i = fis.read()) {  /* File contents */
                ps.write(i);
            }
            ps.print("\r\n");                     /* End of file */

            /* Flush Output */
            ps.flush();
            ps.close();

            /* Read Result */
            InputStream is = uc.getInputStream();
            for (int i = is.read(); i != -1; i = is.read())
                System.out.write(i);
        } catch (IOException ioe) {
            ioe.printStackTrace();
        }
    }
}

Where it says name=\"userfile\", that defines what the field is called, so when you access it from, say, a PHP script, it would be for instance $_FILES['userfile'], then $_FILES['userfile']['tmp_name'] etc.
Example of the HTTP request Firefox sends -

POST /aeternus/test.php HTTP/1.1
Host: aeternus.no-ip.org
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.10) Gecko/20050716 Firefox/1.0.6
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Content-Type: multipart/form-data; boundary=---------------------------41184676334
Content-Length: 174

-----------------------------41184676334
Content-Disposition: form-data; name="UserFile"; filename="test.txt"
Content-Type: text/plain

Some text in this file
  4. Having taken Maths and Physics as well (along with Computing, Chemistry and AS Further Maths), I would completely disagree. I agree to a large extent with the media about A Levels and GCSEs. Numerous topics have been cut out of the A Levels, either because they are deemed too hard or because more time is needed to teach other parts of the syllabus. For instance, in Computing a lot of Boolean algebra and basic circuitry (adder circuits etc.) was taken out, and a lot of people are getting away with basic Access projects where, years ago, you were forced to do a lot more work in Pascal, BASIC etc. In Maths, looking at some of the older papers, it isn't necessarily that content is missing or that there are harder topics; it is simply that the questions posed require a greater understanding of the subject matter. Rather than the method being obvious, you need to be able to think around the problem. In Physics, a lot of the Maths has been dumbed down because a lot of people are now taking Physics and not Maths (which is fair enough) but do not have a firm grounding in Maths from GCSE. This is true to some extent with Chemistry as well. In Chemistry and Physics, when logs were required we were taught the very basics of what they mean, nothing more than "it's this button on the calculator", when a greater explanation would have avoided confusion (problems occurred with those who were taking Physics and Chemistry but not Maths, as Physics used primarily log base e, whereas Chemistry used log base 10, and they confused the two). Basic logs are not really difficult, and looking at some of the older papers a far greater understanding was required of this sort of subject matter. I remember at GCSE being shown some of the GCSE papers from 10 years previous. They were far, far harder.

Again, it wasn't so much the subject matter (although a lot of AS/A Level material was within the GCSE Higher paper) but more the depth of understanding required. As Glider has mentioned, this is a matter of communication, not just spelling. If someone can't understand what you mean, they will often immediately disregard your intelligence and assume you are an idiot, and it can take a long time to gain their respect. Don't get me wrong: if you are dyslexic or only misspell the odd word, then fair enough, but learning how to spell properly for the most part and using grammar properly is very important. When applying for a job, how your CV is presented, how your covering letter is written, any number of things that involve spelling will subconsciously work against you with the interviewer. There are many communities online, such as certain channels on the IRC network this forum has its channel on, that will end up ignoring or outright banning some users because their language skills are so poor and people get tired of reading the MSN/AOL-type language that some people find acceptable. I'll admit I used to talk a lot like that (mostly due to overuse of MSN, which I now avoid like the plague), but then it hit me how much my English skills had deteriorated and I forced myself to write correctly. My grammar and spelling are often far from perfect, but I am trying to improve. My handwriting is atrocious and, again, that is something I wish to improve. Simply writing bad spelling and grammar off as unimportant is unwise to say the least, in my opinion, as it can often have such severe effects on people's opinions of you.
  5. Good point, forgot about that, heh. It's always compiled into mine as one of the default options, so I never really think about it.
  6. Not sure exactly what you mean by "under the terminal". If you mean executing a PHP script via the command line, it's simply "php filename", and the extension won't matter (you are simply passing a file to the PHP interpreter; it really doesn't matter what the extension is, extensions really don't mean that much). If you mean you wish to use a PHP script like any other executable on the shell (i.e. just run ./script.php), then you can, like any other script, put "#!/path/to/phpexecutable" (which will probably work out as "#!/usr/bin/php") as the very first line of the PHP file. Then you can do "./phpfile.php" and it will run without having to type "php phpfile.php" on the command line. If you mean you wish to see and change which extensions link up to the PHP interpreter on the webserver (or the webserver's PHP module), then I'm not exactly sure how you'd do it. I know you can see the AddType declarations that link the extensions to a particular type which has been associated with the module in the mod_php configuration file - you MAY be able to add more extensions using the AddType directive on a per-directory basis using the .htaccess file, but I'm not sure (I would imagine it would be a main-config-only kind of option; try it). You can also run scripts as CGI scripts by simply placing them in the cgi-bin, and these can use a variety of interpreters (by using the interpreter path at the top of the script, as I mentioned).
  7. As far as I am aware, they are both simply extensions for PHP scripts. I've seen people use several different extensions for PHP (especially when they have multiple versions (i.e. 4 and 5) running simultaneously and need to know which interpreter to use for which extension), such as .php3, .php4, .php and .phtml. They are all simply extensions; it doesn't really matter. I could call it .bog or .tit and, as long as I set up the webserver config to use the PHP interpreter on files with those extensions, it would work exactly the same.
  8. Aren't there "Port Forwarding" options? I have a Linksys WRT54G wireless router. It includes a "firewall", although it doesn't seem to do much. Obviously all the ports are blocked by default, but there's a tab called "Applications and Gaming" which has Port Forwarding, Port Triggering etc. under it. You should be able to route traffic on port 80 to port 80 on your computer. Again, I'm not sure exactly how you do it on yours; try checking the manual for port forwarding or googling it.
  9. Sure, as long as you are able to open port 80 (or any custom port you wish to host the webserver on) you should be fine. There are quite a few webservers that will run on Windows XP. I don't think you can install IIS on XP Home, but you can always install Apache (Download). This can be rather complicated if you've never configured Apache before, and if you want to include things such as PHP and MySQL on your "server" it might be best to download something like WAMP. Either way, you can then place files you wish to serve into Apache's HTTP root directory, and people can access those pages by going to http://yourip:portIfCustom/page.html. If you would then like a nice URL, you can either buy a domain and DNS entry or use something like No-IP or DynDNS, which can associate your IP with a domain name.
  10. That was what the "edit .test" was for on the shell prompt. I'd scrolled up a bit to show that; I'd edited the file and just chucked some text in from the command line.
  11. heh, . and .. are references to the current folder and the parent folder. For instance, you can do cd ./someFolderInTheCurrentDirectory/ or cd ../SomeFolderInTheParentDirectory/, or simply cd .. etc. As far as I can see from my own experience, adding a leading dot to the name of a file or folder doesn't seem to hide it here; the only time I've found that to be true is when working with Unix/Linux systems. Proof
  12. heh, god I hate these threads. There are so many people who think that "hackers" (what people generally perceive as hackers) are awesome and that they are the elite of the computer industry. Most of what they do is trial and error: looking for exploits and exploiting already-known bugs. All that is required to do most of this is your garden-variety computer programmer knowledge. There is nothing special about these so-called hackers (as Klaynos has said, they should usually be called crackers) other than that they choose to spend time doing this. Some normal computer programmers will "hack" (again, I use the term in the sense that the media, or even the general public, does - white, black etc. hats) into systems, whether their own or their friends', to expose vulnerabilities and help their friends fix them, or just to have a tit around. It could be something as simple as taking advantage of the lack of input validation on a webpage to allow for SQL injection, or it could be that a games site simply sends the score with an HTTP request from the Flash game, so that you can do a raw HTTP request to fake your score. It could be something a lot more interesting, such as taking advantage of buffer overflows in remote server daemons. There are so many things that could be classed as this "hacking", and most of them are nothing particularly fantastic, nor do they require some sort of genius mindset to accomplish. One better use of the word "hacker" is in the Open Source community, in things such as Linux development, where kernel developers such as Linus Torvalds, Alan Cox etc. call themselves "kernel hackers" to bring to light the sometimes "hackish" (ad hoc, bolt-on) way in which the kernel is written. As Klaynos has said, there are many meanings to the word.
  13. Navbars still shouldn't really be done using tables. They can just as easily be done using a set of <div>s, aligned correctly using CSS. Tables are for showing tabular data, as Sayonara said, such as data from a database, which has fields and headers with various rows of data. Layout should be done with <div>s and other tags plus CSS; this allows you to easily produce content with PHP etc. (dynamic content that changes) and to easily change the design of the site with just a few simple changes in the CSS, affecting the whole site.
  14. I think he means the servers could be in Russia or China, or anywhere in the world, and there are numerous places where the FBI has no jurisdiction.
  15. Yeah, Java doesn't do any additions for you. So you need to be looking for www.xxx.com as the host and /somefile.bla as the file. The reason being, you aren't providing a URL to a browser that then converts it into an HTTP request; you are providing parameters for a socket connection and then for the GET request line in the HTTP request, so the / from http://www.xxx.com/ will not carry over.
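As a sketch of that (www.example.com is a placeholder host; swap in the real one), here is roughly what the browser builds for you: the request line carries only the path, while the host goes into the Host header and the socket connection.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

// Sketch: a raw HTTP GET over a plain socket. "www.example.com" is a
// placeholder host; the request line uses only the path, never the full URL.
public class RawGet {
    static String buildRequest(String host, String path) {
        return "GET " + path + " HTTP/1.1\r\n"
             + "Host: " + host + "\r\n"
             + "Connection: close\r\n"
             + "\r\n";
    }

    public static void main(String[] args) throws IOException {
        String host = "www.example.com";
        try (Socket s = new Socket(host, 80)) {
            s.getOutputStream().write(buildRequest(host, "/somefile.bla").getBytes("ISO-8859-1"));
            BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
            for (String line; (line = in.readLine()) != null; ) {
                System.out.println(line);  // echo the server's raw response
            }
        }
    }
}
```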
  16. With mine, it loads .html in preference to .htm (I assume you are talking about index files), but that is probably due to the order you specify them in the webserver config (I'll see if I can fish out the line in the config now). [Edit] Found it -
  17. The difference between URI and URL seems to be that a URI can address anything on the Web, whereas a URL seems specific to documents and some other things. To be honest that doesn't really make much sense, and for all intents and purposes I'd say they were the same, but I'm sure someone will point out the error of my ways - http://www.webopedia.com/TERM/U/URI.html http://www.webopedia.com/TERM/U/URL.html

The difference between http://www.xxx.com/ and http://www.xxx.com is practically nothing. www.xxx.com is the server address (which is resolved via a DNS query to an IP address that can be accessed more directly); this is connected to via a socket-based connection on port 80 (the default port), and then a GET or POST request is made to the server for a particular file. If the URL was http://www.xxx.com/cheese, the request line would be "GET /cheese" followed by various request data (any cookies, acceptable languages and extensions etc.). Accessing just http://www.xxx.com or http://www.xxx.com/ will do "GET /" (the browser will add the / for you if you don't put it on the end), which just means trying to access the base directory that that server or hostname/DNS entry resolves to (this will then be mapped by the webserver to perhaps an index file (index.html, index.php etc.), a directory listing, or perhaps an error page). The point is it just says "I want the base dir", but you don't really need to type it in most browsers as they will automatically use it anyway. [Edit] And as skuinders says below, if you use an address other than the simple hostname (i.e. you are looking for a specific file) and you specify http://www.cheese.com/test and there is no file "test", most good webservers will redirect you to the directory if it exists. It is, however, the browser that adds the / if you do not type it with a simple address URL, as can be seen if you watch the packets sent.

The difference between .htm and .html is, again, pretty much nothing. As far as I am aware, .htm was used when there was a small limit on the number of characters in a filename, meaning that using .htm and sticking to the typical 3 letter extensions was better and meant that you could have a decent file name. Now that the limits are much larger, .html is used, as it is only an extra letter and it states exactly what it is. As far as I am aware, not many people use .htm any more and the standard generally seems to be .html for static HTML pages (although you can send a text/html Content-Type with the file and use any other extension; this is done with things like PHP, Perl, ASP etc.).
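A small check of the "browser adds the /" point, using java.net.URL and the same placeholder hosts as above: the URL class reports the path exactly as typed, so the trailing / really is added by the browser before the request is made, not by the URL itself.

```java
import java.net.MalformedURLException;
import java.net.URL;

// java.net.URL reports the path exactly as typed; it does not add a trailing "/".
public class PathDemo {
    static String pathOf(String spec) {
        try {
            return new URL(spec).getPath();
        } catch (MalformedURLException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("[" + pathOf("http://www.xxx.com") + "]");   // prints []
        System.out.println("[" + pathOf("http://www.xxx.com/") + "]");  // prints [/]
    }
}
```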
  18. DLLs are not the same as EXEs. They are not made to run on their own but contain compiled code for various functions etc. An EXE (or a binary, as .exe is simply an extension) can call a function from a DLL. This allows compiled code for various functions to be kept separate from the main code (code used across many different applications requires only one DLL), which means that the code doesn't have to be repeated in the EXEs of each application, reducing the size of the binary, and that if changes need to be made to the inner workings of a function in the DLL, they can be made without altering the EXE (as long as the function/procedure parameters and call remain the same). As for which languages can be compiled to DLLs, I think there are quite a few; it is less a matter of which languages and more of which compilers will compile to them. It is certainly not just .NET programming languages, as DLLs have been around a lot longer. However, DLLs, as far as I am aware, are a Windows idea (not libraries in general, but the actual specifics of DLLs in Windows; on Linux or Unix the same thing would be done with libraries, which are effectively the same thing but implemented slightly differently, with things such as .so files (again, just an extension)), and so as far as I know a Windows-compatible compiler is required, passing the correct options to compile to a DLL (check out Winamp plugins and how to make them; it involves making a DLL). http://en.wikipedia.org/wiki/Library_%28computer_science%29#Dynamic_linking
  19. The reason being that by doing so (not simply leaving certain features out but deviating from standards that they agreed to and supposedly abide by), they make it difficult for website designers to write for multiple browsers. This forces website developers either to spend longer writing pages for both (IE and standards-compliant browsers) or to write just for IE. The result is quite a few "developers" writing solely for IE, which propagates a negative image of other browsers, as some sites will not work with them due to being written for IE's broken implementation. As you said, yes, this is damn good business. It is also a monopolistic tactic that damages the web as a whole, because not only does it prevent appreciation of web standards (of which you are a prime example) but it also stifles competition (due to this odd and broken way of implementing their own self-proclaimed standards). Now, you can shout and scream all you want about how other browsers can simply add in support for all these inaccuracies in the way IE does things, but this isn't always the case, and either way, as IE is proprietary software and the docs will only be released after changes are made, other browsers will always be left in the lurch, having to play catch-up to these "IE fixes", which again means they are being downtrodden by IE. Now, I'm not saying MS are evil, and I cannot claim that this was definitely MS's motive, but it definitely bears thinking about, as the effects of the action are there whether the intent was or not. Your point that following standards doesn't matter and that all of our points are moot may seem good to you, but as I have evidenced with a weblog, the IE7 developers seem to disagree, as they are fixing these things and see them as a problem (not taking anything away from the previous point, as pressure and public knowledge of it could overturn the decision). As I said, I hope these problems do get fixed in IE7 and I will certainly try it out.

"I didn't post a link to my site so everyone can see how close my web site follows the standards. In fact, a lot of that code was written by the web site design software I use."

Indeed, one can tell that by looking at the source for that page. The "generator" meta tag, along with the numerous Homestead comments (<!-- hs:?? -->), gives it away. Most design tools do this for statistics.
  20. While your point about a PHP script detecting which browser the client is using and providing different JavaScript code and variables is indeed possible (in fact easy, using $_SERVER['HTTP_USER_AGENT']), it doesn't seem to be what is happening here (check it in IE). Yes, it can serve different ads; PHP is all about dynamic content. But in this case I don't think it's random; rather, it is determined by the GET variables passed to the script and possibly randomised by eBay, as I have received the same code each time. ---------- <tag /> is used because XHTML requires either a closing tag or an indication that a tag is singular; <tag /> indicates it is singular, so the parser knows not to look for a closing tag. http://www.w3.org/TR/xhtml1/#h-4.6
  21. Isn't that because it advocates the use of the "sudo" command rather than logging in as root (the idea being that the less you are logged in as root, the less damage you'll do and the less chance of you allowing malicious software root access)? It sets the actual root password to a hash that corresponds to no typable key combination (i.e. you could never actually enter it as a password), so root is effectively disabled (for instance, a hash that would only occur if the password were six null/string-termination characters, which is impossible to type and probably to store, although the hash itself is possible). As far as I am aware it uses the "passwd -l" option to do this -
  22. There's always Kubuntu