Aeternus
Everything posted by Aeternus
-
Are you sure these aren't just .class (bytecode) files rather than .java (actual text source) files? .class files aren't the actual source, although I'm sure they could be decompiled back to some form of source. They are precompiled bytecode (kind of half compiled, i.e. compiled into something the Java Virtual Machine can interpret/parse easily) rather than actual text source code, and are what gets used when you include those classes/packages in your application. I know Sun has allowed things like OpenSolaris, and I don't know whether they might open a few source files here and there, but as far as I know, Java itself and the code that goes along with its architecture is closed source. Apparently they are considering it, though, although that could just be a feint to try and buy favour with the open source advocates and community without actually having to do it - Link
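Python draws the same source-versus-bytecode distinction (.py text source versus compiled bytecode), so as an illustrative sketch of the idea (not Java itself), you can poke at a function's compiled form directly:

```python
import dis

def greet(name):
    # This function's compiled bytecode lives on its code object, much like
    # a Java .class file holds bytecode rather than the .java text source.
    return "Hello, " + name

# The raw bytecode is just bytes -- not human-readable source.
print(type(greet.__code__.co_code))   # <class 'bytes'>

# A disassembler (the rough equivalent of javap for Java) can recover a
# readable listing, but it is a reconstruction, not the original source text.
dis.dis(greet)
```

The same holds for a decompiled .class file: you get back *some* valid source, but comments and often names are gone.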
-
Here is a good example: this very page. The scripts used on scienceforums.net are written in PHP and have the .php extension. This is so that when a request is made for said file, the web server knows that the file needs to be processed/interpreted first by the PHP interpreter before sending the output to the client/user. However, the .php extension on its own means nothing to the web browser. Therefore, by default the web server is usually set up to send the text/html MIME type (note the Content-Type: header) so that the browser knows to process the content being sent to it as HTML rather than as image data or XML.

You might ask, "Well, why not just have browsers always treat .php files as HTML?" This isn't possible, as PHP files/scripts are simply programs that process data and output data. The output might not always be an HTML document; it could be image data (using PHP's GD library), or XML, or simply plain text. In these cases the PHP script can override the default Content-Type and send a different MIME type using the header() function. If the data were still processed as text/html in these cases, it would be displayed wrongly (for instance, the image data would just be gibberish). Request Response (Minus Content(HTML))
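To make the point concrete, here is a minimal sketch (in Python, not scienceforums.net's actual PHP) of why the Content-Type header, not the URL's extension, tells the client how to interpret a response body. The `http_response` helper and the field values are my own illustration:

```python
# Build a raw HTTP response for the same path with different MIME types.
# The browser never sees the server-side file extension; it only sees this.
def http_response(body: bytes, content_type: str) -> bytes:
    # The Content-Type line is what the browser uses to pick a parser:
    # text/html -> render as HTML, image/png -> decode as image data, etc.
    headers = (
        "HTTP/1.1 200 OK\r\n"
        f"Content-Type: {content_type}\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
    )
    return headers.encode("ascii") + body

html = http_response(b"<html><body>Hi</body></html>", "text/html")
png  = http_response(b"\x89PNG\r\n\x1a\n...", "image/png")

print(html.split(b"\r\n")[1])  # b'Content-Type: text/html'
print(png.split(b"\r\n")[1])   # b'Content-Type: image/png'
```

A PHP script switching its output to an image would do the equivalent with `header('Content-Type: image/png');` before emitting the GD data.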
-
I read a little about the Centrino/Pentium M when I was looking to buy my laptop. From what I have read, and as Ollie mentioned, it's an awesome processor, often easily outperforming other processors at the same clock speed and even at slightly higher clock speeds. In a lot of reviews, comparisons are made between a Pentium M 1.3GHz and a Pentium 4 2.4GHz etc., and with the Dothan Pentium Ms making it up to 2.13GHz they can easily compete with the more desktop-oriented processors while still maintaining a much lower power consumption due to the changes in the architecture. So a 1.8GHz Pentium M/Centrino is a damn good laptop processor in terms of performance and portability (power consumption etc.), as the battery lives quoted for Pentium M laptops are often close to double those of laptops with other processors. Good buy in my opinion. Article on Pentium M/Centrino
-
1) For the first question, it looks like it's simply a case of working out the volume one atom takes up, converting this to dm^3, multiplying it out to get the volume 1 mole of these atoms would take up, and then taking this as a ratio/percentage of 22.4 dm^3 (1 mole of gas at STP).

So that's the volume of a sphere:
(4/3) x pi x r^3 = (4/3) x pi x (4.2e-8)^3 = 3.1033e-22 cm^3 = 3.1033e-25 dm^3 (1 dm^3 = 1000 cm^3)

Then multiply by Avogadro's constant (the number of molecules/atoms of an element/compound in one mole of that element/compound):
6.022045e23 x 3.1033e-25 = 0.186887 dm^3

Then work that out as a percentage of 22.4 dm^3:
(0.186887/22.4) x 100 = 0.8343% = 0.83% (2sf)

2) For question 2 you do just add, but it looks like you have to convert all of them to mmHg first. That's 720 mmHg, 400 mmHg and 650 mmHg, which when summed is 1770 mmHg. Divide that by 760, as you suggested, and it comes to 2.33 atm (3sf).

3) My guess on the third question is that, as the formula for the reaction is

2CO + O2 => 2CO2

and you start with the same number of moles of CO and O2 (each with partial pressure 0.5), the CO is all converted to CO2 (partial pressure 0.5) while only half the O2 is consumed. With half the number of molecules left to collide with the walls, the remaining O2's partial pressure is halved to 0.25, so adding these together gives 0.75. Not sure if that's right, but it's a theory.
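For anyone who wants to check the arithmetic in (1) and (2), here is the same working as a short script (values taken straight from the post: r = 4.2e-8 cm, 22.4 dm^3 molar volume at STP):

```python
import math

# Question 1: fraction of the molar gas volume actually occupied by atoms.
r_cm = 4.2e-8
avogadro = 6.022045e23

v_atom_cm3 = (4 / 3) * math.pi * r_cm**3    # volume of one atom (sphere)
v_atom_dm3 = v_atom_cm3 / 1000              # 1 dm^3 = 1000 cm^3
v_mole_dm3 = v_atom_dm3 * avogadro          # volume of one mole of atoms
percent = v_mole_dm3 / 22.4 * 100
print(round(percent, 2))                    # ~0.83

# Question 2: sum the partial pressures in mmHg, convert to atm (760 mmHg = 1 atm).
total_mmHg = 720 + 400 + 650
print(round(total_mmHg / 760, 2))           # ~2.33
```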
-
Not exactly in line with what you want, but something related to it nonetheless, and interesting considering the topic - http://www.guardian.co.uk/life/feature/story/0,13026,1496690,00.html?gusrc=rss Something that might be useful (not the actual software, but the manual ways of analysing the weather could be useful in your paper for comparison and might help you further understand the way the trends are analysed on a grander scale using computer software/algorithms) - http://www.theweatherprediction.com/ Also, you might want to see if you can find this book - http://portal.acm.org/citation.cfm?id=603326 (look in the library, order it in, or otherwise), as it seems to have a lot of what you're looking for.
-
It says here that the programs must be multithreaded. Dual cores, or dual processing in general, are advantageous when multitasking, but when running a single-threaded application you can't really process bits of it with each core or processor, as some bits/chunks may depend on the results of others and may require certain other sections of the code to have been executed beforehand. This may not be obvious from the way the "code" (I say code, but obviously at this point it is simply binary instructions being passed to the processor) is laid out or implemented (hence multiple threads; well, I say threads, but child processes are forked on Linux/Unix systems instead, though the idea is similar), and so it is extremely hard to simply split single-threaded programs up to be executed simultaneously on multiple processors. Not to say that it isn't possible: analysis of the instructions ahead of the actual processing could be done to try to provide some form of dependency checking, and I think this might be what is being done with the new Cell architecture (as the PPE (PowerPC Element) divides the job into discrete tasks to hand out to each of the 8 cores on the Cell processor). This may be what is being done by the CU (Control Unit), as you said, but I don't think it is being done to a high degree, as I would imagine it would require more than a simple control unit to do it (perhaps more in line with the power of another core), and I doubt the articles shown would mention the problem if the CU took care of it. If you find I'm wrong, please mention it, because as I said I'm no expert and I'd like to learn more about this (this is only what I can gather from what I've read and know already). Further Evidence
-
I think the advantage of dual-core processors (being made by both Intel and AMD) over using multiple processors in SMP (Symmetric Multi-Processing) setups is that the cores are much closer together and the communications are handled a lot better. When the two cores need to communicate (perhaps for access timings, bus usage, etc., i.e. making sure the two don't try to do the same thing at the same time), they can, in AMD's case, communicate through the new HyperTransport link (specifically between these two cores), which allows for far, far faster communication between the two cores compared to communication via the system bus between two separate processors. Intel still seem to be using the Northbridge FSB interface, so there doesn't seem to be much difference between their dual-core implementation and a classic dual-processor arrangement, but to be honest I don't really know what they've done. Most of what I'm saying I'm getting from a PC Format article on the subject (great magazine, great article). Also, from what I can gather, both cores will be able to use the same cache, obviously increasing the speed of communication between them (as communicating via RAM is several factors slower) and allowing separate threads to access and change shared data more easily (although obviously that can get rather complicated and hard to make robust) and much more quickly. (Not sure if both Intel and AMD are doing this exactly or have different ways of doing something very similar; it seems Intel may be doing this while AMD may simply be relying on their HyperTransport link to allow fast communication between the two cores.)

The major problem with dual core over single core (normal processors) is that a lot of applications still don't make use of threading (splitting the program up into various processing threads which are processed (seemingly, depending on whether it's dual or single core) simultaneously rather than one after the other as a block). This means that if a program is only a single thread, it will only be able to use one of the cores. Dual-core setups still help in the case of multitasking (running multiple programs/jobs (seemingly) simultaneously), and with Intel's Hyper-Threading technology, along with the increasing number of multiprocessor setups in the server industry, I would imagine a lot of software houses have already come out with, or are in the process of changing, a lot of their software to a multithreaded approach.

Links -
http://www.short-media.com/review.php?r=261 (Seems decent enough)
http://www.datafuse.net/page.php?news=289 (Seems OK)
http://www.pcformat.co.uk (Excellent article, although you'll have to actually buy the magazine (they may open the article online after the end of the month))
http://multicore.amd.com/en/Technology/ (Straight from the horse's mouth)
http://www.intel.com/technology/computing/dual-core/index.htm (Straight from the horse's mouth)

In response - there are also other things on the processor, such as registers, various cache elements (different levels of cache), and some have separate memory controllers (I think, not sure), etc. Plus you have all the small interconnections between all these components on the processor, and probably more components that I can't remember or don't know about (I'm sure dave or someone will list more).

P.S. - I AM NOT A PROCESSOR ARCHITECTURAL ENGINEER, so obviously I may be talking complete nonsense. Please feel free to correct me, as I want to learn more about this. This is only what I can gather from articles I've read so...
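As a toy sketch of the threading point above: a job split into independent chunks can be handed to separate threads, and the OS scheduler can then place those threads on separate cores, whereas a single-threaded program has nothing to split. (Caveat: in CPython the GIL limits true parallelism for CPU-bound Python code; the structure is what matters here, not the speedup.)

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each chunk is independent -- no chunk needs another's result first,
    # which is exactly the property that makes the work splittable.
    return sum(chunk)

data = list(range(1_000_000))
chunks = [data[i::4] for i in range(4)]     # four independent slices

# Hand the four chunks to four threads and combine their results.
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total == sum(data))   # same answer, work divided four ways
```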
-
Ada is still around, isn't it? It's used in embedded systems, isn't it? I think they use it at York Uni in the embedded systems course, and I think its use is quite widespread in that area.
-
There are quite a few interpreted languages (and some not) that are quite high level and relatively easy to code in, such as Python, Perl, Ruby, Lisp, Haskell, etc. Some are quite general-purpose (Python), whereas those such as Lisp and Haskell look to have more specific purposes and don't look to be what I would consider on the beaten path (I haven't looked into them all that much). There are quite a few different programming languages listed here - http://www.99-bottles-of-beer.net/abc.html (although some are simply variations on the same language), and googling each (well, the ones that look interesting) will probably yield their advantages and disadvantages.
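For a taste of what the linked site catalogues, here is a Python take on the 99-bottles program (my own quick version, not one from the site):

```python
def verse(n: int) -> str:
    # Handle the singular/plural and the final "no more bottles" case.
    def bottles(k):
        return f"{k} bottle{'s' if k != 1 else ''} of beer"
    last = bottles(n - 1) if n > 1 else "no more bottles of beer"
    return (f"{bottles(n)} on the wall, {bottles(n)}. "
            f"Take one down and pass it around, {last} on the wall.")

print(verse(99))
print(verse(2))
print(verse(1))
```

Comparing the same tiny program across languages on that site is actually a decent way to get a feel for each language's syntax before committing to one.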
-
I know this is the analysis/calculus forum, but looking at it from a purely mechanics point of view it's pretty nice: one equation, resulting in a quadratic which you can solve for the answer (as D or S is displacement rather than distance and so can take a negative value). So in mechanics terms (taking up as the positive direction):

S = -63 (displacement)
U = +8 (initial velocity)
A = -9.8 (acceleration)
T = ? (time)

s = ut + (1/2)at^2
so -63 = 8t + (-9.8/2)t^2
-4.9t^2 + 8t + 63 = 0
t = (-8 +/- sqrt(64 - (4 x -4.9 x 63))) / -9.8
t = (-8 +/- 36.04) / -9.8

We need a positive time, so we need a negative on the top to cancel the negative on the bottom:
-8 - 36.04 = -44.04 -> -44.04/-9.8 = 4.49 s = 4.5 s

Although I'm guessing what you're talking about is more complicated than that (sounds like it) and has to be worked out differently. I also could have made a mistake; I just felt like working through it.
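The working above can be re-checked in a few lines (same numbers: s = -63, u = 8, a = -9.8, solved with the quadratic formula):

```python
import math

# -63 = 8t + (1/2)(-9.8)t^2  rearranges to  -4.9t^2 + 8t + 63 = 0
a, b, c = -4.9, 8.0, 63.0

disc = math.sqrt(b**2 - 4 * a * c)
roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]

t = max(roots)              # time must be positive; the other root is negative
print(round(t, 2))          # ~4.49 seconds
```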
-
Same here, I got it first time (it seemed really simple). Seems an odd puzzle if you ask me.
-
Can anyone confirm whether I came out with the right answer, or did I make a mistake somewhere? Also, as reverse suggests, are things like the sine and cosine rules off limits?
-
I thought Ubuntu was a Debian offshoot (using the bleeding-edge builds for everything rather than sticking with Debian stable)? That seems to suggest the main contributor was FreeBSD.
-
Yeah, it did, I think. Can't you still get Cedega/WineX via CVS (One Link)? OK, that takes more effort and is more complicated to install, but at least it's free. Also, Cedega doesn't work with EVERY game, does it? I think there are still a lot of issues with it with certain games (although the same could be said when simply running the games on Windows). I think someone suggested VMware, but does VMware support accelerated graphics etc.? I think I had a problem with it before (trying to install Ragnarok Online on a virtual machine, and it wouldn't work due to the lack of proper DirectX support etc. with the hardware), although a lot of games will have a software mode. As I think everyone has said, dual-booting is probably the best bet, as then you can play the game in its native environment and get the full whack out of it. I dual-boot on this machine but VERY rarely boot into Windows, as I can play any games on my laptop (Windows only), and I don't play that many games anyway, so it's not really vital for me. Good luck anyway, and I hope you enjoy using Linux as much as I, and I'm sure many posters in this thread, have.
-
That's what I get the triangle out to be from reading. The diagrams drawn by everyone else seem to have DBC and ECB the other way round, which seems strange, but it doesn't really matter, as it's just B and C reversed, so whatever I come out with can simply be swapped round. That's basically the same (ignoring the difference in B and C) as everyone else has. So I figured perhaps the sine and cosine rules could help: work out the lengths of the lines in blue and purple and take away the lines in red and green respectively to get the lengths of X and Y, then use the cosine rule to get the yellow line, and then use that and the angle along with the sine rule to get the angles. I could have made a mistake, but I get XA to be 80 and YA to be 30 using a little Python script I knocked together (rather than typing it all into the calculator or trying to figure out how different things cancelled to give more sensible numbers etc.). The Python script is Here. I may have made a mistake or something; if so, sorry, please correct me.
-
Did you get the code for the two classes with this? Just asking, as the last two questions seem to assume you know the structure of the classes.

1) I'd say it's just so that if the inner workings that produce the height and width values change (i.e. you might calculate them somehow instead of storing them if you changed the class for some reason), the actual interface to the class/object doesn't change (i.e. it's still ObjectName->gridWidth() etc.).

2) For this I would guess it's because static variables are created and exist at a single memory location, before any object is made. Therefore a) if one wants to access the constant before an object is made, they can (since it's a constant this might be needed for some reason, and you say it should be global, so that backs it up), and b) since it's a constant and unchanging, the value won't change between objects, so creating a new variable in memory for each object seems wasteful; using static makes sense (i.e. only one memory address/variable for all instances of that static variable).

3/4) No idea, as I can't see the code and don't want to assume anything to guess.

Not sure if what I'm saying is right, just what I think might be right. I'm sure someone else will come along and correct me.
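The original question looks like C++, but points (1) and (2) can be sketched in Python too: a class-level attribute is stored once and shared by every instance rather than duplicated per object, and an accessor method keeps the interface stable. The `Grid` class and its names here are entirely my own illustration, not the assignment's code:

```python
class Grid:
    MAX_SIZE = 100                  # one shared value, like a static const member

    def __init__(self, width, height):
        self._width = width         # "private" storage behind accessors
        self._height = height

    def grid_width(self):
        # Accessor keeps the interface stable even if the internal
        # representation later changes (e.g. computed instead of stored).
        return self._width

# Accessible before any instance exists, and shared by all instances:
print(Grid.MAX_SIZE)                # 100
a, b = Grid(10, 20), Grid(30, 40)
print(a.MAX_SIZE is b.MAX_SIZE)     # True -- same object, no per-instance copy
print(a.grid_width(), b.grid_width())
```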
-
Red Alert, look at the page source and then look at how PHP, and the HTTP protocol in general, handles forms etc. Then you should be able to work out how to do it. It's really easy if you have any experience with that sort of thing.
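As a hedged sketch of the underlying mechanics (in Python rather than PHP; the field names and values are made up for illustration): an HTML form POST is just key=value pairs encoded as application/x-www-form-urlencoded in the request body, which the server-side script then decodes back into fields (in PHP's case via $_POST).

```python
from urllib.parse import urlencode, parse_qs

# What the browser sends when a form is submitted:
fields = {"username": "red_alert", "message": "hello world"}
body = urlencode(fields)
print(body)                       # username=red_alert&message=hello+world

# What the server side recovers from that same string:
print(parse_qs(body))             # {'username': ['red_alert'], 'message': ['hello world']}
```

Once you can see that round trip, building the request yourself (or from a script) is straightforward.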
-
Just got onto novice level 10 now (it says down for maintenance; not sure if it means it or not, but either way I think I'll leave it now and get on with something productive).
-
I think I'm on Novice 3 or 4 now (can't remember), the one where you have to find out the ISP. I think I know how to do it; I'm just in college atm and will probably be doing other things in the meantime, so I probably won't end up doing it for a while (bit of a waste of time tbh; for most of them I know how to do it, it's just getting their little clues and reading through them (i.e. HTTP protocol, packet sniffing, etc.)).
-
Got to level 10 on apprentice (ended up using an old Java program I wrote when looking at the school webmail). Now off to bed.
-
OK, here's what I get (but I'm probably wrong):

20C5 x (x)^5 x (1-x)^15 = 2 x 20C15 x (1-x)^5 x (x)^15

The combinations cancel out (since 20C5 = 20C15) and we are left with:
x^5 (1-x)^15 = 2 x^15 (1-x)^5

If we take the x's and (1-x)'s over to opposite sides we get:
(1-x)^10 = 2 x^10

and then if we take the x^10 over:
((1-x)^10)/(x^10) = 2

So if we raise each side to the power (1/10) (hence the 2^(1/10) hint) we get:
(1-x)/x = 2^(1/10)
(1-x)/x = 1.072
1 - x = 1.072x
1 = 2.072x
x = 1/2.072 = 0.48263 (probability of getting an even)

Then if you plug this into the equation to check:
20C5 x (0.48263)^5 x (0.51737)^15 = 2 x 20C5 x (0.51737)^5 x (0.48263)^15
0.0206 = 2 x 0.0103 (true)

And since 20C5 = 20C15 (as rCp = rC(r-p)), the whole 15-evens-in-20-throws (x2) statement is also true (as it is basically the same as the 5-odds-in-20 case). Don't know about the last bit. Perhaps just:
10,000 x 10 = 100,000, then 100,000 x (1 - 0.48263) = 51,737?
Although the last bit sounds dodgy and I could easily be wrong.

[Edit] Looking at it, the whole "in sets of 10" thing seems strange. Does it mean: how many sets of 10 would have no even number? If so, maybe:
10C0 x (0.48263)^0 x (0.51737)^10 = 0.51737^10 = 0.001374
and then 0.001374 x 10,000 = 13.74, so 13 or 14?
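The whole derivation above can be checked numerically in a few lines (using the closed-form x = 1/(1 + 2^(1/10)) that the algebra lands on):

```python
from math import comb

# x = probability of an even throw, from (1-x)/x = 2^(1/10)
x = 1 / (1 + 2 ** 0.1)
print(round(x, 5))                # ~0.48268

# Check the original condition: P(5 evens in 20) = 2 * P(15 evens in 20)
lhs = comb(20, 5) * x**5 * (1 - x)**15
rhs = 2 * comb(20, 15) * (1 - x)**5 * x**15
print(abs(lhs - rhs) < 1e-12)     # the two sides match

# The [Edit] interpretation: expected number of all-odd sets of 10
# out of 10,000 sets.
p_none = (1 - x) ** 10
print(round(p_none * 10_000, 1))  # ~13.7, i.e. 13 or 14 sets
```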
-
Looks cool. You've got to love Apple and their nice interfaces. Aren't parts of Safari based on KHTML (KDE's own HTML rendering engine)? Maybe we'll see some of this additional functionality leak back into KHTML? (I don't use it, but I know Apple do give back to some of the open source projects they work with, and it's nice to see.) Although from what I can gather, KHTML seems to be involved in Safari mostly on the page-rendering side of things, so perhaps it won't. Oh well, you never know.
-
Nice. Not something that affects me personally (two Gentoo Linux based PCs (desktop and small server) and a Windows XP Pentium-based laptop), but it's nice to see Microsoft doing something like this, especially considering the work, time and money that has obviously gone into the production of a 64-bit compatible operating system. I'm sure it will certainly appease a lot of people. I'm not sure on the actual pros and cons of their implementation (i.e. how they handle 32-bit compatibility (I've heard the interface (read UI/HCI) might be somewhat like the interface for compatibility with older Windows versions), how much it takes advantage of the various new registers etc. open to them, and how well their compilers allow the use of these new features). If anyone could provide some more information on the topic, it'd be interesting, as I did use 64-bit Gentoo for a while, but due to problems with certain older programs (DBDesigner mainly, as well as a few problems with things like the Free Pascal Compiler) and compatibility, as well as being stuck in the past in Portage a lot of the time (due to the fact that it takes longer to port a lot of code to 64-bit), I ended up going back to 32-bit (not that Gentoo doesn't offer a compelling 64-bit system; it's just that my needs at the minute don't match up with it).