Everything posted by Edtharan

  1. Yes, the Observer is a big problem in QM, but not for the reason that you think. The problem is that people think that when they say "Observer" they mean a "person". This is not the case. By "Observer" they mean "any outside thing that can interact with the system in question". Once you understand this, there is no problem. The "Observer" can be an electron, a photon, and so forth. So even if we "shut our eyes", even if we didn't exist, there would still be interactions going on, and thus there would still be "Observers".
  2. I don't mind. Well, I have been programming computers for the last 22 years or so (I started when I was around 7). Due to an accident, for the last 6 years I have not been able to do much programming, but I design computer games (as a hobby at the moment, but working towards having it as a job). So I am quite familiar with Information Technology.
See, what has happened here is that you created an initial simulation of the person. Then you created an additional simulation of a computer. So of course that computer will require more memory and processor resources, just as if you had created an additional human simulation. However, if you were simulating all the atoms that made up that person, and a table and chair that they could interact with, and then moved the atoms around in the table and chair to make that computer, then no additional memory or processing would be required: the atoms that make up the table/chair and the computer were already in memory and you were already simulating them (although they were not doing anything really interesting). By your reasoning, a computer that is switched off and a computer that is operating would require different loads on a server to simulate the interactions of the atoms in it. This is what I don't understand about your arguments. If the atoms are in memory and are being simulated, then why does putting them into a different configuration change the processing needs? It would be like using the exact same number of units doing exactly the same thing in an RTS game, but just moving them into different places on the map and then getting an "Out of memory" error.
Yes, the result is that the recursive simulations run at slower speeds as compared with the level above them.
Think about it this way. If I was to run a simulation of a computer, but all the parts were disassembled and without power, and then, using the same simulation program, assembled the components and gave them "power", would this take more or less processing than simulating them disassembled and without power? There are fast approximations that can be used (that is, not running the scripts that describe what a component does if it doesn't have power), but these fast approximations require a knowledge of what "power" is in the simulation. Is it mechanical, electrical, chemical, light, and so on? Then the simulation would have to identify the component as something that could do processing, how it did it, what it is made from, etc, etc, etc. There would be less overhead in just simulating the thing completely, instead of relying on these fast approximations.
This is what they talk about in computer game "Physics". They don't just mean gravity (easy to simulate) and motion (move the character X pixels each time step). They mean creating a set of "Laws" that the game can follow that allows the various components of the game to interact. This could be the way that two cubes can "click" together to make a Lego type object (eg: side 1 can join with side 6).
Let us use Lego. Instead of talking about atoms, let us use Lego bricks. Lego bricks have various properties; they can interact (connect together) and so forth. Some Lego bricks can conduct electricity. With these, let us invent some more Lego bricks that don't exist. These are variations of the bricks that conduct, but they are actually able to perform logic operations (they take two signal inputs and a clock input, along with a power line). So we have a program that simulates 10,000 of these bricks, of the various types.
Does the way I put these together affect the processing needs of the computer that is hosting them? I don't "make" any more bricks, just the 10,000 of them, and they are all constantly simulated whether they are connected to another brick or not. If I was to build a computer out of these, would that affect the processing needs of the host computer? No, the bricks are already being simulated; their configuration does not induce any more processing needs in the host than they would if they were all just heaped in a pile (see the sketch at the end of this post).
This is what I am talking about. When you are proposing a new level of simulated universe, you are instancing that in the host computer, not in the simulated universe, so yes, your method would require more processing. But if the simulated universe could reorganise the matter that is already being simulated (it already exists in the host computer's memory and the processor already calculates it), it would not increase the workload on the host computer, as all of this already exists as a workload on the host computer.
Yes, that is exactly what I said. I don't see what you are trying to say. You are claiming my point is wrong by claiming it is right? No, you are claiming that I am claiming that. I even said that you could not have an infinite recursion, because the host computer is limited in both processing speed and time to process.
It would be possible to simulate a 30GHz computer on a 300Hz computer; however, the 30GHz computer would not run at 30GHz as viewed from the 300Hz computer. The only point of view from which the 30GHz computer runs at 30GHz is from inside the simulated computer. I think this is the source of the confusion between what we are trying to say. You are viewing it all from a single point of view: the top level computer. I understand that any viewpoint in this is relative. We see time operating at a certain speed. Whether or not this is real time according to a computer above us in the chain is irrelevant. 30GHz to us is 30GHz to us, regardless of what any other simulation might see it as (above or below in a chain). So when I talk about processing speeds, it is relative to what others would see (above or below). 30GHz to us might be 0.0001Hz to our host. But they are simulating a computer that, to the inhabitants of that simulation, would be seen as 30GHz. Trying to pin down the actual speed of a hypothetical host whose specs we have no way of knowing is pointless. Thinking about what it would be like if that hypothetical host existed, that is more constructive and is the aim of this thread.
This is why speculation is necessary. We can speculate: in a universe where 1+1=3, how would computing be possible? Could that kind of universe support complex life, could that life become intelligent, could that intelligence create a computer, and could that computer simulate a mathematics vastly different from the mathematics that governs its operation? If we just say "Oh, I can't see how it could, so we may as well not try to understand or ask questions", we won't get anywhere. We will never discover anything. Even if the conclusion is "We can never tell if we are in a simulation or not", this is an important discovery. It places limits on various aspects of the universe; for instance, any anomalies are real physical effects, not programming bugs.
Yes, there are limits, I have acknowledged them in past posts. I didn't mean that we can simulate this entire universe on a PC that fits inside the universe.
But, if you can accurately simulate the laws of this universe on a computer, then it is computable. If it is impossible to use a computer to simulate any part of this universe completely, then it is not computable. We have approximations, but these are approximations, not a true simulation.
I am not trying to pinpoint our location in a hypothetical chain, just that if one does exist then we can't be at the top. As we haven't simulated any universe, we must be on the bottom rung. So if any chain does exist, then it exists above us, and therefore we can not be at the top. If we are in the only universe, then no such chain exists and all this discussion comes to the conclusion that we are not in a simulation. If we are just in "Giant's blood", this is the equivalent of a simulation, we are not the top of the chain, and the conclusion is that we are in a simulation (although not an intentional one). The only way we can determine this is if we ask the kinds of questions I am asking in this thread...
Not even then. The host would need an infinite amount of computing power. Yes, this could be occurring, but even so, the host would still need an infinite amount of time to run all those simulations. The chain can not be infinite.
The reason we see the kinds of anomalies that you describe is that the objects in modern computer games can be thought of as a kind of atom. They are indivisible components in memory, and errors cause these components to be joined in odd ways. So the resolution of today's games is not at what we would consider the particle level, but at a much larger scale. If this kind of error were to crop up in our universe, we would see a mess of quantum particles springing in and out of existence. It wouldn't be just on the quantum scale; it would occur on the macro scale too. We wouldn't see discrete objects appearing and disappearing, but we would see the equivalent mass/energy of whole objects doing that. They also would not behave according to any known physics; we would see violations of mass/energy conservation, etc. Looking at the resolution of this universe, all these objects are specific amalgamations of groups of particles obeying the laws of the universe; an error would be in violation of these laws, so the resulting artefacts would not be as coherent. A burst of unexplained photons, maybe; an image of a person (a ghost), not a chance.
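To make the Lego-brick point concrete, here is a minimal Python sketch (my own toy example, with made-up names like `Brick` and `run`, assuming a fixed set of connection slots per brick): the host allocates and updates all 10,000 bricks every tick whether they sit in a loose pile or are wired into a "computer", so the configuration changes nothing about the host's memory footprint or per-step workload.

```python
# A minimal sketch (my own toy example, not anyone's real simulator) of the
# Lego-logic-brick point above: every brick is simulated every step whether
# or not it is wired into a "computer", so rearranging the same 10,000 bricks
# does not change the host's memory footprint or per-step workload.
import random

NUM_BRICKS = 10_000
SIDES = 6  # each brick has a fixed set of connection slots, allocated up front

class Brick:
    def __init__(self, conducts):
        self.conducts = conducts
        self.powered = conducts
        self.signal = 0
        self.links = [None] * SIDES   # fixed-size, whether used or not

    def step(self):
        # Every brick runs its update rule every tick; an unconnected brick
        # simply has nothing on its inputs.
        if self.conducts and self.powered:
            inputs = [b.signal for b in self.links if b is not None]
            self.signal = int(any(inputs))

def run(bricks, steps=100):
    work = 0
    for _ in range(steps):
        for b in bricks:
            b.step()
            work += 1          # one unit of host work per brick per tick
    return work

bricks = [Brick(conducts=random.random() < 0.5) for _ in range(NUM_BRICKS)]

loose_pile = run(bricks)       # no connections at all: just a heap of bricks

# "Build a computer": wire the same bricks together. No new bricks, no new memory.
for i in range(0, NUM_BRICKS - 1, 2):
    bricks[i].links[0] = bricks[i + 1]
    bricks[i + 1].links[0] = bricks[i]
wired_up = run(bricks)

print(loose_pile == wired_up)  # True: same bricks, same host workload
```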
  3. That Paris Hilton is a super genius and that she is hiding this so that the Government Doctors won't dissect her to see why she is such a super genius. You did say wacky...
  4. I am allergic to cat dander, so how will this help me...
Cats have an antibacterial component in their saliva, as do dogs. In war times, before antibiotics, people had dogs and cats lick their wounds and this provided an antibacterial effect (I wouldn't suggest that you do this now, as our antibacterials are far better). If it was just the antibacterial effect of the dander, then how could these animals eat carrion (full of bacteria) without being killed? It is in their saliva that the antibacterial exists (as even animals that need to be prevented from licking themselves have this antibacterial effect - so where would it come from if not from licking?).
Also, the initial premise is wrong. There are bacteria and infections that can cause some cancers. This is known. But there are other cancers that are not caused by bacterial infections. Look at the cancers in people from Hiroshima and Nagasaki. It would have to be a really big coincidence if these people had massive bacterial outbreaks that affected only people who were in these regions when the bombs were dropped (regardless of where they moved to immediately after, and in among other people), and all had increased occurrences of cancers. So if, by probability, we can eliminate a highly specific bacterial outbreak, what else were these people exposed to? Massive amounts of radiation from the nuclear bombs. So bacteria are not the only cause of cancers. Therefore, even if your system did work, it couldn't cure all cancers, as not all cancers are bacterial infections.
Also, science has produced some extremely powerful antibacterials; these would be a better and easier to produce solution if you are right. They could engineer the antibacterials specifically for the particular bacteria that you claim are causing all the cancers. A company that patented and produced this could out-compete all other companies; they would have an almost total monopoly. Why hasn't someone done this already? If someone were to produce credible results for such a cure (and many people have taken antibacterials - especially people with cancers that affect their immune system, as they need the effects of these antibacterials to survive - so the evidence would be there), and could show that all the pharmaceutical companies are conspiring with each other not to make a profit from this cure, then you might be more credible. These companies are competing against each other; why would they work with each other to hide a result like this? If they could give a more effective treatment, they could out-compete the others. To do otherwise makes no economic sense, and if their investors heard of this the company would go bankrupt overnight.
  5. No, and I'll explain why: if the simulation stores the states of every particle in the simulation, and a computer within the simulation is just a matter of reordering those particles, then we can not create an "out of memory" error, as we are not adding to the memory. The data in our simulation would have to be stored as states of particles. But these states are already stored in the parent simulation. We are not changing the amount of data in the simulation, no matter what we do, no matter what data we create. It all has to be stored in the states of particles in our universe, which are already stored in the parent simulation. We won't get the BSOD, or virus-like activity.
How does this really affect the discussion? Because there would not have been an infinite time in the top level universe, there could never be an infinite level of recursion anyway. Because even at the top level Data/Time != Infinity, this really has no big impact on the arguments. Sure, there is a limit, and this would put a limit on the probabilities, but there was a limit before this too (due to time).
The mistake that you are making is thinking that by creating a simulation within a simulation, more data is added to the parent simulation. This is not the case. The computer, and the data in it, in the child universe was always in the simulation. You have not added to the data of the parent simulation, you have just arranged it differently.
Yes, because the simulated computer is created as an instance on the parent computer. In the simulated universe, the child universes are not created as an instance on the parent, but as a rearrangement of the components of that simulation. It would take up no more room and no more processing power. This is the major difference between the recursive simulation and the recursive algorithm.
What I was talking about is that if you had a computer in the parent universe and it was simulating a universe that is functionally identical (although it might be smaller), and in that simulation you created the exact same computer design, then that simulated computer could not run at a faster speed than the parent computer. In fact, this should apply to any system in the parent. It might be possible to do fast approximations in the child universe that might rival the speed at which things occur in the parent, but these are fast approximations, not a true simulation. If such things existed, then they would be these "anomalies" that would allow us to detect that we are in a simulation. So a good systems designer would have accounted for these artefacts and implemented techniques to eliminate them (like making sure that you can't have i=1000).
But what we are talking about is computation, not what the computers are made of. A Universal Turing machine is capable of simulating any possible computer. This doesn't say "any possible computer that is not made of chocolate". You can simulate a quantum computer on your desktop computer, even though the physics of their operation is completely different (it wouldn't run nearly as well as a real quantum computer though - see the sketch at the end of this post). If the universe is computable, then we can simulate it. If the universe is not computable, then we can not be in a computer simulation of one.
From what we do know about atomic particles, all particles of the same type (electrons, etc) are identical.
You can not tell one particle from another, except by the values on them (momentum, spin, etc), and as these can be changed, for all we know there might be just 1 particle in the universe, just seen time and time again (how much computer memory would that take up?).
It is true that a simulation does not have to conform to the physics of the parent universe. It does not even have to be computable. However, if it is not computable, then it can't be a computer simulation; therefore it is the top level (it has no universe above it simulating it). If, however, this universe can create a computation machine, then any universes it simulates must be computable. If it can't create simulations, then it just cuts off the chain (and as we have the ability to create computers and create simulations on them, that universe can't be ours). So, either it is the top level universe, or it is not our universe. Either way, if we consider this as a potential variation on the universe, it just strengthens the chance that we are in a simulation.
Also, there is no infinite recursion unless the top level simulation has had an infinite time to run the simulations. I am not talking about infinite recursions (I have even said that infinite recursion is impossible), just that you can get recursive simulations in a simulated universe. These would be a finite number of recursions.
No, we would never be able to tell that this had occurred. Yes, we could never tell if the creators had pressed the "Undo" button.
What we would not see a virus as is "giant bugs" eating the solar system. What we would see is nonsensical results in the physics. Parts of the universe behaving randomly, matter and energy being scrambled. It would be the equivalent of "static" on a TV screen (and seeing this we could - if we had the time and the virus was not affecting our part of the universe - work out some of the underlying behaviours of the parent system before we got erased or reverted to a previous backup). I doubt we would see anything as coherent as us morphing into "Sperm Whales" (or a bowl of petunias for that matter - Hitch-hikers Guide to the Galaxy reference). Any effect from a virus would, from our perspective, seem random. The virus would not operate on our "physics" but on the underlying logic of the host computer.
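On the point that a classical desktop can simulate a quantum computer, just slowly: here is a minimal, hedged sketch in Python (my own illustration, not any particular library's API) of a brute-force statevector simulator for a couple of qubits. It stores all 2^n amplitudes and applies a gate by tensor contraction, which is exactly why the approach stops scaling long before 400 particles.

```python
# Minimal statevector sketch: simulate an n-qubit register on a classical
# machine by tracking all 2**n complex amplitudes (an illustrative toy,
# not an efficient or full-featured simulator).
import numpy as np

def apply_one_qubit_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to the `target` qubit of an n-qubit statevector."""
    # Reshape the flat 2**n vector so the target qubit gets its own axis,
    # contract it with the gate, then flatten back.
    state = state.reshape([2] * n_qubits)
    state = np.tensordot(gate, state, axes=([1], [target]))
    state = np.moveaxis(state, 0, target)
    return state.reshape(-1)

n = 2
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                      # start in |00>

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate
state = apply_one_qubit_gate(state, H, target=0, n_qubits=n)

print(np.round(state, 3))           # equal amplitudes on |00> and |10>
```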
  6. When you heat some plastics, they can shrink. There is a kids' toy called (here in Australia, I am not sure if they have the same name in other countries) "Shrinkies". What they are is a "tag"-like piece of plastic a few centimetres across that kids can draw on and colour in. Once the kid has done this, it is placed in an oven for a certain period of time. While in the oven, the plastic shrinks down to around 1cm across. If your sausage is wrapped in plastic (like chip packets), then the heat from the cooking sausage would cause this plastic to shrink. This is exactly what it is. Shrink wrapping uses heat to cause the plastic wrapping to "shrink". So this is exactly what is happening to your sausage.
  7. Ok. If a simulation takes 1000 years to complete 1 second of simulation, then if we are in a sub-simulation being run at the same rate (1000 years for 1 second of sim), it will take 1,000,000 years of the top level simulation to produce 1 second for us. If, on the other hand, they use fast approximations to increase the sim speed, so that 1 second will produce 1000 simulated years, then in 1000 years they will be able to run far more simulations. So on one hand we have a slow simulation and they can only run the 1 (and only 1 second passes in the sim for every 1000 years that the sim is running). Because of this, these people will not produce many simulations over a period of time. On the other hand, we have a fast simulation: in the 1000 years it takes to run 1 second of sim in the other scenario, this group could run billions upon billions, and have them run for a much longer time in that period. This gives us N to 1 odds that we are in the fast sim group (by "N" I mean that I couldn't be bothered to calculate the actual number, as it would be really, really big). Also, if for every 1 second the sim would experience 1000 years, then this would give the inhabitants time to make their own simulations and run them (the recursion). If all we had was 1 second, we could not simulate much at all.
A recursive simulation can not run faster than the parent simulation. It would be like using your PC to emulate a Macintosh which is emulating a PC: that final PC emulation, even though the original PC was fast, would not be anywhere near as fast as the original (see the sketch below).
Well, nothing in a simulation can run faster than the original computer. So even if we did use quantum computers (and we are in a simulation), the host computer would still have to be faster, as they are simulating the quantum effects of the entire universe.
No, no strange effects or anomalies. However, if we are in a simulation, the hosts can turn it off on a whim, so they might shut it down then, or not. They might shut it down because of anything. As for virus protection, well, if that behaviour is built into the simulation (that is, the quantum mechanics are one of the simulation rules), then we are not operating outside the program; we would not be performing any malicious operations. But of course, any behaviour, by anyone or anything, could be the trigger that the hosts use to decide to turn us off.
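A quick back-of-the-envelope sketch of the nested-emulation slowdown described above (the numbers are made up purely for illustration; the only assumption is that each layer of emulation delivers some constant fraction of its host's speed):

```python
# Toy arithmetic for stacked emulation: if each layer of emulation runs its
# guest at a fraction of the host's speed, the slowdown compounds per level.
host_speed_hz = 3.0e9          # assumed speed of the outermost ("real") machine
overhead_per_layer = 0.01      # assumed: each emulator delivers 1% of host speed

def effective_speed(levels_deep):
    """Apparent speed of a machine `levels_deep` emulation layers down,
    as measured from the outermost level."""
    return host_speed_hz * (overhead_per_layer ** levels_deep)

for depth in range(4):
    print(f"depth {depth}: {effective_speed(depth):.3e} Hz as seen from the top")

# Inside each layer, of course, the machine still "feels" like it runs at
# full speed - only an observer above it in the chain sees the slowdown.
```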
  8. First you argue that nothingness can not be squeezed, then you offer a "thing" (i.e. fog), which is not nothingness, to prove your point. That is called a straw man at best. Actually, if we take the "fog", we can compress it. We can see the dust and gas clouds in space get compressed and then turn into stars. So this actually invalidates your argument even if it wasn't a strawman. What was compressed was "space". This is the biggest problem with the entire list. You assume that only matter or something was compressed and that space (and time) was still there.
"Space" being a "thing" that can be distorted, compressed and expanded is fundamental to Relativity. Take light. If we use Newton's laws of gravity, light, being massless, shouldn't be affected by gravity. So it should always travel in straight lines in a vacuum. However, we see light distorted by massive objects (galaxies and such), so something else must be going on. Relativity states that the light will travel in straight lines in a vacuum, but that space itself can become distorted. A straight line to the light beam is not a straight line to you or me. So relativity states that space can be distorted, and these same equations state that space can be compressed.
This also applies to time. We know, from observation even here on Earth, that the mass of an object (the Earth) can cause distortions in time. This has been measured by very accurate atomic clocks. As you approach a region where space has been distorted, time too is distorted. In fact, relativity specifies that there will be a correlation between the spatial distortions and the time distortions. GPS satellites rely on these calculations being very accurate. If they weren't, then the GPS system would be useless. Because of the GPS system's reliance on these calculations being correct, they are one of the most tested aspects of any scientific theory ever.
So, space and time can be distorted. What if space and time were so distorted that they were compressed into a space of 0 dimensions? There would be no "outside" for this to "expand" into; there is literally "nothing" else that exists. When most people think of nothing, they think of empty space. But empty space is "something". There is no "time" for anything to happen, as time does not exist. Our experiences have not equipped us to "visualise" this kind of situation, in much the same way we can't visualise what an atom really looks like, or what the inside of a black hole would be like. However, the same mathematics that allows us to use GPS can be applied to the situation of the universe being compressed into 0 dimensions. We can't visualise this, but we can use mathematics to work out what would occur.
Now, just because we don't understand something does not mean it couldn't happen. I bet 500 years ago people would not have understood a computer, but does that make computers non-existent? A lack of understanding does not equal non-existence. Now, we don't have a perfect understanding of what occurred at the Big Bang, and that is a bad name, as it conjures up in people's minds something like a big firework on New Year's Eve. We don't know exactly what occurred to change this 0 dimensional "nothing" into the universe, but there are a few ideas (theories). One of them is that when you get down to what is called the Planck Length and the Planck Time, things like position and before and after have no real meaning. This is interesting, as it means that a cause can occur after its effect.
In terms of the Big Bang, the effect (the change from nothing to something) can occur before the cause. The cause can exist in time and space and occur after the effect. In essence, the universe can create itself. This fits with the known laws of quantum mechanics.
Now, the transition between 0 dimensions and >0 dimensions is a breaking of symmetry, also known as a phase change. Under the known laws of quantum mechanics, such a phase change would release a lot of energy. A lot of this energy would go into "pressure". Not pressure on matter, but pressure on space. This pressure would cause space to expand. That is, it counters the force that is compressing it. Some of the energy would be released as other things. What are these other things? Well, photons for one. Once you have mass you have gravity. Gravity would counteract the pressure that is causing space to expand, and this initial "inflation" would slow. Now, this expansion is not occurring into something; what is happening is that there is more distance between objects. There is nothing we will have encountered that really gives a good analogy as to what this is like. The best is a balloon, but it is a poor analogy as the balloon is expanding "into" something (the outside air). But as the best analogy we have, it will have to do. Also, as the universe got bigger, this pressure would become less and less, and this would also slow the expansion of the universe.
Some theories actually have the energy contained in this pressure becoming the matter that we see around us. This too would be a kind of phase change, although not the same one. Here we can use an analogy from experiences we will have encountered: it would be similar to how water vapour condenses out to form clouds and fog as the air cools. As the expansion of the universe reduced the pressure, it condensed out as matter. This matter would not be the matter that you or I would recognise. It would be exotic particles (that may not even exist any more), but after time this would have settled down into the matter we know today.
A lot of this energy would also have been released as photons. This would make the universe a very bright place. These photons would have had a massive amount of energy, and the more energy a photon has, the higher its frequency. A few decades ago, some engineers were working on a sensitive microwave receiver. However, they were getting an annoying "hiss" of noise. They tried pointing it in different directions to eliminate any nearby sources. As the direction the antenna was pointing made no difference to the level of the noise, they could rule out any local source. There were a few pigeons living around the antenna, and they thought it might have been pigeon droppings in there causing the noise. So they kept cleaning it out and scaring the pigeons away, but the noise remained. Eventually they gave up and asked around. The problem came to the attention of a particular scientist who was working on the Big Bang theory, and who had reasoned that there should be a lot of left-over light from the Big Bang (for the reasons I stated above). What he had done was take into account the expansion of the universe and how that expansion would have affected the light. As the universe expanded, the light would also have had its wavelength stretched out, redshifted as it is called. He calculated that it should be at a specific frequency within the microwave band. And guess what?
This was the exact same frequency as the noise these engineers had discovered. The "noise" was all around because the universe is all around us. The light from the Big Bang would be coming at us from all directions, so no matter which way they pointed that receiver, they would get this background hiss that is the fading "echo" of the Big Bang.
And why did they initially think that there had been a point where the universe was a single 0 dimensional point? Well, if you take a look at the universe as it is today, everything is moving away from everything else (taking into account the effects of gravity, which can pull nearby things together and stop this expansion locally). If you then "rewind" this situation, everything at some point must have been in the same place. Press "play" again and you see a rapid expansion that then cools and forms matter, which forms clumps, which form galaxies, which form stars, which form planets, which form us...
Actually, it wouldn't have been "pushed" into that point. It started off in that point and was "pushed" out. And there is a mechanism that can compress space and energy into a 0 dimensional point. It is called gravity. There is no repulsive force with gravity; it always attracts, unlike a magnet, which has a north and a south pole. With a magnet, if you put north pole to north pole they repel. Gravity doesn't do this. If you put gravity near gravity, you still get attraction. This gravity bends space and time. If you have enough of it, you can bend space and time to a single point. We call these black holes. The phase change from a 0 dimensional point will provide energy in the form of pressure. This will cause space to expand. And it wasn't an explosion. It was an expansion. There was no pushing of "nothingness" outwards. There was the creation of more distance (space).
One word: gravity. If you throw a ball up on the Moon, there is no friction to slow it down, yet it slows down and even returns to you. The reason: gravity.
Again: gravity. If you get enough stuff, then gravity will be able to compress a gas. If this was not true, then Earth would have no atmosphere. The atmosphere is a gas, and yet it stays clumped here on Earth and doesn't go drifting off into space. Gravity clumps it to the Earth. Lucky for us...
Yes, again: gravity. Think about it. The mass that makes up a star first exists as gas floating around in space. It still has the same gravity, as it is the same mass. A star has a lot of gravity; our Sun holds planets in orbit around it and even affects the motion of other stars nearby in our galaxy. If a star has that much gravity, then why can't the gas cloud that would form into the star have an equal amount of gravity? It contains the same mass. So we have gravity pulling this gas into a smaller and smaller space. It heats up and eventually fusion starts to take place. We have a star. Fusion in stars can make atoms up to iron. When a star goes supernova, atoms heavier than iron can be formed. This should get you started: http://en.wikipedia.org/wiki/Star_formation Go ask an astronomer at your local observatory and they will be able to fill in the other details.
No, it is quite possible. There is a thing called resonance that allows it. The "energy" gap is filled by the temperature of the star itself. It is actually quite complex, but it can and does occur. If it didn't, we could not be here.
No, it didn't "reach" the edge, it was already there.
As space expanded, it took the matter along for the ride. It is like a balloon. If you sticky tape two small circles of paper next to each other when the balloon is only partially inflated, and then blow up the balloon the rest of the way, the circles don't move, as they are stuck to the same bit of the balloon, but they are now further apart. As the space between the matter expanded, there was more distance between the pieces of matter.
Yes, in the stars. Supernovas produce all the heavier elements, and they do it in a flash.
Yes they can. The Sun does have other elements in it that are not hydrogen. The reason the rocky planets don't have much hydrogen and other light elements on them is that they are not heavy enough to hold onto the lighter elements. It's this gravity thing again. As a molecule becomes heavier it moves slower. The escape velocity for Earth is around 11 km/s. This works out such that a molecule with a weight of around 10 or more would be retained by the Earth, whereas lighter molecules would escape (see the sketch at the end of this post). Hydrogen has a weight of 2 and helium has a weight of 4. Therefore Earth doesn't hold onto these gases, and the composition of Earth will be different from that of the Sun (or even larger planets like Jupiter). You are forgetting gravity again. Am I seeing a common theme here?
No. Supernovas are going off right now. There was one seen only a few centuries ago that occurred in this galaxy. We see them all the time in other galaxies. They haven't stopped, so this argument is a complete and utter red herring. Supernovas are still occurring. In fact, astronomers use a type of supernova called a Type Ia (where a binary star system has one of the stars pulling matter off the other one until a critical mass is reached that causes the star to go supernova). These supernovas, because this critical mass is the same for all the stars in this situation, are all the same brightness. This makes them useful for determining the distance to a star in a distant galaxy, and for calculating its redshift (as they are also the same "colour").
Yes, they were much more common a long time ago. This is because there were not many of these heavy elements around. Most of the matter that is now tied up in these heavy elements was hydrogen originally.
What is meant by "too close"? Too close to what? The interesting thing is, there is something missing or assumed in such "calculations". Guess what: we are not even sure exactly how much mass is in the universe. So how could this "scientist" give such conclusive results? If you don't know the mass and don't know the exact rate of expansion, then how can you make such a statement? I have heard of such "calculations" ranging from "the universe would have collapsed instantly" to "the universe doesn't have enough mass to have slowed it down to what we see today". In every case of these "extreme" results, the scientist in question has neglected something or has made assumptions which are critical to the results.
There are many mechanisms that can produce very high spin stars. If, during its formation, another star passed close by, that could give an extra "push" to the gas cloud's rotation, and the resulting star would have a very high spin.
The person who came up with this "theory" obviously doesn't pay attention to their telescope. No, according to theory, the older stars should be in the centre of the galaxies.
Not a single one of these points is correct.
They are all misinformed, and the person who wrote them has no inkling of what has been going on in astronomy for the last 100 years or more. They are full of red herrings, strawman logical fallacies, and baseless claims (which seem to have been made up on the spot to support their arguments). This is complete and utter nonsense, and they have no idea what the Big Bang theory is about at all. It is not an explosion. It is an expansion. And it is still going on today. They are either lying to you, or they really have absolutely no idea about astronomy at all. And if they have no idea about astronomy, then they should not be making the claims that they are.
For the record, I am an amateur astronomer and have been for over 20 years. I have looked through telescopes and seen many of the things I have talked about in the above response. I understand the theories (even if I don't know the maths), and have talked many times to people who work as professional astronomers about these subjects. And I can say categorically: the person who wrote this article has absolutely no real knowledge about astronomy, just misinformation and half-remembered high school teachings (or they are deliberately lying).
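To put a rough number on the "weight of around 10" claim above, here is a small Python sketch (my own back-of-the-envelope, using the common rule of thumb that a gas leaks away over geological time when its typical thermal speed exceeds roughly 1/6 of the escape velocity; the exact threshold depends on details this toy ignores):

```python
# Compare the RMS thermal speed of common atmospheric gases with Earth's
# escape velocity. Rule of thumb (assumed here): if v_rms is more than about
# 1/6 of the escape velocity, the gas escapes over geological timescales.
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
AMU = 1.66054e-27       # atomic mass unit, kg
T = 1000.0              # rough exosphere temperature, K (escape happens high up)
V_ESCAPE = 11_186.0     # Earth's escape velocity, m/s

gases = {"H2": 2, "He": 4, "H2O": 18, "N2": 28, "O2": 32}

for name, weight in gases.items():
    v_rms = math.sqrt(3 * K_B * T / (weight * AMU))
    retained = v_rms < V_ESCAPE / 6
    print(f"{name:>3} (weight {weight:2}): v_rms ~ {v_rms / 1000:5.2f} km/s -> "
          f"{'retained' if retained else 'escapes'}")
# Light molecules (H2, He) come out above the threshold and escape; heavier
# ones (H2O, N2, O2) stay, which is roughly the "weight of about 10" cut-off.
```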
  9. This is only considering if we are the top level simulation. Instead of absolutes, we need to look at probabilities. If we are in a simulation that is being run at 1 frame every millennium, then there are not likely to be many recursive simulations (i.e. simulations inside the simulation, inside the simulation, inside the simulation...). This then limits the probability that we are in such a branch. However, if we are in a simulation that is being run much faster, then this increases our chance that we are in such a branch, as there will be more recursive simulations possible. So, just looking at it statistically, we are not very likely to be in such a slow simulation. We could be in one, but it is unlikely.
  10. Much bigger than just 400 electrons. And what if you were trying to simulate those 400 electrons? What if you were trying to simulate the quantum computer? How much computing power would you need? This is what I meant about needing a "computer" bigger than the system you are simulating (see the sketch below). Actually, if the universe is not the most efficient "simulator" of itself (that is, if you could simulate, in less time, the behaviour of 400 electrons with a quantum computer that uses 400 electrons for its calculations), then that would be good evidence for us not being in a simulation. Under that assumption, if the universe was a simulation, then the simulation could work faster than the computer simulating it. A simulated computer could perform calculations faster than the computer that is simulating it could. As such a thing would be nonsense, it would act as evidence (although not conclusive evidence, as it doesn't rule out incompetent programmers of our simulation) against us being in a simulation.
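As a rough illustration of why "bigger than the system you are simulating" bites so hard here (a back-of-the-envelope sketch; it assumes a brute-force statevector representation like the toy simulator earlier in this thread, one complex number per basis state):

```python
# Memory needed to hold the full quantum state of n two-level particles
# (qubits) as a brute-force list of complex amplitudes: 2**n numbers.
BYTES_PER_AMPLITUDE = 16          # one complex double: 2 x 8 bytes

def statevector_bytes(n_particles):
    return (2 ** n_particles) * BYTES_PER_AMPLITUDE

for n in (10, 40, 400):
    print(f"{n:3} particles -> {statevector_bytes(n):.3e} bytes")

# 10 particles fit in ~16 kB, 40 already need ~18 TB, and 400 would need
# ~4e121 bytes - vastly more than the estimated number of atoms in the
# observable universe (~1e80), which is the point being made above.
```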
  11. Ok, the term "Tongue in Cheek" is similar to the " " smiley, but without the rude overtones. As such idioms are easily accessible for explanation on the net (see: http://en.wikipedia.org/wiki/Tongue_in_cheek ), I didn't fully explain what I meant by it in my initial post. Sorry for the misunderstandings. What I meant for this thread was not for us to be silly about the posts, but to use it as an opportunity to escape our own beliefs about the subject and consider for a second the "possibility" of something else. The way I wanted people to respond in this thread was as in a hypothetical discussion, without the serious overtones such a term can carry. I am difficult to offend, and anyway, you didn't do anything offensive (you just seem to have misinterpreted the purpose of the discussion, and I was attempting to clear up the misunderstanding). It is this whole "text" issue of discussion boards. What is written (or typed) down does not necessarily convey the emotional or other non-verbal cues of conversation. It is estimated that around 80% of human communication is non-verbal, so these kinds of misunderstandings will occur, and when they do, I try not to get too worked up about them...
Semi-serious is closer to the mark. I wanted a less than serious discussion, so that people would be free to "speculate" (as this is in the Speculations sub-forum) even about concepts that they don't agree with. Rather than marking out positions (as is common with debating and discussion), I wanted to have a discussion where people are not constrained by this way of thinking. If I had just posted "We are in a game simulation" as the topic, then people would only respond with reasons why we are not in one. What I wanted was to explore the options IF we were in a simulation: could we tell if we were in a simulation? Could we tell what kind of simulation it is? Could we communicate with the creators? And how could we go about doing such things? What would it mean if we were in a simulation (how would it affect our views of each other and the world around us)? And so on. I think these kinds of discussions are important, as they not only help us to understand other points of view, they can also help us to understand ourselves more as well. On that note: let's explore those questions I just posed in the paragraphs above...
  12. Ahh, that was what I meant, thanks. But still, the number of XXXX Explained threads can be a negative influence.
  13. Admittedly I didn't understand a lot of the maths that was used, so my understanding of what you linked to is lacking. However, I have been using your explanations of it, and how they are supposed to relate to your essay, as the basis for my discussions. As I have said before, I am willing to enter these kinds of discussions because I am willing to change my position. However, not everyone is like this. Also, some people will not enter into a discussion if they think the ideas are wrong; they might make a few posts and then, when their issues are not addressed well, they will leave. Also, this is the 3rd (or is that 4th) "Time Explained" essay that you have posted. Some people will have been put off after the first one (the first one received much more attention than the later ones). It would have been better if you had just stuck to the one thread and posted a new revision of the essay in that (or just put a revision number in the title). It is these things that have probably contributed to the lack of interest in your essays rather than anything else.
Before you go, I would like one question about your essay cleared up, and that is from my last inquiry. If C is variable, then two observers in different frames of reference will disagree over the energy (due to E=MC^2) contained in the matter of an object (the object does not have to be at relativistic speeds or anything). Specifically, in this scenario: an observer (A) in orbit around Earth is in a weaker gravitational field than one on Earth (B). According to your claims, Observer A will have a higher value for C than Observer B. Because Observer B is in a higher gravitational field than Observer A, Observer B will have a lower value for C. This means that a matter/antimatter explosion that occurs near Observer B on Earth will produce different energy values. As Observer A sees "time" moving slower for Observer B and concludes that C is lower there relative to himself, in the equation E=MC^2, C will have a different observed value to what Observer B would see (as you said, Observer B will measure C at the same 300,000km/s locally as Observer A would measure locally). However, Observer A is not measuring the value of C locally, but as it is at Observer B relative to himself, and so gets a different value of C than he would if he measured it locally. This means that the energy that is observed should be different for what Observer A sees and what Observer B sees (a rough numerical sketch of the discrepancy I mean is below). How does your essay (or the websites you linked to) address this issue?
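To make the question concrete, here is a rough numerical sketch (my own arithmetic, and it assumes the essay's variable-c reading, i.e. that Observer A assigns events at B a value of c scaled by the gravitational time-dilation factor; none of this is standard relativity, it is just quantifying the discrepancy being asked about):

```python
# Rough size of the energy disagreement for 1 kg of matter/antimatter
# annihilating at Earth's surface, IF c measured "at B from A" is scaled by
# the gravitational time-dilation factor (the essay's claim as I read it).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m
C_LOCAL = 2.998e8    # m/s, what either observer measures locally

m = 1.0  # kg of matter + antimatter converted to energy

# Fractional gravitational time dilation at the surface relative to far away
delta = G * M_EARTH / (R_EARTH * C_LOCAL**2)    # ~7e-10

E_local = m * C_LOCAL**2                        # what B calculates locally
c_seen_from_A = C_LOCAL * (1 - delta)           # the essay's assumption, not GR
E_seen_from_A = m * c_seen_from_A**2

print(f"delta           ~ {delta:.2e}")
print(f"E measured at B ~ {E_local:.3e} J")
print(f"E assigned by A ~ {E_seen_from_A:.3e} J")
print(f"difference      ~ {E_local - E_seen_from_A:.3e} J")   # ~1e8 J
```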
  14. Yes, these scenarios are quite probable too. As far as we can tell, these "simulation end" situations have an equal probability of occurring. If the creators have an ethical code like ours, then they would see the ending of such a simulation as potentially being tantamount to mass murder on a universe-wide scale (they could see us as thinking, feeling beings). So, if the creators have a similar moral and ethical code, then we can assume that they would not be completely arbitrary. Either they would intervene and prevent a situation that ended in the shutdown of the simulation, or they would not shut down the simulation just because it had been discovered to be one. If we instead consider that the creators have other ethical positions about the shutdown of the simulation, then as far as we are concerned this decision will be completely arbitrary, and the discovery or not of the universe being a simulation does not come into it (they could just choose to shut it down because they want to make lunch).
I didn't think of that, but it could be. Also, the randomness associated with the Uncertainty Principle and quantum mechanics could be a part of this fast approximation. One way that might lead to the discovery of our reality being a simulation is to look at what might be the fast approximations. In a computer simulation (that we make) we sometimes use fast approximations. By looking at the way computation works, we might be able to determine what a given fast approximation is supposed to simulate. What we need to do is look for ways that you could simulate a system using more computationally complex methods (and produce an equivalent result). This would be the reverse of algorithmic optimization. It is of course a much more difficult problem than the one of optimization, but it could be done (and would show up in the creators' system if they were looking for it).
Yes, but one artefact that would occur in a simulated universe is that it would be computable. That is, a simulated universe would contain the ability to perfectly simulate itself. If we can determine that our universe is computable, then it strengthens the chance that our universe is a simulation. This occurs even in simulations that are not designed to accurately simulate our universe (Conway's Game of Life, for instance). Our universe might be the top level of the recursive simulations, but statistically this would be unlikely.
What I meant is that if you are using a computer to simulate a number of particles, then the computer you are using must consist of more particles (also accounting for the interface and other necessary components) than you are simulating, or you will need to run it at less than real time. Simulations that are running at less than real time will not produce as many simulations as ones that are running at real time (or faster), unless they are using lots of fast approximations, which further increases our chance of finding flaws in the simulation.
Yes, the "laws" of physics may be different, but if they are using computation, then it will conform to the rules that govern computation (namely the Universal Turing Machine), which state that a Universal Turing Machine (a computer) can simulate any other Turing Machine, even if it is running a different "operating system". If we are in a computer simulation, then our universe is run by a Turing Machine; therefore a Universal Turing Machine can simulate the universe.
We don't need to know what the physics of the creators' universe is (they may even be in a simulation themselves); all we need to know is that if we are in a simulation, we must be able to simulate the computer that is simulating us (although at a much slower speed than it is running at).
I disagree with this. Take a look at what has occurred over the last decade. Computing power has massively increased and we have more computers. It seems as if more computing power -> more computers. As computers can be used to design and manufacture computers and make improvements to them, this means that as the power of computers increases, the availability of computers will increase. So what would have been a "super computer" only a few years ago can become the typical computer several years later. So this means that as the power of computers increases, the more computers will exist. A few "super" computers will become lots of super computers, and then a few ubercomputers will be created, which will lead to lots of ubercomputers, and so on... So it might start off with a few science simulations, but the super computers will create a massive number of non-science computers, and the total processing power of these non-science computers will exceed that of the super computers. Also, there will be more computers running simulations (pleasure simulations) than the number of science simulations. So even if there are a few "super" computers, this will lead to lots of non-science computers and non-science simulations (pleasure simulations).
These universes would not have developed much in the way of simulated universes, so it is unlikely that we are on such branches. Again, this is an unlikely scenario (for a "playing" race) and so would not contribute much to the probability that we are on such a branch. Yes, computers might become a rarity, but this is unlikely due to the fact that they are a tool and we are a tool-using (technological) species. Also, this situation would need to occur in all universes (if there are at least 2 simulated universes then it negates this argument). Any simulated universe that chose this path would not simulate many universes on computers. So the number of branches that would contain recursive simulations from this line would not be large, and we have less probability of being on one of these branches.
But efficiency (needed if you are going to have a lot of users) would dictate that you only spend processing time on details that are actually needed. Fast approximations can be detected, as I explained above. If we are in a computer simulation, then the simulation will be computable. Therefore there will be algorithms. If we look for algorithmic optimizations and the fast approximations that such algorithms would by necessity use, then it might be possible to detect that we are in a simulation.
Good point. The players need not be visible to us at all. But even if it was a simulated move, that means that this region is being watched (even if the players are not visible, their presence can still be inferred - assuming we are in a simulation). And I agree that "Game" does not cover all of these kinds of simulations; however, I will put forward "Entertainment" simulations, as opposed to "Pleasure" simulations, as a broader category that includes pleasure simulations.
Also, as it is more likely that a playing race will develop the technology to create simulations, we are more likely to be in a game form of entertainment simulation, as there will be more instances of a game type simulation than of a movie type simulation (a movie type simulation can be broadcast to many watchers with just the one instance).
We might not be able to directly tell if there are observers, but we can infer their existence (if we are in a simulation) from the fact that a simulated reality would be computationally expensive, so optimizations would need to be made, and Level Of Detail (LOD) is an easy and obvious optimization. This means that any region of the simulation that has a high LOD will be one that is actively observed by the creators. If the creators are not using optimizations, this limits the number of simulations that can be run on their computers. This means there are fewer "daughter" simulations under them, making it less likely that we would be in that branch. So we are more likely to be in a simulation with optimizations than not.
I have had some experience with AI systems (basically as a hobby, not in a research position); individuality does not need complex templates. A learning system which is exposed to different stimuli during learning will show individuality. In an experiment I did, I wrote a program whose agents avoided objects on the screen. The "agent" would move across the screen, and if it encountered an obstacle it would "look" in a random direction and see if it was open. If there was another obstacle there, it would try another random direction. If the space was open, then it would remember that, and the next time it encountered an obstacle it would first try that solution (a rough sketch of this kind of agent is at the end of this post). Even such a simple system as this demonstrated a form of individuality (and some people said the agents had a kind of personality), as different "agents" would develop different responses to objects. Some would be "aggressive" and move so they would just miss the obstacle; others would back up and appear to be fearful. It all depended on the history of the "agent" and what it had found worked in the past. So even simple systems can demonstrate individuality; complex templates are not needed.
As I said in my first post, this discussion is not to be taken too seriously. It is supposed to be just pure speculation. I agree that different conclusions can be reached, and that it might be impossible to determine whether we are in a simulation or not. But it is interesting to speculate about it. One idea I would like to put forward (in case we are in a simulation and anyone is listening in on this) is: a technological race that is capable of simulating an entire universe with intelligent, self-aware beings in it should have the capability of running a smaller simulation of just a single individual that would reside within that larger simulation. They would also be able to connect various external components into that individual simulation and allow direct access to "The Real World" (through digital cameras, microphones and such). There would be willing individuals within this simulation (me for one) who would like to know the truth of the matter; these individuals could be put into such a "machine", direct communication could take place, and this issue could be resolved.
It is not my main argument, yes, but it does have relevance to it. If we are in a simulation and there are no players, then we are more likely to be in a research simulation than an entertainment simulation.
As one of my points was to determine whether we are in an entertainment simulation or a research simulation, it does have relevance. Again, this is not supposed to be a serious discussion that comes to a definitive answer (neither is it just about randomly spouting beliefs). It is supposed to be a discussion about something that is of interest to people (though not all). Coming to a definitive conclusion would be good, but it is not part of my expectations. Even though it might not be possible to determine whether we are in a simulation or not, or even to determine what the purpose of that simulation is, it doesn't mean that any speculation and discussion on the topic is wasted time or effort. It can help us understand our universe better as we discuss topics of relevance. Also, the fact that it is possible that we are in a simulation deserves to be examined. If we are in a simulation, then it could change the way that we think of ourselves and the world around us. Finding out the truth of the matter would be the biggest discovery ever (except finding out that our creators' universe is a simulation too).
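Here is a rough reconstruction in Python of the kind of agent described above (from memory, so the names and details are illustrative rather than the original code): each agent remembers which randomly tried sidestep got it past an obstacle and tries that remembered move first next time, so agents with different histories end up behaving differently.

```python
# Toy obstacle-avoiding agent, reconstructed from memory as an illustration.
# The agent walks left-to-right across a grid; when blocked, it tries a random
# sidestep, and if that works it remembers it and tries it first next time.
import random

WIDTH, HEIGHT = 40, 9

def make_world(n_obstacles=60, seed=None):
    rng = random.Random(seed)
    return {(rng.randrange(5, WIDTH), rng.randrange(HEIGHT))
            for _ in range(n_obstacles)}

class Agent:
    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.preferred = None           # remembered sidestep: -1 (up) or +1 (down)

    def sidestep_choices(self):
        options = [-1, +1]
        self.rng.shuffle(options)
        if self.preferred in options:   # try the remembered solution first
            options.remove(self.preferred)
            options.insert(0, self.preferred)
        return options

    def run(self, obstacles, max_steps=500):
        x, y = 0, HEIGHT // 2
        path = [(x, y)]
        for _ in range(max_steps):       # step cap so a boxed-in agent stops
            if x >= WIDTH - 1:
                break
            if (x + 1, y) not in obstacles:
                x += 1                              # clear ahead: keep going
            else:
                for dy in self.sidestep_choices():  # blocked: try a sidestep
                    ny = y + dy
                    if 0 <= ny < HEIGHT and (x, ny) not in obstacles:
                        y = ny
                        self.preferred = dy         # remember what worked
                        break
                else:
                    break                           # boxed in: give up
            path.append((x, y))
        return path

world = make_world(seed=1)
for i in range(3):
    agent = Agent(seed=i)
    path = agent.run(world)
    print(f"agent {i}: learned sidestep {agent.preferred}, path length {len(path)}")
# Identical code, different histories -> different habits ("personalities").
```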
  15. Well, these were just examples. Biosecurity won't stop the spread of plants and other animals (it might initially slow down the introduction, but it can't stop it totally). Saying that malaria (and similar diseases) is a poverty problem is a mistake. There are quite wealthy nations that suffer from malaria too (India, and some of the Southern European countries suffer from outbreaks, etc). What it comes down to is that poor countries have less control and treatment available for these diseases. Richer countries don't stop the spread, but they do control the diseases and symptoms better (and I am not just talking about malaria). The "biological controls" used to eradicate the vectors had other severe ecological problems (remember DDT?). Pest species cross borders very easily (just think about the fire ants in the southern states of the USA). And these species are endemic to their areas and are moving to areas slightly out of their normal habitat zone. What will happen if the endemic area of a species moves northwards and passes a border? These species would then become much harder to control. What is the cost of controlling invading species every year? That cost will just increase with GW.
And this is what I meant by equilibrium. The positive and negative feedback loops are in equilibrium. What happens if we change this balance? Exactly. Then why should we actively change the balance in favour of one or the other?
I agree with this. We should attempt to understand the situation before making changes that are too radical. The knee-jerk reactions of politicians in democratic countries (as they only have a limited time before the next election, so they have to be seen to be acting) can be worse than the original problem (DDT again as an example). But neither of these arguments addresses the scope of the changes that might occur. Because of the non-linear nature of the environment, we can not be absolutely sure how severe or fast these changes will be. However, if we prepare for the worst, then we might be pleasantly surprised and not have such a serious problem; but if we underestimate the scope of the change and it exceeds our preparations, what is the cost of such a scenario? This was recently addressed, and the results are that the cost of under-preparedness far exceeds any cost of over-preparedness. On that alone, planning for the worst is financially better than underestimating what might occur. All that aside, doing a little now (even if it is not the "best" solution) will be more cost effective than doing something later.
  16. But the question is whether this trend is increasing or remaining constant. Also, if we are responsible, in any small part, for this increase in temperature and sea level, then who pays the bills when changing temperatures allow pests to enter new areas (now that they are warmer) and eat all the crops, or when the ranges of diseases like malaria change as the vectors that carry them enter these new areas? Who is responsible when rising sea levels swamp low-lying areas (or even countries)? If thousands of people are displaced from their homes due to higher sea levels, who will take in those refugees? It doesn't take a lot to cause massive amounts of damage.

There is also the concern of positive feedback loops in the environmental systems. These create "tipping points" where, once a critical point is reached, there is no way (or only extremely difficult and expensive ways) to reverse certain changes. E.g.: as the ice at the poles melts, less sunlight is reflected back into space, which increases the amount of energy in the system, heating the Earth up more, which increases the rate of ice melting. So in that example the melting itself speeds up the melting, and you end up with a runaway effect. It is these runaway effects that are the problem. Sure, it might be only 2 mm/year of sea level rise now, but if the ice starts melting faster and the oceans heat up (remember, if you heat something it expands, and there is a lot of water there), then it won't be a constant rate of increase. The weather and oceanic systems of Earth are non-linear systems, and so we expect non-linear outcomes from changes. This can mean that you might change something and little or no effect comes out of it, or that a small change produces a massive change across many parts of the systems (see the sketch below).
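To make the "runaway" idea concrete, here is a minimal toy sketch in Python. It is not a climate model; every number in it (the albedo curve, the forcing, the heat-loss rate) is invented purely to illustrate how a positive feedback can turn a small extra push into a jump to a much warmer state.

```python
# Toy "tipping point" sketch (not a climate model): every number below is invented
# purely to illustrate how a positive feedback (ice-albedo) can turn a small push
# into a runaway change.

def albedo(temp):
    """Made-up albedo curve: lots of reflective ice when cold, none when warm."""
    if temp <= 7.0:
        return 0.7
    if temp >= 12.0:
        return 0.3
    return 0.7 - 0.08 * (temp - 7.0)   # ice melting between the two thresholds

def equilibrate(forcing, steps=2000, dt=0.05):
    temp = 5.0
    for _ in range(steps):
        absorbed = (1.0 - albedo(temp)) * 10.0 + forcing  # arbitrary energy units
        emitted = 0.5 * temp                              # simple linear heat loss
        temp += dt * (absorbed - emitted)
    return temp

for forcing in (0.0, 1.0):
    print(f"extra forcing {forcing}: settles near {equilibrate(forcing):.1f} units")

# Without the albedo feedback, a forcing of 1.0 would only raise the equilibrium by
# 2 units; with the feedback it pushes the system past the threshold and the
# temperature runs away to a much warmer state (roughly 6 units vs 16 units here).
```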
  17. You misunderstood what I meant by "perfect". By "perfect" I meant a simulation without any flaws that could be detected and exploited by the inhabitants of that simulation. Such flaws would show up as defects in the mathematics that underpins physics. That is, even though we had an exact match for the physics, there would still be situations where it gave an incorrect answer (essentially a logic bug in the simulation, as opposed to a syntax bug). These logic bugs might stem from a "quick and dirty" method of implementing the rules of the simulation.

We use these kinds of fast approximations in simulations all the time. If you want to simulate the behaviour of a bunch of atoms, then you either need to use at least as many atoms, make fast approximations, or take longer to simulate the atoms than it would take for those atoms to actually do whatever it is you are interested in. So either the size of the computer is larger than our universe, our universe is running slower than the creators' universe, or they are using fast approximations. If they are using fast approximations, then there will be "artefacts" that can be detected (see the sketch at the end of this point). If either of the other approaches is being used, we can't detect it, but those approaches would not lead to a massive number of recursive simulations (which makes them a less likely place for us to be in, as there will be far fewer of them).

"Agents" (like in The Matrix movies) are not needed. All the system needs to do is monitor any intelligence that arises in the simulation (or, if that intelligence is programmed in from the start, it makes it much easier) and determine whether they are attempting to uncover flaws (as I stated above) in the system. If you have a language translation program, you could eavesdrop on any conversation (and if it is in a computer, like we are doing, it is even easier to detect) and determine whether the inhabitants are attempting to work out that they are in a simulation. It would then be a simple matter of "adding in" something that would stop that line of thought: a heart attack, a lightning strike, just erasing them (including from the memories of all the other inhabitants), switching off the simulation, etc. As this hasn't been done, and we are asking these questions, either we are not in a simulation, or they don't care if we find out that we are in one.

As I said, we are more likely to be in a simulation that uses fast approximations than one that doesn't, and therefore there will be flaws that can be detected (maybe the whole quantum gravity issue is one of these flaws, or maybe dark matter is the result of these fast approximations creating an error that we are detecting). Yes, we can determine that we are in a simulation if it uses fast approximations, and we are more likely to be in a simulation that uses fast approximations. True, and this supports my original proposition: if we are in a simulation, it is more likely a game than not.

Yes, this is speculation, but I think it is a fairly solid assumption. Evolution will mentally equip a species for survival. If there is a species that lives in a rapidly changing environment (on a time scale of a few generations, like the onset or end of an ice age), then the ability to rapidly adapt to these changes will be a survival trait. Learning, and passing that learning on to the next generation (aka culture), will also be a big survival advantage.
So in these situations, it would be expected that the ability to learn (and to make inferences about related subjects, aka intelligence) would evolve. It wouldn't evolve every time, but it would be a likely path once the initial potential is there (some basic forms of learning and culture, the ability to manipulate their environment, etc.). Also, it might take several such highly variable environments to push a species far enough up the intelligence scale that it reaches the point of becoming technological. But once a species hits that point, it is a massive survival advantage.

One of the fastest ways to learn about the environment is to experiment: trial and error. This can only really exist in a safe environment. In an environment where a single mistake will kill the experimenter, this kind of exploration cannot survive long (and so will not evolve). However, if you have a safe environment, a "nest" so to speak, then this kind of learning will flourish. "Nests" require the adults to protect the young, so this is the seed of culture. Culture is also a fast way to learn: the individuals that have learnt from their experiences pass it on to the young. However, in a variable environment this alone could lead to a stagnation of the knowledge and thus would not be a survival advantage. In a constant environment, the overheads of having all this (large brains and groups) become a disadvantage, so evolution would not encourage its development. Also, there would be no drive to learn new things, so they would not necessarily develop further. The only way to avoid this stagnation is through experimentation, that is, play. Play becomes a necessary aspect of survival.

What all this indicates is that play is a necessary (or at least an extremely likely) component of developing a species to a technological stage. A species might develop without play, but it would do so at a much slower rate than one that did employ play. So the playing races would push their technology further and further, at a faster rate than the non-playing races. That means that the computing power of the playing races will be far in excess of the non-playing races. So, given a few thousand years of development time, the playing races would have much greater computational capacity, and even if they only spent a small amount of time and resources on such simulations, they would still exceed the non-playing races in the number of simulated worlds.

If we are just a "brick" in an out-of-the-way place in a "game" simulation, then why spend so much processing power on us? It would be more efficient to just have a very crude, fast approximation algorithm handle us until we were "used" by a player. So either we are not in a game simulation, or we have players all around us. As games seem to be the most common form of simulation (and therefore we are more likely to be in one if we are in a simulation), we should have players all around us.

What is your reasoning here? Why must the number of people being simulated match the number of programmers? The "template" might have been developed in another simulation altogether; it might have been developed over several generations of the creators. They might have created a program to develop such templates, and then the number of programmers would be irrelevant compared to the number of people being simulated.
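Going back to the fast-approximation point above, here is a minimal sketch in Python of how a "quick and dirty" rule leaves detectable artefacts. It compares two ways of stepping the same simple orbit; the cheap method steadily leaks energy, which is the kind of measurable flaw an inhabitant of a simulation could, in principle, notice. The integrators and numbers are illustrative only, not a claim about how any real simulation would be built.

```python
# A minimal sketch of the "fast approximation leaves artefacts" idea.
# Two integrators for the same simple orbit: a careful one and a quick-and-dirty one.
# The cheap method drifts in total energy -- a measurable "flaw" that could be detected
# from inside the simulation.

def accel(x, y):
    """Inverse-square pull toward the origin (units chosen so GM = 1)."""
    r2 = x * x + y * y
    r = r2 ** 0.5
    return -x / (r2 * r), -y / (r2 * r)

def energy(x, y, vx, vy):
    return 0.5 * (vx * vx + vy * vy) - 1.0 / (x * x + y * y) ** 0.5

def run(method, dt=0.01, steps=20000):
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.0          # start on a circular orbit
    e0 = energy(x, y, vx, vy)
    for _ in range(steps):
        ax, ay = accel(x, y)
        if method == "cheap":                  # plain Euler: fast but sloppy
            x, y = x + vx * dt, y + vy * dt
            vx, vy = vx + ax * dt, vy + ay * dt
        else:                                  # velocity Verlet: more faithful
            vx, vy = vx + 0.5 * ax * dt, vy + 0.5 * ay * dt
            x, y = x + vx * dt, y + vy * dt
            ax, ay = accel(x, y)
            vx, vy = vx + 0.5 * ax * dt, vy + 0.5 * ay * dt
    return energy(x, y, vx, vy) - e0

print("energy drift, careful method:", run("careful"))
print("energy drift, cheap method:  ", run("cheap"))
```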
  18. I was not talking about the electron and positron being relativistic, but about the observers being relativistic. How can the way I am moving (or whether I am near a gravitating object) change the energy output of an explosion in a different location? If you were on Earth doing this annihilation experiment and I was on the International Space Station, how could my position in lower gravity affect your experiment?

If the energy from an explosion were to create a crater, then lots of ejecta, not just one rock, would be created, and the energy contained in that explosion would be spread around all that ejecta. This means that a rock from an explosion would not necessarily be travelling at relativistic speeds. Instead of just looking at the results of a hit by some of this ejecta, look at the trajectory. A 1-tonne rock given 1,000,000 newtons of force would have a different trajectory to the same rock given 500,000 newtons of force (see the sketch below). If we accept a variable C, then that rock would be given one or the other amount of energy depending on your frame of reference (stationary, moving, accelerating, in a gravity well, etc.). This means that two different observers in two different frames of reference will see that rock travel a different distance. If they then went to the location of that rock, they should arrive at the same place, but because they saw the same rock travel two different trajectories, they will not agree on where it landed and so end up at two different locations (the rock would therefore need to exist in two different locations simultaneously). As such a result is ridiculous in the real universe, we can conclude that any theory that has this kind of result must be wrong. As having a variable value of C leads to this conclusion (that the rock must exist in two different locations simultaneously), we can determine that C cannot be variable.
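A rough back-of-the-envelope sketch of the two-trajectories argument, with made-up numbers (a 1-tonne rock, a 45° launch, flat ground, and the two energies standing in for the two frames of reference): if two observers assign different energies to the same rock, they predict two different landing spots.

```python
# The same 1-tonne rock given two different amounts of kinetic energy lands in two
# different places, so two observers who disagree about the energy would disagree
# about where it came down.  Numbers are illustrative only.

import math

def landing_distance(kinetic_energy_j, mass_kg=1000.0, angle_deg=45.0, g=9.81):
    """Flat-ground projectile range for a rock launched with the given kinetic energy."""
    speed = math.sqrt(2.0 * kinetic_energy_j / mass_kg)
    return speed ** 2 * math.sin(2.0 * math.radians(angle_deg)) / g

for energy in (1.0e6, 0.5e6):   # the two energies the two observers would assign
    print(f"{energy:.0e} J  ->  lands about {landing_distance(energy):.0f} m away")
```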
  19. Yes, I agree there are some exaggerations about GW, but not all of it is exaggerated. Also, not all of the exaggerations are pro-global-warming; there are some that state the opposite (like the claim that the southern hemisphere is not warming up because we have just had one colder-than-normal winter). If you are going to attack exaggerations about GW, remember that both sides have proponents who exaggerate.
  20. But the external observer would measure a different value of C for you. And as C is used in E=MC^2, if a person in a gravity well (or travelling fast) annihilated some matter and antimatter, then the person doing the annihilating would measure one value for the energy released and the distant observer would measure a different value (a rough worked example follows below). If this were a bomb, the observers should see different sized craters (not to mention different trajectories for all the ejecta, so one observer might see a rock crush their house whereas the other observer would not see it crush the house). So, here again, we have two different, mutually exclusive results occurring if we accept a variable value for C.

No, we don't. If we accept everything else to be relative, then there is no problem. If we accept time to be a physical dimension just as space is, and use geometry in the four dimensions, then we do not need a variable value of C. In fact, to get any sensible results, we need a constant value for C.

Can you mathematically prove that? The maths and experiments in relativity indicate that this is not the case. The mathematics that underpins relativity does not come to this conclusion, so could you show us how it could?
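As a rough worked example (the masses and the 1% disagreement are made-up numbers purely for illustration), annihilating one gram of matter with one gram of antimatter gives incompatible answers if two observers plug different values of C into E=MC^2:

```latex
% Illustrative numbers only: annihilate a total of m = 2 g of matter and antimatter.
E  = m\,c^{2} = (0.002\,\mathrm{kg})\,(3.0\times 10^{8}\,\mathrm{m/s})^{2}
             \approx 1.8\times 10^{14}\,\mathrm{J}
% If a distant observer instead used a value of c just 1% smaller:
E' = m\,(0.99\,c)^{2} \approx 1.76\times 10^{14}\,\mathrm{J}
% A mere 1% disagreement about c is already a gap of roughly 3.6\times 10^{12}\,\mathrm{J},
% i.e. two incompatible explosion energies (and crater sizes) for the one event.
```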
  21. I am not just "following a pseudoreligion and believing it must be so out of religious faith." That sounds a bit like an attempted ad hominem against me. I did not claim that "anything not quite average must be a consequence of global warming", but said that non-average weather is an expected result of global warming, not that any variation must be a result of it. From your posts, you seem to think that because one region of the Earth was colder, the Earth cannot be warming up. Here in Canberra, Australia, we have had one of the warmest winters in many years. You are relying on a small sample and then extrapolating it to the whole southern hemisphere. From what I understand of how the average temperature is calculated, they average out the measurements from all the weather stations in a given area and then use that area's value in the final calculation (a rough sketch of this is below). So it wouldn't matter how many weather stations recorded temperatures, as all those recordings would first be averaged into one measurement for that area.
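Something along these lines is the idea (the grid-cell labels and temperatures are made up for illustration): readings are averaged within each area first, so an area crowded with stations still only counts once in the overall figure.

```python
# Minimal sketch of area-averaging: stations are averaged within their grid cell,
# and only then do the cells go into the regional figure, so a heavily instrumented
# cell does not dominate the result.  All values below are invented.

from collections import defaultdict
from statistics import mean

readings = [
    ("cell_A", 22.1), ("cell_A", 21.8), ("cell_A", 22.4),  # well-instrumented cell
    ("cell_B", 15.0),                                       # sparsely instrumented cell
    ("cell_C", 28.3), ("cell_C", 27.9),
]

cells = defaultdict(list)
for cell, temp in readings:
    cells[cell].append(temp)

cell_means = {cell: mean(temps) for cell, temps in cells.items()}
regional_mean = mean(cell_means.values())   # each cell counts once, however many stations it has

print(cell_means)
print(f"regional mean: {regional_mean:.1f} degrees C")
```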
  22. I will confirm this. I suffered an injury around 6 years ago and I have a chronic pain condition (more than that, as the injury has caused damage to my shoulder which results in frequent re-injury). One thing I have learnt about pain is that it is not the same as injury (a noxious stimulus). For us to experience "pain" we have to be aware of the triggering event, or, in chronic pain cases, the original injury can be enough. I have learnt to control my level of pain somewhat by diverting my attention away from the stimulus that causes the pain (my shoulder keeps dislocating).
  23. Equilibrium does not mean a flat line on a graph. Actually, because the Earth's climate system has both positive and negative feedback mechanisms in it, one would expect it to go up and down like a yo-yo; what you would not expect is a lot of extreme variations (you will get some, but there would not be too many; it would depend on the speed at which the feedback loops respond to changes). Expecting a flat-line graph as an indicator of stability shows a lack of understanding of how complex a system like the climate really is.

You know, one does not normally expect the number of icebergs off the coast of New Zealand that we have been getting, so what could be the reason for this increase in iceberg numbers? Well, if the climate were warming up, this would increase the rate of movement of the glaciers in Antarctica, which would then increase the number of icebergs forming (and also their size, so they would last longer and make it further north). Again, your statements show a lack of understanding of how complex the climate really is. A simple view of the climate will not give the correct results as to what GW really means. Have a look at chaos theory and complex systems theory (as well as feedback loops). Try to understand these, then apply them to a system such as the environment.

The icebergs off the coast of New Zealand could have many different causes. They could be caused by the Earth cooling down, or by the Earth heating up. What we need to do is look and see what is actually happening. This has been done. The Earth is warming up. The next question is: why is the Earth warming up? Well, increased CO2 levels will cause the Earth to heat up. We know this to be true. So, is the amount of CO2 that humans are producing having any effect? From the amounts involved and from records (ice cores, etc.), the answer is yes. So our increased CO2 production is having an effect, and this effect is to warm the Earth. The Earth might be warming naturally, but we are increasing this effect and will cause it to warm quicker and get hotter than it would normally.

Increased amounts of CO2 in the atmosphere will decrease the amount of heat lost from the Earth, whether this CO2 is natural or not. We are increasing the amount of CO2 in the atmosphere. We are producing lots of CO2 (and other gasses that have the same effect). Therefore we are contributing to an increased greenhouse effect. The only way we could not be contributing to an increased greenhouse effect is if either: 1) there is a CO2 sink that balances out the human-produced greenhouse gasses, or 2) CO2 and the other gasses don't cause energy to be retained by the Earth's climate and oceanic systems.
  24. I have never liked the term "Global Warming". It was a catchphrase coined before we knew what the greenhouse effect could actually do. What global warming really is, is more energy being retained by the atmospheric and oceanic systems. This energy could go into increasing temperatures (and most likely will, as the majority of the extra energy being retained will be in the form of infra-red radiation), but that is only one of many forms this energy could show up as. Global warming could be renamed climatic disturbance, but the term "Global Warming" has a better ring to it...

Yes, here in Canberra, Christmas day was cold. There was even snow in the nearby Snowy Mountains. However, we have also just had the hottest January day in many decades. What seems to be happening is that there are more, and larger, swings in the climate. And this is the expected result of global warming. Maybe not a direct "thermal" increase everywhere, but it is certain that more energy is being retained in the atmospheric and oceanic systems.

This calculation is very simple to do (a rough sketch is below). There is a constant amount of sunlight reaching the surface of the Earth. There should be a fairly constant emission of this energy back out into space. If the amount of energy coming into the Earth system matches the amount being emitted, then the system is in equilibrium. If more energy is being emitted than arrives, then the Earth will "cool". If less energy is emitted than arrives, then the Earth will "heat" up. The Earth was in equilibrium before humans started producing a lot of CO2 and other greenhouse gasses. Now we are producing these greenhouse gasses, which cause a retention of this energy. This means that less energy is being emitted. If the amount of energy coming into the system remains constant, and the amount of energy being lost is reduced, then the only thing that can be happening is that the Earth is "heating" up. Global warming is happening, and we are the cause. What the actual effects of this are is under debate, but a rise in temperature is very likely.
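Here is a minimal sketch of that bookkeeping in Python. The numbers (solar input, albedo, and the fraction of outgoing radiation held back) are textbook-style approximations I am assuming for illustration, not a forecast; the point is just that if less energy gets out, the balance temperature has to rise.

```python
# Minimal energy-balance sketch: constant sunlight in, thermal radiation out, and a
# greenhouse term that holds back a fraction of the outgoing energy.
# Numbers are rough textbook-style approximations used only for illustration.

SOLAR_IN = 342.0 * (1.0 - 0.3)     # W/m^2 absorbed (solar constant / 4, albedo ~0.3)
SIGMA = 5.67e-8                    # Stefan-Boltzmann constant

def equilibrium_temp(retained_fraction):
    """Temperature at which energy out equals energy in, when the atmosphere
    holds back `retained_fraction` of the emitted radiation."""
    # balance: SOLAR_IN = (1 - retained_fraction) * SIGMA * T**4
    return (SOLAR_IN / ((1.0 - retained_fraction) * SIGMA)) ** 0.25

for retained in (0.0, 0.40, 0.42):
    print(f"retain {retained:.0%} of outgoing energy -> about {equilibrium_temp(retained):.0f} K")

# No greenhouse at all gives roughly 255 K (about -18 C); holding back ~40% gives
# something close to the temperatures we actually see; and retaining even a little
# more means the planet must warm further before energy out matches energy in again.
```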
  25. Well, I should have said that you were again stating two mutually exclusive things. First you say that I would measure a lower velocity for C, and then you say that I would measure the same velocity for C. Specifically here: But then you state that C is a constant in the local frame: So which one do you actually mean? Is C constant, or not?