Everything posted by pioneer
-
The point is, once you add the buzzword "risk," a correlation can create fear beyond any final reality. It is a nebulous concept, which assigns everyone risk even if the actual event never pans out. How can you have something and not have it in the final reality? It is called fantasy. For example, I have a risk of being struck by lightning. I can also go my entire life and never get struck. In the final reality, I never had any real risk based on the final hard data, since there was no real cause and effect for me. But according to the math, I had risk, even if it never happens in reality. It is a type of magic alternate reality. It is emotional science.

Risk is based on a lottery mentality. Someone will win the megabucks. Before the reality of the drawing, everyone has the fantasy of winning, since the odds of winning are spread out equally to everyone who buys a ticket. If you buy your ticket early, you can imagine winning all that money and living in your new beach house. After the drawing, when reality sets in, most of the collective imagination turns out to have been only fantasy, fantasy that generated millions in ticket sales.

Whereas the money lottery uses desire to fuel the imagination, a risk lottery uses fear. Instead of imagining sipping a cool drink in your new beach house with a supermodel, winning the risk lottery causes us to imagine rotting away, even if it never pans out in final reality. In this case, we want to give the ticket back. There is a procedure to return the ticket and get out of the lottery: if you pay, and the doctor says you are fine, he gets the ticket and the magic spell is broken.

I tend to believe a better way would be a positive correlation that touches the most data points, instead of a negative correlation that uses the watered-down standard of only a few points for all. If the lottery told us up front that the reality of winning was low, so don't quit your job or drift into fantasy, the fantasy would fade.
But this would not allow us to sell as many tickets. We need to keep the fantasy fresh. Relative to any lottery, the more often you buy tickets, the better the odds of winning. The more risk tickets placed into your pocket, the more real the fantasy feels. The odds are on your side. For example, hangnails can be painful, but few lose sleep over them. If we decided to screen every month for hangnails, they would now look more serious, even if they aren't. If it wasn't that bad, why would they require monthly screenings? What do they know that they are not telling us? Maybe we need to screen even sooner than that. The more often you buy a ticket, the more excitement we can generate, and the more tickets people will buy to increase the excitement of the fantasy.
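The gap between a statistical risk and most individuals' final outcome can be put in numbers with a small Monte Carlo sketch. All figures here are invented for illustration, not real actuarial data: the point is only that a whole population can carry the mathematical "risk" while the overwhelming majority never experience the event.

```python
import random

def fraction_ever_struck(n_people, annual_risk, years, seed=0):
    """Simulate n_people lifetimes; each year, each person is independently
    'struck' with probability annual_risk. Return the fraction ever struck."""
    rng = random.Random(seed)
    struck = 0
    for _ in range(n_people):
        if any(rng.random() < annual_risk for _ in range(years)):
            struck += 1
    return struck / n_people

# Lightning-strike-scale odds (numbers invented for illustration):
frac = fraction_ever_struck(n_people=100_000, annual_risk=1e-4, years=80)
print(f"{frac:.2%} of the population ever experienced the event")
```

With these made-up odds, only on the order of 1% are ever struck: everyone "had risk," but for the other 99% the event never entered their final reality.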
-
Here is a road map of California's research initiative for climate change. Does anyone see how the deck is stacked? If you want money, you either get with the program or don't expect anything but mudslinging. One will not see balanced funding available to check out opposing claims, just to see if the playing field is a little slanted. Money starts with the political system. They decide the priorities for funding. Politicians can only tell the truth and never make stuff up. They would never use mudslinging against their opposition. If they wanted the moon to be made of green cheese, since this would help them stay in office, they would simply put up money for only that. Now the majority of the data supports green cheese. This consensus is totally unbiased.

Let us work under the assumption of political deck stacking. The government is a major user of energy and resources and has a huge carbon footprint, bigger than any other group's. We will set green targets for the government. If they fall short, they can pay a fine or buy green credits, which are given to the people as a tax break. If we could do this, the research funding would change priorities to avoid the tax refund. That is how you get the system to be honest: by applying the game against it.
-
How do cells know to line up homologous pairs at the metaphase plate?
pioneer replied to CrazCo's topic in Homework Help
Chromosomes are hydrogen-bonded molecular configurations dissolved within water. All the packing with proteins, and the various levels of winding, allow chromosomes to reach low configurational energy. The arrangement of homologous pairs is a hydrogen-bonding configuration of even higher order and even lower energy. When this configuration separates into two sets of chromosomes, configurational energy increases. We need to add energy, via ATP, to raise the potential and reach this higher-energy arrangement. Without this energy, the pairs would remain closer to lowest energy.

This arrangement is influenced by the centrioles, which are also hydrogen-bonding configurations positioned at lowest energy relative to the membrane, its proteins, and the DNA. The DNA goes to the lowest-energy place between the two sets of centrioles, since going either way off center would create a potential. The planar arrangement is the sweet spot of lowest energy. Microtubules connect the configurational potential between the centrioles and the DNA, since the DNA and centrioles are at different potentials. The highly packed DNA is about as low in potential as it gets. The cell then uses ATP energy to raise the energy of the entire composite structure. This separates the configuration of the DNA into two higher-energy configurations, with movement toward the centrioles adding extra configurational energy. The cell tries to lower this potential, resulting in the flow of membrane forming two cells.

Making use of configurational potential allows the process to be foolproof. Evolution had the impact of making it more efficient. Because the process is based on configurational potential, it is fairly flexible with respect to some details in the process, as long as the potentials add up. This is another fail-safe, so different forms of life can still get the job done.
Increasing configurational potential from the metaphase plane will not be a simple reversal of the DNA packing that led to the homologous pairs. The continuity of an energy increase makes it easier to separate the pairs than to reverse the packing, due to the steric hindrance within the DNA, i.e., the least-energy way. A good example of the effect is throwing paper clips into a dish. They get all meshed together. To separate them into individual paper clips again, we will not necessarily replay the toss backwards. Rather, we grab a bunch, let it come apart as a group, and then start to separate the bunch. The cell grabs two bunches and starts to separate the paper clips. This is another foolproof or fail-safe design to assure two cells.
-
The other three forces of nature give off energy when the force potential is lowered. If gravity is a force, what type of exothermic output does gravity give off when its potential lowers? There is no output that is very obvious. As such, this is one way gravity behaves differently from the other forces: it does not output obvious energy. However, the lowering of potential plus the conservation of energy implies something, or we are in violation.

Say gravity did have some type of output that is not obvious to the naked eye; its impact would be loosely analogous to that of the other forces. For example, the EM force gives off photons, which can act on other matter held together by the EM force. The impact is to reverse the effect of the force somewhere else. But the second law states entropy must increase, so part of the impact goes into entropy, lowering the reverse impact as it propagates. If gravity gave off an unknown output, like the other forces, the impact of this output would reverse gravity elsewhere, but in a way that increases net entropy for the second law. The expansion of the universe seems to satisfy the criteria.

Let us say this another way. Gravity is defined by GR, with lowering gravitational potential compacting the matter in space-time and increasing the local space-time contraction. The exothermic output should cause space-time to expand elsewhere, if gravity were behaving like a force and giving off energy as it lowers potential. With entropy having to increase, it will not show up as a parallel galaxy expansion, but as something putting potential into a space-time expansion effect with increasing entropy. If we look at the earth, condensing core iron from liquid to solid increases the density and lowers entropy. Does this increase the space-time contraction of the iron core over time, due to higher mass density? Does the output from this have an anti-gravity/entropy effect in the liquid part of the core? Do these cancel, so the surface stays the same, or does surface gravity increase or decrease? Does it show up in anti-gravity effects such as volcanoes, where matter moves toward lowered space-time contraction by increasing gravity potential?
-
I would like to begin with an analogy. Many auto manufacturers recommend changing the oil every 5000 miles. The Jiffy Lube chains, where you can go for a quick oil change, recommend every 3000 miles. Relative to the cost of oil changes, the 3000-mile schedule is more expensive over time, but we are told it will reduce the risk of engine wear. This risk pitch has been successful, with many people no longer willing to change their oil every 5000 miles, since they can see the risk. We could lower this risk even more if we changed the oil every 2000 miles. But there is still some risk even in this. Maybe we should change the oil every day. But even that has a risk. It also starts to affect supply and demand, with the cost rising due to demand, until the market adjusts and more Jiffy Lubes open all over the land.

Like any good auto service shop, changing the oil is not the only service offered for your protection. Once the car is on the lift, there are other risks which we need to point out and address. Rotating the tires at 7500 miles is risky, even if the manufacturer says it is fine. He wants your tires to wear. Maybe we can lower the risk even more by rotating at 5000 miles, to avoid the risk of poor handling and blowouts. But even that can be risky, so maybe we need to lower it to 2000 miles. What the heck, maybe each day we can rotate the tires while we change the oil. This will minimize the risk, but there will still be some risk. While we change the oil and rotate the tires, there are many other things that can cause risk. Modern cars have computers to monitor all these risky devices. To lower these risks, we need to plug into that machine every 15,000 miles. But that can be risky; things can change quickly. Maybe while we change the oil and rotate the tires, we also need to plug into the machine to lower that risk. I am kidding around, but one can never reduce the risk to zero. Trying to do so gets very expensive. But it would also increase the rate of growth of the quickie-lube industry.
How can you argue against trying to reduce risk, no matter what the price? But since minimal risk is very expensive, how can the average family afford to own an auto? The daily oil, tires, and diagnostics could add up to tens of thousands of dollars per year, and take up a lot of time sitting in the waiting room. Maybe we need the government to step in, so the wait is shorter, and to help offset the costs. Some people just can't afford this and are suffering great risks changing that oil only every 5000 miles. Just skipping one day at the shop can double your risk. At every 5000 miles, the risk boggles the mind and borders on criminal neglect. We may have to force them, for their own good, with a fine or jail.

Risk analysis is valid math, but it is an odd correlation, since the weight of a few data points counts more than the majority of the points. As long as you use the buzzword risk, 2 > 98. To put it in perspective, say you run experiments and generate 100 data points. We pick one or two points and draw a curve through only these two, making sure it does not touch the other 98. This line is what we need to believe and prepare for. It sounds like soft science, until we use the buzzword risk. Like magic, it gets harder. It is called emotional science, where emotion distracts us so we can forget the 98. Now daily oil changes make sense, since the two-point curve says so.
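The "2 > 98" picture can be made concrete with a minimal sketch (all data invented): draw a line through two hand-picked extreme points and count how many of the remaining 98 ordinary points it actually describes.

```python
import random

random.seed(1)

# 98 "ordinary" data points near y = 0, plus 2 alarming outliers:
xs = list(range(100))
ys = [random.gauss(0.0, 0.5) for _ in range(98)] + [8.0, 9.0]

# The "curve" drawn through only the two chosen points is the line joining them:
x1, y1 = 98, ys[98]
x2, y2 = 99, ys[99]
slope = (y2 - y1) / (x2 - x1)

def two_point_line(x):
    return y1 + slope * (x - x1)

# Count how many of the other 98 points that line passes anywhere near:
near = sum(abs(two_point_line(x) - y) < 1.0 for x, y in zip(xs[:98], ys[:98]))
print(f"the two-point line comes within 1.0 of {near} of the other 98 points")
```

The line fits the two chosen points exactly and, by construction, says almost nothing about the other 98.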
-
This graph shows the heat that escapes from the CO2 greenhouse blanket that is trapping the earth's heat. The red is the hard data and the black is the average of the models which drive the fear. The data says that CO2, although contributing to global warming, does so at a slower rate than the models predict, since more heat is escaping than predicted. The bottom line is that the earth's greenhouse has natural vents, like the little open windows one sees on the roofs of greenhouses. The graph seems to show that the earth opened its windows to let out the heat. http://chriscolose.wordpress.com/2009/03/31/lindzen-on-climate-feedback/
-
http://www.dailygalaxy.com/my_weblog/2009/06/is-global-warming-part-of-earths-natural-cycle-mit-team-says-yes.html I would assume this would be part of the discussion, if truth were important. But this is an inconvenient truth, to quote Gore. http://www.opinionjournal.com/extra/?id=110008220 Here is an article by Mr. Lindzen, an Alfred P. Sloan Professor of Atmospheric Science at MIT. It is called "Climate of Fear." It sounds like sour grapes, until Prof. Lindzen published another inconvenient truth very recently: http://chriscolose.wordpress.com/2009/03/31/lindzen-on-climate-feedback/ The greenhouse blanket effects predicted by the models turn out to be overestimated compared to the real data that was published. There is still global warming, but the rate is slower than the models predicted, since heat escapes more easily than expected.
-
We can simulate the uncertainty principle with photography. If we take a photo of an action scene with the shutter speed of the camera too slow, we get motion blur. Motion blur is that fuzziness around moving objects in pictures. What motion blur does is create uncertainty in position, since the object appears to exist within a range of places, instead of one sharp position. In the photo below, the water has position uncertainty. On the other hand, if we choose the correct shutter speed and stop the action, we know exact position, but we can't tell momentum. The boys are in motion, and we know where they are, but we can't tell how fast they are moving or how much momentum they have.

Relative to the uncertainty principle, the phenomenon is loosely related to this photography effect. The operative variable is time (shutter speed). If the shutter is too slow, there is extra time in the photo. This is seen as uncertainty in distance, or blur, with position expressing the extra time that the camera does not pick up. When there is no extra time, and the shutter speed matches the motion, we get only sharp distance (without time), but the lack of time makes it hard to know the momentum, because momentum needs time as a variable in its equation.

There is a workaround using time. If instead of a still camera we used a movie camera, with each frame at the correct shutter speed to stop the boys, each frame would tell exact position, while the composite of the frames, the movie, would tell us the momentum in the action. So we look at the momentum in the movie, and pick a frame for exact position. What the movie has done is add time to the still frame.
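The shutter-speed arithmetic behind the analogy can be sketched in a few lines. To be clear, this is purely a classical camera effect: the real Heisenberg uncertainty principle is a quantum limit and is not evaded by taking a movie. The sketch only shows the trade-off the post describes.

```python
def blur_length(speed_mps, exposure_s):
    """Positional spread of a subject moving at speed_mps during one exposure."""
    return speed_mps * exposure_s

# A subject moving at 5 m/s:
slow = blur_length(5.0, 1 / 30)    # long exposure: noticeable blur, fuzzy position
fast = blur_length(5.0, 1 / 2000)  # short exposure: sharp position

# A single sharp frame carries no speed information, but two frames do;
# this is the "movie" workaround described above:
def speed_from_frames(pos1_m, pos2_m, frame_interval_s):
    return (pos2_m - pos1_m) / frame_interval_s

print(slow, fast, speed_from_frames(0.0, 0.2, 0.04))
```

Shortening the exposure sharpens position in one frame; recovering speed then requires the time between two frames.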
-
This past June in New England was one of the coldest on record, with the fewest sunny days since 1903. The reason was constant clouds and rain. Global warming will increase the amount of water in the atmosphere, since more water can evaporate into the atmosphere if the surface temperature is higher. This leads to more clouds and less sunshine. Clouds are a good solar reflector, keeping NE in the low 60s all June. Is it possible the relationship between higher surface temperature and higher water content (more clouds) will make global warming self-regulating? If there had been less water in the atmosphere over NE this past June, it would have been sunnier and warmer, more on par with the global warming predictions. What was interesting is that all this cool, wet weather in NE really made everything grow like weeds. It appears to have increased the surface area for CO2 absorption. Even algae, which is not normal on houses and roofs during sunnier weather, is helping to collect CO2.
-
There is a logical way to split the singularity of the BB theory, using basic observations within physics. It only requires a single extrapolation from an old experiment. If you look at photons, they behave as both particles and waves. The double-slit experiment helped demonstrate the wave nature of photons, because only a wave could go through both slits at the same time, without considerations like binary pairs. Mass particles are also both particles and waves, but they act more like particles than waves. A rock thrown at a double slit can only go through one slit at a time. Its wave is rather small in relative energy. All we need to do to expand a mass-based singularity is increase the ratio of wave to particle, so it can be in two places at the same time. Being in two places because of the higher wave ratio implies entropy has to increase, which is the reason for the conversion to a higher wave ratio. This lowers energy by being endothermic and increases entropy, like the rest of nature, moving the singularity forward.
-
Retrocausality, or changes in the future which can change the past, can be explained as being due to the limitations within empirical assumptions. For example, a study might show that drinking coffee is not good for you (hypothetical). So when you drink coffee today, the outcome is predicted to be bad in the future. Years later, a new study says drinking coffee is now good for you. Now all that coffee you drank yesterday, that once would hurt you, magically changes all its past properties and now leads to better health effects in the future. Nothing has changed in reality; the special effect is connected to the empirical result. The word retrocausality is sort of misleading, since it implies we were using cause and effect in the first place. One has to look behind the smoke to see the real cause.
-
The term phobia means fear. Homophobia appears to be different from most traditional phobias, because it does not involve avoidance of the threat. Rather, it involves aggressive pursuit. For example, someone with hydrophobia avoids water and does not aggressively pursue it. If they see a swimming pool, they will go the other way. One does not see a homophobe running away when he sees a gay person walking down the road. The behavior is different. One way to characterize the difference is in terms of predator and prey. Traditional phobias place the person in the prey role. The fear of drowning is the fear of being consumed by the water. The prey runs away from the water predator. Homophobia is more like the predator role, where the adrenaline from fear's fight-or-flight response causes the person to pursue the prey. As such, homophobia may not be a traditional phobia, because it appears to be more connected to predator behavior, which uses the fear to pursue the prey. Maybe there needs to be a new term to differentiate this from traditional prey-side phobias. The Latin root of aggression is aggredi. Maybe the more accurate term is homoaggredi. From a moral point of view, such aggression is not good for anyone. But from a science point of view, distinctions are useful.
-
There is a big push in culture to go "green," or return to natural ways of doing things to restore natural balance. As far as I know, condoms are not a product of the earth but of a factory. Without condoms, in light of certain behavior, nature has, through evolution, generated "natural" or green diseases based on the environments created by behavior. There is a cause and effect, with these diseases only appearing and spreading due to certain behavior and the conditions it creates. This behavior can be supported using artificial compensation, even if in a "green" or natural environment that same behavior would result in a thinning of the herd. The pope is for all green, with selective advantage going to those who can adapt to this natural change in the environment without requiring the artificial. One would not be vulnerable to nature's green herd-thinning reaction if one chose certain green behaviors that are in harmony with nature. The question becomes: what types of behavior would allow selective advantage, without artificial additives, so we can stay green? If we need a study, take away all the artificial, or things that the earth does not provide naturally, for a control group. Next, we let nature take her course and see who is left. Then we can correlate the behaviors that gave the most natural selective advantage. We will call this natural evolution.
-
Vertical and lateral evolution
pioneer replied to pioneer's topic in Evolution, Morphology and Exobiology
I understand the convention. It is difficult to argue against it without making people frown. I don't wish to hurt feelings. All I was doing was trying to order some of the data as a function of changes in base systems which build upon each other. Is there data that shows a more advanced version of a body system appearing before a simpler system? I would be happy to see this data so I could put this to rest. The difference has to do with when the gene is being expressed. If the multicellular step occurs earlier for the same critter, it is a base genetic effect that forms a platform onto which a wide range of changes build. They can both be vertical, but one takes a bigger step and provides a larger platform for lateral change.
-
The scientist characterizes something. The engineer is involved in scaling that up, usually into production. As an example, the scientist develops a new coating for iron. He has to understand all the chemistry and run all the experiments to make this a sure thing. Once that science is in place, the engineer has to understand it and use this knowledge for scale-up. Scale-up creates a bunch of extra concerns. For example, to be cost effective, marketing decides it can't cost more than X dollars per gallon to produce. Even if it is the best iron coating anywhere, if it costs 2X, we may not sell enough. So we need to make it at X dollars, or else that excellent science might end up on the shelf. It might still be used to help science because of its unique qualities. But if this invention is part of a company's R&D, that innovative science may never reach mainstream science, since it won't be published.

From the development-engineering point of view, pilot testing will create new problems compared to the beaker. One can run into problems with heat generation, hot spots, slow diffusion or mass transfer, etc., that can impact reaction kinetics, leading to impurities or slower reaction rates compared to the lab. That is not good for cost, if we have to build bigger or add another clean-up step. A good engineer will solve the problem right there. So it is important for the scientist and the development engineer to work as a team. The scientist knows the molecule and has had experience with these by-products through his experiments. He can identify what might be the cause. The engineer may have to tweak the process design to reduce the variable. Once the pilot study works, and we are on the target cost, comes scale-up into production. This might require our scientist-engineer team to interface with a design team. If all goes well, next comes construction and then start-up. This is usually where the engineer separates from the scientist.

Our scientist is back in the lab, coming up with the next invention, which will be pilot tested.
-
Vertical and lateral evolution
pioneer replied to pioneer's topic in Evolution, Morphology and Exobiology
Maybe I am not explaining myself well enough. I am not trying to create confusion, but I see things in a different way, which may be helpful. I think I found a better example of vertical evolution. Consider the heart, the pump that circulates the environment around the cells within a multicellular animal. As far as I know, there is no other basic mechanism, other than a pumping action, that could be substituted within a large animal. Trees use capillary action, but without the pump, plants are stuck without mobility. The heart-pump appears to be a milestone for the progress of animals, without any substitute. Even at the time of only replicators, this was a predefined milestone that had to be achieved, or else animal progress on earth would have been stuck. Where this vertical thinking came from was looking at how an animal grows from conception to birth. This is not done randomly; it follows a basic schema that lays foundations and builds upon them. As each milestone is set in place, there is further differentiation, both laterally and vertically. If we start with a fertilized ovum, all it does is divide to a point and then stop. This is conceptually the simplest type of multicellular life. It is analogous to a dividing cell whose daughter cells stick together. Once that milestone is reached, life builds on it.
-
Vertical and lateral evolution
pioneer replied to pioneer's topic in Evolution, Morphology and Exobiology
Replicators are often seen as the beginning of life. Was this a vertical evolutionary milestone, or did a variety of mechanisms occur under differing conditions? Replicators are one area where enough attention has been given to a vertical milestone. Other vertical changes haven't been thought out, so they come under a different standard. In the case of replicators, the design is the most flexible for almost all environments. This vertical milestone was proactive and not just reactive.
-
There is a logical way, using only simple genetic considerations, to separate evolution into vertical and lateral components. Vertical evolution could be seen as evolutionary progress, whereas lateral evolution is more concerned with perturbations of existing systems. The distinction is easier to see with an example. Let us compare a single cell with a hypothetical multicellular life form composed of only two differentiated cells that integrate. To differentiate, maintain, connect, and integrate these two cells requires additional DNA support that is not needed by the single cell. All the extra DNA requirement would make this vertical evolution. If we had two birds of the same species, one yellow and one red, one color may provide selective advantage. But this would be more of a lateral evolution, since it may only require tweaking proteins within an existing integration. If we go from our simple hypothetical two-cell critter to a hypothetical critter with ten types of differentiated cells, the genetic control system will need to evolve vertically to differentiate, maintain, connect, and integrate all those cells for the integrity of the integrated whole. Once that system is in place, tweaking that design is more lateral evolution. The dividing line between vertical and lateral is not always clear cut, so we may need to define something in the middle: oblique evolution, a combination of vertical and lateral evolution. To put it all together with a more realistic example, animals that bear live young require additional genetic differentiation support compared to animals that lay eggs. At the very least, one has to add genetically induced plumbing for the shared blood supply. That was vertical evolution. Lateral evolution may only change some outward coloring characteristics within the litter, but may not change any integrated systems. Oblique evolution may add additional functional optimization. This may go up and out.
-
SR has three relationships: one each for time, distance, and mass. The relativistic mass is important because it allows us to do an energy balance and therefore differentiate between relative references. The M is connected to E=MC2, with E connected to kinetic energy. Try this thought experiment. We start with three stationary references separated by a distance d in a triangle. We label one and give it velocity V. From the point of view of either stationary reference, we see one stationary reference and one reference with velocity. From the point of view of the moving reference, we see two stationary references appearing to move. One can see the energy balance is different, so relativity can cause problems in terms of energy conservation. With two references, it is harder to do an energy balance, so we have 50/50 odds of picking the correct reference that is moving because of energy (has the relativistic mass). This goes back to your question: is relativistic mass real or just a mathematical concept? If it were real, we should be able to measure it and differentiate relative references. If it is just a concept, we are stuck at 50/50 odds of getting it right with respect to the energy-balance reality. We could pick what is convenient for us, and then have 50/50 odds. We can still learn things, but we might assume too much or too little energy. This will have an impact on subsequent assumptions. It may be important to figure out how to measure relativistic mass, from a distance, to confirm we have the energy balance correct. If we go back to our three-reference scenario: if we could measure relativistic mass, the moving reference would see V but no m, while the stationary ones would see both V and m. Based on that, both would know who is stationary and who is moving, even if there are special effects in t and d.
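For reference, the standard SR formulas behind this energy bookkeeping can be sketched. (In modern usage "relativistic mass" is usually replaced by total energy γm₀c², but the arithmetic is the same; this sketch does not settle the frame-detection question raised above, it only shows the quantities involved.)

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v):
    """Lorentz factor for speed v."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def relativistic_mass(rest_mass_kg, v):
    """'Relativistic mass' in the older bookkeeping: gamma * m0."""
    return gamma(v) * rest_mass_kg

def kinetic_energy_j(rest_mass_kg, v):
    """Relativistic kinetic energy: (gamma - 1) * m0 * c^2."""
    return (gamma(v) - 1.0) * rest_mass_kg * C ** 2

m0 = 1.0        # kg
v = 0.6 * C     # 60% of light speed
print(gamma(v))                  # ~1.25
print(relativistic_mass(m0, v))  # ~1.25 kg
print(kinetic_energy_j(m0, v))   # ~0.25 * m0 * c^2 joules
```

At 0.6c the Lorentz factor is 1.25, so the "relativistic mass" is 25% above the rest mass and the kinetic energy is a quarter of the rest energy.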
-
Evolution confusion
pioneer replied to Garrettguy457's topic in Evolution, Morphology and Exobiology
I understand that evolution, as written, is not concerned with progress in an objective sense: systems that improve over time, and which, if brought back in time, would have selective advantage over an earlier version of the same animal. For example, if we could take modern humans and place them 3 million years ago, according to the theory there is no objective measure of forward progress. Selective advantage will not go to the modern man, since he is not more advanced. It would be a crap shoot as to who will have selective advantage. I would bet on modern man if you gave me 50/50 odds, since he is an advanced version of model 1.0. The older version 1.0 is better at older systems, much like old cars had stronger bodies with thicker steel construction. But under the hood there is no comparison, since evolution meant forward progress. We would also need to do it the other way: take a pre-human from 3 million years ago, bring him into modern culture, and let him adapt, to see if this is also random. I will still use the objective standard of progress when I place my bet.

I think I figured out the problem. If we look at all of evolution, from when life was simple chemicals like methane and ammonia, up to the present, and plot that data from 0 to 100, Darwin's original Origin of Species was a good fit for species data, which is like the data from 50-100. But just because the theory fits that data does not mean it can extrapolate all the way to 0. If we don't start at the origin, the best curve may fit the subset of data but be transposed vertically. Later, an addendum was added to the theory, when microscopic life was added to macroscopic observations via genetic theory. This creates the best fit of the data from 20-100, but does not intersect the origin. It is not clear if it is the final curve, since there is no requirement that it intersect the origin.
This potentially "floating curve" may fit the data from 20-100 perfectly, but has to intersect the origin to be finalized. If we use the theory and extrapolate back, we may get to 10, or replicators, but not to the origin. That told me that, just as Darwin needed genetics added to use more of the data, genetics may also need something more fundamental added to get to the origin. Where I looked was water and hydrogen bonding, since water is the majority component, and everything in the water has to follow the potentials associated with water. Hydrogen bonding is a variable common to both water and active bio-materials. If you stick lipids in water and shake, the aqueous potential makes a membrane. The interaction with water intersects the origin, since even ammonia or methane in water is under aqueous potentials and h-bonds. Intersecting the origin does lead to some conflicting conclusions, since it touches all the data from 0-100 and not just from 20-100. Progress based on the curve from 0-100 is a function of an energy balance. Energy considerations will result in a bilayer membrane from lipids. It is not random, with cubic membranes just as likely; those would be too high in energy and would shift into the spherical. Once that potential is settled, there is a new milestone for progress.
-
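The "floating curve" idea above can be sketched numerically: a model can fit an observed range perfectly and still be vertically transposed when extrapolated to the origin. All numbers here are invented purely to illustrate the curve-fitting point, not the biology.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Made-up illustration: suppose the "true" law is y = 2x (through the origin),
# but we only observe x = 20..100 through a process with a constant +30 offset:
xs = list(range(20, 101))
ys = [2 * x + 30 for x in xs]

a, b = fit_line(xs, ys)
print(a, b)  # slope ~2.0, intercept ~30.0: perfect over 20-100, misses the origin
```

Nothing in the observed 20-100 window reveals the offset; only demanding that the curve pass through the origin exposes it.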
An accurate age of the universe
pioneer replied to Gustafson, S's topic in Modern and Theoretical Physics
If the universe is expanding because space-time is expanding, doesn't the light from distant objects, reaching us billions of years after traveling through expanding space-time, experience extra red shift beyond the initial red shift due to Doppler motion at emission? For example, start with early space-time as a dense foam. One event gives off energy at that point in the history of space-time; its wavelength is fixed after the Doppler shift. Next, that light has to travel for billions of years while the foam is still expanding. A wild-card variable that has been added is the accelerated expansion. Is this happening only now, or has space-time always been accelerating in its expansion? If always, it was expanding slowest at the beginning, yet we credit the highest-velocity objects to the time when they should have been slowest. Is most of the red shift actually due to the light traveling through expanding space-time? Is what we see not the past of the object, but the present state of light that was given off long ago? Would this make a slow object look like it was fast? -
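For what it's worth, the standard treatment answers part of this directly: the cosmological component of the red shift is exactly the stretch accumulated while the light is in transit, given by 1 + z = a_obs / a_emit, the ratio of the universe's scale factor at observation to its value at emission. A minimal sketch of that relation (the scale-factor values below are illustrative, not measured data):

```python
# Cosmological redshift depends only on how much space expanded
# while the light was in transit: 1 + z = a_obs / a_emit.

def cosmological_redshift(a_emit, a_obs=1.0):
    """Redshift accumulated by light while space grows from a_emit to a_obs."""
    return a_obs / a_emit - 1.0

# Light emitted when the universe was 1/5 of its present size
# arrives with a redshift of about z = 4, regardless of the
# emitter's own Doppler motion at the moment of emission.
print(cosmological_redshift(0.2))
```

So on this account the answer to the question above is yes: most of the shift for distant objects is accumulated in transit, not imprinted at emission.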
The goal of the bill is to stimulate the economy and create jobs. The advantage of tax cuts is no overhead: they can happen within weeks, and the money will be in the economy almost immediately. In a state of emergency, that is fast, with no waste. The bill currently creates jobs, but many of these programs look like things we do when the government has extra money to spend and the economy is already up and running. The emergency is in the housing market; people need to feel free to spend, banks need to loan, etc. The analogy is a house on fire. We need water, not a bunch of lemonade stands. If the house were not on fire, lemonade stands would sound sort of refreshing. We need people with buckets manning the fire line and fighting the fire. Once the fire is out, lemonade stands will be refreshing. Here is what I would do: take the $1T and give every man, woman and child $3000 each. Each person can then spend their share any way they see fit. If you like one of the Democratic add-ons, you can give your money back to the leaders of the party and let them spend it for you; you get to feel good even if you don't get anything out of it. Those who wish to spend for themselves can help the economy while helping themselves. If they think the banks need it, put it in a savings account. If you think Wall Street needs it, buy some stock. If you think Detroit needs it, use it as a down payment on a car.
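A quick back-of-the-envelope check of the per-person figure (the population value here is an assumed round number; the US population in early 2009 was roughly 300 to 330 million):

```python
# Sanity check on the "$3000 each" figure.
total = 1.0e12        # the $1T under discussion
population = 330e6    # assumed head count: every man, woman and child
per_person = total / population
print(round(per_person))  # roughly $3,000 per person
```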
-
I am going to ramble a bit to show that some aspects of math are natural while other parts are a powerful invention. The square root of a negative number is called an imaginary number; together with the real numbers these form what are called complex numbers today. We have found many applications for imaginary numbers, but it is an area of math that may have been invented, since to be natural one should be able to point to an imaginary result in reality. What is interesting is that 0 (zero) is the only number that is both real and imaginary at the same time. If I said I had zero coins in my hand, you would have to use your imagination, since I have nothing in my hand. I also have zero suns and zero moons in my hand. We need to focus on which imaginary thing we are talking about, which is real at some level, but not in my hand. This is also a useful invention as long as we use the real part. There are other possible inventions. For example, suppose 1/0 = infinity. That would mean 0 times infinity = 1. That is OK. But 10/0 would also equal infinity, which means 0 times infinity could also equal 10. I am not sure if the imaginary part of zero is playing tricks. Humans have defined these with conventions so it doesn't get all confusing, and now we have a useful invention. Adding and subtracting are very natural. Division can become a little odd in reality. One can take a knife and cut (divide) an apple into 2. We end up with two halves, but the math says we have one half. This had to be ironed out on paper, and a powerful invention was created. If I have 1 apple and divide it by a half, how can I get 2 apples? This is not something we can do with matter. It sounds like I only cut the apple halfway, compared to the full cut above, so I should still have only one apple with a cut in it. The answer on paper is that dividing by a half asks how many halves fit in one apple, which is 2. Again, this had to be ironed out on paper, and a very powerful invention appeared.
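The points above can be poked at directly in Python, which has these conventions built in (a small illustration of the conventions, not an argument either way on invention versus discovery):

```python
import cmath  # complex-number math for square roots of negatives

# Imaginary numbers: the square root of a negative number is complex.
print(cmath.sqrt(-4))   # the complex number 2j

# Zero really is both real and imaginary: the integer 0 equals 0j.
print(0 == 0j)          # True

# Division by zero is left undefined rather than set to infinity,
# sidestepping exactly the "0 * inf = 1 vs 0 * inf = 10" contradiction.
try:
    1 / 0
except ZeroDivisionError as err:
    print("undefined:", err)

# Dividing by a half doubles: 1 apple / (1/2 apple per piece) = 2 pieces.
print(1 / 0.5)          # 2.0
```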
-
Here is another interesting angle on refractive index. Below are two graphs of refractive index as a function of wavelength, one for ethanol and one for distilled water. If you choose the right wavelength, you can make the comparison go either way. http://refractiveindex.info/
-
Electromagnetic Wave have momentum
pioneer replied to Ashish's topic in Modern and Theoretical Physics
Momentum and kinetic energy for a mass are related, since both require only M and V, the simple difference being MV versus (1/2)MV². If we add energy to a mass, we can increase its kinetic energy and therefore increase its momentum. I think one experiment was a light beam shone on a foil within a vacuum. The light caused the foil to turn and therefore gave it momentum. But it also added energy, or kinetic energy, to create the V that is shared between kinetic energy and momentum, so technically there was a momentum transfer. One might also say a force impulse transfers momentum. We could do a mass experiment with iron and a magnetic pulse and transfer momentum. This would simulate a quantum of magnetic force carrying momentum.
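A minimal sketch of the relations above. For light itself, the standard relation p = E/c assigns momentum to the beam even though it carries no mass; the 1 J figure below is just an illustrative amount of light energy, not data from any particular foil experiment:

```python
# Momentum vs. kinetic energy for a mass, and momentum for light.
c = 299_792_458.0  # speed of light in vacuum, m/s

def momentum_mass(m, v):
    """p = M*V for a mass M moving at speed V."""
    return m * v

def kinetic_energy(m, v):
    """KE = (1/2)*M*V**2; shares the same M and V as the momentum."""
    return 0.5 * m * v**2

def momentum_light(energy_joules):
    """Standard relation p = E/c: light carries momentum without mass."""
    return energy_joules / c

# 1 J of light carries only about 3.3e-9 kg*m/s of momentum,
# which is why a foil pushed by a light beam barely moves.
print(momentum_light(1.0))
```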