
Posted (edited)

I know it is done by measuring the time it takes a signal to reach Earth, but how do we know that travel time in the first place?

Edited by Bjarne
Posted (edited)

You send a signal with a response request, then divide the time for the signal to get back to you by two. This way you can calibrate the signal delay, including electronic processing delays.
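A minimal sketch of that two-way calculation (the round-trip time and the 5 µs transponder delay are hypothetical example values, not from the post):

```python
# Two-way ranging: distance from a calibrated round-trip time.
C = 299_792_458.0  # speed of light in vacuum, m/s

def two_way_range(round_trip_s: float, transponder_delay_s: float = 0.0) -> float:
    """Distance in metres, after subtracting the known equipment delay."""
    one_way_s = (round_trip_s - transponder_delay_s) / 2.0
    return C * one_way_s

# Example: a satellite at roughly GPS altitude answers after ~0.1348 s,
# of which 5 µs is the (pre-calibrated) transponder turnaround:
d = two_way_range(0.13478 + 5e-6, transponder_delay_s=5e-6)
print(f"{d / 1000:.0f} km")   # about 20,200 km
```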

A more accurate method could be to send a request to the RTC (real-time clock) on the satellite asking for the time the signal was received, then compare against two pre-calibrated atomic clocks, one on Earth and one on the satellite.
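A sketch of that one-way version, assuming the two clocks are perfectly synchronised (the function name is mine); it also shows why the clocks must be atomic-grade:

```python
# One-way ranging with synchronised clocks: distance = c * (t_rx - t_tx).
C = 299_792_458.0  # m/s

def one_way_range(t_sent_s: float, t_received_s: float) -> float:
    """Distance in metres, assuming both clocks share the same timescale."""
    return C * (t_received_s - t_sent_s)

# Any clock offset maps directly into range error:
# 1 microsecond of offset is roughly 300 m of distance.
range_error_m = C * 1e-6
print(f"{range_error_m:.0f} m per microsecond of clock offset")
```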

From that data one can calculate the distance to a satellite. Granted, we don't rely on one method: if you combine the previous data with parallax, you can fine-tune the distance and the signal's rate of travel.

 

After all, a signal moving through a medium can be delayed, so you need to study and test that medium (i.e. the atmosphere). After years of transmission, though, we know the properties of the atmosphere and the local spacetime medium to an extremely high degree of accuracy (though we still continually test them for changes).

 

That's one aspect people fail to recognize in science: we continuously test, particularly on something as critical as distance measures, redshift, the luminosity-distance relation, and relativity.

This continuous testing leads to extreme fine-tuning.

As such, numerous tests are continuously developed to help strengthen our accuracy.

Edited by Mordred
Posted

 

Because that is what science does. It continually tests established theories. Things are never taken for granted. After all, finding errors in current theory is how advances are made.

 

Right, and don't forget the motivation, e.g. the flyby anomalies.

Posted

 

Right, and don't forget the motivation, e.g. the flyby anomalies.

 

I'm sure there are scientists looking at that. I haven't seen anything about their work or results, though.

Posted

It would be great if some clear signal of a more complete theory than GR could be found; that could help us understand quantum gravity, for example.

 

Agree

Posted (edited)

To improve upon its confidence level and accuracy. Relativity is a big pill for many to swallow. However, if you think about it, we test one aspect of relativity in particle accelerators hundreds of times every day: inertial mass and the amount of energy required to accelerate the protons.

 

 

I know, very small orbits, nothing to worry about compared to flybys.

 

Quote

Meanwhile, space probes continue to challenge scientists every time they perform flybys.

One of the last was that of the spacecraft Juno in October 2013, from Earth en route to Jupiter.

NASA has not yet published data on this journey, but everything indicates that its speed as it flew over our planet once again differed from estimates.

 

Read more at: http://phys.org/news/2014-09-anomaly-satellite-flybys-confounds-scientists.html#jCp

Edited by Bjarne
Posted

No matter how confident we are in any theory the scientific method always considers some error.

In the context here, the question is whether the flyby anomalies are some unaccounted-for effect within standard relativistic theory, or whether they really do point to new physics. My feeling, and this is just my opinion, is that the effect can be understood within standard GR. I say this as, so far, no further anomalies at the kinds of energy or distance scales of satellites have been found. Still, it is an interesting question for those involved in satellite dynamics and control.

Posted

Just to note, if you plan on tackling relativity, you had better have conclusive experimental evidence; I saw the thread that was locked for lack of it. The Pioneer anomaly, for example, was figured out to be caused by anisotropic radiation loss from the craft's own heat. You can get the paper from the references on this page.

 

https://en.m.wikipedia.org/wiki/Pioneer_anomaly

 

Here is an excellent thread about the Pioneer anomaly:

 

http://www.scienceforums.net/topic/79814-pioneer-anomaly-still

Posted (edited)

Yeah, I am familiar with that thread. I'll take the word of professional peer-reviewed articles over a forum discussion any day, particularly since the Pioneer anomaly has been discussed to death in other forums I'm also a member of.

 

A forum is a learning and teaching aid. One doesn't change a theory via a forum; you would need to publish in a peer-reviewed journal to hope to do that.

(Anyway, this is off topic.) Do you have further questions on measuring satellite distance?

Edited by Mordred
Posted

As far as I am aware, none of the "probe anomalies" can cast any real light on GR or its flaws. The calculations are performed using modified Newtonian dynamics (it's not just for the dark-matter-denying sort) because the chances of arriving at a solution to the field equations for a dynamic multi-body system are next to nothing. We believe these calculations produce predictions very close to what GR would produce, but to say GR is flawed on the basis of a MOND-only calculation is ridiculous.

 

Bjarne - I still don't know what more you want from satellites for them to be a valid experimental test bed. The early Galileo test satellites had three independent, differently designed atomic clocks. Whilst Major Tim Peake (and his friends, but he gets a mention from a fellow Brit) is useful and great for the media, human intervention is not essential; there are oodles of experiments and comparisons with predictions that can be done with an array of satellites, all carrying clocks of roughly 10^-14 s accuracy.

Posted (edited)

Distance is measured by the two-way propagation delay. For such distant objects an active transponder is needed, so all technological delays are identified first.

 

Satellite designers don't want to carry an atomic clock if the payload doesn't need one, and anything less accurate would be too inaccurate. Imagine a 10^-11 clock: after two years of Earth-Venus-Venus-Earth flybys it has drifted by ~300 µs, or roughly 2×50 km, not good enough for an Earth flyby.
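As a rough order-of-magnitude check of those figures (the 10^-11 fractional stability is taken from the post; everything else is just arithmetic):

```python
# How far a 1e-11 fractional-stability clock drifts, and what that
# means in equivalent one-way range error.
C = 299_792_458.0            # m/s
FRAC_STABILITY = 1e-11       # assumed fractional frequency error of the clock
YEAR_S = 365.25 * 86400      # seconds in a year

drift_s = FRAC_STABILITY * YEAR_S        # accumulated time error per year
range_err_km = C * drift_s / 1000.0      # equivalent range error per year
print(f"~{drift_s * 1e6:.0f} us of drift per year, ~{range_err_km:.0f} km of range error")
```

That gives roughly 300 µs and ~100 km per year of drift, the same order as the figures quoted above.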

 

Some craft are designed with an accurate transponder whose delay is well identified before launch. Others have relaxed needs.

 

Also, satellite designers wanted to know the distance at a time (the 1950s) when putting atomic clocks on board was out of the question, but radar was already routine.

Edited by Enthalpy
Posted

How big would a satellite need to be for us to be able to use straightforward radar to measure its altitude?

We can check the distance to the Moon, and most artificial satellites are a lot nearer, but a lot smaller.

Posted

How big would a satellite need to be for us to be able to use straightforward radar to measure its altitude?

We can check the distance to the Moon, and most artificial satellites are a lot nearer, but a lot smaller.

 

The lunar distance measurement uses a corner-cube array; it's not reflecting off the whole Moon. And the satellite's size doesn't matter as much: you can use a beam much larger than the satellite to hit whatever is doing the reflecting. You probably have to make sure your power is small enough that you don't fry the electronics inside, but you have the much smaller distance coupled with the inverse-square law on your side.

Posted

They use a corner-cube reflector for lidar. It's better in a number of ways.

 

But I'm talking about this sort of thing:

https://en.wikipedia.org/wiki/Earth%E2%80%93Moon%E2%80%93Earth_communication#History

 

The bigger the satellite, the more signal hits it (at which point it probably does act pretty much like a small source, so the 1/r^2 law applies) and the better the chance that the reflected signal is observable when it gets back to you.

Posted

Astronomers make images of Mars from Earth using radio telescopes as radars, especially at Arecibo, so a first answer would be:

Radius of Mars = 3390 km vs satellite = 5 m (cross-section ratio 4.6e11)

Distance of Mars = 80 Gm (at its nearest), hence the satellite is detectable at ~100 Mm (distance ratio 823, since the cross-section ratio enters as the 4th power of distance)

This is pessimistic, because Mars images have several pixels, and satellites contain metal.

 

Or: the D=0.5 m F/A-15 radar sees the R=1700 km Moon at 384 Mm distance, so a D=50 m radar sees an R=5 m object at 66 Mm.
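That scaling can be reproduced from the radar equation: received echo power goes as D^4 * sigma / R^4, with sigma ~ radius^2 for a roughly spherical target, so at a fixed detection threshold R_max scales as D * sqrt(target radius). A sketch of the calculation (the known-case figures are the ones quoted above):

```python
# Radar-range scaling at a fixed detection threshold:
#   echo power ~ Pt * D^4 * sigma / (lambda^2 * R^4),  sigma ~ radius^2
#   =>  R_max proportional to D * sqrt(target_radius)
def scaled_range(r1_m: float, d1_m: float, rad1_m: float,
                 d2_m: float, rad2_m: float) -> float:
    """Max range of dish d2 on a target of radius rad2, scaled from a known case."""
    return r1_m * (d2_m / d1_m) * (rad2_m / rad1_m) ** 0.5

# Known case: a D=0.5 m aircraft radar detects the R=1700 km Moon at 384 Mm.
# Scale to a D=50 m dish and an R=5 m satellite:
r = scaled_range(384e6, 0.5, 1.7e6, 50.0, 5.0)
print(f"{r / 1e6:.0f} Mm")   # about 66 Mm, matching the estimate above
```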

 

Better values exist at near-Earth-object detectors, at NORAD, and the like.

 

Let's evaluate a tailored setup.

  • Transmit 1 MW at 30 GHz from a D=70 m antenna. Could be more.
  • A 10 m^2 target at an arbitrary 1 Gm receives 250 µW, wow, and reradiates it uniformly over 2π sr (a half-space).
  • A D=70 m antenna receives 10^-19 W = -160 dBm, wow again. The receiver is far enough from the transmitter that Earth shields it.
  • Integrate with phase coherency for 1 min: the signal energy is 6e-18 J; the noise energy at 30 K noise temperature is 2e-22 J. So for S/N = 14 dB, you reach 6 Gm, or 15× Earth-Moon.
  • Or integrate with phase coherency (how?) for 10 h. The signal energy from 1 Gm is 3.6e-15 J, so at S/N = 14 dB the range is 30 Gm, or 0.2× Sun-Earth.
  • Phase coherency for 1 min and intercorrelation of two receiving antennas give a range between the two.

You can exaggerate a bit. Antenna arrays accumulate more transmit and receive area and can transmit more power. A corner reflector at the satellite would gain a lot.
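The first two link-budget figures in the list above can be roughly reproduced (the ~60% aperture efficiency is my assumption; the post doesn't state one):

```python
import math

C = 299_792_458.0   # m/s
# Figures from the setup above: 1 MW at 30 GHz from a D=70 m dish,
# a 10 m^2 target at 1 Gm scattering uniformly into a half-space.
P_TX = 1e6          # transmitted power, W
FREQ = 30e9         # Hz
D_ANT = 70.0        # dish diameter, m
A_TGT = 10.0        # target cross-section, m^2
R = 1e9             # range, m
EFF = 0.6           # assumed aperture efficiency (not given in the post)

lam = C / FREQ
a_ant = EFF * math.pi * (D_ANT / 2) ** 2         # effective aperture, m^2
gain = 4 * math.pi * a_ant / lam ** 2            # antenna gain (linear)
p_tgt = P_TX * gain / (4 * math.pi * R ** 2) * A_TGT   # power intercepted by target
p_rx = p_tgt / (2 * math.pi * R ** 2) * a_ant    # echo power at the receiver
print(f"on target: {p_tgt * 1e6:.0f} uW, at receiver: {p_rx:.1e} W")
```

This lands near the quoted 250 µW on the target and about 10^-19 W back at the receiver.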

 

A lidar is obviously better. It too measures distances and speeds. Some asteroids have already been announced that would have been impossible to detect with a radar.
