Danijel Gorupec Posted June 4, 2015
I risk asking a dumb question... Why is adaptive optics better than image processing? I mean, if you know how to adapt a mirror, then you also know how to process the image to obtain the same effect... or? I have also read about something called "speckle imaging", but I am not sure if that is what I am talking about, or whether 'guide stars' (natural or artificial) are used with it.
Klaynos Posted June 4, 2015
You can't add information with processing. With adaptive optics you can.
Danijel Gorupec Posted June 4, 2015
Can you explain? I don't get it.
imatfaal Posted June 4, 2015
Adaptive optics increases the amount of information you take into your system by ensuring that incoming wavefronts which are distorted (say, by the atmosphere) are all usable and input into your system. Post-processing takes the distorted image (i.e. some of the information is degraded or unusable) and guesses (sometimes very well) what the rectified image should be; it takes less data and gives an approximation of the totality of the picture. It is always better to get all the possible input and work from there, rather than taking a portion of the available data and extrapolating/interpolating the rest.
Klaynos Posted June 4, 2015
Image processing cannot give you something that you didn't measure. An easy-ish example is resolution. Suppose you have a photo of a football field from above with a resolution of 1 m, and one pixel is darker than the others. No amount of processing will tell you whether that is a ball or a dog or a turtle... You need more information: a higher resolution.
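Klaynos's point can be made concrete with a toy sketch (the pixel values and the "ball"/"dog" patterns are invented for illustration): two different fine-scale scenes that average to the same coarse pixel are indistinguishable after the fact, so no amount of processing can recover the difference.

```python
import numpy as np

# Two different hypothetical 3x3 scenes at fine resolution:
# a concentrated bright spot ("ball") vs a uniform patch ("dog").
ball = np.array([[0, 0, 0], [0, 9, 0], [0, 0, 0]], dtype=float)
dog = np.ones((3, 3), dtype=float)

# A coarser sensor (one pixel per 3x3 patch) records only the average.
# Both scenes produce exactly the same measurement, so the information
# distinguishing them is gone before any software ever sees the data.
print(ball.mean(), dog.mean())  # both average to 1.0
```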
swansont Posted June 4, 2015
Another way of looking at it is that adaptive optics filters out noise. The image contains the time-integrated noise. You can't remove noise after the fact with 100% fidelity. As imatfaal said, you have to guess.
"Why is adaptive optics better than image processing? I mean, if you know how to adapt a mirror, then you also know how to process the image to obtain the same effect... or?"
You only know how to adapt the mirror because you are measuring the distortions.
Sensei Posted June 4, 2015
If we have a series (hundreds or thousands) of photos of a star or other astronomical object, taken with some delay between them, each is motion-blurred (depending on exposure time) due to the rotation of the Earth, atmospheric effects, etc. Image processing can then work out which features are atmospheric effects (because they are dynamic and constantly changing: visible on one photo but not on another) and generate a much better picture from the series. The algorithm should know when each photo was taken, so it can calculate how far the Earth rotated between photos, and how far it rotated during each single exposure. In professional observation of cosmic objects there is no way around image processing. Suppose we are looking at a far, far away galaxy that is sending just a few photons per second: constant observation of that area over hours or days is needed to get any image at all. Here are a couple of photos taken with a regular digital camera (10x zoom, IIRC) attached to a telescope: a hundred photos were taken, and a custom C/C++ algorithm joined them into a single image.
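The shift-and-add stacking Sensei describes (done there with custom C/C++) could be sketched like this; the drift amounts, noise level, and star position below are invented for illustration, and real pipelines would compute the per-frame shifts from timestamps or by cross-correlation rather than assume them:

```python
import numpy as np

def stack_frames(frames, shifts):
    """Align each frame by its known (dy, dx) offset, then average.
    In practice the offsets come from exposure timestamps plus the
    Earth's rotation rate, or from registering against a reference."""
    acc = np.zeros_like(frames[0], dtype=float)
    for frame, (dy, dx) in zip(frames, shifts):
        acc += np.roll(frame, shift=(-dy, -dx), axis=(0, 1))
    return acc / len(frames)

# Toy data: a single bright "star" drifting one pixel right per frame,
# on top of zero-mean background noise.
rng = np.random.default_rng(0)
frames, shifts = [], []
for t in range(5):
    img = rng.normal(0.0, 0.2, size=(16, 16))  # background noise
    img[8, 4 + t] += 10.0                      # star, drifting right
    frames.append(img)
    shifts.append((0, t))                      # assumed-known offset

stacked = stack_frames(frames, shifts)
# After alignment the star adds up in one place while noise averages down.
print(np.unravel_index(np.argmax(stacked), stacked.shape))
```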
Danijel Gorupec Posted June 4, 2015
I can see now how information is lost due to limited resolution. For sure, if a detail is degraded (due to atmospheric distortion, perhaps) to a single pixel, nothing much can be made of it. I can also see how a long-exposure image can be 'infested' by time-varying distortions so that there is no way to make much use of it. However:
A) Atmospheric distortions vary quickly in time, so if an image detail is degraded to a pixel, in the next moment it might spread to several pixels. Taking many images of the same object (supposing that the object itself changes only slowly) and doing some decent image processing should reveal a high level of detail across the whole picture. So yes, image processing is probably inferior to adaptive optics, but I guess only slightly (relative to the money invested)... Is this conclusion valid?
B) Atmospheric distortions vary quickly, but still at some limited speed. If the exposure time is short enough, there will be no problem with 'integrated noise'... Are there limits on how short exposures can be?
(Hmm, maybe digital image processing is so common in astronomy that nobody talks about it any more, and this is why it is never mentioned when I read about telescopes.)
... Wow, only now do I see Sensei's post... It confirms that image processing is already heavily used. So I suppose adaptive optics is the edge's edge... Thanks for the nice pictures.
John Cuthber Posted June 4, 2015
A lot of the time astronomers are looking at stars, and those are so small that, even with the biggest telescopes, the image of the star should be a single pixel (with an Airy disk, for those who want to be fussy). If the distortions spread that light out onto, for example, a patch of 4 or 9 pixels, then each of them only gets a quarter or a ninth of the light. So if you have a very dim star, you can have a state where, unless it's focussed properly onto just one pixel, it's not bright enough to see at all (for any given exposure time). There simply isn't a signal to process; it's lost in the background noise.
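A back-of-the-envelope version of this, with made-up flux and noise numbers: the same total light falls on the detector either way, but smearing it across a patch pushes each pixel below the per-pixel noise floor.

```python
star_flux = 3.0        # total light from a dim star in one exposure (assumed)
noise_per_pixel = 1.0  # background/read noise per pixel (assumed units)

# Focused onto a single pixel: signal-to-noise ratio of 3, detectable.
snr_sharp = star_flux / noise_per_pixel

# Smeared over a 3x3 patch: each pixel gets only 1/9 of the light,
# so the per-pixel SNR drops to about 0.33 and the star vanishes
# into the noise, even though the total light is unchanged.
snr_blurred = (star_flux / 9) / noise_per_pixel

print(snr_sharp, snr_blurred)
```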
Danijel Gorupec Posted June 4, 2015
@John Cuthber... if what you are saying is true, this would clearly make adaptive optics far superior to image processing. But I am (erroneously?) under the impression that optical sensors (CCDs) can detect a single photon. If so, it would be possible not to miss any information even with extremely short exposures (and then integrate in the digital domain)... As a kid I was told that the human eye can detect a single photon, but maybe this is only an urban legend.
John Cuthber Posted June 4, 2015 Share Posted June 4, 2015 (edited) @John Cuthber... if what you are saying is true, this would clearly make adaptive optics far superior to image processing. But I am (erroneously?) under impression that optical sensors (CCD) can detect one single photon. If so, it would be possible not-to-miss any information even if you make extremely short expositions (and then make integration in the digital domain)... As a kid I was told that human eye can detect one single photon, but maybe this is only an urban legend. They didn't make adaptive optics because they wanted to upset the programmers. On a good day the best detectors can detect single photons; but they don't know if that photon is real or noise. Also the quantum efficiency of the best detectors is not 100% (some photons essentially bounce off or are degraded to heat) so you can't ever hope to get a perfect image. The human eye isn't quite that good My recollection is that you need about 10 photons to turn up in quick succession to cause the nerve to fire. Essentially, by having poor focus, you throw away data on where the photon came from. losing information in that way is irrevocable- there's nothing the software can do about it. Edited June 4, 2015 by John Cuthber Link to comment Share on other sites More sharing options...
Danijel Gorupec Posted June 4, 2015
"They didn't make adaptive optics because they wanted to upset the programmers."
Lol... when I started this thread I was thinking that adaptive optics was an expensive replacement for image processing. Now it is obvious to me that adaptive optics is one additional step towards obtaining even clearer images. However, I cannot fully agree with your last sentence... You always have the guide star, and it roughly tells you what to do with the image. You would be able to roughly place your 'wandering' photon where it belongs.
John Cuthber Posted June 4, 2015
Of course, the other advantage of using adaptive optics is that you can use image manipulation as well.
swansont Posted June 5, 2015
"B) Atmospheric distortions vary quickly, but still at some limited speed. If the exposure time is short enough, there will be no problem with 'integrated noise'... Are there limits on how short exposures can be?"
There is an approach based on this concept: Lucky Imaging. You take short exposures and add together the ones that don't have atmospheric distortions. http://www.ast.cam.ac.uk/research/lucky/
"Lucky Imaging is a remarkably effective technique for delivering near-diffraction-limited imaging on ground-based telescopes. The basic principle is that the atmospheric turbulence that normally limits the resolution of ground-based observations is a statistical process. If images are taken fast enough to freeze the motion caused by the turbulence, we find that a significant number of frames are very sharp indeed, where the statistical fluctuations are minimal. By combining these sharp images we can produce a much better one than is normally possible from the ground."
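A minimal sketch of the frame-selection step of Lucky Imaging, assuming peak brightness as the sharpness score (one common proxy, since turbulence smears a star's peak; real pipelines also register the kept frames before averaging). The toy frames below are invented for the example.

```python
import numpy as np

def lucky_stack(frames, keep_fraction=0.1):
    """Keep only the sharpest short exposures and average them.
    Sharpness is scored here by the frame's peak value."""
    scores = [f.max() for f in frames]
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[-n_keep:]   # indices of the sharpest frames
    return np.mean([frames[i] for i in best], axis=0)

# Toy data: 3 "lucky" sharp frames and 7 turbulence-blurred ones,
# all carrying the same total light.
frames = [np.zeros((5, 5)) for _ in range(10)]
for f in frames[:3]:
    f[2, 2] = 10.0           # star concentrated in one pixel
for f in frames[3:]:
    f[:] = 10.0 / 25         # same light smeared over the whole patch

result = lucky_stack(frames, keep_fraction=0.3)
print(result[2, 2])          # the kept frames preserve the sharp peak
```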
Enthalpy Posted July 17, 2015
The detectors used in optical astronomy lose the phase information when detecting a photon. Once this information is lost, post-processing is not very effective: by superposing successive images, you add only the received power. Adaptive optics corrects the distortion to add all the light in phase. This is fundamentally better, because the focus adds the received field, and the power goes like the squared field. In radar and sonar, the detectors keep the phase information, and data processing retains the whole sensitivity of a distorted wave and antenna; so much so that we build flat hydrophones and radar antennas and compute the beams afterwards, in many directions at once. Lidars do this more or less with light, by coherent detection and heterodyning, but as far as I know (or don't), they aren't as advanced. We have no many-million-pixel detector with phase detection, and I suspect we wouldn't have the data-processing power for that frequency bandwidth either. Some people are thinking about synthetic beam forming with light (that is, aperture synthesis), but it's still just beginning.
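Enthalpy's field-versus-power point can be checked with simple arithmetic: N wavefront patches added in phase give a detected power scaling as N squared, while incoherently stacking N exposures only adds powers, scaling as N. A sketch with assumed unit field amplitudes:

```python
N = 100        # number of wavefront patches / stacked samples (assumed)
field = 1.0    # field amplitude per patch (assumed arbitrary units)

# Adaptive optics: fields add coherently, then power = (sum of fields)^2.
coherent_power = (N * field) ** 2       # scales as N^2

# Post-hoc stacking: phases are lost, so only powers add.
incoherent_power = N * field ** 2       # scales as N

print(coherent_power / incoherent_power)  # prints 100.0: a factor of N
```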