alan2here Posted August 12, 2008 (edited)

I propose a program to help the blind see gray-scale images as images, not just read descriptions of them. While screen readers are good, they can't portray how an image really looks, only a description of it.

For every pixel in the image, a waveform (sound) is created. The x position, y position, and brightness (b) of the pixel each change one property of the sound. Possible properties include phase (a time shift in the waveform, perhaps relative to its frequency), frequency, amplitude, power (raising the waveform to a power), squareness, gap (a period of silence between each wave), and resolution. For example, x could control phase or gap, y could control frequency, and b could control amplitude.

The average of the many waveforms (resolution in the x direction × resolution in the y direction of them) would then be taken to get the final waveform, which may then need its amplitude adjusted to produce the final sound. The final sound would be a continuous sound that represents, in a moment, the composition of the image, or of a moment in a film. A camera could be carried around to give vision-based audio feedback on the person's environment; this seems necessary to train the person to understand the sound.

Another variation could be x = power, y = squareness, b = amplitude. Post-processing could also be done on the image before it is turned into sound, such as a median blur or lowering the color depth.

Edited August 12, 2008 by alan2here
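To make one of these mappings concrete, here is a minimal sketch in Python with NumPy of the x = phase, y = frequency, b = amplitude variant. The frequency range, sample rate, and duration are illustrative assumptions rather than values from the post, and brightness is assumed to be normalized to [0, 1]:

import numpy as np

SAMPLE_RATE = 44100  # samples per second (an assumed value)

def sonify_image(image, duration=1.0, f_min=200.0, f_max=2000.0):
    """Render a 2-D gray-scale image (values in [0, 1]) as one waveform.

    Mapping, following the post's first example:
      x (column) -> phase shift of the pixel's wave
      y (row)    -> frequency (top rows high, bottom rows low)
      b          -> amplitude
    """
    height, width = image.shape
    t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    total = np.zeros_like(t)

    for row in range(height):
        # Map the row to a frequency; row 0 (the top of the image) is highest.
        frequency = f_min + (f_max - f_min) * (1.0 - row / max(height - 1, 1))
        for col in range(width):
            # Map the column to a phase shift across one full cycle.
            phase = 2.0 * np.pi * col / width
            # Brightness scales the amplitude of this pixel's sine wave.
            total += image[row, col] * np.sin(2.0 * np.pi * frequency * t + phase)

    # Average the per-pixel waveforms, then normalize the final amplitude.
    total /= height * width
    peak = np.max(np.abs(total))
    return total / peak if peak > 0 else total

# Example: a small test image with a bright diagonal.
waveform = sonify_image(np.eye(8))

Any pre-processing of the kind the post mentions (median blur, reduced color depth) would be applied to the image before calling sonify_image; playing the result, e.g. by writing it to a WAV file, is left out for brevity.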
iNow Posted August 13, 2008

That's a very interesting idea, Alan, but I'm curious why we wouldn't rather continue instead with the work being done on occipital implants and visual prostheses.

http://en.wikipedia.org/wiki/Visual_prosthetic
http://www.bioen.utah.edu/cni/projects/blindness.htm
http://www.temple.edu/ispr/examples/ex02_09_16b.html
bascule Posted August 13, 2008

Daniel Dennett described experiments where researchers attached a grid of tactile stimulators to people's backs, controlled by cameras. Over time, their brains came to associate the tactile stimulation with their environment, effectively enabling them to see. New research suggests the tongue may be a better surface to stimulate:

http://abcnews.go.com/Primetime/Story?id=2401551&page=1
alan2here Posted August 13, 2008 Author (edited)

"That's a very interesting idea, Alan, but I'm curious why we wouldn't rather continue instead with the work being done on occipital implants and visual prostheses."

It's not so much one or the other. This is cheaper and so more accessible, and it also represents a sense in a totally different way, so it could be useful for other sorts of research, such as research into the mind, perhaps a test of adaptive sensory intelligence for extended versions of tests like the IQ test. This also contracts to 1D and expands to 3D and 4D much better than vision does.

"Mike Ciarciello has been blind since birth but says that in his dreams he can actually see."

I would be interested to know more. I suspect that he does not dream using colors, or at least not in the conventional way, but perhaps has some way of representing distance from an object and what the surface of the object is like. In fact, even things we take for granted but probably don't think about much, like occlusion, may not be as big a thing to him as they are to us.

Edited August 13, 2008 by alan2here