Posted

This has befuddled me for a while. Isn't analog, by its very nature, capable of storing vastly more information than binary digits? Why is analog considered a "thing of the past"?

 

Consider, rather than a bit with two states, a tiny analog device (approximately the same size as a bit in a normal PC) that controls the flow of electricity through a circuit. Let's say that each degree of rotation represented a different state, i.e., a different amount of current allowed to pass through. Thus, 360 degrees would represent 360 numbers. Or you could use half-degrees and have 720 numbers represented, or quarter-degrees, with 1440 numbers, etc. You get the idea...
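(A rough sketch in Python of what that buys you in information terms, assuming ideal read-out: an N-level cell carries log2(N) bits, so even 1440 levels comes to only about 10.5 bits per cell.)

import math

# Hypothetical level counts from the post: whole, half, and quarter degrees.
for step_deg, states in [(1.0, 360), (0.5, 720), (0.25, 1440)]:
    bits = math.log2(states)
    print(f"{step_deg} deg steps -> {states} states = {bits:.2f} bits per cell")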

 

Of course, you would have to translate this back into 0's and 1's for the CPU to process the information (unless the CPU was analog too, similar to certain aspects of neurons).

 

Would this not vastly increase potential information storage?

Posted
How much error would there be in the translation? That's always the question in DAC scenarios, if I'm not mistaken.

 

Well, before I asked this question, I suspected that might have something to do with it.

 

And how would you store these analogue signals?

 

The fraction of current allowed to pass through the circuit would be the information.

 

As a generalized example, if 0% of the current was allowed to pass through, that would correspond to the first state, 0. If 0.1% was allowed to pass through, that would correspond to the next state, 1; 0.2% to 2; and so on. Depending on the level of accuracy engineers could achieve, it could be subdivided indefinitely, so that 0.0001% would be the next state after zero, up to the millionth state at 100%.
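(A rough Python sketch of that mapping, using the 0.1% step from the example above; the step size is just an assumption about how finely the current could be measured.)

def fraction_to_state(current_fraction, step=0.001):
    """Map a measured current fraction (0.0 to 1.0) to the nearest discrete state.

    With step = 0.001 (0.1%): 0.0 -> state 0, 0.001 -> state 1, ... 1.0 -> state 1000.
    A step of 1e-6 would give the million states mentioned above.
    """
    if not 0.0 <= current_fraction <= 1.0:
        raise ValueError("current fraction must be between 0 and 1")
    return round(current_fraction / step)

print(fraction_to_state(0.002))    # 2
print(fraction_to_state(0.00209))  # still 2: small read errors round away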

Posted

The problem with analog is that it is inaccurate: was that a 1.5 or a 1.5001? Each time you touch your data, it gets a little corrupted, whereas with digital, any small corruption gets eliminated each time the data is copied. A value would have to flip all the way from a zero to a one before there was any real corruption, and error-correction schemes can usually detect when that happens.

 

Basically, digital is far more error-resistant than analog, at the cost of only a little data density (occasionally more than one bit is stored in any particular measurement).
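(A toy Python simulation of that point, assuming a small amount of Gaussian noise is added on every copy: the analog value drifts further and further, while thresholding snaps the digital value back to exactly 0 or 1 each time.)

import random

random.seed(0)
NOISE = 0.02  # assumed per-copy noise level

def copy_analog(value, copies):
    # Every copy adds a little noise that is never removed.
    for _ in range(copies):
        value += random.gauss(0, NOISE)
    return value

def copy_digital(level, copies):
    # The same noise is added, but thresholding restores a clean 0 or 1.
    for _ in range(copies):
        level += random.gauss(0, NOISE)
        level = 1.0 if level > 0.5 else 0.0
    return level

print(copy_analog(0.7, 100))   # has drifted away from 0.7
print(copy_digital(1.0, 100))  # still exactly 1.0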

Posted
I see. That would be a major problem for normal data storage.

 

However, there are a number of computational situations where accuracy matters less than sheer memory capacity, especially pattern recognition, fuzzy logic, neural nets, and other machine-learning or probabilistic workloads. I would be interested to see a computer with both a digital hard drive and an analog hard drive working in conjunction.

 

Is that not essentially what the human brain is: a hybrid analog-digital computer (thresholds being the digital part, everything else being analog)?

Posted

The biggest reason why we don't use analog computers is the immense challenges of both designing and programming an analog computer. Analog computers operate on continuous data streams, which is more like a river of information, whereas digital computers operate on discrete data streams, which are more like cars driving on a road. It's much easier for us to reason about the behavior of cars: we can break down the problem and examine each car individually to determine its behavior. In an analog system, we can only examine how the data evolves. Unlike cars, you can't examine a "piece" of a river and see how it behaves independently of the rest of the system.

 

Far and away, humans reason about programming in an imperative manner, which relies on pushing discrete chunks of data around. Functional languages are much better suited for operating on analog data streams, but then there's the problem of actually executing the program. How do you compile a context-free program to run on an analog computer?

 

A compromise between analog and digital has perhaps been struck with SIMD units. These exist in many forms, such as SSE on Intel/AMD chips, AltiVec on PowerPC, and many types of DSPs, including the "SPEs" on the PlayStation 3's Cell processor. These units are primarily designed to work on continuous data sets which have been sampled into discrete chunks, applying particular transforms in parallel. This gets you many of the supposed theoretical benefits of analog computers while still retaining a discrete, digital control structure.
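(A loose illustration in Python/NumPy, whose vectorized array operations are typically backed by SIMD instructions such as SSE: a continuous signal is sampled into a discrete block, then one transform is applied across every sample at once. The particular signal and transform here are made-up examples.)

import numpy as np

# A continuous signal sampled into discrete chunks: a 1 kHz sine at 44.1 kHz.
sample_rate = 44_100
t = np.arange(0, 0.01, 1 / sample_rate)
signal = np.sin(2 * np.pi * 1_000 * t)

# Apply a gain and a hard clip to the whole block in one operation,
# the same "one transform over many samples" pattern a SIMD unit executes.
processed = np.clip(signal * 1.5, -1.0, 1.0)
print(processed[:5])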

Posted
Well, I do not know enough to comment about the design of processing units. Hard memory storage is a simpler topic to tackle, though.

 

In my proposition, analog components would function using discrete units, as digital components currently do. Each fraction of current (or what have you) would correspond to a discrete unit.

 

The primary benefit of such a design would be far, far more potential discrete states for the same number of components. Yet as others have mentioned, data corruption could be a problem.

Posted

Data corruption isn't a problem. The technique you're describing is known as Quadrature Amplitude Modulation (QAM), and is used for all sorts of high speed digital transmissions, like high speed fiber optic links, cable modems, cell phones, satellites, and dozens of other applications.

 

QAM is ideal when you have a serial line spanning a long distance, because you can pack a lot more data in than just a simple binary signal.

 

However, where a binary signal can be decoded by a single transistor, the number of components needed to decode QAM is a few orders of magnitude higher.

 

Binary circuits work great because of their simplicity. Transistor-transistor logic (TTL) is all that's needed to implement standard logic gates, and modern manufacturing techniques make it relatively easy to place hundreds of millions of transistors onto a single integrated circuit.

 

Compare this approach to trying to perform bitwise logic operations on QAM-encoded signals. There's just no good way to do it, short of decoding the signal back to parallel binary and performing the operation there.
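(For a sense of what that encoding and decoding looks like, here is a toy 16-QAM mapper in Python: 4 bits per symbol on a 4x4 grid of amplitude/phase points, represented as complex numbers. The constellation and bit mapping are simplified assumptions; real links add pulse shaping, synchronization, error correction and so on, which is where the extra hardware goes.)

# Toy 16-QAM: pack 4 bits per symbol onto a 4x4 constellation.
LEVELS = [-3, -1, 1, 3]

def modulate(nibble):
    """Map 4 bits (0-15) to one of 16 amplitude/phase points."""
    i = LEVELS[(nibble >> 2) & 0b11]  # upper 2 bits -> in-phase amplitude
    q = LEVELS[nibble & 0b11]         # lower 2 bits -> quadrature amplitude
    return complex(i, q)

def demodulate(symbol):
    """Snap a (possibly noisy) received point back to the nearest constellation point."""
    i = min(range(4), key=lambda k: abs(symbol.real - LEVELS[k]))
    q = min(range(4), key=lambda k: abs(symbol.imag - LEVELS[k]))
    return (i << 2) | q

sym = modulate(0b1011)                       # 11 -> the point (1+3j)
print(demodulate(sym + complex(0.3, -0.2)))  # small noise, still decodes to 11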

Posted

Correct me if I am completely off here but:

 

Isn't digital more of a 'technique' than a 'medium' in general? You can send an analog cell radio signal to a receiver: it detects the radio waves and their fluctuations, produces an electrical current in consistent response to those fluctuations, and drives a speaker to emit sound consistent with the radio signal, which was itself encoded from that same sound in the first place.

 

Radio waves will always be "waves," but when we transmit digital data, it's not like we are sending "all or nothing" Morse code signals or changing what is inherently an analog signal: we simply emit the radio wave in such a manner that it can be broken down into discrete packets of binary data. The density of the data is not determined by the fact that it's binary (i.e., only a 1 or a 0 is sent at a time) but by the sensitivity of the encoders and readers, and the reliability of the transmission or storage medium.

 

If all you can detect or read is "beep/no beep," all you'll get is the bandwidth of a Morse code signal, but when you actually transmit radio data you have a huge range of intensities that can be written and read, allowing you to pack a full byte or even more into a single moment. In broadcast, the write speed (the time between changing data states) is also determined by the ability to write, survive transmission, and be read with accuracy. For storage, say on a magnetic disk, the size of each physical 'bit' is also based on accuracy. It's really analog at that level... it's not a discrete box cut out to store a bit in, it's an area you write to and read from, and you can be pretty sure you don't interfere with neighboring bits only because you haven't packed them too tightly.

 

 

To bring this back to your idea: quarter-degrees give you 1440 values... but those are still binary values in effect; you've just chosen to pack them into an electrical wave at a certain density per fraction of a degree. We already "pack" data this way when we broadcast digital signals. It doesn't really increase storage capacity, because you are still limited by the accuracy of your writer, the durability of the transmission or storage medium, and the accuracy of the reader.

If you don't care about accuracy, then you still have to pack 'fuzzy numbers' that are large enough to ensure the higher digits stay accurate while allowing the lower digits to be 'read or not read', so this doesn't increase capacity either.

 

The key problem here is that if you up the storage capacity by measuring eighths of a degree instead of quarters... any inaccuracy could mean reading not the end of one value but the start of the next (and vice versa), leading to HUGE errors (i.e., reading what may as well be a random value) instead of a 'fuzzy' number that is 'almost' close to what you stored.
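(A back-of-the-envelope sketch of that trade-off in Python, with a made-up read error: doubling the number of levels halves the spacing between them, so a read error that was tolerable at quarter-degree packing lands on a neighbouring value at eighth-degree packing.)

full_scale = 1.0
read_error = 0.0003  # assumed worst-case read inaccuracy

for levels in (1440, 2880):      # quarter-degree vs eighth-degree packing
    step = full_scale / levels   # spacing between adjacent stored values
    margin = step / 2            # how far a reading can drift and still decode correctly
    verdict = "tolerated" if read_error < margin else "mis-read as a neighbouring value"
    print(f"{levels} levels: margin {margin:.6f}, error {read_error} is {verdict}")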

 

 

Complete side note, but: aren't neurons (in the brain) essentially digital, simply working in massive parallelism? I always thought it wasn't an analog issue, but one of distributed processing (brain) vs. fixed serial processing (CPU).

Posted

I'm a personal fan of analog over digital. I end up with corrupted files just as easily with a digital file, so that's one strike. Not to mention, if you take music into account, when recording you get a depth from an analog signal that you'll never get to hear with digital. I STILL use a tape backup; it holds a lot more space. I just have to worry about the eventual decay of an old-school tape.

Posted

My biggest problem is I don't see how we can have a good non-volatile storage method that'll be better than what we've currently got... the magnetic moments on hard disks are either up or down. :s

Posted
I'm a collector of vinyl records, but I'll certainly concede that for the most part CDs provide a higher quality representation of music than does vinyl.

 

However, one of the limiting factors of vinyl, its dynamic range (about 45 dB, roughly half that of CDs), has also limited how much music can be compressed (acoustically). Compression is normally intended to boost the loudness of quieter sounds, but the CD age has ushered in the "loudness wars," where the massive dynamic range of CDs is exploited to create the loudest music possible.
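(The rough math behind the "about half" figure: an n-bit linear PCM format has a theoretical dynamic range of roughly 20*log10(2^n), about 6 dB per bit, so 16-bit CD audio sits near 96 dB against vinyl's roughly 45 dB.)

import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of an n-bit linear PCM format (~6.02 dB per bit)."""
    return 20 * math.log10(2 ** bits)

print(f"CD (16-bit): {dynamic_range_db(16):.1f} dB")  # about 96 dB
print("Vinyl (figure quoted above): ~45 dB")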

 

Because vinyl's dynamic range is much lower, vinyl masters tend to be free of this sort of crap. There are other arguments for vinyl as well: tape degrades with time whereas vinyl degrades with use, so if you have a relatively clean copy of an old record made from (what was) a fairly recent copy of the master, it can sound better than a CD mastered from a copy of (a copy of a copy of, etc.) the master tape, which, thanks to the lower-quality materials available at the time, has undergone a lot of degradation. Sure, there's lots of software and equipment to restore whatever copies of the original master remain, and in some cases (MFSL releases) CDs can be mastered from the original master recording.

 

I generally don't buy audiophile arguments that CDs are "colored" or that vinyl is "warmer." They're clearly hearing something differently than I am. Of course, I'm the kind of person who likes an equalizer too...
