
Practical Singularity


bascule


Here are my thoughts on the two most practical ways to bring about the Singularity, building on ideas I've posted before. One route starts at Blue Brain and takes the intelligence-amplification (IA) route; the other starts at BlueGene and takes the strong-AI (SAI) route.

 

Scenario One: Commercial Wetware

 

I have long been convinced that creating a marketable brain-computer interface is not only inevitable, but also the single most practical means of achieving the Singularity. By wetware I mean a direct neural interface: a device which allows the brain to exchange abstract information directly with computer networks. Equipped with such a device, you would in effect have access to all human knowledge at the speed of thought.

Here is my roadmap to commercial wetware:

Step One: Decode the Neocortical Column

 

The largest barrier to constructing such a device is our insufficient knowledge of the brain’s operation. However, work over the past decade on the brain’s physiology, together with research in cognitive science and epistemology, has brought us ever closer to this goal. There seems to be a growing consensus on how the brain actually operates: the neocortical columns (NCCs) which I discussed in my previous blog entry start out as highly repetitive neural pattern analyzers which, over time, specialize in a myriad of different tasks. They appear to exchange information with each other via a “global workspace”, a role many regard as belonging to the thalamus:

“… the thalamus represents a hub from which any site in the cortex can communicate with any other such site or sites. … temporal coincidence of specific and non-specific thalamic activity generates the functional states that characterize human cognition.”

In this contemporary model, the thalamus holds the state of consciousness (i.e. what you’re presently thinking about) and the NCCs continuously transform that state (i.e. the NCCs do the actual “thinking”). Thus, in order to use a computer to inject information into and extract information from the cortex, what our present understanding lacks is a high-level model of the thalamocortical interface. When we can build a computational device that interfaces with the thalamus in nearly the same manner as a neocortical column, we will have our foot in the doorway of human consciousness.
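The global-workspace picture can be sketched in code. This is a toy Python illustration, not a neuroscience model: the class names, the three "modality" keys, and the increment rule are all invented for the sketch.

```python
class NeocorticalColumn:
    """Toy pattern analyzer: transforms whatever fragment of the
    global workspace matches its learned specialty."""
    def __init__(self, specialty):
        self.specialty = specialty

    def transform(self, workspace):
        # Only act on workspace entries this column has specialized in.
        if self.specialty in workspace:
            workspace[self.specialty] += 1  # stand-in for real processing
        return workspace


class Thalamus:
    """Toy 'global workspace': the shared state every column reads
    and writes, standing in for the thalamic hub."""
    def __init__(self):
        self.workspace = {"visual": 0, "auditory": 0, "motor": 0}

    def broadcast(self, columns):
        # Each cycle, every column gets a chance to transform the state.
        for column in columns:
            self.workspace = column.transform(self.workspace)
        return self.workspace


columns = [NeocorticalColumn(s) for s in ("visual", "auditory", "visual")]
thalamus = Thalamus()
state = thalamus.broadcast(columns)
print(state)  # {'visual': 2, 'auditory': 1, 'motor': 0}
```

The point of the sketch is the architecture, not the arithmetic: columns never talk to each other directly, only through the shared workspace.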

 

Step Two: An Artificial Neocortical Column

 

This is a feat researchers have already accomplished with the hippocampus, the part of the brain which encodes short-term memories for long-term storage. The researchers involved built a mathematical model of the hippocampus’s operation, then implemented it as an implantable device. The Blue Brain Project is presently working on a corresponding mathematical model of the NCC. If contemporary thinking on consciousness is correct, this would provide an optimal starting point for building a brain-computer interface.

 

Step Three: Classifiable Phenomenology

 

After constructing an artificial neocortical column, the problem becomes how to read phenomenological objects into and out of the thalamus. The inherent plasticity of our neocortical structure suggests that our phenomenological structure — the way we store, represent, and analyze information — develops after birth and is thus unique to every individual. So how can we ever hope to build a device that transfers information in and out of our consciousness?

The answer lies in classification algorithms, specifically Bayesian ones. These, combined with an understanding of how to query the brain’s phenomenological structure for related information, would allow a computer program to construct a representation of your own, personalized phenomenological structure outside of your brain, upon which other programs could act.
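As a concrete, entirely toy illustration of the Bayesian classification step: here is a minimal naive Bayes classifier. The "activity pattern" features and labels are invented placeholders standing in for whatever idiosyncratic signals a real interface would read and map onto shared concepts.

```python
from collections import Counter, defaultdict
import math

class NaiveBayes:
    """Minimal naive Bayes: the flavor of Bayesian classification
    algorithm that could map idiosyncratic neural activity patterns
    onto shared labels. All data here is toy data."""
    def __init__(self):
        self.class_counts = Counter()
        self.feature_counts = defaultdict(Counter)

    def train(self, samples):
        for features, label in samples:
            self.class_counts[label] += 1
            for f in features:
                self.feature_counts[label][f] += 1

    def classify(self, features):
        total = sum(self.class_counts.values())
        best, best_score = None, -math.inf
        for label, count in self.class_counts.items():
            # log prior + log likelihoods, with crude add-one smoothing
            score = math.log(count / total)
            denom = sum(self.feature_counts[label].values()) + len(features)
            for f in features:
                score += math.log((self.feature_counts[label][f] + 1) / denom)
            if score > best_score:
                best, best_score = label, score
        return best

# Toy "activity patterns" labeled with the concept they encode.
nb = NaiveBayes()
nb.train([({"c1", "c7"}, "dog"), ({"c1", "c9"}, "dog"), ({"c3", "c4"}, "car")])
print(nb.classify({"c1", "c8"}))  # dog
```

A novel pattern (`{"c1", "c8"}`) never seen during training still classifies sensibly, which is the property that matters if every brain encodes concepts differently.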

 

Step Four: Commercialization

 

For me at least, it isn’t hard to imagine that once we understand human consciousness at this level, all of the prerequisite research will be in place to begin commercializing wetware. In the majority of countries around the world, it would be very difficult for such a device to receive government approval. My prediction is that wetware will first be commercialized in Japan, or some similarly technologically progressive country willing to allow its development.

Wetware is the killer app of the Singularity. As long as its potential risks can be mitigated, it is something virtually everyone on the planet would want: ubiquitous access to all of human knowledge at the speed of thought, with an intelligent agent prefetching information it thinks you might be interested in. Wetware would represent the convergence of ideas that have been bouncing around for decades but never found a practical form (James Burke talked about the potential for “agent” software in the sequel to his Connections series that aired on TLC, for example). This means the potential return on investment for its development is enormous.

Because of its immense commercial potential, I think wetware will be the first Singularity technology we see. It not only requires substantially less understanding of the human brain than strong AI, but also carries the potential for immense financial gain for whoever pulls it off. Furthermore, there is no need to worry about a non-human force wresting control of the planet from humanity (unless you count posthumanity/transhumanity as such a force). For all of these reasons I think that wetware not only should, but will, come before strong AI.

 

Scenario Two: Biomodeling

 

Biomodeling is a meme I’ve been trying to spread for quite a while. I first heard of the idea in an article about Bill Gates some time ago, in which he suggested that if consciousness is an emergent effect of the human brain, then we need only grow a human body inside a computer in order to achieve strong AI. The exponential growth of computational power, combined with advances in our understanding of biochemistry, should make this feasible within a decade. The task can be broken down as follows:

 

Step One: A Lookup Table for Protein Folding

 

Protein folding is, hands down, the single most complex behavior of our cells. The way a protein folds is governed by the minimization of Gibbs free energy, and deriving a protein’s fold from its amino-acid sequence requires highly complex atom-by-atom simulations of the molecule. Fortunately, the most powerful supercomputers in the world are hard at work on this riddle. The BlueGene project is dedicated to studying the nature of protein folding and, hopefully, will produce a lookup table of the folding pathways and kinetics of all biological proteins.

With such a lookup table, an atom-by-atom simulation of each protein becomes unnecessary. The protein sequence can be looked up in a database, its folding kinetics and pathways loaded, and the fold simulated with a mere fraction of the computation that would be needed had that information not been precalculated.
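The lookup-table idea is essentially memoization. A hedged sketch, with all function names and folding "results" invented purely for illustration:

```python
# Hypothetical sketch of the lookup-table idea: precomputed folding
# results keyed by amino-acid sequence, so the expensive atom-by-atom
# simulation runs at most once per distinct protein.
FOLD_TABLE = {}  # sequence -> (pathway, kinetics); placeholder values here

def simulate_folding_atomistically(sequence):
    """Stand-in for the costly atom-by-atom simulation (the kind of
    computation a BlueGene-class machine would perform once, offline)."""
    return ("pathway-for-" + sequence, len(sequence) * 0.1)

def fold(sequence):
    # Cheap path: the answer was precalculated.
    if sequence in FOLD_TABLE:
        return FOLD_TABLE[sequence]
    # Expensive path, taken only once per sequence.
    result = simulate_folding_atomistically(sequence)
    FOLD_TABLE[sequence] = result
    return result

fold("MKTAYIAK")                       # expensive the first time
pathway, kinetics = fold("MKTAYIAK")   # instant table lookup thereafter
```

The design point is that the amortized cost per protein drops to a dictionary lookup once the table is populated, which is exactly the trade the text proposes: precomputation on supercomputers in exchange for cheap simulation everywhere else.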

 

Step Two: A Molecular Model of Cellular Behavior

 

A complete molecular model of cellular behavior is no small order. At present, protein folding appears to be the biggest stumbling block, but it is by no means the only cellular behavior that could be assisted by a lookup table. What is needed is a greatly simplified model of a cell operating at the molecular level (as opposed to the atomic level), in which any operation requiring atomic-level modeling is converted into a molecular-level operation assisted by lookup tables.

One of the biggest objections to producing such a model is that our knowledge of molecular biology is far too incomplete to even attempt it. I would contend, however, that the lack of such a model has perhaps been one of the factors keeping molecular biology from progressing at the rate of other model-assisted sciences (atmospheric science, for example). A molecular model of a cell will of course be highly inaccurate at first, but its inaccuracies will be progressively exposed and refined by studying model output, just like any scientific model.

 

Step Three: Biomodeling of Unicellular Eukaryotes

 

While I have no doubt that prokaryotes will be modeled as well, modeling eukaryotic protozoa will be a much more compelling goal, because it brings with it the potential of modeling multicellular eukaryotes, including all modern animals. Modeling unicellular eukaryotes is, in any case, a necessary step during which biomolecular models of cellular behavior will be improved and refined. At this point I would emphasize that such a model should be open source and collaboratively developed. Advances in organizing collective human effort over the Internet will hopefully shrink development times to the point that work on models of multicellular eukaryotes can begin shortly (within a few years, or less) after biomolecular models of protozoa begin to converge with their real-life counterparts.

 

Step Four: Biomodeling of Multicellular Eukaryotes

 

Modeling the more “primitive” (i.e. less complex) multicellular eukaryotes, such as sponges, should not be too enormous a leap from modeling protozoa. Modeling nerve cells, however, which depend heavily on electrochemical signaling, will bring a whole new set of challenges for biomolecular models. Making the leap to lifeforms with more complex nervous systems (creatures such as insects) will be quite difficult, but I hope it will attract public interest in biomodeling, specifically from open source programmers who will contribute to debugging and optimizing the model.

As the model progresses, computational power will be expanding exponentially, to the point that people will be able to run models of simple organisms on their home PCs or small clusters, especially given the current trend towards multicore processors. Biomodeling should be easily parallelizable, because it ultimately amounts to a model of cellular behavior repeated for each cell (which must, of course, account for the “boundary conditions” imposed by neighboring cells).
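The per-cell parallel structure can be sketched as follows. The update rule is a meaningless placeholder, and `ThreadPoolExecutor` stands in for whatever cluster scheduler a real biomodel would use; the point is that each cell's update depends only on its own state plus explicitly exchanged boundary values.

```python
from concurrent.futures import ThreadPoolExecutor

def update_cell(args):
    """One cell's state transition, depending only on its own state and
    its neighbors' boundary values -- the property that makes the whole
    model embarrassingly parallel. The rule itself is a placeholder."""
    state, left, right = args
    return (state + 0.5 * (left + right)) / 2.0

def step(states):
    # Gather each cell's boundary conditions first, then update every
    # cell independently; each task could run on a separate machine.
    tasks = [
        (states[i],
         states[i - 1] if i > 0 else 0.0,
         states[i + 1] if i < len(states) - 1 else 0.0)
        for i in range(len(states))
    ]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(update_cell, tasks))

states = [1.0, 0.0, 0.0, 1.0]
states = step(states)
print(states)  # [0.5, 0.25, 0.25, 0.5]
```

Because the boundary values are captured before any cell updates, the tasks have no ordering dependencies between them, which is what allows the same code to scale from a home PC to a supercomputer.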

We will certainly see protein catalysts producing a whole slew of new molecules whose behavior must be accounted for in the model, but the overall trend will be towards modeling more and more complex organisms: past insects towards fish, then amphibians and reptiles, then mammals, and with mammals the higher simians, and finally man.

 

Step Five: The First Artificial Human

 

At this point, the electrochemical properties of neurons will hopefully be modeled to a high degree of accuracy, and a complete molecular simulation of the human brain should be feasible.

The remarkable thing about biomodeling is how little model input it needs: a molecular description of a human egg cell, which still needs to be developed, and the genome of a human being, which has already been sequenced. From this, we can simulate the development of a fertilized human ovum, which can implant itself in a virtual uterus and grow inside a computer, replicating the complete human developmental process inside a supercomputer. One can safely assume that, at first, this will run substantially slower than realtime, but the model state can be checkpointed, dumped, and resumed on progressively more powerful supercomputers as they are constructed.
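The preserve-dump-resume scheme might look like this in miniature; the state dictionary, tick counter, and file name are all invented for the sketch.

```python
import pickle

# Hedged sketch of "preserve, dump, and resume" for a long-running
# simulation: the full model state is serialized at intervals, so the
# run can migrate to a faster machine and continue where it left off.
def checkpoint(state, path):
    with open(path, "wb") as f:
        pickle.dump(state, f)

def resume(path):
    with open(path, "rb") as f:
        return pickle.load(f)

def advance(state):
    # Placeholder for one simulated timestep of the developmental model.
    state["tick"] += 1
    return state

state = {"tick": 0, "cells": []}
for _ in range(1000):
    state = advance(state)
checkpoint(state, "embryo.ckpt")

# ...later, possibly on a more powerful supercomputer:
state = resume("embryo.ckpt")
print(state["tick"])  # 1000
```

So long as the serialized state is complete and the update rule is deterministic, the resumed run is indistinguishable from one that never stopped, which is what lets a multi-year simulation outlive any single machine.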

What we will be left with, in the end, is a complete computer simulation of a human being, accurate at the molecular level. Assuming consciousness is truly an emergent manifestation of material processes, we will have, at this point, developed strong AI. We will have the most accurate computer model of the human brain possibly conceivable, because it will have been grown from human DNA in exactly the same manner as a real human being.

Some sort of environmental simulation, mimicking that to which a human baby is exposed, will be necessary to ensure that consciousness develops in the virtual human baby in a similar manner to a human child. However, these inputs can be scripted, computer generated constructions, perhaps employing weak AI to select from a set of pre-recorded human behaviors in response to the behavior of an artificial child. It’s important to note that through this approach, the simulation of the child need not be realtime at this point, although obviously the artificial child would benefit from a realtime simulation in which it could interact in a virtual reality environment with real humans.

I consider this approach to be substantially more practical than Kurzweil’s suggestion in The Singularity is Near of using nanomachines to document the behavior of an active human brain. Not only is the “test subject” in this case digital, therefore removing the majority of ethical considerations of injecting a real human brain with potentially deleterious nanorobots, but it provides complete access to the brain’s behavior in a way that wouldn’t be possible with nanorobots alone.

 

Step Six: Towards A Mathematical Model of Consciousness

 

There have been numerous objections to a computational model of consciousness. At an intuitive level, many feel it invalidates the possibility of a soul, an idea many find disquieting. Brilliant people, among them the mathematician Roger Penrose, have argued that consciousness must be inherently non-computational, as Penrose did in his book Shadows of the Mind. While the jury is still out, my feeling is that consciousness will eventually prove computable. At any rate, if we do not see consciousness manifest in a computer model of a human being, it will mean one of two things: either consciousness is fundamentally non-computable, or the biomolecular model is too inaccurate to capture the complexities of the human brain. While the former is irresolvable, fixing the latter is only a matter of time.

If we do see consciousness manifest in our artificial human, we can devise classification algorithms to break down the specific operation of various parts of the brain, generalizing it into a substantially simpler mathematical model. Once we have done this, we can convert that model into a computer program and apply the full set of program-optimization tools available at the time. We can look for bottlenecks in human consciousness and begin eliminating them. Eventually, the simulated consciousness, free of the bottlenecks of our own minds, will transcend human (and eventually posthuman) thought power, at which point it can become self-improving. At that point, humanity will have developed seed AI, and the Singularity will truly have taken place.

 

Shamelessly syndicated
