Posts: 8390
Everything posted by bascule
-
Yeah, I posted a thread on it not too long ago. Lee Smolin is awesome and I have to thank Martin for pointing me at him, but Martin hasn't seemed to be around very much lately.
-
Any way to induce hallucination without drugs?
bascule replied to hw help's topic in Psychiatry and Psychology
Salvia will throw a monkey wrench into your consciousness for approximately three minutes. It operates on the kappa opioid system, which was extensively researched by pharmaceutical companies because it had the same pain-relieving effects as the mu opioid system (on which morphine, heroin, etc. operate) but without any of the addictive qualities. However, test subjects reported, well, the intense weirdness of having the system stimulated that salvia users experience, and due to the uncomfortable effects the research was terminated. Be aware that those three minutes will be extremely intense. I don't think it can really be recommended as your first entheogenic experience. -
Jack Chick proposes an alternative to the Strong Force
bascule replied to bascule's topic in Speculations
How about Bongwater -
I want to build a website, but I dont want to spend any money
bascule replied to In My Memory's topic in Computer Science
Here's a free webhost: http://memebot.com/ -
It came after a long period of explosive brain growth in humans, which has been theorized to have been brought about in a number of different ways (and likely it did involve several factors working together), such as sexual selection, coevolution with the thumb and the leap to bipedality, and the benefits of being able to work effectively in groups which remain organized despite being geographically separated.

How can you say that? Thanks to technology, no animal on the planet can take on a properly prepared human being. We can survive in any climate. We can repair injuries which would've killed any other animal. We can live for decades upon decades, and our population growth continues to be explosive. And we haven't had to speciate in order to do any of these things.

The ability to exchange abstract information via spoken communication necessitates the capacity for abstract thought.

The problem with explaining abiogenesis is the lack of anything resembling fossil evidence. That doesn't mean we haven't created all sorts of thoroughly plausible hypotheses for explaining it, just that we have no evidence for which one is true.
-
Yep. http://www.scienceforums.net/forums/showthread.php?t=15894
-
I want to build a website, but I dont want to spend any money
bascule replied to In My Memory's topic in Computer Science
Well then, have you considered using Wikicities? -
Yeah, crazy... http://www.cnn.com/2006/WORLD/asiapcf/01/19/japan.jellyfish.reut/index.html
-
No? The idea of emergence is that the collective behavior of a system creates new properties which are not present in the individual parts.

You're just definition mincing. The terms are somewhat synonymous, but "emergence" is an explanation for new properties (e.g. property dualism).

No, why would they, any more than the virtual world of a video game which emerges from digital computations within a computer?

My apologies for not responding to everything people say to me...
-
I want to build a website, but I dont want to spend any money
bascule replied to In My Memory's topic in Computer Science
Have you considered just using a service like Blogger, LiveJournal, or Xanga? All of these have communities already established that can help get your writing attention from people with mutual interests. Plus, no software to install. -
It takes me about a month and a half to read any lengthy "Science for laymen!" book, such as The Ancestor's Tale, or Fearful Symmetry...
-
Oh really? It seems like conservatives see a liberal bias and liberals see a conservative bias. FAIR at least supports their views with a continually growing list of case examples, one that certainly vastly outweighs Jim's single example. However, I see FAIR as an extremely liberal group that just sees the bias it wants to see, much in the same way I feel that conservatives shouting "media bias" see only the bias they want to see. I'll happily concede that liberals dominate television. Anything else, I just don't see it. When a conservative tries to group everything under the auspices of "the media" (or even "the mass media"), then I think they're just being ridiculous.
-
You're describing Singularity "Everyone's searching for something they say... but I get my kicks on the way" -- Pink Floyd / The Gold It's In The...
-
Does talk radio have a liberal bias?
-
There is a general tendency in every avenue of human communication towards a liberal bias? Zuh? Does talk radio have a liberal bias? Do Podcasts have a liberal bias? Do Zines have a liberal bias? Do paintings have a liberal bias? Does music have a liberal bias? Do blogs have a liberal bias? The term "media bias" is so generic as to be meaningless.
-
One would like to think the Paris Commune was effective for the brief time it lasted...
-
Here are my thoughts on the two most practical ways to bring about Singularity, building on ideas I've posted before. One route starts at BlueGene and takes the SAI route, the other starts at BlueBrain and takes the IA route.

Scenario One: Commercial Wetware

I have long been convinced that creating a marketable brain-computer interface is not only an inevitability, but is also the single most practical means to achieving Singularity. By wetware I am talking about a direct neural interface: a device which allows the brain to exchange abstract information directly with computer networks. Equipped with such a device, you would in effect have access to all human knowledge at the speed of thought. Here is my roadmap to commercial wetware:

Step One: Decode the Neocortical Column

The largest barrier to constructing such a device is sufficient knowledge of the brain's operation. However, work over the past decade towards understanding the brain's physiology and research into cognitive science and epistemology has brought us ever closer to this goal. There seems to be a growing consensus as to how the brain actually operates, namely that the Neocortical Columns (NCCs) which I discussed in my previous blog entry start out as highly repetitive neural pattern analyzers which, over time, begin specializing in a myriad of different tasks. They appear to exchange information with each other via a "global workspace", regarded by many to be the role of the thalamus (Llinas et al, 1998): "… the thalamus represents a hub from which any site in the cortex can communicate with any other such site or sites. … temporal coincidence of specific and non-specific thalamic activity generates the functional states that characterize human cognition." In this contemporary model, the thalamus represents the state holder of consciousness (i.e. what you're presently thinking about) and the NCCs are continuously transforming this state (i.e. the NCCs do the actual "thinking").
Thus it would seem that in order to use a computer to inject information into and process information from the cortex, what is lacking in our present understanding is a high-level understanding of the thalamocortical interface. When we can build a computational device that interfaces with the thalamus in a nearly identical manner to a neocortical column, we will have our foot in the doorway of human consciousness.

Step Two: An Artificial Neocortical Column

This is a feat researchers have already successfully accomplished with the hippocampus, the part of the brain which encodes short-term memories for long-term storage. The researchers involved built a mathematical model of the hippocampus's operation, then implemented it in the form of an implantable device. The Blue Brain Project is presently working on constructing a mathematical model of the NCC. If contemporary thinking on consciousness is correct, this would provide an optimal place to begin building a brain-computer interface.

Step Three: Classifiable Phenomenology

After constructing an artificial neocortical column, the problem becomes how to read phenomenological objects into and out of our thalamus. The inherent plasticity of our neocortical structure would suggest that our phenomenological structure, the way we store, represent, and analyze information, develops after our birth and would thus be unique to every individual. So how can we ever hope to build a device which can transfer information in and out of our consciousness? The answer lies in classification algorithms, specifically Bayesian algorithms. These, combined with an understanding of how to query your brain's phenomenological structure for related information, would allow a computer program to construct a representation of your own, personalized phenomenological structure outside of your brain, upon which computer programs could act.
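The Bayesian classification idea in Step Three can be illustrated with a toy naive Bayes classifier. This is purely a sketch: the feature strings standing in for neural activity patterns, the labels, and the whole encoding are invented for illustration, and nothing here reflects how a real thalamocortical interface would work.

```python
from collections import defaultdict
import math

class NaiveBayes:
    """Toy naive Bayes classifier: learns which features predict which label."""

    def __init__(self):
        self.label_counts = defaultdict(int)
        self.feature_counts = defaultdict(lambda: defaultdict(int))
        self.total = 0

    def train(self, features, label):
        # Record one observation: a set of features seen alongside a label.
        self.label_counts[label] += 1
        self.total += 1
        for f in features:
            self.feature_counts[label][f] += 1

    def classify(self, features):
        # Pick the label maximizing log P(label) + sum log P(feature | label),
        # with add-one smoothing so unseen features don't zero out a label.
        best, best_score = None, float("-inf")
        for label, count in self.label_counts.items():
            score = math.log(count / self.total)
            for f in features:
                score += math.log((self.feature_counts[label][f] + 1) / (count + 2))
            if score > best_score:
                best, best_score = label, score
        return best
```

Naive Bayes assumes the features are independent given the label, which would almost certainly be false for real neural data, but it is the simplest member of the Bayesian family of algorithms the post invokes.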
Step Four: Commercialization

For me at least, it really isn't hard to imagine that once we understand human consciousness at that level, all of the prerequisite research will be in place to begin commercialization of wetware. In the majority of countries around the world, it would be very difficult to receive government approval for such a device. My prediction is that wetware will be commercialized in Japan, or some similarly technologically progressive country which would allow its development.

Wetware is the killer app of the Singularity. As long as its potential risks can be mitigated, it is something virtually everyone on the planet would want. It's difficult to overstate the appeal of ubiquitous access to all of human knowledge at the speed of thought, with an intelligent agent prefetching information it thinks you might be interested in. Wetware will represent a convergence of ideas that have been bouncing around for decades but never found a practical form (James Burke talked about the potential for "agent" software in the sequel to his Connections series which aired on TLC, for example). This means the potential return on investment for its development is enormous.

Because of its immense commercial potential, I think wetware will be the first Singularity technology we shall see. It not only requires substantially less understanding of the human brain than strong AI, but it also carries with it the potential for immense financial gain for whoever can pull it off. Furthermore, there is no need to worry about a non-human force wresting control of the planet from humanity (unless you would count posthumanity/transhumanity as such a force). For all of these reasons I think that wetware not only should, but will, come before strong AI.

Scenario Two: Biomodeling

Biomodeling is a meme I've been attempting to spread for quite a while.
The first I ever heard of the idea was in an article about Bill Gates quite some time ago, in which he suggested that if consciousness is an emergent effect of the human brain, then we need only grow the human body inside of a computer in order to achieve strong AI. The exponential growth of computational power, combined with advances in our understanding of biochemistry, will facilitate doing just this within a decade's timespan. The task can be broken down as follows:

Step One: A Lookup Table for Protein Folding

Protein folding is, hands down, the single most complex behavior of our cells. The way proteins fold depends upon a property called Gibbs free energy, and deriving how a protein folds from its amino acid sequence requires highly complex atom-by-atom simulations of protein molecules. Fortunately, the most powerful supercomputers in the world are hard at work solving this riddle. The BlueGene project is dedicated to the task of studying the nature of protein folding and, hopefully, will produce a lookup table of the folding pathways and kinetics of all of life's proteins. With a lookup table, doing an atom-by-atom simulation of each protein is completely unnecessary. The protein sequence can be looked up in a database, the folding kinetics and pathways loaded, and the protein folding simulated with a mere fraction of the computation that would be necessary if that information had not been precalculated.

Step Two: A Molecular Model of Cellular Behavior

A complete molecular model of cellular behavior is no small order. At present, protein folding appears to be the biggest stumbling block. However, it is by no means the only cellular behavior that could be assisted by a lookup table. What is needed is a highly simplified model of a cell operating at the molecular level (as opposed to the atomic level). Any operations requiring atomic-level modeling must be converted to molecular-level operations assisted by lookup tables.
One of the biggest objections to producing such a model is that our knowledge of molecular biology is far too incomplete to even attempt it. However, I would contend that the lack of such a model has, perhaps, been one of the factors which has kept molecular biology from progressing at the rate other model-assisted sciences (for example, atmospheric science) are able to. A molecular model of a cell will of course be highly inaccurate at first, but the inaccuracies will be progressively exposed and refined by studying model output, just like any scientific model.

Step Three: Biomodeling of Unicellular Eukaryotes

While I have no doubt that prokaryotes will be modeled as well, the modeling of eukaryotic protozoa will be a much more compelling goal, because it brings with it the potential of modeling multicellular eukaryotes such as all modern animals. Modeling of unicellular eukaryotes is, however, a necessary step in which biomolecular models of cellular behavior will be improved and refined. At this point I would like to emphasize that such a model should be open source and therefore collaboratively developed. Advances in the organization of collective human behavior over the Internet will hopefully decrease development times to the point that work on models of multicellular eukaryotes can begin shortly (i.e. within a few years' timespan, or less) after biomolecular models of protozoa begin to become accurate to their real-life counterparts.

Step Four: Biomodeling of Multicellular Eukaryotes

Modeling of the more "primitive" (i.e. less complex) multicellular eukaryotes, such as sponges, should not be too enormous a leap from the modeling of protozoa. However, modeling of nerve cells, which are highly dependent upon electromagnetic reactions, will bring about a whole new set of challenges for biomolecular models. Making the leap to higher-level lifeforms with more complex nervous systems (by which I mean creatures such as insects) will be quite a difficult one.
However, I hope it will bring with it a degree of interest in biomodeling from the general public, specifically the interest of open source programmers who will contribute to debugging and optimizing the model. As the model progresses, computational power will be expanding exponentially, to the point that people will be able to run models of simple organisms on their home PCs or small clusters, especially with the current trend towards multicore processors. Biomodeling should be easily parallelizable, because it will ultimately end up as a model of cellular behavior which is repeated for each cell (and which must, of course, account for the "boundary conditions" arising with neighbor cells). We will certainly see protein-based catalysts producing a whole new slew of molecules whose behavior must be accounted for in the model, but the overall trend will be towards modeling more and more complex organisms: past insects towards fish, then past fish towards amphibians and reptiles, to mammals, and with mammals, the higher simians, and finally man.

Step Five: The First Artificial Human

At this point, the electromagnetic properties of neurons will hopefully be modeled at a high degree of accuracy, and a complete molecular simulation of the human brain should be feasible. The remarkable thing about biomodeling will be the small amount of model input that is necessary: all that is needed is a molecular description of a human egg cell, which still needs to be developed, and the genome of a human being, which has already been sequenced. From this, we can simulate the development of a fertilized human ovum, which can implant itself in a virtual uterus and grow inside a computer, replicating the complete human developmental process inside of a supercomputer.
One can safely assume that, at first, this will happen substantially slower than realtime, but the model history can be preserved, dumped, and executed on progressively more powerful supercomputers as they are constructed. What we will be left with, in the end, is a complete computer simulation of a human being, accurate at the molecular level. Assuming consciousness is truly an emergent manifestation of material processes, we will have, at this point, developed strong AI. We will have the most accurate computer model of the human brain conceivable, because it will have been grown from human DNA in exactly the same manner as a real human being.

Some sort of environmental simulation, mimicking that to which a human baby is exposed, will be necessary to ensure that consciousness develops in the virtual human baby in a similar manner to a human child. However, these inputs can be scripted, computer-generated constructions, perhaps employing weak AI to select from a set of pre-recorded human behaviors in response to the behavior of the artificial child. It's important to note that with this approach, the simulation of the child need not be realtime at this point, although obviously the artificial child would benefit from a realtime simulation in which it could interact in a virtual reality environment with real humans.

I consider this approach to be substantially more practical than Kurzweil's suggestion in The Singularity is Near of using nanomachines to document the behavior of an active human brain. Not only is the "test subject" in this case digital, removing the majority of the ethical considerations of injecting a real human brain with potentially deleterious nanorobots, but it provides complete access to the brain's behavior in a way that wouldn't be possible with nanorobots alone.

Step Six: Towards a Mathematical Model of Consciousness

There have been numerous objections to a computational model of consciousness.
At an intuitive level many feel it invalidates the possibility of a soul, an idea many find disquieting. Many brilliant people, among them the mathematician Roger Penrose, have argued that consciousness must be inherently non-computational, as Penrose did in his book Shadows of the Mind. While the jury is still out on the computability of consciousness, my feeling is that consciousness will eventually be shown to be computable.

At any rate, if we do not see a manifestation of consciousness in a computer model of a human being, it will mean one of two things: either consciousness is fundamentally non-computable, or the biomolecular model is too inaccurate to capture the complexities of the human brain. While the former is irresolvable, fixing the latter is only a matter of time. If we do see a manifestation of consciousness in our artificial human, we can begin devising classification algorithms to break down the specific operation of various parts of the brain, generalizing it into a substantially simpler mathematical model. Once we have done this, we can convert this mathematical model into a computer program and begin applying the full set of program optimization tools available at the time. We can look for bottlenecks in human consciousness and begin eliminating them. Eventually, the simulated consciousness, free of the bottlenecks of our own minds, will transcend human (and eventually posthuman) thought power, at which point it can become self-improving. At this point, humanity will have developed seed AI, and the Singularity will truly have taken place.

Shamelessly syndicated
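The per-cell parallelism the biomodeling scenario relies on (one update rule applied to every cell, with boundary conditions supplied by its neighbors) can be sketched as a synchronous grid update. The averaging rule below is purely illustrative; a real biomolecular model would be enormously more complex.

```python
def step(grid: list[list[float]]) -> list[list[float]]:
    """One synchronous update: each cell moves toward the mean of itself
    and its orthogonal neighbors. Because every new value depends only on
    the previous grid, rows can be farmed out to separate cores."""
    rows, cols = len(grid), len(grid[0])
    new = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Neighbor values are the "boundary conditions" for this cell;
            # edge cells simply have fewer neighbors.
            neighbors = [grid[nr][nc]
                         for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                         if 0 <= nr < rows and 0 <= nc < cols]
            new[r][c] = (grid[r][c] + sum(neighbors)) / (1 + len(neighbors))
    return new
```

Writing into a fresh grid rather than updating in place is what makes the computation embarrassingly parallel: no cell's update reads another cell's partially computed state.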
-
Your whole argument is a composition fallacy. The word "media", by definition, represents all avenues of communication between all humans on the planet. You're pointing at certain people involved in a specific medium of communication and trying to say that because they exhibit what you consider to be a political bias, it's somehow indicative of all humans and all communication media taken as a whole. The media are only biased in one way, and that's "what humans believe." It's no different than me saying "There's a conservative media bias. Just look at Hannity, O'Reilly, and Savage!"
-
The photo for this article is priceless. http://www.cnn.com/2006/US/01/25/army.study.ap/index.html?section=cnn_topstories
-
Obviously the anti-Bushites are going to latch onto anything they think makes Bush look bad. But when you try to argue that because of this, what they've latched onto is inherently wrong, you're committing the ad hominem fallacy. I have not seen any specifics out of Bush/Gonzales regarding how obtaining warrants had any sort of timeliness issues. Personally, I don't think these issues exist at all. Have a look at this: http://www.law.cornell.edu/uscode/html/uscode50/usc_sec_50_00001805----000-.html In a situation where timeliness is an issue, they don't have to get a warrant. They merely have to tell the judge what they are doing, and let the judge sort through it later. This still provides judicial oversight and means that someone is watching the watchers. If you're looking for some reasoning behind attempting to circumvent the FISA court system, I'd say it lies here: http://www.upi.com/NewsTrack/view.php?StoryID=20051226-122526-7310r Is this a necessary consequence of the "war on terror," or the Bush Administration abusing its power? Well, that's for the Supreme Court to decide...
-
Moore's Law is one particular observation of a trend of exponentially increasing progress which comes about through the additive feedback of improving technologies in all sorts of fields and disciplines. While in Moore's heyday an IC's die was painstakingly laid out by hand, today the layout is done by computers themselves: the CPU is written in a hardware description language much like a computer program, and computer algorithms calculate the optimal layout of the components from that description. This software is constantly improving. CPU design concepts are constantly improving. The computers used to design new CPUs are constantly improving. The materials used to construct new CPUs are constantly improving. All of these things feed back off of each other and actually increase the rate of exponential change. Kurzweil covers hundreds of these trends and generalizes them into something he named The Law of Accelerating Returns. See the chart on Wikipedia.
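The compounding described above is just steady doubling. A tiny sketch, assuming (hypothetically) a fixed two-year doubling period, though the real cadence has varied over the decades:

```python
def capacity_multiplier(years: float, doubling_period: float = 2.0) -> float:
    """Factor by which capacity grows after `years` of steady doubling,
    i.e. 2 ** (years / doubling_period)."""
    return 2.0 ** (years / doubling_period)
```

Under this assumption, twenty years of doubling every two years yields a factor of 2**10 = 1024; the feedback loops described above are what Kurzweil argues shorten the doubling period itself over time.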