Everything posted by Delta1212
-
Tesla was a brilliant inventor who was not at all good at marketing himself, or at business in general, and he had a habit of chasing ideas very far down their rabbit holes. That persistence led him to discover a great many important things, but it also led him astray more than once, and based on things I've read, I would not be at all surprised to find that he suffered from some sort of mental illness late in his life. Tesla was a truly brilliant "mad scientist" whose life ultimately ended in a very disrupted and unsatisfactory state given his accomplishments, because he was failed by, even taken advantage of by, the system in which he lived. That is all entirely true, and if you throw in a few of his wilder ideas, it's a story that is fertile ground for conspiracy theories, and for even more outlandish speculation about what he could have accomplished, from people who themselves feel like they've had a creative stroke of (impractical) genius but aren't being dealt with fairly by the system. He looks, essentially, like what a crackpot imagines himself to be in every respect, except that he had quite visible success in many areas and is taken seriously for real accomplishments.
-
There are plenty of ways to resolve the grandfather paradox. Most notably: everything that has happened is simply what happened. If your trip back in time is already part of history, then you clearly didn't kill your grandfather, because you exist in order to go back in time and make the attempt. See the Novikov self-consistency principle.
-
Why are we humans and not robots?
Delta1212 replied to jimmydasaint's topic in Evolution, Morphology and Exobiology
Two things here. One, you're really talking about energy-saving measures on the human side. That's not an advantage in potential that humans have; it's effectively a low-power mode. The human body/brain does not have an infinite ability to expand to handle ever greater challenges. There are limits that you will eventually hit. Looking at it from this direction, we do not start at our maximum potential, whereas a computer does, and the computer stays there without constant use, where we do not.

Second, neural networks are effectively designed to rewrite their own behavior based on experience and/or trial-and-error experimentation in pursuit of a desired outcome. They can and will adapt to changing circumstances and problems if you wish to apply them in such a manner. The AI Google now uses for Translate, in fact, uses pre-existing knowledge of the languages it has already learned to translate in order to learn new translations. It stores semantic information about words that have related meanings and can therefore very quickly learn to translate between two languages it has already "learned" by translating from something else, even before it has been given any examples. In other words, if it knows English <> Japanese and Japanese <> Korean, it can figure out English <> Korean without being taught English <> Korean, although additional input will help refine it. And it does so without simply running the text through Japanese like the old game of chaining online translations.
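As a very rough sketch of that shared-representation idea: the vocabulary and vectors below are invented for the example, and the real system is a neural sequence-to-sequence model rather than a lookup table, but it shows how a common semantic space can bridge a language pair that was never directly taught.

```python
import numpy as np

# Hypothetical shared semantic space: words with the same meaning end up
# near each other regardless of language. These vectors are made up.
embeddings = {
    ("en", "cat"):     np.array([0.90, 0.10, 0.00]),
    ("ja", "neko"):    np.array([0.88, 0.12, 0.02]),
    ("ko", "goyangi"): np.array([0.91, 0.09, 0.01]),
    ("en", "water"):   np.array([0.10, 0.90, 0.05]),
    ("ja", "mizu"):    np.array([0.12, 0.88, 0.03]),
    ("ko", "mul"):     np.array([0.09, 0.92, 0.04]),
}

def translate(word, src, tgt):
    """Translate by nearest neighbour in the shared space."""
    query = embeddings[(src, word)]
    candidates = {w: v for (lang, w), v in embeddings.items() if lang == tgt}
    return min(candidates, key=lambda w: np.linalg.norm(candidates[w] - query))

# English -> Korean works even though no English/Korean pair was used to
# place the words: the shared representation is the bridge.
print(translate("cat", "en", "ko"))    # goyangi
print(translate("water", "en", "ko"))  # mul
```
-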
What about single timeline time travel is incomprehensible?
-
Why the bloody hell do I have to study this.....
Delta1212 replied to DanTrentfield's topic in The Lounge
The more background and context you have for something, the more nuanced an understanding you are going to be able to have of it, and the better you will be able to appreciate it. Animal Farm is a fine story on its own, but you lose a heck of a lot if you read it without ever having heard of the Russian Revolution.
-
Why are we humans and not robots?
Delta1212 replied to jimmydasaint's topic in Evolution, Morphology and Exobiology
So?
-
The volume of a sphere obviously has an edge, which is the surface. The surface of a sphere does not have an edge, however. There is no "edge of the Earth." Pick a direction to travel, go in a straight line along the surface, and eventually you'll just wind up back where you started. It is easier to conceptualize the way a 2D surface can curve back around on itself, because we can picture it bending around in three dimensions to form a sphere or a torus. It is possible for a 3D volume/universe to curve back around on itself as well, but this is much harder, or may even be impossible, to properly visualize, because the geometry is a bit more complex than we're used to dealing with. Just think of it as the ability to travel in a straight line and wind up back where you started, and that should help you grasp it a bit better.
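If it helps, here is a toy numerical version of that "straight line that comes back to its start" picture, using a flat 3-torus where each coordinate simply wraps around. The geometry of a real closed universe need not be this simple; this is only a sketch of the wraparound idea, with an arbitrary box size.

```python
# Toy "closed" 3D space: a flat 3-torus of side L, where each coordinate
# wraps around modulo L. Travelling in a straight line long enough brings
# you back to where you started, and there is no edge anywhere.
L = 10.0  # size of the space in each direction (arbitrary units)

def step(position, velocity, dt=1.0):
    """Move in a straight line for time dt, wrapping each coordinate modulo L."""
    return tuple((p + v * dt) % L for p, v in zip(position, velocity))

pos = (0.0, 0.0, 0.0)
vel = (1.0, 0.0, 0.0)  # pick a direction and never turn

for _ in range(10):
    pos = step(pos, vel)

print(pos)  # (0.0, 0.0, 0.0) -- back at the start without ever hitting an edge
```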
-
I have yet to see a definition of free will that does not have some form of determinism as a prerequisite. The opposite of determinism is not "free will." It is "randomness." For "you" to have any input in your own decisions, they can't be random. They must have a cause or causes, because you and your decision-making process need to be that cause or at least a part of it. The problem people stumble over is the weird belief that if we understand our own decision-making process on a mechanical level, it means we're not really making any decisions, which is silly.
-
Research is rather pointless if it is not recorded and shared. That is why it is done. It may be fun to do for its own sake, but nobody is going to pay a researcher to have fun learning something and then keep the knowledge to themselves. It's also important to note that the number of people who read a piece of research is less important than who reads it, and the fact that the report is available if anyone has a specific need of it. Any one report that someone writes may not wind up as a source of water-cooler talk in offices around the country, but those big ideas that do break through into the public consciousness rely on the collection of data in smaller bits here and there in order to be developed. The foundation of a building is often invisible to the public, but the people who spent time building it know it is there, and the structure would collapse without it.
-
Mutation rate controller?
Delta1212 replied to SStell's topic in Evolution, Morphology and Exobiology
Depends on what you mean by "evolve." A mutation that generates extra limbs isn't especially unlikely in the grand scheme of things. It happens fairly easily, as you don't have to "reevolve" an entire body part. You just need a mutation that causes the body to build a part it already has the template for an extra time. The issue, of course, is that most body plans are not especially accommodating of extra parts that are more likely to get in the way of how things "normally" work than do anything especially helpful, which is why you don't see extra limbs rapidly spreading through species all the time. In the event that an individual lucks into an extra part that helps more than it hurts, that could spread through the population and eventually become a fixed feature in the genome. So if by evolve you mean just the initial mutation that eventually spreads, then yes, it can happen very rapidly because duplicating existing structures isn't especially difficult mutation-wise. Otherwise, it still takes quite a while for any new feature to spread throughout a population.
-
On the Question of Brain Activity as a Physics Problem?
Delta1212 replied to Perfict_Lightning's topic in Speculations
If you're interested in actual mechanics, I would highly recommend looking into the structure and underlying mathematics of neural networks. To make a long story short, there is not one first neuron and you cannot fire the neurons one at a time sequentially, because it is the pattern and timing of which neurons are active and which aren't at any given time that give rise to the complex pathways from which our responses and interpretations of stimuli are derived.
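For a very rough illustration of that point, here is a toy layer of artificial neurons; the weights are made up for the example. Two input patterns with the same total activity, just distributed differently across the inputs, drive the downstream units in completely different ways, so no single neuron taken in isolation, or "first," determines the response.

```python
# Toy artificial layer: the response depends on which inputs are active
# together, not on any one input considered on its own or fired in order.
import numpy as np

def layer(inputs, weights):
    """Weighted sum of the whole input pattern, then a threshold nonlinearity."""
    return (weights @ inputs > 0.5).astype(int)

# Made-up weights for two downstream neurons reading four inputs.
weights = np.array([
    [ 0.6,  0.6, -0.4, -0.4],  # neuron A responds to inputs 0 and 1 together
    [-0.4, -0.4,  0.6,  0.6],  # neuron B responds to inputs 2 and 3 together
])

pattern_1 = np.array([1, 1, 0, 0])  # same total activity...
pattern_2 = np.array([0, 0, 1, 1])  # ...different pattern

print(layer(pattern_1, weights))  # [1 0] -> neuron A responds
print(layer(pattern_2, weights))  # [0 1] -> neuron B responds
```
-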
Prepositions are, for example: about, above, across, after, against, along, among, around, at, before, below, by, and so on.
-
Google has already implemented a type of dream state for its neural networks, as it aids in learning. In any case, this discussion seems to be based mostly on the sci-fi stereotype of the highly analytical artificial intelligence that has no understanding of human feelings. It's important to remember that this archetypal character is not actually based on any real artificial intelligence, and there is no particular reason to expect that an advanced AI would behave remotely in the way that is typically depicted in fiction along these lines.

Our most advanced AIs these days are trained, rather than programmed. How they behave is heavily shaped by their experiences in training. They are, effectively, a bunch of circuitry and code that is built to be able to learn, and what they do depends on what they are taught. One of the reasons that Google has been such a front-runner in this area is that it has the resources both to build large amounts of optimized hardware for the purpose and, just as importantly if not more so, access to truly massive amounts of data for training.

Artificial neural networks can be trained to do image recognition, voice recognition, and advanced translation, and in one of Google's projects, a network learned to recognize cats entirely unprompted after watching hours of YouTube videos. This is where we are right now with AI. Expect that, if we do eventually obtain the necessary processing power and algorithmic optimization to pull it off, building a truly advanced AI will be a bit more like training a pet or teaching a child than most science fiction will have led you to believe. (With the caveat that it will not be precisely identical, and there is a fair degree of math underlying exactly how these systems operate.)
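To make the "trained, rather than programmed" point concrete, here is a minimal, made-up sketch: a single artificial neuron that starts out knowing nothing about its task and ends up computing a logical AND purely because of the examples it is shown. Real systems like the ones described above are vastly larger and more sophisticated, but the principle is the same: the behavior comes from the training data, not from hand-written rules.

```python
# A tiny perceptron: its behaviour is never written out as rules, it is
# learned from labelled examples (here, the logical AND of two inputs).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]  # the neuron starts with no knowledge of the task
bias = 0.0

def predict(x):
    """Fire (output 1) if the weighted sum of the inputs exceeds zero."""
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

# Training: nudge the weights a little whenever the prediction is wrong.
for _ in range(100):
    for x, target in examples:
        error = target - predict(x)
        weights[0] += 0.1 * error * x[0]
        weights[1] += 0.1 * error * x[1]
        bias += 0.1 * error

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1] after training
```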
-
Mutation rate controller?
Delta1212 replied to SStell's topic in Evolution, Morphology and Exobiology
I think I remember something about differential effectiveness of error correction mechanisms and a degree of optimization to increase effective "evolvability" but that was a while ago, and I don't remember the details well enough to know whether I'm getting that right. I'll dig around later and see if I can find anything.
-
I would say that the similarity between a country and a website begins and ends at the fact that they are both organized and run by people. Any generalisations you can make about similarities between the two, you can make about pretty much anything organized and run by people: clubs, households, libraries. If everything tastes like chicken, perhaps chicken isn't the best comparison to draw in order to inform someone of what a thing really tastes like. Generally, you want to draw notice to distinguishing features. Otherwise you could say "Benches are pretty much like animals. After all, they are both made up of atoms and they are usually opaque in the visible spectrum of electromagnetic radiation."
-
He gave an order to two people who had been intentionally created, by him, to be incapable of understanding the difference between right and wrong, and then punished them for predictably failing to listen to him.
-
How cold does it get at night where you are?
-
The question is: anxious about what?
-
Technically, he resigned before he could be impeached.
-
As alluded to above, the answer is that the orbits don't really remain unchanged. We're just dealing with unimaginably large distances, so any wobbling or drifting is going to take quite a long time to become really noticeable. Even the Moon is currently drifting away from the Earth at a rate of a few centimeters per year. Nothing in the heavens is actually permanent or stable. It just seems that way to us because our lives are so brief in comparison to the timescales involved. To the worker bee living its months-long life in the nest hanging from the tree in your yard, you have always lived in your house and you always will: an eternal, unchanging fixture of the universe.
-
It actually has not yet been proven that this algorithm always collapses to 1 for every starting number. It's known as the Collatz Conjecture, and while it is generally believed to be true, proving it has turned out to be exceptionally difficult.
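For anyone who wants to play with it, the rule itself (halve an even number, send an odd number n to 3n + 1) fits in a couple of lines. Checking any particular starting value is trivial; it is proving that every starting value eventually reaches 1 that nobody has managed.

```python
# The Collatz rule: halve even numbers, send odd n to 3n + 1.
# The conjecture is that this reaches 1 for every positive integer.
def collatz_sequence(n):
    sequence = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        sequence.append(n)
    return sequence

print(collatz_sequence(6))        # [6, 3, 10, 5, 16, 8, 4, 2, 1]
print(len(collatz_sequence(27)))  # 112 -- starting from 27, the sequence
                                  # climbs as high as 9232 before settling at 1
```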
-
That seems a bit like asking what the practical usefulness of a car with a flat tire is. Technically you can still travel with it, though it's probably a bad idea to try to go any kind of distance, but the real issue isn't that the car overall isn't useful. It's that the tire needs to be patched or replaced before you can use it properly.

There is no advantage to a science being soft rather than hard, but there is also nothing that forces a science to be inherently soft. There are some fields where real progress will take a lot of time and effort, and in the meantime we'll have to acknowledge a lot of major gaps in knowledge, even for ideas that have been preliminarily tested. This is unsatisfying, and making potentially unwarranted extrapolations in order to stretch what we "know" into areas that are simply very difficult to test directly is a big reason why certain subjects have remained very soft. We can't tinker with the economies of countries or of the world to see what happens, nor with the cultural practices, political structures, and ideologies of various societies. We have to go where the light is currently shining and try to develop models based on that. It's neither easy nor especially accurate with the amount of data we currently possess and are capable of collecting for such complex subjects, and as a result, some people in these fields have historically retreated into patching the holes with speculation or, as stated, unwarranted extrapolation.

The more we strip away the unfounded assumptions that a lot of these subjects were initially built upon, and the more data we gather with which to build real models of behavior, the harder these subjects will become. (Most soft sciences really do deal with human behavior, simply because it is an extremely complex subject where direct experimentation is either extremely difficult or considered unethical.) It also doesn't help that most of them are relatively young compared to the broader categories of most harder sciences and have had less time to build up the bodies of knowledge it takes to make both robustly accurate and satisfying predictions about the way things work. That last point shouldn't really matter, but it does, and it is a big part of what leads people to make leaps instead of following the data. As humans, we always want an answer, and if the right one is completely unattainable, we'll often settle for making one up.
-
I think relative "hardness" is a cultural phenomenon more than anything inherent to the subject matter of any given field. It's mostly about the degree of rigor which has been applied to the most current theories. For some fields, it is simply more difficult to apply that rigor, for a variety of reasons both practical and ethical. Psychological experimentation is obviously very restricted in terms of the types of experiments it can perform and who it can perform them on, and sociology deals with events that it is difficult or impossible to develop truly adequate controls for.

A lot of the softness is down to historical accident as much as anything, though. There was a period in Europe when the roots of what would become the field of chemistry were extremely soft. Astronomy, likewise, shares a problem with a lot of the softer sciences in that it is difficult to set up controlled experiments to explore many of its phenomena, but it has a rich history of developing mathematical models based on direct observation. Economics, by contrast, has traditionally been more of a mathematical philosophy than a genuinely evidence-based discipline. I think a lot of soft sciences have been getting harder, but especially where direct, controlled experimentation is difficult or impossible, it takes a lot of time to build up the robust knowledge base that the harder sciences enjoy.