Everything posted by Delta1212
-
I've had a couple of lucid dreams where I've become aware I was dreaming and was able to influence what happened to one extent or another. I'm still not entirely sure whether that is really a qualitatively different experience or whether I was simply dreaming that I was having a lucid dream.
-
But if it's in boiling water, the temperature is going to be the same regardless of the heat underneath it. The water is exposed to additional heat, but the egg won't be until the water boils off.
-
You have to take into account what is feasibly implementable. For starters, the Twitterbot in question is dumb. It can mimic speech, but it doesn't understand it. I don't mean that in a "it's not conscious" way, but in a "it's not actually communicating any information" way. It's just babbling in a very sophisticated way, like a baby mimicking the sounds around them without knowing what they mean yet.

Given that, something that requires slightly more complex analysis of the content of the messages, like "Modesty," is difficult to implement. At best, you could pre-set it to discount input that includes a range of "taboo" keywords or phrases so as to avoid learning from those specific bad examples, but that requires a lot of upfront effort, and word filters are never foolproof. It's a neat little chatbot AI, but it's nowhere near sophisticated enough to handle having people intentionally sabotage its input. I'm not even sure there is a foolproof way to handle that regardless of how sophisticated an AI is, especially not in its early stages of training.

Parents tend to instill this kind of thing by being physically present to monitor as much of their child's input as they can in its early days and providing immediate positive or negative reinforcement toward certain behaviors as they first crop up. If a child goes on a neo-Nazi rant, they get punished and learn to avoid that behavior in the future. If you had someone sitting there monitoring everything a chatbot says and hitting a "punish" button (essentially just sending it information to the effect that this is not something it should say) every time it said something wrong, it would probably eventually work out the patterns of statements it should be avoiding on its own going forward.

But for an AI that is learning from continuous input and reacting with output, all happening far faster than a human can reasonably monitor on a case-by-case basis (unless you literally want to spend years training your AI by feeding it training sets and manually evaluating its outputs one at a time), there's not a good way to do this. Or more precisely, there are some good ways to do this as a general case for problems where the output can be boiled down to "good fit" or "not good fit," but that's extremely hard to do when you are evaluating not just whether a sentence fits within grammatical and common usage structure but also whether the content is socially acceptable. An AI can figure the first out with a large enough sample set of sentences to compare its output against, but the latter is much harder. You'd need to collect a large database of socially unacceptable things to say, then teach it to evaluate the unacceptableness of a given statement based on that sample set and self-censor anything that meets those criteria. But then you have to get a very broad and representative sample set to train it on, and there is always a pretty good chance you will miss stuff.
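As a rough illustration of what that keyword pre-filtering approach could look like, here's a minimal sketch; the word list, function names and training hook are all hypothetical, not anything the actual bot used, and a real filter would need far broader coverage:

```python
# Minimal sketch of pre-screening training input against "taboo" keywords.
# The keyword list and function names are hypothetical examples; real filters
# still miss creative misspellings, paraphrases and context-dependent abuse.
TABOO_KEYWORDS = {"example_slur", "example_taboo_phrase"}

def is_acceptable(message: str) -> bool:
    """Return False if the message contains any flagged keyword."""
    lowered = message.lower()
    return not any(keyword in lowered for keyword in TABOO_KEYWORDS)

def filter_training_batch(messages):
    """Keep only messages that pass the keyword screen before learning from them."""
    return [m for m in messages if is_acceptable(m)]

if __name__ == "__main__":
    batch = ["hello there", "something containing example_slur"]
    print(filter_training_batch(batch))  # -> ['hello there']
```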
-
I tend to think that any group that predicates membership on having a high IQ is going to attract members who are most interested in showing off the fact that they have a high IQ. If you want to get involved in a community that is going to help you learn things, look for a community that accepts members based on whether they want to learn things.
-
Why nothing can go faster than speed of light.
Delta1212 replied to Robittybob1's topic in Relativity
The speed can't be the speed of light. It can be any specific value under the speed of light, which is usually most easily expressed as a percentage of the speed of light if you want to talk about approaching it. But 99% of the speed of light gives a very different answer than 99.9% of the speed of light, which in turn gives a very different answer than 99.99% of the speed of light.
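To make those "very different answers" concrete, here's a quick sketch in plain Python (nothing assumed beyond the standard Lorentz factor) comparing the time dilation factor at those speeds:

```python
import math

def lorentz_factor(fraction_of_c: float) -> float:
    """Gamma = 1 / sqrt(1 - v^2/c^2) for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - fraction_of_c ** 2)

for f in (0.99, 0.999, 0.9999):
    print(f"{f:.2%} of c -> gamma of about {lorentz_factor(f):.1f}")
# 99.00% of c -> gamma of about 7.1
# 99.90% of c -> gamma of about 22.4
# 99.99% of c -> gamma of about 70.7
```
-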
Why nothing can go faster than speed of light.
Delta1212 replied to Robittybob1's topic in Relativity
There is no closest speed. If you are traveling at 99.99999999999% of the speed of light, you can always get to 99.999999999999999% of the speed of light. And each additional 0.0...9 takes an increasing amount of energy, which means that the energy requirement grows without bound the closer you get. So there's no specific value that can be given unless you define exactly how close you want to get, which "as close as possible" doesn't do for the above stated reason. You can always get closer.
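A minimal sketch of how quickly that energy requirement blows up, assuming a 1 kg object purely for illustration and using the standard relativistic kinetic energy (gamma - 1)mc^2:

```python
import math

C = 299_792_458.0  # speed of light in m/s
MASS = 1.0         # illustrative 1 kg object

def kinetic_energy(fraction_of_c: float) -> float:
    """Relativistic kinetic energy (gamma - 1) * m * c^2, in joules."""
    gamma = 1.0 / math.sqrt(1.0 - fraction_of_c ** 2)
    return (gamma - 1.0) * MASS * C ** 2

for f in (0.9, 0.99, 0.999, 0.9999):
    print(f"{f} of c -> {kinetic_energy(f):.2e} J")
# Each extra 9 multiplies the cost; no finite amount of energy reaches c itself.
```
-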
Time Travel is Impossible and if not Impractical
Delta1212 replied to HPositive's topic in Modern and Theoretical Physics
That's also not what a theory means. A theory is something that has been worked on extensively and has explicitly outlined methods for how to test it, some of which would ideally have already been implemented and provided some evidence that the theory is correctly modeling what happens in reality. But it will still be a theory when that happens. The word you are looking for, in a scientific context, is not "theory." It is "idea."
-
Where Does Space End? It Must End Somewhere!
Delta1212 replied to Edisonian's topic in Astronomy and Cosmology
Our observable universe is finite. There may be a greater universe extending beyond the bounds of what we are capable of observing that goes to infinity, but the area that we can observe, everything we see when we look up in the sky, exists within a finite bubble of space. That bubble of space, specifically, used to be much smaller, and consequently much hotter and denser, as all of the matter we can observe in the universe was packed into a very small space.

But if the universe is infinite, we would not be talking about a very small universe getting very big, but about an infinitely large and uniformly hot and dense universe becoming an infinitely large universe that is much cooler and less dense through the metric expansion of space. Our local pocket of the universe would then have gone from being very small to very large, but the overall universe would have always been infinite if it is now.

Expansion/the Big Bang is not the universe "growing." It is the distance between any two given points in space increasing. If distance increases (and see the Hotel example linked above for how something infinitely large could expand) and the amount of matter/energy remains the same, the density will decrease, and you will get a situation that looks exactly like what we can see right now when we look out into space.
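In terms of the scale factor a(t) that describes metric expansion, the density argument is just the standard matter-dilution relation (stated here for matter only, ignoring radiation and dark energy):

```latex
% Distances scale with a(t), so a fixed amount of matter occupies a volume
% proportional to a(t)^3 and its density falls accordingly:
\rho(t) = \rho_0 \left(\frac{a_0}{a(t)}\right)^{3}
% As a(t) grows, \rho(t) drops, whether the universe is finite or infinite.
```
-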
Time Travel is Impossible and if not Impractical
Delta1212 replied to HPositive's topic in Modern and Theoretical Physics
There's no evidence that any of that is remotely true. -
I'm just saying that if you include "hunger" in a system, it's not going to require any more complex emotional feedback in order for it to then learn that it should eat (assuming it is given the capacity to do so). If "hunger" is a value that the AI has a goal of decreasing and eating decreases the value, eventually it'll learn to eat (again, assuming that is an available option and not something that is physically impossible for it to do).

I'm also not suggesting that an AI is going to spontaneously develop emotions. It's possible that a complex AI could spontaneously adopt some of the associated behavioral patterns given the right problem set and circumstances during training (heavily changing the way decisions are made during periods of high perceived risk is both a potentially straightforward reaction to the situation and a decent approximation of fear), but in general, I'm suggesting that you should be able to set up the potential for emotions intentionally as an initial condition of the system.

Let's say you set up an AI, set it to the task of solving some problem and give it a few good rounds of training so it does a pretty good job. Now, you could set it so that, in the event that it finds itself in circumstances where it can't find a path to a solution, it has a secondary goal: say, it should weight potential decisions with a less predictable outcome more highly in the hopes that unforeseen options will open up. You train it with that priority, and now you have an AI that alters its decision-making in response to frustration. It even becomes "irrational" as a response, although there are reasons that being slightly chaotic in those circumstances has some potential benefits, especially if you are working on a problem where the moves and outcomes are less easily defined than in a game like chess or Go.

But just because you've added a semblance of "anger" to the AI doesn't mean that it's going to cover the full range of human associations with a given emotion, and you, as the person who is setting the parameters of the AI's behavior and defining for it how potential solutions should be evaluated, have the ability to program in emotional responses that are atypical of humans or even that have no direct human correlates. You could program an "adrenaline junkie" AI that prioritizes high-risk behaviors instead of a fearful one that is risk averse, for example.

Human emotions and emotional responses have been shaped by our "goal" (reproduction, which may not be everyone's personal goal but is the "problem" that the evolutionary algorithm that is life is working on), our environment and the resources that we have available to us. We're defining all three of those things for any AI that we are creating, which means that we have a direct hand in shaping both whether an AI has emotions and what those emotions look like. And there's no need for them to have a 1:1 relationship with any emotions humans have. Thinking about them as resulting in a "librarian with attitude" is perhaps not looking at the way emotional responses could be implemented, and even potentially useful, in an AI system, because it means you are looking at them from the perspective of exactly mimicking human responses, when in reality human emotional responses are themselves behaviors that were developed as solutions to problems that humans typically face and that you probably won't be applying an AI to. Emotions aren't generally thought of as problem-solving strategies, but that's all they are.

They're shortcuts to certain types of solutions that have resulted in generally good outcomes given specific circumstances, without forcing you to learn a new response to every problem you come across. Fear keeps you from getting killed or losing resources you can't afford to lose in risky situations. Anger has a number of potential uses, from inducing a change in unfavorable circumstances where no better options seem to be available, to inducing other people to solve problems for you when you can't find a solution yourself, to increasing the risk associated with causing problems for you in the first place so that other people will avoid creating problems in the future. Happiness induces you to want to repeat behaviors that have had positive outcomes in the past. Frustration may get you to abandon tasks that are unlikely to yield a benefit worth the effort being put in, or cause you to change strategies when the one you are pursuing isn't working. Most emotions have some element of navigating interpersonal relationships and competing or coherent goals in a social environment.

Emotional responses get a bad rap because they often lack nuance, but in a world where the time, energy and resources to thoroughly tackle every problem from a purely rational and strategic perspective aren't always available, useful shortcuts and rules of thumb are often a good way to avoid wasting resources on problems that can be adequately, even if not perfectly, solved with a less refined approach. From this perspective, any emotional responses that an AI has are going to be tailored to the problems it is given and to the resources at its disposal, rather than toward mimicking human responses to what is probably an entirely different problem, in an entirely different environment, with an entirely different set of available resources.
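A minimal sketch of what that "frustration" adjustment could look like in code; the threshold, the scoring scheme and the variance bonus are all illustrative assumptions, not a description of any real system:

```python
import random

def choose_action(actions, expected_value, outcome_variance,
                  acceptable_threshold=0.0, frustration_bonus=2.0):
    """Pick an action; if nothing clears the acceptability threshold,
    weight unpredictable (high-variance) options more heavily.

    All parameters and the scoring scheme are illustrative assumptions.
    """
    best = max(expected_value[a] for a in actions)
    frustrated = best < acceptable_threshold  # no acceptable path found

    def score(a):
        s = expected_value[a]
        if frustrated:
            s += frustration_bonus * outcome_variance[a]  # favor the unpredictable
        return s

    weights = [max(score(a), 1e-6) for a in actions]
    return random.choices(actions, weights=weights, k=1)[0]

# Example: every option looks bad, so the high-variance "gamble" gets favored.
acts = ["wait", "retreat", "gamble"]
ev = {"wait": -1.0, "retreat": -0.5, "gamble": -0.8}
var = {"wait": 0.1, "retreat": 0.2, "gamble": 1.5}
print(choose_action(acts, ev, var))
```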
-
Fear isn't the only motivator. Actually, hunger should work all on its own. You train the system to decrease its feeling of hunger; eating decreases hunger. Once it tries eating, that should reinforce itself well enough just with that. Even most people eat more because they don't want to feel hungry than because they are afraid of dying if they don't get food.
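As a toy illustration of that feedback loop (the actions, numbers and update rule here are all invented for the sketch):

```python
import random

actions = ["eat", "wander"]
value = {a: 0.0 for a in actions}  # learned estimate of how much each action reduces hunger
hunger = 10.0

for step in range(200):
    # epsilon-greedy: mostly pick the action currently believed to reduce hunger the most
    action = random.choice(actions) if random.random() < 0.2 else max(value, key=value.get)
    relief = 3.0 if action == "eat" else 0.0   # eating is the only thing that lowers hunger
    hunger = max(hunger - relief + 1.0, 0.0)   # hunger also creeps back up each step
    value[action] += 0.1 * (relief - value[action])  # running estimate of each action's relief

print(value)  # "eat" ends up valued far above "wander", so the agent learns to eat
```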
-
That depends somewhat on the problem whose solution space it is exploring. You have to have some sort of starting parameters one way or another. You can be lazy or incautious and set an AI on a path that will lead to violent tendencies (given that it has the capacity to implement violent behaviors), but that is going to depend on what sorts of tasks and goals an AI is applied to and what kinds of parameters are set for evaluating potential solutions.
-
Time Travel is Impossible and if not Impractical
Delta1212 replied to HPositive's topic in Modern and Theoretical Physics
That's probably one of the least likely ways for time travel to "actually" work, even if that's the most convenient way to make a logically consistent time travel story work. -
Where Does Space End? It Must End Somewhere!
Delta1212 replied to Edisonian's topic in Astronomy and Cosmology
It's hard to imagine because infinity is hard to imagine, but there is no reason it couldn't have started at an infinite size, but much more dense. It would have remained infinitely large, but the expansion would have then decreased the density. But yes, it either is infinite and started infinite, or it is finite and started finite. (Or I suppose something really weird could have happened and it transitioned from one to the other, but you wouldn't get there from simple expansion). -
If we want to look at this at least slightly more rigorously, it might be worth considering what emotions actually are from a results-oriented perspective, rather than just how they feel and how people stereotypically act as a result of them. In short, you respond to input differently when you are angry (or sad, happy, etc.) than when you are not. There's no reason an AI couldn't be programmed with situationally dependent adjustments to the weights in a decision tree (which is effectively what an emotional state would look like in an AI from a practical perspective), but there's no reason that an AI's emotions would have to look anything like a human's, or that its responses to those emotions would have to resemble a human's. Anger, for example, is a response to a situation with no apparent acceptable solution. You could program an AI to adjust its decision-making when encountering such a problem. It may even be a good idea to allow for a change in the way decisions are made when there doesn't seem to be an acceptable decision available under the normal way of processing them, but there's no reason that the new set of responses would need to be violent ones just because that's how humans tend to react. You would need to hardwire or teach the AI a violent set of responses, when you could instead program or teach it a set of responses to the situation that are entirely unlike what a human would have.
-
What you said at the end is essentially correct, although there is no "maximum point" so much as a limit that you can continuously approach but never reach.

You know the CoverFlow effect that's sometimes used for slideshows, and especially for music libraries, where whatever image you are currently viewing is displayed normally, front and center, and off to either side is a visible series of other images that you can scroll through, and as you get further from the current image in either direction, the images get more and more scrunched together? That's how I conceptualize frames of reference: each image is one frame, and there are an infinite number of images extending out in either direction, but since they get more and more scrunched together, they never quite reach the edges of the screen.

The central image is your rest frame. The "further" an image is from your frame of reference (i.e. the larger the velocity relative to your own rest frame), the more scrunched it appears to be, and the closer it seems to be to the images adjacent to it. If you scroll over and make that image your rest frame, however, the images on either side of it will appear more spread out, just as they do around the center of the strip. A change in frames is really a change in velocity. So to get from one image to the next, you would need to accelerate until you reached whatever speed that image represents.

Now, if you were to look at someone who is currently in a frame way out toward one end of the strip (i.e. moving with a high velocity relative to you), they would appear length contracted and time dilated, and if they were to accelerate, they would continue moving further and further along the images. But each change from one image to the next would constitute a smaller and smaller "distance" along the line, as each image (from your perspective) gets more and more scrunched together. And, of course, moving along the images in the line, they can never leave the slideshow, because the images don't extend beyond the edge of the screen. They just get closer and closer to it, extending on to infinity.

For a practical example of how this relates to the question, we'll say that a gun accelerates a bullet to the point that it is five "frames" away from the shooter (remembering that a frame = a relative velocity). If you are firing from the center image, the bullet appears to be going a certain amount faster than the gun in the rest frame. If someone in one of the frames near the edge fires the gun, the bullet will still move the same five frames, but since the frames at the edges, from your view, look like they are closer together, the relative difference doesn't look as great as it would if you were viewing it from the frame of the gun.

That's a little more muddled than I might have liked, and I probably need to work on explaining the metaphor a bit more simply. I know it isn't the easiest one in the world to grasp, because it requires thinking of velocities as locations along a line, with distance representing relative speed (and in a more strictly accurate analogy, the "line" would extend in every direction instead of just left and right), but this is the one that I use in my own mind and that makes it easiest for me to hold a lot of the concepts together. Hopefully it will be of some help.

Edit: Heh, now I just had an idea for a little interactive tool that uses the CoverFlow setup, but with a series of continuously running animations that you can accelerate between by scrolling and that display proper time dilation and length contraction depending on your relative distance from them along the "velocity line." If I stuck clocks on them, you should even be able to pull off your own twin paradox experiment. Hm...
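If it helps to see the "velocity as a position on a line" idea numerically, the quantity that actually behaves that way is rapidity (a standard construction; the specific speeds below are just examples): velocities combine by adding rapidities, and the result always stays below c.

```python
import math

def rapidity(beta: float) -> float:
    """Rapidity for a speed given as a fraction of c; this is the 'position on the line'."""
    return math.atanh(beta)

def combine_speeds(beta1: float, beta2: float) -> float:
    """Relativistic velocity addition, done by adding rapidities."""
    return math.tanh(rapidity(beta1) + rapidity(beta2))

# A gun that fires a bullet at 0.5c relative to itself, fired from frames
# already moving at various speeds relative to you:
for platform in (0.0, 0.9, 0.99):
    print(f"platform {platform}c -> bullet {combine_speeds(platform, 0.5):.6f}c")
# The bullet always lands short of 1.0c; the same "step" along the line looks
# smaller and smaller the closer the platform already is to c.
```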
-
Let's say you have your two particles A and B. If you measure the spin of A, the wave function of B also collapses, and B's spin is determinable from your measurement of A. There is no way to know that the wave function has collapsed from B's end, however. You can measure B, but that will immediately collapse the wave function of both A and B anyway. So if A has been measured, you can measure B and see that it has a definite spin (because A was measured and the wave function collapsed), or you can measure B and see that it has a definite spin (because you measured it). There's no way, from B's end, to detect the moment that the wave function collapses because someone measured A. You could send a signal to B after measuring A letting the system there know "Hey, the wave function collapsed because I measured A, move the pawn," but at that point you could just set up a transmitter to say "Hey, move the pawn" without bothering with entanglement.
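A toy simulation of that point for a pair measured along the same axis (a deliberately simplified model, not a full quantum treatment): B's statistics look identical whether or not A was measured first, which is why no message gets through.

```python
import random

def measure_B(a_was_measured: bool) -> int:
    """Toy model: measure particle B of a perfectly anticorrelated pair along one axis.

    Simplified on purpose: same-axis measurements only, so outcomes are perfectly
    anticorrelated and each side individually is 50/50. Note that a_was_measured
    never affects the result -- that is the whole point.
    """
    a_result = random.choice([+1, -1])  # A's outcome is random either way
    return -a_result                    # B always comes out opposite to A

def fraction_spin_up(trials: int, a_was_measured: bool) -> float:
    return sum(measure_B(a_was_measured) == +1 for _ in range(trials)) / trials

print("B measured on its own: ", fraction_spin_up(100_000, a_was_measured=False))
print("B measured after A:    ", fraction_spin_up(100_000, a_was_measured=True))
# Both print ~0.5: from B's side the two situations are statistically identical,
# so no signal can be sent by choosing whether or not to measure A.
```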
-
Also, if you have grand rewards to look forward to when you die, it becomes easier to bear a lesser lot in life than if you think that what you get now is all you're ever going to get. Throw in that doing anything drastic about your current situation is likely to result in losing out on the post-life rewards, and that any wrongdoing on the part of the rich is guaranteed to be punished in the afterlife so you don't have to worry too much if one of them seems to be getting away with some bad things right now, and it's fairly effective at, at the very least, raising the threshold of tolerance for a lot of people.
-
How can we invent a machine intelligence if there is no such thing as intelligence?
-
What is the history of dark 'stuff'?
Delta1212 replied to SimonFunnell's topic in Modern and Theoretical Physics
Dark matter is called 'dark' matter because there seems to be mass exerting a gravitational influence on the matter we can see that isn't accounted for by the matter we can see. It is hypothesized that there is some form of matter that does not interact with the electromagnetic spectrum (i.e. 'light') in the way that normal matter does, hence 'dark matter.' Beyond that, we don't know a whole lot about it. The rest of the 'dark stuff' got slapped with the same label mostly because, like dark matter, they are placeholder names for the sources of effects that we currently can't account for without there being "something else" that we don't know much about, except that it is apparently causing an effect we can't otherwise explain. Aside from a relative dearth of knowledge about them and the 'dark' name, it's unlikely that they have anything in particular to do with each other.
-
What about a moon orbiting a gas giant? You might be able to tweak the orbit, size of the planet and distance from the sun to get a fairly extended period of eclipse that actually would put the entire world into darkness, although precisely how extended I'm not sure, especially if you want the moon to be habitable without life support systems of some kind (since it would need to be a fair distance out from its star, I think. Not because the darkness would necessarily render it uninhabitable itself).
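For a very rough sense of scale, here is a back-of-the-envelope sketch only, assuming a Jupiter-sized planet and a moon on a Europa-like orbit, and ignoring the taper of the planet's shadow and the star's angular size:

```python
import math

# Rough inputs -- all illustrative assumptions, not tuned for habitability.
G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
planet_mass = 1.9e27          # roughly Jupiter, kg
planet_diameter = 1.4e8       # roughly Jupiter, m
orbit_radius = 6.7e8          # roughly Europa's orbital radius, m

orbital_speed = math.sqrt(G * planet_mass / orbit_radius)   # circular orbit
eclipse_seconds = planet_diameter / orbital_speed           # time to cross the shadow

print(f"orbital speed ~ {orbital_speed / 1000:.1f} km/s")
print(f"eclipse ~ {eclipse_seconds / 3600:.1f} hours per orbit")
# Roughly a few hours of darkness per ~3.5-day orbit with these numbers; a wider
# orbit stretches each eclipse but also makes it less frequent.
```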
-
I think it's important to do away with the image of humans making a trek out of Africa as some kind of goal. It may have taken several generations for a group of humans to move a distance that could have been covered in a single human lifetime if there had been a dedicated and intentional migration. We have to bear in mind that really fixed settlements didn't start cropping up until the rise of agriculture, which hadn't happened yet, and that, to some extent, humans needed to move around a bit to stay where the food was. It's likely that whatever group of humans left did so while looking for nearby unexploited food sources, and that this drew them out into new territory gradually over quite a bit of time.
-
Memory palaces take advantage of the fact that humans (not geniuses, but humans in general) often have better spatial memory than other kinds of memory. For example, I have no idea how many bookshelves there were in my childhood bedroom. That's not a fact I know. But I can call up a mental image of my room, count the number of bookcases, and tell you that there were 10 (mostly skinny) shelving units full of books in my room. I'm able to call up the correct number despite not knowing what that number is beforehand because of the way the brain stores spatial information.

The memory palace concept exploits this, along with the fact that one of the major obstacles to recall is often an inability to get at a piece of information because the memory lacks associations. We use things like sights and smells to trigger memories, or come at things that are "on the tip of our tongue" from different directions (e.g. I think the name starts with 'r', so I start reciting names that start with 'r' until I hit one that either sparks recognition or makes another connection I can use to get at the memory I want), because all of the information in our heads is associated with other information, and we remember things using those associations.

A memory palace creates an additional association for a piece of information, which makes it easier to recall. You can associate a fact with an object in your memory palace, and once you've made that mental association, thinking about that object will help you remember the fact. Placing the object in a "physical space" in your mind uses spatial memory to help you keep track of that object and make it easier to find. You don't have to be a genius, and it isn't magic. It doesn't actually make your memory any sharper than it is otherwise. It just helps you exploit certain quirks about the way our memory works so that you can use yours more effectively.
-
In a strange land.
-
Is it possible to hijack one thread simply by posting an entirely different thread?