Everything posted by joigus

  1. As I said: whether a fear or a hope, I can't tell; but an obsession it is.
  2. Exactly. One should always try to watch for patterns, not only in the facts, but also in human narratives, whatever the level of accuracy in reporting those facts. I think some kind of crude facts generate this phenomenology; I have little doubt about that. But in the chain of narrative, something gets lost (or added), like in a complex, socially-driven game of broken telephone. I think there's a lesson to be learnt in biblical (and other mythical) narratives that's very much related to what's happening here. Some Moses figure must have existed, but probably a very different guy from the one we imagine. It wasn't 2 or 3 million people leaving Egypt, but maybe a couple of hundred people, etc. Narrative distorts, and it doesn't do it in any old way; it does it according to your present fears, hopes, etc. Our emotions as a people are the gestaltic element that makes this Rorschach blot into a consistent picture of an elder brother trying to help us.
  3. If one analyses UFO reports and close-contact stories from the purely story-telling POV, it all sounds very much like humans (bipedal, anthropomorphic) from the future doing the time-warp/FTL thing (or from a parallel dimension) and trying not to leave too much of a fingerprint (so as to avoid big ripples of retrocausal interference). These 'beings' are invariably portrayed in such a way that the number one feature that strikes me is how much they look like moderately-distant relatives that care about us. I wonder if the whole thing is not just a re-edition of the biblical stories about angels, with the necessary literary elements that translate them from olden-days folklore into a folklore that we can recognize and accept. We don't know nearly enough about time yet to say, without a shadow of a doubt, that they're completely beyond belief; but in the meantime we can entertain ourselves discussing the literary values of such stories. And literary values there are.
  4. Yes. Konrad Lorenz is one example. Another is Werner Heisenberg --also a Nazi. And I'm not crazy about Pauli either --physics is my background.
  5. For a particularly creative way of using our running skills, look up 'persistence hunting' and the research thereof. A very interesting way of sophisticated thinking* making up for lack of speed. *Foreseeing the consequences of a chain of consecutive actions, rather than 1st-order causal thinking --the immediate consequence of an action. Corvids and psittaciformes!! I think @Peterkin's link partially overlaps with a very interesting talk by the author mentioned in that link --John Marzluff: https://youtu.be/1Wp_R0Eo-NE?t=1183 (Ends at 38' 10''.) In the second Marzluff video, he mentions the observed conclusion that crows hold grudges --and pass them on culturally-- against their particular villains. I've set the starting time to when he starts mentioning that. The experimental method involves adding radioactive markers to glucose, which reveals brain activity. Homologous areas like the amygdala, cortex, and hippocampus reveal different circuitry being activated depending on the kind of stimulus, and on whether the experience is a first encounter (hippocampus significantly involved) or a later one (amygdala).
  6. It's clearly undergone circumcision.
  7. Although from the big bang you cannot infer gravity (meaning Einstein's field equations), there is a connection between the two in the opposite logical direction. Singularities are a consequence of Einstein's field equations. This is the content of the Penrose-Hawking singularity theorems. Under a set of assumptions that amount to saying that causality holds (until you reach a horizon, that is), horizons themselves, and the singularities hidden behind them, are a must. Another way of saying it is that congruences of geodesics are not complete. (A formula sketch of this point is appended after this list.) So you could say singularities are a consequence of classical gravity (GR). We do not believe classical general relativity is the whole story, though. You need quantum mechanics to really understand gravitational horizons. Quantum mechanics must be heavily involved there.
  8. Einstein's equations are compatible with a menu of possibilities, with a positive, negative, or zero cosmological constant (the explicit equation is appended after this list). They're also compatible with different choices of initial conditions, so it's not that simple.
  9. Bipedal two-eyed little people with two hands and a metabolism that's compatible with an atmosphere rich in oxygen? The most natural hypothesis is that they're human descendants, millions of years from now, travelling back in time. I see big problems in trying to build an explanation from what are essentially witness accounts. Human memory is very faulty, especially when one tries to reconstruct events one didn't quite expect or understand. When we get startled, our cultural background plays a very active part in building up a 'consistent' picture of what we saw.
  10. I think what's bothering @geordief is the fact that the photons from the tip of the measuring rod reach the observer's eyes with a delay. If that's the case, my explanation would be that Einstein's definition is implicitly based on the assumption that you can always measure a rod's length by comparing it to a ruler at rest with respect to the relevant observer. IOW, you can completely fill spacetime with as many rulers as you need, each of them at rest with respect to whatever observer you want to discuss, whatever their state of (inertial) motion. (A short formula sketch of this is appended after this list.) Later introductions to SR appeal to light signals. Is that anything like what troubles you, @geordief, and have I been of any help?
  11. Absolutely. Prediction is the first step in a process that later snowballs, as @Peterkin points out. Prediction is like inverse memory. The first version of this new capability must have been inductive: if B always follows A, then whenever you see A, expect B. Deductive processes and analysis (breaking up a problem into smaller problems) must have come later. But to think that all these stages must have appeared incrementally, adaptively, is what boggles the mind.
  12. I'm sure this evolutionary pressure is at its root. The extraordinary rate of development that @Peterkin described is, I'm sure, the primary, most immediate biological reason. I also agree that memory must have played a very important part. Memory is the substrate of ideas. But imagination goes the extra mile. Let me give you an example: We know coral snakes and kingsnakes are easy to confuse. This is at the root of so-called Mertensian mimicry. Kingsnakes are quite harmless, but they disguise themselves as deadly coral snakes because there are potential predators that can't tell the difference. Most predators have this memory that @TheVat pointed out in their almonds example, which helps them avoid both the harmless and the deadly snake. This shows that many animals (corvids excluded) must have some kind of memory-based cognition. But humans go much further. We even get to the point of coming up with mnemonics to tell them apart: "Red next to black, friends to Jack; red next to yellow, kill a fellow." If you've got imagination, which allows you to produce language, you can exploit harmless kingsnakes as food, for example, while other predators have to give them up.
  13. I would like to introduce another perspective to the very interesting physiological arguments that have been presented (as @TheVat said, the topic is vast). It is the question of evolutionary pressures. That's the way I would tackle this question: What (in evolutionary terms) gave rise to big brains with a highly complex relational cortex and its cognitive features? Big brains are very expensive organs: they're gluttons for energy, and are under very strict detoxification demands due to extremely high oxidation levels. https://askananthropologist.asu.edu/brain-expensive (Leslie Aiello) https://www.scielo.br/j/bjg/a/FxXZ7LPBDmZxjKKVbPym4Vb/?lang=en Our ancestors must also have paid dearly (in exchange for a brain capable of sophisticated thought) with a high degree of neoteny (delayed development) in human infants as compared to other mammals and, in fact, to other primates. Human infants are notoriously vulnerable and dependent on their families until very late in development. For developing brains (big enough to implement complex thinking) to pay off in evolutionary terms, there must be a very powerful reason. A very interesting idea, authored by Rick Potts, from the Smithsonian, is that the Pleistocene, with its wildly-varying climate, was a major trigger of this evolutionary pressure. If there's prey or harvest, year in, year out, exactly where you expect it, you can afford to be dumb. If the world becomes unpredictable, having a big brain pays off.
  14. If Churchill were here, he'd probably say: Never was so much said by so many about something that worries so few.
  15. Sorry, I didn't explain. This would be cloning the quantum superposition \( \left|a\right\rangle +\left|b\right\rangle \).
  16. There is no such thing as a no-cloning theory. The no-cloning theorem is the simple fact that if quantum evolution is linear, there is no way that you can produce, as a result of an interaction, an outgoing state for a second system that consists in the second quantum system cloning (xeroxing, carbon-copying, reading, reproducing) the first system's quantum state. This is not a requisite of quantum teleportation (the biggest misnomer in the history of physics), but a requisite of your setup. More schematically than the Wikipedia article: cloning would require \[\left|a\right\rangle \left|\psi\right\rangle \rightarrow\left|a\right\rangle \left|a\right\rangle\] \[\left|b\right\rangle \left|\psi\right\rangle \rightarrow\left|b\right\rangle \left|b\right\rangle\] On account of the linearity of quantum evolution, these two imply \[\left(\left|a\right\rangle +\left|b\right\rangle \right)\left|\psi\right\rangle \rightarrow\left|a\right\rangle \left|a\right\rangle +\left|b\right\rangle \left|b\right\rangle\] But cloning the superposition would require \[\left(\left|a\right\rangle +\left|b\right\rangle \right)\left|\psi\right\rangle \rightarrow\left(\left|a\right\rangle +\left|b\right\rangle \right)\left(\left|a\right\rangle +\left|b\right\rangle \right)\] which does not coincide with the previous expression. So cloning quantum states is impossible if quantum evolution is linear. Maybe you can relax the hypotheses. The most natural relaxation is to assume that the state to be copied is a strict mixture (a statistical 'scrambling' of several wave functions). But a similar result holds: the no-broadcasting theorem. Again, it's not a consequence of EPR correlations, but a consequence of the general principles of quantum mechanics. You cannot broadcast a quantum state in any way. (A small numerical check of the linearity argument is appended after this list.)
  17. I know next to nothing about mental illness. My point is --however we define the boundary between a healthy and a diseased mind, which I suspect won't be easy to do-- that the question of consciousness comes first; it's bound to be more elementary. It must have to do with how certain physical systems ('observers') make room in some of their 'self' variables to represent their immediate environment in its ongoing evolution. If the 'self' physical variables are busy representing the ambient variables, perception of the self cannot fundamentally be dissociated from perception of these ambient variables, and the self can only arise as a later educated abstraction, as an afterthought, so to speak: Whatever it is that is the substrate of these things going on.* The discussion presumably will involve biochemistry very heavily, and emergence (behaviours residing in the ensemble, not in the micro-variables) will presumably play a central role in how this projection arises. In very much the same way, a star is a star only because the atoms that make it up are doing something 'starry', or a chair is a chair only as long as the atoms that make it up are doing something 'chairy'. If star or chair are blown apart, neither of them is going anywhere; the atoms have just stopped supporting their identity as whatever it is they were before. My --admittedly unjustified, but I hope, justifiable-- intuition is that this consciousness must rest in fundamental, perhaps as yet unclear, physics (biophysics, biochemistry, or even more basic). I suspect, in fact I more or less know, that none of these ideas are original. But I need to regurgitate them in my own words in order to see that I --and others-- understand what I'm talking about. *Even if the 'self' does not exist any more than the temperature of a gas (there is no temperature for an atom) or any other similarly emergent concept (star, chair, etc --see below).
  18. Impossible: https://en.wikipedia.org/wiki/No-cloning_theorem
  19. But that's more in the direction of what cognitive scientists call 'theory of mind', isn't it? The way I see it, some kind of basic consciousness must develop before the logical inference of 'I' and 'the others' takes place. As to 'sanity', it is conceivable that a conscious agent could be completely insane. But I see no reason why this subjective experience cannot be objectively explained, at least in principle. At the very least, it's something we must aspire to.
  20. One historical factoid: Feynman never wrote a book. He only wrote papers. All his books are based on recordings or, in the case of the Lectures on Gravitation, on written notes by students, translated from Feynman notation to standard notation. That wasn't James Gleick's. Sorry. James Gleick's book is Genius: The Life and Science of Richard Feynman. Surely You're Joking, Mr. Feynman! wasn't written by Feynman either. It's based on recordings again. It's Feynman's recordings typed up by Ralph Leighton.
  21. I don't think Feynman ever wrote popular science books. Not for the general readership, anyway. The Character of Physical Law or QED: The Strange Theory of Light and Matter are not your regular popular-science books. They are only deceptively popular. They're actually scientifically maverick attempts at showing you the nuts and bolts of physical theories, often getting involved in lateral thinking, and not at all easy reads. It was James Gleick's Surely You're Joking, Mr. Feynman! that did it. Before that book, Feynman was widely known within the physics community, but by no means known to the general public. Edit: x-posted with @MigL
  22. https://arxiv.org/pdf/1607.05129.pdf Also: I suppose Carlo Rovelli's excellent book on quantum gravity should be next.
  23. I agree with this. In fact, I look at this kind of exercise as a useful one when it comes to defining the most difficult/slippery concepts: constructively jettison any connotations suspected of being merely contingent as regards the concept to be defined. In this case: emotions, sight, touch or other specific senses. There would have to be some sensors, granted. Whether those sensors are 'internal' or 'external', to me, is not that clear, as the boundary is surely a fuzzy one at some level of description (I don't perceive my nails or hair as 'self'). They would have to be measuring stimuli around here, not at different spots hundreds of miles away.
  24. No version of the internet --that I can see, or foresee, now-- could replicate the functionality that I've tried to describe as, 'How does this projection occur and how does it coalesce into a continuous, integrated perception?' --which is, to me, the only 'hard' standing problem. It's not just about harvesting data from the environment, not even in a chain of 'downward' or 'downstream' causation --though that concept presents itself to me as possibly necessary for any conscious entity worth the name. The internet, or a bee, jellyfish, or any other data-processing entity could, for all I care, gradually or otherwise, turn into (or forever have been) a mumbling, stupid, forgetful, incoherent conscious entity, and it would still conceivably be a conscious entity. The step from conscious to self-conscious, to me, is not the hard part.
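A formula sketch for item 7 above, in case it helps to have something concrete to hang the singularity theorems on. The focusing argument behind them is usually phrased in terms of the Raychaudhuri equation for a congruence of timelike geodesics with 4-velocity \( u^{\mu} \), expansion \( \theta \), shear \( \sigma_{\mu\nu} \) and rotation \( \omega_{\mu\nu} \) (standard textbook form):
\[\frac{d\theta}{d\tau}=-\frac{1}{3}\theta^{2}-\sigma_{\mu\nu}\sigma^{\mu\nu}+\omega_{\mu\nu}\omega^{\mu\nu}-R_{\mu\nu}u^{\mu}u^{\nu}\]
For a rotation-free congruence, with the field equations and an energy condition guaranteeing \( R_{\mu\nu}u^{\mu}u^{\nu}\geq0 \), the right-hand side is non-positive, so \( \theta \) is driven to \( -\infty \) in finite proper time: the geodesics focus and the congruence is incomplete, which is the statement referred to in that post.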
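For item 8, the 'menu' can be written down explicitly. With the cosmological constant \( \Lambda \) left as a free parameter, the field equations read:
\[R_{\mu\nu}-\frac{1}{2}R\,g_{\mu\nu}+\Lambda g_{\mu\nu}=\frac{8\pi G}{c^{4}}T_{\mu\nu}\]
Nothing in the equations themselves fixes the sign of \( \Lambda \) (positive, negative and zero are all consistent), and the initial conditions are extra input on top of that.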
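For item 10, a minimal way of writing down what the comoving-ruler prescription amounts to. The length of a moving rod in a given frame is the difference between the positions of its two ends read off at the same coordinate time \( t \) in that frame, so light-travel delays never enter the definition:
\[L=x_{\text{tip}}(t)-x_{\text{tail}}(t)\]
Applying the Lorentz transformation to this definition gives the familiar contraction \( L=L_{0}\sqrt{1-v^{2}/c^{2}} \), where \( L_{0} \) is the rod's rest length.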
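Finally, a quick numerical check of the linearity argument in item 16. This is only a minimal sketch in Python/NumPy; the 'cloner' below is a hypothetical linear map defined by its action on the basis states \( \left|a\right\rangle \) and \( \left|b\right\rangle \), not anything taken from a real quantum library.

import numpy as np

a = np.array([1.0, 0.0])    # |a>
b = np.array([0.0, 1.0])    # |b>

def clone_by_linearity(state):
    # Extend the rule |a>|psi> -> |a>|a>, |b>|psi> -> |b>|b> by linearity:
    # decompose 'state' in the {|a>, |b>} basis and map each component.
    # (The blank state |psi> plays no role in the comparison, so it is omitted.)
    ca, cb = state @ a, state @ b
    return ca * np.kron(a, a) + cb * np.kron(b, b)

sup = a + b                                     # un-normalised superposition |a> + |b>
forced_by_linearity = clone_by_linearity(sup)   # what linear evolution gives
required_for_cloning = np.kron(sup, sup)        # what cloning would require

print(forced_by_linearity)     # [1. 0. 0. 1.]
print(required_for_cloning)    # [1. 1. 1. 1.]

The two output vectors disagree, which is just the no-cloning argument above put into numbers.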