Everything posted by AIkonoklazt

  1. It's not a definition (I'm not going to give a theory; everyone can use the regular English dictionary definition), but as I've already indicated earlier in the thread, it's a matter of the necessary and sufficient conditions for consciousness (intentionality and qualia). Without those, you don't have consciousness.
  2. Yeah, because a computer is designed and a brain isn't. So would you agree if I said something like "comparing artificial consciousness to natural consciousness is comparing apples with oranges, so expecting consciousness of the kind everyone has been talking about from machines would be nonsense"? Hey, I would agree with that 100%! 😁 It'd be "emulated symptomatic blinky-lights consciousness" instead of "actual consciousness."
  3. The calculations include all connections in a fruit fly brain (each neuron connecting to 182 others via synapses) while counting none whatsoever in the machine.
  4. Let's redo the completely screwed-up math I did earlier... CPU: 16.6 billion times 9,472 is 157,235,200,000,000 (157 trillion). GPU: 58.2 billion times 37,888 is 2,205,081,600,000,000 (about 2.2 quadrillion). The GPU total dwarfs the CPU total, so we'll just forget the CPUs. Like I said before, ignore all connections between all transistors on-chip (plus discount everything else that's not on the chips, like boards, memory, storage, interfaces, controllers, etc.): it's 2 quadrillion bare transistors in the supercomputer versus 55 million connections in the fruit fly brain (again, I'm generously counting connections between all neurons and synapses). Even with all the other bonus handicaps I'm giving, the supercomputer is still multiple orders of magnitude more complex than the brain of a fruit fly, yet the fruit fly is more conscious than the supercomputer... The complexity-emergence argument evidently just holds no merit. Don't discount the complexity of a modern superscalar microprocessor, either: it takes a design team of at least hundreds of people SEVERAL YEARS to churn out one, and that's just the chip design; it doesn't include process development, e.g. the manufacturing-tech side of things.
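A quick sanity check of the arithmetic above, using only the per-chip transistor figures and chip counts quoted in these posts (a back-of-the-envelope sketch, not an exact inventory of Frontier's hardware):

```python
# Back-of-the-envelope check of the supercomputer-vs-fruit-fly comparison,
# using only the figures quoted in the posts above.

CPU_TRANSISTORS = 16.6e9    # per CPU (figure quoted above)
GPU_TRANSISTORS = 58.2e9    # per GPU (figure quoted above)
NUM_CPUS = 9_472
NUM_GPUS = 37_888

cpu_total = CPU_TRANSISTORS * NUM_CPUS   # ~1.57e14, i.e. ~157 trillion
gpu_total = GPU_TRANSISTORS * NUM_GPUS   # ~2.21e15, i.e. ~2.2 quadrillion

FLY_CONNECTIONS = 55e6   # generous fruit-fly connection count quoted above

print(f"CPU transistors: {cpu_total:.3e}")
print(f"GPU transistors: {gpu_total:.3e}")
# GPU transistors alone outnumber fly connections by 7-8 orders of magnitude:
print(f"GPU total / fly connections: {gpu_total / FLY_CONNECTIONS:.1e}")
```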
  5. I don't buy into emergentism, especially the complexity emergentism which I addressed in my article. This is what I got off of two quick searches on MS Bing: Frontier has about 9,707,648,000 + 48,598,272,000 = 58,305,920,000, or over 58 billion transistors. (Edit: oops, it looks like I severely undercounted this figure, because just the CPUs in that thing hold more than 95 trillion transistors, but let's just be pessimistic about computers.) Let's discount any connections between transistors: don't even design anything, just plop all of them down on a slab substrate or something. Let's allow connections in the fruit fly brain but not in computer chips, because we need "margin." Because I didn't even get an answer out of Bing, I went to Perplexity and got this: Okay. That's about 55 million versus 58 billion, with a big margin built in. Why isn't a supercomputer more conscious than a fruit fly? There goes the complexity argument, but what about other varieties of emergentism? I don't buy those either, and others apparently also don't. This is what someone else has to say (he leads an applied AI team at a robotics company): https://ykulbashian.medium.com/emergence-isnt-an-explanation-it-s-a-prayer-ef239d3687bf There is another discussion from a prominent systems scientist, but it's behind a signup wall: https://iai.tv/articles/the-absurdity-of-emergence-auid-2552?_auid=2020 I think it's handwaving, and they do too when it comes to the idea being abused. If the issue is about system behavior, as Cabrera points out, then what separates it from behaviorism?
  6. Stop your hysterics. How is a car tyre "self-diagnosing" (I can't find your reference), and how does that even fit any definition of intentionality and/or qualia, which my article stated as the basic requirements for consciousness? And by the way, again, how the heck am I misusing the term "machine?" iNow is trying to troll. Cute.
  7. You said I was "misusing" a definition and yet never stated how or why. Back up your assertion. How is a car tyre a machine? I don't think I'm the one misusing a term. Which definition of a machine are you using and from where? (Searched this thread for the word "tyre" and I only found my reply and the reply that I just quoted.)
  8. No machine does anything "by itself"... It has nothing to do with the architecture.
  9. I don't see how the article didn't make it clear, since the purpose of that section ("Intelligence versus consciousness") is to distinguish the two. Intelligence is an ability, while consciousness is a phenomenon. Intelligence, as in the term "artificial intelligence," is performative and not attributive; this has been pointed out a lot by experts in AI, yet it is a point of continual confusion. A machine performs tasks that seem intelligent; it is not "being intelligent." I really thought the distinction was clear. I suppose I could throw more rhetoric at it, but I chose not to.

This makes the term "artificial intelligence" a technically specialized term. It's NOT common vernacular, because if it were, AI would literally possess intelligence instead of exhibiting symptoms of it. https://www.merriam-webster.com/dictionary/intelligence I've seen a lot of comments from experts, especially by Bender (co-author of the now-famous "stochastic parrots" paper, https://dl.acm.org/doi/pdf/10.1145/3442188.3445922; she was the one who coined the term to describe LLMs such as ChatGPT/Bard), repeatedly complaining about the conflation of concepts and terms surrounding this. "Intelligence" and "learning" in machines are technical terms referring to their performance, not their attributes. I've pointed this out very clearly in the article using a passage from an AI textbook: note how the term "experience" isn't used in the usual sense of the word, either, because experience isn't just data collection. The Knowledge Argument shows how the mind doesn't merely process information about the physical world[9]. In my opinion, the field of AI is in such a mess because of the constant anthropomorphisation, conflation of concepts, and abuse of terminology.

Back to the question of the anthill: first, the anthill itself, as you've said, is a building. A building isn't a machine in the first place. Are you including the ants? If you're just talking about the "anthill building," what's intelligent about it? It's not even performing any intelligent-seeming task. It just seems to me that you really need to start from the basics and look at what exactly you are referring to when you use a certain term; I can't stress this enough. When you referred to the human body as a "mobile anthill," what exactly did you even mean by that? I hope you realize that the question is already loaded, because you're saying something about the anthill and the ants already. NO, this isn't "question dodging"; this is clarification.

Should I add to the very first section of my article ("Intelligence versus consciousness") the words "Intelligence is an ability, while consciousness is a phenomenon"? I seriously thought people would get it right off the bat. It went by two different editors at two publications, and they didn't tell me the distinction wasn't clear or anything like that.

It's much more than Von Neumann (see the section of the article "Your argument only applies to Von Neumann machines," where I explained how my argument even applies to catapults). It only seems VN-ish because I'm speaking from an engineering perspective, as in "you can't make this thing, and here's why." When I do that, I have to use language that people understand, and people are most exposed to VN-ish things. I had to use practical examples, and most of that stuff is VN-related.

As for the Ellenberg passage... My eyes are kinda going bad, so that photo was hard for me to read. It's really short and I can't tell much from it.
Yes, he talked about Markov chains, but I don't know what ultimate point he was making with them. As in, what does the discussion have to do with referents?
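For readers unfamiliar with Markov chains, here is a toy word-level text generator (a minimal illustration of my own, with a made-up corpus, not Ellenberg's example). It makes concrete what such a generator does: it tracks which token follows which, so its entire "knowledge" is co-occurrence statistics, with no referents behind any of the words:

```python
import random
from collections import defaultdict

# Toy corpus; the chain learns only "which word follows which."
corpus = "the fly sees the light and the fly moves toward the light".split()

transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

word = random.choice(corpus)
output = [word]
for _ in range(8):
    followers = transitions.get(word)
    if not followers:
        break
    word = random.choice(followers)  # pick a statistically plausible successor
    output.append(word)

print(" ".join(output))  # plausible-looking text, produced with zero "aboutness"
```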
  10. You're very welcome. Feel free to PM me if you need articles on specific topics you have in mind surrounding consciousness, AI, and philosophy of mind. I might have what you want buried somewhere in my web-link archives.
  11. That's utilizing functionalism on a neuron (saying something like "the function is to encode and decode"... isn't this computationalism all over again?). The entire thing about heuristics: what determines it? The selection criteria somehow isn't itself a program? The created "populations" didn't come from programs? Programming is everywhere in a machine, down to the bare metal. Machine "evolution" isn't "evolution" at all; the moment any design is involved, it's over. Who designed the genetic algorithm itself? All this is kicking the can labelled "programming" down the road, hoping it disappears into the rhetorical background (see the sketch below). The second part of your reply continues the functionalism of neurons. It's using "symbol systems"? Nope... the computational/IP conception really needs to die off. https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer You're using a computerization technological parallel. It's the latest in one long chain of bad analogies based on the latest tech of the day, starting from hydraulics, then telephones, then electrical fields, and now computers and web networks (or what's being called "neural networks"... when there's nothing "neural" about those). As for DNA, I've addressed that issue in the article by saying DNA differs in functional compartmentalization (i.e., the lack thereof) as well as scope; DNA works nothing like machine programming code. You gave a bunch of names; I don't know what points they made. You have to tell me. That's what I meant. You then pick at singular and plural. Great. Then you said algorithms are not needed and there are other mechanisms. Like what? What do you mean, "treat other people who know other things than you do?" I simply asked you for the points those people you named made, plus what those "other mechanisms" are. Excuse me, but what's so unreasonable about the request?
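To make the genetic-algorithm point concrete, here is a minimal sketch (a generic toy of my own, not any system mentioned in the thread). Note that every supposedly "evolutionary" element in it is specified by a programmer:

```python
import random

TARGET = 42  # designer-chosen goal: evolve integers toward this value

def fitness(genome: int) -> int:
    return -abs(genome - TARGET)          # designer-chosen criterion

population = [random.randint(0, 100) for _ in range(20)]  # designer-chosen seed

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]             # designer-chosen selection rule
    # Designer-chosen mutation: each parent yields two offspring, nudged +/-3.
    population = [p + random.randint(-3, 3) for p in parents for _ in range(2)]

print(max(population, key=fitness))  # converges near 42 -- by design
```

The fitness function, selection rule, mutation range, and population size are all programmed choices; nothing here escapes design.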
  12. No, you need to state your points. I even used the example of a catapult in my argument. I'm curious now.
  13. You didn't state what their point is. iNow just quoted me out of context. He is clowning. Gonna just let him clown.
  14. The impossibility is three-fold; see the reply above (that was my reply to dimreepr, not you). "Their own"? What is the algorithm responsible for the ability? You can't hatch your way out of programming. What you're doing is no different from everyone else saying things like "but the algorithm is evolutionary." You're in such a hurry that you didn't even notice that the passage was about me NOT using theory, but principles and observations. Slow down, and then maybe I'll consider the rest; I'm not going to machine-gun with everyone. You yourself said you're confused in the rest of the reply.
  15. Why do you claim that I don't understand, for example, the difference between intelligence and consciousness? Back up your claim. I've already given the example of the color "red."
  16. That question wasn't scientific anyway. Phenomenal consciousness isn't amenable to external discovery. ...Which is why you're in the "General Philosophy" subforum right now.
  17. Actually, scientific studies support the presence of underdetermined factors themselves (the neuronal stimulation experiment on fly neuronal groups). The progression of science itself (this is a big one) demonstrates the underdetermination of scientific theories as a whole (the passage from the SEP re: the discovery of planets in our solar system). My argument is also evidential. The impossibility, as demonstrated, is multifaceted:

A) The problem isn't a scientific problem but an engineering as well as an epistemic problem (i.e., no complete model), as previously mentioned.

B) There's also the logical contradiction mentioned. The act of design itself creates the issue. A million years from now, things will still have to be designed, and as soon as you design anything, volition is denied from it. (Of course, you could gather up living animals and arrange them into a "computer," but any consciousness there wouldn't be artificial consciousness. Why not just cut out animal brains and make cyborgs? It's cheaper and simpler that way anyhow, if people are so desperate for those kinds of things... I seriously hope not.)

C) The nature of computation forbids engagement with meaning, as demonstrated in the Symbol Manipulator thought experiment (which is derived from Searle's Chinese Room Argument; instead of refuting behaviorism/computationalism as the CRA did, it shows the divorce of machine activity from meaning) and the pseudocode programming example (see the sketch after this post).

Is the argument air-tight? I wouldn't know unless people show me otherwise. This is why I've posted the article, and why I've been trying to set up debates with experts. (One journalist agreed to help a while ago; I haven't heard back since. Usually people are really busy. I make the time because this has become my personal mission, especially since court cases are starting to crop up as I expected. The UN agency UNESCO banned AI personhood in its AI ethics guidelines, but who knows to what extent the member countries will actually follow it.) I thought I had come up with a loophole myself a few months back, but after some discussion with a neuroscience research professor (he's a reviewer for an academic journal), I realized that the possible counterargument just collapses into yet another functionalist argument.
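To illustrate point C, here is a generic rule-based symbol manipulator in the spirit of Searle's Chinese Room (my own toy illustration, not the article's pseudocode example). The program pairs input shapes with output shapes; nothing in its operation touches what any symbol means:

```python
# A lookup-table "responder": pure shape-matching, no semantics.
RULES = {
    "你好": "你好，很高兴见到你",      # Chinese greeting -> Chinese reply
    "hello": "hello to you too",      # English works identically
    "xqzt": "qzxt",                   # so does meaningless noise
}

def respond(symbol: str) -> str:
    # The lookup is indifferent to whether the keys are Chinese, English,
    # or gibberish; the machine manipulates tokens, not meanings.
    return RULES.get(symbol, "???")

print(respond("你好"))  # a fluent-looking reply, produced without understanding
```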
  18. What distinguishes "where did consciousness kick in" from "the function where consciousness kicks in" in functionalism? If there is no distinction, then it's not a meaningfully different question from building a model, which is in turn a fool's errand. Did your vision come from the video game Nier Automata? "We can make them conscious": good for you to say without realizing what you're saying. You are inserting teleology, which means in the process of design you are denying volition. Evolution isn't a process of design unless you're making an Intelligent Design argument. No. Actually, the question is "why consciousness?" Take a look at this over the weekend if you have time: https://www.sciencedirect.com/science/article/pii/S1053810016301817 The question exposes itself when you think about this: an AGI (artificial general intelligence) could theoretically accomplish all human-level tasks given to it without ever being conscious. So why even bother trying to "make" an AGI conscious? Just for chits n' giggles?
  19. That is correct. You can't build intentionality; any attempt to do so results in producing symptoms (functionalism and behaviorism). That kind of engineering works from the outside in, doing things backwards. The "hard problem of consciousness" raised by people like Chalmers involves going in the other direction: internality, instead of externality. This is what I was doing in my article.
  20. I'm only stating that consciousness could be a state. I can't really say "consciousness IS ____," because I can't have a model myself. If I engage in theoretics, I'm destroying my own positioning: "Hey! You can't use models! Well, here's my model............" I am relying on the notion of the necessary and sufficient conditions for consciousness (i.e., what consciousness does and does not entail), not on what consciousness itself is. If I go into theoretics I'm dead meat (see my icon); might as well stick a fork in my article, it's done. I must start from first principles and primary observations. Trying to disprove a theory using yet another theory would be like trying to topple a sandcastle with a small ball of sand. That also means my article contains no explanatory power aside from things like "why machine learning isn't actual learning, and why does AI have some very bad behavior?" I only used that as an example in this subthread. You claimed that my article isn't inclusive, so I tried to explain how it is. Intentionality can still occur in a slug; a slug can have "a point of view." Are we at least past that point of contention?
  21. "Point of view" is the position from which something is evaluated: https://www.merriam-webster.com/dictionary/point of view "A slug's point of view" is just that- experiencing things from a slug's perspective.
  22. Okay. Let's start from "a slug's point of view." What is bad and not understandable about what that's referring to?
  23. I can't really judge his argumentation via his title thesis alone. I have to understand what he's saying.
  24. What's wrong with what I said about "a slug's point of view?" I still don't get it. How is that non-inclusive? You said it yourself: "imitate." We can do imitations. That's it. I just told you about intentionality; now implement that in a system and let's see what you get. Since you're not getting at the rest of the article, let's start from there. Someone else admitted to me that there's no physical law making toilet-seat consciousness impossible, either; that's not a meaningful yardstick. A machine that "does things by itself" sooner or later involves "a program that's not a program," or "programming without programming." Upon deeper examination, artificial consciousness is an oxymoronic concept. ...Then consciousness wouldn't be an illusion, since those people don't have such an "illusion." I'm trying to grasp the context of what you've quoted.
  25. Scientific basis? What about an engineering basis? (Okay, you're not going to look at the article, but can you at least look at its reference section? There are science and computer science references in there.)