Everything posted by AIkonoklazt
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
...A TOILET SEAT... INCLUDING THE HINGE.....
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
Is there a law of nature that forbids a toilet seat "to have consciousness"? I'm trying to find out whether you hold this weird paradigm.
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
I'm not curious about what you're curious about either, because unlike you I already know how to find out. Is there a law of nature that forbids any toilet seat from being conscious? Read. the. clucking. argument.
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
Yeah, I'm curious about a catapult too; I'm even curious about my toilet seat. What of it? You. Got. It. Backwards.
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
You got it backwards, buddy. You need to know the necessary and sufficient conditions of X FIRST before producing X. The ways of knowing come into play now. The argument already outlines them.
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
It does something, but that "something" isn't consciousness. It's performative intelligence. Read the argument. Again: you build a catapult. How do you know that it's not conscious?
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
The point includes the fact that no matter how coarse or fine a "difference" may be, underdetermination still applies to any model. "All models are wrong" still applies, and "the map is not the territory" still applies. If performance is practical utility, then consciousness isn't in the picture, because performance concerns intelligence in the performative sense of the term, not the attributive sense. Otherwise, "artificial intelligence" is just one big gross misnomer as a term. Someone "builds"? How about the knowledge to build it in the first place? You are ignoring engineering. If there isn't a complete model of X, you are relying on functionalism and behaviorism. There ISN'T a complete model, ever. Scientific underdetermination is always at work.
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
Sure. The point is that there ARE relativistic effects that don't. There is a need for a complete model; otherwise we're relying on producing symptoms via functionalism and behaviorism. Practical models don't have to be complete. It doesn't matter where I'm citing the paper from unless the information is bad. If I have to issue a correction inserting the word "certain" in front of the word "relativistic," sure. That doesn't change my point. I've contacted my editor. You keep ignoring the crux of the issue and therefore you're a troll. Bye.
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
Dodge what? There is a requirement for a complete model. Practical models don't need to be complete. Uh, how to find out is the same question. "Correlation does not imply causation" applies to physics. It's a scientific issue. You try to corner everything into "law of nature," so what "law of nature" is scientific knowledge? (As in "scientific knowledge" itself as a law.)
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
Please report all you want. The mods can read through the entire thread, and if they are in any way decent, they'll know what's going on. No, I'm not denying practical utility. That's not what it's about. I'm pointing out the requirement for a complete model. You know how a catapult works, so do you need a law of nature to find out whether it's conscious? You are raising a moot question. Of course, you've said basically nothing that contradicts my arguments.
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
Therein lies the rub. You're assuming that the device can be built, which is the title question of "is it possible." I'm not, because my argument (if you even bother to read it) goes ground-up. You, on the other hand, already presume the answer. You know what it is you're doing, right? Yeah, okay, I'll ignore your rubbish for a second to say this: you only stated, effectively, "consciousness is a continuum" and not "consciousness necessarily exists in all things." That doesn't contradict anything I say.
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
You're ignoring that the process of design isn't one of "natural law" or "natural design." You're the one who is avoiding the issue. Next. People accuse me of not reading the thread, but they sure go easy on themselves. The practical functioning has to do with models of all kinds. If you have a working model, that doesn't mean your model is in any way indicative of the entirety of the subject you're dealing with. Complete models never exist in actuality; if that's the case, then there's no way to build a complete model of anything that's conscious.
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
The "conflation" was regarding another user. This forum software bunches up replies if the replies are made in quick enough succession. This was my reply, you might have missed it: There's "no confusion" yet in your previous reply you stated effectively the opposite, indicating that it's a "fuzzy" and how some people use it some way and other people use it in others, what in the world was that even for? Were you speaking for yourself or not? You said that "of course it can" and then left it at that. That's called "arguing by assertion." Back up what you said. https://rationalwiki.org/wiki/Argument_by_assertion -
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
Stop your conflation, because artificial consciousness is the topic. If you insist upon the conflation, then there's no such thing as "artificial consciousness" in the first place, since everything would just be "natural consciousness." https://www.merriam-webster.com/dictionary/artificial Excuse me, but do you know what you're even asking at this point? Do you realize that the making of any man-made object doesn't start with a "law of nature"? If it did, then the design process itself would also be one of "natural law," and thus everything would be operating on "natural design," an equivalent of an Intelligent Design argument. You need to tread carefully at this point.
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
Then it's not a "natural phenomenon." You said something about "forbid any natural phenomena," and artificial consciousness isn't one. You're conflating what is naturally occurring with what is subject to physical forces. Are you being serious with me or not? Satellite navigation, not satellite navigation system. Do you know the difference between the two? My analogy is about deriving knowledge from practical effects. This is in the actual text of the paper (one glance isn't going to help you or anyone here, as evidenced by this entire "discussion"): Goodness sakes...
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
So that was a part of its design? Ironically, you're raising a point of note. You must have been kidding. If not, explain the analogy.
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
Explain how that's a valid analogy. P.S. I'd implore people on this forum not to make fart jokes out of this topic. "Artificial" carries implications, such as the thing being an artifact, which in turn carries further implications connected to the insertion of teleology.
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
Then why is it even called artificial consciousness instead of natural consciousness? Please give this a bit more thought. What "design"? From whom, God? Is that what you're saying?
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
(The system was sticking multiple replies together while I was typing this, but ended up putting one reply by itself anyway.) Oh, the people who downvoted the post above that was answering four people at the same time? Uh, you had better be the same people I'm answering right now. Being Mr. Horse from Ren & Stimpy won't help you understand anything. Are you sure you aren't just encouraging me? ...I have 27,000 fake internet points on Reddit, so do you think I care about those? Look up a page on the law of contradiction. What does it say? It says it's interchangeable with several other terms, including principles. Natural phenomena, like artificial consciousness? Please clarify what you meant. P.S. "Correlation does not imply causation" applies to physics.
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
You don't want to read? Okay, sure.

1. Principle of non-contradiction (philosophy of logic). What is "programming without programming"? An "instruction without an instruction"? An "artifact that's not an artifact"? Upon deeper examination (read the article), the concept of "artificial consciousness" is self-contradictory. (A rough formal gloss follows this post.)

2. Principle of underdetermination of scientific theory (philosophy of science). The long of it here: https://plato.stanford.edu/entries/scientific-underdetermination/ Since (of course) you're not going to read that, you may have heard of various related sayings surrounding the concept: "Correlation does not imply causation" https://en.wikipedia.org/wiki/Correlation_does_not_imply_causation , "All models are wrong, some are useful" https://en.wikipedia.org/wiki/All_models_are_wrong , and last but not least, "The map is not the territory" https://en.wikipedia.org/wiki/Map%E2%80%93territory_relation

Once again, people... Please... READ THE ARGUMENT. I don't want to constantly rehash, only to be accused of re-assertion. Good grief. Two editors from two publications and four different professors read every last word. Why can't you?
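A rough formal gloss of point 1 above, in my own shorthand (this notation is not from the article): let P stand for "the artifact's behavior originates from its programming and design." A "machine that thinks and acts on its own" would have to satisfy both P (it is a programmed artifact) and not-P (its impetus does not come from its programming), which the law of non-contradiction rules out:

```latex
% Law of non-contradiction: no proposition is both true and false.
\neg (P \land \neg P)
% With P = "the artifact's behavior originates from its programming,"
% a "machine that thinks on its own" would require P \land \neg P.
```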
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
First definition: "Humanly contrived" - Merriam-Webster https://www.merriam-webster.com/dictionary/artificial

Edit: Actually, a much better word to look at is artifact: https://www.merriam-webster.com/dictionary/artifact Look at sense 2A. Human agency is involved. If you look at the rest of my article, you will see the importance.

There are multiple issues with your last sentence. I've covered those in my article. Do people read my argument before engaging with it?

1) Machines don't learn. AI textbooks readily admit to this. The term isn't being used in its usual English sense of the word: ...The textbook applied the definition to a spreadsheet. Yes, updating Microsoft Excel is in this sense "machine learning." (See the sketch after this post.)

2) Intention involves a subject matter. There's no such thing in an algorithm. A machine's internal operation is utterly isolated from external reality. It is dealing with a one-dimensional operative reality that is divorced from referents and thus isolated from the causal world. I gave two demonstrations illustrating this (the "Chinese Room" reference in the first demonstration refers to Searle's famous Chinese Room Argument, which demonstrates that syntax is insufficient for semantics):

3) There's zero understanding. See above. To even understand something, this "something" must be a referent. Think of it this way: what is a conscious thought without even a subject? Now look at the above two demonstrations of how a machine works. An algorithm is referentially empty. You, as a programmer, are responsible for a machine's operation, along with the hardware designer who designed the hardware.

Sure, this is the reference I used in my article (which no one saw, because nobody reads my argument before engaging with it): https://www.researchgate.net/publication/266515947_Relativistic_effects_on_satellite_navigation

You're not arguing, you're asserting. "X is true because I said so." - Many people in this forum

That's GPS. I was talking about satellite navigation.
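To illustrate point 1 with a minimal sketch (my own toy example, not the textbook's and not from the article): a "spreadsheet cell" that keeps a running average already satisfies a textbook-style definition of machine learning, in that its performance at a task improves with experience, while obviously involving no understanding of what the numbers refer to.

```python
# Toy illustration (assumed example, not from the article): a "spreadsheet
# cell" that keeps a running average. Its prediction error on a task tends
# to shrink as rows accumulate, which counts as "learning" under the
# textbook-style definition, with no referents or understanding anywhere.

class RunningAverageCell:
    """A cell that 'learns' the mean of the rows appended so far."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update(self, value: float) -> None:
        # Experience E: one more row of data.
        self.total += value
        self.count += 1

    def predict(self) -> float:
        # Task T: estimate the next value.
        return self.total / self.count if self.count else 0.0


if __name__ == "__main__":
    cell = RunningAverageCell()
    for row in [10.0, 12.0, 11.0, 13.0, 12.0]:
        error = abs(cell.predict() - row)  # Performance measure P
        print(f"error before seeing {row}: {error:.2f}")
        cell.update(row)
```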
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
They are referring to different things. Machines don't have psychological conditions. The term "hallucinations" in LLMs only means "undesirable output." It's a very bad usage of the term, as I've pointed out earlier. https://www.techtarget.com/WhatIs/definition/AI-hallucination P.S. Optical illusions and hallucinations refer to completely different things.
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
First, consciousness has two necessary and sufficient conditions: intentionality and qualia, per my article.

Let's start with intentionality. I raised two demonstrations in the article showing the necessary lack of intentionality in machines. When there is only payload and sequence (the two together compose the very basic mechanism of an algorithm), there isn't any intentionality to speak of. That is, there isn't a referent subject. What is a conscious thought without even a subject? The pseudocode programming example in the article further reinforced this concept. Main idea: there isn't a conscious thought without a referent.

I recently saw some random non-peer-reviewed paper on arXiv (ugh...) that claims otherwise, but the author was blatantly relying on behaviorism (i.e. "if it quacks like a duck..."). We would get nowhere by going to the "end" first and looking at the end symptom of any kind (functionalism/behaviorism). Real progress starts by looking at the necessary conditions.

Want concrete examples? Look at what LLMs like GPTs do while they "hallucinate." The term "hallucination" in large language models such as ChatGPT / Claude / Bard is a complete abuse of the term; the "hallucination" is CORRECT OUTPUT within the programming. If you describe such a process as a "hallucination," then you'd have to peg all LLM output as "hallucination." LLMs do not operate on "meaning." Mathematical proximity is NOT "meaning" (see the sketch after this post). Seen from that angle, every "machine misbehavior" is actually "correct programmed behavior." Garbage in, garbage out. You programmed it with P so that behavior X occurs.

There is more than one, but I'll start with the simplest one first. You can't have programming without programming. A "machine that does things on its own and thinks on its own" is a contradiction in terms, and thus a "conscious machine" is a contradiction in terms. What's an "instruction without an instruction"? There was an entire section about it ("Volition Rooms — Machines can only appear to possess intrinsic impetus"). That's the law of non-contradiction.

The second one is the principle of underdetermination of scientific theory. This one is very involved if we go into fine details (I tried to make a short run at it in the section "Functionalist objections (My response: They fail to account for underdetermination)"). In short, there can be no such thing as a "complete model." You may have heard of various sayings centered around the general idea, such as "Correlation does not imply causation" https://en.wikipedia.org/wiki/Correlation_does_not_imply_causation , "All models are wrong, some are useful" https://en.wikipedia.org/wiki/All_models_are_wrong , and last but not least, "The map is not the territory" https://en.wikipedia.org/wiki/Map%E2%80%93territory_relation

Let's take the example of satellite navigation. If you just look at its practical success and usefulness, you may think "I have discovered definite laws surrounding orbiting entities and everything acting upon and within them," because the satellite doesn't deviate from course. Well, satellite navigation doesn't depend on certain relativistic effects. You think you're going to "build a pseudobrain" and get anywhere even close? Then which arbitrary stopping points of practical "close enough" are you using? So everything that you don't see doesn't count? There is absolutely zero assurance just from outward symptoms.

You can have a theoretical AGI that performs every task you can throw at it, and it still doesn't have to be conscious (which points to other issues as well...). Those are the two off the top of my head.
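A minimal sketch of the "mathematical proximity is not meaning" point (my own toy example; the vectors and the nonsense token below are made up and not taken from any real model): cosine similarity is plain arithmetic over numbers, and it computes identically whether or not a token has any referent at all.

```python
# Toy illustration (assumed values, not from the article): "proximity" in an
# embedding space is a property of the numbers, not of anything the tokens
# refer to. A nonsense token can be just as "close" to "cat" as "dog" is.

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Arbitrary made-up "embeddings" for three tokens.
vectors = {
    "cat":    [0.90, 0.10, 0.30],
    "dog":    [0.80, 0.20, 0.40],
    "xqzjfl": [0.85, 0.15, 0.35],  # nonsense token with no referent
}

print(cosine_similarity(vectors["cat"], vectors["dog"]))     # ~0.98
print(cosine_similarity(vectors["cat"], vectors["xqzjfl"]))  # ~1.00
```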
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
I'm quoting StringJunky: Where is the "disconfirmation"? Was he using the English definition of the term? Is he talking about conclusive evidence of any kind? https://www.wordnik.com/words/disconfirmation To disconfirm isn't merely to "suggest," as StringJunky put it. People were arguing by assertion.
Artificial Consciousness Is Impossible
AIkonoklazt replied to AIkonoklazt's topic in General Philosophy
I really expect at least one item from someone, anyone, after half a year, that backs up what they say in any way whatsoever. All anyone had to do was look up Wikipedia to find that one reference to Chalmers' Computational Foundation argument. I'd say that Wikipedia is even a bit slanted in this regard, listing Chalmers' "pro artificial consciousness point" and... nobody else's anything. However, Chalmers' position ends up being nothing other than another variety of functionalism. Those random Wikipedia editors are pretty disappointing too.

In other developments: Robert Marks, Distinguished Professor of Electrical & Computer Engineering at Baylor and director of The Walter Bradley Center for Natural & Artificial Intelligence, read my article, deemed it very good, and recommended that I submit it for reprint at the center's online publication. It has now been reprinted there in three parts: Artificial Consciousness Remains Impossible (Part 1), Artificial Consciousness Remains Impossible (Part 2), Artificial Consciousness Remains Impossible (Part 3).

Coincidentally, a few months after I originally wrote the article, the UN agency UNESCO banned AI legal personhood in its AI ethics recommendations, adopted by all 193 member states at the time. I wrote to Gabriela Ramos, Assistant Director-General for Social and Human Sciences of UNESCO. She agreed to forward my argumentation to members of her organization in support and defense of the policy. In my view, not only would AI legal personhood be unethical, it would be flat-out immoral (see the section "Some implications with the impossibility of artificial consciousness" of my article). There have already been legal arguments questioning AI legal personhood, one of which is this one: https://www.cambridge.org/core/journals/international-and-comparative-law-quarterly/article/artificial-intelligence-and-the-limits-of-legal-personality/1859C6E12F75046309C60C150AB31A29

There is still a lack of multilateral public discussion and debate. A newspaper writer agreed to talk to his editor to see if it's possible to set up a written philosophical debate between me and some field experts named in an article. The usual response I get from these efforts is that people don't have the time, but that won't stop me from trying. There are many people in AI-related fields who have expressed similar frustrations about how the current wave of hype is distorting the perception of various issues.

Edit: See item 68 (text bolded by me): https://unesdoc.unesco.org/ark:/48223/pf0000381137