
Posted (edited)
3 hours ago, mistermack said:

I don't detect a definition in there. This is your thread. It's not unreasonable to ask you for a definition of consciousness, that we can all understand and agree on. 

But having said that, I totally disagree with what you just wrote. You can be conscious, semi-conscious, drifting in and out of consciousness, and unconscious. And in the animal kingdom, you have a spread of organisms with a complete range, from human consciousness all the way down to nematode worms, and beyond that down to bacteria etc. 

We know that our own consciousness arose at those basic levels and evolved up to our current state, starting with chemical signals, advancing a bit with nerves and their electricals, bit by bit all the way to us. 

Why can't we imitate those systems? What's the intrinsic problem that says you can't even start to make a machine that's as conscious as an earthworm? Because if you can do that, then you can build on that to a machine that can rival our own consciousness and beyond. 

What's wrong with what I said about "a slug's view?" I still don't get it. How is that non-inclusive?

You said it yourself- "imitate." We can do imitations. That's it. I just told you about intentionality- Now implement that in a system and let's see what you get. Since you're not getting at the rest of the article, let's start from there.

3 hours ago, Ghideon said:

I have looked at the article and other sources and got curious about the science behind why artificial consciousness is impossible. Your answer

I'm trying to understand if that is a matter of definitions and logic. And, if that is the case, ruling out that there is any physical law making artificial consciousness impossible. 

Someone else admitted to me that there's no physical law making toilet seat consciousness impossible, either. That's not a meaningful yardstick.

A machine that "does things by itself" sooner or later involves "a program that's not a program" or "programming without programming." Upon deeper examination, artificial consciousness is an oxymoronic concept.
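The "programming without programming" regress can be illustrated with a toy sketch (all names and rules here are invented for illustration, not taken from the article): even a program that rewrites its own rules does the rewriting according to a rule that was itself programmed in.

```python
# Sketch of the regress in "a program that modifies its own program":
# the modification step is itself a fixed, programmed rule.

rules = {"greet": "hello"}

def modify_rules(rules):
    # This "self-programming" step is still just a programmed rule:
    # it deterministically appends "!" to every response.
    return {key: value + "!" for key, value in rules.items()}

rules = modify_rules(rules)
print(rules["greet"])  # hello!
```

Whatever stance one takes on the conclusion, the sketch shows why the phrase "a program that's not a program" is doing the argumentative work here.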

2 hours ago, Genady said:

Perhaps these people don't have this illusion.

...then consciousness wouldn't be an illusion, since those people don't have such "illusion." I'm trying to grasp the context of what you've quoted.

Edited by AIkonoklazt
Posted
13 minutes ago, AIkonoklazt said:

I'm trying to grasp the context of what you've quoted.

It'd be OT here, as the topic of what I've quoted is,

[attached image]

Posted
3 hours ago, AIkonoklazt said:

Seems to but not necessarily. The point is to distinguish phenomenal consciousness from whatever other things people call "consciousness." Why can't a slug have a "first slug view" of things?

I'm sorry, maybe we have a language problem, but I have absolutely no idea what you're trying to say there. 

I posted the Dawkins link because I was beginning to doubt myself. Is it me being a bit thick here?

I can follow and understand everything those two are saying, but I can't make sense of your above post, or most of the others. Dawkins is a top communicator, and Greene is no slouch, and both are talented scientists. I try to keep my posts clear and concise and jargon-free. I'm no Dawkins, but that's how I'd like to come across. 

Posted
1 minute ago, Genady said:

It'd be OT here, as the topic of what I've quoted is,

[attached image]

I can't really judge his argumentation via his title thesis alone. I have to understand what he's saying.

Posted
Just now, AIkonoklazt said:

I can't really judge his argumentation via his title thesis alone. I have to understand what he's saying.

Yes, of course. But it'd be OT in this thread.

Posted
2 minutes ago, mistermack said:

I'm sorry, maybe we have a language problem, but I have absolutely no idea what you're trying to say there. 

I posted the Dawkins link because I was beginning to doubt myself. Is it me being a bit thick here?

I can follow and understand everything those two are saying, but I can't make sense of your above post, or most of the others. Dawkins is a top communicator, and Greene is no slouch, and both are talented scientists. I try to keep my posts clear and concise and jargon-free. I'm no Dawkins, but that's how I'd like to come across. 

Okay. Let's start from "a slug's point of view." What is bad and not understandable about what that's referring to?

Posted
2 minutes ago, AIkonoklazt said:

Okay. Let's start from "a slug's point of view." What is bad and not understandable about what that's referring to?

I'm amazed you have to ask. I have no idea what you mean by that. Maybe with more context it might be simple. But without it, I have no idea what you're trying to convey. 

Posted
1 minute ago, mistermack said:

I'm amazed you have to ask. I have no idea what you mean by that. Maybe with more context it might be simple. But without it, I have no idea what you're trying to convey. 

"Point of view" is the position from which something is evaluated: https://www.merriam-webster.com/dictionary/point%20of%20view

"A slug's point of view" is just that- experiencing things from a slug's perspective.

Posted
1 minute ago, AIkonoklazt said:

"A slug's point of view" is just that- experiencing things from a slug's perspective.

So, you just throw that in, without any context, with no continuous narrative, not linking it to anything, and expect people to be on your wavelength? You don't say why it's relevant, or what it's relevant to, or why it combines with something previously mentioned to establish a proposition. 

You can't expect other people to share your own line of thought. You can't build a wall by standing back and throwing random bricks, you need to build up on top of what went before. 

Posted
Just now, mistermack said:

You can't build a wall by standing back and throwing random bricks, you need to build up on top of what went before. 

Depends on how many bricks you have 

Posted
2 hours ago, studiot said:

So if you wish to call consciousness a state, your state model must include both these internal and those external parameters.

I'm only stating that consciousness could be a state. I can't really say "consciousness IS ____" because I can't have a model myself- If I engage in theoretics, I'm destroying my own positioning.

"Hey! You can't use models! Well here's my model............"

I am relying on the notion of the necessary and sufficient conditions for consciousness (i.e. what consciousness does and does not entail), and not what consciousness itself is.

If I go into theoretics I'm dead meat (see my icon), might as well stick a fork in my article- It's done.

I must start from first principles and primary observations. Trying to disprove a theory using yet another theory would be like trying to topple a sand castle with a small ball of sand.

That also means my article contains no explanatory power aside from things like "why machine learning isn't actual learning" and "why does AI have some very bad behavior?"

5 minutes ago, mistermack said:

So, you just throw that in, without any context, with no continuous narrative, not linking it to anything, and expect people to be on your wavelength? You don't say why it's relevant, or what it's relevant to, or why it combines with something previously mentioned to establish a proposition. 

You can't expect other people to share your own line of thought. You can't build a wall by standing back and throwing random bricks, you need to build up on top of what went before. 

I only used that as an example in this subthread. You accused my article of not being inclusive, so I tried to explain how it is. Intentionality can still occur in a slug. A slug can have "a point of view."

Are we at least past that point of contention?

Posted
14 minutes ago, AIkonoklazt said:

I only used that as an example in this subthread. You accused my article of not being inclusive, so I tried to explain how it is. Intentionality can still occur in a slug. A slug can have "a point of view."

So are you claiming that no machine can ever have "a point of view" like a slug can?

Posted
1 minute ago, mistermack said:

So are you claiming that no machine can ever have "a point of view" like a slug can?

That is correct. You can't build intentionality. Any attempt to do so results in producing symptoms (functionalism and behaviorism). Doing that kind of engineering is working from the outside in, doing things backwards. The "hard problem of consciousness" mentioned by people like Chalmers involves going from the other direction- Internality, instead of externality. This is what I was doing in my article.

Posted
52 minutes ago, AIkonoklazt said:

hard problem of consciousness

The hard problem of consciousness appears to be defining it. 

And talking about it in plain English. I don't see any prospect of that happening so I'm out. 

But generally, in plain English, evolution made aerofoils. As in the bird's wing, etc. Humans made aerofoils, and put them in planes. They're different but they both fly. 

Evolution made animal brains, including our own. They are conscious, in varying degrees. Humans make electronic brains, and we can make them conscious too. They are different, but they can be made to have a form of consciousness, as I use the word. 

I think we get side-tracked by our own form of consciousness, because we have such exceptional brains. Go back in time, back through our ancestral past. Where did consciousness kick in? Because after that point, there is no mystery. It just grew over time. 

I have this vision of Earth being attacked and defeated by machines originally made by aliens, that have learned how to improve their technology and their own intelligence to such an extent that we are no match for them. 

It's not going to help matters, if we try to explain to them that they are not conscious and never can be.

Posted
1 hour ago, mistermack said:

The hard problem of consciousness appears to be defining it. 

And talking about it in plain English. I don't see any prospect of that happening so I'm out. 

But generally, in plain English, evolution made aerofoils. As in the bird's wing, etc. Humans made aerofoils, and put them in planes. They're different but they both fly. 

Evolution made animal brains, including our own. They are conscious, in varying degrees. Humans make electronic brains, and we can make them conscious too. They are different, but they can be made to have a form of consciousness, as I use the word. 

I think we get side-tracked by our own form of consciousness, because we have such exceptional brains. Go back in time, back through our ancestral past. Where did consciousness kick in? Because after that point, there is no mystery. It just grew over time. 

I have this vision of Earth being attacked and defeated by machines originally made by aliens, that have learned how to improve their technology and their own intelligence to such an extent that we are no match for them. 

It's not going to help matters, if we try to explain to them that they are not conscious and never can be.

What distinguishes "where did consciousness kick in" from "the function where consciousness kicks in" in functionalism?

If there is none, then it's not a meaningfully different question than building a model which is in turn a fool's errand.

Did your vision come from the video game Nier Automata?

"We can make them conscious." Good for you to say that without realizing what you're saying. You are inserting teleology, which means in the process of design you are denying volition.

Evolution isn't a process of design unless you're making an Intelligent Design argument.

1 hour ago, Genady said:

That is the question. +1

No. Actually the question is "why consciousness?" Take a look at this over the weekend if you have time:

https://www.sciencedirect.com/science/article/pii/S1053810016301817

The question exposes itself when you think about this: An AGI (artificial general intelligence) can theoretically accomplish all human-level tasks given to it without ever being conscious. So, why even bother trying to "make" an AGI conscious? Just for chits n' giggles?

Posted
9 hours ago, AIkonoklazt said:

Upon deeper examination, artificial consciousness is an oxymoronic concept.

Thank you for the clarification. Based on our discussion and the article:

1: Any counterargument rooted in empirical or natural science can be dismissed by referencing the foundational definitions and claims of the article.

2: The article's definitions and interpretations coherently rule out the possibility of artificial consciousness. Thus, any philosophical or logical counterargument can be dismissed, provided the article's foundational premises are accepted as correct.
 

(I trust this sheds light on the relevance of the analogies I introduced earlier.)

Side note: I haven't formally studied philosophy and seldom post in this section of the forum, so I appreciate your patience if my argumentation seems methodical.

Posted
8 hours ago, mistermack said:

The hard problem of consciousness appears to be defining it.

Can it be defined in different ways? Can it exist in different ways (as different phenomena to which the description "consciousness" might apply)?

Can we only understand "consciousness" by the function it fills?

Naively I used to think it was like a binary on/off switch. Can there be degrees? Can it be both? **

Would those different ways have something in common?

** Oddly, yesterday (and for the first time ever) as I went about town I noticed my mood dropping very quickly and for no apparent reason. Could that be a diminution of consciousness? (Like a diminution of that "glad to be alive" feeling.)

Maybe it is a common feeling that I never noticed before.

Posted (edited)
1 hour ago, Ghideon said:

Thank you for the clarification. Based on our discussion and the article:

1: Any counterargument rooted in empirical or natural science can be dismissed by referencing the foundational definitions and claims of the article.

2: The article's definitions and interpretations coherently rule out the possibility of artificial consciousness. Thus, any philosophical or logical counterargument can be dismissed, provided the article's foundational premises are accepted as correct.
 

(I trust this sheds light on the relevance of the analogies I introduced earlier.)

Side note: I haven't formally studied philosophy and seldom post in this section of the forum, so I appreciate your patience if my argumentation seems methodical.

1. Actually, scientific studies support the presence of underdetermined factors themselves (the neuronal stimulation experiment on fly neuronal groups). The progression of science itself (this is a big one) demonstrates the underdetermination of scientific theories as a whole (the passage from SEP re: discovery of planets in our solar system). My argument is also evidential.
2. The impossibility, as demonstrated, is multifaceted. A) The problem isn't a scientific problem but an engineering as well as an epistemic problem (i.e. no complete model), as previously mentioned. B) There's also the logical contradiction mentioned. The act of design itself creates the issue. A million years from now, things still have to be designed, and as soon as you design anything, volition is denied from it. (Of course you can gather up living animals and arrange them into a "computer," but any consciousness there wouldn't be artificial consciousness. Why not just cut out animal brains and make cyborgs? It's cheaper and simpler that way anyhow, if people are so desperate for those kinds of things... I seriously hope not.) C) The nature of computation forbids engagement with meaning, as demonstrated in the Symbol Manipulator thought experiment (which is derived from Searle's Chinese Room Argument; instead of refuting behaviorism/computationalism like the CRA did, it shows the divorce of machine activity from meaning) and the pseudocode programming example.
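The Symbol Manipulator point can be sketched in a few lines (the article's actual pseudocode example isn't reproduced in this thread, so this rule table and these tokens are invented for illustration): the program maps input symbols to output symbols purely by shape-matching against a lookup table, and at no point does it access what any symbol means.

```python
# A toy symbol manipulator in the spirit of Searle's Chinese Room:
# responses are produced by purely syntactic table lookup.

RULES = {
    ("SQUIGGLE", "SQUOGGLE"): ("BLIP",),      # arbitrary rules over arbitrary tokens
    ("BLIP",): ("SQUIGGLE", "SQUOGGLE"),
}

def manipulate(symbols):
    """Look the input tuple up in the rule table; echo it back if unmatched.

    Matching is purely syntactic: the tuple of tokens either is or is not
    a key in RULES. No semantics are consulted anywhere.
    """
    return RULES.get(tuple(symbols), tuple(symbols))

print(manipulate(["SQUIGGLE", "SQUOGGLE"]))  # ('BLIP',)
```

To an outside observer the replies can look apt, which is exactly the behaviorist trap the thought experiment targets; whether this establishes impossibility is, of course, what the thread is debating.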

Is the argument air-tight? I wouldn't know unless people show me otherwise. This is why I've posted the article, and why I've been trying to set up debates with experts. (One journalist agreed to help a while ago; I haven't heard back since. Usually people are really busy. I make the time because this has become my personal mission, especially since court cases are starting to crop up, as I expected. The UN agency UNESCO banned AI personhood in its AI ethics guidelines, but who knows to what extent the member countries will actually follow it.)

I thought I did think up a loophole myself a few months back, but after some discussion with a neuroscience research professor (he's a reviewer in an academic journal) I realized that the possible counterargument just collapses into yet another functionalist argument.

Edited by AIkonoklazt
add point 2C
Posted
8 hours ago, AIkonoklazt said:

No. Actually the question is "why consciousness?"

Your question is engineering (boring). My question is scientific (interesting).

Posted
2 minutes ago, Genady said:

Your question is engineering (boring). My question is scientific (interesting).

That question wasn't scientific anyway. Phenomenal consciousness isn't amenable to external discovery.

...Which is why you're in the "General Philosophy" subforum right now.

Posted
32 minutes ago, AIkonoklazt said:

That question wasn't scientific anyway.

Yes, it was.

 

32 minutes ago, AIkonoklazt said:

Phenomenal consciousness isn't amenable to external discovery.

Yes, it is.

 

33 minutes ago, AIkonoklazt said:

Which is why you're in the "General Philosophy" subforum right now.

No more.

Posted
17 hours ago, AIkonoklazt said:

Explain to me what you just said. I don't understand.

In the context of this thread, it means you can't argue about something you don't understand; like a pedant arguing that a peanut is actually a legume, and the peanut thinking "of course I'm a nut, the clue's in the title 🙄"...

1 hour ago, AIkonoklazt said:

Which is why you're in the "General Philosophy" subforum right now.

A philosopher's job is to "make sense" (of reality, and explain it to me).

Posted (edited)
13 hours ago, AIkonoklazt said:

That is correct. You can't build intentionality. Any attempt to do so results in producing symptoms (functionalism and behaviorism). Doing that kind of engineering is working from the outside in, doing things backwards. The "hard problem of consciousness" mentioned by people like Chalmers involves going from the other direction- Internality, instead of externality. This is what I was doing in my article.

The Hard Problem seems more about epistemic limits.  Many scientific theories are underdetermined but we still accept that they work.  Conscious experience, however, can only be directly known from the "inside" (qualia, subjectivity), so a skeptical stance may always be taken as regards any other being's consciousness - you, the King of England, a sophisticated android that claims to be conscious.  There is no scientific determination that a being engineered by natural selection (I'm using design broadly, in the sense that a design, a functional pattern, doesn't have to have a conscious designer but may arise by chance) is conscious, so we won't get that with an artificial consciousness either.  Bernoulli's principle is NOT underdetermined, because when we design a wing using it we can witness the plane actually flies (to use @mistermack's example).  Any principle of the causal nature of a conscious mind, its volitional states, its intentionality, is likely to be underdetermined.  But that isn't equivalent to saying it is impossible for such states to develop in an artificial being.

I can't really see that we can reject volition developing in a machine because the designers solely possess volition.   We humans, after all, as children develop the ability to form conscious intentions to do things by learning from parents and adults around us.  Our wetware is loaded with a software package.  We don't then dismiss all our later decisions in life as just software programs installed by them.  We don't say I am just executing parental programs and have no agency myself.  All volition rests with Mom and Dad.  

 

3 hours ago, AIkonoklazt said:

There's also the logical contradiction mentioned. The act of design itself creates the issue. A million years from now, things still have to be designed, and as soon as you design anything, volition is denied from it.

This presupposes that machines can never be developed with cortical architecture, plasticity and heuristics modeled on natural systems and thus be able to innovate and possibly improve their own design.  The designed becomes the designer - wasn't this argued earlier in the thread and sort of passed over?

Second, again you still seem to deny volition by fiat, as if it is a process that simply cannot be transferred.  I think you aren't proving this.  Why can't an android "baby" be made, which interacts with its environment and develops desires and volitions regarding its world?  IOW, not every state in an advanced AI must be assumed to be programmed.  That assumption just creates a strawman version of AI, resting on the thin ice of our present level of technology.

Edited by TheVat
stupid autocorrect mos def not AI
Posted
14 hours ago, AIkonoklazt said:

I must start from first principles and primary observations. Trying to disprove a theory using yet another theory would be like trying to topple a sand castle with a small ball of sand.

Do you know the difference between a theory and a hypothesis ?

You do not have a theory.

Please use the correct terminology.

By the way you are just plain wrong about the sandcastle, as modern Catastrophe Theory shows.

You just have to do it right with the sandball.

14 hours ago, AIkonoklazt said:

I must start from first principles and primary observations.

I agree.

So why aren't you doing this ?

 

It is my opinion (a different term again)  that in this thread insufficient attention is being paid to the difference between

Intelligence.

Consciousness.

Self Awareness.

In particular I hold that Intelligence is not necessary for the other two.

 

50 minutes ago, TheVat said:

I can't really see that we can reject volition developing in a machine because the designers solely possess volition.

 

14 hours ago, AIkonoklazt said:

That is correct. You can't build intentionality.

In fact I see a way that 'self determination' can be built into an artificial construct.

I know nothing about American car tyres; what do you know about UK tyres?

 

As regards models and their use.

I am very confused here.

You appear to have indicated both that you can and cannot use models.

Please clarify this.

 

I have already agreed that the only perfect model is the thing being modelled itself.

But this does not imply that an imperfect model cannot yield useful information.

For instance both the model and the thing itself must of necessity follow the same laws of physics.

You cannot validly introduce

14 hours ago, AIkonoklazt said:

I must start from first principles and primary observations.

principles and observations that violate these.

 

In summary it is good to see you starting to discuss the comments of others.

It would be nice to see you discussing the fundamental ones I have made, instead of ignoring them.
