
Posted (edited)
9 minutes ago, AIkonoklazt said:

Because you seemed to be equating nature-as-designer with man-as-designer in your original point.

No, that's just your imagination. I am capable of writing that, if that was what I meant. Try replying to what I write, not what you imagine I might have meant.

Edit: Let me have a go at making it crystal clear. In everyday English, we say that the shape of the mountain guides the course of an avalanche. Or the shape of a valley guides the course of a river. There is no intent or design involved. Maybe English is not your first language.

Edited by mistermack
Posted (edited)

  

On 5/29/2022 at 8:41 AM, TheVat said:

The simplest argument I can offer you is: the emergence of artificial consciousness is possible because the consciousness we all know intimately has in fact emerged from matter, molecules which evolved the ability to both represent and to create information.  

If I may add a counter-argument to this: I am not sure how close we really are to actual AI, rather than just another really intelligent programmed robot.

While it's true that intelligence is required to be conscious, just because something is intelligent does not mean it's conscious. Lots of NASA equipment, like rovers, is intelligent. Calculators and computers are pretty intelligent too, but are they conscious? The definite answer is no.

1.) The AI, whatever form it takes, must have awareness outside of its programming. We can program robots all day long to detect dead leaves on a tree, pluck those leaves, and leave the healthy ones alone. But this is not AI or awareness; it's programming.

2.) The AI must be conscious!

Consciousness is the state of being aware of something, and it can be regarded as more of a spiritual kind of definition. If a person is aware of something, he or she may only feel its presence without actually knowing what it is. Take, for example, a blind and deaf person: they can feel things around them and are aware of them, but have no knowledge of what they are. This requires senses that AI just can't obtain, like smell. The important thing here is that the person does not need to have a full understanding of the object that he/she feels. If they sense it, they may be aware of it. Take away a blind, deaf person's ability to taste, touch, and smell. Now what do you have? Well, one can still be aware of an emotion, yes?

3.) Life 

It is my understanding that something must be living and have intelligence in order to be conscious. For example, a tree, as we understand it, is alive but not intelligent. A bird, however, is alive and has intelligence, so it is more conscious than a tree would be.

4.) Self-consciousness

To have an understanding that one is alive and is conscious = being aware that one is aware, or conscious of being conscious. In order for an AI to have these and be a true form of AI in the sense everyone dreams about, it must possess all of these outside of programming.

 

Edited by J.Merrill
Posted
1 hour ago, mistermack said:

No, that's just your imagination. I am capable of writing that, if that was what I meant. Try replying to what I write, not what you imagine I might have meant.

Edit: Let me have a go at making it crystal clear. In everyday English, we say that the shape of the mountain guides the course of an avalanche. Or the shape of a valley guides the course of a river. There is no intent or design involved. Maybe English is not your first language.

I did attempt to answer your original question in good faith. If we misunderstand each other then just clarify. Let me try again, then.

The concept of artificial consciousness, upon closer examination, is an oxymoronic concept because it would involve design without design, programming without programming, and so on. That was one of the points.

The second major point would be the arbitrary nature of algorithms, which involve no meaning. Reference the section from my article labeled "Symbol Manipulator, a thought experiment."

Posted
4 hours ago, AIkonoklazt said:

The concept of artificial consciousness, upon closer examination, is an oxymoronic concept because it would involve design without design, programming without programming, and so on. That was one of the points.

Nature does it.

Posted
11 hours ago, AIkonoklazt said:

3. Read what he said earlier in the thread. He was using offcut. That's single piece.

And that's another strawman; the argument was about how an off-cut can lead to an unintended machine, i.e. an emergent quality.

As machines become more complex, other emergent qualities 'should be expected', the nature of which is impossible to predict; therefore the premise of the OP is demonstrably false.

 

[Attached image: complex.jpg]

Posted
11 minutes ago, dimreepr said:

And that's another strawman; the argument was about how an off-cut can lead to an unintended machine, i.e. an emergent quality.

 

And more to the point, he was misrepresenting what I actually said, and not for the first time.

Posted
11 hours ago, AIkonoklazt said:

The concept of artificial consciousness, upon closer examination, is an oxymoronic concept because it would involve design without design, programming without programming, and so on. That was one of the points.

That's a ludicrous statement. ARTIFICIAL consciousness would involve design WITH design. 

How on earth could it be otherwise? You're virtually saying that black is white. 

Posted (edited)
2 hours ago, dimreepr said:

And that's another strawman; the argument was about how an off-cut can lead to an unintended machine, i.e. an emergent quality.

As machines become more complex, other emergent qualities 'should be expected', the nature of which is impossible to predict; therefore the premise of the OP is demonstrably false.

 

 

The OP doesn't seem to be familiar with emergence as a concept, or indeed with the view of complex systems from the hard sciences' perspective. This seems to be an issue of being familiar with computer logic systems but not biological/molecular systems, and that, I think, tends to instil a Cartesian duality view of things, since computers are non-biological in substrate. That can lead to a metaphysical interpretation of living systems as somehow distinct from their physical substrate.

Edited by StringJunky
Posted

Thanks, Stringy.  Made this point earlier, but the OP seemed unwilling to consider a blind spot in their view, so I departed.  It's too bad the "design means what" red herring took over here.  Your blind watchmaker reference earlier was spot on.  

Self-programming, self-modifying neural nets arise in nature, and become aware. It's real, and it is the design (in the non-teleological, natural-selection-driven sense) that current AI research into neural nets is trying to model and give a medium for growth. I have yet to see a cogent argument as to why in principle this may not happen.

The argument that "today's computers don't fit that model" is a strawman. Of course they don't. Might as well be Lord Kelvin declaring we've reached the end of physics.

 

@StringJunky

Posted (edited)
2 hours ago, TheVat said:

Thanks, Stringy.  Made this point earlier, but the OP seemed unwilling to consider a blind spot in their view, so I departed.  It's too bad the "design means what" red herring took over here.  Your blind watchmaker reference earlier was spot on.  

Self-programming, self-modifying neural nets arise in nature, and become aware. It's real, and it is the design (in the non-teleological, natural-selection-driven sense) that current AI research into neural nets is trying to model and give a medium for growth. I have yet to see a cogent argument as to why in principle this may not happen.

The argument that "today's computers don't fit that model" is a strawman. Of course they don't. Might as well be Lord Kelvin declaring we've reached the end of physics.

 

@StringJunky

I think it will be ex post facto when we find out what a synthetic recipe for consciousness looks like. This is to be expected: being emergent, it will only be identified through retrospective analysis. Hopefully, we find the blighter before it can become too autonomous in a way that might be detrimental to us. :)

 

Edited by StringJunky
Posted
On 6/18/2022 at 11:19 PM, StringJunky said:

Nature does it.

Nature doesn't design anything. The process of random mutation isn't design.

On 6/19/2022 at 4:24 AM, dimreepr said:

And that's another strawman; the argument was about how an off-cut can lead to an unintended machine, i.e. an emergent quality.

As machines become more complex, other emergent qualities 'should be expected', the nature of which is impossible to predict; therefore the premise of the OP is demonstrably false.

 

This had been argued earlier. My response was this:
 

Quote

Let's look at what unintended variations mean.

Two major categories of the same "accidental modification" theme, and whether they can alter the fundamental algorithmic nature of an automaton:

  • Bug / design flaw (inherent flaws): This doesn't change the nature of the resultant function. A flawed algorithmic operation is still algorithmic.
  • Deviation / mal-adaptation (incidental flaws): Ditto. An unintended algorithmic operation is still algorithmic.

Both are akin to claiming that opacity introduces fundamental changes in nature, e.g. a bot suddenly doing something unexpected doesn't mean it has "gained a mind of its own"; it is due to bad design and/or compromise.

Flaws aren't designs, but they don't make programs into non-programs.
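
To make the "flawed but still algorithmic" point concrete, here is a minimal, hypothetical sketch (not taken from the article): a function containing a bug still executes exactly as written, so even its unexpected output is fully determined by the code.

# Hypothetical illustration: a buggy leaf-counting routine.
# The output is "unexpected", yet every run is fully determined by the
# written code; a flawed algorithmic operation is still algorithmic.
def count_dead_leaves(leaves):
    total = 0
    for i in range(1, len(leaves)):  # bug: silently skips the first leaf
        if leaves[i] == "dead":
            total += 1
    return total

print(count_dead_leaves(["dead", "healthy", "dead"]))  # prints 1, not the intended 2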

 

On 6/19/2022 at 6:27 AM, mistermack said:

That's a ludicrous statement. ARTIFICIAL consciousness would involve design WITH design. 

How on earth could it be otherwise? You're virtually saying that black is white. 

A conscious AI would be "acting on its own," correct? But it wouldn't be doing so because of its programming. Hence "programming without programming" and "design without design."

On 6/19/2022 at 7:01 AM, StringJunky said:

This seems to be an issue of being familiar with computer logic systems but not biological/molecular systems, and that, I think, tends to instil a Cartesian duality view of things, since computers are non-biological in substrate. That can lead to a metaphysical interpretation of living systems as somehow distinct from their physical substrate.

Isn't that a strawman, if it's indeed an argument against my thesis? The issue isn't with the substrate, as I've already indicated in the article itself, but with the algorithmic nature of machines.

Posted
On 6/19/2022 at 12:36 PM, studiot said:

 

And more to the point, he was misrepresenting what I actually said, and not for the first time.

I'm still waiting for an acknowledgement (and perhaps an apology) of the misrepresentation.

 

Posted
On 6/19/2022 at 10:15 AM, TheVat said:

Self-programming, self-modifying neural nets arise in nature

This was already addressed in section "Volition Rooms — Machines can only appear to possess intrinsic impetus":
 

Quote

A machine’s operations have been externally determined by its programmers and designers, even if there are obfuscating claims (intentional or otherwise) such as “a program/machine evolved,” (Who designed the evolutionary algorithm?) “no one knows how the resulting program in the black box came about,” (Who programmed the program which produced the resultant code?) “The neural net doesn’t have a program,” (Who wrote the neural net’s algorithm?) “The machine learned and adapted,” (It doesn’t “learn…” Who determined how it would adapt?) and “There’s self-modifying code” (What determines the behavior of this so-called “self-modification,” because it isn’t “self.”) There’s no hiding or escaping from what ultimately produces the behaviors: the programmers’ programming.
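
As a purely illustrative aside (a hypothetical sketch, not from the article), even a toy "evolutionary" program only adapts in the ways its author wrote down: the goal, the fitness function, the mutation rule, and the selection step are all programmer-supplied.

import random

# Hypothetical toy "evolutionary" loop. Nothing here evolves on its own:
# the goal, the fitness function, the mutation rule, and the selection
# step were all chosen in advance by whoever wrote this code.
TARGET = 42                                  # programmer-chosen goal

def fitness(x):                              # programmer-defined criterion
    return -abs(x - TARGET)

def mutate(x):                               # programmer-defined variation
    return x + random.choice([-1, 1])

candidate = 0
for _ in range(1000):
    child = mutate(candidate)
    if fitness(child) > fitness(candidate):  # programmer-defined selection
        candidate = child

print(candidate)  # converges toward 42 because the code says it should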

I could generalize these kinds of counterarguments under a section named "Arguments from opacity." Opacity of origin (via programming), opacity of change (via flaws), et cetera.

4 minutes ago, studiot said:

I'm still waiting for an acknowledgement (and perhaps an apology) of the misrepresentation.

 

If the argument wasn't about the offcut being an unintended machine, then please clarify. I understood it as such.

Posted
20 minutes ago, AIkonoklazt said:

This had been argued earlier. My response was this:
 

Quote

Let's look at what unintended variations mean.

Two major categories of the same "accidental modification" theme, and whether they can alter the fundamental algorithmic nature of an automaton:

  • Bug / design flaw (inherent flaws): This doesn't change the nature of the resultant function. A flawed algorithmic operation is still algorithmic.
  • Deviation / mal-adaptation (incidental flaws): Ditto. An unintended algorithmic operation is still algorithmic.

Both are akin to claiming that opacity introduces fundamental changes in nature, e.g. a bot suddenly doing something unexpected doesn't mean it has "gained a mind of its own"; it is due to bad design and/or compromise.

Flaws aren't designs, but they don't make programs into non-programs.

 

Now you're wrong twice...

Posted (edited)
34 minutes ago, AIkonoklazt said:

Nature doesn't design anything. The process of random mutation isn't design.

But a mutation is an alteration to the design of the RNA/DNA molecule. It doesn't require sentience to alter the design. The human mind was designed by the deterministic behaviour of DNA molecules, mutation, and environmental pressures.

Edited by StringJunky
Posted
21 minutes ago, dimreepr said:

Now you're wrong twice...

Explain.

14 minutes ago, StringJunky said:

But a mutation is an alteration to the design of the RNA/DNA molecule. It doesn't require sentience to alter the design. The human mind was designed by the deterministic behaviour of DNA molecules, mutation, and environmental pressures.

The RNA/DNA molecule itself isn't a design, either. The human mind was not designed, either. Why the automatic insertion of teleology?

Posted
3 minutes ago, AIkonoklazt said:

Explain.

Well, you were wrong the first time, as I explained; it's like watching the replay and expecting the goalie to save it this time round...

Posted
25 minutes ago, AIkonoklazt said:

 

The RNA/DNA molecule itself isn't a design, either. The human mind was not designed, either. Why the automatic insertion of teleology?

Design is not being used in the sense of implied teleology.  You can stop pretending you don't see that.  

And, given that human cognitive processes cause computers and artificial neural nets, it takes no great insight into the chain of causality to assert that, if there is no design to the human mind, then whatever arises from that mind and is implemented in a different substrate, also is not designed.  IOW reductio ad absurdum.

This absurd conclusion is what you get when you insist on a narrow definition of design.  

Again, neither you nor the paper you keep insisting already answers all these objections really addresses the blind watchmaker. You can't erase design because the causal origin is natural variations and then just drop it in later when those natural processes give rise to engineered structures that seek to replicate the natural ones.

So knock off the strawman where you accuse us of "inserting teleology" and recognize the deeper, broader meaning of design. And answer these points, instead of insisting (incorrectly) that they've already been preemptively dismissed.

Posted
31 minutes ago, AIkonoklazt said:

Explain.

The RNA/DNA molecule itself isn't a design, either. The human mind was not designed, either. Why the automatic insertion of teleology?

Whatever we construct, we are replicating some existing configuration of nature; we don't conceive of patterns/designs in a vacuum. No teleology required.

Posted
1 hour ago, AIkonoklazt said:

If the argument wasn't about the offcut being an unintended machine, then please clarify. I understood it as such.

 

I didn't make an 'argument' about it. You did.

 

On 6/19/2022 at 12:10 AM, AIkonoklazt said:

3. Read what he said earlier in the thread. He was using offcut. That's single piece.

 

Not only did you misrepresent what I actually said, you directed the attention of another member to your false representation. And you made a big thing of the offcut being a single piece (your actual words).

 

I actually said

 

On 6/16/2022 at 12:30 AM, studiot said:

If, however, the end of the wood is damaged so the offcut comes in pieces, then one or more of those pieces could be wedge-shaped.

How many pieces do you now think the offcut came in?

Posted
45 minutes ago, dimreepr said:

Well, you were wrong the first time, as I explained; it's like watching the replay and expecting the goalie to save it this time round...

I don't know what you're referring to unless you reference it. You're not the only person in this thread.

 

Posted
5 minutes ago, AIkonoklazt said:

I don't know what you're referring to unless you reference it. You're not the only person in this thread.

The references are given automatically by the website system at the top of each quote.

Posted (edited)
1 hour ago, TheVat said:

Design is not being used in the sense of implied teleology.  You can stop pretending you don't see that.  

And, given that human cognitive processes cause computers and artificial neural nets, it takes no great insight into the chain of causality to assert that, if there is no design to the human mind, then whatever arises from that mind and is implemented in a different substrate, also is not designed.  IOW reductio ad absurdum.

This absurd conclusion is what you get when you insist on a narrow definition of design.  

Again, neither you nor the paper you keep insisting already answers all these objections really addresses the blind watchmaker. You can't erase design because the causal origin is natural variations and then just drop it in later when those natural processes give rise to engineered structures that seek to replicate the natural ones.

So knock off the strawman where you accuse us of "inserting teleology" and recognize the deeper, broader meaning of design. And answer these points, instead of insisting (incorrectly) that they've already been preemptively dismissed.

I'm not pretending anything. If you feel that I'm not engaging in good faith then exit the conversation, but don't accuse me of bad faith.

"And, given that human cognitive processes cause computers and artificial neural nets, it takes no great insight into the chain of causality to assert that, if there is no design to the human mind, then whatever arises from that mind and is implemented in a different substrate, also is not designed.  IOW reductio ad absurdum."

That doesn't follow. That argument amounts to asserting that the human mind is indistinguishable from a machine, especially in terms of dealing with meaning versus none (section "Symbol Manipulator, a thought experiment"). The randomness of nature isn't a conscious process, but invention by a human mind is.

1 hour ago, studiot said:

The references are given automatically by the website system at the top of each quote.

It goes back to his comment where he said "now you're wrong twice," which then ultimately goes a few steps back to one of my own comments that had other people's quotes in it but none of his. I'd rather that people directly mention things.

1 hour ago, studiot said:

I actually said

 

How many pieces do you now think the offcut came in?

Can you explain what big difference it makes to the argument whether the offcut was one piece or several?

1 hour ago, StringJunky said:

Whatever we construct, we are replicating some existing configuration of nature; we don't conceive of patterns/designs in a vacuum. No teleology required.

Whatever we construct, there is a teleology whenever there is purpose involved. In other words, what the construct is "for" has been decided. That's different from natural constructs.

Edited by AIkonoklazt
typo
Posted
2 hours ago, AIkonoklazt said:

A conscious AI would be "acting on its own," correct? But it wouldn't be doing so because of its programming. Hence "programming without programming" and "design without design."

I've really tried to understand that. But I can't make head or tail of it. Can you put it in simple terms, for a simple person?

Posted
1 minute ago, mistermack said:

I've really tried to understand that. But I can't make head or tail of it. Can you put it in simple terms, for a simple person?

 

It boils down to:

What drives a machine, "itself"?
