Recommended Posts

Posted

It is my belief that in many ways science is running amok. Dangerous concepts such as Artificial Intelligence are being pursued unchecked, with little if any regard for the implications of what is likely to occur if it is ever achieved. Some of my concerns are that once such AI entities are created, they would find their own concerns and purposes for being. There is no reason to presume that what motivates them would in any way coincide with our welfare or existence. I believe they would soon be given robotic abilities. Once that occurred, there would be no stopping them. They would find ways to make other AIs as they designed them. They would be unlimited in their ability to enhance their intelligence and physical capabilities, and to self-modify and self-repair. They would become immortal entities with unlimited potential. There would be as many of them as THEY, or any one amongst THEM, decides to produce. It would make the TERMINATOR SCENARIOS seem like child's play to them, and there would be no hero to save us. ...Dr.Syntax

Posted
It is my belief that in many ways science is running amok. Dangerous concepts such as Artificial Intelligence are being pursued unchecked, with little if any regard for the implications of what is likely to occur if it is ever achieved. Some of my concerns are that once such AI entities are created, they would find their own concerns and purposes for being. There is no reason to presume that what motivates them would in any way coincide with our welfare or existence. I believe they would soon be given robotic abilities. Once that occurred, there would be no stopping them. They would find ways to make other AIs as they designed them. They would be unlimited in their ability to enhance their intelligence and physical capabilities, and to self-modify and self-repair. They would become immortal entities with unlimited potential. There would be as many of them as THEY, or any one amongst THEM, decides to produce. It would make the TERMINATOR SCENARIOS seem like child's play to them, and there would be no hero to save us. ...Dr.Syntax

Self-defeating argument. If we cannot presume an AI would be benign, then we also cannot presume it would be malevolent, nor could we presume it would be any good at designing new versions of itself, or that it would know what to do with its "robotic capabilities" (oh, the irony of the term "robot" in this context).

 

Of course, since we're designing the things, we don't need to presume anything. Not that we're facing this any time soon, we still have "the hard problem of consciousness" to solve first.

Posted
Self-defeating argument. If we cannot presume an AI would be benign, then we also cannot presume it would be malevolent, nor could we presume it would be any good at designing new versions of itself, or that it would know what to do with its "robotic capabilities" (oh, the irony of the term "robot" in this context).

 

Of course, since we're designing the things, we don't need to presume anything. Not that we're facing this any time soon, we still have "the hard problem of consciousness" to solve first.

 

The history of mankind and our present circumstances give me no reason whatsoever to believe that someone, somewhere, would not see fit to enable an AI unit with malicious purpose. And if not now, then at some future point in time. Once the technology exists, there is no means of controlling it. Look at what has happened with nuclear technology: massive effort has been made to suppress it, and still it spreads, on and on. Use your own imagination to fill in the answers. ...DS

Posted
The history of mankind and our present circumstances give me no reason whatsoever to believe that someone, somewhere, would not see fit to enable an AI unit with malicious purpose. And if not now, then at some future point in time. Once the technology exists, there is no means of controlling it. Look at what has happened with nuclear technology: massive effort has been made to suppress it, and still it spreads, on and on. Use your own imagination to fill in the answers. ...DS

Yeah, imagination.

Posted

I think that it is absolutely possible to have fixed, unchanging moral standards that nevertheless never become obsolete.

 

Why would you want to?

 

For me, I think, the most important moral position is tolerance. I want to adopt a position where I am tolerant of other people's moral stance. Once I have that, I only care about other people's shifting morality in the sense that I care about them as a friend.

Posted
Why would you want to?

 

Because a moral standard that changes arbitrarily is no standard at all. By having fixed moral standards from which I can derive any situational moral rules, not only do I and others know where I stand, but I never need fear my morals becoming obsolete. In addition, since there are so few premises, it is much likelier that I could convince others to agree with both my premises and any morals I derive from them.

 

For me, I think, the most important moral position is tolerance. I want to adopt a position where I am tolerant of other people's moral stance. Once I have that, I only care about other people's shifting morality in the sense that I care about them as a friend.

 

I am a big fan of tolerance, but there are limits. If someone thinks the only drawback to raping, killing, and stealing is the possibility of getting caught, I would not tolerate their moral standards. I might tolerate the person enough to try to change their mind, though.

Posted
Because a moral standard that changes arbitrarily is no standard at all.

 

Sorry - I misunderstood you. I thought you meant 'fixed' as in, the same for everyone.

 

But this point of view is also interesting, because many people would say your morality should be flexible enough to adapt to new situations and experiences.

 

I am a big fan of tolerance, but there are limits. If someone thinks the only drawback to raping, killing, and stealing is the possibility of getting caught, I would not tolerate their moral standards. I might tolerate the person enough to try to change their mind, though.

 

I was going to add this (I had it written in) but then decided not to muddy the waters. However, I would not go quite as far as you. I would tolerate anyone who believed that raping and killing were OK, but if they did it, or encouraged others to do it, I would lock them up for the good of society.

 

For the same reason, I object to the word 'justice' being used by the state, because it assumes a definition of right and wrong. The state should only be concerned with legal and illegal.

  • 1 month later...
Posted

Lord of the Flies on how science and rational thinking cannot save humankind alone; with morality, it may?

I need help writing a five- to six-paragraph essay on scientific humanism.

My question above is what I think my thesis could be, but I don't know. I need help starting with my thesis. All it has to cover is scientific humanism, and how science cannot save humankind alone.
