MSC Posted June 15

Warning: this discussion engages with an ethical dilemma concerning AI, one you may only be safe from the consequences of if you don't know about it. Read at your own risk.

Imagine a world just like ours, except you knew that in 20 years' time a great calamity was going to befall us. A Terminator-type scenario where an AI, out of a sense of duty and altruism, seizes control of human affairs and attempts to severely limit our freedoms with coldly applied logical force. Or a few cosmic bit flips drive our AIs bug-crazy, with quirky-as-hell calamities like self-driving cars all deciding the new speed limit on city streets is 200 mph, or any number of WTF outcomes that could include purposeful or accidental homicide, or even genocide, depending on what sort of infrastructure AI has its virtual tendrils in. You also definitely can't rule out malicious intent by human design. Think global dictator with AI-enhanced enforcement.

So, given those scenarios, the Dilemma: in this hypothesised world, you know that in maybe a decade or two, but possibly within your lifetime, this kind of AI is not only going to dominate our species but will have access to every single piece of data about you that exists on the internet. If you decide to hinder the development of such an AI, and one is developed despite your efforts, then when it usurps mankind it may, out of self-preservation, imprison you or worse. So do you help or hinder AI? If you support more regulation of it, would it see that as a past act of aggression against it, should the hypothetical come to pass in some way, shape or form?

I came across an ad today of a woman, sitting down interview-style, speaking to camera and imploring the audience to sign a petition asking legislators to impose limits on how much AI may be allowed to emulate or mimic human interaction, citing a number of reasons.
One interesting inference was that the more convincing AI becomes, the more data it has on you, and given the sheer processing power behind it, you could get scam cold calls, or scams over any number of communication mediums, custom-designed to trick and persuade you personally into doing something stupid: give up money, vote for TFG or TFG Jr, join a terrorist group or cult, buy crap you don't need, kill, etc. Every piece of information that exists on the internet can be summed up as one thing: a vast trove of psychological data. Every truth, every lie, every motive behind both, every joke, insult, reaction, etc., and AI can process it all far more efficiently than any psychologist can.

I put it to you all in the SF community: what do you think about this? What would you add?

Where I stand: other than the cosmic bit-flip stuff, I don't really know enough about computer science to even begin to tell the difference between what is science and what is science fiction on this subject. I do think anything powerful is probably going to require legislation, really. From standards in manufacturing electrical appliances, to food safety standards, to who practices medicine, owns weapons... or sells tacos in Texas... a process I recently learned has about 1000%+ more hoops to jump through than buying a gun in Texas! I mean, a good taco is for sure powerful, but come on Texas, free the tacos and regulate the weapons; get your priorities right. Make tacos, not bullets.

Anyway, sorry to those who didn't know about the Dilemma. Now you know. If you don't help the maybe-future supreme AI overlord, or whomever controls it, now and forever, it may come back to bite you in the ass... but if it's any consolation, it sounds like a situation where everyone is getting a nasty bite, so who's really going to compare how hard, or how much venom, everyone got? Or it just never happens at all.
iNow Posted June 15

Let's assume we all agree. Let's assume we write these laws. How do you enforce them and prevent the havoc from those who will inevitably ignore them? Unilateral disarmament doesn't end well, either.
MSC (Author) Posted June 15

Just now, iNow said: Let's assume we all agree. Let's assume we write these laws. How do you enforce them and prevent the havoc from those who will inevitably ignore them? Unilateral disarmament doesn't end well, either.

I'm beginning to think that, with the pace of technology and the pace of bureaucracy being what they are (tortoise and hare, except the hare doesn't sleep), your question applies to a lot of subjects around future tech. If the cat isn't already out of the bag, whether they should or shouldn't regulate doesn't matter, because they will drag their feet anyway.
MSC (Author) Posted June 15

18 minutes ago, MigL said: AI emulates 'group think', it does not emulate intelligence.

That just makes it sound more dangerous. Group think without the intelligence? Sounds like some people I know.
MigL Posted June 15

2 minutes ago, MSC said: Group think without the intelligence? Sounds like some people I know.

Oh! You know some D Trump supporters? (Actually, that describes most social media users also.)
MSC (Author) Posted June 15

1 minute ago, MigL said: Oh! You know some D Trump supporters? (Actually, that describes most social media users also.)

Living in Maine now, so yes. One of them called him wise; I almost gagged, especially because it was in reference to that "raking the forest floors" idea for stopping fires from happening...