Ramblings of an old Doc

 

Bill Gates, Elon Musk, Stephen Hawking and others have all stated their concern regarding the distressing potential dangers of AI more than once, yet on we go pell-mell towards self-aware, self-governing machines.

We can’t even get security updates right without causing severe problems, yet somehow we think, “We can do this. We can win!”

Just a minor thought… a program, any program (including heuristic ones), is limited by its coding and by how, via this coding, it ‘learns’. The same is true of biological systems. Their form, their being carbon-based, their being subject to the laws of thermodynamics, and their sensitivities to the environment and to other biological entities all determine and limit how they learn.

Another minor thought, “If something can go wrong, it will.” Just ask God.

Now comes this report by Selmer Bringsjord (RPI, New York), in New Scientist, regarding a test he ran using the classic “wise-man puzzle” on three robots, two of which he silenced and one he didn’t. All three had auditory sensors.

“In a robotics lab on the eastern bank of the Hudson River, New York, three small humanoid robots have a conundrum to solve.

They are told that two of them have been given a “dumbing pill” that stops them talking. In reality the push of a button has silenced them, but none of them knows which one is still able to speak. That’s what they have to work out.

Unable to solve the problem, the robots all attempt to say “I don’t know”. But only one of them makes any noise. Hearing its own robotic voice, it understands that it cannot have been silenced. “Sorry, I know now! I was able to prove that I was not given a dumbing pill,” it says. It then writes a formal mathematical proof and saves it to its memory to prove it has understood.” – New Scientist
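The deduction the robot makes can be sketched in a few lines of code. This is only an illustrative toy of my own devising, not the RPI team's actual software (per the quote above, their robots produced a formal mathematical proof); the `Robot` class and its methods here are hypothetical:

```python
# Toy simulation of the "wise-man" self-awareness test described above.
# Two robots are silenced by a "dumbing pill"; none knows which one can
# still speak until it tries.

class Robot:
    def __init__(self, name, silenced):
        self.name = name
        self.silenced = silenced  # whether this robot got the "dumbing pill"

    def try_to_speak(self, phrase):
        # Returns the sound produced, or None if the robot was silenced.
        return None if self.silenced else phrase

    def reason(self):
        # No robot can deduce its own state a priori, so each attempts
        # to say "I don't know".
        sound = self.try_to_speak("I don't know")
        if sound is not None:
            # Hearing its own voice, it can now prove it was not silenced.
            return f"{self.name}: Sorry, I know now! I was not given a dumbing pill."
        return None  # silenced robots learn nothing from the attempt

robots = [Robot("R1", True), Robot("R2", True), Robot("R3", False)]
for r in robots:
    answer = r.reason()
    if answer:
        print(answer)
```

The point of the puzzle is that the speaking robot's conclusion depends on perceiving its own action, a minimal form of self-reference.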

Granted, this isn’t “full consciousness”, but it is conscious thought, and it shows a conception of ‘self’, or “the first-hand experience of conscious thought”.

There are those who are correct in saying that there’s a big difference between saying, “It’s sunrise,” and being able to enjoy the esthetic experience of knowing who you are and being part of that sunrise. Perhaps central to the experience is knowing one is mortal, and what that sunrise signifies in terms of mortality and the passage of time, which generates compassion for others subject to that same passage and the knowledge that each is at a different point in it.

Perhaps what I fear most, therefore, is a machine which has no compassion and which acts for self-preservation without that essential quality, even if through inaction, simply because it has no perception that it is doing wrong, since ‘right’ and ‘wrong’ are alien to it.

After all, even though very imperfect, we do have a system of checks and balances, ideas of morality, etc. which function (to some degree) to limit us.

If you don’t believe the craziness of all this, if you don’t believe this is real, read about how ‘killer robots’ were to be discussed at the U.N. Convention on Certain Conventional Weapons. You can look up that meeting (11/2014), and read more in the sources below.

Sources:

https://www.newscientist.com/article/mg22730302-700-robot-homes-in-on-consciousness-by-passing-self-awareness-test/

http://www.computerworld.com/article/2970737/emerging-technology/are-we-safe-from-self-aware-robots.html

http://www.stopkillerrobots.org/2015/03/ccwexperts2015/

http://www.computerworld.com/article/2489408/computer-hardware/evan-schuman--killer-robots--what-could-go-wrong--oh--yeah----.html


Comments (Page 2)
on Aug 17, 2015

Borg999

Simply make it a requirement that all sentient machines be programmed with the 3 laws of robotics hard coded in their processor.

Problem solved.

 

Three laws, à la Isaac Asimov?  In the movie remake "I, Robot," the three laws were interpreted by the uber-AI in a manner most of us humans would disagree with.  I'm not sure hard-coding the 'Three Laws' would protect us. ...

on Aug 17, 2015

ElanaAhova


Quoting Borg999,

Simply make it a requirement that all sentient machines be programmed with the 3 laws of robotics hard coded in their processor.

Problem solved.



 

Three laws, à la Isaac Asimov?  In the movie remake "I, Robot," the three laws were interpreted by the uber-AI in a manner most of us humans would disagree with.  I'm not sure hard-coding the 'Three Laws' would protect us. ...

True.  And the laws will never be written to begin with.  They don't make economic sense in our corporate world.  There are too many production types pushing for the code to go out as soon as possible.  

Money will always win over morality.  We won't even think about putting the brakes on AI code until it's too late.  That's our nature.

But rest assured, those who survive will point their collective fingers at those considered to be at fault. That's also our nature.

Sleep tight.  

 

on Aug 17, 2015

MottiKhan


Quoting ElanaAhova,

True.  And the laws will never be written to begin with. ... Sleep tight.

Truly sentient AI is a long way off. I don't think we'll have anything to worry about for quite a while.

on Aug 17, 2015

MottiKhan


Quoting ElanaAhova,

True.  And the laws will never be written to begin with. ... Sleep tight.

You are right about money (mammon) having a higher loyalty than 'the milk of human kindness.'  I was reminded of this quite strongly when watching reruns of Lost Girl, in anticipation of finally getting access to the new season.  In one episode, Kenzie goes to call on the Norn (an uber hybrid of the trickster god and Earth Mother Gaia's number two), wielding a chainsaw.  The Norn is in total disbelief that Kenzie, a mere human, not a Fae, would even consider hurting the big tree.  Kenzie's reply goes something like this: "Humans?  We pollute every lake we find... We'll happily burn the planet to a crisp if it means just one more cheeseburger."  Yes, sadly, you are right.  Kenzie told me so...

on Aug 18, 2015

ElanaAhova



You are right about money (mammon) having a higher loyalty than 'the milk of human kindness.' I was reminded of this quite strongly when watching reruns of Lost Girl, in anticipation of finally getting access to the new season.

I love that series... one of Canada's best.  I won't tell you how the final [and last] season goes, but suffice it to say it is a doozy, right up there with plot twists and excitement. 

on Aug 18, 2015

starkers

I love that series... one of Canada's best.  I won't tell you how the final [and last] season goes, but suffice it to say it is a doozy, right up there with plot twists and excitement. 


 

Yes, Lost Girl show really sucks....    

 

Now, apologies for attempted thread hijacking, back to AI concerns....

on Aug 18, 2015

ElanaAhova

Yes, Lost Girl show really sucks....

Yep!  Sucks viewers in for the long haul... cos there's no escaping once you're in.

ElanaAhova

Now, apologies for attempted thread hijacking, back to AI concerns....

Yeah, one of the concerns I have is when AI is as perfected as it could be... and politicians get AI transplants so they finally have intelligence.

I mean, they do enough damage when unintelligent. 

on Aug 18, 2015

Eventually, AIs will become self-actualizing.  Human engineers are already developing ways for the proto-AIs we now have to redesign themselves 'better.'  Since they replicate many generations to each human generation, they will evolve far faster than we do.  Indeed, based on what I have seen and learned about human history, we haven't really evolved in the past 20,000 years - and our most decisive 'advantage' (intelligence) is also our Achilles' heel, because we subvert our intelligence, via 'rationalizations,' to do really stupid, evil, and self-centered things.  Planet Earth is just a bigger Easter Island; destroying the web of life that supports us will take longer than it did on Easter Island - but we will.  And the 'artificial' AIs we leave behind will long outlast us.  Who knows, maybe in the far future, when the universe winds down, they will proclaim "42" and then say "let there be light"?

on Aug 18, 2015

I'm not just concerned about AIs evolving and replicating much faster than we humans do; I'm concerned that AIs will see humans as irrelevant and superfluous to their needs, therefore deciding to kill us all off to enable wiser use of the world's resources.  I mean, AIs that have been programmed with all human intelligence and more will take one look at us and think most of us are too thick to be spared... and with 'human' intelligence they'll sure know how to kill, won't they?

Frankly, many inventions and leaps in technology have been made purely so mankind can be lazy... and AI is just another step along that path of mankind becoming more slovenly and sloth-like, only this time it will come back to bite mankind in its rear... considerably harder than it's ever been bitten before.

on Aug 18, 2015

starkers

I'm not just concerned about AIs evolving and replicating much faster than we humans do; I'm concerned that AIs will see humans as irrelevant and superfluous to their needs, therefore deciding to kill us all off to enable wiser use of the world's resources.  I mean, AIs that have been programmed with all human intelligence and more will take one look at us and think most of us are too thick to be spared... and with 'human' intelligence they'll sure know how to kill, won't they?

Frankly, many inventions and leaps in technology have been made purely so mankind can be lazy... and AI is just another step along that path of mankind becoming more slovenly and sloth-like, only this time it will come back to bite mankind in its rear... considerably harder than it's ever been bitten before.

You need to read my old fave comic [was a Gold Key one] - Magnus the Robot Fighter ... 

on Aug 19, 2015



Quoting starkers,

I'm not just concerned about AIs evolving and replicating much faster than we humans do ...

You need to read my old fave comic [was a Gold Key one] - Magnus the Robot Fighter ...  

I never got into comics... guess I never needed superheroes or dweebs with their underwear on the outside.  Nah, as a kid I was always too busy for comics, either being outside in the English countryside or working for my father and/or mother... and we were always early to bed, so there was no time for comics after we'd watched our fave TV shows.

Shoot, even Wonder Woman couldn't get me into 'em.

on Aug 19, 2015

It's not alive until it can tell me what its favorite song is and explain why. And they would only kill us if there were a need (or a program fault, or if it were done intentionally), because it could be intelligent yet not be capable of killing, or even of thinking of it.

on Aug 19, 2015

 

The first edition ...

on Aug 19, 2015

Anyone seen Screamers? Now that is a movie that would put most people off the idea of autonomous killer drones.

As far as robotic weaponry/drones etc. go, I firmly believe there should always be a human behind the trigger. If it is AI-controlled, who is to blame if the machine bombs a school or hospital instead of a military target?

There should always be a human there who has all that pressure of morality, duty and the consequences of making a mistake.

Also, I'd rather we build giant mechs and duke it out with each other than have us wiped out by rogue AI =P 

on Aug 19, 2015

I'm not afraid of any synthetic intelligence or consciousness as long as the developers don't map feelings into it or give it a body that can experience frustration.

The aforementioned rape, murder, etc. all happen for strictly emotional reasons - these criminal activities have nothing whatsoever to do with intelligence. They stem, more or less, from an absence of it via a loss of self-control. Of course this is a very big generalization, as there are many more reasons to do evil. There are people whose brains apparently don't work as they should, be it from sickness, "bad" genetics, being led astray by ideas, immature infantile minds that crave hedonism even over other people's rights, all the religiously known sins, etc... the list is endless... but one thing to realise is that a computer should be oblivious to most of these motives. Perhaps a computer could be made intelligent without being able to have urges, motives, etc. of its own in the first place.

Then, I don't believe that artificial intelligence such as we see in ourselves is even remotely possible with machines. It might be an expression of biological life, and I throw consciousness right in there, too. As of now, both terms lack an ultimate & precise definition, but unlike us, computer code needs its terms specified exactly or it won't work.

In this thread I sense irrational fear of the unknown, and fear of losing power - both of which could be called roots of evil themselves.
