Ramblings of an old Doc

 

Bill Gates, Musk, Hawking and others have all stated their concern regarding the distressing potential dangers of AI more than once, yet on we go pell-mell towards self-aware/self-governing machines.

We can’t even get security updates right without causing severe problems, but somehow think “We can do this. We can win!”.

Just a minor thought… a program, any program (including heuristic ones), is limited by its coding and how, via this coding, it ‘learns’. The same is true of biological systems. Their form, their being carbon based, their being subject to the laws of thermodynamics and sensitivities to environment and other biological entities all determine and limit how they learn.

Another minor thought, “If something can go wrong, it will.” Just ask God.

Now comes this report by Selmer Bringsjord (RPI, New York), in New Scientist, regarding a test he ran using the classic “Wise Men Puzzle” on three robots, two of which he silenced and one he didn’t. All three had auditory sensors.

“In a robotics lab on the eastern bank of the Hudson River, New York, three small humanoid robots have a conundrum to solve.

They are told that two of them have been given a “dumbing pill” that stops them talking. In reality the push of a button has silenced them, but none of them knows which one is still able to speak. That’s what they have to work out.

Unable to solve the problem, the robots all attempt to say “I don’t know”. But only one of them makes any noise. Hearing its own robotic voice, it understands that it cannot have been silenced. “Sorry, I know now! I was able to prove that I was not given a dumbing pill,” it says. It then writes a formal mathematical proof and saves it to its memory to prove it has understood.” – New Scientist

Granted, this isn’t “full consciousness”, but this is conscious thought and shows a conception of ‘self’, or “the first-hand experience of conscious thought”.
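The robot’s inference can be sketched as a simple self-model check. This is a minimal illustration only: Bringsjord’s team used a formal theorem prover, and the function names below are hypothetical, not their actual system.

```python
# Minimal sketch of the wise-men-style inference described above.
# Assumption: each robot is told two of the three were silenced, but not
# which ones. It attempts to speak, listens for its own voice, and only
# then can prove it was not given the "dumbing pill".

def attempt_to_speak(robot_id, silenced):
    """A robot tries to say 'I don't know'; returns the sound it produces."""
    return None if robot_id in silenced else "I don't know"

def self_awareness_test(robot_id, silenced):
    # Before speaking, the robot cannot deduce its own status from its premises.
    heard_own_voice = attempt_to_speak(robot_id, silenced) is not None
    if heard_own_voice:
        # Hearing itself supplies the missing premise for the proof.
        return "I was able to prove that I was not given a dumbing pill"
    return None  # silenced robots gain no new information

silenced = {0, 1}  # the experimenter mutes two of the three robots
results = [self_awareness_test(r, silenced) for r in range(3)]
```

The key step is that the proof depends on sensory feedback about the robot’s own action, which is why the result is read as a (narrow) form of self-awareness.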

There are those who are correct in saying that there’s a big difference between saying, “It’s sunrise.” and being able to enjoy the esthetic experience of knowing who you are and being part of that sunrise. Perhaps central to the experience is knowing one is mortal, and what that sunrise signifies in terms of mortality and the passage of time, which generates compassion for others subject to that passage and the knowledge that each is at a different point in it.

Perhaps what I fear most, therefore, is a machine with no compassion, acting for self-preservation without that essential quality, even if only through inaction, because it simply has no perception that it is doing wrong, since ‘right’ and ‘wrong’ are alien to it.

After all, even though very imperfect, we do have a system of checks and balances, ideas of morality, etc. which function (to some degree) to limit us.

If you don’t believe the craziness of all this, if you don’t believe this is real, read about how ‘killer robots’ were to be discussed at the U.N. Convention on Certain Conventional Weapons. You can look up the meeting (11/2014), and you can read more here.

Source:

https://www.newscientist.com/article/mg22730302-700-robot-homes-in-on-consciousness-by-passing-self-awareness-test/

http://www.computerworld.com/article/2970737/emerging-technology/are-we-safe-from-self-aware-robots.html

http://www.stopkillerrobots.org/2015/03/ccwexperts2015/

http://www.computerworld.com/article/2489408/computer-hardware/evan-schuman--killer-robots--what-could-go-wrong--oh--yeah----.html


Comments (Page 1)
on Aug 15, 2015

Well, fellow meat bag, no machine has voting rights, or will. At least until humans invest in upgrading themselves biologically and artificially. Also, what is stopping AI from being a capable intelligence is the hdd/ssd, imho. It's 2D like a book and works in straight lines of coding; our brains are 3D and have many parts disagreeing, agreeing, being silent, or making stuff up faster than any drive. That's basically the gist of it for me: if there are no DNA drives like you were showing a few days ago, then AI will continue to be just that. And I would imagine there would have to be a new language of p=np syntax in 3D to be able to make one that could make human-like connections.

So it's like global warming, a long way away and something that only might happen by 2050. Lol

on Aug 15, 2015

If it feels the other two robots have been wronged in being silenced, then I'll start worrying.  A simple logic puzzle is about as indicative of self awareness as software checking for system requirements on install.

on Aug 15, 2015

I read an article (by Ed Bott) that deals with your Windows concerns, and his suggestion is, if it really gets too bad on Windows or Apple, to go to Linux Mint or Ubuntu.  So far, I am okay with Windows 10, although I mind having to turn off a ton of bloat and crap.  I couldn't stand Cortana but Ed Bott likes it, so we definitely disagree there.  The problem is that some of those items we disabled are still running in the background.  It is exactly the same issue I have with my Android phone, which is very non-secure at this point with two major exploits out there that Google seems to be ignoring.  And there's all that sharing..... At some point, I may actually make the move to Linux, which still respects privacy, at least for now. 

"Sharing" is really a euphemism for "invasion of privacy."  I am on FB but hate it as everything there is tracked as well. Now add some very smart robots to all of this and there will be no need to worry about your phone or computer.  Your faithful Robot will report everything for you or a cute little drone will scan your house and the whole neighborhood including any and all electronic devices. Google will send a robotic taxi to pick you up while smart drones deliver your groceries. Needless to say those wonderful "conveniences" can be used for other purposes as well.  The next arms race should be a doozy!

Human beings will no longer be used for combat (or have a job).  It will be left to smart robots instead.  Let's hope they don't get confused about who their friends are and who they work for.  Various countries will be kept busy trying to exploit each other's robots.  The sheriff's department here has helicopters that fly at night to "keep us safe." Supposedly, they are equipped with the latest technology allowed by the feds.  As for me, I am going to keep on watching the sunrise and sunset and enjoy the sheer beauty of the moon and those glorious stars at night - and the space station is visible as well. At least for now....

on Aug 15, 2015

DARCA1213

no machine  has voting rights, or will.

Ah, but what about the voting machines? The politicians will declare war on the voting machines that ruin their fixed elections.

Cameochi

Let's hope they don't get confused about who their friends are and who they work for. 

Right...what could go wrong?

 

on Aug 15, 2015

Artificial intelligence?  Like, why would mankind attempt to create AI when mankind lacks the intelligence to use its collective brain intelligently?

True intelligence should tell us it is wrong to kill, rape and go to war, yet there are millions of effwits going around committing atrocities to fellow human beings.

No, mankind's attempt to create AI will not end well.  What's that about creating something in one's own image?  Building a robot to look human is one thing, but to create robots/machines that have human traits and 'think' like humans, what with our own flawed 'intelligence' as building blocks, is fraught with danger.

Yes, technology can be grand, and it can enhance our lives to no end, but it can also be abused and used for no good - just look at all the scammers and proliferators of malware on the internet - so just because we can create it doesn't mean we should.  Sadly though, there will be those in positions of power and influence who would seek to exploit such technology, so it will be developed whether it's smart to do so or not.

on Aug 15, 2015

Indeed, Mark. Just imagine the AI standing before the judge claiming innocence because it was hacked. 

As of now, the whole "internet of things" is open to hacking (every home device imaginable) because "brilliant" humans dash off some insecure code and then it's off to the "bigger, better, shinier and newer" model.

Righto! 

on Aug 15, 2015

Like I said, if we can't use our intelligence intelligently, how could we expect a creation of human intelligence to function any better?

I mean, if it is human to err, then is it safe to assume that artificial intelligence would produce human errors?

Sadly, intelligence will not prevail.

on Aug 15, 2015

Wonder at what point AI will get basic rights, and turning off your computer will be considered murder.

on Aug 15, 2015

I hope somebody invents a kill switch to go along with it.

on Aug 15, 2015

starkers

True intelligence should tell us it is wrong to kill, rape and go to war, yet there are millions of effwits going around committing atrocities to fellow human beings.

That has exactly nothing to do with intelligence [or lack of].

Morality, maybe, but definitely not intelligence.... Spell checker

on Aug 16, 2015



Quoting starkers,

True intelligence should tell us it is wrong to kill, rape and go to war, yet there are millions of effwits going around committing atrocities to fellow human beings.



That has exactly nothing to do with intelligence [or lack of].

Morality, maybe, but definitely not intelligence.... Spell checker

Yeah, intelligence!   Our intelligence tells us it is wrong to kill/rape, that there can be/will be consequences/repercussions, yet there are those who still proceed with the crime.  For me, that's unintelligence.  Morality, on the other hand, is the definition of good [socially acceptable] standards and shows us the difference between right and wrong.

on Aug 16, 2015

You're both right. Morality and intelligence are not mutually exclusive...an act might well be moral and based not solely on religious values.

 

on Aug 16, 2015

DrJBHL

You're both right. Morality and intelligence are not mutually exclusive...an act might well be moral and based not solely on religious values.

 

That's exactly right!  Morals are what society deems as being socially acceptable... and intelligent people abide by those standards/expectations.

At the end of the day, intelligence isn't what one knows or how much, rather it is how well one uses what they know.

on Aug 17, 2015

I always imagined it as a race: either biomechanics/biomedicine beats out AI development, or we all die and the world ends up like it did in Wall-E or Terminator. Because upgrading our collective genes and capabilities is better than making an AI to do the same task.

 

DARCA

on Aug 17, 2015

Simply make it a requirement that all sentient machines be programmed with the 3 laws of robotics hard coded in their processor.

Problem solved.
