Ramblings of an old Doc

 

Bill Gates, Musk, Hawking and others have all stated their concern regarding the distressing potential dangers of AI more than once, yet on we go pell-mell toward self-aware/self-governing machines.

We can’t even get security updates right without causing severe problems, yet somehow we think, “We can do this. We can win!”

Just a minor thought… a program, any program (including heuristic ones), is limited by its coding and how, via this coding, it ‘learns’. The same is true of biological systems. Their form, their being carbon based, their being subject to the laws of thermodynamics, and their sensitivities to the environment and to other biological entities all determine and limit how they learn.

Another minor thought, “If something can go wrong, it will.” Just ask God.

Now comes this report by Selmer Bringsjord (RPI, New York), in New Scientist, regarding a test he ran using the classic “Wise-men Puzzle” on three robots, two of which he silenced and one he didn’t. All three had auditory sensors.

“In a robotics lab on the eastern bank of the Hudson River, New York, three small humanoid robots have a conundrum to solve.

They are told that two of them have been given a “dumbing pill” that stops them talking. In reality the push of a button has silenced them, but none of them knows which one is still able to speak. That’s what they have to work out.

Unable to solve the problem, the robots all attempt to say “I don’t know”. But only one of them makes any noise. Hearing its own robotic voice, it understands that it cannot have been silenced. “Sorry, I know now! I was able to prove that I was not given a dumbing pill,” it says. It then writes a formal mathematical proof and saves it to its memory to prove it has understood.” – New Scientist

Granted, this isn’t “full consciousness”, but it is conscious thought and shows a conception of ‘self’, or “the first-hand experience of conscious thought”.
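The inference the robot makes can be captured in a toy sketch. This is a hypothetical illustration of the logic only, not the theorem-proving system Bringsjord’s robots actually run; the function name and setup are my own invention:

```python
# Toy model of the silenced-robot test: each robot attempts to answer,
# and only a robot that hears its own voice can prove it wasn't silenced.

def run_wise_men_test(silenced):
    """silenced: set of robot ids given the 'dumbing pill'."""
    conclusions = {}
    for robot in range(3):
        # Only an unsilenced robot actually makes a sound when it tries to speak.
        spoke_aloud = robot not in silenced
        if spoke_aloud:
            # Hearing its own voice lets it conclude it was not given the pill.
            conclusions[robot] = "I was not given a dumbing pill"
        else:
            conclusions[robot] = "unknown"
    return conclusions

print(run_wise_men_test(silenced={0, 1}))
```

Here robot 2 reaches the conclusion about itself, while the two silenced robots remain uncertain; the interesting part in the real experiment is that the robot revises its answer only after perceiving its own utterance.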

There are those who are correct in saying that there’s a big difference between saying, “It’s sunrise,” and being able to enjoy the aesthetic experience of knowing who you are and being part of that sunrise. Perhaps central to the experience is knowing one is mortal, and what that sunrise signifies in terms of mortality and the passage of time, which generates compassion for others subject to that same passage, along with the knowledge that each of us is at a different point in it.

Perhaps what I fear most, therefore, is a machine which has no compassion and which acts for self-preservation without that essential quality, even if through inaction, simply because it has no perception that it is doing wrong, since ‘right’ and ‘wrong’ are alien to it.

After all, even though very imperfect, we do have a system of checks and balances, ideas of morality, etc. which function (to some degree) to limit us.

If you don’t believe the craziness of all this, if you don’t believe this is real, read about how ‘killer robots’ were to be discussed at the U.N. Convention on Certain Conventional Weapons. You can look up the meeting (11/2014), and you can read more via the sources below.

Source:

https://www.newscientist.com/article/mg22730302-700-robot-homes-in-on-consciousness-by-passing-self-awareness-test/

http://www.computerworld.com/article/2970737/emerging-technology/are-we-safe-from-self-aware-robots.html

http://www.stopkillerrobots.org/2015/03/ccwexperts2015/

http://www.computerworld.com/article/2489408/computer-hardware/evan-schuman--killer-robots--what-could-go-wrong--oh--yeah----.html


Comments (Page 3)
on Aug 19, 2015

I think, therefore I am.....

....I think.

on Aug 19, 2015

Whenever discussions like this come up I usually take a nap and things just sort themselves out............ or is it that I realize I have an overwhelming lack of interest?

 

on Aug 19, 2015

Maiden666

I'm not afraid of any synthetic intelligence or consciousness as long as the developers don't map feelings into it or give it a body that can experience frustration.

The aforementioned rape, murder, etc. all happen for strictly emotional reasons; these criminal activities have nothing whatsoever to do with intelligence. It's more or less an absence of it, via a loss of self-control. Of course this is a very big generalization, as there are many more reasons to do evil. There are people whose brains apparently don't work like they should, be it from sickness, "bad" genetics, being led astray by ideas, immature infantile minds that crave hedonism even over other people's rights, all the known religious sins, etc... the list is endless... but one thing to realise is that a computer should be oblivious to most of these motives. Perhaps a computer could be made intelligent without being able to have urges, motives, etc. of its own in the first place.

Then again, I don't believe that artificial intelligence such as we see in ourselves is even remotely possible with machines. It might be an expression of biological life, and I throw consciousness right in there, too. As of now, both terms lack an ultimate and precise definition, but unlike us, computer code needs its terms specified exactly or it won't work.

In this thread I sense irrational fear of the unknown and fear of losing power, both of which could be called roots of evil themselves.

 

Or maybe a natural progression of logic could do us in. Just ask Dr. Mel Practice...

http://www.gocomics.com/brewsterrockit/2015/04/28


on Aug 19, 2015

"poof?"   "poof?"  No way.   "poofet"

on Aug 19, 2015

I thought.... therefore I was....

 

 

However.... thinking sometimes hurts

 

So I evolved.....

 

 

Now I simply exist.... happily.

on Aug 21, 2015

Please wake me up when AIs are capable of playing games like Rome: Total War, Civilization, Go... or Galactic Civilizations on a semi-competent level. Or when they are capable of truly understanding human language, and of perceiving things like irony, metaphor, symbolism, etc.

I think the current tech is still eons from that.

Meanwhile... why don't we make our world at least a bit asymmetric so that the current AIs have at least a fleeting chance of taking over, just for the sport, okay?

on Aug 23, 2015

Fascinating read, Doc. Have you seen this?

http://qz.com/481164/ibm-has-built-a-digital-rat-brain-that-could-power-tomorrows-smartphones/

When you couple the leaps in programming with the new leaps in organic computing and quantum computing, I do think we'll be seeing at least "weak AI" starting to approach human simulation (though probably not consciousness) within the next 10 years or so, maybe 15. In 10 or 15 years, people's Siris and Cortanas will be able to have personable conversations with them. It wouldn't surprise me at all if we see the first actual strong AI sometime within the next 20 years, probably less.
