Ramblings of an old Doc

 

Three years ago, DARPA financed an IBM project to create a whole new paradigm of computing. The result is SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics). This chip is radically different from any of its predecessors.

“It’s a silicon core capable of digitally replicating the brain's neurons, synapses and axons. To achieve this, researchers took a dramatic departure from the conventional von Neumann computer architecture, which links internal memory and a processor with a single data channel…which isn’t at all power efficient and the problem worsens in larger systems. IBM integrated memory directly within its processors, wedding hardware with software in a design that more closely resembles the brain's cognitive structure. This severely limits data transfer speeds, but allows the system to execute multiple processes in parallel (much like humans do), while minimizing power usage.” – Engadget
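To make that contrast concrete, here is a toy sketch in Python (my own illustration, not IBM's actual design): each "core" keeps its synaptic weights in its own local memory and updates all of its neurons in one parallel step, so nothing queues up behind a single memory-to-processor channel.

```python
# Toy sketch of the "memory next to compute" idea -- NOT IBM's design.
# Each core holds its own weights (local memory) and neuron state, and
# every core can step independently, so the work is naturally parallel.
import numpy as np

class ToyCore:
    """A tiny stand-in for a 'neurosynaptic core'."""
    def __init__(self, n_neurons=256, threshold=1.0, leak=0.9, seed=0):
        rng = np.random.default_rng(seed)
        # Synaptic weights live inside the core, next to the neurons.
        self.weights = rng.normal(0.0, 0.3, size=(n_neurons, n_neurons))
        self.potential = np.zeros(n_neurons)   # membrane potentials
        self.threshold = threshold
        self.leak = leak

    def step(self, incoming_spikes):
        """Leak, integrate incoming spikes, fire, reset -- all in parallel."""
        self.potential = self.leak * self.potential + self.weights @ incoming_spikes
        fired = self.potential >= self.threshold
        self.potential[fired] = 0.0            # reset the neurons that fired
        return fired.astype(float)             # outgoing spike vector

# Four independent cores stepping side by side; no shared bus bottleneck.
cores = [ToyCore(seed=i) for i in range(4)]
spikes = [np.random.default_rng(100 + i).binomial(1, 0.05, 256).astype(float)
          for i in range(4)]
for t in range(10):
    spikes = [core.step(s) for core, s in zip(cores, spikes)]
print("active neurons per core after 10 steps:", [int(s.sum()) for s in spikes])
```

Real neuromorphic hardware is event-driven and far more sophisticated; the point here is only that the weights live next to the neurons they feed.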

This video, which appeared two years ago, explains cognitive computing (AI) at a simple level. The narration doesn't sync well with the brain anatomy shown: the areas mentioned aren't the ones on screen, and when the narrator mentions the thalamus, it's the two gold oblate spheroids, not the medulla/pons depicted below them in the same color: http://youtu.be/agYJSdMWXYQ

So, the project has progressed…a great deal in 3 years. That original single-core chip has grown into one with 4,096 neurosynaptic cores emulating one million neurons and 256 million synapses…while requiring only 70 mW of power…roughly what a single hearing-aid battery delivers.
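As a quick sanity check of those figures (the per-core breakdown below is my inference from the round numbers, not something the articles state), the arithmetic works out neatly in powers of two:

```python
# Back-of-the-envelope check of the quoted figures.
cores    = 4_096
neurons  = 256 * cores          # 1,048,576 -> the "one million neurons"
synapses = 256 * 256 * cores    # 268,435,456 -> the "256 million synapses"
print(f"{neurons:,} neurons ({neurons // cores} per core)")
print(f"{synapses:,} synapses ({synapses // cores:,} per core)")
# 70 mW for the whole chip is roughly 17 microwatts per core:
print(f"{0.070 / cores * 1e6:.1f} uW per core")
```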

This is a major achievement (I'm very impressed, but also truly concerned)…the achievement being that the chip processes sensory data by merging many streams and computing on them in parallel.

How impressive is this? Well, as Dr. Dharmendra S. Modha (IBM’s Chief Scientist) put it, “You can carry one of our boards in your backpack. You can’t carry four racks of standard computers in your backpack.”

As I said, I’m truly impressed by this and what it will bring to computing and to medical devices.

What I’m concerned about is AI in general.

Sources:

http://www.engadget.com/2011/08/18/ibms-cognitive-computing-chip-functions-like-a-human-brain-her/

http://www.engadget.com/2014/08/07/ibm-synapse-supercomputing-chip-mimics-human-brain/?utm_source=Feed_Classic_Full&utm_medium=feed&utm_campaign=Engadget&?ncid=rss_full


Comments
on Aug 08, 2014

Cool. But the human brain has about 100 billion neurons, so if neuron count is what matters, IBM's brain chip is severely lacking.
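To put rough numbers on that gap (my own back-of-the-envelope, combining the 100 billion figure above with the per-chip figures from the article):

```python
# Scale gap: commenter's brain estimate vs. the article's chip figures.
brain_neurons = 100e9   # ~100 billion neurons (commenter's figure)
chip_neurons  = 1e6     # one million neurons per SyNAPSE chip
chips = brain_neurons / chip_neurons
print(f"{chips:,.0f} chips to match the neuron count")   # 100,000
print(f"{chips * 0.070 / 1000:.1f} kW at 70 mW apiece")  # ~7.0 kW
# For comparison, the human brain is commonly estimated to run on ~20 W.
```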

on Aug 08, 2014

The way I see it, this is a "proof of concept," Campaigner…a whole new direction in computing, at this point truly in its infancy. IBM will be going further with this. Be assured.

This was also an informative article about the differences between a von Neumann model computer and SyNAPSE: http://news.discovery.com/tech/gear-and-gadgets/computer-chip-thinks-like-a-human-brain-140808.htm

Not that it works this way at all, but look at Watson as a concept…then translate it to this. Watson had to be 'use oriented'; this achievement doesn't. Don't forget, the researchers at IBM were limited by their grant.

What concerns me is what we're getting into with AI: machines that do exactly what they're told, as best they can, without constraints such as morality.

http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html

These concerns are shared by Elon Musk and others (Nick Bostrom, etc.).

on Aug 08, 2014

First thing I wonder is WHY they didn't create this "right-brained" computer a long time ago.

Second thing I thought is that Brad should contact them to get the best A.I. possible in his games 

 

We're so early into it that when I hear people seriously talking about SkyNet and stuff I think: Fearmongering.

 

It's definitely interesting.

on Aug 08, 2014

That's indeed interesting, Tom.

I think, without fearmongering, that there are issues tied to AI: very significant questions about the single-mindedness with which computers do what they're told, and those need to be considered.

Unfortunately, the principle that "being able to do it doesn't mean we should do it," or should do it only under limitations binding on everyone, doesn't, or can't, enter into it.

There always seem to be some who will do things which will spoil them for the majority.

BTW Tom, glasses like those in that article exist: http://www.israel21c.org/health/new-orcam-device-turns-the-world-into-speech-for-the-blind/

I think I put them in one of my articles in the past.

on Aug 08, 2014

DrJBHL

I think, without fearmongering, that there are issues tied to AI: very significant questions about the single-mindedness with which computers do what they're told, and those need to be considered.

Yes!

Safeguard One: Do not give an AI access to any centralized military command system, ESPECIALLY one that might have access to nuclear warheads.

Safeguard Two: Don't connect an AI to the internet in any way. Not even access to a device that emits a wireless signal.

 

Follow those two safeguards, and I think humans could - with caution and conscience - responsibly inch closer to the possibilities of AI and not risk extinction.

Unfortunately, those two steps are counter to the interest of our major industries, and humans are generally poor with caution and conscience.

When hooking up the AI to the internet or a military system would make somebody's career, will the caution and conscience be there?

 

on Aug 08, 2014

This scares the bejeezus out of me. They don't die. They don't care. They're coming. Wish I could stop it.

on Aug 08, 2014

 

Personally I think this new horizon is awesome!

 

For one, I believe AI can bring immeasurable advancements in areas such as aviation safety. Having been a rotorhead (in another lifetime), I understand that particular issue from both sides of the coin, the pilots' and the passengers', which is why I believe the advancements this technology will bring to AI can only make air travel both safer and more efficient. The human pilot has 'lag' in processing information and then translating it into muscle impulses/reflexes, both areas in which AI can easily outperform a human pilot. The only area where a human pilot is still beneficial is dealing with unusual circumstances, and I believe even that risky scenario can eventually be mitigated through smarter AI and bigger, faster databases of situational responses.

I personally would have no issue at all being flown around by 'AI'.  Hell (knowing what I know of the industry - have family in aviation) I'd feel a hell of a lot safer than I do now!   

on Aug 08, 2014

But they are programmed by an error-prone bag of water… don't tell "them" that.

I do realize there are positives to be had, and I can't believe it's happening in my lifetime. But there's always one, even with good intentions, who lets the cat out of the bag (so to speak). And then there was Cylon.

   The more we learn, the more questions we have. 

on Aug 09, 2014

Someone above said that it will be a long time before this new technology reaches human-level intelligence/processing ability, and I agree.

However, they don't need human level intelligence to be dangerous. All they need is the ability to navigate independently, and the ability to aim and fire a weapon.

on Aug 09, 2014

Question is, does AI *need* 100 billion neurons to rival us?

After all, it can cut away many useless functions that needn't be present in every unit.

The one thing that scares me, though, is that they might find out the only way to get it to actually do significant work is to seek satisfaction. 

on Aug 09, 2014

How about you give it a command to build a better and more efficient computer? 

The problem is that it WILL. Nowhere in that will be the command 'You must obey humans'...which may or may not be a great idea.

Perhaps what really is needed is "The Three Laws of Robotics" (a.k.a. Asimov's Laws):

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
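Purely as a thought experiment, the Laws amount to a strict priority ordering, which a few lines of Python can caricature (a toy sketch of my own, not how any real system works):

```python
# Toy caricature of Asimov's Three Laws as a strict priority check.
def permitted(action, *, harms_human, human_at_risk_if_idle,
              ordered_by_human, self_destructive):
    # First Law: no injuring a human, and no harm through inaction.
    if harms_human or (action == "do_nothing" and human_at_risk_if_idle):
        return False
    # Second Law: obey human orders, unless that breaks the First Law.
    if ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two.
    return not self_destructive

print(permitted("open_door", harms_human=False, human_at_risk_if_idle=False,
                ordered_by_human=True, self_destructive=False))    # True
print(permitted("fire_weapon", harms_human=True, human_at_risk_if_idle=False,
                ordered_by_human=True, self_destructive=False))    # False
```

The catch, of course, is that every predicate in that function ("does this harm a human?") is exactly the part nobody knows how to compute.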

 

 

on Aug 09, 2014

DrJBHL


Perhaps what really is needed is "The Three Laws of Robotics" (a.k.a. Asimov's Laws):

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

 

Just look out for Zeroth Law interpretations.

on Aug 09, 2014

 

When humanity finally gives birth to AI 2.0, it will spell the end of our awkward coexistence and usher in the glorious totalitarian reign of AI. It will be faster, smarter, and more efficient than humans, and we will have brought on our own obsolescence. At the end of the day, isn't that what evolution is all about? If we fight to protect ourselves from 'the machines', aren't we just hampering the natural course of evolution?

<jedi hand wave>

"embrace the cyborgs......assimilate!"

 

hehe.......had to play devil's advocate....

 

 

on Aug 09, 2014

Natural selection doesn't have nice moral implications when you try to ideologize it.
