I heard last week that Facebook had shut down a pair of artificial intelligence programs in development after the pair developed their own language that their human handlers had difficulty understanding — and, reportedly, similar behavior has cropped up in other experiments where two AIs were tasked with communicating with each other.

After a joke about "I, for one, welcome our new machine overlords," I took a little time to consider.

There's a great deal about consciousness we simply do not understand. We know it exists (I think, therefore I am) and we know the brain has a great deal to do with it, but why are we self-aware? No one really knows.

My theory — based on my admittedly limited knowledge — is that consciousness is a quantum process.

A friend of mine is a brain researcher at Wake Forest, and according to him, that's at least partially true.

He says he can explain why one person likes beer and another doesn't, strictly based on taste receptors, neurotransmitter balance and so on. What he can't explain is why 11 molecules of neurotransmitter are released at time-point No. 1, 10 at No. 2, 12 at No. 5, 15 at No. 13, and 6 at No. 50. According to him, there are nonlinear processes and quantum processes at work.
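For the programmers in the audience, here's a toy sketch of what that unpredictability looks like. This is my own illustration, not my friend's actual model: it assumes the number of molecules released at each time point follows Poisson statistics, so the average is predictable but no individual count is.

    import math
    import random

    # Toy illustration (my assumption, not real neuroscience code):
    # release counts scatter around a fixed mean per Poisson statistics,
    # so the average is knowable but each individual count is not.

    MEAN_MOLECULES = 11  # hypothetical average molecules per release event

    def molecules_released(mean):
        """Sample a Poisson-distributed count using Knuth's algorithm."""
        threshold = math.exp(-mean)
        count, product = 0, 1.0
        while True:
            product *= random.random()
            if product <= threshold:
                return count
            count += 1

    random.seed(1)
    for t in (1, 2, 5, 13, 50):
        print(f"time-point No. {t}: {molecules_released(MEAN_MOLECULES)} molecules")

Run it twice with different seeds and the individual counts change while the average holds. That gap between the predictable average and the unpredictable specifics is what he's pointing at.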

Now, in my opinion (and that of a lot of researchers), a true AI — rather than the expert systems we see now — is simply a matter of time, particularly with the new generation of graphene processors likely to come online in the next few years.

But true AI, dangers aside, raises some interesting questions.

First, will we recognize a self-aware computer when we see it? I suspect we will, but that raises another question: is a computer programmed to mimic self-awareness truly sentient?

Possibly. And does it matter? If the computer mimics self-awareness perfectly, then what difference does it make?

Then there are the ethical questions. If we create a sentient machine, do we have the right to treat it as a machine? That is, if computers are truly self-aware, or mimic self-awareness at a level where we cannot tell the difference, do we have the right to treat them as slaves? What moral authority do we have to simply shut them off?

Worse, what if a machine intelligence — hooked into the internet — looks around, figures out the greatest threat to its existence is us and decides to do something about it?

Of course, I'm not the first one to raise these questions. Everyone from Isaac Asimov to Philip K. Dick to Gene Roddenberry to James Cameron has explored these issues.

However, to this point, this has been the realm of science fiction. No more. True AI is not far off, and computers are already, to some degree, programming themselves. These are questions we're simply going to have to answer, particularly because — as Jeff Goldblum's character pointed out in the original Jurassic Park — scientists are all too often more concerned with whether they can do a thing than with whether they should.

All IMHO, of course.

— Patrick Richardson is the managing editor of the Pittsburg Morning Sun. He still welcomes our new machine overlords. He can be emailed at prichardson@morningsun.net or followed on Twitter @PittEditor.