IF, AND, NOT

Future pundits warn us that the Artificial Intelligence revolution could get out of hand, as robots develop minds of their own and evade human control. Meanwhile, I’m sure you’ve noticed that the user interfaces on the web sites you’ve loaded in the last few years are buggy and quirky, and make stupid mistakes, mistakes peculiar to software, mistakes no human would ever make. Software has bitten off more than it can chew. The idea that software is going to outsmart us with general intelligence about the real world is a fantasy. Of course, I won’t rule out that this may come to pass in some future world. But it’s not an extrapolation of any technology currently in existence.

Two Kinds of AI

Computers are made of billions of switches that can be (virtually) wired together in flexible ways. The switches can only be on or off, and there are only three things that their connections can do:

IF: If switch A is on, it will turn on switch B.

NOT: If switch A is on, it will turn switch B off, and if switch A is off, it will turn switch B on.

AND: If switches A and B are both on, together they will turn on switch C.
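
To see how such structures compose, here is a minimal Python sketch (my own illustration, with invented function names): the three primitives written as Boolean functions and wired together into a half-adder, the little circuit that adds two one-bit numbers.

```python
# The three switch primitives as Boolean functions (illustrative sketch).
def IF(a):
    return a            # IF: when A is on, B turns on

def NOT(a):
    return not a        # NOT: B is the opposite of A

def AND(a, b):
    return a and b      # AND: C turns on only when A and B are both on

# OR can be built from NOT and AND, so the three primitives suffice.
def OR(a, b):
    return NOT(AND(NOT(a), NOT(b)))

# A half-adder: adds two one-bit numbers, yielding a sum bit and a carry bit.
def half_adder(a, b):
    carry = AND(a, b)
    sum_bit = AND(OR(a, b), NOT(carry))  # XOR, composed from AND, OR, NOT
    return sum_bit, carry

print(half_adder(True, True))   # (False, True): 1 + 1 = binary 10
```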

Computer programming, including AI, is no more than an arrangement of many such switches to form logical structures. There are two approaches to AI:

  A. The programmer can think through the entire logical process, account for all contingencies, and create a decision tree “by hand” that incorporates all the knowledge and logic necessary to do a job in a wide variety of real-world circumstances.
  B. The programmer creates a generalized learning tool that watches human behavior in billions of different circumstances. The result is a black box that usually does the right thing. Inside is a decision tree based on criteria that no one has mapped out, so no one understands it.
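
To make the contrast concrete, here is a toy Python sketch of both approaches (my own illustration using scikit-learn; the braking scenario, thresholds, and data are all invented).

```python
from sklearn.tree import DecisionTreeClassifier

# Approach A: the programmer spells out the decision rule by hand.
def should_brake_by_hand(distance_m, speed_kmh):
    return speed_kmh > 30 and distance_m < 20

# Approach B: a model learns the rule from examples of human behavior.
# Each row is [distance in meters, speed in km/h]; each label records
# whether the human driver braked (1) or not (0).
X = [[5, 50], [15, 40], [25, 60], [40, 30], [60, 80], [10, 20]]
y = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[12, 45]]))  # the learned "black box" makes the call
```

The hand-written rule is legible; the learned tree’s criteria are whatever best fit the examples, which is exactly why no one may understand them.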

Both approaches are being deployed. For example, Google’s original self-driving car was programmed manually (A), but the system advanced when learning ability (B) was added on. The Tesla Autopilot system is built by computer learning (B) from the ground up.

One of the most advanced systems using approach (A) for general computer intelligence is Wolfram Alpha. (Stephen Wolfram is a certified computer supergenius, and he has an elite team working with him; they have been developing Alpha over the course of two decades.) You can try it out by typing any question into the box at wolframalpha.com.

Google’s search engine is programmed by approach (B). It is constantly learning by monitoring the searches of billions of users, following their clicks to learn what they were trying to find. Try typing the same questions into the Google search box.
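
As a cartoon of that feedback loop (my own toy illustration, nothing like Google’s actual ranking system), imagine a ranker that simply promotes whatever users click:

```python
from collections import defaultdict

clicks = defaultdict(int)  # (query, url) -> number of clicks observed

def record_click(query, url):
    clicks[(query, url)] += 1

def rank(query, candidates):
    # Results that more users clicked for this query rise to the top.
    return sorted(candidates, key=lambda url: clicks[(query, url)], reverse=True)

record_click("quantum computer", "example.org/intro")
record_click("quantum computer", "example.org/intro")
record_click("quantum computer", "example.org/history")
print(rank("quantum computer",
           ["example.org/history", "example.org/intro", "example.org/other"]))
```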

Computer learning is way ahead of manual programming

This little experiment was designed to show you that approach (B) is miles ahead of approach (A). Face recognition, voice recognition, natural language understanding, medical diagnosis…all are based on computers learning from humans. Anything impressive that computers can do today, they learned by emulating humans.

This is why I don’t think computers are poised to surpass human intelligence and creativity and judgment. They will continue to do some of what we do faster and more reliably, but they’re not learning how to do things that humans can’t do. Current AI technology is completely dependent on learning from humans.

The Singularity

Ray Kurzweil, director of engineering at Google, is another certifiable supergenius who personally created some of the first milestones in AI back in the 1970s. Kurzweil popularized the term Singularity to describe the coming time when computers become better than people at the task of designing computers. He argues that a computer designed by a computer will immediately design a computer that’s more powerful yet, and computer intelligence will expand exponentially in a short period of time.
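
The argument is a simple recurrence. As a toy illustration (mine; the improvement factor is invented), if each machine generation designs a successor some fixed factor better, capability grows geometrically:

```python
f = 1.5           # hypothetical improvement per generation of machine designers
capability = 1.0  # capability of the first machine, in arbitrary units

for generation in range(1, 11):
    capability *= f  # each generation designs the next, slightly better one
    print(f"generation {generation}: capability {capability:.1f}")
# After 10 generations, capability has grown more than 50-fold.
```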

I think he’s not being realistic about what constitutes “intelligence”. There are aspects of intelligence that are outside the purview of computer capabilities.

Is the Human Brain a Computer?

Back in the 1940s, Alan Turing developed the theory of what a computer is and what it can do, and he proved that a broad class of machines with very different architectures are all equivalent, just slower or faster versions of the same set of capabilities. Much of what has been written about AI, by computer geeks and philosophers alike, assumes that the human brain is a computer, with all the power and all the limitations of a Turing Machine.
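
To make the abstraction concrete, here is a minimal Turing machine simulator (a sketch of my own, not Turing’s original formulation). The rule table maps (state, symbol) to (symbol to write, direction to move, next state); this particular machine flips every bit on its tape and halts at the first blank. Every machine in Turing’s equivalence class is, at bottom, a rule table like this one, just vastly larger.

```python
def run_tm(rules, tape, state="start", head=0, blank="_"):
    tape = list(tape)
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape)

rules = {
    ("start", "0"): ("1", "R", "start"),  # flip 0 to 1, keep scanning right
    ("start", "1"): ("0", "R", "start"),  # flip 1 to 0, keep scanning right
    ("start", "_"): ("_", "R", "halt"),   # blank cell: done
}
print(run_tm(rules, "10110_"))  # -> 01001_
```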

That assumption is not true.

Human brains can do the things that computers can do, though compared to computers they’re pretty slow and prone to errors. But our brains do things that computers are not designed to do. We empathize, we intuit, we create, we receive and transmit ideas without knowing where they come from.

Telepathy is part and parcel of our thought process. There is overwhelming experimental evidence for telepathy. Sigmund Freud and William James, both pioneers of Western scientific psychology, knew about telepathic abilities from their own experience and from their studies. James wrote that the human brain was more akin to a radio receiver than a computer, and that consciousness lives somewhere outside the brain, outside material reality.

Is the brain a quantum computer? Quantum computers as presently conceived and designed seek to operate predictably by minimizing “errors”. In fact, all the engineering difficulty associated with developing a quantum computer comes from the need for computations to take place reproducibly. Quantum computers of this design will be enormously fast Turing machines. The human brain may be a quantum computer of a different ilk, a design that thrives on quantum “uncertainty” as an entry point for consciousness. This is what Stuart Kauffman’s findings about superposition states in neurotransmitters suggest to me.

But that’s speculation. In any case, it’s certainly true that our brains are doing things that no computer of any design presently contemplated can duplicate.
