“Doublethink” is the Test of a First-Rate Intelligence


Doublethink means the power of holding two contradictory beliefs in one’s mind simultaneously, and accepting both of them.
George Orwell, “1984”

The test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time and still retain the ability to function.
F. Scott Fitzgerald

ergo

Doublethink is the test of a first-rate intelligence.

Q.E.D.

Intelligent Design. What Does It Mean?


Some time ago, I made a post, “Created or Evolved?”, arguing that technology, though commonly thought to be created, in fact has no specific creator and instead evolves.

In my previous post, “Intelligence is in the Eye of the Beholder”, I pointed out that the term intelligence refers to the level of complexity.  The term intelligent is usually reserved for systems complex enough that we don’t quite understand their behavior.  Once we fully understand a system’s behavior, the illusion of intelligence disappears.  This is why, although we have very complex devices today doing very sophisticated things, “artificial intelligence” (AI, for short) is still believed to lie in the future.  I think it always will be.

Another necessary feature of intelligence is a perceived purpose.  If we don’t see a purpose in a system’s behavior, we don’t call the system intelligent.

Now, let’s put the pieces together and answer the question: was the world intelligently designed by a creator, or has it evolved?  Since even things created by humans do not have a single creator and rely on a fusion of ideas to evolve from simple to complex, the world has certainly evolved.  However, when a system appears to have a purpose and we do not fully understand how it works, we tend to consider it intelligent, or designed by an intelligent agent.  And the world does seem to fit this description.

Intelligence is in the Eye of the Beholder


SelfAwarePatterns recently made a post titled “Let artificial intelligence evolve? Probably fruitless, possibly dangerous”, arguing that if we want to create intelligent machines, we must let them survive in the real world and evolve.  I made a comment there that may be worth turning into a post.

I said:

Intelligence is in the eye of the beholder. “Intelligence”, perhaps, refers to the level of complexity. When a machine is complex enough that we do not understand how it makes its decisions, we call it “intelligent”. But once we understand how the machine’s actions are triggered, the impression of “intelligence” disappears.

For instance, my smartphone may suddenly tell me: “Hey, if you want to be in time for that meeting, you’d better start now and, by the way, avoid that highway – there is an accident near exit 69.” That’s intelligent, right? How did it come up with such a timely and useful message? But, of course, the smartphone “knows” about the time and place of my next meeting from my Google Calendar. It also knows my current location from GPS and can calculate how long it takes to get to the meeting using Google Maps. It also knows about the traffic based on the information from thousands of smartphones on the road aggregated at the Google server. The smartphone does not just “decide” that this message would be useful to me. The smartphone knows nothing of being useful. It is programmed to do things that the designers of Google Now considered useful. So, if we don’t know all these things, the message appears intelligent. But if we do understand how things work, the impression of “intelligence” disappears.
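Once the data sources are named, the “intelligent” reminder reduces to a few lines of arithmetic. Here is a minimal sketch in Python; the function name, the data values, and the message wording are all hypothetical stand-ins for what a service like Google Now aggregates behind the scenes:

```python
from datetime import datetime, timedelta

def meeting_reminder(meeting_time, now, travel_minutes, traffic_delay_minutes):
    """Return a reminder message if it's time to leave, else None.

    meeting_time          -- from the calendar
    now                   -- from the device clock / GPS fix
    travel_minutes        -- baseline route time (e.g., from a maps service)
    traffic_delay_minutes -- extra delay reported by aggregated traffic data
    """
    total_travel = timedelta(minutes=travel_minutes + traffic_delay_minutes)
    leave_by = meeting_time - total_travel
    if now >= leave_by:
        msg = "Leave now to make your meeting"
        if traffic_delay_minutes > 0:
            msg += f" (traffic adds {traffic_delay_minutes} min; consider an alternate route)"
        return msg
    return None

# 2:00 pm meeting; it is now 1:20 pm; 30 min drive plus 15 min of traffic.
meeting = datetime(2024, 5, 1, 14, 0)
now = datetime(2024, 5, 1, 13, 20)
print(meeting_reminder(meeting, now, travel_minutes=30, traffic_delay_minutes=15))
```

Seen this way, nothing in the message requires a “decision” at all; it is a comparison of two timestamps, and the impression of intelligence lives entirely in not knowing that.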

However complex the machine, if it exists, humans (at least some) must understand how it works. Perhaps nobody understands it individually, but collectively there will be a group of experts whose knowledge covers all aspects of the machine. So, perhaps, existing machines will never be considered “intelligent”, and the term will always be reserved for some mysterious “next generation”. Of course, nobody knows what the next generation of machines will do, so the term is quite appropriate. On the other hand, we might as well consider that AI already exists, because what I described in my example would certainly have blown my mind 20 years ago.

Another thought: “intelligence” implies purpose. There are very complex natural systems with very complex behavior, but unless they do something that appears useful or purposeful to humans, they are never called “intelligent”. The term “intelligence” seems closely related to goal setting and decision making and, therefore, to the question of free will. Before we can answer whether machines can be intelligent, we need to answer whether humans are intelligent themselves or mere automatons. And there is no answer to that question; it’s a matter of philosophical worldview.