Intelligence is in the Eye of the Beholder

SelfAwarePatterns recently published a post titled “Let artificial intelligence evolve? Probably fruitless, possibly dangerous”, arguing that if we want to create intelligent machines, we must let them survive and evolve in the real world.  I made a comment that may be worth turning into a post.

I said:

Intelligence is in the eye of the beholder. “Intelligence”, perhaps, refers to the level of complexity. When a machine is complex enough that we do not understand how it makes decisions to do certain things, we call it “intelligent”. But when we understand how a machine’s actions are triggered, the impression of “intelligence” disappears.

For instance, my smartphone may suddenly tell me: “Hey, if you want to be on time for that meeting, you’d better start now and, by the way, avoid that highway – there is an accident near exit 69.” That’s intelligent, right? How did it come up with such a timely and useful message? But, of course, the smartphone “knows” the time and place of my next meeting from my Google Calendar. It also knows my current location from GPS and can calculate how long it takes to get to the meeting using Google Maps. It also knows about the traffic from the information of thousands of smartphones on the road, aggregated at the Google server. The smartphone does not “decide” that this message would be useful to me. The smartphone knows nothing of being useful. It is programmed to do things that the designers of Google Now considered useful. So, if we don’t know all these things, the message appears intelligent. But if we do understand how things work, the impression of “intelligence” disappears.

However complex a machine may be, if it exists, humans (at least, some) must understand how it works. Perhaps no single person does, but collectively there will be a group of experts whose knowledge covers all aspects of the machine. So, perhaps, existing machines will never be considered “intelligent”, and the term “intelligent” will always be reserved for some mysterious “next generation”. Of course, nobody has any idea what the next generation of machines will do. So, the term is quite appropriate. On the other hand, we might as well consider that AI already exists, because what I described in my example would certainly have blown my mind 20 years ago.

Another thought: “intelligence” implies purpose. There are very complex natural systems with very complex behavior. But unless they do something that appears useful or purposeful to humans, they are never called “intelligent”. The term “intelligence” seems to be closely related to goal setting and decision making and, therefore, to the question of free will. Before we can answer whether machines can be intelligent, we need to answer whether humans are intelligent themselves or are mere automatons. And there is no answer to this question. It’s a matter of philosophical worldview.


5 thoughts on “Intelligence is in the Eye of the Beholder”

  1. It’s not a matter of intelligence per se, but rather a question of life. All living organisms, whether bacteria, animals, or shrubbery, come with a built-in purpose to survive, thrive, and reproduce. Inanimate forms of matter, like rocks, lack this purpose. The inanimate will roll down the hill. The living organism will climb the hill if there is a McDonald’s restaurant at the top.

    The difference between the human and the machine is that the human has a purpose. The human builds the machine to serve the human’s purpose. The machine has no purpose of its own except what is given to it by the human who built it.

    Intelligence is the tool of purpose.

    • Agree. I had a similar thought in the last paragraph. So, do you believe that artificial intelligence can be created? After all, humans can program the purpose to survive into a machine. Then, perhaps, we can let machines evolve to become intelligent, as SelfAwarePatterns suggests.

      • A calculator has the intelligence to do arithmetic. But it lacks a purpose of its own. If we were to create a new life form with its own need to survive, thrive, and reproduce, then we may have to kill it to avoid it killing us. But as long as the machine has no purpose other than to serve our purposes (like the calculator), it should be okay to allow it to self-improve its learning and thinking, as long as it remains in the sandbox we create.

        • I feel the same way about AI. What’s the point of creating a machine with its own purposes that don’t serve ours? Self-driving cars are fine as long as they take us to our destination, not somewhere they feel like going in the morning.

Feel free to leave your comments and sarcastic remarks
