Intelligence is in the eye of the beholder. “Intelligence”, perhaps, refers to a level of complexity. When a machine is complex enough that we do not understand how it makes decisions to do certain things, we call it “intelligent”. But once we understand how a machine’s actions are triggered, the impression of “intelligence” disappears.
For instance, my smartphone may suddenly tell me: “Hey, if you want to be in time for that meeting, you’d better start now and, by the way, avoid that highway – there is an accident near exit 69.” That’s intelligent, right? How did it come up with such a timely and useful message? But, of course, the smartphone “knows” about the time and place of my next meeting from my Google Calendar. It also knows my current location from GPS and can calculate how long it takes to get to the meeting using Google Maps. It also knows about the traffic based on the information from thousands of smartphones on the road aggregated at the Google server. The smartphone does not just “decide” that this message would be useful to me. The smartphone knows nothing of being useful. It is programmed to do things that the designers of Google Now considered useful. So, if we don’t know all these things, the message appears intelligent. But if we do understand how things work, the impression of “intelligence” disappears.
However complex the machine, if it exists, humans (at least, some) must understand how it works. Perhaps nobody understands it individually, but collectively there will be a group of experts whose knowledge covers all aspects of the machine. So, perhaps, existing machines will never be considered “intelligent”, and the term “intelligent” will always be reserved for some mysterious “next generation”. Of course, nobody has any idea what the next generation of machines will do. So, it’s quite appropriate. On the other hand, we might as well consider that AI already exists, because what I described in my example would certainly have blown my mind 20 years ago.
Another thought. “Intelligence” implies purpose. There are very complex natural systems with very complex behavior. But unless they do something that appears useful or purposeful to humans, they are never called “intelligent”. The term “intelligence” seems to be closely related to goal setting and decision making and, therefore, to the question of free will. Before we can answer whether machines can be intelligent, we need to answer whether humans are intelligent themselves or are mere automatons. And there is no answer to this question. It’s a matter of philosophical worldview.
Doobster has posted another masterpiece called “I seen it all”. The story has a great beginning but then suddenly ends, leaving the readers begging for more, judging by the comments.
This is a great story. I think it reveals a great deal about ourselves. We get irritated when things do not go according to expectations: when unexpected things come up and mess up our plans, when other people behave irrationally (i.e. in a way that we cannot explain or predict). Getting irritated about those things often leads to anger, fear, worry, and anxiety. I train myself to accept reality as it is, good or bad, expected or not. It’s just my way to be content.
Doobster’s story is great for practicing this philosophy. One would think that the story does not have an “ending”. Well, it ends, doesn’t it? The problem is that we don’t like the ending. We expect something more. But that’s not a problem with the story. It’s a problem with ourselves. Just deal with it, folks. And, to add some seasoning to the recipe, Doobster, who is an excellent writer, calls the story “I seen it all” and sprinkles this grammatical error, like a good chef, throughout the story to tickle the taste buds of the “grammar Nazis”.
This reminded me of my favorite TED talk by philosopher Dan Dennett. In the beginning, he promises to explain consciousness, but warns that his explanation may disappoint many people because it’s like explaining a magic trick: once it is explained, the “magic” disappears — it’s not “magic” anymore. Then Dennett uses a few examples of optical illusions to show how our mind creates a “reality” that does not really exist — how we see people in a picture where, in reality, there are just a few color spots on a canvas that don’t look like people at all; or how we fail to notice that an airplane is missing an engine simply because we presume that it’s there. The basic message behind this video is that we see what we expect to see, whereas consciousness and reality are not what we expect them to be — they are what they are.
The comments on the talk are most amusing. People are disappointed by Dennett’s explanation of consciousness because… it is not what they expected… although Dennett warned everyone that they would be disappointed, and although that’s the point of the talk — that consciousness is not what we expect. In other words, Dennett has masterfully delivered on his promise to disappoint. Brilliant.
I have been told many times by atheists that “God is a human construct”. Most recently, here:
GOD is just a myth, like every OTHER construct of man.
Well, not all “constructs of man” are myths. Men (and women, to be politically correct) come up with many ideas, not just myths. And I readily agree that God is one such idea.
People do not believe in things. People believe in ideas. And yes, ideas are immaterial: they cannot be touched, seen, smelled, or felt in any way. Well, people can read an idea, but what they see are signs or images. When people say they heard an idea, they actually heard sounds. Those could be words, music, or white noise.
So, it does not bother me at all that “God is a human construct”. So is everything else for which we can find a word.