By the age of seven months, most children have learned that objects still exist even when they are out of sight. Self-driving cars, however, lack this sense of object permanence, and that is a problem. To a self-driving car, a bicycle momentarily hidden by a passing truck is a bicycle that has ceased to exist. How to give AI the reasoning ability of a seven-month-old child is now a matter of active research.
Modern AI is based on the idea of machine learning. If an engineer wants a computer to recognize a stop sign, he does not try to write thousands of lines of code describing every pattern of pixels that could possibly indicate such a sign. Instead, he writes a program that can learn for itself, and then shows that program thousands of pictures of stop signs. Over many repetitions, the program gradually works out what features all of those pictures have in common.
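To make that idea concrete, the snippet below sketches such a learning loop in PyTorch. It is a minimal illustration, not any real detector: the tiny network, the hyper-parameters and the data are all placeholders, with random tensors standing in for the thousands of labelled photographs.

```python
# A minimal sketch of the learning loop described above: a small
# convolutional network is shown batches of labelled images
# ("stop sign" vs. "not a stop sign") and gradually adjusts its
# weights to reduce its classification errors.
import torch
import torch.nn as nn

model = nn.Sequential(                       # tiny stand-in classifier
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                        # two classes: stop sign / other
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):                      # "many repetitions"
    images = torch.randn(64, 3, 64, 64)      # stand-in for a batch of photos
    labels = torch.randint(0, 2, (64,))      # stand-in for human labels
    loss = loss_fn(model(images), labels)
    optimizer.zero_grad()
    loss.backward()                          # nudge weights toward fewer errors
    optimizer.step()
```

Nothing in the code spells out what a stop sign looks like; the program is expected to work that out for itself from the examples, which is exactly the point of the approach.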
Similar techniques are used to train self-driving cars to operate in traffic. But the resulting systems do not understand many things a human driver takes for granted. In a recent paper in Artificial Intelligence, Mehul Bhatt of Orebro University in Sweden describes a different approach. He and his colleagues took some existing AI programs used by self-driving cars and bolted onto them a piece of reasoning software. In tests, if one car momentarily blocked the view of another, the reasoning-enhanced system could keep track of the hidden car, predict where and when it would reappear, and take steps to avoid it if necessary. The improvement was modest: on standard tests, Dr Bhatt's system scored about 5% better than existing software.
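The article does not describe Dr Bhatt's software in detail, but the object-permanence idea it adds can be sketched in a few lines. The toy tracker below, written for illustration only under a constant-velocity assumption, keeps extrapolating an occluded object's position instead of deleting it, so it can predict where and when the object should reappear.

```python
# A toy illustration of object permanence, not Dr Bhatt's actual software:
# when a tracked object stops being detected, keep extrapolating its
# position from its last known velocity instead of forgetting it.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Track:
    x: float                   # last estimated position (metres)
    y: float
    vx: float                  # last estimated velocity (metres per frame)
    vy: float
    visible: bool = True

def update(track: Track, detection: Optional[Tuple[float, float]]) -> Track:
    """Advance one frame; `detection` is (x, y), or None while occluded."""
    if detection is not None:              # object is visible: measure it
        x, y = detection
        track.vx, track.vy = x - track.x, y - track.y
        track.x, track.y = x, y
        track.visible = True
    else:                                  # occluded: predict, don't delete
        track.x += track.vx
        track.y += track.vy
        track.visible = False
    return track

# A bicycle moving right at 1 m/frame vanishes behind a truck for two frames.
bike = Track(x=0.0, y=0.0, vx=1.0, vy=0.0)
for obs in [(1.0, 0.0), (2.0, 0.0), None, None, (5.0, 0.0)]:
    bike = update(bike, obs)
    print(f"pos=({bike.x:.1f}, {bike.y:.1f}) visible={bike.visible}")
```

When the bicycle re-emerges at (5.0, 0.0), the extrapolated track is already there to meet it; a purely perception-driven system would have dropped the track at the first missed detection.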
However, the question goes beyond self-driving cars to the future of AI itself. "I don't think we're taking the right approach right now," says Dr Gary Marcus, who studies psychology and neural science at New York University. "It's not actually the answer to AI. We haven't really solved the intelligence problem." One way or another, then, it seems seven-month-olds still have a lot to teach machines.