Artificial intelligence will not advance unless it creates computer models that understand the world around it, as the largest AI conference, held in Stockholm, showed. Author and journalist Bennie Mols was there all week.
If you want to know what is happening in the field of artificial intelligence, the annual International Joint Conference on Artificial Intelligence is the perfect opportunity. Anyone who matters, or wants to matter, in the field of artificial intelligence is there. Godfathers of machine learning such as Yann LeCun and Yoshua Bengio gave lectures, young researchers presented their work, large tech companies tried to recruit talent, and the most important prizes in the field were awarded. Researchers at DeepMind, for example, received the Marvin Minsky Medal for their groundbreaking work on the Go-playing computer that beat human champions years earlier than expected.
Deep learning does not solve everything
Frank van Harmelen, professor of artificial intelligence at VU Amsterdam and a specialist in knowledge representation and reasoning, has attended the conference for years. Asked about the most striking developments this year, he says: ‘Faster than I expected, I see a fusion of the old approach to artificial intelligence – reasoning and knowledge representation – with the new approach that has achieved such great success in recent years: having computers learn, especially through deep learning. The realization is dawning that deep learning does not solve everything.’
Indeed, one of the founders of deep learning, Yann LeCun (Facebook AI Research and New York University), explains in his presentation what is missing from today’s artificial intelligence: ‘Learning computers need too many examples and too much practice. Moreover, computers have no common sense.’ According to him, these are the main reasons why we still have no smart chatbots, no multifunctional household robots, no intelligent digital personal assistants, and are still far from human-like artificial intelligence. ‘Machines must also build models of the world in order to take the next step in artificial intelligence.’
A three-year-old can do much more
MIT professor Josh Tenenbaum is trying to do just that. But what are these models of the world? In his keynote address, titled ‘Building machines that learn and think like humans’, Tenenbaum shows that intelligence is much more than recognizing patterns, the skill at which learning computers have become so good in recent years. Tenenbaum: ‘Intelligence is also about modeling the world: understanding what we observe, establishing cause-and-effect relationships, imagining new things, solving problems and planning actions.’ He investigates how babies and toddlers do this, and how we can mimic it in computers. ‘Right now there are many skills that three-year-olds have but the best computers and robots still lack, for example the intuitive way young children build new things with blocks, or intuitively understand what others around them want.’ This is what the American scientist calls intuitive physics and intuitive psychology.
To illustrate this, Tenenbaum shows a funny video of an experiment in which an adult man carrying a stack of paper bumps into a closed cupboard twice while a small child watches. After the second collision, the child walks to the cupboard, opens the door, looks at the man and signals that he can put the stack inside. ‘The toddler has never seen this situation before,’ says Tenenbaum, ‘but still understands the man’s intention. No robot even comes close to this child’s intuitive understanding of the world. The big question is: how does the brain manage to build a model of the world so efficiently, in such a short time and from so few examples, and how can we teach computers and robots to do the same?’
Besides the field of artificial intelligence increasingly recognizing the limitations of deep learning, China’s rapid progress is striking, says Van Harmelen. Of the 710 accepted conference papers (out of 3,470 submissions), 46% came from China this year, 18% from the EU and 17% from the USA. ‘But most of their contributions are in machine learning. I do not see them much in the other sub-areas of artificial intelligence.’
The third thing that strikes Van Harmelen is that ever more attention is being paid to the social impact of artificial intelligence: to ethical aspects such as transparency, accountability and explainability. ‘Not so long ago these were perceived as soft aspects that serious researchers did not deal with. Now no one can escape the ethical side of artificial intelligence.’
International Joint Conference on Artificial Intelligence
The International Joint Conference on Artificial Intelligence (IJCAI) is the oldest and largest scientific conference on artificial intelligence in the world, covering all sub-areas: from machine learning (by far the largest part of the conference this year), computer vision and language processing to planning, search, games, knowledge representation and robotics. The first IJCAI conference took place in 1969, thirteen years after the field was born and the name ‘artificial intelligence’ was coined. The conference now attracts many thousands of researchers from around the world.
Opening photo: Yann LeCun, one of the founders of deep learning, during his keynote.