The theory of innate ability (nativism) holds that some human thought or knowledge is inborn rather than learned. The most famous examples of this idea are Plato’s theory of Forms and, later, Descartes’ Meditations. Today this view is gaining support from neuroscience, which suggests we are indeed born with an innate understanding of our world.
Figure 1: An elderly Plato walks alongside Aristotle in The School of Athens, by Raphael
However, the theory of “innate ability” conflicts with “purist” machine learning methods. In a “pure” machine learning algorithm, the system learns only from data, without explicit programming or pre-programmed calculation and logic modules.
“The actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds, such as simple ways to think about space, objects, multiple agents, or symmetries. All these are part of the arbitrary, intrinsically-complex, outside world. They are not what should be built in, as their complexity is endless; instead we should build in only the meta-methods that can find and approximate this arbitrary complexity.”
——Rich Sutton, The Bitter Lesson, March 13, 2019
However, there is a school of thought that holds the opposite view and suggests combining symbolic artificial intelligence techniques with deep learning.
The future of deep learning
New York University professor Gary Marcus and others argue that deep learning needs to be combined with the older, symbolic artificial intelligence techniques to reach human-level intelligence. Hinton disagrees. He compares this to using electric motors only to drive the fuel injectors of a gasoline engine, even though electricity is far more energy-efficient.
At the same time, hybrid models may address some obvious limitations of deep learning, notably that “deep learning currently lacks a mechanism for learning abstract concepts through explicit definition, and works best when there are thousands, millions or even billions of training examples.”
Should we integrate GOFAI (good old-fashioned AI) into deep learning? The debate is still raging.
New neurological evidence
In my opinion, these discussions all boil down to one question: do we learn everything through experience, or are we born with some form of innate cognition?
A study published in the Proceedings of the National Academy of Sciences (PNAS) reported:
“The study found a principle of synaptic organization that groups neurons in a way that is common among animals, independent of individual experience.”
——A synaptic organizing principle for cortical neuronal groups
By Rodrigo Perin, Thomas K. Berger, and Henry Markram
Basic knowledge of how the physical world works may be among these innate building blocks.
“Neuronal clusters, or cell assemblies, that fire together in the animal neocortex are essentially cellular ‘building blocks’. For many animals, learning, perception and memory may be the result of assembling these pre-existing blocks, rather than forming new cell combinations.”
——Are We Born With Knowledge?
With more and more neurological evidence supporting the existence of innate cognition, it may be worthwhile to equip deep learning with “innate” computational modules or primitives. Some of these primitives are likely to be borrowed from, or inspired by, GOFAI.
On the other hand, it is difficult to predict what the architecture of deep learning will look like in the future. Yoshua Bengio himself admits that “before neural networks can reach the general intelligence of the human brain, a new architecture of deep learning is needed.”
In my opinion, rather than a clean juxtaposition such as a neural back end and a symbolic front end (Figure 2), symbolic manipulation is likely to be deeply coupled and entangled with the neural architecture: models closer to general-purpose computer programs, “built on top of far richer primitives than our current differentiable layers. This is how we will get to reasoning and abstraction, the fundamental weakness of current models.”
Figure 2: Deep Symbolic Reinforcement Learning. The neural back end learns to map raw sensor data into a symbolic representation, which the symbolic front end then uses to learn an effective policy (source)
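The Figure 2 pipeline can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper’s actual implementation: the “neural back end” below is an untrained random projection standing in for a learned encoder, and the “symbolic front end” is plain tabular Q-learning over the resulting discrete symbols. All names and parameters are my own assumptions.

```python
import numpy as np

def neural_backend(frame, n_symbols=4):
    """Stand-in for a learned encoder: maps a raw observation (a 2-D pixel
    array) to a discrete symbol id. A real system would train a CNN; here a
    fixed random projection keeps the sketch self-contained."""
    rng = np.random.default_rng(0)                   # fixed "weights"
    w = rng.standard_normal((frame.size, n_symbols))
    logits = frame.ravel() @ w
    return int(np.argmax(logits))                    # symbolic state id

class SymbolicFrontend:
    """Tabular Q-learning over the back end's discrete symbols."""
    def __init__(self, n_symbols, n_actions, lr=0.5, gamma=0.9):
        self.q = np.zeros((n_symbols, n_actions))
        self.lr, self.gamma = lr, gamma

    def act(self, symbol):
        return int(np.argmax(self.q[symbol]))        # greedy action

    def update(self, s, a, reward, s_next):
        # Standard one-step Q-learning update on symbolic states.
        target = reward + self.gamma * self.q[s_next].max()
        self.q[s, a] += self.lr * (target - self.q[s, a])

# One interaction step: pixels -> symbol -> action -> value update.
frontend = SymbolicFrontend(n_symbols=4, n_actions=2)
frame = np.ones((8, 8))
s = neural_backend(frame)
a = frontend.act(s)
frontend.update(s, a, reward=1.0, s_next=s)
```

The point of the sketch is the clean interface: the front end never sees pixels, only symbol ids, which is exactly the “juxtaposition” the paragraph above contrasts with more tightly entangled designs.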
This shows that the boundary between “purist” and “hybrid” approaches is quite blurry. I therefore think the disagreement is more a difference in emphasis than a fundamental divide.
Link to the original text: https://towardsdatascience.co…
The above information is from the Internet, compiled by the official account of the JD Cloud developer community, and does not represent the views of JD Cloud.