Notes
Chapter 2: Why the Human Brain Learns Better Than Current Artificial Intelligence Machines
- Artificial neural networks primarily implement the unconscious operations of the brain: Dehaene, Lau, and Kouider, 2017.
- Artificial neural networks tend to learn superficial regularities: Jo and Bengio, 2017.
- Generation of images that confuse humans as well as artificial neural networks: Elsayed et al., 2018.
- Artificial neural network that learns to recognize CAPTCHAs: George et al., 2017.
- Critique of the learning speed in artificial neural networks: Lake, Ullman, Tenenbaum, and Gershman, 2017.
- Lack of systematicity in artificial neural networks: Fodor and Pylyshyn, 1988; Fodor and McLaughlin, 1990.
- Language of thought hypothesis: Amalric, Wang, et al., 2017; Fodor, 1975.
- Learning to count as program inference: Piantadosi, Tenenbaum, and Goodman, 2012; see also Piantadosi, Tenenbaum, and Goodman, 2016.
- Recursive representations as a singularity of the human species: Dehaene, Meyniel, Wacongne, Wang, and Pallier, 2015; Everaert, Huybregts, Chomsky, Berwick, and Bolhuis, 2015; Hauser, Chomsky, and Fitch, 2002; Hauser and Watumull, 2017.
- Human singularity in coding an elementary sequence of sounds: Wang, Uhrig, Jarraya, and Dehaene, 2015.
- Acquisition of geometrical rules (slow in monkeys, ultrafast in children): Jiang et al., 2018.
- The conscious human brain resembles a serial Turing machine: Sackur and Dehaene, 2009; Zylberberg, Dehaene, Roelfsema, and Sigman, 2011.
- Fast learning of word meaning: Tenenbaum, Kemp, Griffiths, and Goodman, 2011; Xu and Tenenbaum, 2007.
- Word learning based on shared attention: Baldwin et al., 1996.
- Knowledge of determiners and other function words at twelve months: Cyr and Shi, 2013; Shi and Lepage, 2008.
- Mutual exclusivity principle in word learning: Carey and Bartlett, 1978; Clark, 1988; Markman and Wachtel, 1988; Markman, Wasow, and Hansen, 2003.
- Reduced reliance on mutual exclusivity in bilinguals: Byers-Heinlein and Werker, 2009.
- Rico, a dog who learned hundreds of words: Kaminski, Call, and Fischer, 2004.
- Modelling of an “artificial scientist”: Kemp and Tenenbaum, 2008.
- Discovering the causality principle: Goodman, Ullman, and Tenenbaum, 2011; Tenenbaum et al., 2011.
- The brain as a generative model: Lake, Salakhutdinov, and Tenenbaum, 2015; Lake et al., 2017.
- Probability theory is the logic of science: Jaynes, 2003.
- Bayesian model of information processing in the cortex: Friston, 2005. For empirical data on hierarchical passing of probabilistic error messages in the cortex, see, for instance, Chao, Takaura, Wang, Fujii, and Dehaene, 2018; Wacongne et al., 2011.
Part Two: How the Human Brain Learns