Chapter 2: Why the Human Brain Learns Better Than Current Artificial Intelligence

  1. Artificial neural networks primarily implement the unconscious operations of the brain: Dehaene, Lau, and Kouider, 2017.

  2. Artificial neural networks tend to learn superficial regularities: Jo and Bengio, 2017.

  3. Generation of images that confuse humans as well as artificial neural networks: Elsayed et al., 2018.

  4. Artificial neural network that learns to recognize CAPTCHAs: George et al., 2017.

  5. Critique of the learning speed in artificial neural networks: Lake, Ullman, Tenenbaum, and Gershman, 2017.

  6. Lack of systematicity in artificial neural networks: Fodor and Pylyshyn, 1988; Fodor and McLaughlin, 1990.

  7. Language of thought hypothesis: Amalric, Wang, et al., 2017; Fodor, 1975.

  8. Learning to count as program inference: Piantadosi, Tenenbaum, and Goodman, 2012; see also Piantadosi, Tenenbaum, and Goodman, 2016.

  9. Recursive representations as a singularity of the human species: Dehaene, Meyniel, Wacongne, Wang, and Pallier, 2015; Everaert, Huybregts, Chomsky, Berwick, and Bolhuis, 2015; Hauser, Chomsky, and Fitch, 2002; Hauser and Watumull, 2017.

  10. Human singularity in coding an elementary sequence of sounds: Wang, Uhrig, Jarraya, and Dehaene, 2015.

  11. Acquisition of geometrical rules—slow in monkeys, ultrafast in children: Jiang et al., 2018.

  12. The conscious human brain resembles a serial Turing machine: Sackur and Dehaene, 2009; Zylberberg, Dehaene, Roelfsema, and Sigman, 2011.

  13. Fast learning of word meaning: Tenenbaum, Kemp, Griffiths, and Goodman, 2011; Xu and Tenenbaum, 2007.

  14. Word learning based on shared attention: Baldwin et al., 1996.

  15. Knowledge of determiners and other function words at twelve months: Cyr and Shi, 2013; Shi and Lepage, 2008.

  16. Mutual exclusivity principle in word learning: Carey and Bartlett, 1978; Clark, 1988; Markman and Wachtel, 1988; Markman, Wasow, and Hansen, 2003.

  17. Reduced reliance on mutual exclusivity in bilinguals: Byers-Heinlein and Werker, 2009.

  18. Rico, a dog who learned hundreds of words: Kaminski, Call, and Fischer, 2004.

  19. Modelling of an “artificial scientist”: Kemp and Tenenbaum, 2008.

  20. Discovering the causality principle: Goodman, Ullman, and Tenenbaum, 2011; Tenenbaum et al., 2011.

  21. The brain as a generative model: Lake, Salakhutdinov, and Tenenbaum, 2015; Lake et al., 2017.

  22. Probability theory is the logic of science: Jaynes, 2003.

  23. Bayesian model of information processing in the cortex: Friston, 2005. For empirical data on hierarchical passing of probabilistic error messages in the cortex, see, for instance, Chao, Takaura, Wang, Fujii, and Dehaene, 2018; Wacongne et al., 2011.

Part Two: How the Human Brain Learns


Chapter 1: Seven Definitions of Learning

Chapter 3: Babies' Invisible Knowledge