Will human-like machines make human-like mistakes?
Published online by Cambridge University Press: 10 November 2017
Abstract
Although we agree with Lake et al.'s central argument, there are numerous flaws in the way people use causal models. Our models are often incorrect, resistant to correction, and applied inappropriately to new situations. These deficiencies are pervasive and have real-world consequences. Developers of machines with similar capacities should proceed with caution.
Type: Open Peer Commentary
Copyright © Cambridge University Press 2017
Target article: Building machines that learn and think like people
Author response: Ingredients of intelligence: From classic debates to an engineering roadmap