Book contents
- Creating Human Nature
- Copyright page
- Dedication
- Contents
- Acknowledgments
- Introduction
- Part I The Political Bioethics of Regulating Genetic Engineering
- Part II The Political Dimensions of Engineering Intelligence
- 5 Threshold Capacities for Political Participation
- 6 Political Capacity of Human Intelligence and the Challenge of AI
- 7 Political Ambiguity of Personalized Education Informed by the Pupil’s Genome
- Part III Inequality as an Unintended Consequence Locally and as a Planetary Phenomenon
- References
- Index
6 - Political Capacity of Human Intelligence and the Challenge of AI
from Part II - The Political Dimensions of Engineering Intelligence
Published online by Cambridge University Press: 13 October 2022
Summary
Natural and artificial intelligence differ in their histories and patterns of development. Human or natural intelligence (HI) is the product of a deep history of undirected, natural evolution — an evolution mixing biology, natural environment, and cultural environment. Artificial intelligence (AI), by contrast, emerged within a very brief, highly reflective, and always directed history of technological development.1 This difference is significant to the extent that we view AI by analogy to HI. Not surprisingly, early researchers conceptualized AI in terms they took to be congruent with HI. Perhaps the greatest congruence concerns the use of symbols. Marvin Minsky (1952), for example, sought a form of AI by analogy to the human mind’s capacity to manipulate symbols. AI processes symbols serially. HI may do so as well,2 even as parallel processing is essential for many human tasks and AI can emulate it, for example in robot vision. For both, symbols can represent contexts of human action and interaction (Pickering 1993: 126). Both HI and AI demarcate a domain of operation; both discriminate between self and non-self, friend and foe, safe and dangerous. Both are “defined by the dynamics” of their respective networks (Varela et al. 1988: 365). Both may be described in terms of “enactive cognition,” in which intelligence interacts with, learns from, and even selectively creates its environment (Sandini et al. 2007: 309). Enactivist cognition contrasts with “our usual view of cognition as being a more or less accurate representation of a world already full of signification, and where the system picks up information to solve a given problem, posed in advance” (ibid.: 373).
- Type: Chapter
- Information: Creating Human Nature: The Political Challenges of Genetic Engineering, pp. 119–140
- Publisher: Cambridge University Press
- Print publication year: 2022