Book contents
- Frontmatter
- Contents
- General Introduction
- PART I THE NATURE OF MACHINE ETHICS
- PART II THE IMPORTANCE OF MACHINE ETHICS
- Introduction
- 4 Why Machine Ethics?
- 5 Authenticity in the Age of Digital Companions
- PART III ISSUES CONCERNING MACHINE ETHICS
- PART IV APPROACHES TO MACHINE ETHICS
- PART V VISIONS FOR MACHINE ETHICS
Introduction
from PART II - THE IMPORTANCE OF MACHINE ETHICS
Published online by Cambridge University Press: 01 June 2011
Summary
Colin Allen, Wendell Wallach, and Iva Smit maintain in “Why Machine Ethics?” that it is time to begin adding ethical decision making to computers and robots. They point out that “[d]riverless [train] systems put machines in the position of making split-second decisions that could have life or death implications” if people are on one or more tracks that the systems could steer toward or avoid. The ethical dilemmas raised are much like the classic “trolley” cases often discussed in ethics courses. “The computer revolution is continuing to promote reliance on automation, and autonomous systems are coming whether we like it or not,” they say. Shouldn't we try to ensure that they act in an ethical fashion?
Allen et al. don't believe that “increasing reliance on autonomous systems will undermine our basic humanity” or that robots will eventually “enslave or exterminate us.” However, in order to ensure that the benefits of the new technologies outweigh the costs, “we'll need to integrate artificial moral agents into these new technologies … to uphold shared ethical standards.” It won't be easy, in their view, “but it is necessary and inevitable.”
It is not necessary, according to Allen et al., that the autonomous machines we create be moral agents in the sense that human beings are. They don't have to have free will, for instance. We only need to design them “to act as if they were moral agents … we must be confident that their behavior satisfies appropriate norms.”
Machine Ethics, pp. 47–50. Publisher: Cambridge University Press. Print publication year: 2011.