Ethics in Progress, Volume 9 (2018), Issue 1, pp. 44-61
With the development of autonomous robots, one day probably capable of speaking, thinking, learning, self-reflecting, and sharing emotions — in fact, with the rise of robots becoming artificial moral agents (AMAs) — robot scientists like Abney, Veruggio, and Petersen are already optimistic that sooner or later we will need to call those robots “people” or rather “Artificial People” (AP). The paper rejects this forecast, arguing that it rests on three conflicting metaphysical assumptions. The first is the idea that it is possible to precisely define persons and apply that definition to robots, or use it to differentiate human beings from robots. Further, the argument for APs favors a position of non-reductive physicalism (second assumption) and materialism (third assumption), ultimately producing implausible convictions about future robotics. I therefore suggest following Christine Korsgaard’s defence of animals as ends in themselves with moral standing. I show that her argument can be extended to robots, too, at least to robots which are capable of pursuing their own good (even if they are not rational). Korsgaard’s interpretation of Kant offers an option that allows us to leave complicated metaphysical notions like “person” or “subject” out of the debate, without denying robots’ status as agents.
Wydawnictwo Naukowe Instytutu Filozofii UAM
ends in themselves
Why Can’t We Regard Robots As People?
Appears in Collections:
Ethics in Progress, 2018, Volume 9, Issue 1
Files in This Item:
4 André Schmiljun (Berlin) Why Can’t We Regard Robots as People.pdf
This item is licensed under a Creative Commons License.