Title: The Ethical Significance of Human Likeness in Robotics and AI
Author: Remmers, Peter
Date issued: 2019
Date available: 2019-11-28
Citation: Ethics in Progress. Vol. 10 (2019). No. 2, pp. 52-67.
ISSN: 2084-9257
URI: http://hdl.handle.net/10593/25196
DOI: https://doi.org/10.14746/eip.2019.2.6
Language: eng
Access rights: info:eu-repo/semantics/openAccess
Keywords: AI; robotics; human likeness; anthropomorphism; ethical implication; Strong AI; Turing's test; autonomy

Abstract: A defining goal of research in AI and robotics is to build technical artefacts as substitutes, assistants or enhancements of human action and decision-making. But both in reflection on these technologies and in interaction with the respective technical artefacts, we sometimes encounter certain kinds of human likenesses. To clarify their significance, three aspects are highlighted. First, I will broadly investigate some relations between humans and artificial agents by recalling certain points from the debates on Strong AI, on Turing's Test, on the concept of autonomy, and on anthropomorphism in human-machine interaction. Second, I will argue for the claim that there are no serious ethical issues involved in the theoretical aspects of technological human likeness. Third, I will suggest that although human likeness may not be ethically significant on the philosophical and conceptual levels, strategies to use anthropomorphism in the technological design of human-machine collaborations are ethically significant, because artificial agents are specifically designed to be treated in ways we usually treat humans.