Abstract
Driverless cars, autonomous weapons systems and cyberweapons, algorithms for financial transactions and loan granting, Watson-like systems for medical diagnosis and treatment: autonomous robots and software systems are increasingly required to perform tasks with significant implications for human duties, responsibilities, and the proper respect of fundamental rights. Accordingly, the controllers of these artificial autonomous agents must be endowed with suitable ethical policies. I will show that major obstacles to identifying ethical policies for artificial autonomous agents arise from moral dilemmas and from conflicts between different theoretical frameworks in normative ethics. Drawing on the case studies of driverless cars and autonomous weapons systems, I will discuss various strategies for defusing these moral conflicts and converging on locally shared ethical policies. Finally, I will argue that engineering control problems concerning ethical policies for autonomous systems raise fundamental questions about the ultimate justification of human morality and the consistency of its underlying ethical models.