Ethical guidelines for the use of artificial intelligence and value conflict challenges

Research output: Contribution to journal › Journal article › Research › peer-review

Abstract

The aim of this article is to articulate and critically discuss different answers to the following question: since the values (such as accuracy, privacy, non-discrimination and transparency) typically entailed in ethical guidelines for the use of Artificial Intelligence, e.g. algorithm-based sentencing, often are or can be in conflict with one another, how should decision-makers deal with such conflicts? First, the focus is on clarifying some of the general advantages of using such guidelines in an ethical analysis of the use of AI; some disadvantages will also be presented and critically discussed. Second, it will be shown that we need to distinguish between three kinds of conflict that can arise for ethical guidelines used in the moral assessment of AI. This will be followed by a critical discussion of different answers to the question of how to handle what we shall call internal and external conflicts of value. Finally, there will be a critical discussion of three different strategies for resolving what is called a ‘genuine value conflict’: the ‘accepting the existence of irresolvable conflict’ view, the ranking view and value monism. This article defends the ‘accepting the existence of irresolvable conflict’ view. It also argues that even though the ranking view and value monism are, from a merely theoretical (or philosophical) point of view, better equipped to solve genuine value conflicts among the values in, for example, ethical guidelines for artificial intelligence, this is not the case in real-life decision-making.
Original language: English
Journal: Etikk i Praksis
Volume: 15
Issue number: 1
Pages (from-to): 25-40
Number of pages: 15
ISSN: 1890-3991
DOIs
Publication status: Published - 17 Jun 2021

Keywords

  • AI
  • ethical guidelines
  • algorithm-based sentencing
  • value conflicts
