Paul Dumouchel authors two articles on the challenges of AI
In a first article, the professor of philosophy at the Graduate School of Core Ethics and Frontier Sciences at Ritsumeikan University (Kyoto, Japan) examines the very nature of intelligence:
Abstract: The idea of artificial intelligence implies the existence of a form of intelligence that is “natural,” or at least not artificial. The problem is that intelligence, whether “natural” or “artificial,” is not well defined: it is hard to say what, exactly, is or constitutes intelligence. This difficulty makes it impossible to measure human intelligence against artificial intelligence on a unique scale. It does not, however, prevent us from comparing them; rather, it changes the sense and meaning of such comparisons. Comparing artificial intelligence with human intelligence could allow us to understand both forms better. This paper thus aims to compare and distinguish these two forms of intelligence, focusing on three issues: forms of embodiment, autonomy, and judgment. Doing so, I argue, should enable us to have a better view of the promises and limitations of present-day artificial intelligence, along with its benefits and dangers and the place we should make for it in our culture and society.
He then tackles the thorny question of machine morality:
Abstract: A recent large-scale survey, “The Moral Machine experiment” (2018), aggregated 39.61 million decisions across 233 countries and territories, reflecting people’s preferences as to who should be spared in fatal moral dilemmas involving autonomous road vehicles. The experiment collected “big data” to reach conclusions concerning the moral rules that should be implemented in these vehicles. In this paper, first I question the philosophical presuppositions of the experiment, arguing that it has very little to do with ethics or moral norms, but essentially constitutes a market survey concerning the social acceptance of a dangerous technology. Then, I criticize the myth of moral machines and the illusion that abandoning to automated systems the power to “autonomously” take lethal “decisions” is a radically new phenomenon. Finally, I suggest a different solution to the difficulties addressed by the Moral Machine experiment and make political and legal suggestions concerning policy towards “autonomous road vehicles.”
This content was last updated on February 5, 2020, at 3:38 p.m.