News at eLiteracias

✇ Ethics and Information Technology

Understanding responsibility in Responsible AI. Dianoetic virtues and the hard problem of context

30 September 2021, 00:00

Abstract

During the last decade there has been burgeoning research concerning the ways in which we should think of and apply the concept of responsibility for Artificial Intelligence. Despite this conceptual richness, there is still a lack of consensus regarding what Responsible AI entails on both conceptual and practical levels. The aim of this paper is to connect the ethical dimension of responsibility in Responsible AI with Aristotelian virtue ethics, where notions of context and dianoetic virtues play a grounding role for the concept of moral responsibility. The paper starts by highlighting the important difficulties in assigning responsibility to either technologies themselves or to their developers. Top-down and bottom-up approaches to moral responsibility are then contrasted, as we explore how they could inform debates about Responsible AI. We highlight the limits of the former ethical approaches and build the case for classical Aristotelian virtue ethics. We show that two building blocks of Aristotle’s ethics, dianoetic virtues and the context of actions, although largely ignored in the literature, can shed light on how we could think of moral responsibility for both AI and humans. We end by exploring the practical implications of this particular understanding of moral responsibility along the triadic dimensions of ethics by design, ethics in design and ethics for designers.

✇ Ethics and Information Technology

AI recruitment algorithms and the dehumanization problem

29 September 2021, 00:00

Abstract

According to a recent survey by the HR Research Institute, as the presence of artificial intelligence (AI) becomes increasingly common in the workplace, HR professionals are worried that the use of recruitment algorithms will lead to a “dehumanization” of the hiring process. Our main goals in this paper are threefold: (i) to bring attention to this neglected issue, (ii) to clarify what exactly this concern about dehumanization might amount to, and (iii) to sketch an argument for why dehumanizing the hiring process is ethically suspect. After distinguishing the use of the term “dehumanization” in this context (i.e. removing the human presence) from its more common meaning in the interdisciplinary field of dehumanization studies (i.e. conceiving of other humans as subhuman), we argue that the use of hiring algorithms may negatively impact the employee-employer relationship. We argue that there are good independent reasons to accept a substantive employee-employer relationship, as well as an applicant-employer relationship, both of which are consistent with a stakeholder theory of corporate obligations. We further argue that dehumanizing the hiring process may negatively impact these relationships because of the difference between the values of human recruiters and the values embedded in recruitment algorithms. Drawing on Nguyen’s (in: Lackey, Applied Epistemology, Oxford University Press, 2021) critique of how Twitter “gamifies communication”, we argue that replacing human recruiters with algorithms imports artificial values into the hiring process. We close by briefly considering some ways to potentially mitigate the problems posed by recruitment algorithms, along with the possibility that some difficult trade-offs will need to be made.

✇ Ethics and Information Technology

Addressing inequal risk exposure in the development of automated vehicles

18 September 2021, 00:00

Abstract

Automated vehicles (AVs) are expected to operate on public roads, together with non-automated vehicles and other road users such as pedestrians or bicycles. Recent ethical reports and guidelines raise worries that AVs will introduce injustice or reinforce existing social inequalities in road traffic. One major injustice concern in today’s traffic is that different types of road users are exposed differently to risks of corporal harm. In the first part of the paper, we discuss the responsibility of AV developers to address existing injustice concerns regarding risk exposure as well as approaches on how to fulfill the responsibility for a fairer distribution of risk. In contrast to popular approaches on the ethics of risk distribution in unavoidable accident cases, we focus on low and moderate risk situations, referred to as routine driving. For routine driving, the obligation to distribute risks fairly must be discussed in the context of risk-taking and risk-acceptance, balancing safety objectives of occupants and other road users with driving utility. In the second part of the paper, we present a typical architecture for decentralized automated driving which contains a dedicated module for real-time risk estimation and management. We examine how risk estimation modules can be adjusted and parameterized to redress some inequalities.
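[Editor's illustration] The abstract does not spell out how the risk-estimation module is parameterized. The minimal Python sketch below is not drawn from the paper: the names, severity weights, and the simple probability-times-severity risk model are all assumptions, meant only to show how such a module could be weighted to redress unequal exposure of unprotected road users during routine driving.

    # Hypothetical sketch of a parameterizable risk-estimation step.
    # All values and names are illustrative assumptions, not the paper's module.

    # Assumed baseline severity of corporal harm by road-user type.
    SEVERITY = {"car_occupant": 1.0, "cyclist": 3.0, "pedestrian": 4.0}

    def expected_risk(p_collision: float, road_user: str, equity_weight: float = 1.0) -> float:
        """Expected harm = collision probability x severity, optionally
        up-weighted for unprotected road users to shift caution toward them."""
        w = equity_weight if road_user != "car_occupant" else 1.0
        return p_collision * SEVERITY[road_user] * w

    # A planner could then prefer trajectories with lower weighted risk to
    # unprotected road users, trading some driving utility for fairness.
    candidates = {"keep_speed": [(0.02, "pedestrian")], "slow_down": [(0.005, "pedestrian")]}
    best = min(candidates, key=lambda t: sum(expected_risk(p, u, equity_weight=2.0)
                                             for p, u in candidates[t]))
    print(best)  # -> "slow_down"

Raising the assumed equity_weight makes the planner more cautious around pedestrians and cyclists at the cost of driving utility, which is the kind of balancing the abstract describes for routine driving.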

✇ Ethics and Information Technology

The European Commission report on ethics of connected and automated vehicles and the future of ethics of transportation

14 September 2021, 00:00

Abstract

The paper has two goals. The first is to present the main results of the recent report Ethics of Connected and Automated Vehicles: recommendations on road safety, privacy, fairness, explainability and responsibility written by the Horizon 2020 European Commission Expert Group to advise on specific ethical issues raised by driverless mobility, of which the author of this paper has been a member and rapporteur. The second is to present some broader ethical and philosophical implications of these recommendations, and to use them to contribute to the establishment of Ethics of Transportation as an independent branch of applied ethics. The recent debate on the ethics of Connected and Automated Vehicles (CAVs) presents a paradox and an opportunity. The paradox is the presence of a flourishing debate on the ethics of one very specific transportation technology without ethics of transportation being in itself a well-established academic discipline. The opportunity is that now that a spotlight has been turned on the ethical dimensions of CAVs, it may be easier to establish a broader debate on ethics of transportation. While the 20 recommendations of the EU report are grouped into three macro-areas: road safety, data ethics, and responsibility, in this paper they will be grouped according to eight philosophical themes: Responsible Innovation, road justice, road safety, freedom, human control, privacy, data fairness, and responsibility. These are proposed as the first topics for a new ethics of transportation.

✇ Ethics and Information Technology

May Kantians commit virtual killings that affect no other persons?

14 September 2021, 00:00

Abstract

Are acts of violence performed in virtual environments ever morally wrong, even when no other persons are affected? While some such acts surely reflect deficient moral character, I focus on the moral rightness or wrongness of acts. Typically it’s thought that, on Kant’s moral theory, an act of virtual violence is morally wrong (i.e., violates the Categorical Imperative) only if the act mistreats another person. But I argue that, on Kant’s moral theory, some acts of virtual violence can be morally wrong, even when no other persons or their avatars are affected. First, I explain why many have thought that, in general on Kant’s moral theory, virtual acts affecting no other persons or their avatars can’t violate the Categorical Imperative. For there are real-world acts that clearly do, but it seems that when we consider the same sorts of acts done alone in a virtual environment, they don’t violate the Categorical Imperative, because no other persons were involved. But then, how could any virtual acts like these, which affect no other persons or their avatars, violate the Categorical Imperative? I then argue that there indeed can be such cases of morally wrong virtual acts—some due to an actor’s having erroneous beliefs about morally relevant facts, and others due not to error, but to the actor’s intention leaving out morally relevant facts while immersed in a virtual environment. I conclude by considering some implications of my arguments for both our present technological context and the future.

✇ Ethics and Information Technology

Can the predictive processing model of the mind ameliorate the value-alignment problem?

6 September 2021, 00:00

Abstract

How do we ensure that future generally intelligent AI share our values? This is the value-alignment problem. It is a weighty matter. After all, if AI are neutral with respect to our wellbeing, or worse, actively hostile toward us, then they pose an existential threat to humanity. Some philosophers have argued that one important way in which we can mitigate this threat is to develop only AI that shares our values or that has values that ‘align with’ ours. However, there is nothing to guarantee that this policy will be universally implemented—in particular, ‘bad actors’ are likely to flout it. In this paper, I show how the predictive processing model of the mind, currently ascendant in cognitive science, may ameliorate the value-alignment problem. In essence, I argue that there is a plurality of reasons why any future generally intelligent AI will possess a predictive processing cognitive architecture (e.g. because we decide to build them that way; because it is the only possible cognitive architecture that can underpin general intelligence; because it is the easiest way to create AI). I also argue that if future generally intelligent AI possess a predictive processing cognitive architecture, then they will come to share our pro-moral motivations (of valuing humanity as an end; avoiding maleficent actions; etc.), regardless of their initial motivation set. Consequently, these AI will pose a minimal threat to humanity. In this way then, I conclude, the value-alignment problem is significantly ameliorated under the assumption that future generally intelligent AI will possess a predictive processing cognitive architecture.
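[Editor's illustration] For readers unfamiliar with the term, the core of a predictive processing architecture is a prediction-error-minimization loop. The toy Python sketch below is a textbook-style illustration of that loop, not anything drawn from the paper; the variable names and update rule are assumptions.

    # Toy prediction-error-minimization loop (illustrative only):
    # the agent keeps an internal estimate, predicts its sensory input,
    # and updates the estimate to reduce the prediction error.

    def update_estimate(mu: float, observation: float, learning_rate: float = 0.1) -> float:
        prediction_error = observation - mu           # surprise relative to the internal model
        return mu + learning_rate * prediction_error  # revise the model to shrink the error

    mu = 0.0
    for obs in [1.0, 1.2, 0.9, 1.1]:                  # stream of sensory observations
        mu = update_estimate(mu, obs)
    print(round(mu, 3))  # the estimate drifts toward the statistics of its inputs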

✇ Ethics and Information Technology

Psychological consequences of legal responsibility misattribution associated with automated vehicles

2 September 2021, 00:00

Abstract

A human driver and an automated driving system (ADS) might share control of automated vehicles (AVs) in the near future. This raises many concerns associated with the assignment of responsibility for negative outcomes caused by them; one is that the human driver might be required to bear the brunt of moral and legal responsibilities. The psychological consequences of responsibility misattribution have not yet been examined. We designed a hypothetical crash similar to Uber’s 2018 fatal crash (which was jointly caused by its distracted driver and the malfunctioning ADS). We incorporated five legal responsibility attributions (the human driver bears full, primary, half, secondary, or no liability; correspondingly, the AV manufacturer bears no, secondary, half, primary, or full liability). Participants (N = 1524) chose their preferred liability attribution and then were randomly assigned into one of the five actual liability attribution conditions. They then responded to a series of questions concerning liability assignment (fairness and reasonableness), the crash (e.g., acceptability), and AVs (e.g., intention to buy and trust). Slightly more than 50% of participants thought that the human driver should bear full or primary liability. Legal responsibility misattribution (operationalized as the difference between actual and preferred liability attributions) negatively influenced these responses, regardless of whether human or manufacturer liability was over-attributed. Over-attributing human liability (vs. manufacturer liability) had stronger negative effects. Improper liability attribution might hinder the adoption of AVs. Public opinion should not be ignored in developing a legal framework for AVs.
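[Editor's illustration] The operationalization mentioned in the abstract (misattribution as the difference between actual and preferred liability attributions) can be made concrete with a small sketch. The 0-4 numeric coding of the five liability levels below is an assumption for illustration, not the study's coding scheme.

    # Sketch of the stated operationalization; the numeric coding is assumed.
    LEVELS = {"none": 0, "secondary": 1, "half": 2, "primary": 3, "full": 4}

    def misattribution(actual: str, preferred: str) -> int:
        """Positive values over-attribute human-driver liability relative to the
        participant's preference; negative values over-attribute manufacturer liability."""
        return LEVELS[actual] - LEVELS[preferred]

    print(misattribution(actual="full", preferred="half"))     # +2: human driver over-blamed
    print(misattribution(actual="none", preferred="primary"))  # -3: manufacturer over-blamed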

✇ Ethics and Information Technology

Non-empirical problems in fair machine learning

5 August 2021, 00:00

Abstract

The problem of fair machine learning has drawn much attention over the last few years and the bulk of offered solutions are, in principle, empirical. However, algorithmic fairness also raises important conceptual issues that would fail to be addressed if one relies entirely on empirical considerations. Herein, I will argue that the current debate has developed an empirical framework that has brought important contributions to the development of algorithmic decision-making, such as new techniques to discover and prevent discrimination, additional assessment criteria, and analyses of the interaction between fairness and predictive accuracy. However, the same framework has also suggested higher-order issues regarding the translation of fairness into metrics and quantifiable trade-offs. Although the (empirical) tools which have been developed so far are essential to address discrimination encoded in data and algorithms, their integration into society elicits key (conceptual) questions such as: What kind of assumptions and decisions underlies the empirical framework? How do the results of the empirical approach penetrate public debate? What kind of reflection and deliberation should stakeholders have over available fairness metrics? I will outline the empirical approach to fair machine learning, i.e. how the problem is framed and addressed, and suggest that there are important non-empirical issues that should be tackled. While this work will focus on the problem of algorithmic fairness, the lesson can extend to other conceptual problems in the analysis of algorithmic decision-making such as privacy and explainability.
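[Editor's illustration] One concrete example of the "translation of fairness into metrics" the abstract refers to is the demographic-parity difference between two groups' positive-decision rates. The Python sketch below uses invented data and a single standard metric purely as an illustration of the empirical framework under discussion; it is not taken from the paper.

    # Demographic-parity difference: gap between groups' positive-decision rates.
    def positive_rate(decisions):
        return sum(decisions) / len(decisions)

    def demographic_parity_difference(decisions_group_a, decisions_group_b):
        return abs(positive_rate(decisions_group_a) - positive_rate(decisions_group_b))

    # 1 = favorable decision (e.g., application accepted), 0 = unfavorable.
    group_a = [1, 1, 0, 1, 0]
    group_b = [1, 0, 0, 0, 0]
    print(round(demographic_parity_difference(group_a, group_b), 2))  # 0.4: a measurable disparity

Which metric to choose, what threshold counts as fair, and how to trade such a metric off against predictive accuracy are exactly the non-empirical, conceptual questions the abstract says remain open once the metric itself is computed.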

✇ Ethics and Information Technology

Predictive privacy: towards an applied ethics of data analytics

31 July 2021, 00:00

Abstract

Data analytics and data-driven approaches in Machine Learning are now among the most hailed computing technologies in many industrial domains. One major application is predictive analytics, which is used to predict sensitive attributes, future behavior, or cost, risk and utility functions associated with target groups or individuals based on large sets of behavioral and usage data. This paper stresses the severe ethical and data protection implications of predictive analytics if it is used to predict sensitive information about single individuals or treat individuals differently based on the data many unrelated individuals provided. To tackle these concerns in an applied ethics, first, the paper introduces the concept of “predictive privacy” to formulate an ethical principle protecting individuals and groups against differential treatment based on Machine Learning and Big Data analytics. Secondly, it analyses the typical data processing cycle of predictive systems to provide a step-by-step discussion of ethical implications, locating occurrences of predictive privacy violations. Thirdly, the paper sheds light on what is qualitatively new in the way predictive analytics challenges ethical principles such as human dignity and the (liberal) notion of individual privacy. These new challenges arise when predictive systems transform statistical inferences, which provide knowledge about the cohort of training data donors, into individual predictions, thereby crossing what I call the “prediction gap”. Finally, the paper summarizes that data protection in the age of predictive analytics is a collective matter as we face situations where an individual’s (or group’s) privacy is violated using data other individuals provide about themselves, possibly even anonymously.
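[Editor's illustration] The "prediction gap" the abstract describes, turning statistical knowledge about a cohort of data donors into a prediction about a single individual, can be pictured with a toy sketch. The data, features, and model choice below are illustrative assumptions (scikit-learn is assumed to be available); they are not the paper's example.

    # Toy sketch of the "prediction gap": a model fitted to donors' volunteered
    # data is used to infer a sensitive attribute about someone who never disclosed it.
    from sklearn.linear_model import LogisticRegression

    # Behavioral/usage features volunteered by donors, with a sensitive label.
    donor_features = [[5, 1], [7, 0], [2, 1], [1, 0], [6, 1], [3, 0]]
    donor_sensitive_label = [1, 1, 0, 0, 1, 0]

    model = LogisticRegression().fit(donor_features, donor_sensitive_label)

    # An unrelated individual's usage data now yields an inference about them.
    print(model.predict([[6, 0]]))  # an inferred sensitive attribute the person never provided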

✇ Ethics and Information Technology

Automated vehicles and the morality of post-collision behavior

23 July 2021, 00:00

Abstract

We address the considerations of the European Commission Expert Group on the ethics of connected and automated vehicles regarding data provision in the event of collisions. While human drivers’ appropriate post-collision behavior is clearly defined, regulations for automated driving do not provide for collision detection. We agree it is important to systematically incorporate citizens’ intuitions into the discourse on the ethics of automated vehicles. Therefore, we investigate whether people expect automated vehicles to behave like humans after an accident, even if this behavior does not directly affect the consequences of the accident. We find that appropriate post-collision behavior substantially influences people’s evaluation of the underlying crash scenario. Moreover, people clearly think that automated vehicles can and should record the accident, stop at the site, and call the police. They are even willing to pay for technological features that enable post-collision behavior. Our study might begin a research program on post-collision behavior, enriching the empirically informed study of automated driving ethics that so far exclusively focuses on pre-collision behavior.

✇ Ethics and Information Technology

Does kindness towards robots lead to virtue? A reply to Sparrow’s asymmetry argument

18 July 2021, 00:00

Abstract

Does cruel behavior towards robots lead to vice, whereas kind behavior does not lead to virtue? This paper presents a critical response to Sparrow’s argument that there is an asymmetry in the way we (should) think about virtue and robots. It discusses how much we should praise virtue as opposed to vice, how virtue relates to practical knowledge and wisdom, how much illusion is needed for it to be a barrier to virtue, the relation between virtue and consequences, the moral relevance of the reality requirement and the different ways one can deal with it, the risk of anthropocentric bias in this discussion, and the underlying epistemological assumptions and political questions. This response is not only relevant to Sparrow’s argument or to robot ethics but also touches upon central issues in virtue ethics.

✇ Ethics and Information Technology

Ethical concerns in rescue robotics: a scoping review

2 July 2021, 00:00

Abstract

Rescue operations taking place in disaster settings can be fraught with ethical challenges. Further ethical challenges will likely be introduced by the use of robots, which are expected to soon become commonplace in search and rescue missions and disaster recovery efforts. To help focus timely reflection on the ethical considerations associated with the deployment of rescue robots, we have conducted a scoping review exploring the relevant academic literature following a widely recognized scoping review framework. Of the 429 papers identified by the first screening, six fulfilled the selection criteria of our literature review. Quantitative data synthesis showed that a subset of the papers includes a qualitative experimental exploration of the ethical issues at hand, with workshops involving both experts and potential users. Most use simulations or scenarios to anticipate the ethical implications and other consequences of using robots in search and rescue missions. Qualitative text analysis identified seven core ethically relevant themes: fairness and discrimination; false or excessive expectations; labor replacement; privacy; responsibility; safety; trust. Our results suggest that the literature on ethics in rescue robotics is scant and disparate, but the papers identified uniformly endorsed a proactive approach to handling the ethical concerns associated with the use of robots in disaster scenarios.

✇ Ethics and Information Technology

Ethical dilemmas are really important to potential adopters of autonomous vehicles

2 July 2021, 00:00

Abstract

The ethical dilemma (ED) of whether autonomous vehicles (AVs) should protect the passengers or pedestrians when harm is unavoidable has been widely researched and debated. Several behavioral scientists have sought public opinion on this issue, based on the premise that EDs are critical to resolve for AV adoption. However, many scholars and industry participants have downplayed the importance of these edge cases. Policy makers also advocate a focus on higher level ethical principles rather than on a specific solution to EDs. But conspicuously absent from this debate is the view of the consumers or potential adopters, who will be instrumental to the success of AVs. The current research investigated this issue both from a theoretical standpoint and through empirical research. The literature on innovation adoption and risk perception suggests that EDs will be heavily weighted by potential adopters of AVs. Two studies conducted with a broad sample of consumers verified this assertion. The results from these studies showed that people associated EDs with the highest risk and considered EDs as the most important issue to address as compared to the other technical, legal and ethical issues facing AVs. As such, EDs need to be addressed to ensure robustness in the design of AVs and to assure consumers of the safety of this promising technology. Some preliminary evidence is provided about interventions to resolve the social dilemma in EDs and about the ethical preferences of prospective early adopters of AVs.

✇ Ethics and Information Technology

How can we know a self-driving car is safe?

30 June 2021, 00:00

Abstract

Self-driving cars promise solutions to some of the hazards of human driving but there are important questions about the safety of these new technologies. This paper takes a qualitative social science approach to the question ‘how safe is safe enough?’ Drawing on 50 interviews with people developing and researching self-driving cars, I describe two dominant narratives of safety. The first, safety-in-numbers, sees safety as a self-evident property of the technology and offers metrics in an attempt to reassure the public. The second approach, safety-by-design, starts with the challenge of safety assurance and sees the technology as intrinsically problematic. The first approach is concerned only with performance—what a self-driving system does. The second is also concerned with why systems do what they do and how they should be tested. Using insights from workshops with members of the public, I introduce a further concern that will define trustworthy self-driving cars: the intended and perceived purposes of a system. Engineers’ safety assurances will have their credibility tested in public. ‘How safe is safe enough?’ prompts further questions: ‘safe enough for what?’ and ‘safe enough for whom?’

✇ Ethics and Information Technology

From human resources to human rights: Impact assessments for hiring algorithms

25 June 2021, 00:00

Abstract

Over the years, companies have adopted hiring algorithms because they promise wider job candidate pools, lower recruitment costs and less human bias. Despite these promises, they also bring perils. Using them can inflict unintentional harms on individual human rights. These include the five human rights to work, equality and nondiscrimination, privacy, free expression and free association. Despite the human rights harms of hiring algorithms, the AI ethics literature has predominantly focused on abstract ethical principles. This is problematic for two reasons. First, AI principles have been criticized for being vague and not actionable. Second, the use of vague ethical principles to discuss algorithmic risks does not provide any accountability. This lack of accountability creates an algorithmic accountability gap. Closing this gap is crucial because, without accountability, the use of hiring algorithms can lead to discrimination and unequal access to employment opportunities. This paper makes two contributions to the AI ethics literature. First, it frames the ethical risks of hiring algorithms using international human rights law as a universal standard for determining algorithmic accountability. Second, it evaluates four types of algorithmic impact assessments in terms of how effectively they address the five human rights of job applicants implicated in hiring algorithms. It determines which of the assessments can help companies audit their hiring algorithms and close the algorithmic accountability gap.

✇ Ethics and Information Technology

The possibility of deliberate norm-adherence in AI

1 June 2021, 00:00

Abstract

Moral agency status is often given to those individuals or entities which act intentionally within a society or environment. In the past, moral agency has primarily been focused on human beings and some higher-order animals. However, with the fast-paced advancements made in artificial intelligence (AI), we are now quickly approaching the point where we need to ask an important question: should we grant moral agency status to AI? To answer this question, we need to determine the moral agency status of these entities in society. In this paper I argue that to grant moral agency status to an entity, deliberate norm-adherence must be possible (at a minimum), and that, under the current status quo, AI systems are unable to meet this criterion. The novel contribution this paper makes to the field of machine ethics is, first, to provide at least two criteria with which we can determine moral agency status. We do this by determining the possibility of deliberate norm-adherence through examining the possibility of deliberate norm-violation. Second, to show that establishing moral agency in AI suffers the same pitfalls as establishing moral agency in constitutive accounts of agency.

✇ Ethics and Information Technology

Applying a principle of explicability to AI research in Africa: should we do it?

1 June 2021, 00:00

Abstract

Developing and implementing artificial intelligence (AI) systems in an ethical manner faces several challenges specific to the kind of technology at hand, including ensuring that decision-making systems making use of machine learning are just, fair, and intelligible, and are aligned with our human values. Given that values vary across cultures, an additional ethical challenge is to ensure that these AI systems are not developed according to some unquestioned but questionable assumption of universal norms but are in fact compatible with the societies in which they operate. This is particularly pertinent for AI research and implementation across Africa, a ground where AI systems are and will be used but also a place with a history of imposition of outside values. In this paper, we thus critically examine one proposal for ensuring that decision-making systems are just, fair, and intelligible—that we adopt a principle of explicability to generate specific recommendations—to assess whether the principle should be adopted in an African research context. We argue that a principle of explicability not only can contribute to responsible and thoughtful development of AI that is sensitive to African interests and values, but can also advance tackling some of the computational challenges in machine learning research. In this way, the motivation for ensuring that a machine learning-based system is just, fair, and intelligible is not only to meet ethical requirements, but also to make effective progress in the field itself.

✇ Ethics and Information Technology

An ontic–ontological theory for ethics of designing social robots: a case of Black African women and humanoids

1 June 2021, 00:00

Abstract

Given the affective psychological and cognitive dynamics prevalent during human–robot-interlocution, the vulnerability to cultural-political influences of the design aesthetics of a social humanoid robot has far-reaching ramifications. Building upon this hypothesis, I explicate the relationship between the structures of the constitution of social ontology and computational semiotics, and venture a theoretical framework which I propose as a thesis that impels a moral responsibility on engineers of social humanoids. In distilling this thesis, the implications of the intersection between the socio-aesthetics of racialised and genderised humanoids and the phenomenology of human–robot-interaction are illuminated by the figuration of the experience of a typical black rural African woman as the user, that is, an interlocutor with an industry-standard socially-situated humanlike robot. The representation of the gravity of the psycho-existential and socio-political ramifications of such a woman’s life with humanoids is abstracted and posited as grounds that illustrate the imperative for roboticists to take socio-ethical considerations seriously in their designs of humanoids.

✇ Ethics and Information Technology

Computationally rational agents can be moral agents

1 June 2021, 00:00

Abstract

In this article, a concise argument for computational rationality as a basis for artificial moral agency is advanced. Some ethicists have long argued that rational agents can become artificial moral agents. However, most of their views have come from purely philosophical perspectives, thus making it difficult to transfer their arguments to a scientific and analytical frame of reference. The result has been a disintegrated approach to the conceptualisation and design of artificial moral agents. In this article, I make the argument for computational rationality as an integrative element that effectively combines the philosophical and computational aspects of artificial moral agency. This logically leads to a philosophically coherent and scientifically consistent model for building artificial moral agents. Besides providing a possible answer to the question of how to build artificial moral agents, this model also invites sound debate from multiple disciplines, which should help to advance the field of machine ethics.

✇ Ethics and Information Technology

Digitalization of contact tracing: balancing data privacy with public health benefit

10 June 2021, 00:00

Abstract

The COVID-19 pandemic has brought the long-standing public health practice of contact tracing into the public spotlight. While contact tracing and case investigation have been carefully designed to protect privacy, the huge volume of tracing which is being carried out as part of the pandemic response in the United States is highlighting potential concerns around privacy, legality, and equity. Contact tracing during the pandemic has gained particular attention for the new use of digital technologies—both on the consumer side in the form of Exposure Notification applications, and for public health agencies as digital case management software systems enable massive scaling of operations. While the consumer application side of digital innovation has dominated the news and academic discourse around privacy, people are likely to interact more intensively with public health agencies and their use of digital case management systems. Effective use of digital case management for contact tracing requires revisiting the existing legal frameworks, privacy protections, and security practices for management of sensitive health data. The scale of these tools and demands of an unprecedented pandemic response are introducing new risks through the collection of huge volumes of data, and expanding requirements for more adept data sharing among jurisdictions. Public health agencies must strengthen their best practices for data collection and protection even in the absence of comprehensive or clear guidance. This requires navigating a difficult balance between rigorous data protection and remaining highly adaptive and agile.

✇ Ethics and Information Technology

What is the ‘personal’ in ‘personal information’?

7 June 2021, 00:00

Abstract

Contemporary privacy theories and European discussions about data protection employ the notion of ‘personal information’ to designate their areas of concern. The notion of personal information is demarcated from non-personal information—or just information—indicating that we are dealing with a specific kind of information. However, within privacy scholarship the notion of personal information appears undertheorized, rendering the concept somewhat unclear. We argue that in an age of datafication, protection of personal information and privacy is crucial, making the understanding of what is meant by ‘personal information’ more important than ever. To contribute to this aim, we analyse the conception of personal information and its nature, including the distinction between personal and non-personal information from a philosophy of language perspective. Through analyses of aboutness and relative aboutness we point to challenges related to the demarcation between personal and non-personal information, which may in practice lead to all information being personal.

✇ Ethics and Information Technology

How to feel about emotionalized artificial intelligence? When robot pets, holograms, and chatbots become affective partners

27 May 2021, 00:00

Abstract

Interactions between humans and machines that include artificial intelligence are increasingly common in nearly all areas of life. Meanwhile, AI products are increasingly endowed with emotional characteristics. That is, they are designed and trained to elicit emotions in humans, to recognize human emotions and, sometimes, to simulate emotions (EAI). The introduction of such systems in our lives is met with some criticism. There is a rather strong intuition that there is something wrong about getting attached to a machine, about having certain emotions towards it, and about getting involved in a kind of affective relationship with it. In this paper, I want to tackle these worries by focusing on the last aspect: in what sense could it be problematic or even wrong to establish an emotional relationship with EAI systems? I want to show that the justifications for the widespread intuition concerning the problems are not as strong as they seem at first sight. To do so, I discuss three arguments: the argument from self-deception, the argument from lack of mutuality, and the argument from moral negligence.

✇ Ethics and Information Technology

Is it time for robot rights? Moral status in artificial entities

17 May 2021, 00:00

Abstract

Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we find the suggestions ultimately unmotivated, the discussion shows that our epistemic condition with respect to the moral status of others does raise problems, and that the human tendency to empathise with things that do not have moral status should be taken seriously—we suggest that it produces a “derived moral status”. Finally, it turns out that there is typically no individual in real AI that could even be said to be the bearer of moral status. Overall, there is no reason to think that robot rights are an issue now.

✇ Ethics and Information Technology

Non-consensual personified sexbots: an intrinsic wrong

17 May 2021, 00:00

Abstract

Humanoid robots used for sexual purposes (sexbots) are beginning to look increasingly lifelike. It is possible for a user to have a bespoke sexbot created which matches their exact requirements in skin pigmentation, hair and eye colour, body shape, and genital design. This means that it is possible—and increasingly easy—for a sexbot to be created which bears a very high degree of resemblance to a particular person. There is a small but steadily increasing literature exploring some of the ethical issues surrounding sexbots, however sexbots made to look like particular people is something which, as yet, has not been philosophically addressed in the literature. In this essay I argue that creating a lifelike sexbot to represent and resemble someone is an act of sexual objectification which morally requires consent, and that doing so without the person’s consent is intrinsically wrong. I consider two sexbot creators: Roy and Fred. Roy creates a sexbot of Katie with her consent, and Fred creates a sexbot of Jane without her consent. I draw on the work of Alan Goldman, Rae Langton, and Martha Nussbaum in particular to demonstrate that creating a sexbot of a particular person requires consent if it is to be intrinsically permissible.

✇ Ethics and Information Technology

Introduction to the special issue: value sensitive design: charting the next decade

1 March 2021, 00:00

Abstract

In this article, we introduce the Special Issue, Value Sensitive Design: Charting the Next Decade, which arose from a week-long workshop hosted by Lorentz Center, Leiden, The Netherlands, November 14–18, 2016. Forty-one researchers and designers, ranging in seniority from doctoral students to full professors, from Australia, Europe, and North America, and representing a wide range of academic fields participated in the workshop. The first article in the special issue puts forward eight grand challenges for value sensitive design to help guide and shape the field. It is followed by 16 articles consisting of value sensitive design nuggets—short pieces of writing on a new idea, method, challenge, application, or other concept that engages some aspect of value sensitive design. The nuggets are grouped into three clusters: theory, method, and applications. Taken together the grand challenges and nuggets point the way forward for value sensitive design into the next decade and beyond.
