News from eLiteracias

Ethics and Information Technology

Reasons for Meaningful Human Control

Abstract

“Meaningful human control” is a term invented in the political and legal debate on autonomous weapons systems, but it is nowadays also used in many other contexts. It is supposed to specify conditions under which an artificial system is under the right kind of control to avoid responsibility gaps: that is, situations in which no moral agent is responsible. Santoni de Sio and Van den Hoven have recently suggested a framework that can be used by system designers to operationalize this kind of control. It is the purpose of this paper to facilitate further operationalization of “meaningful human control”.

This paper consists of two parts. In the first part I resolve an ambiguity that plagues current operationalizations of MHC. One of the design conditions says that the system should track the reasons of the relevant agents. This condition is ambiguous between the kinds of reasons involved. On one interpretation, it says that a system should track motivating reasons; on the other, normative reasons. Current participants in the debate interpret the framework as being concerned with (something in the vicinity of) motivating reasons. I argue against this interpretation by showing that meaningful human control requires that a system track normative reasons. Moreover, I maintain that an operationalization of meaningful human control that fails to track the right kind of reasons is morally problematic.

When this is properly understood, it can be shown that the framework of MHC is committed to the agent-relativity of reasons. More precisely, I argue in the second part of this paper that if the tracking condition of MHC plays an important role in responsibility attribution (as the proponents of the view maintain), then the framework is incompatible with first-order normative theories that hold that normative reasons are agent-neutral (such as many versions of consequentialism). In the final section I present three ways forward for the proponent of MHC as reason-responsiveness.

  • 23 November 2022, 00:00

Digital temperance: adapting an ancient virtue for a technological age

Abstract

In technological societies where excessive screen use and internet addiction are becoming constant temptations, the valuable yet intoxicating pleasures of digital technology suggest a need to recover and repurpose temperance, a virtue emphasized by ancient and medieval philosophers. This article reconstructs this virtue for our technological age by reclaiming the most relevant features of Aristotle’s and Aquinas’s accounts and suggesting five critical revisions needed to adapt the virtue for a contemporary context. The article then draws on this critical interpretation, along with empirical research analyzing the value and dangers of digital technology, to construct a normative account of digital temperance, a virtue that finds a mean between “digital insensibility,” the vice of deficiency, and “digital overindulgence,” the vice of excess. We conclude by showing how this virtue of digital temperance can help to promote human flourishing in a world saturated with tempting technology.

  • 22 November 2022, 00:00

Explanation and Agency: exploring the normative-epistemic landscape of the “Right to Explanation”

Abstract

A large part of the explainable AI literature focuses on what explanations are in general, what algorithmic explainability is more specifically, and how to code these principles of explainability into AI systems. Much less attention has been devoted to the question of why algorithmic decisions and systems should be explainable, and whether there ought to be a right to explanation and why. We therefore explore the normative landscape of the need for AI to be explainable and of individuals having a right to such explanation. This exploration is particularly relevant to the medical domain, where the (im)possibility of explainable AI is high on both the research and practitioners’ agenda. The dominant intuition overall is that explainability does and should play a key role in the health context. Notwithstanding the strong normative intuition for having a right to explanation, intuitions can be wrong. So we need more than an appeal to intuitions when it comes to explaining the normative significance of having a right to explanation when being subject to AI-based decision-making. The aim of the paper is therefore to provide an account of what might underlie the normative intuition. We defend the ‘symmetry thesis’, according to which there is no special normative reason to have a right to explanation when ‘machines’, in the broad sense, make decisions, recommend treatment, discover tumors, and so on. Instead, we argue that we have a right to explanation in cases that involve automated processing that significantly affects our core deliberative agency and that we do not understand, because we have a general moral right to explanation when choices are made which significantly affect us but which we do not understand.

  • 11 November 2022, 00:00

Extended loneliness. When hyperconnectivity makes us feel alone

Abstract

In this paper, I analyse a specific kind of loneliness that can be experienced in the networked life, namely “extended loneliness”. I claim that loneliness—conceived of as stemming from a lack of satisfying relationships to others—can arise from an abundance of connections in the online sphere. Extended loneliness, in these cases, does not result from a lack of connections to other people. On the contrary, it consists in the complex affective experience of both lacking and longing for meaningful relationships while being connected to many people online. The recursive interaction with a digital assistant in a smart flat is my key example for defining the contours of this specific kind of loneliness that emerges when hyperconnectivity becomes pervasive in the user’s daily life. Drawing on Sherry Turkle’s work and employing the conceptual framework of the extended mind, I analyse the specific characteristics of extended loneliness and explore its phenomenology.

  • 9 November 2022, 00:00

Automating anticorruption?

Abstract

The paper explores some normative challenges concerning the integration of Machine Learning (ML) algorithms into anticorruption efforts in public institutions. The challenges emerge from the tensions between an approach treating ML algorithms as allies to an exclusively legalistic conception of anticorruption and an approach seeing them within an institutional ethics of office accountability. We explore two main challenges. One concerns the variable opacity of some ML algorithms, which may affect public officeholders’ capacity to account for institutional processes relying upon ML techniques. The other pinpoints the risk that automating certain institutional processes may weaken officeholders’ direct engagement in taking forward-looking responsibility for the working of their institution. We discuss why both challenges matter to seeing how ML algorithms may enhance (and not hinder) institutional answerability practices.

  • 9 November 2022, 00:00

A framework for the application of socio-technical design methodology

Abstract

Socio-technical systems (STS) have become prominent platforms for online social interactions. Yet they are still struggling to incorporate basic social ideas for many different and new online activities. This has resulted in unintended exposure of users’ personal data and a rise in online threats, as users have now become a desirable target for malicious activities. To address such challenges, various researchers have argued that STS should support user-oriented configurations to protect their users from online social abuse. Some methodologies have also been proposed to support the integration of social values in the design of information systems, but they often lack an application mechanism. This paper presents a framework for the application of the socio-technical design methodology to incorporate social standards in the design of STS. The proposed framework exemplifies the socio-technical design approach by considering a list of social standards and mapping them onto corresponding technical specifications. Based on these two sets, the framework highlights various individual, inter-group, and intra-group interactions and their supporting tools for STS governance. A conversation about the integration of social standards in STS is already materializing; a comprehensive framework for applying these standards is therefore needed.

  • 25 October 2022, 00:00

No wheel but a dial: why and how passengers in self-driving cars should decide how their car drives

Abstract

Much of the debate on the ethics of self-driving cars has revolved around trolley scenarios. This paper instead takes up the political or institutional question of who should decide how a self-driving car drives. Specifically, this paper is on the question of whether and why passengers should be able to control how their car drives. The paper reviews existing arguments—those for passenger ethics settings and for mandatory ethics settings respectively—and argues that they fail. Although the arguments are not successful, they serve as the basis to formulate desiderata that any approach to regulating the driving behavior of self-driving cars ought to fulfill. The paper then proposes one way of designing passenger ethics settings that meets these desiderata.

  • 17 October 2022, 00:00

Cobots, “co-operation” and the replacement of human skill

Abstract

Automation does not always replace human labour altogether: there is an intermediate stage of human co-existence with machines, including robots, in a production process. Cobots are robots designed to participate at close quarters with humans in such a process. I shall discuss the possible role of cobots in facilitating the eventual total elimination of human operators from production in which cobots are initially involved. This issue is complicated by another: cobots are often introduced to workplaces with the message (from managers) that they will not replace human operators but will rather assist them and make their jobs more interesting and responsible. If, in the process of learning to assist human operators, robots acquire the skills of those operators, then the promise of avoiding replacement can turn out to be false; and if a human operator loses his job, he has been harmed twice over: once by unemployment and once by deception. I shall suggest that this moral risk attends some cobots more than others.

  • 6 October 2022, 00:00

Enforcing ethical goals over reinforcement-learning policies

Abstract

Recent years have yielded many discussions on how to endow autonomous agents with the ability to make ethical decisions, and the need for explicit ethical reasoning and transparency is a persistent theme in this literature. We present a modular and transparent approach to equipping autonomous agents with the ability to comply with ethical prescriptions while still enacting pre-learned optimal behaviour. Our approach relies on a normative supervisor module that integrates a theorem prover for defeasible deontic logic within the control loop of a reinforcement learning agent. The supervisor operates as both an event recorder and an on-the-fly compliance checker with respect to an external norm base. We successfully evaluated our approach with several tests using variations of the game Pac-Man, subject to a variety of “ethical” constraints.
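
As a rough illustration of this control-loop architecture (a hedged sketch, not the authors' implementation, which queries a theorem prover for defeasible deontic logic), the following Python fragment shows a supervisor that ranks actions by a pre-learned policy and enacts the highest-valued action that complies with an external norm base, logging each event. The toy state, policy, and norm are hypothetical stand-ins.

```python
def norm_compliant(state, action, norm_base):
    """Stand-in compliance check; the paper's supervisor instead queries a
    theorem prover for defeasible deontic logic."""
    return all(rule(state, action) for rule in norm_base)

def supervised_step(policy, state, actions, norm_base, log):
    """Enact the best compliant action, acting as event recorder and
    on-the-fly compliance checker in one control step."""
    ranked = sorted(actions, key=lambda a: policy(state, a), reverse=True)
    for action in ranked:
        if norm_compliant(state, action, norm_base):
            log.append((state, action, "compliant"))
            return action
    # No compliant action exists: fall back to optimal behaviour, but record it.
    log.append((state, ranked[0], "violation"))
    return ranked[0]

# Toy Pac-Man-like norm: never move in the direction of the ghost.
norm_base = [lambda s, a: a != s["ghost_direction"]]
policy = lambda s, a: {"up": 1.0, "down": 0.5, "left": 0.2}[a]  # pre-learned values
log = []

print(supervised_step(policy, {"ghost_direction": "up"},
                      ["up", "down", "left"], norm_base, log))
# -> "down": the highest-valued action that satisfies the norm base
```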

  • 29 September 2022, 00:00

Disguising Reddit sources and the efficacy of ethical research

Abstract

Concerned researchers of online forums might implement what Bruckman (2002) referred to as disguise. Heavy disguise, for example, elides usernames and rewords quoted prose so that sources are difficult to locate via search engines. This can protect users (who might be members of vulnerable populations, including minors) from additional harms (such as harassment or further identification). But does disguise work? I analyze 22 Reddit research reports: 3 of light disguise, using verbatim quotes, and 19 of heavier disguise, using reworded phrases. I test whether their sources can be located via three different search services (i.e., Reddit, Google, and RedditSearch). I also interview 10 of the reports’ authors about their sourcing practices, influences, and experiences. Disguising sources is effective only if done and tested rigorously; I was able to locate all of the verbatim sources (3/3) and many of the reworded sources (11/19). There is a lack of understanding, among users and researchers, about how online messages can be located, especially after deletion. Researchers should conduct similar site-specific investigations and develop practical guidelines and tools for improving the ethical use of online sources.
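
The core of the disguise test can be approximated in a few lines: verbatim quotes are trivially locatable by exact-match search, while reworded quotes may not be (though, as the 11/19 result above shows, rewording alone often fails in practice). The corpus, quotes, and locatable() helper below are hypothetical stand-ins for Reddit posts and the three real search services.

```python
# Hypothetical corpus standing in for indexed Reddit posts.
corpus = [
    "i have been struggling with this for years and nobody knows",
    "my doctor told me to stop but i could not",
]

def locatable(quote: str) -> bool:
    """Exact-substring match: a crude proxy for a verbatim search engine."""
    q = quote.lower().strip()
    return any(q in post for post in corpus)

verbatim = "I have been struggling with this for years"
reworded = "I've wrestled with this issue for a long time"  # heavier disguise

print(locatable(verbatim))  # True: verbatim quotes are easy to trace back
print(locatable(reworded))  # False here, but many reworded sources remain
                            # findable in practice, so rigorous testing matters
```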

  • 10 September 2022, 00:00

The value sensitive design of a preventive health check app

Abstract

In projects concerning big data, ethical questions need to be answered during the design process. In this paper the Value Sensitive Design method is applied in the context of data-driven health services aimed at disease prevention. It shows how Value Sensitive Design, with the use of a moral dialogue and an ethical matrix, can support the identification and operationalization of moral values that are at stake in the design of such services. It also shows that using this method can support meeting the requirements of the General Data Protection Regulation.

  • 31 August 2022, 00:00

Enabling Fairness in Healthcare Through Machine Learning

Abstract

The use of machine learning systems for decision-support in healthcare may exacerbate health inequalities. However, recent work suggests that algorithms trained on sufficiently diverse datasets could in principle combat health inequalities. One concern about these algorithms is that their performance for patients in traditionally disadvantaged groups exceeds their performance for patients in traditionally advantaged groups. This renders the algorithmic decisions unfair relative to the standard fairness metrics in machine learning. In this paper, we defend the permissible use of affirmative algorithms; that is, algorithms trained on diverse datasets that perform better for traditionally disadvantaged groups. Whilst such algorithmic decisions may be unfair, the fairness of algorithmic decisions is not the appropriate locus of moral evaluation. What matters is the fairness of final decisions, such as diagnoses, resulting from collaboration between clinicians and algorithms. We argue that affirmative algorithms can permissibly be deployed provided the resultant final decisions are fair.
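
To make the fairness claim concrete, here is a hedged sketch of the kind of group-wise performance comparison that standard fairness metrics rely on; the labels, predictions, and group names are illustrative assumptions, not data from the paper.

```python
from collections import defaultdict

def group_accuracy(y_true, y_pred, groups):
    """Accuracy computed separately for each demographic group; unequal
    values violate 'equal performance' fairness metrics."""
    correct, total = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

# An 'affirmative algorithm' performing better for the disadvantaged group.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["disadvantaged"] * 4 + ["advantaged"] * 4

print(group_accuracy(y_true, y_pred, groups))
# -> {'disadvantaged': 1.0, 'advantaged': 0.5}: unfair by the metric, yet
#    permissible on the authors' view if the *final* decisions are fair.
```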

  • 31 August 2022, 00:00

Characteristics and challenges in the industries towards responsible AI: a systematic literature review

Abstract

Today humanity is in the midst of a massive expansion of new and fundamental technology, represented by advanced artificial intelligence (AI) systems. The ongoing revolution of these technologies, and their profound impact across various sectors, has triggered discussions about the characteristics and values that should guide their use and development in a responsible manner. In this paper, we conduct a systematic literature review with the aim of pointing out existing challenges and required principles for AI-based systems in different industries. We discuss our findings and provide general recommendations to be considered during AI deployment in production. The results reveal many gaps and concerns regarding responsible AI and the integration of complex AI models in industry that the research community could address.

  • 29 August 2022, 00:00

Artificial intelligence and responsibility gaps: what is the problem?

Abstract

Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty of allocating responsibility for the actions of an autonomous system, especially when the autonomous system causes harm or damage. The highly autonomous behavior of such systems, for which neither the programmer, the manufacturer, nor the operator seems to be responsible, has been suspected to generate responsibility gaps. This has been the cause of much concern. In this article, I propose a more optimistic view of artificial intelligence, raising two challenges for responsibility gap pessimists. First, proponents of responsibility gaps must say more about when responsibility gaps occur. Once we accept a difficult-to-reject plausibility constraint on the emergence of such gaps, it becomes apparent that it is far from clear in which situations responsibility gaps actually occur. Second, assuming that responsibility gaps do occur, more must be said about why we should be concerned about such gaps in the first place. I proceed by defusing what I take to be the two most important concerns about responsibility gaps, one relating to the consequences of responsibility gaps and the other relating to violations of jus in bello.

  • 24 August 2022, 00:00

Technology and moral change: the transformation of truth and trust

Abstract

Technologies can have profound effects on social moral systems. Is there any way to systematically investigate and anticipate these potential effects? This paper aims to contribute to this emerging field of inquiry through a case study method. It focuses on two core human values—truth and trust—describes their structural properties and conceptualisations, and then considers various mechanisms through which technology is changing, and can change, our perspective on those values. In brief, the paper argues that technology is transforming these values by changing the costs/benefits of accessing them; allowing us to substitute those values for other, closely related ones; increasing their perceived scarcity/abundance; and disrupting traditional value gatekeepers. This has implications for how we study other, technologically mediated, value changes.

  • 20 August 2022, 00:00

Engineering responsibility

Abstract

Many optimistic responses have been proposed to bridge the responsibility gaps that artificial systems threaten to create. This paper identifies a question which arises if this optimistic project proves successful. On a response-dependent understanding of responsibility, our responsibility practices themselves at least partially determine who counts as a responsible agent. On this basis, if AI or robot technology advances such that AI or robot agents become fitting participants within responsibility exchanges, then responsibility itself might be engineered. If we have good reason to think such technological advances are likely, then we should take steps to address the potential for engineering responsibility.

  • 8 August 2022, 00:00

Humans, Neanderthals, robots and rights

Abstract

Robots are becoming more visible parts of our life, a situation which prompts questions about their place in our society. One group of issues that is widely discussed is connected with robots’ moral and legal status as well as their potential rights. The question of granting robots rights is polarizing. Some positions accept the possibility of granting them human rights whereas others reject the notion that robots can be considered potential rights holders. In this paper, I claim that robots will never have all human rights, even if we accept that they are morally equal to humans. I focus on the role of embodiment in the content of the law. I claim that even relatively small differences in the ontologies of entities could lead to the need to create new sets of rights. I use the example of Neanderthals to illustrate that entities similar to us might have required different legal statuses. Then, I discuss the potential legal status of human-like robots.

  • 8 August 2022, 00:00

Legitimacy and automated decisions: the moral limits of algocracy

Abstract

With the advent of automated decision-making, governments have increasingly begun to rely on artificially intelligent algorithms to inform policy decisions across a range of domains of government interest and influence. The practice has not gone unnoticed among philosophers, worried about “algocracy” (rule by algorithm), and its ethical and political impacts. One of the chief issues of ethical and political significance raised by algocratic governance, so the argument goes, is the lack of transparency of algorithms.

One of the best-known examples of philosophical analyses of algocracy is John Danaher’s “The threat of algocracy” (2016), arguing that government by algorithm undermines political legitimacy. In this paper, I will treat Danaher’s argument as a springboard for raising additional questions about the connections between algocracy, comprehensibility, and legitimacy, especially in light of empirical results about what we can expect the voters and policymakers to know.

The paper has the following structure: in Sect. 2, I introduce the basics of Danaher’s argument regarding algocracy. In Sect. 3 I argue that the algocratic threat to legitimacy has troubling implications for social justice. In Sect. 4, I argue that, nevertheless, there seem to be good reasons for governments to rely on algorithmic decision support systems. Lastly, I try to resolve the apparent tension between the findings of the two preceding Sections.

  • 8 August 2022, 00:00

Resisting the Gamer’s Dilemma

Abstract

Intuitively, many people seem to hold that engaging in acts of virtual murder in videogames is morally permissible, whereas engaging in acts of virtual child molestation is morally impermissible. The Gamer’s Dilemma (Luck in Ethics Inf Technol 11:31–36, 2009) challenges these intuitions by arguing that it is unclear whether there is a morally relevant difference between these two types of virtual actions. There are two main responses to this dilemma in the literature: first, attempts to resolve it by defending an account of the relevant moral differences between virtual murder and virtual child molestation; second, attempts to dissolve it by undermining the intuitions that ground it. In this paper, we argue that a narrow version of the Gamer’s Dilemma seems to survive attempts to resolve or dissolve it entirely, since neither approach seems able to solve the dilemma for all cases. We thus provide a contextually sensitive version of the dilemma that more accurately tracks the intuitions of gamers. However, we also argue that the intuitions grounding the narrow version of the dilemma may not have a moral foundation, and we put forward alternative non-moral normative foundations that seem to better account for the remaining intuitive difference between the two types of virtual actions. We also respond to proposed solutions to the Gamer’s Dilemma in novel ways and set out areas for future empirical work in this area.

  • 28 July 2022, 00:00

Tracing app technology: an ethical review in the COVID-19 era and directions for post-COVID-19

Abstract

We conducted a systematic literature review on the ethical considerations of the use of contact tracing app technology, which was extensively implemented during the COVID-19 pandemic. The rapid and extensive use of this technology during the pandemic, while benefiting public well-being by providing information about people’s mobility and movements to control the spread of the virus, raised several ethical concerns for the post-COVID-19 era. To investigate these concerns for the post-pandemic situation and provide direction for future events, we analyzed the current ethical frameworks, research, and case studies about the ethical usage of tracing app technology. The results suggest there are seven essential ethical considerations—privacy, security, acceptability, government surveillance, transparency, justice, and voluntariness—in the ethical use of contact tracing technology. In this paper, we explain and discuss these considerations and why they are needed for the ethical usage of this technology. The findings also highlight the importance of developing integrated guidelines and frameworks for the implementation of such technology in the post-COVID-19 world.

  • 27 July 2022, 00:00

Vicarious liability: a solution to a problem of AI responsibility?

Abstract

Who is responsible when an AI machine causes something to go wrong? Or is there a gap in the ascription of responsibility? Answers range from claiming there is a unique responsibility gap, to several different responsibility gaps, to no gap at all. In a nutshell, the problem is as follows: on the one hand, it seems fitting to hold someone responsible for a wrong caused by an AI machine; on the other hand, there seems to be no fitting bearer of responsibility for this wrong. In this article, we focus on a particular (aspect of the) AI responsibility gap: it seems fitting that someone should bear the legal consequences in scenarios involving AI machines with design defects; however, there seems to be no such fitting bearer. We approach this problem from the legal perspective and suggest vicarious liability of AI manufacturers as a solution to it. Our proposal comes in two variants: the first has a narrower range of application but can be easily integrated into current legal frameworks; the second requires a revision of current legal frameworks but has a wider range of application. The latter variant employs a broadened account of vicarious liability. We emphasise the strengths of the two variants and finally highlight how vicarious liability offers important insights for addressing a moral AI responsibility gap.

  • 14 July 2022, 00:00

Matching values to technology: a value sensitive design approach to identify values and use cases of an assistive system for people with dementia in institutional care

Abstract

The number of people with dementia is increasing worldwide. At the same time, family and professional caregivers’ resources are limited. A promising approach to relieving these carers’ burden and assisting people with dementia is assistive technology. In order to be useful and accepted, such technologies need to respect the values and needs of their intended users. We applied the value sensitive design approach to identify the values and needs of patients with dementia and of family and professional caregivers with respect to assistive technologies for people with dementia in institutionalized care. Based on semi-structured interviews with residents/patients with cognitive impairment, relatives, and healthcare professionals (10 each), we identified 44 values summarized by 18 core values. From these values, we created a values network to demonstrate the interplay between them. At the core of this network were caring and empathy, the most strongly interacting values. Furthermore, we found 36 needs for assistance belonging to the four action fields of activity, care, management/administration, and nursing. Based on these values and needs for assistance, we created possible use cases for assistive technologies in each of the four identified action fields. All these use cases are already technologically feasible today but are not currently being used in healthcare facilities. This underlines the need for the development of value-based technologies to ensure not only technological feasibility but also acceptance and implementation of assistive technologies. Our results help balance conflicting values and provide concrete suggestions for how engineers and designers can incorporate values into assistive technologies.

  • 12 July 2022, 00:00