

Ethics of Artificial Intelligence and Robotics

Artificial intelligence (AI) and robotics are digital technologies that will have a significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control them.

After the Introduction to the field (§1), the main themes (§2) of this article are: ethical issues that arise with AI systems as objects, i.e., tools made and used by humans, including issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7); then AI systems as subjects, i.e., ethics for the AI systems themselves, in machine ethics (§2.8) and artificial moral agency (§2.9); and, finally, the problem of a possible future AI superintelligence leading to a “singularity” (§2.10). We close with a remark on the vision of AI (§3).

For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, then analyse how these play out with current technologies, and finally consider what policy consequences may be drawn.

1. Introduction

1.1 Background of the Field

The ethics of AI and robotics is often focused on “concerns” of various sorts, which is a typical response to new technologies. Many such concerns turn out to be rather quaint (trains are too fast for souls); some are predictably wrong when they suggest that the technology will fundamentally change humans (telephones will destroy personal communication, writing will destroy memory, video cassettes will make going out redundant); some are broadly correct but moderately relevant (digital technology will destroy industries that make photographic film, cassette tapes, or vinyl records); but some are broadly correct and deeply relevant (cars will kill children and fundamentally change the landscape). The task of an article such as this is to analyse the issues and to deflate the non-issues.

Some technologies, like nuclear power, cars, or plastics, have caused ethical and political discussion and significant policy efforts to control the trajectory of these technologies, usually only once some damage is done. In addition to such “ethical concerns”, new technologies challenge current norms and conceptual systems, which is of particular interest to philosophy. Finally, once we have understood a technology in its context, we need to shape our societal response, including regulation and law. All these features also exist in the case of new AI and Robotics technologies—plus the more fundamental fear that they may end the era of human control on Earth.

The ethics of AI and robotics has seen significant press coverage in recent years, which supports related research, but also may end up undermining it: the press often talks as if the issues under discussion were just predictions of what future technology will bring, and as though we already know what would be most ethical and how to achieve that. Press coverage thus focuses on risk, security (Brundage et al. 2018, in the Other Internet Resources section below, hereafter [OIR]), and prediction of impact (e.g., on the job market). The result is a discussion of essentially technical problems that focus on how to achieve a desired outcome. Current discussions in policy and industry are also motivated by image and public relations, where the label “ethical” is really not much more than the new “green”, perhaps used for “ethics washing”. For a problem to qualify as a problem for AI ethics would require that we do not readily know what the right thing to do is. In this sense, job loss, theft, or killing with AI is not a problem in ethics, but whether these are permissible under certain circumstances is a problem. This article focuses on the genuine problems of ethics where we do not readily know what the answers are.

A last caveat: The ethics of AI and robotics is a very young field within applied ethics, with significant dynamics, but few well-established issues and no authoritative overviews—though there is a promising outline (European Group on Ethics in Science and New Technologies 2018) and there are beginnings on societal impact (Floridi et al. 2018; Taddeo and Floridi 2018; S. Taylor et al. 2018; Walsh 2018; Bryson 2019; Gibert 2019; Whittlestone et al. 2019), and policy recommendations (AI HLEG 2019 [OIR]; IEEE 2019). So this article cannot merely reproduce what the community has achieved thus far, but must propose an ordering where little order exists.

1.2 AI & Robotics

The notion of “artificial intelligence” (AI) is understood broadly as any kind of artificial computational system that shows intelligent behaviour, i.e., complex behaviour that is conducive to reaching goals. In particular, we do not wish to restrict “intelligence” to what would require intelligence if done by humans, as Minsky had suggested (1985). This means we incorporate a range of machines, including those in “technical AI”, that show only limited abilities in learning or reasoning but excel at the automation of particular tasks, as well as machines in “general AI” that aim to create a generally intelligent agent.

AI somehow gets closer to our skin than other technologies—thus the field of “philosophy of AI”. Perhaps this is because the project of AI is to create machines that have a feature central to how we humans see ourselves, namely as feeling, thinking, intelligent beings. The main purposes of an artificially intelligent agent probably involve sensing, modelling, planning and action, but current AI applications also include perception, text analysis, natural language processing (NLP), logical reasoning, game-playing, decision support systems, data analytics, predictive analytics, as well as autonomous vehicles and other forms of robotics (P. Stone et al. 2016). AI may involve any number of computational techniques to achieve these aims, be that classical symbol-manipulating AI, inspired by natural cognition, or machine learning via neural networks (Goodfellow, Bengio, and Courville 2016; Silver et al. 2018).

Historically, it is worth noting that the term “AI” was used as above ca. 1950–1975, then came into disrepute during the “AI winter”, ca. 1975–1995, and narrowed. As a result, areas such as “machine learning”, “natural language processing” and “data science” were often not labelled as “AI”. Since ca. 2010, the use has broadened again, and at times almost all of computer science and even high-tech is lumped under “AI”. Now it is a name to be proud of, a booming industry with massive capital investment (Shoham et al. 2018), and on the edge of hype again. As Erik Brynjolfsson noted, it may allow us to

virtually eliminate global poverty, massively reduce disease and provide better education to almost everyone on the planet. (quoted in Anderson, Rainie, and Luchsinger 2018)

While AI can be entirely software, robots are physical machines that move. Robots are subject to physical impact, typically through “sensors”, and they exert physical force onto the world, typically through “actuators”, like a gripper or a turning wheel. Accordingly, autonomous cars or planes are robots, and only a minuscule portion of robots is “humanoid” (human-shaped), as in the movies. Some robots use AI, and some do not: Typical industrial robots blindly follow completely defined scripts with minimal sensory input and no learning or reasoning (around 500,000 such new industrial robots are installed each year (IFR 2019 [OIR])). It is probably fair to say that while robotics systems cause more concern among the general public, AI systems are more likely to have a greater impact on humanity. Also, AI or robotics systems for a narrow set of tasks are less likely to cause new issues than systems that are more flexible and autonomous.

Robotics and AI can thus be seen as covering two overlapping sets of systems: systems that are only AI, systems that are only robotics, and systems that are both. We are interested in all three; the scope of this article is thus not only the intersection, but the union, of both sets.

1.3 A Note on Policy

Policy is only one of the concerns of this article. There is significant public discussion about AI ethics, and there are frequent pronouncements from politicians that the matter requires new policy, which is easier said than done: Actual technology policy is difficult to plan and enforce. It can take many forms, from incentives and funding, infrastructure, taxation, or good-will statements, to regulation by various actors, and the law. Policy for AI will possibly come into conflict with other aims of technology policy or general policy. Governments, parliaments, associations, and industry circles in industrialised countries have produced reports and white papers in recent years, and some have generated good-will slogans (“trusted/responsible/humane/human-centred/good/beneficial AI”), but is that what is needed? For a survey, see Jobin, Ienca, and Vayena (2019) and V. Müller’s list of PT-AI Policy Documents and Institutions.

For people who work in ethics and policy, there might be a tendency to overestimate the impact and threats from a new technology, and to underestimate how far current regulation can reach (e.g., for product liability). On the other hand, there is a tendency for businesses, the military, and some public administrations to “just talk” and do some “ethics washing” in order to preserve a good public image and continue as before. Actually implementing legally binding regulation would challenge existing business models and practices. Actual policy is not just an implementation of ethical theory, but subject to societal power structures—and the agents that do have the power will push against anything that restricts them. There is thus a significant risk that regulation will remain toothless in the face of economic and political power.

Though very little actual policy has been produced, there are some notable beginnings: The latest EU policy document suggests “trustworthy AI” should be lawful, ethical, and technically robust, and then spells this out as seven requirements: human oversight, technical robustness, privacy and data governance, transparency, fairness, well-being, and accountability (AI HLEG 2019 [OIR]). Much European research now runs under the slogan of “responsible research and innovation” (RRI), and “technology assessment” has been a standard field since the advent of nuclear power. Professional ethics is also a standard field in information technology, and this includes issues that are relevant in this article. Perhaps a “code of ethics” for AI engineers, analogous to the codes of ethics for medical doctors, is an option here (Véliz 2019). What data science itself should do is addressed in (L. Taylor and Purtova 2019). We also expect that much policy will eventually cover specific uses or technologies of AI and robotics, rather than the field as a whole. A useful summary of an ethical framework for AI is given in (European Group on Ethics in Science and New Technologies 2018: 13ff). On general AI policy, see Calo (2018) as well as Crawford and Calo (2016); Stahl, Timmermans, and Mittelstadt (2016); Johnson and Verdicchio (2017); and Giubilini and Savulescu (2018). A more political angle of technology is often discussed in the field of “Science and Technology Studies” (STS). As books like The Ethics of Invention (Jasanoff 2016) show, concerns in STS are often quite similar to those in ethics (Jacobs et al. 2019 [OIR]). In this article, we discuss the policy for each type of issue separately rather than for AI or robotics in general.

2. Main Debates

In this section we outline the ethical issues of human use of AI and robotics systems that can be more or less autonomous—which means we look at issues that arise with certain uses of the technologies which would not arise with others. It must be kept in mind, however, that technologies always make some uses easier, and thus more frequent, and hinder other uses. The design of technical artefacts thus has ethical relevance for their use (Houkes and Vermaas 2010; Verbeek 2011), so beyond “responsible use”, we also need “responsible design” in this field. The focus on use does not presuppose which ethical approaches are best suited for tackling these issues; they might well be virtue ethics (Vallor 2017) rather than consequentialist or value-based (Floridi et al. 2018). This section is also neutral with respect to the question whether AI systems truly have “intelligence” or other mental properties: It would apply equally well if AI and robotics are merely seen as the current face of automation (cf. Müller forthcoming-b).

2.1 Privacy & Surveillance

There is a general discussion about privacy and surveillance in information technology (e.g., Macnish 2017; Roessler 2017), which mainly concerns the access to private data and data that is personally identifiable. Privacy has several well-recognised aspects, e.g., “the right to be let alone”, information privacy, privacy as an aspect of personhood, control over information about oneself, and the right to secrecy (Bennett and Raab 2006). Privacy studies have historically focused on state surveillance by secret services but now include surveillance by other state agents, businesses, and even individuals. The technology has changed significantly in the last decades while regulation has been slow to respond (though there is the Regulation (EU) 2016/679)—the result is a certain anarchy that is exploited by the most powerful players, sometimes in plain sight, sometimes in hiding.

The digital sphere has widened greatly: All data collection and storage is now digital, our lives are increasingly digital, most digital data is connected to a single Internet, and there is more and more sensor technology in use that generates data about non-digital aspects of our lives. AI increases both the possibilities of intelligent data collection and the possibilities for data analysis. This applies to blanket surveillance of whole populations as well as to classic targeted surveillance. In addition, much of the data is traded between agents, usually for a fee.

At the same time, controlling who collects which data, and who has access, is much harder in the digital world than it was in the analogue world of paper and telephone calls. Many new AI technologies amplify the known issues. For example, face recognition in photos and videos allows identification and thus profiling and searching for individuals (Whittaker et al. 2018: 15ff). Identification also proceeds via other techniques, e.g., “device fingerprinting”, which are commonplace on the Internet (sometimes revealed in the “privacy policy”). The result is that “In this vast ocean of data, there is a frighteningly complete picture of us” (Smolan 2016: 1:01). This is arguably a scandal that still has not received due public attention.

The data trail we leave behind is how our “free” services are paid for—but we are not told about that data collection and the value of this new raw material, and we are manipulated into leaving ever more such data. For the “big 5” companies (Amazon, Google/Alphabet, Microsoft, Apple, Facebook), the main data-collection part of their business appears to be based on deception, exploiting human weaknesses, furthering procrastination, generating addiction, and manipulation (Harris 2016 [OIR]). The primary focus of social media, gaming, and most of the Internet in this “surveillance economy” is to gain, maintain, and direct attention—and thus data supply. “Surveillance is the business model of the Internet” (Schneier 2015). This surveillance and attention economy is sometimes called “surveillance capitalism” (Zuboff 2019). It has caused many attempts to escape from the grasp of these corporations, e.g., in exercises of “minimalism” (Newport 2019), sometimes through the open source movement, but it appears that present-day citizens have lost the degree of autonomy needed to escape while fully continuing with their life and work. We have lost ownership of our data, if “ownership” is the right relation here. Arguably, we have lost control of our data.

These systems will often reveal facts about us that we ourselves wish to suppress or are not aware of: they know more about us than we know ourselves. Even just observing online behaviour allows insights into our mental states (Burr and Cristianini 2019) and manipulation (see section 2.2 below). This has led to calls for the protection of “derived data” (Wachter and Mittelstadt 2019). With the last sentence of his bestselling book, Homo Deus, Harari asks about the long-term consequences of AI:

What will happen to society, politics and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves? (2016: 462)

Robotic devices have not yet played a major role in this area, except for security patrolling, but this will change once they are more common outside of industry environments. Together with the “Internet of things”, the so-called “smart” systems (phone, TV, oven, lamp, virtual assistant, home,…), “smart city” (Sennett 2018), and “smart governance”, they are set to become part of the data-gathering machinery that offers more detailed data, of different types, in real time, with ever more information.

Privacy-preserving techniques that can largely conceal the identity of persons or groups are now a staple in data science; they include (relative) anonymisation, access control (plus encryption), and other models where computation is carried out with fully or partially encrypted input data (Stahl and Wright 2018); in the case of “differential privacy”, calibrated noise is added to the output of queries so that individual contributions are masked (Dwork et al. 2006; Abowd 2017). While requiring more effort and cost, such techniques can avoid many of the privacy issues. Some companies have also seen better privacy as a competitive advantage that can be leveraged and sold at a price.
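To make this concrete, here is a minimal sketch of the Laplace mechanism behind differential privacy (Dwork et al. 2006): a counting query is answered with noise calibrated to how much any single person can change the count, so that individual records are masked. The dataset, the query, and the epsilon value below are illustrative assumptions, not a production implementation:

    import math
    import random

    def laplace_noise(scale: float) -> float:
        # Sample Laplace(0, scale) noise via the inverse-CDF transform.
        u = random.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

    def private_count(records, predicate, epsilon: float) -> float:
        # A count has sensitivity 1 (one person changes it by at most 1),
        # so Laplace noise with scale 1/epsilon gives epsilon-differential
        # privacy for the answer.
        true_count = sum(1 for r in records if predicate(r))
        return true_count + laplace_noise(1.0 / epsilon)

    # Hypothetical data: ages of survey respondents.
    ages = [23, 35, 41, 29, 52, 37, 63, 19, 44, 31]
    print(private_count(ages, lambda a: a >= 40, epsilon=0.5))

Smaller values of epsilon mean more noise and stronger privacy; which trade-off is acceptable is itself a policy question rather than a technical one.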

One of the major practical difficulties is to actually enforce regulation, both on the level of the state and on the level of the individual who has a claim. They must identify the responsible legal entity, prove the action, perhaps prove intent, find a court that declares itself competent … and eventually get the court to actually enforce its decision. Well-established legal protection of rights such as consumer rights, product liability, and other civil liability or protection of intellectual property rights is often missing in digital products, or hard to enforce. This means that companies with a “digital” background are used to testing their products on the consumers without fear of liability while heavily defending their intellectual property rights. This “Internet Libertarianism” is sometimes taken to assume that technical solutions will take care of societal problems by themselves (Morozov 2013).

2.2 Manipulation of Behaviour

The ethical issues of AI in surveillance go beyond the mere accumulation of data and direction of attention: They include the use of information to manipulate behaviour, online and offline, in a way that undermines autonomous rational choice. Of course, efforts to manipulate behaviour are ancient, but they may gain a new quality when they use AI systems. Given users’ intense interaction with data systems and the deep knowledge about individuals this provides, they are vulnerable to “nudges”, manipulation, and deception. With sufficient prior data, algorithms can be used to target individuals or small groups with just the kind of input that is likely to influence these particular individuals. A “nudge” changes the environment such that it influences behaviour in a predictable way that is positive for the individual, but easy and cheap to avoid (Thaler and Sunstein 2008). There is a slippery slope from here to paternalism and manipulation.

Many advertisers, marketers, and online sellers will use any legal means at their disposal to maximise profit, including exploitation of behavioural biases, deception, and addiction generation (Costa and Halpern 2019 [OIR]). Such manipulation is the business model in much of the gambling and gaming industries, but it is spreading, e.g., to low-cost airlines. In interface design on web pages or in games, this manipulation uses what is called “dark patterns” (Mathur et al. 2019). At this moment, gambling and the sale of addictive substances are highly regulated, but online manipulation and addiction are not—even though manipulation of online behaviour is becoming a core business model of the Internet.

Furthermore, social media is now the prime location for political propaganda. This influence can be used to steer voting behaviour, as in the Facebook-Cambridge Analytica “scandal” (Woolley and Howard 2017; Bradshaw, Neudert, and Howard 2019) and—if successful—it may harm the autonomy of individuals (Susser, Roessler, and Nissenbaum 2019).

Improved AI “faking” technologies turn what once was reliable evidence into unreliable evidence—this has already happened to digital photos, sound recordings, and video. It will soon be quite easy to create (rather than alter) “deep fake” text, photos, and video material with any desired content. Soon, sophisticated real-time interaction with persons over text, phone, or video will be faked, too. So we cannot trust digital interactions while we are at the same time increasingly dependent on such interactions.

One more specific issue is that machine learning techniques in AI rely on training with vast amounts of data. This means there will often be a trade-off between privacy (and rights to data) and the technical quality of the product. This influences the consequentialist evaluation of privacy-violating practices.

The policy in this field has its ups and downs: Civil liberties and the protection of individual rights are under intense pressure from businesses’ lobbying, secret services, and other state agencies that depend on surveillance. Privacy protection has diminished massively compared to the pre-digital age when communication was based on letters, analogue telephone communications, and personal conversation and when surveillance operated under significant legal constraints.

While the EU General Data Protection Regulation (Regulation (EU) 2016/679) has strengthened privacy protection, the US and China prefer growth with less regulation (Thompson and Bremmer 2018), likely in the hope that this provides a competitive advantage. It is clear that state and business actors have increased their ability to invade privacy and manipulate people with the help of AI technology and will continue to do so to further their particular interests—unless reined in by policy in the interest of general society.

2.3 Opacity of AI Systems

Opacity and bias are central issues in what is now sometimes called “data ethics” or “big data ethics” (Floridi and Taddeo 2016; Mittelstadt and Floridi 2016). AI systems for automated decision support and “predictive analytics” raise “significant concerns about lack of due process, accountability, community engagement, and auditing” (Whittaker et al. 2018: 18ff). They are part of a power structure in which “we are creating decision-making processes that constrain and limit opportunities for human participation” (Danaher 2016b: 245). At the same time, it will often be impossible for the affected person to know how the system came to its output, i.e., the system is “opaque” to that person. If the system involves machine learning, it will typically be opaque even to the expert, who will not know how a particular pattern was identified, or even what the pattern is. Bias in decision systems and data sets is exacerbated by this opacity. So, at least in cases where there is a desire to remove bias, the analyses of opacity and of bias go hand in hand, and the political response has to tackle both issues together.

Many AI systems rely on machine learning techniques in (simulated) neural networks that will extract patterns from a given dataset, with or without “correct” solutions provided, i.e., supervised, semi-supervised, or unsupervised learning. With these techniques, the “learning” captures patterns in the data and these are labelled in a way that appears useful to the decision the system makes, while the programmer does not really know which patterns in the data the system has used. In fact, the programs are evolving, so when new data comes in, or new feedback is given (“this was correct”, “this was incorrect”), the patterns used by the learning system change. This means that the outcome is not transparent to the user or the programmers: it is opaque. Furthermore, the quality of the program depends heavily on the quality of the data provided, following the old slogan “garbage in, garbage out”. So, if the data already involved a bias (e.g., police data about the skin colour of suspects), then the program will reproduce that bias. There are proposals for a standard description of datasets in a “datasheet” that would make the identification of such bias more feasible (Gebru et al. 2018 [OIR]). There is also significant recent literature about the limitations of machine learning systems that are essentially sophisticated data filters (Marcus 2018 [OIR]). Some have argued that the ethical problems of today are the result of technical “shortcuts” AI has taken (Cristianini forthcoming).
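The “garbage in, garbage out” point can be made concrete with a toy sketch: a trivial learner trained on synthetic, deliberately skewed “historical” decisions reproduces the skew for new, equally qualified cases. All groups, numbers, and the frequency-based learner below are invented for illustration:

    import random

    random.seed(0)

    # Synthetic "historical" decisions: each record is (group, qualification,
    # approved). Past decision-makers approved group B only half as often as
    # group A at equal qualification: this is the bias hidden in the data.
    history = []
    for _ in range(1000):
        group = random.choice("AB")
        qual = random.random()
        approve_prob = qual if group == "A" else qual * 0.5
        history.append((group, qual, random.random() < approve_prob))

    def predict(group: str, qual: float) -> bool:
        # Frequency-based "learner": majority label among similar past cases.
        similar = [ok for g, q, ok in history
                   if g == group and abs(q - qual) < 0.1]
        return sum(similar) > len(similar) / 2

    # Two equally qualified applicants receive different predictions:
    print("A:", predict("A", 0.7))  # likely True
    print("B:", predict("B", 0.7))  # likely False: the model has learned
                                    # the historical skew, not merit

The learner is “correct” relative to its data; the problem lies in the data, which is why proposals such as datasheets (Gebru et al. 2018 [OIR]) target the documentation of exactly this step.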

There are several technical activities that aim at “explainable AI”, starting with (Van Lent, Fisher, and Mancuso 1999; Lomas et al. 2012) and, more recently, a DARPA programme (Gunning 2017 [OIR]). More broadly, the demand for

a mechanism for elucidating and articulating the power structures, biases, and influences that computational artefacts exercise in society (Diakopoulos 2015: 398)

is sometimes called “algorithmic accountability reporting”. This does not mean that we expect an AI to “explain its reasoning”—doing so would require far more serious moral autonomy than we currently attribute to AI systems (see below §2.10).

The politician Henry Kissinger pointed out that there is a fundamental problem for democratic decision-making if we rely on a system that is supposedly superior to humans, but cannot explain its decisions. He says we may have “generated a potentially dominating technology in search of a guiding philosophy” (Kissinger 2018). Danaher (2016b) calls this problem “the threat of algocracy” (adopting the previous use of ‘algocracy’ from Aneesh 2002 [OIR], 2006). In a similar vein, Cave (2019) stresses that we need a broader societal move towards more “democratic” decision-making to avoid AI being a force that leads to a Kafka-style impenetrable suppression system in public administration and elsewhere. The political angle of this discussion has been stressed by O’Neil in her influential book Weapons of Math Destruction (2016), and by Yeung and Lodge (2019).

In the EU, some of these issues have been taken into account with Regulation (EU) 2016/679, which foresees that consumers, when faced with a decision based on data processing, will have a legal “right to explanation”—how far this goes and to what extent it can be enforced is disputed (Goodman and Flaxman 2017; Wachter, Mittelstadt, and Floridi 2016; Wachter, Mittelstadt, and Russell 2017). Zerilli et al. (2019) argue that there may be a double standard here, where we demand a high level of explanation for machine-based decisions even though humans sometimes do not reach that standard themselves.

2.4 Bias in Decision Systems

Automated AI decision support systems and “predictive analytics” operate on data and produce a decision as “output”. This output may range from the relatively trivial to the highly significant: “this restaurant matches your preferences”, “the patient in this X-ray has completed bone growth”, “application to credit card declined”, “donor organ will be given to another patient”, “bail is denied”, or “target identified and engaged”. Data analysis is often used in “predictive analytics” in business, healthcare, and other fields, to foresee future developments—since prediction is easier, it will also become a cheaper commodity. One use of prediction is in “predictive policing” (NIJ 2014 [OIR]), which many fear might lead to an erosion of public liberties (Ferguson 2017) because it can take away power from the people whose behaviour is predicted. It appears, however, that many of the worries about policing depend on futuristic scenarios where law enforcement foresees and punishes planned actions, rather than waiting until a crime has been committed (as in the 2002 film “Minority Report”). One concern is that these systems might perpetuate bias that was already in the data used to set up the system, e.g., by increasing police patrols in an area and discovering more crime in that area. Actual “predictive policing” or “intelligence led policing” techniques mainly concern the question of where and when police forces will be needed most. Also, police officers can be provided with more data, offering them more control and facilitating better decisions, in workflow support software (e.g., “ArcGIS”). Whether this is problematic depends on the appropriate level of trust in the technical quality of these systems, and on the evaluation of the aims of the police work itself. Perhaps a recent paper title points in the right direction here: “AI ethics in predictive policing: From models of threat to an ethics of care” (Asaro 2019).

Bias typically surfaces when unfair judgments are made because the individual making the judgment is influenced by a characteristic that is actually irrelevant to the matter at hand, typically a discriminatory preconception about members of a group. So, one form of bias is a learned cognitive feature of a person, often not made explicit. The person concerned may not be aware of having that bias—they may even be honestly and explicitly opposed to a bias they are found to have (e.g., through priming, cf. Graham and Lowery 2004). On fairness vs. bias in machine learning, see Binns (2018).

Apart from the social phenomenon of learned bias, the human cognitive system is generally prone to have various kinds of “cognitive biases”, e.g., the “confirmation bias”: humans tend to interpret information as confirming what they already believe. This second form of bias is often said to impede performance in rational judgment (Kahneman 2011)—though at least some cognitive biases generate an evolutionary advantage, e.g., economical use of resources for intuitive judgment. There is a question whether AI systems could or should have such cognitive bias.

A third form of bias is present in data when it exhibits systematic error, e.g., “statistical bias”. Strictly, any given dataset will only be unbiased for a single kind of issue, so the mere creation of a dataset involves the danger that it may be used for a different kind of issue, and then turn out to be biased for that kind. Machine learning on the basis of such data would then not only fail to recognise the bias, but codify and automate the “historical bias”. Such historical bias was discovered in an automated recruitment screening system at Amazon (discontinued early 2017) that discriminated against women—presumably because the company had a history of discriminating against women in the hiring process. The “Correctional Offender Management Profiling for Alternative Sanctions” (COMPAS), a system to predict whether a defendant would re-offend, was found to be as successful (65.2% accuracy) as a group of random humans (Dressel and Farid 2018), and to produce more false positives and fewer false negatives for black defendants. The problem with such systems is thus bias plus humans placing excessive trust in the systems. The political dimensions of such automated systems in the USA are investigated in Eubanks (2018).
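The COMPAS finding is easier to see in a small worked example: two groups can face the same overall accuracy while the error types are distributed very differently. The confusion-matrix counts below are invented for illustration; they are not the actual COMPAS figures:

    # Invented per-group confusion-matrix counts (tp, fp, tn, fn):
    groups = {
        "group 1": dict(tp=40, fp=10, tn=25, fn=25),
        "group 2": dict(tp=50, fp=25, tn=15, fn=10),
    }

    for name, c in groups.items():
        accuracy = (c["tp"] + c["tn"]) / sum(c.values())
        fpr = c["fp"] / (c["fp"] + c["tn"])  # non-reoffenders flagged high risk
        fnr = c["fn"] / (c["fn"] + c["tp"])  # reoffenders rated low risk
        print(f"{name}: accuracy={accuracy:.2f} FPR={fpr:.2f} FNR={fnr:.2f}")

Both toy groups show accuracy 0.65, but the false-positive rate for group 2 is more than twice that of group 1 (0.62 vs. 0.29). Equal accuracy is thus compatible with one group bearing most of the wrongful “high risk” labels, which is why a single accuracy figure cannot settle the fairness question.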

There are significant technical efforts to detect and remove bias from AI systems, but it is fair to say that these are in early stages: see UK Institute for Ethical AI & Machine Learning (Brownsword, Scotford, and Yeung 2017; Yeung and Lodge 2019). It appears that technological fixes have their limits in that they need a mathematical notion of fairness, which is hard to come by (Whittaker et al. 2018: 24ff; Selbst et al. 2019), as is a formal notion of “race” (see Benthall and Haynes 2019). An institutional proposal is in (Veale and Binns 2017).

2.5 Human-Robot Interaction

Human-robot interaction (HRI) is an academic field in its own right, which now pays significant attention to ethical matters, the dynamics of perception from both sides, and both the different interests present in and the intricacy of the social context, including co-working (e.g., Arnold and Scheutz 2017). Useful surveys for the ethics of robotics include Calo, Froomkin, and Kerr (2016); Royakkers and van Est (2016); Tzafestas (2016); a standard collection of papers is Lin, Abney, and Jenkins (2017).

While AI can be used to manipulate humans into believing and doing things (see section 2.2), it can also be used to drive robots that are problematic if their processes or appearance involve deception, threaten human dignity, or violate the Kantian requirement of “respect for humanity”. Humans very easily attribute mental properties to objects, and empathise with them, especially when the outer appearance of these objects is similar to that of living beings. This can be used to deceive humans (or animals) into attributing more intellectual or even emotional significance to robots or AI systems than they deserve. Some parts of humanoid robotics are problematic in this regard (e.g., Hiroshi Ishiguro’s remote-controlled Geminoids), and there are cases that have been clearly deceptive for public-relations purposes (e.g., on the abilities of Hanson Robotics’ “Sophia”). Of course, some fairly basic constraints of business ethics and law apply to robots, too: product safety and liability, or non-deception in advertisement. It appears that these existing constraints take care of many concerns that are raised. There are cases, however, where human-human interaction has aspects that appear specifically human in ways that can perhaps not be replaced by robots: care, love, and sex.

2.5.1 Example (a) Care Robots

The use of robots in health care for humans is currently at the level of concept studies in real environments, but it may become a usable technology in a few years, and has raised a number of concerns for a dystopian future of de-humanised care (A. Sharkey and N. Sharkey 2011; Robert Sparrow 2016). Current systems include robots that support human carers/caregivers (e.g., in lifting patients, or transporting material), robots that enable patients to do certain things by themselves (e.g., eat with a robotic arm), but also robots that are given to patients as company and comfort (e.g., the “Paro” robot seal). For an overview, see van Wynsberghe (2016); Nørskov (2017); Fosch-Villaronga and Albo-Canals (2019); for a survey of users, see Draper et al. (2014).

One reason why the issue of care has come to the fore is that people have argued that we will need robots in ageing societies. This argument makes problematic assumptions, namely that with longer lifespan people will need more care, and that it will not be possible to attract more humans to caring professions. It may also show a bias about age (Jecker forthcoming). Most importantly, it ignores the nature of automation, which is not simply about replacing humans, but about allowing humans to work more efficiently. It is not very clear that there really is an issue here, since the discussion mostly focuses on the fear of robots de-humanising care, but the actual and foreseeable robots in care are assistive robots for classic automation of technical tasks. They are thus “care robots” only in a behavioural sense of performing tasks in care environments, not in the sense that a human “cares” for the patients. It appears that the success of “being cared for” relies on this intentional sense of “care”, which foreseeable robots cannot provide. If anything, the risk of robots in care is the absence of such intentional care—because fewer human carers may be needed. Interestingly, caring for something, even a virtual agent, can be good for the carer themselves (Lee et al. 2019). A system that pretends to care would be deceptive and thus problematic—unless the deception is countered by sufficiently large utility gain (Coeckelbergh 2016). Some robots that pretend to “care” on a basic level are available (Paro seal) and others are in the making. Perhaps feeling cared for by a machine, to some extent, is progress for some patients.

2.5.2 Example (b) Sex Robots

It has been argued by several tech optimists that humans will likely be interested in sex and companionship with robots and be comfortable with the idea (Levy 2007). Given the variation of human sexual preferences, including sex toys and sex dolls, this seems very likely: The question is whether such devices should be manufactured and promoted, and whether there should be limits in this touchy area. The topic seems to have moved into the mainstream of “robot philosophy” in recent times (Sullins 2012; Danaher and McArthur 2017; N. Sharkey et al. 2017 [OIR]; Bendel 2018; Devlin 2018).

Humans have long had deep emotional attachments to objects, so perhaps companionship or even love with a predictable android is attractive, especially to people who struggle with actual humans, and already prefer dogs, cats, birds, a computer, or a Tamagotchi. Danaher (2019b) argues against Nyholm and Frank (2017) that these can be true friendships, and that such friendship is thus a valuable goal. It certainly looks like such friendship might increase overall utility, even if lacking in depth. In these discussions there is an issue of deception, since a robot cannot (at present) mean what it says, or have feelings for a human. It is well known that humans are prone to attribute feelings and thoughts to entities that behave as if they had sentience, even to clearly inanimate objects that show no behaviour at all. Also, paying for deception seems to be an elementary part of the traditional sex industry.

Finally, there are concerns that have often accompanied matters of sex, namely consent (Frank and Nyholm 2017), aesthetic concerns, and the worry that humans may be “corrupted” by certain experiences. Old-fashioned though this may seem, human behaviour is influenced by experience, and it is likely that pornography or sex robots support the perception of other humans as mere objects of desire, or even recipients of abuse, and thus ruin a deeper sexual and erotic experience. In this vein, the “Campaign Against Sex Robots” argues that these devices are a continuation of slavery and prostitution (Richardson 2016).

2.6 Automation and Employment

It seems clear that AI and robotics will lead to significant gains in productivity and thus overall wealth. The attempt to increase productivity has often been a feature of the economy, though the emphasis on “growth” is a modern phenomenon (Harari 2016: 240). However, productivity gains through automation typically mean that fewer humans are required for the same output. This does not necessarily imply a loss of overall employment, however, because available wealth increases and that can increase demand sufficiently to counteract the productivity gain. In the long run, higher productivity in industrial societies has led to more wealth overall. Major labour market disruptions have occurred in the past, e.g., farming employed over 60% of the workforce in Europe and North America in 1800, while by 2010 it employed ca. 5% in the EU, and even less in the wealthiest countries (European Commission 2013). In the 20 years between 1950 and 1970 the number of hired agricultural workers in the UK was reduced by 50% (Zayed and Loft 2019). Some of these disruptions led to more labour-intensive industries moving to places with lower labour cost. This is an ongoing process.

Classic automation replaced human muscle, whereas digital automation replaces human thought or information-processing—and unlike physical machines, digital automation is very cheap to duplicate (Bostrom and Yudkowsky 2014). It may thus mean a more radical change on the labour market. So, the main question is: will the effects be different this time? Will the creation of new jobs and wealth keep up with the destruction of jobs? And even if it is not different, what are the transition costs, and who bears them? Do we need to make societal adjustments for a fair distribution of costs and benefits of digital automation?

Responses to the issue of unemployment from AI have ranged from the alarmed (Frey and Osborne 2013; Westlake 2014) to the neutral (Metcalf, Keller, and Boyd 2016 [OIR]; Calo 2018; Frey 2019) to the optimistic (Brynjolfsson and McAfee 2016; Harari 2016; Danaher 2019a). In principle, the labour market effect of automation seems to be fairly well understood as involving two channels:

(i) the nature of interactions between differently skilled workers and new technologies affecting labour demand and (ii) the equilibrium effects of technological progress through consequent changes in labour supply and product markets. (Goos 2018: 362)

What currently seems to happen in the labour market as a result of AI and robotics automation is “job polarisation” or the “dumbbell” shape (Goos, Manning, and Salomons 2009): The highly skilled technical jobs are in demand and highly paid, the low skilled service jobs are in demand and badly paid, but the mid-qualification jobs in factories and offices, i.e., the majority of jobs, are under pressure and reduced because they are relatively predictable, and most likely to be automated (Baldwin 2019).

Perhaps enormous productivity gains will allow the “age of leisure” to be realised, something Keynes (1930) had predicted to occur around 2030, assuming a growth rate of 1% per annum. Actually, we have already reached the level he anticipated for 2030, but we are still working—consuming more and inventing ever more levels of organisation. Harari explains how this economic development allowed humanity to overcome hunger, disease, and war—and now we aim for immortality and eternal bliss through AI, thus his title Homo Deus (Harari 2016: 75).
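The arithmetic behind such predictions is simple compounding: output growing at a constant rate g for n years is multiplied by a factor of (1+g)^n. Taking the 1% figure at face value for the century from 1930 to 2030:

    \[
    (1 + 0.01)^{100} = e^{100 \ln(1.01)} \approx e^{0.995} \approx 2.7
    \]

i.e., an economy somewhat under three times its 1930 size; that this level was reached well before 2030 reflects actual growth rates above 1%.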

In general terms, the issue of unemployment is an issue of how goods in a society should be justly distributed. A standard view is that distributive justice should be rationally decided from behind a “veil of ignorance” (Rawls 1971), i.e., as if one does not know what position in a society one would actually be taking (labourer or industrialist, etc.). Rawls thought the chosen principles would then support basic liberties and a distribution that is of greatest benefit to the least-advantaged members of society. It would appear that the AI economy has three features that make such justice unlikely: First, it operates in a largely unregulated environment where responsibility is often hard to allocate. Second, it operates in markets that have a “winner takes all” feature where monopolies develop quickly. Third, the “new economy” of the digital service industries is based on intangible assets, also called “capitalism without capital” (Haskel and Westlake 2017). This means that it is difficult to control multinational digital corporations that do not rely on a physical plant in a particular location. These three features seem to suggest that if we leave the distribution of wealth to free market forces, the result would be a heavily unjust distribution, and this is indeed a development that we can already see.

One interesting question that has not received too much attention is whether the development of AI is environmentally sustainable: Like all computing systems, AI systems produce waste that is very hard to recycle and they consume vast amounts of energy, especially for the training of machine learning systems (and even for the “mining” of cryptocurrency). Again, it appears that some actors in this space offload such costs to the general society.

2.7 Autonomous Systems

There are several notions of autonomy in the discussion of autonomous systems. A stronger notion is involved in philosophical debates where autonomy is the basis for responsibility and personhood (Christman 2003 [2018]). In this context, responsibility implies autonomy, but not inversely, so there can be systems that have degrees of technical autonomy without raising issues of responsibility. The weaker, more technical, notion of autonomy in robotics is relative and gradual: A system is said to be autonomous with respect to human control to a certain degree (Müller 2012). There is a parallel here to the issues of bias and opacity in AI, since autonomy also concerns a power-relation: who is in control, and who is responsible?

Generally speaking, one question is whether autonomous robots raise issues to which our present conceptual schemes must adapt, or whether they just require technical adjustments. In most jurisdictions, there is a sophisticated system of civil and criminal liability to resolve such issues. Technical standards, e.g., for the safe use of machinery in medical environments, will likely need to be adjusted. There is already a field of “verifiable AI” for such safety-critical systems and for “security applications”. Bodies like the IEEE (The Institute of Electrical and Electronics Engineers) and the BSI (British Standards Institution) have produced “standards”, particularly on more technical sub-problems, such as data security and transparency. Among the many autonomous systems on land, on water, under water, in air or space, we discuss two samples: autonomous vehicles and autonomous weapons.

2.7.1 Example (a) Autonomous Vehicles

Autonomous vehicles hold the promise of reducing the very significant damage that human driving currently causes—approximately 1 million humans killed per year, many more injured, the environment polluted, earth sealed with concrete and tarmac, cities full of parked cars, etc. However, there seem to be questions about how autonomous vehicles should behave, and how responsibility and risk should be distributed in the complicated system the vehicles operate in. (There is also significant disagreement over how long the development of fully autonomous, or “level 5”, cars (SAE International 2018) will actually take.)

There is some discussion of “trolley problems” in this context. In the classic “trolley problems” (Thomson 1976; Woollard and Howard-Snyder 2016: section 2) various dilemmas are presented. The simplest version is that of a trolley train on a track that is heading towards five people and will kill them, unless the train is diverted onto a side track, but on that track there is one person, who will be killed if the train takes that side track. The example goes back to a remark in (Foot 1967: 6), who discusses a number of dilemma cases where tolerated and intended consequences of an action differ. “Trolley problems” are not supposed to describe actual ethical problems or to be solved with a “right” choice. Rather, they are thought-experiments where choice is artificially constrained to a small finite number of distinct one-off options and where the agent has perfect knowledge. These problems are used as a theoretical tool to investigate ethical intuitions and theories—especially the difference between actively doing vs. allowing something to happen, intended vs. tolerated consequences, and consequentialist vs. other normative approaches (Kamm 2016). This type of problem has reminded many of the problems encountered in actual driving and in autonomous driving (Lin 2016). It is doubtful, however, that an actual driver or autonomous car will ever have to solve trolley problems (but see Keeling 2020). While autonomous car trolley problems have received a lot of media attention (Awad et al. 2018), they do not seem to offer anything new to either ethical theory or to the programming of autonomous vehicles.

The more common ethical problems in driving, such as speeding, risky overtaking, not keeping a safe distance, etc. are classic problems of pursuing personal interest vs. the common good. The vast majority of these are covered by legal regulations on driving. Programming the car to drive “by the rules” rather than “by the interest of the passengers” or “to achieve maximum utility” is thus deflated to a standard problem of programming ethical machines (see section 2.9 ). There are probably additional discretionary rules of politeness and interesting questions on when to break the rules (Lin 2016), but again this seems to be more a case of applying standard considerations (rules vs. utility) to the case of autonomous vehicles.

Notable policy efforts in this field include the report (German Federal Ministry of Transport and Digital Infrastructure 2017), which stresses that safety is the primary objective. Rule 10 states

In the case of automated and connected driving systems, the accountability that was previously the sole preserve of the individual shifts from the motorist to the manufacturers and operators of the technological systems and to the bodies responsible for taking infrastructure, policy and legal decisions.

(See section 2.10.1 below). The resulting German and EU laws on licensing automated driving are much more restrictive than their US counterparts where “testing on consumers” is a strategy used by some companies—without informed consent of the consumers or their possible victims.

2.7.2 Example (b) Autonomous Weapons

The notion of automated weapons is fairly old:

For example, instead of fielding simple guided missiles or remotely piloted vehicles, we might launch completely autonomous land, sea, and air vehicles capable of complex, far-ranging reconnaissance and attack missions. (DARPA 1983: 1)

This proposal was ridiculed as “fantasy” at the time (Dreyfus, Dreyfus, and Athanasiou 1986: ix), but it is now a reality, at least for more easily identifiable targets (missiles, planes, ships, tanks, etc.), but not for human combatants. The main arguments against (lethal) autonomous weapon systems (AWS or LAWS) are that they support extrajudicial killings, take responsibility away from humans, and make wars or killings more likely—for a detailed list of issues see Lin, Bekey, and Abney (2008: 73–86).

It appears that lowering the hurdle to use such systems (autonomous vehicles, “fire-and-forget” missiles, or drones loaded with explosives) and reducing the probability of being held accountable would increase the probability of their use. The crucial asymmetry where one side can kill with impunity, and thus has few reasons not to do so, already exists in conventional drone wars with remote-controlled weapons (e.g., US in Pakistan). It is easy to imagine a small drone that searches, identifies, and kills an individual human—or perhaps a type of human. These are the kinds of cases brought forward by the Campaign to Stop Killer Robots and other activist groups. Some of these arguments seem to amount to saying that autonomous weapons are indeed weapons …, and weapons kill, but we still make them in gigantic numbers. On the matter of accountability, autonomous weapons might make identification and prosecution of the responsible agents more difficult—but this is not clear, given the digital records that one can keep, at least in a conventional war. The difficulty of allocating punishment is sometimes called the “retribution gap” (Danaher 2016a).

Another question is whether using autonomous weapons in war would make wars worse, or make wars less bad. If robots reduce war crimes and crimes in war, the answer may well be positive and has been used as an argument in favour of these weapons (Arkin 2009; Müller 2016a) but also as an argument against them (Amoroso and Tamburrini 2018). Arguably the main threat is not the use of such weapons in conventional warfare, but in asymmetric conflicts or by non-state agents, including criminals.

It has also been said that autonomous weapons cannot conform to International Humanitarian Law, which requires observance of the principles of distinction (between combatants and civilians), proportionality (of force), and military necessity (of force) in military conflict (A. Sharkey 2019). It is true that the distinction between combatants and non-combatants is hard, but the distinction between civilian and military ships is easy—so all this says is that we should not construct and use such weapons if they do violate Humanitarian Law. Additional concerns have been raised that being killed by an autonomous weapon threatens human dignity, but even the defenders of a ban on these weapons seem to say that these are not good arguments:

There are other weapons, and other technologies, that also compromise human dignity. Given this, and the ambiguities inherent in the concept, it is wiser to draw on several types of objections in arguments against AWS, and not to rely exclusively on human dignity. (A. Sharkey 2019)

A lot has been made of keeping humans “in the loop” or “on the loop” in the military guidance on weapons—these ways of spelling out “meaningful control” are discussed in (Santoni de Sio and van den Hoven 2018). There have been discussions about the difficulties of allocating responsibility for the killings of an autonomous weapon, and a “responsibility gap” has been suggested (esp. Rob Sparrow 2007), meaning that neither the human nor the machine may be responsible. On the other hand, we do not assume that for every event there is someone responsible for that event, and the real issue may well be the distribution of risk (Simpson and Müller 2016). Risk analysis (Hansson 2013) indicates it is crucial to identify who is exposed to risk, who is a potential beneficiary, and who makes the decisions (Hansson 2018: 1822–1824).

2.8 Machine Ethics

Machine ethics is ethics for machines, for “ethical machines”, for machines as subjects, rather than for the human use of machines as objects. It is often not very clear whether this is supposed to cover all of AI ethics or to be a part of it (Floridi and Sanders 2004; Moor 2006; Anderson and Anderson 2011; Wallach and Asaro 2017). Sometimes it looks as though there is the (dubious) inference at play here that if machines act in ethically relevant ways, then we need a machine ethics. Accordingly, some use a broader notion:

machine ethics is concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable. (Anderson and Anderson 2007: 15)

This might include mere matters of product safety, for example. Other authors sound rather ambitious but use a narrower notion:

AI reasoning should be able to take into account societal values, moral and ethical considerations; weigh the respective priorities of values held by different stakeholders in various multicultural contexts; explain its reasoning; and guarantee transparency. (Dignum 2018: 1, 2)

Some of the discussion in machine ethics makes the very substantial assumption that machines can, in some sense, be ethical agents responsible for their actions, or “autonomous moral agents” (see van Wynsberghe and Robbins 2019). The basic idea of machine ethics is now finding its way into actual robotics where the assumption that these machines are artificial moral agents in any substantial sense is usually not made (Winfield et al. 2019). It is sometimes observed that a robot that is programmed to follow ethical rules can very easily be modified to follow unethical rules (Vanderelst and Winfield 2018).

The idea that machine ethics might take the form of “laws” has famously been investigated by Isaac Asimov, who proposed “three laws of robotics” (Asimov 1942):

First Law—A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law—A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law—A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov then showed in a number of stories how conflicts between these three laws make their use problematic, despite their hierarchical organisation.
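
Read as a program, the three laws impose a lexicographic ordering: candidate actions are filtered by the First Law before the Second, and by the Second before the Third. A minimal sketch (ours, with toy action records standing in for judgments that are in practice exactly the difficult part) might look as follows; Asimov’s stories can then be read as cases in which no available action passes all three filters, or in which the predicates themselves are undecidable.

    def permitted(action):
        # First Law: no injury to a human, through action or through inaction.
        if action["harms_human"] or action["allows_harm_by_inaction"]:
            return False
        # Second Law: obey orders, except where obedience would violate the
        # First Law (actions obeying such orders are already excluded above).
        if action["disobeys_order"] and not action["order_violates_first_law"]:
            return False
        # Third Law: self-preservation; because the checks run in priority
        # order, an action reaching this line already satisfies Laws 1 and 2.
        return not action["endangers_self"]

    # A dilemma of the kind Asimov exploits: every option fails some law.
    options = [
        dict(harms_human=False, allows_harm_by_inaction=True,
             disobeys_order=False, order_violates_first_law=False,
             endangers_self=False),
        dict(harms_human=False, allows_harm_by_inaction=False,
             disobeys_order=False, order_violates_first_law=False,
             endangers_self=True),
    ]
    print([permitted(o) for o in options])   # [False, False]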

It is not clear that there is a consistent notion of “machine ethics”, since weaker versions are in danger of reducing “having an ethics” to notions that would not normally be considered sufficient (e.g., without “reflection” or even without “action”), while stronger notions that move towards artificial moral agents may describe a (currently) empty set.

If one takes machine ethics to concern moral agents, in some substantial sense, then these agents can be called “artificial moral agents”, having rights and responsibilities. However, the discussion about artificial entities challenges a number of common notions in ethics and it can be very useful to understand these in abstraction from the human case (cf. Misselhorn 2020; Powers and Ganascia forthcoming).

Several authors use “artificial moral agent” in a less demanding sense, borrowing from the use of “agent” in software engineering in which case matters of responsibility and rights will not arise (Allen, Varner, and Zinser 2000). James Moor (2006) distinguishes four types of machine agents: ethical impact agents (e.g., robot jockeys), implicit ethical agents (e.g., safe autopilot), explicit ethical agents (e.g., using formal methods to estimate utility), and full ethical agents (who “can make explicit ethical judgments and generally is competent to reasonably justify them. An average adult human is a full ethical agent”.) Several ways to achieve “explicit” or “full” ethical agents have been proposed, via programming it in (operational morality), via “developing” the ethics itself (functional morality), and finally full-blown morality with full intelligence and sentience (Allen, Smit, and Wallach 2005; Moor 2006). Programmed agents are sometimes not considered “full” agents because they are “competent without comprehension”, just like the neurons in a brain (Dennett 2017; Hakli and Mäkelä 2019).

In some discussions, the notion of “moral patient” plays a role: Ethical agents have responsibilities while ethical patients have rights because harm to them matters. It seems clear that some entities are patients without being agents, e.g., simple animals that can feel pain but cannot make justified choices. On the other hand, it is normally understood that all agents will also be patients (e.g., in a Kantian framework). Usually, being a person is supposed to be what makes an entity a responsible agent, someone who can have duties and be the object of ethical concerns. Such personhood is typically a deep notion associated with phenomenal consciousness, intention and free will (Frankfurt 1971; Strawson 1998). Torrance (2011) suggests “artificial (or machine) ethics could be defined as designing machines that do things that, when done by humans, are indicative of the possession of ‘ethical status’ in those humans” (2011: 116), which he takes to be “ethical productivity and ethical receptivity” (2011: 117), his expressions for moral agents and patients.

2.9.1 Responsibility for Robots

There is broad consensus that accountability, liability, and the rule of law are basic requirements that must be upheld in the face of new technologies (European Group on Ethics in Science and New Technologies 2018, 18), but the issue in the case of robots is how this can be done and how responsibility can be allocated. If the robots act, will they themselves be responsible, liable, or accountable for their actions? Or should the distribution of risk perhaps take precedence over discussions of responsibility?

Traditional distribution of responsibility already occurs: A car maker is responsible for the technical safety of the car, a driver is responsible for driving, a mechanic is responsible for proper maintenance, the public authorities are responsible for the technical conditions of the roads, etc. In general

The effects of decisions or actions based on AI are often the result of countless interactions among many actors, including designers, developers, users, software, and hardware.… With distributed agency comes distributed responsibility. (Taddeo and Floridi 2018: 751).

How this distribution might occur is not a problem that is specific to AI, but it gains particular urgency in this context (Nyholm 2018a, 2018b). In classical control engineering, distributed control is often achieved through a control hierarchy plus control loops across these hierarchies.
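
The control-engineering remark can be illustrated with a minimal sketch (ours, not from the cited literature): a supervisory loop allocates a global target across subsystems, and low-level loops track the resulting setpoints, so the overall behaviour is produced by both levels together.

    def inner_loop(state, setpoint, gain=0.5):
        # Low-level control loop: move the local state toward its setpoint.
        return state + gain * (setpoint - state)

    def supervisor(states, target_total):
        # Higher-level loop: allocate one global target across subsystems.
        share = target_total / len(states)
        return [share] * len(states)

    states = [0.0, 4.0, 10.0]
    for _ in range(20):
        setpoints = supervisor(states, target_total=9.0)
        states = [inner_loop(s, sp) for s, sp in zip(states, setpoints)]
    print([round(s, 3) for s in states])   # each subsystem converges to 3.0

No single component “decides” the outcome here, which is one way to picture distributed agency and, with it, distributed responsibility.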

2.9.2 Rights for Robots

Some authors have indicated that it should be seriously considered whether current robots must be allocated rights (Gunkel 2018a, 2018b; Danaher forthcoming; Turner 2019). This position seems to rely largely on criticism of its opponents and on the empirical observation that robots and other non-persons are sometimes treated as having rights. In this vein, a “relational turn” has been proposed: If we relate to robots as though they had rights, then we might be well-advised not to ask whether they “really” do have such rights (Coeckelbergh 2010, 2012, 2018). This raises the question how far such anti-realism or quasi-realism can go, and what it means then to say that “robots have rights” in a human-centred approach (Gerdes 2016). On the other side of the debate, Bryson has insisted that robots should not enjoy rights (Bryson 2010), though she considers it a possibility (Gunkel and Bryson 2014).

There is a wholly separate issue whether robots (or other AI systems) should be given the status of “legal entities” or “legal persons”, in the sense in which natural persons, but also states, businesses, or organisations, are “entities”, namely that they can have legal rights and duties. The European Parliament has considered allocating such status to robots in order to deal with civil liability (EU Parliament 2016; Bertolini and Aiello 2018), but not criminal liability, which is reserved for natural persons. It would also be possible to assign only a certain subset of rights and duties to robots. It has been said that “such legislative action would be morally unnecessary and legally troublesome” because it would not serve the interest of humans (Bryson, Diamantis, and Grant 2017: 273). In environmental ethics there is a long-standing discussion about legal rights for natural objects like trees (C. D. Stone 1972).

It has also been said that the reasons for developing robots with rights, or artificial moral patients, in the future are ethically doubtful (van Wynsberghe and Robbins 2019). In the community of “artificial consciousness” researchers there is significant concern about whether it would be ethical to create such consciousness, since creating it would presumably imply ethical obligations to a sentient being, e.g., not to harm it and not to end its existence by switching it off; some authors have called for a “moratorium on synthetic phenomenology” (Bentley et al. 2018: 28f).

2.10.1 Singularity and Superintelligence

In some quarters, the aim of current AI is thought to be an “artificial general intelligence” (AGI), contrasted to a technical or “narrow” AI. AGI is usually distinguished from traditional notions of AI as a general purpose system, and from Searle’s notion of “strong AI”:

computers given the right programs can be literally said to understand and have other cognitive states. (Searle 1980: 417)

The idea of singularity is that if the trajectory of artificial intelligence reaches systems that have a human level of intelligence, then these systems would themselves have the ability to develop AI systems that surpass the human level of intelligence, i.e., that are “superintelligent” (see below). Such superintelligent AI systems would quickly self-improve or develop even more intelligent systems. This sharp turn of events after reaching superintelligent AI is the “singularity”, from which point on the development of AI is out of human control and hard to predict (Kurzweil 2005: 487).
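
The mathematical metaphor can be made precise with a toy model (our illustration, not one offered by the cited authors): if intelligence I grows at a rate proportional to itself, the result is ordinary exponential growth; but if self-improvement makes the growth rate proportional to the square of the current level, the solution blows up in finite time:

    \frac{dI}{dt} = cI \implies I(t) = I_0 e^{ct}, \qquad
    \frac{dI}{dt} = cI^2 \implies I(t) = \frac{I_0}{1 - c I_0 t},

where the second solution diverges as t approaches t* = 1/(c I_0): a singularity in the literal, mathematical sense. Kurzweil’s own use of “singularity” is looser than this.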

The fear that “the robots we created will take over the world” had captured the human imagination even before there were computers (e.g., Butler 1863) and is the central theme in Čapek’s famous play that introduced the word “robot” (Čapek 1920). This fear was first formulated as a possible trajectory of existing AI into an “intelligence explosion” by Irving Good:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. (Good 1965: 33)

The optimistic argument from acceleration to singularity is spelled out by Kurzweil (1999, 2005, 2012), who essentially points out that computing power has been increasing exponentially, i.e., doubling ca. every 2 years since 1970 in accordance with “Moore’s Law” on the number of transistors, and will continue to do so for some time in the future. He predicted (Kurzweil 1999) that by 2010 supercomputers would reach human computation capacity, by 2030 “mind uploading” would be possible, and by 2045 the “singularity” would occur. Kurzweil talks about an increase in computing power that can be purchased at a given cost, but of course in recent years the funds available to AI companies have also increased enormously: Amodei and Hernandez (2018 [OIR]) thus estimate that in the years 2012–2018 the actual computing power available to train a particular AI system doubled every 3.4 months, resulting in a 300,000x increase, not the 7x increase that doubling every two years would have created.
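
The arithmetic behind these figures is easy to check; the following back-of-the-envelope sketch takes the reported 300,000x increase and the 3.4-month doubling time as given and derives the rest (the exact time span covered by the Amodei and Hernandez data is an approximation here):

    from math import log2

    ratio = 300_000                        # reported increase in training compute
    doublings = log2(ratio)                # ~ 18.2 doublings
    span_months = doublings * 3.4          # ~ 62 months, roughly the 2012-2018 window
    moore_ratio = 2 ** (span_months / 24)  # two-year doubling over the same span
    print(round(doublings, 1), round(span_months / 12, 1), round(moore_ratio, 1))
    # 18.2 doublings over ~5.2 years; a two-year doubling time would have
    # produced only a ~6-7x increase over that period.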

A common version of this argument (Chalmers 2010) talks about an increase in “intelligence” of the AI system (rather than raw computing power), but the crucial point of “singularity” remains the one where further development of AI is taken over by AI systems and accelerates beyond human level. Bostrom (2014) explains in some detail what would happen at that point and what the risks for humanity are. The discussion is summarised in Eden et al. (2012); Armstrong (2014); Shanahan (2015). There are possible paths to superintelligence other than computing power increase, e.g., the complete emulation of the human brain on a computer (Kurzweil 2012; Sandberg 2013), biological paths, or networks and organisations (Bostrom 2014: 22–51).

Despite obvious weaknesses in the identification of “intelligence” with processing power, Kurzweil seems right that humans tend to underestimate the power of exponential growth. Mini-test: If you walked in steps in such a way that each step is double the previous, starting with a step of one metre, how far would you get with 30 steps? (Answer: almost three times the distance from the Earth to the Moon.) Indeed, most progress in AI is readily attributable to the availability of processors that are faster by orders of magnitude, larger storage, and higher investment (Müller 2018). The actual acceleration and its speeds are discussed in Müller and Bostrom (2016) and Bostrom, Dafoe, and Flynn (forthcoming); Sandberg (2019) argues that progress will continue for some time.
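
The mini-test is itself a one-liner to verify (using the standard mean Earth–Moon distance):

    total_m = sum(2**i for i in range(30))   # steps of 1, 2, 4, ... metres
    moon_km = 384_400                        # mean Earth-Moon distance in km
    print(total_m, round(total_m / 1000 / moon_km, 2))
    # 1073741823 metres, i.e. about 2.79 times the Earth-Moon distance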

The participants in this debate are united by being technophiles in the sense that they expect technology to develop rapidly and bring broadly welcome changes. Beyond that, they divide into those who focus on benefits (e.g., Kurzweil) and those who focus on risks (e.g., Bostrom). Both camps sympathise with “transhuman” views of survival for humankind in a different physical form, e.g., uploaded on a computer (Moravec 1990, 1998; Bostrom 2003a, 2003c). They also consider the prospects of “human enhancement” in various respects, including intelligence, often called “IA” (intelligence augmentation). It may be that future AI will be used for human enhancement, or will contribute further to the dissolution of the neatly defined single human person. Robin Hanson provides detailed speculation about what will happen economically in case human “brain emulation” enables truly intelligent robots or “ems” (Hanson 2016).

The argument from superintelligence to risk requires the assumption that superintelligence does not imply benevolence—contrary to Kantian traditions in ethics that have argued higher levels of rationality or intelligence would go along with a better understanding of what is moral and better ability to act morally (Gewirth 1978; Chalmers 2010: 36f). Arguments for risk from superintelligence say that rationality and morality are entirely independent dimensions—this is sometimes explicitly argued for as an “orthogonality thesis” (Bostrom 2012; Armstrong 2013; Bostrom 2014: 105–109).

Criticism of the singularity narrative has been raised from various angles. Kurzweil and Bostrom seem to assume that intelligence is a one-dimensional property and that the set of intelligent agents is totally ordered in the mathematical sense, but neither discusses intelligence at any length in their books. Generally, it is fair to say that despite some efforts, the assumptions made in the powerful narrative of superintelligence and singularity have not been investigated in detail. One question is whether such a singularity will ever occur: it may be conceptually impossible, practically impossible, or may just not happen because of contingent events, including people actively preventing it. Philosophically, the interesting question is whether singularity is just a “myth” (Floridi 2016; Ganascia 2017), and not on the trajectory of actual AI research. This is something that practitioners often assume (e.g., Brooks 2017 [OIR]). They may do so because they fear the public relations backlash, because they overestimate the practical problems, or because they have good reasons to think that superintelligence is an unlikely outcome of current AI research (Müller forthcoming-a). This discussion raises the question whether the concern about “singularity” is just a narrative about fictional AI based on human fears. But even if one does find the negative reasons compelling and the singularity not likely to occur, there is still a significant possibility that one may turn out to be wrong. Philosophy is not on the “secure path of a science” (Kant 1781/1787: B15), and maybe AI and robotics aren’t either (Müller 2020). So, it appears that discussing the very high-impact risk of singularity has justification even if one thinks the probability of such singularity ever occurring is very low.

2.10.2 Existential Risk from Superintelligence

Thinking about superintelligence in the long term raises the question whether superintelligence may lead to the extinction of the human species, which is called an “existential risk” (or XRisk): The superintelligent systems may well have preferences that conflict with the existence of humans on Earth, and may thus decide to end that existence—and given their superior intelligence, they will have the power to do so (or they may happen to end it because they do not really care).

Thinking in the long term is the crucial feature of this literature. Whether the singularity (or another catastrophic event) occurs in 30 or 300 or 3000 years does not really matter (Baum et al. 2019). Perhaps there is even an astronomical pattern such that an intelligent species is bound to discover AI at some point, and thus bring about its own demise. Such a “great filter” would contribute to the explanation of the “Fermi paradox”: why there is no sign of life in the known universe despite the high probability of it emerging. It would be bad news if we found out that the “great filter” is ahead of us, rather than an obstacle that Earth has already passed. These issues are sometimes taken more narrowly to be about human extinction (Bostrom 2013), or more broadly as concerning any large risk for the species (Rees 2018), of which AI is only one (Häggström 2016; Ord 2020). Bostrom also uses the category of “global catastrophic risk” for risks that are sufficiently high up the two dimensions of “scope” and “severity” (Bostrom and Ćirković 2011; Bostrom 2013).

These discussions of risk are usually not connected to the general problem of ethics under risk (e.g., Hansson 2013, 2018). The long-term view has its own methodological challenges but has produced a wide discussion: Tegmark (2017) focuses on AI and human life “3.0” after singularity, while Russell, Dewey, and Tegmark (2015) and Bostrom, Dafoe, and Flynn (forthcoming) survey longer-term policy issues in ethical AI. Several collections of papers have investigated the risks of artificial general intelligence (AGI) and the factors that might make this development more or less risk-laden (Müller 2016b; Callaghan et al. 2017; Yampolskiy 2018), including the development of non-agent AI (Drexler 2019).

2.10.3 Controlling Superintelligence?

In a narrow sense, the “control problem” is how we humans can remain in control of an AI system once it is superintelligent (Bostrom 2014: 127ff). In a wider sense, it is the problem of how we can make sure an AI system will turn out to be positive according to human perception (Russell 2019); this is sometimes called “value alignment”. How easy or hard it is to control a superintelligence depends significantly on the speed of “take-off” to a superintelligent system. This has led to particular attention to systems with self-improvement, such as AlphaZero (Silver et al. 2018).

One aspect of this problem is that we might decide a certain feature is desirable, but then find out that it has unforeseen consequences that are so negative that we would not desire that feature after all. This is the ancient problem of King Midas, who wished that all he touched would turn into gold. The problem has been discussed through various examples, such as the “paperclip maximiser” (Bostrom 2003b) or a program to optimise chess performance (Omohundro 2014).
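
A toy sketch (ours; Bostrom’s thought experiment contains no code) shows the structure of the problem: an optimiser is handed a proxy objective, and the optimum of that objective lies exactly where we did not want it, because everything we care about but did not state is invisible to it.

    outcomes = {
        "make_paperclips_from_scrap": {"paperclips": 100,   "world_intact": True},
        "convert_all_matter":         {"paperclips": 10**9, "world_intact": False},
    }

    def reward(outcome):
        # The feature we asked for, and nothing else.
        return outcome["paperclips"]

    chosen = max(outcomes, key=lambda a: reward(outcomes[a]))
    print(chosen)   # convert_all_matter: optimal for the stated objective,
                    # catastrophic for every unstated one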

Discussions about superintelligence include speculation about omniscient beings, the radical changes on a “latter day”, and the promise of immortality through transcendence of our current bodily form—so sometimes they have clear religious undertones (Capurro 1993; Geraci 2008, 2010; O’Connell 2017: 160ff). These issues also pose a well-known problem of epistemology: Can we know the ways of the omniscient (Danaher 2015)? The usual opponents have already shown up: A characteristic response of an atheist is

People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world. (Domingos 2015)

The new nihilists explain that a “techno-hypnosis” through information technologies has now become our main method of distraction from the loss of meaning (Gertz 2018). Both opponents would thus say we need an ethics for the “small” problems that occur with actual AI and robotics ( sections 2.1 through 2.9 above), and that there is less need for the “big ethics” of existential risk from AI ( section 2.10 ).

The singularity thus raises the problem of the concept of AI again. It is remarkable how imagination or “vision” has played a central role since the very beginning of the discipline at the “Dartmouth Summer Research Project” (McCarthy et al. 1955 [OIR]; Simon and Newell 1958). And the evaluation of this vision is subject to dramatic change: In a few decades, we went from the slogans “AI is impossible” (Dreyfus 1972) and “AI is just automation” (Lighthill 1973) to “AI will solve all problems” (Kurzweil 1999) and “AI may kill us all” (Bostrom 2014). This created media attention and public relations efforts, but it also raises the problem of how much of this “philosophy and ethics of AI” is really about AI rather than about an imagined technology. As we said at the outset, AI and robotics have raised fundamental questions about what we should do with these systems, what the systems themselves should do, and what risks they involve in the long term. They also challenge the human view of humanity as the intelligent and dominant species on Earth. We have seen issues that have been raised and will have to watch technological and social developments closely to catch new issues early on, develop a philosophical analysis, and draw lessons from them for traditional problems of philosophy.

NOTE: Citations in the main text annotated “[OIR]” may be found in the Other Internet Resources section below, not in the Bibliography.

  • Abowd, John M., 2017, “How Will Statistical Agencies Operate When All Data Are Private?”, Journal of Privacy and Confidentiality, 7(3): 1–15. doi:10.29012/jpc.v7i3.404
  • AI4EU, 2019, “Outcomes from the Strategic Orientation Workshop (Deliverable 7.1)”, (June 28, 2019). https://www.ai4eu.eu/ai4eu-project-deliverables
  • Allen, Colin, Iva Smit, and Wendell Wallach, 2005, “Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches”, Ethics and Information Technology , 7(3): 149–155. doi:10.1007/s10676-006-0004-4
  • Allen, Colin, Gary Varner, and Jason Zinser, 2000, “Prolegomena to Any Future Artificial Moral Agent”, Journal of Experimental & Theoretical Artificial Intelligence , 12(3): 251–261. doi:10.1080/09528130050111428
  • Amoroso, Daniele and Guglielmo Tamburrini, 2018, “The Ethical and Legal Case Against Autonomy in Weapons Systems”, Global Jurist , 18(1): art. 20170012. doi:10.1515/gj-2017-0012
  • Anderson, Janna, Lee Rainie, and Alex Luchsinger, 2018, Artificial Intelligence and the Future of Humans , Washington, DC: Pew Research Center.
  • Anderson, Michael and Susan Leigh Anderson, 2007, “Machine Ethics: Creating an Ethical Intelligent Agent”, AI Magazine , 28(4): 15–26.
  • ––– (eds.), 2011, Machine Ethics , Cambridge: Cambridge University Press. doi:10.1017/CBO9780511978036
  • Aneesh, A., 2006, Virtual Migration: The Programming of Globalization , Durham, NC and London: Duke University Press.
  • Arkin, Ronald C., 2009, Governing Lethal Behavior in Autonomous Robots , Boca Raton, FL: CRC Press.
  • Armstrong, Stuart, 2013, “General Purpose Intelligence: Arguing the Orthogonality Thesis”, Analysis and Metaphysics , 12: 68–84.
  • –––, 2014, Smarter Than Us , Berkeley, CA: MIRI.
  • Arnold, Thomas and Matthias Scheutz, 2017, “Beyond Moral Dilemmas: Exploring the Ethical Landscape in HRI”, in Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction—HRI ’17 , Vienna, Austria: ACM Press, 445–452. doi:10.1145/2909824.3020255
  • Asaro, Peter M., 2019, “AI Ethics in Predictive Policing: From Models of Threat to an Ethics of Care”, IEEE Technology and Society Magazine , 38(2): 40–53. doi:10.1109/MTS.2019.2915154
  • Asimov, Isaac, 1942, “Runaround: A Short Story”, Astounding Science Fiction, March 1942. Reprinted in I, Robot, New York: Gnome Press, 1950.
  • Awad, Edmond, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan, 2018, “The Moral Machine Experiment”, Nature , 563(7729): 59–64. doi:10.1038/s41586-018-0637-6
  • Baldwin, Richard, 2019, The Globotics Upheaval: Globalisation, Robotics and the Future of Work , New York: Oxford University Press.
  • Baum, Seth D., Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin, and Roman V. Yampolskiy, 2019, “Long-Term Trajectories of Human Civilization”, Foresight , 21(1): 53–83. doi:10.1108/FS-04-2018-0037
  • Bendel, Oliver, 2018, “Sexroboter aus Sicht der Maschinenethik”, in Handbuch Filmtheorie , Bernhard Groß and Thomas Morsch (eds.), (Springer Reference Geisteswissenschaften), Wiesbaden: Springer Fachmedien Wiesbaden, 1–19. doi:10.1007/978-3-658-17484-2_22-1
  • Bennett, Colin J. and Charles Raab, 2006, The Governance of Privacy: Policy Instruments in Global Perspective , second edition, Cambridge, MA: MIT Press.
  • Benthall, Sebastian and Bruce D. Haynes, 2019, “Racial Categories in Machine Learning”, in Proceedings of the Conference on Fairness, Accountability, and Transparency - FAT* ’19 , Atlanta, GA, USA: ACM Press, 289–298. doi:10.1145/3287560.3287575
  • Bentley, Peter J., Miles Brundage, Olle Häggström, and Thomas Metzinger, 2018, “Should We Fear Artificial Intelligence? In-Depth Analysis”, European Parliamentary Research Service, Scientific Foresight Unit (STOA), March 2018, PE 614.547, 1–40. [ Bentley et al. 2018 available online ]
  • Bertolini, Andrea and Giuseppe Aiello, 2018, “Robot Companions: A Legal and Ethical Analysis”, The Information Society , 34(3): 130–140. doi:10.1080/01972243.2018.1444249
  • Binns, Reuben, 2018, “Fairness in Machine Learning: Lessons from Political Philosophy”, Proceedings of the 1st Conference on Fairness, Accountability and Transparency , in Proceedings of Machine Learning Research , 81: 149–159.
  • Bostrom, Nick, 2003a, “Are We Living in a Computer Simulation?”, The Philosophical Quarterly , 53(211): 243–255. doi:10.1111/1467-9213.00309
  • –––, 2003b, “Ethical Issues in Advanced Artificial Intelligence”, in Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Volume 2, Iva Smit, Wendell Wallach, and G.E. Lasker (eds), (IIAS-147-2003), Tecumseh, ON: International Institute of Advanced Studies in Systems Research and Cybernetics, 12–17. [ Bostrom 2003b revised available online ]
  • –––, 2003c, “Transhumanist Values”, in Ethical Issues for the Twenty-First Century , Frederick Adams (ed.), Bowling Green, OH: Philosophical Documentation Center Press.
  • –––, 2012, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents”, Minds and Machines , 22(2): 71–85. doi:10.1007/s11023-012-9281-3
  • –––, 2013, “Existential Risk Prevention as Global Priority”, Global Policy , 4(1): 15–31. doi:10.1111/1758-5899.12002
  • –––, 2014, Superintelligence: Paths, Dangers, Strategies , Oxford: Oxford University Press.
  • Bostrom, Nick and Milan M. Ćirković (eds.), 2011, Global Catastrophic Risks , New York: Oxford University Press.
  • Bostrom, Nick, Allan Dafoe, and Carrick Flynn, forthcoming, “Policy Desiderata for Superintelligent AI: A Vector Field Approach (V. 4.3)”, in Ethics of Artificial Intelligence , S Matthew Liao (ed.), New York: Oxford University Press. [ Bostrom, Dafoe, and Flynn forthcoming – preprint available online ]
  • Bostrom, Nick and Eliezer Yudkowsky, 2014, “The Ethics of Artificial Intelligence”, in The Cambridge Handbook of Artificial Intelligence , Keith Frankish and William M. Ramsey (eds.), Cambridge: Cambridge University Press, 316–334. doi:10.1017/CBO9781139046855.020 [ Bostrom and Yudkowsky 2014 available online ]
  • Bradshaw, Samantha, Lisa-Maria Neudert, and Phil Howard, 2019, “Government Responses to Malicious Use of Social Media”, Working Paper 2019.2, Oxford: Project on Computational Propaganda. [ Bradshaw, Neudert, and Howard 2019 available online ]
  • Brownsword, Roger, Eloise Scotford, and Karen Yeung (eds.), 2017, The Oxford Handbook of Law, Regulation and Technology , Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199680832.001.0001
  • Brynjolfsson, Erik and Andrew McAfee, 2016, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies , New York: W. W. Norton.
  • Bryson, Joanna J., 2010, “Robots Should Be Slaves”, in Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues , Yorick Wilks (ed.), (Natural Language Processing 8), Amsterdam: John Benjamins Publishing Company, 63–74. doi:10.1075/nlp.8.11bry
  • –––, 2019, “The Past Decade and Future of AI’s Impact on Society”, in Towards a New Enlightenment: A Transcendent Decade, Madrid: Turner - BBVA. [ Bryson 2019 available online ]
  • Bryson, Joanna J., Mihailis E. Diamantis, and Thomas D. Grant, 2017, “Of, for, and by the People: The Legal Lacuna of Synthetic Persons”, Artificial Intelligence and Law , 25(3): 273–291. doi:10.1007/s10506-017-9214-9
  • Burr, Christopher and Nello Cristianini, 2019, “Can Machines Read Our Minds?”, Minds and Machines , 29(3): 461–494. doi:10.1007/s11023-019-09497-4
  • Butler, Samuel, 1863, “Darwin among the Machines: Letter to the Editor”, Letter in The Press (Christchurch) , 13 June 1863. [ Butler 1863 available online ]
  • Callaghan, Victor, James Miller, Roman Yampolskiy, and Stuart Armstrong (eds.), 2017, The Technological Singularity: Managing the Journey , (The Frontiers Collection), Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-662-54033-6
  • Calo, Ryan, 2018, “Artificial Intelligence Policy: A Primer and Roadmap”, University of Bologna Law Review , 3(2): 180-218. doi:10.6092/ISSN.2531-6133/8670
  • Calo, Ryan, A. Michael Froomkin, and Ian Kerr (eds.), 2016, Robot Law , Cheltenham: Edward Elgar.
  • Čapek, Karel, 1920, R.U.R. , Prague: Aventium. Translated by Peter Majer and Cathy Porter, London: Methuen, 1999.
  • Capurro, Raphael, 1993, “Ein Grinsen Ohne Katze: Von der Vergleichbarkeit Zwischen ‘Künstlicher Intelligenz’ und ‘Getrennten Intelligenzen’”, Zeitschrift für philosophische Forschung , 47: 93–102.
  • Cave, Stephen, 2019, “To Save Us from a Kafkaesque Future, We Must Democratise AI”, The Guardian , 04 January 2019. [ Cave 2019 available online ]
  • Chalmers, David J., 2010, “The Singularity: A Philosophical Analysis”, Journal of Consciousness Studies , 17(9–10): 7–65. [ Chalmers 2010 available online ]
  • Christman, John, 2003 [2018], “Autonomy in Moral and Political Philosophy”, The Stanford Encyclopedia of Philosophy (Spring 2018 edition), Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/spr2018/entries/autonomy-moral/ >
  • Coeckelbergh, Mark, 2010, “Robot Rights? Towards a Social-Relational Justification of Moral Consideration”, Ethics and Information Technology , 12(3): 209–221. doi:10.1007/s10676-010-9235-5
  • –––, 2012, Growing Moral Relations: Critique of Moral Status Ascription , London: Palgrave. doi:10.1057/9781137025968
  • –––, 2016, “Care Robots and the Future of ICT-Mediated Elderly Care: A Response to Doom Scenarios”, AI & Society , 31(4): 455–462. doi:10.1007/s00146-015-0626-3
  • –––, 2018, “What Do We Mean by a Relational Ethics? Growing a Relational Approach to the Moral Standing of Plants, Robots and Other Non-Humans”, in Plant Ethics: Concepts and Applications , Angela Kallhoff, Marcello Di Paola, and Maria Schörgenhumer (eds.), London: Routledge, 110–121.
  • Crawford, Kate and Ryan Calo, 2016, “There Is a Blind Spot in AI Research”, Nature , 538(7625): 311–313. doi:10.1038/538311a
  • Cristianini, Nello, forthcoming, “Shortcuts to Artificial Intelligence”, in Machines We Trust , Marcello Pelillo and Teresa Scantamburlo (eds.), Cambridge, MA: MIT Press. [ Cristianini forthcoming – preprint available online ]
  • Danaher, John, 2015, “Why AI Doomsayers Are Like Sceptical Theists and Why It Matters”, Minds and Machines , 25(3): 231–246. doi:10.1007/s11023-015-9365-y
  • –––, 2016a, “Robots, Law and the Retribution Gap”, Ethics and Information Technology , 18(4): 299–309. doi:10.1007/s10676-016-9403-3
  • –––, 2016b, “The Threat of Algocracy: Reality, Resistance and Accommodation”, Philosophy & Technology , 29(3): 245–268. doi:10.1007/s13347-015-0211-1
  • –––, 2019a, Automation and Utopia: Human Flourishing in a World without Work , Cambridge, MA: Harvard University Press.
  • –––, 2019b, “The Philosophical Case for Robot Friendship”, Journal of Posthuman Studies , 3(1): 5–24. doi:10.5325/jpoststud.3.1.0005
  • –––, forthcoming, “Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism”, Science and Engineering Ethics , first online: 20 June 2019. doi:10.1007/s11948-019-00119-x
  • Danaher, John and Neil McArthur (eds.), 2017, Robot Sex: Social and Ethical Implications , Boston, MA: MIT Press.
  • DARPA, 1983, “Strategic Computing. New-Generation Computing Technology: A Strategic Plan for Its Development and Application to Critical Problems in Defense”, ADA141982, 28 October 1983. [ DARPA 1983 available online ]
  • Dennett, Daniel C., 2017, From Bacteria to Bach and Back: The Evolution of Minds, New York: W.W. Norton.
  • Devlin, Kate, 2018, Turned On: Science, Sex and Robots , London: Bloomsbury.
  • Diakopoulos, Nicholas, 2015, “Algorithmic Accountability: Journalistic Investigation of Computational Power Structures”, Digital Journalism , 3(3): 398–415. doi:10.1080/21670811.2014.976411
  • Dignum, Virginia, 2018, “Ethics in Artificial Intelligence: Introduction to the Special Issue”, Ethics and Information Technology , 20(1): 1–3. doi:10.1007/s10676-018-9450-z
  • Domingos, Pedro, 2015, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World , London: Allen Lane.
  • Draper, Heather, Tom Sorell, Sandra Bedaf, Dag Sverre Syrdal, Carolina Gutierrez-Ruiz, Alexandre Duclos, and Farshid Amirabdollahian, 2014, “Ethical Dimensions of Human-Robot Interactions in the Care of Older People: Insights from 21 Focus Groups Convened in the UK, France and the Netherlands”, in International Conference on Social Robotics 2014 , Michael Beetz, Benjamin Johnston, and Mary-Anne Williams (eds.), (Lecture Notes in Artificial Intelligence 8755), Cham: Springer International Publishing, 135–145. doi:10.1007/978-3-319-11973-1_14
  • Dressel, Julia and Hany Farid, 2018, “The Accuracy, Fairness, and Limits of Predicting Recidivism”, Science Advances , 4(1): eaao5580. doi:10.1126/sciadv.aao5580
  • Drexler, K. Eric, 2019, “Reframing Superintelligence: Comprehensive AI Services as General Intelligence”, FHI Technical Report, 2019-1, 1-210. [ Drexler 2019 available online ]
  • Dreyfus, Hubert L., 1972, What Computers Still Can’t Do: A Critique of Artificial Reason , second edition, Cambridge, MA: MIT Press 1992.
  • Dreyfus, Hubert L., Stuart E. Dreyfus, and Tom Athanasiou, 1986, Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer , New York: Free Press.
  • Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith, 2006, “Calibrating Noise to Sensitivity in Private Data Analysis”, in Theory of Cryptography (TCC 2006), Shai Halevi and Tal Rabin (eds.), (Lecture Notes in Computer Science 3876), Berlin, Heidelberg: Springer, 265–284.
  • Eden, Amnon H., James H. Moor, Johnny H. Søraker, and Eric Steinhart (eds.), 2012, Singularity Hypotheses: A Scientific and Philosophical Assessment , (The Frontiers Collection), Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-32560-1
  • Eubanks, Virginia, 2018, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor , London: St. Martin’s Press.
  • European Commission, 2013, “How Many People Work in Agriculture in the European Union? An Answer Based on Eurostat Data Sources”, EU Agricultural Economics Briefs, 8 (July 2013). [ European Commission 2013 available online ]
  • European Group on Ethics in Science and New Technologies, 2018, “Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems”, 9 March 2018, European Commission, Directorate-General for Research and Innovation, Unit RTD.01. [ European Group 2018 available online ]
  • Ferguson, Andrew Guthrie, 2017, The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement , New York: NYU Press.
  • Floridi, Luciano, 2016, “Should We Be Afraid of AI? Machines Seem to Be Getting Smarter and Smarter and Much Better at Human Jobs, yet True AI Is Utterly Implausible. Why?”, Aeon , 9 May 2016. URL = < Floridi 2016 available online >
  • Floridi, Luciano, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke, and Effy Vayena, 2018, “AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations”, Minds and Machines , 28(4): 689–707. doi:10.1007/s11023-018-9482-5
  • Floridi, Luciano and Jeff W. Sanders, 2004, “On the Morality of Artificial Agents”, Minds and Machines , 14(3): 349–379. doi:10.1023/B:MIND.0000035461.63578.9d
  • Floridi, Luciano and Mariarosaria Taddeo, 2016, “What Is Data Ethics?”, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences , 374(2083): 20160360. doi:10.1098/rsta.2016.0360
  • Foot, Philippa, 1967, “The Problem of Abortion and the Doctrine of the Double Effect”, Oxford Review , 5: 5–15.
  • Fosch-Villaronga, Eduard and Jordi Albo-Canals, 2019, “‘I’ll Take Care of You,’ Said the Robot”, Paladyn, Journal of Behavioral Robotics , 10(1): 77–93. doi:10.1515/pjbr-2019-0006
  • Frank, Lily and Sven Nyholm, 2017, “Robot Sex and Consent: Is Consent to Sex between a Robot and a Human Conceivable, Possible, and Desirable?”, Artificial Intelligence and Law , 25(3): 305–323. doi:10.1007/s10506-017-9212-y
  • Frankfurt, Harry G., 1971, “Freedom of the Will and the Concept of a Person”, The Journal of Philosophy , 68(1): 5–20.
  • Frey, Carl Benedict, 2019, The Technology Trap: Capital, Labour, and Power in the Age of Automation , Princeton, NJ: Princeton University Press.
  • Frey, Carl Benedikt and Michael A. Osborne, 2013, “The Future of Employment: How Susceptible Are Jobs to Computerisation?”, Oxford Martin School Working Papers, 17 September 2013. [ Frey and Osborne 2013 available online ]
  • Ganascia, Jean-Gabriel, 2017, Le Mythe De La Singularité , Paris: Éditions du Seuil.
  • EU Parliament, 2016, “Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL))”, Committee on Legal Affairs, 10 November 2016. https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html
  • EU Regulation, 2016/679, “General Data Protection Regulation: Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC”, Official Journal of the European Union, 119 (4 May 2016), 1–88. [ Regulation (EU) 2016/679 available online ]
  • Geraci, Robert M., 2008, “Apocalyptic AI: Religion and the Promise of Artificial Intelligence”, Journal of the American Academy of Religion , 76(1): 138–166. doi:10.1093/jaarel/lfm101
  • –––, 2010, Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780195393026.001.0001
  • Gerdes, Anne, 2016, “The Issue of Moral Consideration in Robot Ethics”, ACM SIGCAS Computers and Society , 45(3): 274–279. doi:10.1145/2874239.2874278
  • German Federal Ministry of Transport and Digital Infrastructure, 2017, “Report of the Ethics Commission: Automated and Connected Driving”, June 2017, 1–36. [ GFMTDI 2017 available online ]
  • Gertz, Nolen, 2018, Nihilism and Technology , London: Rowman & Littlefield.
  • Gewirth, Alan, 1978, “The Golden Rule Rationalized”, Midwest Studies in Philosophy , 3(1): 133–147. doi:10.1111/j.1475-4975.1978.tb00353.x
  • Gibert, Martin, 2019, “Éthique Artificielle (Version Grand Public)”, in L’Encyclopédie Philosophique , Maxime Kristanek (ed.), accessed: 16 April 2020, URL = < Gibert 2019 available online >
  • Giubilini, Alberto and Julian Savulescu, 2018, “The Artificial Moral Advisor. The ‘Ideal Observer’ Meets Artificial Intelligence”, Philosophy & Technology , 31(2): 169–188. doi:10.1007/s13347-017-0285-z
  • Good, Irving John, 1965, “Speculations Concerning the First Ultraintelligent Machine”, in Advances in Computers 6 , Franz L. Alt and Morris Rubinoff (eds.), New York & London: Academic Press, 31–88. doi:10.1016/S0065-2458(08)60418-0
  • Goodfellow, Ian, Yoshua Bengio, and Aaron Courville, 2016, Deep Learning , Cambridge, MA: MIT Press.
  • Goodman, Bryce and Seth Flaxman, 2017, “European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation’”, AI Magazine , 38(3): 50–57. doi:10.1609/aimag.v38i3.2741
  • Goos, Maarten, 2018, “The Impact of Technological Progress on Labour Markets: Policy Challenges”, Oxford Review of Economic Policy , 34(3): 362–375. doi:10.1093/oxrep/gry002
  • Goos, Maarten, Alan Manning, and Anna Salomons, 2009, “Job Polarization in Europe”, American Economic Review , 99(2): 58–63. doi:10.1257/aer.99.2.58
  • Graham, Sandra and Brian S. Lowery, 2004, “Priming Unconscious Racial Stereotypes about Adolescent Offenders”, Law and Human Behavior , 28(5): 483–504. doi:10.1023/B:LAHU.0000046430.65485.1f
  • Gunkel, David J., 2018a, “The Other Question: Can and Should Robots Have Rights?”, Ethics and Information Technology , 20(2): 87–99. doi:10.1007/s10676-017-9442-4
  • –––, 2018b, Robot Rights , Boston, MA: MIT Press.
  • Gunkel, David J. and Joanna J. Bryson (eds.), 2014, Machine Morality: The Machine as Moral Agent and Patient special issue of Philosophy & Technology , 27(1): 1–142.
  • Häggström, Olle, 2016, Here Be Dragons: Science, Technology and the Future of Humanity , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780198723547.001.0001
  • Hakli, Raul and Pekka Mäkelä, 2019, “Moral Responsibility of Robots and Hybrid Agents”, The Monist , 102(2): 259–275. doi:10.1093/monist/onz009
  • Hanson, Robin, 2016, The Age of Em: Work, Love and Life When Robots Rule the Earth , Oxford: Oxford University Press.
  • Hansson, Sven Ove, 2013, The Ethics of Risk: Ethical Analysis in an Uncertain World , New York: Palgrave Macmillan.
  • –––, 2018, “How to Perform an Ethical Risk Analysis (eRA)”, Risk Analysis , 38(9): 1820–1829. doi:10.1111/risa.12978
  • Harari, Yuval Noah, 2016, Homo Deus: A Brief History of Tomorrow , New York: Harper.
  • Haskel, Jonathan and Stian Westlake, 2017, Capitalism without Capital: The Rise of the Intangible Economy , Princeton, NJ: Princeton University Press.
  • Houkes, Wybo and Pieter E. Vermaas, 2010, Technical Functions: On the Use and Design of Artefacts , (Philosophy of Engineering and Technology 1), Dordrecht: Springer Netherlands. doi:10.1007/978-90-481-3900-2
  • IEEE, 2019, Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems (First Version), < IEEE 2019 available online >.
  • Jasanoff, Sheila, 2016, The Ethics of Invention: Technology and the Human Future , New York: Norton.
  • Jecker, Nancy S., forthcoming, Ending Midlife Bias: New Values for Old Age , New York: Oxford University Press.
  • Jobin, Anna, Marcello Ienca, and Effy Vayena, 2019, “The Global Landscape of AI Ethics Guidelines”, Nature Machine Intelligence , 1(9): 389–399. doi:10.1038/s42256-019-0088-2
  • Johnson, Deborah G. and Mario Verdicchio, 2017, “Reframing AI Discourse”, Minds and Machines , 27(4): 575–590. doi:10.1007/s11023-017-9417-6
  • Kahneman, Daniel, 2011, Thinking, Fast and Slow, London: Macmillan.
  • Kamm, Frances Myrna, 2016, The Trolley Problem Mysteries , Eric Rakowski (ed.), Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780190247157.001.0001
  • Kant, Immanuel, 1781/1787, Kritik der reinen Vernunft . Translated as Critique of Pure Reason , Norman Kemp Smith (trans.), London: Palgrave Macmillan, 1929.
  • Keeling, Geoff, 2020, “Why Trolley Problems Matter for the Ethics of Automated Vehicles”, Science and Engineering Ethics , 26(1): 293–307. doi:10.1007/s11948-019-00096-1
  • Keynes, John Maynard, 1930, “Economic Possibilities for Our Grandchildren”. Reprinted in his Essays in Persuasion , New York: Harcourt Brace, 1932, 358–373.
  • Kissinger, Henry A., 2018, “How the Enlightenment Ends: Philosophically, Intellectually—in Every Way—Human Society Is Unprepared for the Rise of Artificial Intelligence”, The Atlantic , June 2018. [ Kissinger 2018 available online ]
  • Kurzweil, Ray, 1999, The Age of Spiritual Machines: When Computers Exceed Human Intelligence , London: Penguin.
  • –––, 2005, The Singularity Is Near: When Humans Transcend Biology , London: Viking.
  • –––, 2012, How to Create a Mind: The Secret of Human Thought Revealed , New York: Viking.
  • Lee, Minha, Sander Ackermans, Nena van As, Hanwen Chang, Enzo Lucas, and Wijnand IJsselsteijn, 2019, “Caring for Vincent: A Chatbot for Self-Compassion”, in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems—CHI ’19 , Glasgow, Scotland: ACM Press, 1–13. doi:10.1145/3290605.3300932
  • Levy, David, 2007, Love and Sex with Robots: The Evolution of Human-Robot Relationships , New York: Harper & Co.
  • Lighthill, James, 1973, “Artificial Intelligence: A General Survey”, Artificial intelligence: A Paper Symposion , London: Science Research Council. [ Lighthill 1973 available online ]
  • Lin, Patrick, 2016, “Why Ethics Matters for Autonomous Cars”, in Autonomous Driving , Markus Maurer, J. Christian Gerdes, Barbara Lenz, and Hermann Winner (eds.), Berlin, Heidelberg: Springer Berlin Heidelberg, 69–85. doi:10.1007/978-3-662-48847-8_4
  • Lin, Patrick, Keith Abney, and Ryan Jenkins (eds.), 2017, Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence , New York: Oxford University Press. doi:10.1093/oso/9780190652951.001.0001
  • Lin, Patrick, George Bekey, and Keith Abney, 2008, “Autonomous Military Robotics: Risk, Ethics, and Design”, ONR report, California Polytechnic State University, San Luis Obispo, 20 December 2008), 112 pp. [ Lin, Bekey, and Abney 2008 available online ]
  • Lomas, Meghann, Robert Chevalier, Ernest Vincent Cross, Robert Christopher Garrett, John Hoare, and Michael Kopack, 2012, “Explaining Robot Actions”, in Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction—HRI ’12 , Boston, MA: ACM Press, 187–188. doi:10.1145/2157689.2157748
  • Macnish, Kevin, 2017, The Ethics of Surveillance: An Introduction , London: Routledge.
  • Mathur, Arunesh, Gunes Acar, Michael J. Friedman, Elena Lucherini, Jonathan Mayer, Marshini Chetty, and Arvind Narayanan, 2019, “Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites”, Proceedings of the ACM on Human-Computer Interaction , 3(CSCW): art. 81. doi:10.1145/3359183
  • Minsky, Marvin, 1985, The Society of Mind , New York: Simon & Schuster.
  • Misselhorn, Catrin, 2020, “Artificial Systems with Moral Capacities? A Research Design and Its Implementation in a Geriatric Care System”, Artificial Intelligence , 278: art. 103179. doi:10.1016/j.artint.2019.103179
  • Mittelstadt, Brent Daniel and Luciano Floridi, 2016, “The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts”, Science and Engineering Ethics , 22(2): 303–341. doi:10.1007/s11948-015-9652-2
  • Moor, James H., 2006, “The Nature, Importance, and Difficulty of Machine Ethics”, IEEE Intelligent Systems , 21(4): 18–21. doi:10.1109/MIS.2006.80
  • Moravec, Hans, 1990, Mind Children , Cambridge, MA: Harvard University Press.
  • –––, 1998, Robot: Mere Machine to Transcendent Mind , New York: Oxford University Press.
  • Morozov, Evgeny, 2013, To Save Everything, Click Here: The Folly of Technological Solutionism, New York: Public Affairs.
  • Müller, Vincent C., 2012, “Autonomous Cognitive Systems in Real-World Environments: Less Control, More Flexibility and Better Interaction”, Cognitive Computation , 4(3): 212–215. doi:10.1007/s12559-012-9129-4
  • –––, 2016a, “Autonomous Killer Robots Are Probably Good News”, In Drones and Responsibility: Legal, Philosophical and Socio-Technical Perspectives on the Use of Remotely Controlled Weapons , Ezio Di Nucci and Filippo Santoni de Sio (eds.), London: Ashgate, 67–81.
  • ––– (ed.), 2016b, Risks of Artificial Intelligence , London: Chapman & Hall - CRC Press. doi:10.1201/b19187
  • –––, 2018, “In 30 Schritten zum Mond? Zukünftiger Fortschritt in der KI”, Medienkorrespondenz , 20: 5–15. [ Müller 2018 available online ]
  • –––, 2020, “Measuring Progress in Robotics: Benchmarking and the ‘Measure-Target Confusion’”, in Metrics of Sensory Motor Coordination and Integration in Robots and Animals , Fabio Bonsignorio, Elena Messina, Angel P. del Pobil, and John Hallam (eds.), (Cognitive Systems Monographs 36), Cham: Springer International Publishing, 169–179. doi:10.1007/978-3-030-14126-4_9
  • –––, forthcoming-a, Can Machines Think? Fundamental Problems of Artificial Intelligence , New York: Oxford University Press.
  • ––– (ed.), forthcoming-b, Oxford Handbook of the Philosophy of Artificial Intelligence , New York: Oxford University Press.
  • Müller, Vincent C. and Nick Bostrom, 2016, “Future Progress in Artificial Intelligence: A Survey of Expert Opinion”, in Fundamental Issues of Artificial Intelligence , Vincent C. Müller (ed.), Cham: Springer International Publishing, 555–572. doi:10.1007/978-3-319-26485-1_33
  • Newport, Cal, 2019, Digital Minimalism: On Living Better with Less Technology , London: Penguin.
  • Nørskov, Marco (ed.), 2017, Social Robots , London: Routledge.
  • Nyholm, Sven, 2018a, “Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci”, Science and Engineering Ethics , 24(4): 1201–1219. doi:10.1007/s11948-017-9943-x
  • –––, 2018b, “The Ethics of Crashes with Self-Driving Cars: A Roadmap, II”, Philosophy Compass , 13(7): e12506. doi:10.1111/phc3.12506
  • Nyholm, Sven and Lily Frank, 2017, “From Sex Robots to Love Robots: Is Mutual Love with a Robot Possible?”, in Danaher and McArthur 2017: 219–243.
  • O’Connell, Mark, 2017, To Be a Machine: Adventures among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death , London: Granta.
  • O’Neil, Cathy, 2016, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, New York: Crown.
  • Omohundro, Steve, 2014, “Autonomous Technology and the Greater Human Good”, Journal of Experimental & Theoretical Artificial Intelligence , 26(3): 303–315. doi:10.1080/0952813X.2014.895111
  • Ord, Toby, 2020, The Precipice: Existential Risk and the Future of Humanity , London: Bloomsbury.
  • Powers, Thomas M. and Jean-Gabriel Ganascia, forthcoming, “The Ethics of the Ethics of AI”, in Oxford Handbook of Ethics of Artificial Intelligence , Markus D. Dubber, Frank Pasquale, and Sunnit Das (eds.), New York: Oxford.
  • Rawls, John, 1971, A Theory of Justice , Cambridge, MA: Belknap Press.
  • Rees, Martin, 2018, On the Future: Prospects for Humanity , Princeton: Princeton University Press.
  • Richardson, Kathleen, 2016, “Sex Robot Matters: Slavery, the Prostituted, and the Rights of Machines”, IEEE Technology and Society Magazine , 35(2): 46–53. doi:10.1109/MTS.2016.2554421
  • Roessler, Beate, 2017, “Privacy as a Human Right”, Proceedings of the Aristotelian Society , 117(2): 187–206. doi:10.1093/arisoc/aox008
  • Royakkers, Lambèr and Rinie van Est, 2016, Just Ordinary Robots: Automation from Love to War, Boca Raton, FL: CRC Press, Taylor & Francis. doi:10.1201/b18899
  • Russell, Stuart, 2019, Human Compatible: Artificial Intelligence and the Problem of Control , New York: Viking.
  • Russell, Stuart, Daniel Dewey, and Max Tegmark, 2015, “Research Priorities for Robust and Beneficial Artificial Intelligence”, AI Magazine , 36(4): 105–114. doi:10.1609/aimag.v36i4.2577
  • SAE International, 2018, “Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles”, J3016_201806, 15 June 2018. [ SAE International 2018 available online ]
  • Sandberg, Anders, 2013, “Feasibility of Whole Brain Emulation”, in Philosophy and Theory of Artificial Intelligence , Vincent C. Müller (ed.), (Studies in Applied Philosophy, Epistemology and Rational Ethics, 5), Berlin, Heidelberg: Springer Berlin Heidelberg, 251–264. doi:10.1007/978-3-642-31674-6_19
  • –––, 2019, “There Is Plenty of Time at the Bottom: The Economics, Risk and Ethics of Time Compression”, Foresight , 21(1): 84–99. doi:10.1108/FS-04-2018-0044
  • Santoni de Sio, Filippo and Jeroen van den Hoven, 2018, “Meaningful Human Control over Autonomous Systems: A Philosophical Account”, Frontiers in Robotics and AI , 5(February): 15. doi:10.3389/frobt.2018.00015
  • Schneier, Bruce, 2015, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World , New York: W. W. Norton.
  • Searle, John R., 1980, “Minds, Brains, and Programs”, Behavioral and Brain Sciences , 3(3): 417–424. doi:10.1017/S0140525X00005756
  • Selbst, Andrew D., Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi, 2019, “Fairness and Abstraction in Sociotechnical Systems”, in Proceedings of the Conference on Fairness, Accountability, and Transparency—FAT* ’19 , Atlanta, GA: ACM Press, 59–68. doi:10.1145/3287560.3287598
  • Sennett, Richard, 2018, Building and Dwelling: Ethics for the City , London: Allen Lane.
  • Shanahan, Murray, 2015, The Technological Singularity , Cambridge, MA: MIT Press.
  • Sharkey, Amanda, 2019, “Autonomous Weapons Systems, Killer Robots and Human Dignity”, Ethics and Information Technology , 21(2): 75–87. doi:10.1007/s10676-018-9494-0
  • Sharkey, Amanda and Noel Sharkey, 2011, “The Rights and Wrongs of Robot Care”, in Robot Ethics: The Ethical and Social Implications of Robotics , Patrick Lin, Keith Abney and George Bekey (eds.), Cambridge, MA: MIT Press, 267–282.
  • Shoham, Yoav, Raymond Perrault, Erik Brynjolfsson, Jack Clark, James Manyika, Juan Carlos Niebles, … Zoe Bauer, 2018, “The AI Index 2018 Annual Report”, 17 December 2018, Stanford, CA: AI Index Steering Committee, Human-Centered AI Initiative, Stanford University. [ Shoham et al. 2018 available online ]
  • SIENNA, 2019, “Deliverable Report D4.4: Ethical Issues in Artificial Intelligence and Robotics”, June 2019, published by the SIENNA project (Stakeholder-informed ethics for new technologies with high socio-economic and human rights impact), University of Twente, pp. 1–103. [ SIENNA 2019 available online ]
  • Silver, David, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis, 2018, “A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go through Self-Play”, Science , 362(6419): 1140–1144. doi:10.1126/science.aar6404
  • Simon, Herbert A. and Allen Newell, 1958, “Heuristic Problem Solving: The Next Advance in Operations Research”, Operations Research , 6(1): 1–10. doi:10.1287/opre.6.1.1
  • Simpson, Thomas W. and Vincent C. Müller, 2016, “Just War and Robots’ Killings”, The Philosophical Quarterly , 66(263): 302–322. doi:10.1093/pq/pqv075
  • Smolan, Sandy (director), 2016, “The Human Face of Big Data”, PBS Documentary, 24 February 2016, 56 mins.
  • Sparrow, Robert, 2007, “Killer Robots”, Journal of Applied Philosophy , 24(1): 62–77. doi:10.1111/j.1468-5930.2007.00346.x
  • –––, 2016, “Robots in Aged Care: A Dystopian Future?”, AI & Society , 31(4): 445–454. doi:10.1007/s00146-015-0625-4
  • Stahl, Bernd Carsten, Job Timmermans, and Brent Daniel Mittelstadt, 2016, “The Ethics of Computing: A Survey of the Computing-Oriented Literature”, ACM Computing Surveys , 48(4): art. 55. doi:10.1145/2871196
  • Stahl, Bernd Carsten and David Wright, 2018, “Ethics and Privacy in AI and Big Data: Implementing Responsible Research and Innovation”, IEEE Security Privacy , 16(3): 26–33.
  • Stone, Christopher D., 1972, “Should Trees Have Standing? Toward Legal Rights for Natural Objects”, Southern California Law Review, 45: 450–501.
  • Stone, Peter, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, Ece Kamar, Sarit Kraus, Kevin Leyton-Brown, David Parkes, William Press, AnnaLee Saxenian, Julie Shah, Milind Tambe, and Astro Teller, 2016, “Artificial Intelligence and Life in 2030”, One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel, Stanford University, Stanford, CA, September 2016. [ Stone et al. 2016 available online ]
  • Strawson, Galen, 1998, “Free Will”, in Routledge Encyclopedia of Philosophy , Taylor & Francis. doi:10.4324/9780415249126-V014-1
  • Sullins, John P., 2012, “Robots, Love, and Sex: The Ethics of Building a Love Machine”, IEEE Transactions on Affective Computing , 3(4): 398–409. doi:10.1109/T-AFFC.2012.31
  • Susser, Daniel, Beate Roessler, and Helen Nissenbaum, 2019, “Technology, Autonomy, and Manipulation”, Internet Policy Review , 8(2): 30 June 2019. [ Susser, Roessler, and Nissenbaum 2019 available online ]
  • Taddeo, Mariarosaria and Luciano Floridi, 2018, “How AI Can Be a Force for Good”, Science , 361(6404): 751–752. doi:10.1126/science.aat5991
  • Taylor, Linnet and Nadezhda Purtova, 2019, “What Is Responsible and Sustainable Data Science?”, Big Data & Society, 6(2): art. 205395171985811. doi:10.1177/2053951719858114
  • Taylor, Steve, et al., 2018, “Responsible AI – Key Themes, Concerns & Recommendations for European Research and Innovation: Summary of Consultation with Multidisciplinary Experts”, June. doi:10.5281/zenodo.1303252 [ Taylor, et al. 2018 available online ]
  • Tegmark, Max, 2017, Life 3.0: Being Human in the Age of Artificial Intelligence , New York: Knopf.
  • Thaler, Richard H and Sunstein, Cass, 2008, Nudge: Improving decisions about health, wealth and happiness , New York: Penguin.
  • Thompson, Nicholas and Ian Bremmer, 2018, “The AI Cold War That Threatens Us All”, Wired , 23 November 2018. [ Thompson and Bremmer 2018 available online ]
  • Thomson, Judith Jarvis, 1976, “Killing, Letting Die, and the Trolley Problem”, Monist , 59(2): 204–217. doi:10.5840/monist197659224
  • Torrance, Steve, 2011, “Machine Ethics and the Idea of a More-Than-Human Moral World”, in Anderson and Anderson 2011: 115–137. doi:10.1017/CBO9780511978036.011
  • Trump, Donald J, 2019, “Executive Order on Maintaining American Leadership in Artificial Intelligence”, 11 February 2019. [ Trump 2019 available online ]
  • Turner, Jacob, 2019, Robot Rules: Regulating Artificial Intelligence , Berlin: Springer. doi:10.1007/978-3-319-96235-1
  • Tzafestas, Spyros G., 2016, Roboethics: A Navigating Overview , (Intelligent Systems, Control and Automation: Science and Engineering 79), Cham: Springer International Publishing. doi:10.1007/978-3-319-21714-7
  • Vallor, Shannon, 2017, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780190498511.001.0001
  • Van Lent, Michael, William Fisher, and Michael Mancuso, 2004, “An Explainable Artificial Intelligence System for Small-Unit Tactical Behavior”, in Proceedings of the 16th Conference on Innovative Applications of Artifical Intelligence, (IAAI’04) , San Jose, CA: AAAI Press, 900–907.
  • van Wynsberghe, Aimee, 2016, Healthcare Robots: Ethics, Design and Implementation , London: Routledge. doi:10.4324/9781315586397
  • van Wynsberghe, Aimee and Scott Robbins, 2019, “Critiquing the Reasons for Making Artificial Moral Agents”, Science and Engineering Ethics , 25(3): 719–735. doi:10.1007/s11948-018-0030-8
  • Vanderelst, Dieter and Alan Winfield, 2018, “The Dark Side of Ethical Robots”, in Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society , New Orleans, LA: ACM, 317–322. doi:10.1145/3278721.3278726
  • Veale, Michael and Reuben Binns, 2017, “Fairer Machine Learning in the Real World: Mitigating Discrimination without Collecting Sensitive Data”, Big Data & Society , 4(2): art. 205395171774353. doi:10.1177/2053951717743530
  • Véliz, Carissa, 2019, “Three Things Digital Ethics Can Learn from Medical Ethics”, Nature Electronics , 2(8): 316–318. doi:10.1038/s41928-019-0294-2
  • Verbeek, Peter-Paul, 2011, Moralizing Technology: Understanding and Designing the Morality of Things , Chicago: University of Chicago Press.
  • Wachter, Sandra and Brent Daniel Mittelstadt, 2019, “A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI”, Columbia Business Law Review , 2019(2): 494–620.
  • Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi, 2017, “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation”, International Data Privacy Law , 7(2): 76–99. doi:10.1093/idpl/ipx005
  • Wachter, Sandra, Brent Mittelstadt, and Chris Russell, 2018, “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR”, Harvard Journal of Law & Technology , 31(2): 842–887. doi:10.2139/ssrn.3063289
  • Wallach, Wendell and Peter M. Asaro (eds.), 2017, Machine Ethics and Robot Ethics , London: Routledge.
  • Walsh, Toby, 2018, Machines That Think: The Future of Artificial Intelligence , Amherst, MA: Prometheus Books.
  • Westlake, Stian (ed.), 2014, Our Work Here Is Done: Visions of a Robot Economy , London: Nesta. [ Westlake 2014 available online ]
  • Whittaker, Meredith, Kate Crawford, Roel Dobbe, Genevieve Fried, Elizabeth Kaziunas, Varoon Mathur, … Jason Schultz, 2018, “AI Now Report 2018”, New York: AI Now Institute, New York University. [ Whittaker et al. 2018 available online ]
  • Whittlestone, Jess, Rune Nyrup, Anna Alexandrova, Kanta Dihal, and Stephen Cave, 2019, “Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: A Roadmap for Research”, Cambridge: Nuffield Foundation, University of Cambridge. [ Whittlestone 2019 available online ]
  • Winfield, Alan, Katina Michael, Jeremy Pitt, and Vanessa Evers (eds.), 2019, Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems , special issue of Proceedings of the IEEE , 107(3): 501–632.
  • Woollard, Fiona and Frances Howard-Snyder, 2016, “Doing vs. Allowing Harm”, Stanford Encyclopedia of Philosophy (Winter 2016 edition), Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/win2016/entries/doing-allowing/ >
  • Woolley, Samuel C. and Philip N. Howard (eds.), 2017, Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media , Oxford: Oxford University Press. doi:10.1093/oso/9780190931407.001.0001
  • Yampolskiy, Roman V. (ed.), 2018, Artificial Intelligence Safety and Security , Boca Raton, FL: Chapman and Hall/CRC. doi:10.1201/9781351251389
  • Yeung, Karen and Martin Lodge (eds.), 2019, Algorithmic Regulation , Oxford: Oxford University Press. doi:10.1093/oso/9780198838494.001.0001
  • Zayed, Yago and Philip Loft, 2019, “Agriculture: Historical Statistics”, House of Commons Briefing Paper , 3339(25 June 2019): 1-19. [ Zayed and Loft 2019 available online ]
  • Zerilli, John, Alistair Knott, James Maclaurin, and Colin Gavaghan, 2019, “Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?”, Philosophy & Technology , 32(4): 661–683. doi:10.1007/s13347-018-0330-6
  • Zuboff, Shoshana, 2019, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power , New York: Public Affairs.
How to cite this entry . Preview the PDF version of this entry at the Friends of the SEP Society . Look up topics and thinkers related to this entry at the Internet Philosophy Ontology Project (InPhO). Enhanced bibliography for this entry at PhilPapers , with links to its database.


Related Entries

computing: and moral responsibility | ethics: internet research | ethics: search engines and | information technology: and moral values | information technology: and privacy | manipulation, ethics of | social networking and ethics

Acknowledgments

Early drafts of this article were discussed with colleagues at the IDEA Centre of the University of Leeds, some friends, and my PhD students Michael Cannon, Zach Gudmunsen, Gabriela Arriagada-Bruneau and Charlotte Stix. Later drafts were made publicly available on the Internet and publicised via Twitter and e-mail to all (then) cited authors that I could locate. These later drafts were presented to audiences at the INBOTS Project Meeting (Reykjavik 2019), the Computer Science Department Colloquium (Leeds 2019), the European Robotics Forum (Bucharest 2019), the AI Lunch and the Philosophy & Ethics group (Eindhoven 2019)—many thanks for their comments.

I am grateful for detailed written comments by John Danaher, Martin Gibert, Elizabeth O’Neill, Sven Nyholm, Etienne B. Roesch, Emma Ruttkamp-Bloem, Tom Powers, Steve Taylor, and Alan Winfield. I am grateful for further useful comments by Colin Allen, Susan Anderson, Christof Wolf-Brenner, Rafael Capurro, Mark Coeckelbergh, Yazmin Morlet Corti, Erez Firt, Vasilis Galanos, Anne Gerdes, Olle Häggström, Geoff Keeling, Karabo Maiyane, Brent Mittelstadt, Britt Östlund, Steve Petersen, Brian Pickering, Zoë Porter, Amanda Sharkey, Melissa Terras, Stuart Russell, Jan F Veneman, Jeffrey White, and Xinyi Wu.

Parts of the work on this article have been supported by the European Commission under the INBOTS project (H2020 grant no. 780073).

Copyright © 2020 by Vincent C. Müller <vincent.c.mueller@fau.de>



Ethics of Artificial Intelligence


Global Forum on the Ethics of AI 2024

The 2nd Global Forum on the Ethics of AI: Changing the Landscape of AI Governance took place at the Brdo Congress Centre in Kranj, Slovenia, on 5 and 6 February 2024.

Getting AI governance right is one of the most consequential challenges of our time, calling for mutual learning based on the lessons and good practices emerging from the different jurisdictions around the world.

This Forum brought together the experiences and expertise of countries at different levels of technological and policy development, for a focused exchange to learn from each other, and for a dialogue with the private sector, academia and a wider civil society.

With its unique mandate, UNESCO has for decades led the international effort to ensure that science and technology develop with strong ethical guardrails.

Be it on genetic research, climate change, or scientific research more broadly, UNESCO has delivered global standards to maximize the benefits of scientific discoveries while minimizing their downside risks, ensuring they contribute to a more inclusive, sustainable, and peaceful world. It has also identified frontier challenges in areas such as the ethics of neurotechnology, climate engineering, and the internet of things.


The rapid rise in artificial intelligence (AI) has created many opportunities globally, from facilitating healthcare diagnoses to enabling human connections through social media and creating labour efficiencies through automated tasks.

However, these rapid changes also raise profound ethical concerns. These arise from the potential AI systems have to embed biases, contribute to climate degradation, threaten human rights and more. Such risks associated with AI have already begun to compound on top of existing inequalities, resulting in further harm to already marginalised groups.

Artificial intelligence plays a role in billions of people’s lives

In no other field is the ethical compass more relevant than in artificial intelligence. These general-purpose technologies are reshaping the way we work, interact, and live. The world is set to change at a pace not seen since the deployment of the printing press six centuries ago. AI technology brings major benefits in many areas, but without ethical guardrails it risks reproducing real-world biases and discrimination, fueling divisions and threatening fundamental human rights and freedoms.

Gabriela Ramos, UNESCO Assistant Director-General for Social and Human Sciences

Recommendation on the Ethics of Artificial Intelligence

UNESCO produced the first-ever global standard on AI ethics, the ‘Recommendation on the Ethics of Artificial Intelligence’, in November 2021. This framework was adopted by all 193 Member States. The protection of human rights and dignity is the cornerstone of the Recommendation, based on the advancement of fundamental principles such as transparency and fairness, always remembering the importance of human oversight of AI systems. However, what makes the Recommendation exceptionally applicable are its extensive Policy Action Areas, which allow policymakers to translate the core values and principles into action with respect to data governance, environment and ecosystems, gender, education and research, and health and social wellbeing, among many other spheres.

Four core values

  • Respect, protection and promotion of human rights and fundamental freedoms and human dignity
  • Living in peaceful, just, and interconnected societies
  • Ensuring diversity and inclusiveness
  • Environment and ecosystem flourishing

A dynamic understanding of AI

The Recommendation interprets AI broadly as systems with the ability to process data in a way which resembles intelligent behaviour.

This is crucial as the rapid pace of technological change would quickly render any fixed, narrow definition outdated, and make future-proof policies infeasible.

A human rights approach to AI

The Recommendation sets out ten core principles that give effect to this human rights approach:

  • Proportionality and do no harm: the use of AI systems must not go beyond what is necessary to achieve a legitimate aim, and risk assessment should be used to prevent harms which may result from such uses.
  • Safety and security: unwanted harms (safety risks) as well as vulnerabilities to attack (security risks) should be avoided and addressed by AI actors.
  • Right to privacy and data protection: privacy must be protected and promoted throughout the AI lifecycle, and adequate data protection frameworks should be established.
  • Multi-stakeholder and adaptive governance: international law and national sovereignty must be respected in the use of data, and the participation of diverse stakeholders is necessary for inclusive approaches to AI governance.
  • Responsibility and accountability: AI systems should be auditable and traceable, with oversight, impact assessment, audit, and due diligence mechanisms in place to avoid conflicts with human rights norms and threats to environmental wellbeing.
  • Transparency and explainability: the ethical deployment of AI systems depends on their transparency and explainability, at a level appropriate to the context, since there may be tensions between these and other principles such as privacy, safety, and security.
  • Human oversight and determination: Member States should ensure that AI systems do not displace ultimate human responsibility and accountability.
  • Sustainability: AI technologies should be assessed against their impacts on ‘sustainability’, understood as a set of constantly evolving goals including those set out in the UN’s Sustainable Development Goals.
  • Awareness and literacy: public understanding of AI and data should be promoted through open and accessible education, civic engagement, digital skills and AI ethics training, and media and information literacy.
  • Fairness and non-discrimination: AI actors should promote social justice, fairness, and non-discrimination while taking an inclusive approach to ensure AI’s benefits are accessible to all.

Actionable policies

The key policy areas identify clear arenas where Member States can make strides towards responsible development of AI.

While values and principles are crucial to establishing a basis for any ethical AI framework, recent movements in AI ethics have emphasised the need to move beyond high-level principles and toward practical strategies.

The Recommendation does just this by setting out eleven key areas for policy actions.


Implementing the Recommendation

The Readiness Assessment Methodology (RAM) is designed to help assess whether Member States are prepared to effectively implement the Recommendation. It helps them identify their status of preparedness and provides a basis for UNESCO to custom-tailor its capacity-building support.

The Ethical Impact Assessment (EIA) is a structured process which helps AI project teams, in collaboration with the affected communities, to identify and assess the impacts an AI system may have. It allows teams to reflect on the system’s potential impact and to identify needed harm prevention actions.

Women4Ethical AI expert platform to advance gender equality

UNESCO’s Women4Ethical AI is a new collaborative platform to support governments’ and companies’ efforts to ensure that women are represented equally in both the design and deployment of AI. The platform’s members will also contribute to the advancement of all the ethical provisions in the Recommendation on the Ethics of AI.

The platform unites 17 leading female experts from academia, civil society, the private sector and regulatory bodies, from around the world. They will share research and contribute to a repository of good practices. The platform will drive progress on non-discriminatory algorithms and data sources, and incentivize girls, women and under-represented groups to participate in AI.


Business Council for Ethics of AI

The Business Council for Ethics of AI is a collaborative initiative between UNESCO and companies operating in Latin America that are involved in the development or use of artificial intelligence (AI) in various sectors.

The Council serves as a platform for companies to come together, exchange experiences, and promote ethical practices within the AI industry. By working closely with UNESCO, it aims to ensure that AI is developed and utilized in a manner that respects human rights and upholds ethical standards.

Currently co-chaired by Microsoft and Telefonica, the Council is committed to strengthening technical capacities in ethics and AI, designing and implementing the Ethical Impact Assessment tool mandated by the Recommendation on the Ethics of AI, and contributing to the development of intelligent regional regulations. Through these efforts, it strives to create a competitive environment that benefits all stakeholders and promotes the responsible and ethical use of AI.


Examples of ethical dilemmas

  • Gender bias in artificial intelligence, originating from stereotypical representations deeply rooted in our societies.
  • The growing use of AI in judicial systems around the world, which raises ever more ethical questions to explore.
  • The use of AI in culture, which raises interesting ethical reflections: for instance, what happens when AI has the capacity to create works of art itself?
  • Autonomous cars: vehicles capable of sensing their environment and moving with little or no human involvement.




Artificial Intelligence, Humanistic Ethics


Ethics is concerned with what it is to live a flourishing life and what it is we morally owe to others. The optimizing mindset prevalent among computer scientists and economists, among other powerful actors, has led to an approach focused on maximizing the fulfilment of human preferences, an approach that has acquired considerable influence in the ethics of AI. But this preference-based utilitarianism is open to serious objections. This essay sketches an alternative, “humanistic” ethics for AI that is sensitive to aspects of human engagement with the ethical often missed by the dominant approach. Three elements of this humanistic approach are outlined: its commitment to a plurality of values, its stress on the importance of the procedures we adopt, not just the outcomes they yield, and the centrality it accords to individual and collective participation in our understanding of human well-being and morality. The essay concludes with thoughts on how the prospect of artificial general intelligence bears on this humanistic outlook.

John Tasioulas is Professor of Ethics and Legal Philosophy at the Faculty of Philosophy and Director of the Institute for Ethics in AI at the University of Oxford. He is the editor of The Cambridge Companion to the Philosophy of Law (2020) and The Philosophy of International Law (with Samantha Besson, 2010).

Ethics is, first and foremost, a domain of ordinary human thought, not a specialist academic discipline. It presupposes the existence of human choices that can be appraised by reference to a distinctive range of values.

The delimitation of this range, among other values such as aesthetic or religious values, is philosophically controversial. But on a fairly standard reading, two very general, interlocking questions lie at the heart of ethics: What is it to live a good or flourishing life? And what is it that we owe to others, notably fellow human beings, but also nonhuman animals or even inanimate nature? The first question brings us into the territory of individual well-being; the second into that of morality, especially the obligations we owe to others and the rights they hold against us. Philosophers expound theories of well-being and morality and their interrelations, but all of us, in living our lives, constantly make choices that reflect answers to these questions, however inchoate or unconscious they may be.

Engagement with ethics is inescapable in decision-making about artificial intelligence. 1 The choices we make regarding the development and deployment of AI-based technologies are ultimately intelligible only in terms of the fallible pursuit of ethical values such as the acquisition of knowledge and control or the promotion of health, justice, and security. Moreover, all forms of “regulation” that might be proposed for AI, whether voluntary self-regulation in deciding whether to use a social robot as a caregiver, or the social and legal norms that should govern the manufacturing and use of such robots, ultimately implicate choices that reflect judgments about ethical values and their prioritization.

A clear-eyed appreciation of the pervasive significance of ethics for AI is sometimes obscured by an odd contraction that the idea of ethics is liable to undergo in this domain. So, for example, Kate Crawford, author and cofounder of the AI Now Institute, urges us to “focus less on ethics and more on power” because “AI is invariably designed to amplify and reproduce the forms of power it has been deployed to optimize.” 2 But what would the recommended focus on power entail? For Crawford, it means interrogating the institutional power structures in which AI is embedded by reference to ideas of equality, justice, and democracy. But the irony is that these three ideas are either themselves core ethical values or, in the case of democracy, need to be explicated and defended in terms of such values.

Nonetheless, Crawford’s injunction usefully prompts reflection on the various ways the idea of ethics has been unduly diminished in recent discussions about AI, no doubt partly a result of the prominent role of big tech players in shaping the field of “AI ethics” to limit the threat it poses to their commercial ambitions. Consider three ways the diminishment of ethics is typically effected.

Content. The content of ethical standards is often interpreted as exclusively a matter of fairness, which is primarily taken to be a relational concern with how some people are treated compared with others. Illustrations of AI-based technology that raise fairness concerns include facial recognition technology that systematically disadvantages darker-skinned people or automated resume screening tools that are biased against women because the respective algorithms were trained on data sets that are demographically unrepresentative or that reflect historically sexist hiring practices. “Algorithmic unfairness” is a vitally important matter, especially when it exacerbates the condition of members of already unjustly disadvantaged groups. But this should not obscure the fact that ethics also encompasses nonrelational concerns such as whether, for example, facial recognition technology should be deployed at all in light of privacy rights or whether it is disrespectful to job applicants in general to rank their resumes by means of an automated process. 3
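To make the relational notion of “algorithmic unfairness” concrete, a minimal audit sketch follows. Everything in it is invented for illustration: the group labels, the screening decisions, and the choice of the US “four-fifths rule” as the flagging threshold; it is one possible check, not a claim about how any particular system is or should be evaluated.

```python
# Minimal sketch of a demographic-parity audit for a hypothetical
# resume-screening model. All group labels and decisions are invented.

def selection_rate(decisions):
    """Fraction of applicants the model advances (1 = advance, 0 = reject)."""
    return sum(decisions) / len(decisions)

decisions_by_group = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],
}

rates = {group: selection_rate(d) for group, d in decisions_by_group.items()}
# The US "four-fifths rule" treats a ratio of selection rates below 0.8
# as prima facie evidence of adverse impact.
ratio = min(rates.values()) / max(rates.values())

for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.2f}")
print(f"ratio = {ratio:.2f} -> {'adverse impact flagged' if ratio < 0.8 else 'ok'}")
```

Note that passing such a relational check would not settle the nonrelational question raised above: whether ranking applicants by an automated process is respectful of them at all.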

Scope of application. Ethics is sometimes construed as narrowly individualistic in focus: that is, as being concerned with guiding individuals’ personal conduct, rather than also bearing on the larger institutional and social settings in which their decisions are made and enacted. 4 In reality, however, almost all key ethical values, such as justice and charity, have profound implications for institutions and patterns of social organization. Plato’s Republic, after all, sought to understand justice in the individual soul by considering it “writ large” in the polity. Admittedly, some philosophers treat political justice as radically discontinuous from justice in the soul. The most influential proponent of the discontinuity thesis in recent decades is John Rawls, who contends that pervasive reasonable disagreement on ethical truth disqualifies beliefs about such truths from figuring as premises in political justification. 5 This is a sophisticated controversy, which cannot be addressed here, save to note that this kind of move will always face the response that the phenomenon of reasonable disagreement, and the need for respect that it highlights, is itself yet a further topic for ethical appraisal, and hence cannot displace the need to take a stand on ethical truth. 6

Means of enforcement. There is a widespread assumption that ethics relates to norms that are not properly enforceable–for example, through legal mechanisms–but instead are backed up primarily by the sanction of individual conscience and informal public opinion. But the general restriction of ethics to “soft” forms of regulation in this way is arbitrary. The very question whether to enact a law or other regulatory norm and, if so, how best to implement and enforce it, is one on which ethical values such as justice and personal autonomy have a significant bearing. Indeed, there is a long-standing tradition, cutting across ideological boundaries, that identifies justice precisely with those moral rights that should in principle receive social and legal enforcement.

In short, we should reclaim a broad and foundational understanding of ethics in the AI domain, one that potentially encompasses deliberation about any form of regulation, from personal self-regulation to legal regulation, and which potentially has radical implications for the reordering of social power.

Given its inescapability, ethical thought is hardly absent from current discussions around AI. However, these discussions often suffer from a tendency either to leave inexplicit their operative ethical assumptions or else to rely upon them uncritically even when they are made explicit. We can go even further and identify a dominant, or at least a prominent, approach to ethics that is widely congenial to powerful scientific, economic, and governmental actors in the AI field.

Like anyone else, AI scientists are prone to the illusion that the intellectual methods at their disposal have a far greater problem-solving purchase than is warranted. This is a phenomenon that Plato diagnosed in relation to the technical experts of his day, artisans such as cobblers and shipbuilders. The mindset of scientists working in AI tends to be data-driven: it places great emphasis on optimization as the core operation of rationality, and it prioritizes formal and quantitative techniques. Given this intellectual orientation, it is little wonder that an eminent AI scientist like Stuart Russell, in his recent book Human Compatible: AI and the Problem of Control, is drawn to preference-based utilitarianism as his overarching ethical standpoint. 7

Russell’s book takes up the familiar worry that AI–in the form of an artificial general intelligence (AGI) that surpasses human intellectual capabilities–will eventually spiral out of control, unconstrained by human morality, with disastrous consequences. But what is human morality? Russell appears to take it as axiomatic that the morally right thing to do is whatever will maximize the fulfilment of human preferences. 8 In terms of our two core concerns of ethics, the fulfilment of human preferences is taken to encompass well-being, and the fundamental moral injunction is to maximize overall well-being thus conceived. So ethics is reduced to an exercise in prediction and optimization: which act or policy is likely to lead to the optimal fulfilment of human preferences?

But this view of ethics is notoriously open to multiple serious–I believe, fatal–objections. Its concern with aggregating preferences threatens to override important rights that erect strong barriers to what can be done to individuals. Why not feed a few Christians to the lions if their preferences to stay alive are outweighed by the preferences of a sufficiently large number of blood-thirsty Roman spectators? And that is even before we observe that many preferences are infected with racism, sexism, or other prejudices; that they may reflect false or incomplete information; or that they may be psychological adaptations to oppressive circumstances. Ethics operates in the crucial space of reflection on what our preferences should be, a vital consideration that makes a belated appearance in the last few pages of Russell’s book. 9 It cannot take those preferences as ultimate determinants of value.
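The aggregation objection lends itself to a toy calculation. The preference-satisfaction scores below are entirely invented; the point is only that a sum-over-preferences criterion ranks the rights-violating option higher once the spectators are numerous enough.

```python
# Toy illustration of the aggregation objection to preference utilitarianism.
# All preference-satisfaction scores are invented for exposition.

def total_satisfaction(outcome):
    return sum(outcome.values())

outcomes = {
    "spare the Christians": {
        "3 Christians, +100 each": 3 * 100,
        "50,000 spectators, 0 each": 0,
    },
    "feed them to the lions": {
        "3 Christians, -1000 each": 3 * -1000,
        "50,000 spectators, +1 each": 50000 * 1,
    },
}

for name, outcome in outcomes.items():
    print(f"{name}: {total_satisfaction(outcome)}")
# The sum ranks "feed them to the lions" higher (47000 vs. 300), which is
# exactly the result a rights-respecting ethics must be able to rule out.
```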

There are moral philosophers who defend versions of preference utilitarianism that are patched up to address these difficulties. But the idea that preference utilitarianism is a highly contestable moral theory does not really register in Russell’s book, which conforms with my suspicion that it approximates to a default position among leading actors in the AI field.

The same broad approach is heavily influential among leading economic and governmental actors. This is perhaps less obvious, since the doctrine is standardly modified by positing wealth-maximization as the more readily measurable proxy for preference satisfaction. Hence the tendency of GDP to hijack governmental decision-making around economically consequential technologies, with the resultant sidelining of values that are not readily catered to by the market, such as public goods like access to justice and health care or the preservation of a sustainable environment. Hence, also, the legitimation of profit maximization by corporations as the most effective institutional means to societal wealth maximization. Of course, many who adopt such an approach have never heard of utilitarianism or, if they have, may explicitly reject it. But one revealing indication of the dominance of an ideology is the way that people who disavow it can nonetheless remain in its intellectual grip.

A key priority for those working in the field of AI ethics is to elaborate an ethical approach that transcends the limitations and distortions of this dominant ethical paradigm. In my view, such a humanistic ethics–one that encompasses aspects of human engagement with the ethical that are not adequately captured by the methods of natural science and mainstream economics, but that are the traditional concern of the arts and humanities–would possess at least the following three, interrelated features (the three Ps).

Pluralism. The approach would emphasize the plurality of values, both in terms of the elements of human well-being (such as achievement, understanding, friendship, and play) and the core components of morality (such as justice, fairness, charity, and the common good). This pluralism of values abandons the comforting notion that the key to the ethics of AI will be found in a single master concept, such as trustworthiness or human rights. How could human rights be the comprehensive ethical framework for AI when, for example, AI has a serious environmental impact beyond its bearing on anthropocentric concerns? And what of those important values to which we do not have a right, such as mercy or solidarity? Nor can trustworthiness be the master value. Being parasitic on compliance with more basic values, trustworthiness cannot itself displace those values.

Beyond the pluralism of values is their incommensurability. We are often confronted with practical problems that implicate an array of values that pull in different directions. In such cases, although some decisions will be superior to others, there may be no single decision that is optimal: in choosing an occupation, teaching may be a better field for me than surgery, but we cannot assume there is a single profession that is, all things considered, best, rather than a limited array of eligible alternatives that are no worse than the others. This incommensurability calls into question the availability of some optimizing function that determines the single option that is, all things considered, most beneficial or morally right, the quest for which has animated a lot of utilitarian thinking in ethics.

It is worth observing that confidence about the deployment of AI to minimize “noise” in human judgment–the unwanted variability, for example, in hiring decisions by employers or sentencing by judges–displayed in the important new work of Daniel Kahneman, Olivier Sibony, and Cass Sunstein, sometimes involves an implicit reductionism about the values at stake that downplays the scope for incommensurability. 10 For example, the authors treat bail decisions fundamentally as predictions of the likelihood that the accused will abscond or reoffend, sidelining considerations such as the gravity of the offense with which they have been charged or the impact of detention on the accused’s dependents. 11 But such decisions typically address multivalue problems, and there is no guarantee that there is a single best way of reconciling the competing values in each case. This means not only that algorithms will need to be more sophisticated to balance multiple salient values in reaching a correct decision, but that much of what looks like noise may be acceptable variability of judgments within the range of rationally eligible alternatives.
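A small sketch can show why incommensurability frustrates optimization. With invented scores on three of the values a bail decision implicates, no option dominates the rest: each is better on some criterion, so there is no unique “best” answer for an optimizing procedure to find.

```python
# Minimal sketch of value incommensurability in a multivalue decision.
# Options and scores are invented; higher is better on every criterion.

options = {
    "detain":           {"public_safety": 0.9, "liberty": 0.1, "family_impact": 0.2},
    "release_on_bail":  {"public_safety": 0.5, "liberty": 0.7, "family_impact": 0.6},
    "release_with_tag": {"public_safety": 0.7, "liberty": 0.5, "family_impact": 0.7},
}

def dominates(a, b):
    """a dominates b if it is at least as good everywhere and better somewhere."""
    return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

undominated = [
    name for name, scores in options.items()
    if not any(dominates(other_scores, scores)
               for other_name, other_scores in options.items()
               if other_name != name)
]
print(undominated)  # all three options survive: no unique optimum exists
```

On this picture, divergent choices by different judges among the surviving options may be acceptable variability rather than noise.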

Procedures, not only outcomes. Of course, we want AI to achieve valuable social goals, such as improving access to education, justice, and health care in an effective and efficient way. The COVID-19 pandemic has cast into sharp relief the question of what outcomes AI is being used to pursue: for example, is it enabling physicians to diagnose and triage patients faster and more effectively, or is it primarily engaged in profit-making activities, like vacuuming up people’s attention online, that have little or no redeeming social value? 12 The second feature of a humanistic approach to ethics emphasizes that what we rightly care about is not just the value of the outcomes that AI applications can be used to deliver, but the procedures through which it does so.

If, for example, important practical decisions exhibit the phenomenon of incommensurability, then we may have good reason to ensure that they are assigned to humans, rather than to automated processes, to preserve a valuable form of autonomy for humans as they express and develop their tastes and characters in choosing from divergent, but rationally eligible, pathways in life. Of course, there is the further question of how to balance such autonomy against demands for consistency (or “noiselessness”), especially in public decision-making. Should we tolerate significant divergence in sentencing across judges, or should the demands for “horizontal equity” prevail, ensuring that like cases are treated alike? Proponents of the latter view often recommend the use of algorithms to guide or replace human decision-making. This itself is a difficult question of striking a balance between competing considerations in our legal culture, with no ex ante guarantee that one solution will emerge as superior overall.

But the case for according ultimate decision-making authority to humans can also be made even if we suppose that a single correct answer is always available. Take, for example, the use of AI in cancer diagnosis and its use in the sentencing of criminals. Intuitively, the two cases seem to exhibit a difference in the comparative valuing of the soundness of the eventual decision or diagnosis and the process through which it is reached. When it comes to cancer, generating the most accurate diagnosis may be all-important, it being largely a matter of indifference whether this is generated by an AI diagnostic tool or the exercise of human judgment. In criminal sentencing, however, being sentenced by a robot judge–even if the sentence is likely to be less biased or less “noisy” than one rendered by a human counterpart–appears to sacrifice important values, such as the ideal of reciprocity among fellow citizens that is central to the rule of law. 13

This last point is familiar, of course, in relation to such process values as transparency, procedural fairness, and explainability. Even if the procedure followed by the judicial algorithm can be made transparent, there is a serious question–given, for example, the vast discrepancy between machine learning and ordinary human reasoning processes–whether it affords an explanation of the right kind, an explanation that a criminal defendant can grasp as offering intelligible reasons for the decision to imprison him. But the point goes beyond the important issue of explainability. How does it feel to contemplate the prospect of a world in which judgments that bear on our deepest interests and moral standing have, at least as their proximate decision-makers, autonomous machines that do not have a share in human solidarity and cannot be held accountable for their decisions in the way that a human judge can?
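For contrast, consider the kind of explanation current techniques can readily supply: a counterfactual of the form “had this feature been different, the decision would have flipped.” The linear risk score below, with its features, weights, and threshold, is entirely hypothetical; whether such a statement amounts to intelligible reasons of the right kind is precisely what remains in question.

```python
# A toy "counterfactual explanation" for a hypothetical linear risk score.
# Features, weights, and the threshold are invented for illustration.

weights = {"prior_convictions": 0.6, "age_over_40": -0.2, "employed": -0.3}
THRESHOLD = 1.0

def risk(features):
    return sum(weights[k] * features[k] for k in weights)

defendant = {"prior_convictions": 3, "age_over_40": 0, "employed": 1}
assert risk(defendant) >= THRESHOLD  # flagged as high risk (score 1.5)

# Smallest reduction in prior convictions that would flip the decision.
for fewer in range(1, defendant["prior_convictions"] + 1):
    alt = dict(defendant, prior_convictions=defendant["prior_convictions"] - fewer)
    if risk(alt) < THRESHOLD:
        print(f"Counterfactual: with {fewer} fewer prior conviction(s), "
              f"the score would be {risk(alt):.2f}, below the threshold.")
        break
```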

Participation. The third feature relates to the importance of participation in the process of decision-making with respect to AI, whether as an individual or as part of a group of self-governing democratic citizens. At the level of individual well-being, this takes the focus away from theories that equate human well-being with an end state such as pleasure or preference-satisfaction. These end states could in principle be brought about through a process in which the person who enjoys them is passive: for example, by the government putting a happiness drug into the water supply. Contrary to this passive view, it would stress that successful engagement with valuable pursuits is at the core of human well-being. 14

If the conception of human well-being that emerges is deeply participatory, then this bears heavily on the delegation of decision-making power to AI applications. One of the most important sites of participation in constructing a good life, in modern societies, is the workplace. 15 According to a McKinsey study, around 30 percent of all work activities in 60 percent of occupations could one day be automated. 16 Can we accept the idea that the large-scale elimination of job opportunities can be compensated for by the benefits that automation makes available? The answer partly depends on whether the participatory self-fulfillment of work can, any time soon and for the vast majority of those rendered jobless, be feasibly replaced by other activities, such as art, friendship, play, or religion. If it cannot, addressing the problem with a mechanism like a universal basic income, which involves the passive receipt of a benefit, will hardly suffice. Instead, much greater attention will need to be paid to how AI can be integrated into productive practices in ways that do not so much replace human work as enhance its quality, making it more productive, fulfilling, and challenging, while also less dangerous, repetitive, and lacking in meaning. 17

Similarly, we value citizen participation as part of collective democratic self-government. And we do so not just because of the instrumental benefits of democratic decision-making in generating superior decisions by harnessing cognitive diversity, but also because of the way in which participatory decision-making processes affirm the status of citizens as free and equal members of the community. 18 This is an essential plank in the defense against the tendency of AI technology to be co-opted by technocratic modes of decision-making that erode democratic values by seeking to convert matters of political judgment into questions of technical expertise. 19

At present, much of the culture in which AI is embedded is distinctly technocratic, with decisions about the “values” encoded in AI applications being taken by corporate, bureaucratic, or political elites, often largely insulated from meaningful democratic control. Indeed, a small group of tech giants accounts for the lion’s share of investment in AI research, dictating its overall direction and setting the prevalent moral tone. Meanwhile, AI-enabled social media risks eroding the quality of public deliberation that a genuine democracy needs, such as by promoting the spread of disinformation, aggravating political polarization, or using bots in astroturfing campaigns. Similarly, the use of AI as part of corporate and governmental efforts to monitor and manipulate individuals undermines privacy and threatens the exercise of basic liberties, effectively discouraging citizen participation in democratic politics. 20

As with workplace participation, we need to reflect seriously on how AI and digital technology more generally can enable, rather than hinder and distort, democratic participation. 21 This is especially urgent given the declining faith in democracy across the globe in recent years, including in long-established democracies such as the United Kingdom and the United States. Indeed, the disillusionment is such that, in a recent poll, 51 percent of Europeans favored replacing at least some of their parliamentarians with AI. 22 There is still time to salvage the democratic ideal that an essential part of civic dignity is participation in self-government.

An additional complexity here concerns how these two modes of participation– in the workplace and in politics–are connected. It is obvious that active participation in the two domains is mutually reinforcing in important ways. Thus, powers of reason and sociability that are developed in a participatory workplace, and that foster a sense of equal civic dignity, can be brought to bear in democratic deliberation about political questions, just as democratic control over the impact of new technologies on the workplace can help preserve and enhance its vital role as a site of genuine human fulfilment. 23

I have mainly focused on narrow AI, conceived as AI-powered technology that can perform limited tasks (such as facial recognition or medical diagnosis) that typically require intelligence when performed by humans. This is partly because serious doubt surrounds the likelihood of artificial general intelligence emerging within any realistically foreseeable time frame, partly because the operative notion of “intelligence” in discussions of AGI is problematic, 24 and partly because a focus on AGI often distracts us from the more immediate questions of narrow AI. 25

With these caveats in place, however, one can admit that thought experiments about AGI can help bring into focus two questions fundamental to any humanistic ethic: What is the ultimate source of human dignity, understood as the inherent value attaching to each and every human being? And how can we relate human dignity to the value inhering in nonhuman beings? Toward the end of Kazuo Ishiguro’s novel Klara and the Sun, the eponymous narrator, an “Artificial Friend,” speculates that human dignity–the “human heart” that “makes each of us special and individual”–has its source not in something within us, but in the love of others for us. 26 But a threat of circularity looms for this boot-strapping humanism, for how can the love of others endow us with value unless those others already have value? Moreover, if the source of human dignity is contingent on the varying attitudes of others, how can it apply equally to every human being? Are the unloved bereft of the “human heart”?

Questions like these explain the tendency among some to interpret the inherent value of each individual human being as arising from the special love that a supremely good transcendent being–God, represented by the sun, in Ishiguro’s novel, which the solar-powered Klara treats as a kind of life-sustaining divinity–has for each human being in equal measure. 27 But invoking a divine being to underwrite human dignity leads us into obvious metaphysical and ethical quagmires, which in turn raise the difficult question of whether the inherent worth of human beings can be explicated within a broadly naturalistic framework. 28 Supposing that it can be, this is compatible with a distinct kind of dignity also inhering in other beings, such as nonhuman animals.

We are still struggling to integrate the value of nonhuman animals within our ethical thought. Doing so requires overcoming the baleful influence of longstanding practices in which animals are treated either as possessing merely instrumental value in relation to human ends, or at best intrinsic value that is conditional on their role in human life. The dream of AGI, should it ever become a reality, will generate an even more acute version of this problem, given the prominent role that our rational capacities play in elevating human dignity above the dignity of other beings known to us. 29 For the foreseeable future, however, our focus must be on properly integrating AI technology into a culture that respects and advances the dignity and well-being of humans, and the nonhuman animals with whom we share the world, rather than on the highly speculative endeavor of integrating the dignity of intelligent machines into our existing ethical framework.

author’s note

This essay began life as a blog post for the Ada Lovelace Institute, “The Role of the Arts and Humanities in Thinking about Artificial Intelligence (AI).” I am grateful to the Institute for permitting me to reuse some material here. I have benefited from comments on previous drafts from Dominic Burbidge, Hélène Landemore, Seth Lazar, Ted Lechterman, James Manyika, Adrian Vermeule, Carina Prunkl, Divya Siddarth, Carissa Véliz, Glen Weyl, Mike Wooldridge, and John Zerilli. I regret that I have not been able to pursue many of their very stimulating comments within the confines of this short essay.

© 2022 by John Tasioulas. Published under a  CC BY-NC 4.0  license.

  • 1 I shall assume a very broad understanding of AI as essentially the use of machines to perform tasks that characteristically require intelligence when performed by humans. My focus will primarily be on “narrow” AI applications, such as facial recognition, surveillance, and risk-assessment, rather than artificial general intelligence, though I say something about the latter toward the very end.
  • 2 Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (New Haven and London: Yale University Press, 2021), 224.
  • 3 Some of these issues are compellingly developed by Joshua Cohen in “Don’t Shoot the Algorithm” (unpublished manuscript).
  • 4 The “effective altruism” movement, which has significant allegiance among tech elites, is arguably one expression of this depoliticized and, in its effect, ultimately conservative view of ethics. See Amia Srinivasan, “Stop the Robot Apocalypse,” London Review of Books 37 (18) (2015).
  • 5 John Rawls, Political Liberalism (New York: Columbia University Press, 1993). For an attempt to pursue the radical discontinuity thesis in relation to AI, see Iason Gabriel, “Artificial Intelligence, Values, and Alignment,” Minds and Machines 30 (3) (2020): 411.
  • 6 See John Tasioulas, “The Liberalism of Love,” in Political Emotions: Towards a Decent Public Sphere , ed. Thom Brooks (London: Palgrave Macmillan, forthcoming 2022).
  • 7 I am here identifying an influential mode of thought that Russell’s book epitomizes. It should be emphasized, however, that there have always been scientists in this domain who have urged the importance of a multidisciplinary approach with an important humanistic dimension, such as in Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation (San Francisco: Freeman & Co, 1976); and, more recently, in Nigel Shadbolt and Roger Hampson, The Digital Ape: How to Live (in Peace) with Smart Machines (London: Scribe, 2018).
  • 8 Stuart Russell, Human Compatible: AI and the Problem of Control (London: Allen Lane, 2019), 178.
  • 9 Ibid., 255.
  • 10 Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein, Noise: A Flaw in Human Judgment (London: William Collins, 2021).
  • 11 Ibid., chap. 10.
  • 12 For a discussion of studies showing that AI predictive tools made no real difference in diagnosing and triaging COVID-19 patients, and in some cases, may have been harmful, see William Douglas Heaven, “ Hundreds of AI Tools Have Been Built to Catch Covid, None of Them Helped ,” MIT Technology Review , July 30, 2021.
  • 13 John Tasioulas, “The Rule of Law,” in The Cambridge Companion to the Philosophy of Law , ed. John Tasioulas (Cambridge: Cambridge University Press, 2020), 131–133.
  • 14 Joseph Raz, The Morality of Freedom (Oxford: Oxford University Press, 1986), chap. 12.
  • 15 Anca Gheaus and Lisa Herzog, “The Goods of Work (Other Than Money!),” Journal of Social Philosophy 47 (1) (2016): 70–89.
  • 16 James Manyika and Kevin Sneader, “ AI, Automation, and the Future of Work: Ten Things to Solve For ,” McKinsey Global Institute Executive Briefing, June 1, 2018.
  • 17 For an exploration of this theme, see Frank Pasquale, New Laws of Robotics: Defending Human Expertise in the Age of AI (Cambridge, Mass.: Harvard University Press, 2020).
  • 18 For a powerful recent defense of democracy along these lines, see Josiah Ober, Demopolis: Democracy Before Liberalism in Theory and Practice (Cambridge: Cambridge University Press, 2017).
  • 19 For a candid statement, by a Silicon Valley billionaire, of the need to harness the libertarian promise of technology to an antidemocratic ethos, see Peter Thiel, “ The Education of a Libertarian ,” Cato Unbound: A Journal of Debate , April 13, 2009.
  • 20 For a helpfully wide-ranging discussion of some of these issues, see Joshua Cohen and Archon Fung, “Democracy and the Digital Public Sphere,” in Digital Technology and Democratic Theory , ed. Lucy Bernholz, Hélène Landemore, and Rob Reich (Chicago: University of Chicago Press, 2021).
  • 21 For some positive thinking along these lines, see Hélène Landemore, “Open Democracy and Digital Technologies,” in Digital Technology and Democratic Theory . For useful discussions of digitally enhanced democracy in pioneering countries such as Estonia and Taiwan, see Hans Kundnani, The Future of Democracy in Europe: Technology and the Evolution of Representation (London: Chatham House, 2020); and Divya Siddarth, Taiwan: Grassroots Digital Democracy That Works (New York: Radical Exchange, 2021).
  • 22 “ More Than Half of Europeans Want to Replace Lawmakers with AI, Study Finds ,” CNBC, May 27, 2021. For an interesting discussion of “algocracy” (“rule by algorithm”), see Ted Lechterman, “ Will AI Make Democracy Obsolete? ” Public Ethics , August 4, 2021. It is worth noting that proposals for algocracy often assume that the point of politics is to aggregate human preferences, in line with the preference-based utilitarianism discussed above.
  • 23 For a perceptive discussion of the way AI threatens to disrupt the economic underpinnings of democracy, see Daron Acemoglu, Redesigning AI: Work, Democracy, and Justice in the Age of Automation (Boston: Boston Review Forum, 2021).
  • 24 Like many others, Stuart Russell adopts an impoverished conception of intelligence as competence in means-ends reasoning according to which the choice of ends is made extraneous to the operations of intelligence. On this view, a machine that annihilated humanity in order to maximize the number of paper clips in existence can qualify as superintelligent. Ibid., 167. For a wide-ranging and perceptive discussion of problems with the idea of “intelligence” invoked in discussions of AGI, see Divya Siddarth, Daron Acemoglu, Danielle Allen, et al., “How AI Fails Us” (Cambridge, Mass.: Edmond J. Safra Center for Ethics, 2021).
  • 25 Some of these concerns are discussed in John Tasioulas, “ First Steps Towards an Ethics of Robots and Artificial Intelligence ,” Journal of Practical Ethics 7 (1) (2019): 61–95.
  • 26 Kazuo Ishiguro, Klara and the Sun (London: Faber, 2021), 218, 306.
  • 27 Nicholas Wolterstorff, Justice: Rights and Wrongs (Princeton, N.J.: Princeton University Press, 2008), 352–361.
  • 28 David Wiggins, Solidarity and the Root of the Ethical (Lawrence: The Lindley Lecture, University of Kansas, 2008).
  • 29 See, for example, John Tasioulas, “Human Dignity and the Foundations of Human Rights” in Understanding Human Dignity , ed. Christopher McCrudden (Oxford: Oxford University Press, 2013), 293–314; and Jeremy Waldron, One Another’s Equals: The Basis of Human Equality (Cambridge, Mass.: Harvard University Press, 2017).

Ethics of Artificial Intelligence


A Short Introduction to the Ethics of Artificial Intelligence

  • Published: September 2020

This introduction outlines in section I.1 some of the key issues in the study of the ethics of artificial intelligence (AI) and proposes ways to take these discussions further. Section I.2 discusses key concepts in AI, machine learning, and deep learning. Section I.3 considers ethical issues that arise because current machine learning is data hungry; is vulnerable to bad data and bad algorithms; is a black box that has problems with interpretability, explainability, and trust; and lacks a moral sense. Section I.4 discusses ethical issues that arise because current machine learning systems may be working too well and human beings can be vulnerable in the presence of these intelligent systems. Section I.5 examines ethical issues arising out of the long-term impact of superintelligence such as how the values of a superintelligent AI can be aligned with human values. Section I.6 presents an overview of the essays in this volume.


Ethics of Artificial Intelligence

S. Matthew Liao (ed.). Oxford University Press, 2020. DOI: 10.1093/oso/9780190905033.001.0001

Research output: Book/Report › Book

Featuring seventeen original essays on the ethics of artificial intelligence (AI) by today’s most prominent AI scientists and academic philosophers, this volume represents state-of-the-art thinking in this fast-growing field. It highlights central themes in AI and morality such as how to build ethics into AI, how to address mass unemployment caused by automation, how to avoid designing AI systems that perpetuate existing biases, and how to determine whether an AI is conscious. As AI technologies progress, questions about the ethics of AI, in both the near future and the long term, become more pressing than ever. Should a self-driving car prioritize the lives of the passengers over those of pedestrians? Should we as a society develop autonomous weapon systems capable of identifying and attacking a target without human intervention? What happens when AIs become smarter and more capable than us? Could they have greater than human-level moral status? Can we prevent superintelligent AIs from harming us or causing our extinction? At a critical time in this fast-moving debate, thirty leading academics and researchers at the forefront of AI technology development have come together to explore these existential questions.

  • AI moral status
  • Algorithmic biases
  • Automation and jobs
  • Autonomous weapon systems
  • Global existential risk
  • Machine ethics
  • Self-driving cars
  • Superintelligence



Ethics of Artificial Intelligence in Academic Research and Education

  • Living reference work entry
  • First Online: 20 July 2023


  • Nader Ghotbi (ORCID: 0000-0002-6735-8627)


Artificial intelligence (AI) includes an array of rapidly advancing technologies whose astonishing abilities may outperform the human brain in certain functions, owing to the huge memory size, speed, and multilayer data processing of AI systems. There are concerns over the potential for unethical uses of AI by various entities, including the academic community itself, some of whose members are engaged in AI research and development. It has therefore been emphasized that AI research and development should be directed specifically at its potential benefits and avoid areas of possible misuse or abuse. However, the possible applications of a basic research project are sometimes not apparent until long after the project's outcomes are known, and a beneficial AI tool may later be misused in a different capacity. This chapter first provides a concise review of the main ethical issues of AI research and development as they relate to academic integrity, and then discusses some better-known applications of AI with possible misuse in academic tasks, such as generating texts for submission as one's own work and the use of AI in proctoring online examinations. There are further concerns over the intrusion of AI into the privacy of students, and over AI bias and discrimination, especially against minorities. Other ethical issues may arise as new AI applications are developed. The purpose of the chapter is to engage readers in a dynamic discussion and debate over the research and development of powerful systems that can have a major influence on their academic careers.

  • Academic integrity
  • Academic research
  • Academic writing
  • Artificial intelligence (AI)
  • AI discrimination
  • AI exam proctoring
  • AI benefits
  • Intellectual copyright
  • Students’ privacy
  • Unethical AI



Acknowledgments

This work has received support from “Emotional AI in Cities: Cross Cultural Lessons from UK and Japan on Designing for an Ethical Life” funded by JST-UKRI Joint Call on Artificial Intelligence and Society (2019).

Author information

Nader Ghotbi (corresponding author), Ritsumeikan Asia Pacific University, Beppu, Japan

Editor information

Sarah Elaine Eaton, Werklund School of Education, University of Calgary, Calgary, AB, Canada

Section editor information

Loreta Tauginienė, Department of Management and Organisation, Hanken School of Economics, Helsinki, Finland; Office of the Ombudsperson for Academic Ethics and Procedures, Vilnius, Lithuania

© 2023 Springer Nature Singapore Pte Ltd.

Cite this entry

Ghotbi, N. (2023). Ethics of Artificial Intelligence in Academic Research and Education. In: Eaton, S.E. (ed.) Handbook of Academic Integrity. Springer, Singapore. https://doi.org/10.1007/978-981-287-079-7_143-1 (Received: 30 September 2022; accepted: 30 November 2022; published: 20 July 2023.)


Artificial Intelligence and Ethics: Sixteen Challenges and Opportunities


Artificial intelligence offers great opportunity, but it also brings potential hazards—this article presents 16 of them.

[Image: a white Google AI car parked against a backdrop of blue sky with white clouds. Photo: Tony Avelar/Associated Press]

Brian Patrick Green is the director of Technology Ethics at the Markkula Center for Applied Ethics. This article is an update of an earlier article  [1]. Views are his own.

Artificial intelligence and machine learning technologies are rapidly transforming society and will continue to do so in the coming decades. This social transformation will have deep ethical impact, with these powerful new technologies both improving and disrupting human lives. AI, as the externalization of human intelligence, offers us in amplified form everything that humanity already is, both good and evil. Much is at stake. At this crossroads in history we should think very carefully about how to make this transition, or we risk empowering the grimmer side of our nature, rather than the brighter.

Why is AI ethics becoming a problem now? Machine learning (ML) through neural networks is advancing rapidly for three reasons: 1) a huge increase in the size of data sets; 2) a huge increase in computing power; 3) huge improvements in ML algorithms and more human talent to write them. All three of these trends concentrate power, and “With great power comes great responsibility” [2].

As an institution, the Markkula Center for Applied Ethics has been thinking deeply about the ethics of AI for several years. This article began as presentations delivered at academic conferences and has since expanded into an academic paper (links below) and, most recently, a presentation of “Artificial Intelligence and Ethics: Sixteen Issues” that I have given in the U.S. and internationally [3]. In that spirit, I offer this current list:

1. Technical Safety

The first question for any technology is whether it works as intended. Will AI systems work as they are promised or will they fail? If and when they fail, what will be the results of those failures? And if we are dependent upon them, will we be able to survive without them?

For example, several people have died in semi-autonomous car accidents because the vehicles encountered situations in which they failed to make safe decisions. While writing very detailed contracts that limit liability might legally reduce a manufacturer’s responsibility, from a moral perspective, not only does responsibility still rest with the company, but the contract itself can be seen as an unethical scheme to avoid legitimate responsibility.

The question of technical safety and failure is separate from the question of how a properly functioning technology might be used for good or for evil (questions 3 and 4, below). This question is merely one of function, yet it is the foundation upon which all the rest of the analysis must build.

2. Transparency and Privacy

Once we have determined that the technology functions adequately, can we actually understand how it works and properly gather data on its functioning? Ethical analysis always depends on getting the facts first—only then can evaluation begin.

It turns out that with some machine learning techniques, such as deep learning in neural networks, it can be difficult or impossible to understand why the machine makes the choices that it does. In other cases, the machine may be able to explain something, but the explanation is too complex for humans to understand.

For example, in 2014 a computer proved a mathematical theorem, using a proof that was, at the time at least, longer than the entire Wikipedia encyclopedia [4]. Explanations of this sort might be true explanations, but humans will never know for sure.
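To see why this opacity is structural rather than incidental, consider a minimal sketch (plain NumPy; the weights here are random stand-ins for learned ones, and the whole example is illustrative, not any real system): a network’s “decision” is just arithmetic over matrices of learned numbers, and inspecting those numbers yields no human-readable rule.

```python
import numpy as np

# A tiny "trained" network: two layers of weights. The numbers below are
# random stand-ins for weights a real training run would produce; actual
# models have millions or billions of them.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input -> hidden layer weights
W2 = rng.normal(size=(8, 1))   # hidden -> output layer weights

def predict(x):
    hidden = np.maximum(0, x @ W1)            # ReLU activation
    return 1 / (1 + np.exp(-(hidden @ W2)))   # sigmoid score in [0, 1]

x = np.array([0.2, -1.3, 0.7, 0.05])  # one case: a loan file, an image, etc.
print(predict(x))  # a single score, with no stated reason attached

# "Why that score?" The only honest answer is the arithmetic above:
# there is no explicit rule anywhere, only the learned weight matrices.
```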

As an additional point, in general, the more powerful someone or something is, the more transparent it ought to be, while the weaker someone is, the more right to privacy he or she should have. Therefore the idea that powerful AIs might be intrinsically opaque is disconcerting.

3. Beneficial Use & Capacity for Good

The main purpose of AI, like that of every other technology, is to help people lead longer, more flourishing, more fulfilling lives. This is good, and therefore insofar as AI helps people in these ways, we can be glad and appreciate the benefits it gives to us.

Additional intelligence will likely provide improvements in nearly every field of human endeavor, including, for example, archaeology, biomedical research, communication, data analytics, education, energy efficiency, environmental protection, farming, finance, legal services, medical diagnostics, resource management, space exploration, transportation, waste management, and so on.

As just one concrete example of a benefit from AI, some farm equipment now has computer systems capable of visually identifying weeds and spraying them with tiny targeted doses of herbicide. This not only protects the environment by reducing the use of chemicals on crops, but it also protects human health by reducing exposure to these chemicals.
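As a sketch of how such a system might be organized (the Plant record and spray_pass function below are hypothetical stand-ins, not any manufacturer’s actual interface), the ethically relevant feature sits in the loop itself: chemical is dispensed only on confident weed detections, rather than over the whole field.

```python
from dataclasses import dataclass

@dataclass
class Plant:
    x_mm: int          # position along the row, under the spray boom
    is_weed: bool      # the vision classifier's verdict (assumed given)
    confidence: float  # the classifier's confidence in that verdict

def spray_pass(plants, threshold=0.9, dose_ml=0.5):
    """Dispense a tiny targeted dose only on confident weed detections."""
    total_ml = 0.0
    for p in plants:
        if p.is_weed and p.confidence >= threshold:
            # A real machine would command the nozzle nearest p.x_mm here.
            total_ml += dose_ml
    return total_ml

# One metre of row: three crop plants, two weeds.
row = [Plant(120, False, 0.98), Plant(340, True, 0.95),
       Plant(520, False, 0.99), Plant(700, True, 0.97),
       Plant(910, False, 0.96)]
print(spray_pass(row), "ml of herbicide, versus blanket-spraying the row")
```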

4. Malicious Use & Capacity for Evil

A perfectly well-functioning technology, such as a nuclear weapon, can, when put to its intended use, cause immense evil. Artificial intelligence, like human intelligence, will be used maliciously; of that there is no doubt.

For example, AI-powered surveillance is already widespread: in appropriate contexts (e.g., airport-security cameras), in arguably inappropriate ones (e.g., products with always-on microphones in our homes), and in conclusively inappropriate ones (e.g., products which help authoritarian regimes identify and oppress their citizens). Other nefarious examples include AI-assisted computer hacking and lethal autonomous weapons systems (LAWS), a.k.a. “killer robots.” Additional fears, of varying degrees of plausibility, include scenarios like those in the movies “2001: A Space Odyssey,” “Wargames,” and “Terminator.”

While movies and weapons technologies might seem to be extreme examples of how AI might empower evil, we should remember that competition and war are always primary drivers of technological advance, and that militaries and corporations are working on these technologies right now. History also shows that great evils are not always completely intended (e.g., stumbling into World War I and various nuclear close-calls in the Cold War), and so having destructive power, even if not intending to use it, still risks catastrophe. Because of this, forbidding, banning, and relinquishing certain types of technology would be the most prudent solution.

5. Bias in Data, Training Sets, etc.

One of the interesting things about neural networks, the current workhorses of artificial intelligence, is that they effectively merge a computer program with the data that is given to it. This has many benefits, but it also risks biasing the entire system in unexpected and potentially detrimental ways.

Already algorithmic bias has been discovered, for example, in areas ranging from criminal punishment to photograph captioning. These biases are more than just embarrassing to the corporations which produce these defective products; they have concrete negative and harmful effects on the people who are the victims of these biases, as well as reducing trust in corporations, government, and other institutions which might be using these biased products. Algorithmic bias is one of the major concerns in AI right now and will remain so in the future unless we endeavor to make our technological products better than we are. As one person said at the first meeting of the Partnership on AI, “We will reproduce all of our human faults in artificial form unless we strive right now to make sure that we don’t” [5].
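A toy simulation makes the “program merged with data” point concrete. All numbers below are synthetic and purely illustrative: two groups with identical skill distributions, but a hiring history biased against one of them; any model trained on those labels absorbs the bias as if it were fact.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic hiring history: groups A and B have identical skill
# distributions, but past (human) decisions rejected half of the
# qualified applicants from group B.
n = 1000
group = rng.integers(0, 2, n)       # 0 = A, 1 = B
skill = rng.uniform(0, 1, n)        # same distribution for both groups
past_hire = (skill > 0.5) & ~((group == 1) & (rng.uniform(0, 1, n) < 0.5))

# "Training" on this history means learning P(hire | group), which is
# what any model fed these labels will absorb, one way or another.
for g, name in [(0, "A"), (1, "B")]:
    print(f"group {name}: learned hire rate = {past_hire[group == g].mean():.2f}")
# Roughly 0.50 for A and 0.25 for B: the historical bias, now reproduced
# by an "objective" system, even though skill is identical by construction.
```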

6. Unemployment / Lack of Purpose & Meaning

Many people have already perceived that AI will be a threat to certain categories of jobs. Indeed, automation of industry has been a major contributing factor in job losses since the beginning of the industrial revolution. AI will simply extend this trend to more fields, including fields that have been traditionally thought of as being safer from automation, for example law, medicine, and education. It is not clear what new careers unemployed people ultimately will be able to transition into, although the more that labor has to do with caring for others, the more likely people will want to be dealing with other humans and not AIs.

Attached to the concern for employment is the concern for how humanity spends its time and what makes a life well-spent. What will millions of unemployed people do? What good purposes can they have? What can they contribute to the well-being of society? How will society prevent them from becoming disillusioned, bitter, and swept up in evil movements such as white supremacy and terrorism?

7. Growing Socio-Economic Inequality

Related to the unemployment problem is the question of how people will survive if unemployment rises to very high levels. Where will they get money to maintain themselves and their families? While prices may decrease due to lowered cost of production, those who control AI will also likely rake in much of the money that would have otherwise gone into the wages of the now-unemployed, and therefore economic inequality will increase. This will also affect international economic disparity, and therefore is likely a major threat to less-developed nations.

Some have suggested a universal basic income (UBI) to address the problem, but this will require a major restructuring of national economies. Various other solutions to this problem may be possible, but they all involve potentially major changes to human society and government. Ultimately this is a political problem, not a technical one, so this solution, like those to many of the problems described here, needs to be addressed at the political level.

8. Environmental Effects

Machine learning models require enormous amounts of energy to train, so much energy that the costs can run into the tens of millions of dollars or more. Needless to say, if this energy comes from fossil fuels, this has a large negative impact on climate change, not to mention being harmful at other points in the hydrocarbon supply chain.
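A back-of-envelope estimate shows the scale involved. Every input below is an assumption chosen only for illustration; no particular model is being described.

```python
# Rough training-energy estimate; every input is an illustrative assumption.
gpus = 1000            # accelerators running in parallel
watts_per_gpu = 400    # electrical draw per accelerator, in watts
days = 30              # wall-clock training time
pue = 1.5              # datacenter overhead factor (cooling, networking)

kwh = gpus * watts_per_gpu * 24 * days * pue / 1000
co2_kg = kwh * 0.4     # ~0.4 kg CO2 per kWh on a fossil-heavy grid (assumed)

print(f"{kwh:,.0f} kWh, ~{co2_kg / 1000:,.0f} tonnes CO2")
# -> 432,000 kWh, ~173 tonnes of CO2 for a single training run, before
#    counting hyperparameter sweeps, retraining, or inference at scale.
```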

Machine learning can also make electrical distribution and use much more efficient, and it can help solve problems in biodiversity, environmental research, resource management, and so on. AI is in some very basic ways a technology focused on efficiency, and energy efficiency is one way that its capabilities can be directed.

On balance, it looks like AI could be a net positive for the environment [6]—but only if it is actually directed towards that positive end, and not just towards consuming energy for other uses.

9. Automating Ethics

One strength of AI is that it can automate decision-making, thus lowering the burden on humans and speeding up, potentially greatly, some kinds of decision-making processes. However, this automation of decision-making presents huge problems for society, because if these automated decisions are good, society will benefit, but if they are bad, society will be harmed.

As AI agents are given more power to make decisions, they will need to have ethical standards of some sort encoded into them. There is simply no way around it. The ethical decision-making process might be as simple as following a program to fairly distribute a benefit, wherein the decision is made by humans and executed by algorithms, but it might also entail much more detailed ethical analysis, even if we humans would prefer that it did not. This is because AI will operate so much faster than humans can that, under some circumstances, humans will be left “out of the loop” of control due to human slowness. This already occurs with cyberattacks and high-frequency trading (both of which are filled with ethical questions that are typically ignored), and it will only get worse as AI expands its role in society.
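
The simple end of that spectrum can be made concrete. In the sketch below, the ethical decision (that a benefit should be divided in proportion to assessed need) is made by humans; the algorithm merely executes it. The recipient names and need scores are hypothetical.

```python
# A minimal sketch of a human-decided, machine-executed allocation rule.
# The fairness judgement (proportional-to-need) is chosen by people; the
# code only carries it out. Names and numbers are hypothetical.
def allocate_proportionally(budget, needs):
    """Split a budget across recipients in proportion to assessed need."""
    total_need = sum(needs.values())
    return {name: budget * need / total_need for name, need in needs.items()}

needs = {"clinic_A": 30, "clinic_B": 50, "clinic_C": 20}  # assessed need scores
print(allocate_proportionally(100_000, needs))
# {'clinic_A': 30000.0, 'clinic_B': 50000.0, 'clinic_C': 20000.0}
```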

Since AI can be so powerful, the ethical standards we give to it had better be good.

10. Moral Deskilling & Debility

If we turn over our decision-making capacities to machines, we will become less experienced at making decisions. This is a well-known phenomenon among airline pilots: the autopilot can handle nearly everything about flying an airplane, but pilots intentionally choose to control the aircraft manually at crucial times (e.g., take-off and landing) in order to maintain their piloting skills.

Because one of the uses of AI will be to assist or replace humans at making certain types of decisions (e.g., spelling, driving, stock trading), we should be aware that humans may become worse at these skills. In its most extreme form, if AI starts to make ethical and political decisions for us, we will become worse at ethics and politics. We may reduce or stunt our moral development precisely at the time when our power has become greatest and our decisions most important.

This means that the study of ethics and ethics training are now more important than ever. We should determine ways in which AI can actually enhance our ethical learning and training. And we should never allow ourselves to become deskilled and debilitated at ethics, because if we do, then when our technology finally presents us with hard choices to make and problems to solve (choices and problems that our ancestors, perhaps, would have been capable of handling), future humans may not be able to.

For more on deskilling, see this article [7] and Shannon Vallor’s original article on the topic [8].

11. AI Consciousness, Personhood, and “Robot Rights”

Some thinkers have wondered whether AIs might eventually become self-conscious, attain their own volition, or otherwise deserve recognition as persons like ourselves. Legally speaking, personhood has already been granted to corporations and (in some countries) to rivers, so consciousness is certainly not required before legal questions can arise.

Morally speaking, we can anticipate that technologists will attempt to make the most human-like AIs and robots possible, and perhaps someday they will be such good imitations that we will wonder if they might be conscious and deserve rights, and we might not be able to determine this conclusively. If future humans do conclude that AIs and robots might be worthy of moral status, then we ought to err on the side of caution and grant it.

In the midst of this uncertainty about the status of our creations, what we will know is that we humans have moral characters and that, to follow an inexact quote of Aristotle, “we become what we repeatedly do” [9]. So we ought not to treat AIs and robots badly, or we might be habituating ourselves towards having flawed characters, regardless of the moral status of the artificial beings we are interacting with. In other words, no matter the status of AIs and robots, for the sake of our own moral characters we ought to treat them well, or at least not abuse them.

12. AGI and Superintelligence

If or when AI reaches human levels of intelligence, doing everything that humans can do as well as the average human can, then it will be an Artificial General Intelligence, an AGI, and it will be the only other intelligence on Earth to exist at the human level.

If or when AGI exceeds human intelligence, it will become a superintelligence, an entity potentially vastly more clever and capable than we are: something humans have only ever related to in religions, myths, and stories.

Importantly, AI technology is improving exceedingly fast, and global corporations and governments are in a race to claim the powers of AI as their own. Equally importantly, there is no reason why the improvement of AI would stop at AGI. AI is scalable and fast: unlike a human brain, an AI given more hardware will simply do more and more, faster and faster.

The advent of AGI or superintelligence will mark the dethroning of humanity as the most intelligent thing on Earth. We have never faced (in the material world) anything smarter than ourselves. Every time Homo sapiens encountered another intelligent human species, the other species either genetically merged with us (as Neanderthals did) or was driven extinct. As we encounter AGI and superintelligence we ought to keep this in mind; still, because AI is a tool, there may yet be ways to maintain an ethical balance between human and machine.

13. Dependency on AI

Humans depend on technology. We always have, ever since we have been “human”; indeed, our technological dependency is almost what defines us as a species. What used to be just rocks, sticks, and fur clothes has now become much more complex and fragile, however: losing electricity or cell connectivity can be a serious problem, psychologically or even medically (if there is an emergency). And there is no dependence like intelligence dependence.

Intelligence dependence is a form of dependence like that of a child on an adult. Much of the time children rely on adults to think for them, and in old age, as some people experience cognitive decline, the elderly rely on younger adults too. Now imagine that the middle-aged adults who look after both children and the elderly were themselves dependent on AI to guide them. There would be no human “adults” left, only “AI adults.” Humankind would have become a race of children to our AI caregivers.

This, of course, raises the question of what an infantilized human race would do if our AI parents ever malfunctioned. Without that AI, if we were dependent on it, we could become like lost children, not knowing how to take care of ourselves or our technological society. This “lostness” already happens when smartphone navigation apps malfunction (or the battery simply runs out), for example.

We are already well down the path to technological dependency. How can we prepare now so that we can avoid the dangers of specifically intelligence dependency on AI?

14. AI-powered Addiction

Smartphone app makers have turned addiction into a science, and AI-powered video games and apps can be addictive like drugs. AI can exploit numerous human desires and weaknesses including purpose-seeking, gambling, greed, libido, violence, and so on.

Addiction not only manipulates and controls us; it also prevents us from doing other more important things—educational, economic, and social. It enslaves us and wastes our time when we could be doing something worthwhile. With AI constantly learning more about us and working harder to keep us clicking and scrolling, what hope is there for us to escape its clutches? Or, rather, the clutches of the app makers who create these AIs to trap us—because it is not the AIs that choose to treat people this way, it is other people.

When I talk about this topic with any group of students, I discover that all of them are “addicted” to one app or another. It may not be a clinical addiction, but that is the way that the students define it, and they know they are being exploited and harmed. This is something that app makers need to stop doing: AI should not be designed to intentionally exploit vulnerabilities in human psychology.

15. Isolation and Loneliness

Society is in a crisis of loneliness. For example, recently a study found that “200,000 older people in the UK have not had a conversation with a friend or relative in more than a month” [10]. This is a sad state of affairs because loneliness can literally kill [11]. It is a public health nightmare, not to mention destructive of the very fabric of society: our human relationships. Technology has been implicated in so many negative social and psychological trends, including loneliness, isolation, depression, stress, and anxiety, that it is easy to forget that things could be different, and in fact were quite different only a few decades ago.

One might think that “social” media, smartphones, and AI could help, but in fact they are major causes of loneliness since people are facing screens instead of each other. What does help are strong in-person relationships, precisely the relationships that are being pushed out by addictive (often AI-powered) technology.

Loneliness can be helped by dropping devices and building quality in-person relationships. In other words: caring.

This may not be easy work and certainly at the societal level it may be very difficult to resist the trends we have already followed so far. But resist we should, because a better, more humane world is possible. Technology does not have to make the world a less personal and caring place—it could do the opposite, if we wanted it to.

16. Effects on the Human Spirit

All of the above areas of interest will affect how humans perceive themselves, relate to one another, and live their lives. But there is a more existential question too: if the purpose and identity of humanity have something to do with our intelligence (as several prominent ancient Greek philosophers believed), then by externalizing our intelligence and improving it beyond human intelligence, are we making ourselves second-class beings to our own creations?

This deeper question about artificial intelligence cuts to the core of our humanity, into areas traditionally reserved for philosophy, spirituality, and religion. What will happen to the human spirit if or when we are bested by our own creations in everything that we do? Will human life lose meaning? Will we come to a new discovery of our identity beyond our intelligence?

Perhaps intelligence is not really as important to our identity as we might think it is, and perhaps turning over intelligence to machines will help us to realize that. If we instead find our humanity not in our brains, but in our hearts, perhaps we will come to recognize that caring, compassion, kindness, and love are ultimately what make us human and what make life worth living. Perhaps by taking away some of the tedium of life, AI can help us to fulfill this vision of a more humane world.

There are more issues in the ethics of AI; here I have just attempted to point out some major ones. Much more time could be spent on topics like AI-powered surveillance, the role of AI in promoting misinformation and disinformation, the role of AI in politics and international relations, the governance of AI, and so on.

New technologies are always created for the sake of something good—and AI offers us amazing new abilities to help people and make the world a better place. But in order to make the world a better place we need to choose to do that, in accord with ethics.

Through the concerted effort of many individuals and organizations, we can hope that AI technology will help us to make a better world.

This article builds upon the following previous works: “AI: Ethical Challenges and a Fast Approaching Future” (Oct. 2017) [12], “Some Ethical and Theological Reflections on Artificial Intelligence” (Nov. 2017) [13], “Artificial Intelligence and Ethics: Ten Areas of Interest” (Nov. 2017) [1], “AI and Ethics” (Mar. 2018) [14], “Ethical Reflections on Artificial Intelligence” (Aug. 2018) [15], and several presentations of “Artificial Intelligence and Ethics: Sixteen Issues” (2019–20) [3].

[1] Brian Patrick Green, “Artificial Intelligence and Ethics: Ten areas of interest,” Markkula Center for Applied Ethics website , Nov 21, 2017.

[2] Originally paraphrased in Stan Lee and Steve Ditko, “Spider-Man,” Amazing Fantasy vol. 1, #15 (August 1962), exact phrase from Uncle Ben in J. Michael Straczynski, Amazing Spider-Man vol. 2, #38 (February 2002). For more information: https://en.wikipedia.org/wiki/With_great_power_comes_great_responsibility

[3] Brian Patrick Green, “Artificial Intelligence and Ethics: Sixteen Issues,” various locations and dates: Los Angeles, Mexico City, San Francisco, Santa Clara University (2019-2020).

[4] Bob Yirka, “Computer generated math proof is too large for humans to check,” Phys.org , February 19, 2014, available at: https://phys.org/news/2014-02-math-proof-large-humans.html

[5] The Partnership on AI to Benefit People and Society, Inaugural Meeting, Berlin, Germany, October 23-24, 2017.

[6] Leila Scola, “AI and the Ethics of Energy Efficiency,” Markkula Center for Applied Ethics website , May 26, 2020, available at: https://www.scu.edu/environmental-ethics/resources/ai-and-the-ethics-of-energy-efficiency/

[7] Brian Patrick Green, “Artificial Intelligence, Decision-Making, and Moral Deskilling,” Markkula Center for Applied Ethics website , Mar 15, 2019, available at: https://www.scu.edu/ethics/focus-areas/technology-ethics/resources/artificial-intelligence-decision-making-and-moral-deskilling/

[8] Shannon Vallor, “Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character,” Philosophy & Technology 28 (2015): 107–124, available at: https://link.springer.com/article/10.1007/s13347-014-0156-9

[9] Brad Sylvester, “Fact Check: Did Aristotle Say, ‘We Are What We Repeatedly Do’?” Check Your Fact website , June 26, 2019, available at: https://checkyourfact.com/2019/06/26/fact-check-aristotle-excellence-habit-repeatedly-do/

[10] Lee Mannion, “Britain appoints minister for loneliness amid growing isolation,” Reuters , January 17, 2018, available at: https://www.reuters.com/article/us-britain-politics-health/britain-appoints-minister-for-loneliness-amid-growing-isolation-idUSKBN1F61I6

[11] Julianne Holt-Lunstad, Timothy B. Smith, Mark Baker, Tyler Harris, and David Stephenson, “Loneliness and Social Isolation as Risk Factors for Mortality: A Meta-Analytic Review,” Perspectives on Psychological Science 10(2) (2015): 227–237, available at: https://journals.sagepub.com/doi/full/10.1177/1745691614568352

[12] Markkula Center for Applied Ethics Staff, “AI: Ethical Challenges and a Fast Approaching Future: A panel discussion on artificial intelligence,” with Maya Ackerman, Sanjiv Das, Brian Green, and Irina Raicu, Santa Clara University, California, October 24, 2017, posted to the All About Ethics Blog , Oct 31, 2017, video available at: https://www.scu.edu/ethics/all-about-ethics/ai-ethical-challenges-and-a-fast-approaching-future/

[13] Brian Patrick Green, “Some Ethical and Theological Reflections on Artificial Intelligence,” Pacific Coast Theological Society (PCTS) meeting, Graduate Theological Union, Berkeley, 3-4 November, 2017, available at: http://www.pcts.org/meetings/2017/PCTS2017Nov-Green-ReflectionsAI.pdf

[14] Brian Patrick Green, “AI and Ethics,” guest lecture in PACS003: What is an Ethical Life? , University of the Pacific, Stockton, March 21, 2018.

[15] Brian Patrick Green, “Ethical Reflections on Artificial Intelligence,” Scientia et Fides 6(2), 24 August 2018. Available at: http://apcz.umk.pl/czasopisma/index.php/SetF/article/view/SetF.2018.015/15729

Thank you to many people for all the helpful feedback which has helped me develop this list, including Maya Ackermann, Kirk Bresniker, Sanjiv Das, Kirk Hanson, Brian Klunk, Thane Kreiner, Angelus McNally, Irina Raicu, Leila Scola, Lili Tavlan, Shannon Vallor, the employees of several tech companies, the attendees of the PCTS Fall 2017 meeting, the attendees of the needed.education meetings, several anonymous reviewers, the professors and students of PACS003 at the University of the Pacific, the students of my ENGR 344: AI and Ethics course, as well as many more.


By Christina Pazzanese, Harvard Staff Writer

Ethical concerns mount as AI takes bigger decision-making role in more industries

Second in a four-part series that taps the expertise of the Harvard community to examine the promise and potential pitfalls of the rising age of artificial intelligence and machine learning, and how to humanize them.

For decades, artificial intelligence, or AI, was the engine of high-level STEM research. Most consumers became aware of the technology’s power and potential through internet platforms like Google and Facebook, and retailer Amazon. Today, AI is essential across a vast array of industries, including health care, banking, retail, and manufacturing.


But its game-changing promise to do things like improve efficiency, bring down costs, and accelerate research and development has been tempered of late with worries that these complex, opaque systems may do more societal harm than economic good. With virtually no U.S. government oversight, private companies use AI software to make determinations about health and medicine, employment, creditworthiness, and even criminal justice without having to answer for how they’re ensuring that programs aren’t encoded, consciously or unconsciously, with structural biases.

Its growing appeal and utility are undeniable. Worldwide business spending on AI is expected to hit $50 billion this year and $110 billion annually by 2024, even after the global economic slump caused by the COVID-19 pandemic, according to a forecast released in August by technology research firm IDC. Retail and banking industries spent the most this year, at more than $5 billion each. The company expects the media industry and federal and central governments will invest most heavily between 2018 and 2023 and predicts that AI will be “the disrupting influence changing entire industries over the next decade.”
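
Taken at face value, those two forecast figures imply roughly 22 percent compound annual growth; a quick calculation (treating “this year” as 2020, per the forecast’s date, and compounding over four years) makes that explicit.

```python
# Implied annual growth rate behind the IDC figures quoted above:
# $50B in 2020 rising to $110B in 2024 (four years of compounding).
growth = (110 / 50) ** (1 / 4) - 1
print(f"Implied compound annual growth: {growth:.1%}")  # about 21.8%
```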

“Virtually every big company now has multiple AI systems and counts the deployment of AI as integral to their strategy,” said Joseph Fuller, professor of management practice at Harvard Business School, who co-leads Managing the Future of Work, a research project that studies, in part, the development and implementation of AI, including machine learning, robotics, sensors, and industrial automation, in business and the work world.

Early on, it was popularly assumed that the future of AI would involve the automation of simple repetitive tasks requiring low-level decision-making. But AI has rapidly grown in sophistication, owing to more powerful computers and the compilation of huge data sets. One branch, machine learning, notable for its ability to sort and analyze massive amounts of data and to learn over time, has transformed countless fields, including education.

Firms now use AI to manage sourcing of materials and products from suppliers and to integrate vast troves of information to aid in strategic decision-making, and because of its capacity to process data so quickly, AI tools are helping to minimize time in the pricey trial-and-error of product development — a critical advance for an industry like pharmaceuticals, where it costs $1 billion to bring a new pill to market, Fuller said.

Health care experts see many possible uses for AI, including with billing and processing necessary paperwork. And medical professionals expect that the biggest, most immediate impact will be in analysis of data, imaging, and diagnosis. Imagine, they say, having the ability to bring all of the medical knowledge available on a disease to any given treatment decision.

In employment, AI software culls and processes resumes and analyzes job interviewees’ voices and facial expressions in hiring, and it is driving the growth of what’s known as “hybrid” jobs. Rather than replacing employees, AI takes on important technical tasks of their work, like routing for package-delivery trucks, which potentially frees workers to focus on other responsibilities, making them more productive and therefore more valuable to employers.

“It’s allowing them to do more stuff better, or to make fewer errors, or to capture their expertise and disseminate it more effectively in the organization,” said Fuller, who has studied the effects and attitudes of workers who have lost or are likeliest to lose their jobs to AI.


Though automation is here to stay, the elimination of entire job categories, like highway toll-takers who were replaced by sensors because of AI’s proliferation, is not likely, according to Fuller.

“What we’re going to see is jobs that require human interaction, empathy, that require applying judgment to what the machine is creating [will] have robustness,” he said.

While big business already has a huge head start, small businesses could also potentially be transformed by AI, says Karen Mills ’75, M.B.A. ’77, who ran the U.S. Small Business Administration from 2009 to 2013. With half the country employed by small businesses before the COVID-19 pandemic, that could have major implications for the national economy over the long haul.

Rather than hamper small businesses, the technology could give their owners detailed new insights into sales trends, cash flow, ordering, and other important financial information in real time so they can better understand how the business is doing and where problem areas might loom without having to hire anyone, become a financial expert, or spend hours laboring over the books every week, Mills said.

One area where AI could “completely change the game” is lending, where access to capital is difficult in part because banks often struggle to get an accurate picture of a small business’s viability and creditworthiness.

“It’s much harder to look inside a business operation and know what’s going on” than it is to assess an individual, she said.

Information opacity makes the lending process laborious and expensive for both would-be borrowers and lenders, and applications are designed to analyze larger companies or those who’ve already borrowed, a built-in disadvantage for certain types of businesses and for historically underserved borrowers, like women and minority business owners, said Mills, a senior fellow at HBS.

But with AI-powered software pulling information from a business’s bank account, taxes, and online bookkeeping records and comparing it with data from thousands of similar businesses, even small community banks will be able to make informed assessments in minutes, without the agony of paperwork and delays, and, like blind auditions for musicians, without fear that any inequity crept into the decision-making.

“All of that goes away,” she said.
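
A minimal sketch can convey the shape of that comparison, though it is not a description of any real underwriting system: score an applicant by where one cash-flow metric falls among similar firms. The metric, peer values, and applicant figure are all assumptions.

```python
# A minimal sketch of peer-group comparison in small-business lending.
# The metric, peer values, and applicant figure are fabricated for
# illustration, not drawn from any real lender.
peer_margins = [0.02, 0.04, 0.05, 0.07, 0.08, 0.10, 0.12, 0.15, 0.18, 0.22]

def percentile_rank(value, peers):
    """Fraction of peer businesses whose metric is below the applicant's."""
    return sum(p < value for p in peers) / len(peers)

applicant_margin = 0.11   # applicant's net cash-flow margin (assumed)
rank = percentile_rank(applicant_margin, peer_margins)
print(f"Applicant sits above {rank:.0%} of comparable businesses")  # 60%
```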

A veneer of objectivity

Not everyone sees blue skies on the horizon, however. Many worry whether the coming age of AI will bring new, faster, and frictionless ways to discriminate and divide at scale.

“Part of the appeal of algorithmic decision-making is that it seems to offer an objective way of overcoming human subjectivity, bias, and prejudice,” said political philosopher Michael Sandel, Anne T. and Robert M. Bass Professor of Government. “But we are discovering that many of the algorithms that decide who should get parole, for example, or who should be presented with employment opportunities or housing … replicate and embed the biases that already exist in our society.”


AI presents three major areas of ethical concern for society: privacy and surveillance, bias and discrimination, and perhaps the deepest, most difficult philosophical question of the era, the role of human judgment, said Sandel, who teaches a course in the moral, social, and political implications of new technologies.

“Debates about privacy safeguards and about how to overcome bias in algorithmic decision-making in sentencing, parole, and employment practices are by now familiar,” said Sandel, referring to conscious and unconscious prejudices of program developers and those built into datasets used to train the software. “But we’ve not yet wrapped our minds around the hardest question: Can smart machines outthink us, or are certain elements of human judgment indispensable in deciding some of the most important things in life?”

Panic over AI suddenly injecting bias into everyday life en masse is overstated, says Fuller. First, the business world and the workplace, rife with human decision-making, have always been riddled with “all sorts” of biases that prevent people from making deals or landing contracts and jobs.

When calibrated carefully and deployed thoughtfully, resume-screening software allows a wider pool of applicants to be considered than could be done otherwise, and should minimize the potential for favoritism that comes with human gatekeepers, Fuller said.

Sandel disagrees. “AI not only replicates human biases, it confers on these biases a kind of scientific credibility. It makes it seem that these predictions and judgments have an objective status,” he said.

In the world of lending, algorithm-driven decisions do have a potential “dark side,” Mills said. As machines learn from data sets they’re fed, chances are “pretty high” they may replicate many of the banking industry’s past failings that resulted in systematic disparate treatment of African Americans and other marginalized consumers.

“If we’re not thoughtful and careful, we’re going to end up with redlining again,” she said.
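
The mechanism behind that worry can be shown in miniature. In the fabricated example below, no protected attribute appears anywhere, yet a model that simply learns historical approval rates by ZIP-code group (a classic proxy) reproduces the old disparity.

```python
# A fabricated illustration of proxy discrimination: historical loan records
# as (zip_group, approved) pairs, where past decisions were skewed against
# zip group 1 independent of creditworthiness.
history = [(0, 1), (0, 1), (0, 1), (0, 0),
           (1, 0), (1, 0), (1, 0), (1, 1)]

def historical_approval_rate(zip_group):
    """Approval rate a naive model would learn for this ZIP group."""
    outcomes = [approved for z, approved in history if z == zip_group]
    return sum(outcomes) / len(outcomes)

# A model imitating these base rates inherits the skew even though no
# protected attribute appears among its inputs.
print(historical_approval_rate(0), historical_approval_rate(1))  # 0.75 0.25
```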

A highly regulated industry, banks are legally on the hook if the algorithms they use to evaluate loan applications end up inappropriately discriminating against classes of consumers, so those “at the top levels” in the field are “very focused” right now on this issue, said Mills, who closely studies the rapid changes in financial technology, or “fintech.”

“They really don’t want to discriminate. They want to get access to capital to the most creditworthy borrowers,” she said. “That’s good business for them, too.”

Oversight overwhelmed

Given its power and expected ubiquity, some argue that the use of AI should be tightly regulated. But there’s little consensus on how that should be done and who should make the rules.

Thus far, companies that develop or use AI systems largely self-police, relying on existing laws and market forces, like negative reactions from consumers and shareholders or the demands of highly-prized AI technical talent to keep them in line.

“There’s no businessperson on the planet at an enterprise of any size that isn’t concerned about this and trying to reflect on what’s going to be politically, legally, regulatorily, [or] ethically acceptable,” said Fuller.

Firms already consider their own potential liability from misuse before a product launch, but it’s not realistic to expect companies to anticipate and prevent every possible unintended consequence of their product, he said.

Few think the federal government is up to the job, or will ever be.

“The regulatory bodies are not equipped with the expertise in artificial intelligence to engage in [oversight] without some real focus and investment,” said Fuller, noting the rapid rate of technological change means even the most informed legislators can’t keep pace. Requiring every new product using AI to be prescreened for potential social harms is not only impractical, but would create a huge drag on innovation.


Jason Furman, a professor of the practice of economic policy at Harvard Kennedy School, agrees that government regulators need “a much better technical understanding of artificial intelligence to do that job well,” but says they could do it.

Existing bodies like the National Highway Traffic Safety Administration, which oversees vehicle safety, for example, could handle potential AI issues in autonomous vehicles rather than a single watchdog agency, he said.

“I wouldn’t have a central AI group that has a division that does cars, I would have the car people have a division of people who are really good at AI,” said Furman, a former top economic adviser to President Barack Obama.

Though keeping AI regulation within industries does leave open the possibility of co-opted enforcement, Furman said industry-specific panels would be far more knowledgeable about the overarching technology of which AI is simply one piece, making for more thorough oversight.

While the European Union already has rigorous data-privacy laws and the European Commission is considering a formal regulatory framework for ethical use of AI, the U.S. government has historically been late when it comes to tech regulation.

“I think we should’ve started three decades ago, but better late than never,” said Furman, who thinks there needs to be a “greater sense of urgency” to make lawmakers act.

Business leaders “can’t have it both ways,” refusing responsibility for AI’s harmful consequences while also fighting government oversight, Sandel maintains.

More like this

ethics of artificial intelligence essay

The robots are coming, but relax

Illustration of people walking around.

The good, bad, and scary of the Internet of Things

ethics of artificial intelligence essay

Paving the way for self-driving cars

“The problem is these big tech companies are neither self-regulating, nor subject to adequate government regulation. I think there needs to be more of both,” he said, later adding: “We can’t assume that market forces by themselves will sort it out. That’s a mistake, as we’ve seen with Facebook and other tech giants.”

Last fall, Sandel taught “Tech Ethics,” a popular new Gen Ed course, with Doug Melton, co-director of Harvard’s Stem Cell Institute. As in his legendary “Justice” course, students consider and debate the big questions about new technologies, everything from gene editing and robots to privacy and surveillance.

“Companies have to think seriously about the ethical dimensions of what they’re doing and we, as democratic citizens, have to educate ourselves about tech and its social and ethical implications — not only to decide what the regulations should be, but also to decide what role we want big tech and social media to play in our lives,” said Sandel.

Doing that will require a major educational intervention, both at Harvard and in higher education more broadly, he said.

“We have to enable all students to learn enough about tech and about the ethical implications of new technologies so that when they are running companies or when they are acting as democratic citizens, they will be able to ensure that technology serves human purposes rather than undermines a decent civic life.”
