Illustration by Ben Boothman

Trailblazing initiative marries ethics, tech

Christina Pazzanese

Harvard Staff Writer

Computer science, philosophy faculty ask students to consider how systems affect society

First in a four-part series that taps the expertise of the Harvard community to examine the promise and potential pitfalls of the rising age of artificial intelligence and machine learning, and how to humanize it.

For two decades, the flowering of the Digital Era has been greeted with blue-skies optimism, defined by an unflagging belief that each new technological advance, whether more powerful personal computers, faster internet, smarter cellphones, or more personalized social media, would only enhance our lives.

Also in the series

Great promise but potential for peril

AI revolution in medicine

Imagine a world in which AI is in your home, at work, everywhere

But public sentiment has curdled in recent years with revelations about Silicon Valley firms and online retailers collecting and sharing people’s data, social media gamed by bad actors spreading false information or sowing discord, and corporate algorithms using opaque metrics that favor some groups over others. These concerns multiply as artificial intelligence (AI) and machine-learning technologies, which made possible many of these advances, quietly begin to nudge aside humans, assuming greater roles in running our economy, transportation, defense, medical care, and personal lives.

“Individuality … is increasingly under siege in an era of big data and machine learning,” says Mathias Risse, Littauer Professor of Philosophy and Public Administration and director of the Carr Center for Human Rights Policy at Harvard Kennedy School. As part of its growing focus on the ways technology is reshaping the future of human rights, the center invites scholars and leaders from the private and nonprofit sectors who work on ethics and AI to engage with students.

Building more thoughtful systems

Even before the technology field belatedly began to respond to market and government pressures with promises to do better, it had become clear to Barbara Grosz, Higgins Research Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), that the surest way to get the industry to act more responsibly was to prepare the next generation of tech leaders and workers to think more ethically about the work they’ll be doing. The result is Embedded EthiCS, a groundbreaking program that marries the disciplines of computer science and philosophy in an attempt to create change from within.

The timing seems on target, since the revolutionary technologies of AI and machine learning have begun making inroads in an ever-broadening range of domains and professions. In medicine, for instance, systems are expected soon to work effectively with physicians to provide better health care. In business, tech giants like Google, Facebook, and Amazon have been using smart technologies for years, but the use of AI is rapidly spreading, with global corporate spending on AI software and platforms expected to reach $110 billion by 2024.

So where are we now on these issues, and what does that mean? To answer those questions, this Gazette series will examine emerging technologies in medicine and business, with the help of various experts in the Harvard community. We’ll also take a look at how the humanities can help inform the future coordination of human values and AI efficiencies through University efforts such as the AI+Art project at metaLAB(at)Harvard and Embedded EthiCS.

In spring 2017, Grosz recruited Alison Simmons, the Samuel H. Wolcott Professor of Philosophy, and together they founded Embedded EthiCS. The idea is to weave philosophical concepts and ways of thinking into existing computer science courses so that students learn to ask not simply “Can I build it?” but rather “Should I build it, and if so, how?”

Through Embedded EthiCS, students learn to identify and think through ethical issues, explain their reasoning for taking, or not taking, a specific action, and ideally design more thoughtful systems that reflect basic human values. The program is the first of its kind nationally and is seen as a model for a number of other colleges and universities that plan to adapt it, including Massachusetts Institute of Technology and Stanford University.

In recent years, computer science has become the second most popular concentration at Harvard College, after economics. About 2,750 students have enrolled in Embedded EthiCS courses since it began. More than 30 courses, including all classes in the computer science department, participated in the program in spring 2019.

“We don’t need all courses, what we need is for enough students to learn to use ethical thinking during design to make a difference in the world and to start changing the way computing technology company leaders, systems designers, and programmers think about what they’re doing,” said Grosz.

That Harvard’s computer science students wanted and needed something more became clear just a few years ago, when Grosz taught “Intelligent Systems: Design and Ethical Challenges,” one of only two CS courses that had integrated ethics into the syllabus at the time.

During a class discussion about Facebook’s infamous 2014 experiment covertly engineering news feeds to gauge how users’ emotions were affected, students were outraged by what they viewed as the company’s improper psychological manipulation. But just two days later, in a class activity in which students were designing a recommender system for a fictional clothing manufacturer, Grosz asked what information they thought they’d need to collect from hypothetical customers.

“It was astonishing,” she said. “How many of the groups talked about the ethical implications of the information they were collecting? None.”

When she taught the course again, only one student said she thought about the ethical implications, but felt that “it didn’t seem relevant,” Grosz recalled.

“You need to think about what information you’re collecting when you’re designing what you’re going to collect, not collect everything and then say ‘Oh, I shouldn’t have this information,’” she explained.
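Grosz’s point amounts to the data-minimization principle: justify each field at design time rather than collecting everything and pruning later. Below is a minimal sketch of what that discipline could look like for the kind of recommender discussed in class; the field names and purposes are hypothetical illustrations, not material from the course.

```python
# Hypothetical sketch of "decide what to collect at design time":
# a field is collected only if the design names the purpose it serves.
# Field names and purposes are illustrative, not taken from the course exercise.

from dataclasses import dataclass

@dataclass(frozen=True)
class Field:
    name: str
    purpose: str  # why the recommender needs this field

# Only fields with a stated recommendation purpose are part of the schema.
APPROVED_FIELDS = [
    Field("size", "filter items that fit the customer"),
    Field("style_preferences", "rank items by declared taste"),
    Field("past_purchases", "avoid recommending duplicates"),
]

APPROVED_NAMES = {f.name for f in APPROVED_FIELDS}

def collect(raw_profile: dict) -> dict:
    """Keep only the fields the design explicitly justified."""
    return {k: v for k, v in raw_profile.items() if k in APPROVED_NAMES}

if __name__ == "__main__":
    raw = {
        "size": "M",
        "style_preferences": ["casual"],
        "past_purchases": ["jeans"],
        "precise_location": "42.37,-71.12",  # no stated purpose, so dropped
        "contacts": ["..."],                 # no stated purpose, so dropped
    }
    print(collect(raw))
```

The design choice is simply that a field without a stated purpose never enters the schema, so the question “should we have this data?” is settled before any data exists.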

Making it stick

Seeing how quickly even students concerned about ethics forgot to consider them when absorbed in a technical project prompted Grosz to focus on how to help students keep ethics up front. Some empirical work shows that standalone courses aren’t very sticky with engineers, and she was also concerned that a single ethics course would not satisfy growing student interest. Grosz and Simmons designed the program to intertwine the ethical with the technical, thus helping students better understand the relevance of ethics to their everyday work.

In a broad range of Harvard CS courses now, philosophy Ph.D. students and postdocs lead modules on ethical matters tailored to the technical concepts being taught in the class.

“We want the ethical issues to arise organically out of the technical problems that they’re working on in class,” said Simmons. “We want our students to recognize that technical and ethical challenges need to be addressed hand in hand. So a one-off course on ethics for computer scientists would not work. We needed a new pedagogical model.”

Examples of ethical problems courses are tackling

Are software developers morally obligated to design for inclusion?

Should social media companies suppress the spread of fake news on their platforms?

Should search engines be transparent about how they rank results?

Should we think about electronic privacy as a right?

Getting comfortable with a humanities-driven approach to learning, using the ideas and tools of moral and political philosophy, has been an adjustment for the computer-science instructors as well as students, said David Grant, who taught as an Embedded EthiCS postdoc in 2019 and is now assistant professor of philosophy at the University of Texas at San Antonio.

“The skill of ethical reasoning is best learned and practiced through open and inclusive discussion with others,” Grant wrote in an email. “But extensive in-class discussion is rare in computer science courses, which makes encouraging active participation in our modules unusually challenging.”

Students are used to being presented problems for which there are solutions, program organizers say. But in philosophy, issues or dilemmas become clearer over time, as different perspectives are brought to bear. And while sometimes there can be right or wrong answers, solutions are typically thornier and require some difficult choices.

“This is extremely hard for people who are used to finding solutions that can be proved to be right,” said Grosz. “It’s fundamentally a different way of thinking about the world.”

“They have to learn to think with normative concepts like moral responsibility and legal responsibility and rights. They need to develop skills for engaging in counterfactual reasoning with those concepts while doing algorithm and systems design,” said Simmons. “We in the humanities problem-solve too, but we often do it in a normative domain.”

The importance of teaching students to consider societal implications of computing systems was not evident in the field’s early days, when there were only a very small number of computer scientists, systems were used largely in closed scientific or industry settings, and there were few “adversarial attacks” by people aiming to exploit system weaknesses, said Grosz, a pioneer in the field. Fears about misuse were minimal because so few had access.

But as the technologies have become ubiquitous in the past 10 to 15 years, with more and more people worldwide connecting via smartphones, the internet, and social networking, and with the rapid application of machine learning and big data computing since 2012, the need for ethical training has become urgent. “It’s the penetration of computing technologies throughout life and its use by almost everyone now that has enabled so much that’s caused harm lately,” said Grosz.

That early apathy has contributed to the perceived disconnect between science and the public. “We now have a gap between those of us who make technology and those of us who use it,” she said.

Simmons and Grosz said that while computer science concentrators leaving Harvard and other universities for jobs in the tech sector may have the desire to change the industry, until now they haven’t been furnished with the tools to do so effectively. The program hopes to arm them with an understanding of how to identify and work through potential ethical concerns that may arise from new technology and its applications.

“What’s important is giving them the knowledge that they have the skills to make an effective, rational argument with people about what’s going on,” said Grosz, “to give them the confidence … to [say], ‘This isn’t right — and here’s why.’”

A winner of the Responsible CS Challenge in 2019, the program received a $150,000 grant for its work in technology education; the award helps fund two computer science postdoc positions whose holders collaborate with the philosophy student-teachers in developing the different course modules.

Though still young, the program has also had some welcome side effects, with faculty and graduate students in the two typically distant disciplines learning from each other in unusual ways. And for the philosophy students there’s been an unexpected boon: working on ethical questions at technology’s cutting edge has changed the course of their research and opened up new career options in the growing field of engaged ethics.

“It is exciting. It’s an opportunity to make use of our skills in a way that might have a visible effect in the near- or midterm,” said philosophy lecturer Jeffrey Behrends, one of the program’s co-directors.

Will this ethical training reshape the way students approach technology once they leave Harvard and join the workforce? That’s the critical question to which the program’s directors are now turning their attention. There isn’t enough data to know yet, and the key components for such an analysis, like tracking down students after they’ve graduated to measure the program’s impact on their work, present a “very difficult evaluation problem” for researchers, said Behrends, who is investigating how best to measure long-term effectiveness.

Ultimately, whether stocking the field with ethically trained designers, technicians, executives, investors, and policymakers will bring about a more responsible and ethical era of technology remains to be seen. But leaving the industry to self-police or waiting for market forces to guide reform clearly hasn’t worked so far.

“Somebody has to figure out a different incentive mechanism. That’s where really the danger still lies,” said Grosz of the industry’s intense profit focus. “We can try to educate students to do differently, but in the end, if there isn’t a different incentive mechanism, it’s quite hard to change Silicon Valley practice.”

Next: Ethical concerns rise as AI takes an ever larger decision-making role in many industries.

Oxford Handbook of Digital Ethics

1 The History of Digital Ethics

Vincent C. Müller, A. v. Humboldt Professor for Theory and Ethics of AI, Friedrich-Alexander Universität Erlangen-Nürnberg

Published: 19 December 2022

Digital ethics, also known as computer ethics or information ethics, is now a lively field that draws a lot of attention. But what were the developments that led to its existence and present state? What are the traditions, concerns, and technological and social developments that guided digital ethics? How did ethical issues change with the digitalization of human life? How did the traditional discipline of philosophy respond and how was ‘applied ethics’ influenced by these developments? This chapter proposes to view the history of digital ethics in three phases: pre-digital modernity (before the invention of digital technology), digital modernity (with digital technology but analogue lives), and digital post-modernity (with digital technology and digital lives). For each phase, the developments in digital ethics are explained with the background of the technological and social conditions. Finally, a brief outlook is provided.

Introduction

The history of digital ethics as a field was strongly shaped by the development and use of digital technologies in society. Digital ethics often mirrors the ethical concerns of the pre-digital technologies that were replaced, but in more recent times digital technologies have also posed questions that are truly new. When ‘data processing’ became a more common activity in industry and public administration in the 1960s, the concerns of ethicists were old issues like privacy, data security, and power through information access. Today, digital ethics involves old issues that have taken on a new quality due to digital technology, such as surveillance, news, or dating, but it also covers new issues that did not exist at all before, such as automated weapons, search engines, automated decision-making, and existential risk from artificial intelligence (AI).

The terms used to name the expanding discipline have also changed over time: we started with ‘computer ethics’ (Bynum 2001; Johnson 1985; Vacura 2015), then more abstract terms like ‘information ethics’ were proposed (Floridi 1999), and now some use the term ‘digital ethics’ (Capurro 2010), as this Handbook does. We also have digital ethics for particular areas, such as ‘the ethics of AI’, ‘data ethics’, ‘robot ethics’, etc.

There are reasons for these changes: ‘computer ethics’ now sounds dated because it focuses attention on the machines, which made good sense when they were visible, big boxes, but began to make less sense when many technical devices invisibly included computing and the location of the processor became irrelevant. The more ambitious notion of ‘information ethics’ involves a digital ontology (Capurro 2006) and faces a significant challenge in explaining the role of the notion of ‘information’; see (Floridi 1999) versus (Floridi and Taddeo 2016). Also, the term ‘information ethics’ is sometimes used in contexts in which information is not computed, for example, in ‘library and information science’. Occasionally, one hears the term ‘cyberethics’ (Spinello 2020), which specifically deals with the connected ‘cyberspace’, probably now an outdated term, at least outside the military. In this confusion, some people use ‘digital’ as the new term, which captures the most relevant phenomena and moves attention away from the machinery to its use. One might argue that the process of ‘computing’ is still fundamental, but we will probably soon care less about whether a device uses computing (analogue or digital), just as we do not care much which energy source the engine in a car uses. The notion of ‘data’ will continue to make sense, but, in the future, I suspect that terms like ‘computing’ and ‘digital’ will just merge into ‘technology’.

Given that this Handbook already has articles on the current state of the art, this article tries to provide historical context: first, the debates during the early days of information technology (IT), from the 1940s to the 1970s, when IT was an expensive technology available only in well-funded central ‘computation centres’; then roughly the 1980s to the early 2000s, when networked personal computers entered offices and households; and finally, the past fifteen years or so, with ‘smart’ phones and other ‘smart’ devices being used privately, for new purposes that emerge with the devices.

This article is structured around two ideas, namely, that (a) technology drives ethics and (b) many issues that are now part of ‘digital ethics’ predate digital technology. There is a certain tension between these two ideas, however, so the discussion will try to disentangle when and in what sense ‘technology drives ethics’ (e.g. by posing new problems, by revealing old ones, or even by effecting ethical change) and when that ‘drive’ is specific to ‘digital’ (computing) technology. I start from the assumption that (b) is true, so the article must begin before the invention of digital technology, in fact, even before the invention of writing. We will return to these two ideas in the conclusion.

I propose to divide history into three main sections: pre-digital modernity (before the invention of digital technology), digital modernity (with digital technology but analogue lives), and digital post-modernity (with digital technology and digital lives). The hope is that this organization matches the social developments of these periods, but I make no claim that the terminology used here is congruent with a standard history of digital society. In each section, we will briefly look at the technology and then at digital ethics. Finally, it may be mentioned that there are significant research desiderata in the field; a detailed history of digital ethics, and indeed of applied or practical ethics, is yet to be written.

Pre-digital modernity: Talking and writing

Technology and society

A fair proportion of the concerns of classical digital ethics are about informational privacy, information security, power through information, etc. These issues existed long before the computing age, in fact before writing was invented—after all, they also feature in village gossip.

One significant step in this timeline, however, was the beginning of symbols and iconic representations from cave paintings onwards (cf. Sassoon and Gaur 1997). These made it possible to keep records that, unlike speech, do not immediately vanish, some of which could be transported to another place. It may be useful to differentiate (a) representation for someone, or intentional representation, and (b) representation per se, when something represents something else because that is its function in a system (assuming this is possible without intentional states). The word ‘tree’, pronounced by someone, is an intentional representation (type (a)); the non-linguistic representation of a tree in the brain of an organism that sees the tree is a non-intentional representation (type (b)) (Müller 2007). Evidently, one major step that is relevant for digital ethics was the invention and use of writing, for the representation of natural language but also for mathematics and other purposes. Symbols in writing are already digital; that is, they have a sharp boundary with no intermediate stages (something is either an ‘A’ or a ‘B’, it cannot be a bit of both) and they are perfectly reproducible: one can write the exact same word or sentence more than once.

In a further step, the replication of writing and images in print multiplies the impact that goes with that writing—what is printed can be transported, remembered, and read by many people. It can become more easily part of the cultural heritage. A further major step is the transmission of speech and symbols over large distances and then to larger audiences through telegraph, mail, radio, and TV. Suddenly, a single person speaking could be heard and even seen by millions of others around the globe, even in real time.

There is a significant body of ethical and legal discussion on pre-digital information handling, especially after the invention of writing, printing, and mass communication. Much of it is still the law today, such as the privacy of letters and other written communication, the press laws, and laws on libel (defamation). The privacy of letters was legally protected in the early days of postal services in the early eighteenth century, for example, in the ‘Prussian New Postal Order’ of 1712 (Matthias 1812: 54). Remarkably, several of these laws have lost their teeth in the digital era without explicit legal change. For example, email is often not protected by the privacy of letters, and online publications are often not covered by press law.

The central issue of privacy, often connected with ‘data protection’, started around 1900 (Warren and Brandeis 1890), developed into a field (Hoffman 1973; Martin 1973; Westin 1968), and is still a central topic of discussion today, from classical surveillance (Macnish 2017), governance (Bennett and Raab 2003), and ethical analysis (Roessler 2017; van den Hoven et al. 2020) to analysis for activism (Véliz 2020). This is an area where the law has not caught up with technical developments in such a way that the original intentions could be maintained; it is not even clear that these intentions are still politically desired.

The power of information and misinformation was well understood after the invention of printing, but especially after the invention of mass media like radio and TV and their use in propaganda; media studies and media ethics became standard academic fields after the Second World War. Media ethics is still an important aspect of digital ethics (Ess 2014), especially the aspect of the ‘public sphere’ (Habermas 1962).

Apart from this tradition of more ‘societal’ ethics, there is a more ‘personal’ kind of ethics of professional responsibility that started in this area and had an impact in the digital era. The influential Institute of Electrical and Electronics Engineers (IEEE, initially the American Institute of Electrical Engineers, AIEE) adopted its first ‘Principles of Professional Conduct for the Guidance of the Electrical Engineer’ in 1912 (AIEE 1912). ‘Engineering ethics’ is thus older than the ethics of computing, but, interestingly, the electrical and telephone industries in the United States managed to get an exception to the demand that engineers hold a professional licence (PE). This move may have had a far-reaching impact on the computer science of today, which usually does not see itself as a discipline of engineering bound by the ethos of engineers, though there are computer scientists who would want to achieve recognition as a profession, and thus the ethos of ‘being a good engineer’ (in many countries, engineering has high status and computer science degrees are ‘diplomas in engineering’).

Up to this point, we see the main ethical themes of privacy and data security, power of information, and professional responsibility.

Digital modernity: Digital ethics in IT

As a rough starting point in this part of the timeline, one should take the first design for a universal computer, Babbage’s ‘Analytical Engine’ of about 1840; the first actual universal computer became feasible only when computers could use electronic parts, starting with Zuse’s Z3 in 1941, followed by the independently developed ENIAC in 1945, the Manchester Mark I in 1949, and then many more machines, mostly due to military funding (Ifrah 1981). All major computers since then have been electronic universal digital computers with stored programs. Shortly after the Second World War came the beginnings of the science of ‘informatics’, with ‘cybernetics’ (Ashby 1956; Wiener 1948) and C.E. Shannon’s ‘A Mathematical Theory of Communication’ (Shannon 1948). In 1956, J. McCarthy, M.L. Minsky, N. Rochester, and C.E. Shannon organized the Dartmouth conference on ‘Artificial Intelligence’, thus coining the term (McCarthy et al. 1955). Less than ten years later, H. Simon predicted, ‘Machines will be capable, within 20 years, of doing any work that a man can do’ (Simon 1965: 96). In 1971 came the first microprocessor computers, with all the processor’s integrated circuits on one microchip. This technology effectively started the modern computer era. Up to that point, computers had been big and very expensive devices, used only by large corporations, research centres, or public entities for ‘data processing’; from the 1980s, ‘personal computers’ were possible (and had to be labelled as such).

Ray Kurzweil has described the development from the Second World War to the present with characteristic panache:

Computers started out as large remote machines in air-conditioned rooms tended by white coated technicians. Subsequently they moved onto our desks, then under our arms, and now in our pockets. Soon, we’ll routinely put them inside our bodies and brains. Ultimately we will become more nonbiological than biological. (Kurzweil 2002)

Professional ethics

The first discussions about ethics and computers in digital modernity were about the personal ethics of the people who work professionally in computing: what they should or should not do. In that phase, a computer scientist was an expert, rather like a doctor or a mechanical engineer, and the question arose whether the new ‘profession’ needed ethics. These early discussions of computer ethics often had a certain tinge of moralizing, of having discovered an area of life that had so far escaped the attention of ethicists, but where immorality, or at least some impact on society, loomed. In contrast to this, professional ethics today often takes the more positive approach that practitioners face ethical problems that expert analysis might help to resolve. The suspicion of immorality was often reinforced by the view of practitioners that our technology is neutral and our aims laudable, thus ‘ethics’ is not needed, a naïve view one finds even today.

Professional ethics moved into computer science quite early in the discipline’s history; for example, the US Association for Computing Machinery (ACM) adopted its ‘Guidelines for Professional Conduct in Information Processing’ in 1966, and Donn Parker pushed this agenda in his discipline in the ensuing years (Parker 1968). The current version is called the ‘ACM Code of Ethics and Professional Conduct’ (ACM 2018).

Responsible technology

The use of nuclear (atomic) bombs in the Second World War and the discussion, from the late 1950s, about the risks of generating electricity in nuclear power stations fuelled the increasing concern about the limits of technology in the 1960s. This development is closely connected to the political movements of ‘the generation of 1968’ on the political left in Europe and the United States. The ‘Club of Rome’ was and is a group of high-level politicians, scientists, and industry leaders that deals with the basic, long-term problems of humankind. In 1972, it published the highly influential book The Limits to Growth: A Report for the Club of Rome’s Project on the Predicament of Mankind (Club of Rome 1972). It argued that the industrialized world was on an unsustainable trajectory of economic growth, using up finite resources (e.g. oil, minerals, farmable land) and increasing pollution, against the background of a growing world population. These were the views of a radical minority at the time, and even today they are still far from commonplace.

This report and other similar discussions fuelled a generally more critical view of technology and the growth it enables. They led to a field of ‘technology assessment’ concerned with long-term impacts that has also dealt with information technologies (Grunwald 2002). This area of the social sciences is influential in political consulting and has several academic institutes (e.g. at the Karlsruhe Institute of Technology). At the same time, a more political angle on technology is taken in the field of ‘Science and Technology Studies’ (STS), which is now a sizable academic field with degree programmes, journals, and conferences. As books like The Ethics of Invention (Jasanoff 2016) show, concerns in STS are often quite similar to those in ethics, though typically with a more ‘critical’ and more empirical approach. Despite these similarities, STS approaches have remained oddly separate from the ethics of computing.

Concerns about sustainable development, especially with respect to the environment, have been prominent on the political agenda for about forty years, and they are now a central policy aim in most countries, at least officially. In 2015, the United Nations adopted the ‘2030 Agenda for Sustainable Development’ (United Nations 2015) with seventeen ‘Sustainable Development Goals’. These goals are now quite influential; for example, they guide the current development of official European Union policy on AI. The seventeen goals are: (1) no poverty; (2) zero hunger; (3) good health and well-being; (4) quality education; (5) gender equality; (6) clean water and sanitation; (7) affordable and clean energy; (8) decent work and economic growth; (9) industry, innovation, and infrastructure; (10) reducing inequality; (11) sustainable cities and communities; (12) responsible consumption and production; (13) climate action; (14) life below water; (15) life on land; (16) peace, justice, and strong institutions; and (17) partnerships for the goals.

It had also been understood by some that science and engineering generally pose ethical problems. The prominent physicist C.F. von Weizsäcker predicted in 1968 that computer technology would fundamentally transform our lives in the coming decades (Weizsäcker 1968). Weizsäcker asked how we can have individual freedom in such a world, ‘i.e. freedom from the control of anonymous powers’ (439). At the end of his article, he demands a Hippocratic oath for scientists. Soon after, Weizsäcker became the founding Director of the famous Max Planck Institute for Research into the Life in a Scientific-Technical World, co-directed by Jürgen Habermas from 1971. At that time, there was clearly a sense among major state funders that these issues deserved their own research institute.

In the United States, the ACM had a Special Interest Group on ‘Computers & Society’ (SIGCAS) from 1969; it is still a significant actor today and still publishes the journal Computers and Society. Norbert Wiener had warned of AI even before the term was coined (see Bynum 2008: 26–30; 2015). In Cybernetics, Wiener wrote:

[…] we are already in a position to construct artificial machines of almost any degree of elaborateness of performance. Long before Nagasaki and the public awareness of the atomic bomb, it had occurred to me that we were here in the presence of another social potentiality of unheard-of importance for good and for evil. (Wiener 1948: 28)

Note that the atomic bomb was a starting point for a critical view of technology in his case, too. In his later book, The Human Use of Human Beings, he warns of manipulation:

[…] such machines, though helpless by themselves, may be used by a human being or a block of human beings to increase their control over the rest of the race or that political leaders may attempt to control their populations by means not of machines themselves but through political techniques as narrow and indifferent to human possibility as if they had, in fact, been conceived mechanically. (Wiener 1950)

Thus, in this phase, professional responsibility gains prominence as an issue, the notion of control through information and machinery comes up as a theme, and there is a general concern about the longer-term impacts of technology.

Digital post-modernity

In this part of the timeline, from 1980 to today (2021), I will use a typical university student in a wealthy European country as an illustration. I think this timeline is useful because it is easy to forget how the availability and use of computers have changed in the past decades and even the past few years. (If this text is read a few years after writing, it will seem quaintly old-fashioned.) We will see that this is the phase in which computers enter people’s lives and digital ethics becomes a discipline.

In the first half of the 1980s, a student would have seen a ‘personal computer’ (PC) in a business context, and towards the end of the 1980s they would probably own one. These PCs were not connected to a network, unless on university premises, so data exchange was through floppy disks. Floppy disks held 360 KB, later 720 KB and 1.44 MB; if the PC had a hard drive at all, it would hold ca. 20–120 MB. After 1990, if private PCs had network connections, that would be through modem dial-in on analogue telephone lines that would mainly serve links to others in the same network (e.g. CompuServe or AOL), allowing email and file-transfer protocol (ftp). Around the same time, personal computers moved from a command-line to a graphical interface, first on MacOS, then on MS Windows and UNIX. Students would use electric typewriters or university-owned computers for their writing until ca. the year 2000, and often even later. The first World Wide Web (WWW) page came online in 1990, and institutional web pages became common in the late 1990s; around the same time a dial-in internet connection at home through a modem became affordable, and Google was founded (1998). After 2000, it became common for a student to have a computer at home with an internet connection, though file exchanges would still be mostly via physical data carriers. By ca. 2010, the internet connection would be ‘always on’ and fast enough for frequent use of WWW pages and video; by ca. 2019, it would be fully digital (ISDN, ADSL, …) and files would often be stored in the ‘cloud’, that is, in spaces somewhere on the internet. Fibre-optic lines started to be used around 2020. During the COVID-19 pandemic of 2020–2022, cooperative work online through live video became common.

Mobile phones (cell phones) became commonly affordable for students in the late 1990s, but these were just phones, increasingly miniaturized. The first ‘smart’ phone, the iPhone, was introduced in 2007. Around 2015, a typical student would own such a smartphone and would use it mostly for things other than calls, essentially as a portable tablet computer with wi-fi capability (but it would be called a ‘phone’, not a ‘computer’). After 2015, the typical smartphone would be connected to the internet at all times (with 3G). Frequent use of the internet over the phone became affordable around 2018/2019 (with 4G), so that around 2020 video calls and online teaching on the phone became possible and useful.

The students born after ca. 1980 (i.e. at university from around 2020) are often called ‘digital natives’, meaning that their teenage and adult lives took place when digital information processing was commonplace. To digital natives, pre-digital technologies like print, radio, or television feel ‘old’, while for the previous generations, digital technologies feel ‘new’. This generational difference may also be one of the few cases where technological change drives actual ethical change, for example, in that digital natives are not worried about privacy in the way older generations are.

Together with smartphones, we now (2022) also have a growing number of other ‘smart’ devices that incorporate computers and are connected to the internet (soon with 5G), especially portables, TVs, cars, and homes, also known as the ‘Internet of Things’ (IoT). ‘Smart’ superstructures like grids, cities, and roads are being deployed. Sensors with digital output are becoming ubiquitous. In addition, a large part of our lives is digital (and thus does not need to be captured by sensors), much of it conducted through commercial platforms and ‘social media’ systems. All these developments enable a surveillance economy in which data is a valuable commodity (as discussed in other chapters in this Handbook).

While a ‘computer’ was easily recognized as a physical box until ca. 2010, it is now incorporated into a host of devices and systems and often not perceived as such; perhaps even designed not to be noticed (e.g. in order to collect data). Much of computing has become a transparent technology in our daily lives: we use it without special learning and do not notice its existence or that computing takes place: ‘The most profound technologies are those that disappear’ (Weiser 1991: 94).

For the purposes of digital ethics, the crucial developments for our student were the move from computers ‘somewhere else’ to their own PC (ca. 1990), the use of the WWW (ca. 1995), and their smartphone (ca. 2015); the current development is the move to computing as a ‘transparent technology’.

Establishment

The first phase of digital ethics, or computer ethics, was the effort in the 1980s and 1990s to establish that there is, or should be, such a thing, both within philosophy or applied ethics and within computer science, especially in the university computer science curriculum. This ‘establishment’ matters for the academic field because, once ‘ethics’ is an established component of degrees in computer science and related disciplines, there is a labour market for academic teachers, a demand for textbooks and articles, etc. (Bynum 2010). It is no accident that the field was established, beyond ‘professional ethics’ and general societal concerns, around the same time that computers moved from labs into offices and homes.

The first use of ‘computer ethics’ was probably by Deborah Johnson in her paper ‘Computer Ethics: New Study Area for Engineering Science Students’, where she remarked, ‘Computer professionals are beginning to look toward codes of ethics and legislation to control the use of software’ (Johnson 1978). Sometimes Walter Maner is credited with the first use (Bynum 2001), for ‘ethical problems aggravated, transformed or created by computer technology’ (Maner 1980). Again, professional ethics seems to have been the forerunner of computer ethics generally.

A few years later, with fundamental publications like James H. [Jim] Moor’s ‘What is Computer Ethics?’ (Moor 1985), the first textbook (Johnson 1985), and three anthologies with established publishers (Blackwell, MIT Press, Columbia University Press), one can speak of an established small discipline (Moor and Bynum 2002). The two texts by Moor and Johnson are still the most cited works in the discipline, together with classic texts on privacy such as Warren and Brandeis (1890) and Westin (1968). As Tavani (1999) shows, in the following fifteen years there was a steady flow of monographs, textbooks, and anthologies. In the 1990s, ‘ethics’ started to gain a place in many computer science curricula.

In terms of themes, we have the classical ones (privacy, information power, professional ethics, impact of technology), and we now have increasing confidence that there is ‘something unique’ here. Maner says, ‘I have tried to show that there are issues and problems that are unique to computer ethics. For all of these issues, there was an essential involvement of computing technology. Except for this technology, these issues would not have arisen, or would not have arisen in their highly altered form’ (Maner 1996).

We now get a wider notion of digital ethics that includes issues which only come up in the ethics of robotics and AI, for example, manipulation, automated decision-making, transparency, bias, autonomous systems, existential risk, etc. (Müller 2020). The relationship between robots or AI systems and humans had already been discussed in Putnam’s classic paper ‘Robots: Machines or Artificially Created Life?’ (Putnam 1964), and it has seen a revival in the discussion of the singularity (Kurzweil 1999) and of existential risk from AI (Bostrom 2014).

Digital ethics now covers the human digital life, online and with computing devices, both on an individual level and as a society, for example, social networks (Vallor 2016). As a result, this handbook includes themes like human–robot interaction, online interaction, fake news, online relationships, advisory systems, transparency and explainability, discrimination, nudging, cybersecurity, and existential risk. In other words, the digital life is prominently discussed here, something that would not have happened even five years ago.

Institutional

The journal Metaphilosophy, founded by T.W. Bynum and R. Reese in 1970, first published articles on computer ethics in the mid-1980s. The journal Minds and Machines, founded by James Fetzer in 1991, started publishing ethics papers under the editorship of James H. Moor (2001–2010). The conference series ETHICOMP (since 1995) and CEPE (Computer Ethics: Philosophical Enquiry, since 1997) started in Europe, and specialized journals were established: the Journal of Information Ethics (1992), Science and Engineering Ethics (1995), Ethics and Information Technology (1999), and Philosophy & Technology (2010). The conferences on ‘Computing and Philosophy’ (CAP), held since 1986 in North America and later in Europe and Asia, merged into the International Association for Computing and Philosophy (IACAP) in 2011 and increasingly have a strong division on ethical issues, as do the Society for the Study of Artificial Intelligence and the Simulation of Behaviour (AISB) (in the UK) and the Philosophy and Theory of Artificial Intelligence (PT-AI) conferences.

Within the academic field of philosophy, applied ethics and digital ethics have remained firmly marginal or specialist even now, with very few presentations at mainstream conferences, publications in mainstream journals, or posts in mainstream departments. As far as I can tell, no paper on digital ethics has appeared in places like the Journal of Philosophy, Mind, Philosophical Review, Philosophy & Public Affairs, or Ethics to this day, while, significantly, there are papers on the topic in Science, Nature, or Artificial Intelligence. Practically orientated fields in philosophy are treated largely as the poor and slightly embarrassing cousin who has to work for a living rather than having old money in the bank. In traditional philosophy, what counts as ‘a problem’ is still mostly defined through tradition rather than by permitting a problem to enter philosophy from the outside. Cementing this situation, few of these ‘practical’ fields have the ambition to have a real influence on traditional philosophy; but this is changing, and I would venture that this influence will be strong in the decades to come. It is interesting to note that the citation counts of academics in computing ethics and theory have surpassed those of comparable philosophers in related traditional areas, and similar trends are now visible for journals. One data point: as of 2020, the average article in Mind is cited twice within four years, while the average article in Minds and Machines is cited three times within four years; the number for the latter journal doubled in three years.[1]

Several prominent philosophers have worked on theoretical issues around AI and computing (e.g. Dennett, Dreyfus, Fodor, Haugeland, Searle), typically with a foundation of their careers in related areas of philosophy, such as philosophy of mind, philosophy of language, or logic. This also applies to Jim Moor, who was one of the first people in digital ethics to hold a professorship at a reputable general university (Dartmouth College). Still, the specialized researchers in the field were at marginal institutions or doing digital ethics on the side. This changed slowly; for example, several technical universities had professors working in digital ethics relatively early on: the Technical Universities in the Netherlands founded the 4TU Centre for Ethics and Technology in 2007 (Delft, Eindhoven, Twente, and Wageningen). In the past decade, Floridi and Bostrom were appointed to professorships at Oxford, at the Oxford Internet Institute (OII) and the Future of Humanity Institute (FHI) respectively. Coeckelbergh was appointed to a chair at the philosophy department in Vienna in 2015 (where Hrachovec was already active). A few more people were and are active in philosophical issues of ‘new media’, for example, Ch. Ess, who moved to Oslo in 2012. The ethics of AI became a field only quite recently, with the first conference in 2012 (Artificial General Intelligence (AGI)-Impacts), but it now has its own institutes at many mainstream universities.

In other words, only five years ago almost all scholars in digital ethics were at institutions marginal to mainstream philosophy. It is only in the last couple of years that digital ethics is becoming mainstream: many more jobs are advertised, senior positions are available to people in the field, younger faculty are picking up the topic, and more established faculty at established institutions are beginning to deem these matters worthy of their attention. That development is rapidly gaining pace now.

I expect that mainstream philosophy will quickly pick up digital ethics in the coming years—the subject has shown itself to be mature and fruitful for classical philosophical issues, and there is an obvious societal demand and significant funding opportunities. Probably there is also some hype already. In the classic notion of a ‘hype cycle’ for the expectations from a new technology, the development is supposed to go through several phases: After its beginnings at the ‘technology trigger’, it gains more and more attention, reaching a ‘peak of inflated expectations’, after which a more critical evaluation begins and the expectations go down, eventually reaching a ‘trough of disillusionment’. From there, a realistic evaluation shows that there is some use, so we get the ‘slope of enlightenment’ and eventually the technology settles on a ‘plateau of productivity’ and becomes mainstream. The Gartner Hype Cycle for AI, 2019 ( Goasduff 2019 ) sees digital ethics itself at the ‘peak of inflated expectations’ … meaning that it is downhill from here, for some time, until we hopefully reach the ‘plateau of productivity’. (My own view is that this is wrong since we are seeing the beginnings of AI policy and stronger digital ethics now.)

The state of the art at the present and an outlook into the future are given in the chapters of this Handbook. Moor saw a bright future even twenty years ago: ‘The future of computer ethics: You ain’t seen nothin’ yet!’ ( Moor 2001 ), and he followed up with a programmatic plea for ‘machine ethics’ ( Moor 2006 ). Moor opens the former article with the bold statement:

Computer ethics is a growth area. My prediction is that ethical problems generated by computers and information technology in general will abound for the foreseeable future. Moreover, we will continue to regard these issues as problems of computer ethics even though the ubiquitous computing devices themselves may tend to disappear into our clothing, our walls, our vehicles, our appliances, and ourselves. ( Moor 2001 : 89)

The prediction has undoubtedly held up until now. The ethics of the design and use of computers is clearly an area of very high societal importance, and we would do well to catch problems early on. This is something we failed to do in the area of privacy (Véliz 2020) and something some hope we will still manage in the area of AI (Müller 2020).

However, as Moor mentions, there is also a very different possible line that was developed around the same time: Bynum reports on an unpublished talk by Deborah G. Johnson with the title ‘Computer Ethics in the 21st Century’ at the 1999 ETHICOMP conference:

On Johnson’s view, as information technology becomes very commonplace—as it gets integrated and absorbed into our everyday surroundings and is perceived simply as an aspect of ordinary life—we may no longer notice its presence. At that point, we would no longer need a term like ‘computer ethics’ to single out a subset of ethical issues arising from the use of information technology. Computer technology would be absorbed into the fabric of life, and computer ethics would thus be effectively absorbed into ordinary ethics. ( Bynum 2001 : 111ff) (cf. Johnson 2004 )

On Johnson’s view, we will simply have applied ethics, and that ethics will concern most themes, such as ‘information privacy’ or ‘how to behave in a romantic relationship’ (Nyholm et al. 2022). Much of this will be taking place with or through computing devices, but that will not matter (even though many things will remain that cannot be done without such devices). In other words, the ‘drive’ of technology we have seen in this history will come to a close, and the technology will become transparent. This transparency will likely raise ethical problems of its own, since it enables surveillance and manipulation. If Johnson is right, however, we will soon have a situation in which all too much is digital and transparent, and thus digital ethics is in danger of disappearing into general applied ethics. In Molière’s play, the bourgeois who wants to become a gentleman tells his ‘philosophy master’:

Oh dear! For more than forty years I have been speaking prose while knowing nothing of it, and I am most obliged to you for telling me so. (Molière, Le Bourgeois gentilhomme, Act II, 1670)

Conclusion and questions

One feature that is characteristic of the new developments in digital ethics, and in applied philosophy generally, is how a problem becomes a problem worth investigating. In traditional philosophy, the criterion is often that there already exists a discussion in the past noting that there is something philosophically interesting about it, something unresolved. Thus, typically, we do not need to ask again whether that problem is worth discussing or whether it relies on assumptions we should not make (so we will find people who seriously ask whether Leibniz or Locke was right on the origin of ideas, for example). In digital ethics, what counts as a problem must also be philosophically interesting, but, more importantly, it must have relevance. Quite often, this means that the problem first surfaces in fields other than philosophy. The initially dominant approach of professional ethics had a touch of ‘policing’ about it, of checking that everyone behaves; such moralizing gives ethics a bad name, and it typically comes too late. More modern digital ethics tries to make people sensitive in the design process (‘ethics by design’) and to pick up problems where people really do not know what the ethically right thing to do is; these are the proper ethical problems that deserve our attention.

For the relation between ethics and computer ethics, Moor seemed right in this prediction:

The development of ethical theory in computer ethics is sometimes overstated and sometimes understated. The overstatement suggests that computer ethics will produce a new ethical theory quite apart from traditional ethical notions. The understatement suggests that computer ethics will disappear into ordinary ethics. The truth, I predict, will be found in the middle [ … ] My prediction is that ethical theory in the future will be recognizable but reconfigured because of work done in computer ethics during the coming century. ( Moor 2001 : 91)

In my view, philosophers must do more than export expertise from philosophy or ethics to practical problems: we must also import insights from these debates back into philosophy. The field of digital ethics can feed largely on societal demand and on the real impact philosophical insights can have in this area, but in order to secure its place within philosophy, we must show both that the work is technically serious and that it has real potential to shed light on traditional issues. As an example, consider the question of when an artificial agent truly is an agent that is responsible for its actions; that discussion seems to provide a new angle on the debates about agency that have traditionally focused on human beings. We can now ask the conceptual question anew and provide evidence from experiments with making things, rather than from passive observation.

Nearly 250 years ago, Immanuel Kant stated that the four main questions of philosophy are: ‘1. What can I know? 2. What should I do? 3. What can I hope for? 4. What is the human?’ (Kant 1956/1800: 26; questions 1–3 in Kant 1956/1781: A805/B833). The philosophical reflection on digital technology contributes to all four of these.

Acknowledgements

I am grateful to Karsten Weber and Eleftheria Deltsou for useful comments and to Carissa Véliz, Guido Löhr, Maximilian Karge, and Jeff White for detailed reviewing.

[1] See https://www.scimagojr.com, accessed 8 August 2022.

ACM (Association for Computing Machinery) (2018), ‘ACM Code of Ethics and Professional Conduct’, https://ethics.acm.org , accessed 8 August 2022.

AIEE (American Institute of Electrical Engineers) ( 1912 ), ‘ Principles of Professional Conduct for the Guidance of the Electrical Engineer ’, Transactions of the American Institute of Electrical Engineers , 31.

Ashby, W.R. ( 1956 ), An Introduction to Cybernetics (Eastford, CT: Martino Fine Books).


Bennett, C.J. , and Raab, C. ( 2003 ), The Governance of Privacy: Policy Instruments in Global Perspective , 3rd 2017 edn (Cambridge, MA: MIT Press).

Bostrom, N. ( 2014 ), Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press).

Bynum, T.W. ( 2001 ), ‘ Computer Ethics: Its Birth and Its Future ’, Ethics and Information Technology 3(2), 109–112.

Bynum, T.W. ( 2008 ), ‘Milestones in the History of Information and Computer Ethics’, in K.E. Himma and H.T. Tavani (eds), The Handbook of Information and Computer Ethics (New York: Wiley), 25–48.

Bynum, T.W. ( 2010 ), ‘The Historical Roots of Information and Computer Ethics’, in L. Floridi (ed.), The Cambridge Handbook of Information and Computer Ethics (Cambridge: Cambridge University Press), 20–38, https://www.cambridge.org/core/books/cambridge-handbook-of-information-and-computer-ethics/AA0E1E64AE997C80FABD3657FD8F6CA8 , accessed 8 August 2022.

Bynum, T.W. ( 2015 ), ‘ Computer and Information Ethics ’, The Stanford Encyclopedia of Philosophy ( Summer 2018 Edition ) (Stanford, CA: CSLI). https://plato.stanford.edu/archives/sum2018/entries/ethics-computer , accessed 8 August 2022.

Capurro, R. ( 2006 ), ‘ Towards an Ontological Foundation of Information Ethics ’, Ethics and Information Technology 8(4), 175–186.

Capurro, R. ( 2010 ), ‘Digital Ethics’, in Academy of Korean Studies (ed.), Civilization and Peace (Seoul: The Academy of Korean Studies), 203–214, http://www.capurro.de/korea.html , accessed 8 August 2022.

Club of Rome ( 1972 ), The Limits to Growth (New York: Potomac Associates).

Ess, C. ( 2014 ), Digital Media Ethics , 2nd edn (Cambridge: Polity Press).

Floridi, L. ( 1999 ), ‘ Information Ethics: On the Philosophical Foundation of Computer Ethics ’, Ethics and Information Technology 1(1), 33–52.

Floridi, L. , and Taddeo, M. ( 2016 ), ‘ What is Data Ethics? ’, Philosophical Transactions of the Royal Society A , 374(2083).

Goasduff, L. (2019), ‘Top Trends on the Gartner Hype Cycle for Artificial Intelligence, 2019’, 12 September, https://www.gartner.com/smarterwithgartner/top-trends-on-the-gartner-hype-cycle-for-artificial-intelligence-2019 , accessed 8 August 2022.

Grunwald, A. ( 2002 ), Technikfolgenabschätzung—eine Einführung (Berlin: Edition Sigma).

Habermas, J. ( 1962 ), Strukturwandel der Öffentlichkeit. Untersuchungen zu einer Kategorie der bürgerlichen Gesellschaft (Neuwied and Berlin: Luchterhand).

Hoffman, L.J. ( 1973 ), Security and Privacy in Computer Systems (Los Angeles, CA: Melville Publications).

Ifrah, G. ( 1981 ), Histoire Universelle des Chiffres (Paris: Editions Seghers).

Jasanoff, S. ( 2016 ), The Ethics of Invention: Technology and the Human Future (New York: Norton).

Johnson, D.G. ( 1978 ), ‘ Computer Ethics: New Study Area for Engineering Science Students ’, Professional Engineer 48(8), 32–34.

Johnson, D.G. ( 1985 ), Computer Ethics (Englewood Cliffs, NJ: Prentice Hall).

Johnson, D.G. ( 2004 ), ‘Computer Ethics’, in L. Floridi (ed.), The Blackwell Guide to the Philosophy of Computing and Information (Oxford: Blackwell), 65–74.

Kant, I. ( 1956 /1781), Kritik der Reinen Vernunft , ed. W. Weischedel , A/B edn (Werkausgabe III & IV; Frankfurt: Suhrkamp).

Kant, I. ( 1956 /1800), Logik , ed. W. Weischedel (Werkausgabe VI; Frankfurt: Suhrkamp).

Kurzweil, R. ( 1999 ), The Age of Spiritual Machines: When Computers Exceed Human Intelligence (London: Penguin).

Kurzweil, R. (2002), ‘We Are Becoming Cyborgs’, 15 March, http://www.kurzweilai.net/we-are-becoming-cyborgs , accessed 8 August 2022.

Macnish, K. ( 2017 ), The Ethics of Surveillance: An Introduction (London: Routledge).

Maner, W. ( 1980 ), Starter Kit in Computer Ethics (Hyde Park, New York: Helvetia Press and the National Information and Resource Center for Teaching Philosophy).

Maner, W. ( 1996 ), ‘ Unique Ethical Problems in Information Technology ’, Science and Engineering Ethics 2(2), 137–154.

Martin, J. ( 1973 ), Security, Accuracy, and Privacy in Computer Systems (Englewood Cliffs, NJ: Prentice-Hall).

Matthias, W.H. ( 1812 ), Darstellung des Postwesens in den Königlich Preußischen Staaten (Berlin: Selbstverlag).

McCarthy, J. , Minsky, M. , Rochester, N. , and Shannon, C.E. (1955), ‘A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence’, 31 August, http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html , accessed 8 August 2022.

Moor, J.H. ( 1985 ), ‘ What is Computer Ethics? ’, Metaphilosophy , 16(4), 266–275.

Moor, J.H. ( 2001 ), ‘ The Future of Computer Ethics: You Ain’t Seen Nothin’ Yet! ’, Ethics and Information Technology 3(2), 89–91.

Moor, J.H. ( 2006 ), ‘ The Nature, Importance, and Difficulty of Machine Ethics ’, IEEE Intelligent Systems , 21(4), 18–21.

Moor, J.H. , and Bynum, T.W. ( 2002 ), Cyberphilosophy: The Intersection of Philosophy and Computing (Oxford: Blackwell).

Müller, V.C. ( 2007 ), ‘ Is There a Future for AI without Representation? ’, Minds and Machines 17(1), 101–115.

Müller, V.C. ( 2020 ), ‘Ethics of Artificial Intelligence and Robotics’, in E.N. Zalta (ed.), Stanford Encyclopedia of Philosophy (Summer 2020; Palo Alto, CA: CSLI, Stanford University), 1–70, https://plato.stanford.edu/entries/ethics-ai , accessed 8 August 2022.

Nyholm, S. , Danaher, J. , and Earp, B.D. ( 2022 ), ‘The Technological Future of Love’, in A. Grahle , N. McKeever , and J. Sanders (eds), Philosophy of Love in the Past, Present, and Future (London: Routledge), 224–239.

Parker, D.B. ( 1968 ), ‘ Rules of Ethics in Information Processing ’, Communications of the ACM 11, 198–201.

Putnam, H. ( 1964 ), ‘ Robots: Machines or Artificially Created Life? ’, Mind, Language and Reality, Philosophical Papers II , repr. 1975 (Cambridge: Cambridge University Press), 386–407.

Roessler, B. ( 2017 ), ‘ Privacy as a Human Right ’, Proceedings of the Aristotelian Society , 2(CXVII), 187–206.

Sassoon, R. , and Gaur, A. ( 1997 ), Signs, Symbols and Icons: Pre-History of the Computer Age (Exeter: Intellect Books).

Shannon, C.E. ( 1948 ), ‘ A Mathematical Theory of Communication ’, Bell System Technical Journal , 27(July, October), 379–423, 623–656.

Simon, H. ( 1965 ), The Shape of Automation for Men and Management (New York: Harper & Row).

Spinello, R.A. ( 2020 ), Cyberethics: Morality and Law in Cyberspace (Burlington, Mass: Jones & Bartlett Learning).

Tavani, H.T. (1999), ‘ Computer Ethics Textbooks: A Thirty-Year Retrospective ’, ACM SIGCAS Computers and Society (September), 26–31.

United Nations (2015), ‘The 2030 Agenda for Sustainable Development’, https://sustainabledevelopment.un.org/post2015/transformingourworld , accessed 8 August 2022.

Vacura, M. ( 2015 ), ‘ The History of Computer Ethics and Its Future Challenges ’, Information Technology and Society Interaction and Independence (IDIMT 2015) (Vienna), 325–333.

Vallor, S. ( 2016 ), ‘Social Networking and Ethics’, in E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy , Summer 2016 edn, https://plato.stanford.edu/entries/ethics-social-networking , accessed 8 August 2022.

van den Hoven, J. , Blaauw, M. , Pieters, W. , and Warnier, M. ( 2020 ), ‘Privacy and Information Technology’, in E.N. Zalta (ed.), The Stanford Encyclopedia of Philosophy , Summer 2020 edn, https://plato.stanford.edu/archives/sum2020/entries/it-privacy , accessed 8 August 2022.

Véliz, C. ( 2020 ), Privacy is Power (London: Penguin).

Warren, S.D. , and Brandeis, L.D. ( 1890 ), ‘ The Right to Privacy ’, Harvard Law Review 4(5), 193–220.

Weiser, M. ( 1991 ), ‘ The Computer for the 21st Century ’, Scientific American 265(3), 94–104.

Weizsäcker, C.F. v. ( 1968 ), ‘ Die Wissenschaft als Ethisches Problem ’, Physikalische Blätter , 10, 433–441.

Westin, A.F. ( 1968 ), ‘ Privacy and Freedom ’, Washington & Lee Law Review 25(166), 101–106.

Wiener, N. ( 1948 ), Cybernetics: Or Control and Communication in the Animal and the Machine , 2nd. edn 1961 (Cambridge, MA: MIT Press).

Wiener, N. ( 1950 ), The Human Use of Human Beings (Boston, MA: Houghton Mifflin).


Ethical Use of Technology in Digital Learning Environments: Graduate Student Perspectives


Barbara Brown

Verena Roberts

Michele Jacobsen

Copyright Year: 2020

Publisher: University of Calgary

Language: English

Conditions of use: Attribution

Reviewed by Morris Thomas, Director, Center for Excellence in Teaching, Learning, & Assessment, Howard University on 2/8/21


Comprehensiveness rating: 4

The book is comprehensive, providing content and context on the various issues involving the ethical use of technology in digital learning environments. The authors cover information ranging from admissions and access to the use and creation of technology in digital learning spaces. However, more discussion or focus could have been placed on some of the topics covered, and other topics could have been placed in another text. One could view this as both an opportunity and a strength of the text: it is so rich that it could have been expanded into another text, or possibly multiple texts. Nevertheless, the content provided is appropriately covered.

Content Accuracy rating: 5

The text is up to date with the literature covered and referenced throughout. It is written from a research lens; therefore, the voice of the text is unbiased, or is transparent about any positions presented throughout the text.

Relevance/Longevity rating: 5

Any text involving technology runs the risk of being perceived as out of date due to the ever-evolving technological landscape; however, this text will maintain relevance to the available literature due to its well-contextualized presentation of the various aspects of ethical considerations for using technology in digital learning environments.

Clarity rating: 5

One of the strongest aspects of this text is its clarity. The text is well-organized and clearly explains all technical terms used throughout the text. The readers should be able to follow along with clarity and understanding.

Consistency rating: 4

The book is consistent in the terminology it uses. However, the text covers a vast amount of content, and the framework could have been strengthened by expanding on some topics and limiting the inclusion of others.

Modularity rating: 4

The text has clear readability. Whether it works well in a course depends on the instructor's philosophy on reading assignments; if one simply assigns full chapters, this text certainly hits the mark. The text also includes clear smaller sections with subheadings and subunits. The subunits presented should not cause much disruption to the reader.

Organization/Structure/Flow rating: 5

The text is well-organized. In the introduction of the text, its content is clearly described and then as you read the text you find the information provided in a manner that is clear and sensible. The layout of the text makes the information included accessible for the reader.

Interface rating: 5

I did not find evidence of any significant issues with the text's interface. The text was easy to read and navigate.

Grammatical Errors rating: 5

The text is well edited and did not have evidence of any significant grammatical errors.

Cultural Relevance rating: 5

There were no apparent instances of cultural insensitivity or offensive content throughout the text.

This book certainly has value: I will use it in courses I teach, and it is also valuable to me as someone who provides training on teaching and learning. The text provides a rich resource for employing the ethical use of technology in digital learning spaces. This conversation is needed more than ever as learning spaces continue to digitize.

Table of Contents

  • Chapter 1: Ethical Considerations When Using Artificial Intelligence-Based Assistive Technologies in Education
  • Chapter 2: Beware: Be Aware - The Ethical Implications of Teachers Who Use Social Networking Sites (SNSs) to Communicate
  • Chapter 3: From Consumers to Prosumers: How 3D Printing is Putting Us in the Driver’s Seat for Creation and the Ethical Considerations that Accompany this Shift.
  • Chapter 4: Ethical Issues in Academic Resource Sharing
  • Chapter 5: Adaptive Learning Systems in Modern Classrooms
  • Chapter 6: STEM Beyond the Acronym: Ethical Considerations in Standardizing STEM Education in K-12
  • Chapter 7: Considerations of Equitable Standards in the Implementation of Assistive Technology
  • Chapter 8: Who Gets In? Examining Ethics and Equity in Post-Secondary Admissions
  • Chapter 9: To What Extent Does Fake News Influence Our Ability to Communicate in Learning Organizations?


About the book.

This book is the result of a co-design project in a class in the Masters of Education program at the University of Calgary. The course, and the resulting book, focus primarily on the safe and ethical use of technology in digital learning environments. The course was organized according to four topics based on Farrow’s (2016) Framework for the Ethics of Open Education.



Ethics in the digital world: Where we are now and what’s next

Kate Gromova, Yaroslav Eferin


Will widespread adoption of emerging digital technologies such as the Internet of Things and Artificial Intelligence improve people’s lives? The answer appears to be an easy “yes.” The positive potential of data seems self-evident. Yet this issue is being actively discussed across international summits and events. The agenda of the Global Technology Governance Summit 2021, for instance, is dedicated to questions around whether and how “data can work for all”, emphasizing aspects of trust, and especially the ethics of data use. Not without reason: at least 50 countries are independently grappling with how to define ethical data use without violating people’s private space, personal data, and many other sensitive interests.

Ethics goes online

What is ethics per se? Aristotle proposed that ethics is the study of human relations in their most perfect form. He called it the science of proper behavior. Aristotle claimed that ethics is the basis for creating an optimal model of fair human relations; ethics lie at the foundation of a society’s moral consciousness. They are the shared principles necessary for mutual understanding and harmonious relations.

Ethical principles have evolved many times over since the days of the ancient Greek philosophers and have been repeatedly rethought (e.g., hedonism, utilitarianism, relativism, etc.). Today we live in a digital world, and most of our relationships have moved online to chats, messengers, social media, and many other ways of online communication.  We do not see each other, but we do share our data; we do not talk to each other, but we give our opinions liberally. So how should these principles evolve for such an online, globalized world? And what might the process look like for identifying those principles?  

Digital chaos without ethics

The year 2020 and its lockdowns clearly demonstrated that we have plunged irrevocably into the digital world. As digital technologies become ever more deeply embedded in our lives, the need for a new, shared data ethos grows more urgent. Without shared principles, we risk exacerbating the existing biases that are part of our current datasets. Just a few examples:

  • The common exclusion of women as test subjects in much medical research results in a lack of relevant data on women’s health. Heart disease, for example, has traditionally been thought of as a predominantly male disease, which has led to widespread misdiagnosis or underdiagnosis of heart disease in women.
  • A study of AI tools that authorities use to estimate the likelihood that a criminal will reoffend found that the algorithms produced different results for black and white people under the same conditions. This discriminatory effect has resulted in sharp criticism and distrust of predictive policing.
  • Amazon abandoned its AI hiring program because of its bias against women. The algorithm was trained on the resumes of candidates who had applied for jobs over the previous ten years. Because most of those applicants were men, it learned to prefer men and penalized features associated with women (a toy sketch of this mechanism follows the list).
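The following is a deliberately simplified, hypothetical sketch (in Python) of the mechanism behind the Amazon example above: a naive scoring rule fitted to historically skewed hiring data ends up penalizing a token associated with women. The data and token names are invented for illustration; this is not Amazon’s actual system or data.

```python
# Hypothetical illustration of bias learned from skewed historical data.
from collections import Counter

# Invented historical records: most past hires are men, so tokens that
# appear mainly on women's resumes rarely co-occur with "hired".
past_resumes = [
    ({"java", "chess club"}, "hired"),
    ({"java", "football"}, "hired"),
    ({"python", "football"}, "hired"),
    ({"python", "women's chess club"}, "rejected"),
    ({"java", "women's chess club"}, "rejected"),
]

hired_tokens, rejected_tokens = Counter(), Counter()
for tokens, outcome in past_resumes:
    (hired_tokens if outcome == "hired" else rejected_tokens).update(tokens)

def score(resume_tokens):
    """Naive score: reward tokens seen on hired resumes, penalize tokens
    seen on rejected ones. This simply reproduces the historical skew."""
    return sum(hired_tokens[t] - rejected_tokens[t] for t in resume_tokens)

# Two otherwise similar candidates: the second is penalized only because
# "women's chess club" never appeared among past hires.
print(score({"python", "chess club"}))          # 1
print(score({"python", "women's chess club"}))  # -2
```

The point is not this particular scoring rule but that any model fitted to one-sided historical outcomes will tend to reproduce them, which is exactly the kind of flaw shared ethical principles around data are meant to catch.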

These examples all contribute to distrust or rejection of potentially beneficial new technological solutions. What ethical principles can we use to address the flaws in technologies that increase biases, profiling, and inequality? This question has led to significant growth in interest in data ethics over the last decade (Figures 1 and 2). And this is why many countries are now developing or adopting ethical principles, standards, or guidelines.

Figure 1. Data ethics concept, 2010–2021.

Figure 2. AI ethics concept, 2010–2021.

Guiding data ethics

Countries are taking wildly differing approaches to address data ethics. Even the definition of data ethics varies. Look, for example, at three countries—Germany, Canada, and South Korea—with differing geography, history, institutional and political arrangements, and traditions and culture.

Germany established a Data Ethics Commission in 2018 to provide recommendations for the Federal Government’s Strategy on Artificial Intelligence. The Commission declared that its  operating principles were based on the Constitution, European values, and its “cultural and intellectual history.” Ethics, according to the Commission, should not begin with establishing boundaries. Rather, when ethical issues are discussed early in the creation process, they may make a significant contribution to design, promoting appropriate and beneficial applications of AI systems.

In Canada, the advancement of AI technologies and their use in public services has spurred a discussion about data ethics. The Government of Canada’s recommendations focus on public service officials and processes. The government has provided guiding principles to ensure the ethical use of AI and has developed a comprehensive Algorithmic Impact Assessment online tool to help government officials explore AI in a way that is “governed by clear values, ethics, and laws.”

The Korean Ministry of Science and ICT, in collaboration with the National Information Society Agency, released Ethics Guidelines for the Intelligent Information Society in 2018. These guidelines build on the Robots Ethics Charter and call for developing AI and robots that do not have “antisocial” characteristics. Broadly, Korean ethical policies have mainly focused on the adoption of robots into society, while emphasizing the need to balance protecting “human dignity” and “the common good”.

Do data ethics need a common approach?

The differences among these initiatives seem to be related to traditions, institutional arrangements, and many other cultural and historical factors. Germany places emphasis on developing autonomous vehicles and presents a rather comprehensive view of ethics; Canada focuses on guiding government officials; Korea approaches the questions through the prism of robots. Still, none of them clearly defines what data ethics is. None of them is meant to have legal effect. Rather, they stipulate the principles of the information society. In our upcoming study, we intend to explore the reasons and rationale for the different approaches that countries take.

Discussion and debate on data and technology ethics will undoubtedly continue for many years to come as digital technologies develop further and penetrate all aspects of human life. But the sooner we reach a consensus on key definitions, principles, and approaches, the easier it will be to turn debate into real action. Data ethics is equally important for governments, businesses, and individuals, and should be discussed openly. The process of such discussion will itself serve as an awareness-raising and knowledge-sharing mechanism.

Recall the Golden Rule of Morality: Do unto others as you would have them do unto you. We suggest keeping this in mind when we all go online.


Kate Gromova

Digital Development Consultant, Co-founder of Women in Digital Transformation

Yaroslav Eferin

Digital Development Consultant



Thinking Through the Ethics of New Tech…Before There’s a Problem

  • Beena Ammanath


Historically, it’s been a matter of trial and error. There’s a better way.

There’s a familiar pattern when a new technology is introduced: it grows rapidly, comes to permeate our lives, and only then does society begin to see and address the problems it creates. But is it possible to head off those problems before they arise? While companies can’t predict the future, they can adopt a sound framework that will help them prepare for and respond to unexpected impacts. First, when rolling out new tech, it’s vital to pause and brainstorm potential risks, consider negative outcomes, and imagine unintended consequences. Second, it can also be clarifying to ask, early on, who would be accountable if the organization had to answer for the unintended or negative consequences of its new technology, whether that means testifying to Congress, appearing in court, or answering questions from the media. Third, appoint a chief technology ethics officer.

We all want the technology in our lives to fulfill its promise — to delight us more than it scares us, to help much more than it harms. We also know that every new technology needs to earn our trust. Too often the pattern goes like this: A technology is introduced, grows rapidly, comes to permeate our lives, and only then does society begin to see and address any problems it might create.


  • Beena Ammanath is the Executive Director of the global Deloitte AI Institute, author of the book “Trustworthy AI,” founder of the non-profit Humans For AI, and also leads Trustworthy and Ethical Tech for Deloitte. She is an award-winning senior executive with extensive global experience in AI and digital transformation, spanning e-commerce, finance, marketing, telecom, retail, software products, services, and industrial domains with companies such as HPE, GE, Thomson Reuters, British Telecom, Bank of America, and e*trade.



Research Article (Open Access, Peer-reviewed)

Exploring the ethical issues in research using digital data collection strategies with minors: A scoping review

Danica Facca, Maxwell J. Smith, Jacob Shelley, Daniel Lizotte, and Lorie Donelle (these authors contributed equally to this work)

Affiliations: Faculty of Information and Media Studies; School of Health Studies; Faculty of Law; Department of Epidemiology and Biostatistics; Department of Computer Science; Arthur Labatt Family School of Nursing, Western University, London, ON, Canada

Author roles: Conceptualization, Data curation, Formal analysis, Supervision, Writing – original draft, Writing – review & editing

* E-mail: [email protected]

  • Published: August 27, 2020
  • https://doi.org/10.1371/journal.pone.0237875


While emerging digital health technologies offer researchers new avenues to collect real-time data, little is known about the current ethical dimensions, considerations, and challenges that are associated with conducting digital data collection in research with minors. As such, this paper reports the findings of a scoping review which explored existing literature to canvass current ethical issues that arise when using digital data collection in research with minors. Scholarly literature was searched using electronic academic databases for articles that provided explicit ethical analysis or presented empirical research that directly addressed ethical issues related to digital data collection used in research with minors. After screening 1,156 titles and abstracts, and reviewing 73 full-text articles, 20 articles were included in this review. Themes which emerged across the reviewed literature included: consent, data handling, minors’ data rights, observing behaviors that may result in risk of harm to participants or others, private versus public conceptualizations of data generated through social media, and gatekeeping. Our findings indicate a degree of uncertainty which invariably exists with regard to the ethics of research that involves minors and digital technology. The reviewed literature suggests that this uncertainty can often lead to the preclusion of minors from otherwise important lines of research inquiry. While uncertainty warrants ethical consideration, increased ethical scrutiny and restrictions on the conduct of such research raise their own ethical challenges. We conclude by discussing and recommending the ethical merits of co-producing ethical practice between researchers and minors as a mechanism to proceed with such research while addressing concerns around uncertainty.

Citation: Facca D, Smith MJ, Shelley J, Lizotte D, Donelle L (2020) Exploring the ethical issues in research using digital data collection strategies with minors: A scoping review. PLoS ONE 15(8): e0237875. https://doi.org/10.1371/journal.pone.0237875

Editor: Ghislaine JMW van Thiel, Utrecht University Medical Center, NETHERLANDS

Received: April 1, 2020; Accepted: August 4, 2020; Published: August 27, 2020

Copyright: © 2020 Facca et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the paper and its Supporting Information files.

Funding: This work was supported by the Western University Faculty of Health Sciences Emerging Team grant.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Much has been written about the ethics of conducting research with minors, due in part to the distinctive ethical issues that emerge when conducting research with this population [ 1 , 2 ]. Similarly, there is an emerging body of literature about the ethics of research practices that include digital data (sometimes characterized as ‘big data’) collection via digital technologies (e.g., smartphones). However, the ethical dimensions, considerations, and challenges that are associated with digital data collection in research involving minors remains unclear. Notably, does research that involves the generation and/or collection of digital data among minors present unique ethical challenges? What are those challenges and how might researchers best manage or mitigate the ethical dimensions and challenges within a digital technology context? This scoping review explores existing literature to understand and anticipate the ethical issues associated with collecting digitally derived research data with minors in addition to possible resolutions that can be put forward based on the reviewed literature.

Minors and digital data

One challenge worth noting at the outset of this review is that there is no consensus on the definition of a minor; two broad approaches could therefore be adopted for present purposes [ 3 , 4 ]. The first defines a minor as any child who has not reached the age of majority, generally thought to be under the age of 18. The problem with this definition is that the age of majority varies by jurisdiction. Two documents that have been extremely influential in research ethics, to the extent that many countries base their regulations on their guidelines, namely the Declaration of Helsinki [ 5 ] and the International Ethical Guidelines for Health-related Research Involving Humans [ 6 ], discuss the age of majority and minors’ participation in research.

For example, within the North American context, the age of majority is 18 in some Canadian provinces like Ontario, Alberta, and New Brunswick, and 19 in other Canadian provinces like British Columbia, Nova Scotia, and the territory of Nunavut. Similar to Canada, in the United States, the age of majority also varies by state. In some U.S. states like Colorado, Idaho, and Minnesota the age of majority is 18, in others like Alabama and Nebraska it is 19, and in Mississippi it is 21. Within a European context, the age of majority in all EU Member States is 18 except for Scotland where it is 16 [ 7 ]. Additionally, the age of majority may depend on context. For example, in the Canadian province of Ontario, the age of majority is 18, but Ontario legislation dictates that one has to be 19 to purchase alcohol, suggesting that, in the context of alcohol purchases, an 18-year-old is a “minor”. Further, in some EU Member States, a minor will gain full legal capacity if they are married or become pregnant before reaching the age of majority [ 7 ], suggesting that, in the context of a marital contract or pregnancy, a minor can become an adult with full legal capacity even if they are under 18.

The second approach to defining a minor is based on capacity. This is consistent with established ethical guidelines which build on the Declaration of Helsinki [ 5 ] and the International Ethical Guidelines for Health-related Research Involving Humans [ 6 ], such as the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans [ 8 ]. According to the Tri-Council Policy Statement, if a child is “mature sufficiently to decide on their own behalf (subject to legal requirements), the researcher must seek the children’s autonomous consent in order for their participation to continue” [ 8 ]. For our purposes, it is not necessary to resolve this issue, as our interest is in how others have addressed the issue of digital data collection in research with minors. As identified in Table 1 below, we adopted a broad approach, using search keywords intended to be over-inclusive.

Table 1: https://doi.org/10.1371/journal.pone.0237875.t001

A second challenge worth noting is specifying what exactly is meant by “digital data”. For our purposes, while there may be overlap with what is otherwise termed ‘big data’—defined by the volume, variety, complexity, speed and value of the data—we broadly define digital data as electronic data or information however collected [ 9 ]. As Lupton notes: “People’s interactions online, their use of mobile and wearable devices and other ‘smart’ objects and their movements in sensor-embedded spaces all generate multiple and constant flows of digital data, often about intensely personal actions and preferences, social relationships and bodily functions and movements” [ 10 ]. It is in this data that our review is primarily interested.

A multi-disciplinary research team was established to undertake this scoping review, with expertise in computer science, digital health, ethics, law, and public health. The scoping review was conducted according to Arksey and O’Malley’s [ 11 ] five stage framework which included: 1) identifying the research question/s; 2) identifying relevant studies; 3) selecting relevant studies; 4) charting the data; 5) aggregating, summarizing, and reporting the results. The research team consulted a research librarian who refined the initial search strategy and recommended databases to search given the context, subject, and population of interest.

Literature search

To address the research question, scholarly literature was searched using four categories of keywords: ethical issues, digital data collection, research, and minors (Table 1). While initially considered, the search term ‘internet’ was purposefully excluded given the high volume of irrelevant articles it captured; the term ‘internet’ is so ubiquitous that it did not help discriminate articles relevant to the purpose of the scoping review. Boolean search operators “AND” and “OR” were used to combine keywords between and within categories. Searches were conducted in the PubMed and Scopus databases.
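To make the structure of such a search concrete, the snippet below assembles a Boolean query with “OR” within each keyword category and “AND” between categories, as described above. This is an illustration only; the category terms are placeholders, not the authors’ actual Table 1 keywords.

```python
# Illustrative only: builds a Boolean query string in the OR-within,
# AND-between structure described above. The terms are placeholders,
# not the review's actual search keywords.
keyword_categories = {
    "ethical issues": ["ethic*", "consent", "privacy"],
    "digital data collection": ["digital data", "social media", "smartphone"],
    "research": ["research", "study"],
    "minors": ["minor*", "child*", "adolescen*", "teen*"],
}

def build_query(categories):
    groups = []
    for terms in categories.values():
        # OR within a category; quote multi-word terms as exact phrases.
        joined = " OR ".join(f'"{t}"' if " " in t else t for t in terms)
        groups.append(f"({joined})")
    # AND between categories.
    return " AND ".join(groups)

print(build_query(keyword_categories))
# (ethic* OR consent OR privacy) AND ("digital data" OR "social media" OR smartphone) AND ...
```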

Articles were eligible for inclusion if they: were written in English; discussed the context of interest (i.e., research), subject of interest (i.e., digital data collection), population of interest (i.e., minors), and provided an explicit ethical analysis or presented empirical research that directly addressed ethical issues arising from the topic of interest (i.e., using digital data collection in research with minors). Upon completion of the search, duplicate articles were removed, and search results were screened by title and abstract for eligibility. Screening of abstracts was undertaken by a primary reviewer and confirmed by a second reviewer. Articles which passed the abstract screening were then retrieved in full-text and further screened for eligibility by a secondary reviewer and confirmed by two additional reviewers. Articles were summarized, and characteristics were charted, including: author(s), year of publication, database source, and ethical considerations. Reference lists of full-text articles were searched and eligible articles charted. Articles were thematically coded through multiple iterative discussions among three reviewers. A narrative account of key thematic findings is presented according to each identified theme.
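As a rough illustration of the charting step described above, the sketch below represents one charted article as a small record with the characteristics the authors list (author(s), year of publication, database source, ethical considerations), plus the themes assigned during coding. The field values are invented, not taken from the review.

```python
# One possible representation of a charted article; example values invented.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChartedArticle:
    authors: str
    year: int
    database_source: str          # "PubMed" or "Scopus"
    ethical_considerations: str   # free-text summary noted during charting
    themes: List[str] = field(default_factory=list)  # assigned during coding

record = ChartedArticle(
    authors="Example Author et al.",
    year=2016,
    database_source="Scopus",
    ethical_considerations="Parental consent versus adolescent assent online",
)
record.themes.extend(["consent", "gatekeeping"])  # iterative thematic coding
print(record)
```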

Search strategy

The search strategy and results are summarized ( Fig 1 ). Searching academic databases identified 1,170 abstracts, with 1,156 remaining following the removal of duplicates. Preliminary screening of abstracts identified 73 articles for full-text screening, of which 20 articles met the inclusion criteria. Hand searching reference lists did not identify any additional relevant articles, resulting in a total of 20 articles for inclusion in the scoping review.

Fig 1. Search strategy and results. https://doi.org/10.1371/journal.pone.0237875.g001

Scholarly literature characteristics

The majority of articles included in the review were published after 2014 ( n = 14, 70%) and originated from the USA ( n = 6, 30%), Australia ( n = 6, 30%), and the UK ( n = 3, 15%). More than half were original research ( n = 11, 55%), including five qualitative studies (5/11, 45%). Overall, half of the articles ( n = 10, 50%) focused on general ethical concerns with web- or internet-based digital data collection methods. Five articles ( n = 5, 25%) focused specifically on the ethical concerns of using social media as a means of digital data collection. In general, social media ( n = 9, 45%) was the most popular means of digital data collection. Facebook ( n = 8/9, 88%) was the most popular platform identified, followed by Myspace ( n = 3/9, 33%), YouTube ( n = 2/9, 22%), Skype ( n = 1/9, 11%), and Twitter ( n = 1/9, 11%). Touch-screen technology was the second most popular means of digital data collection identified ( n = 6, 30%) and included smartphones ( n = 5/6, 83%) and tablets ( n = 3/6, 50%). The remainder of digital data collection means included online forums ( n = 4, 20%), computers ( n = 2, 10%), internet-based surveys ( n = 2, 10%), and an electronic health record system ( n = 1, 5%). Of the participant samples described in the original research articles ( n = 5), most involved minors 14 years of age and under ( n = 3). One study focused on minors between 14 and 17 years of age, while another focused on minors between 7 and 17 years of age ( Table 2 ).
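The reported proportions can be reproduced directly from the counts in the preceding paragraph; the short sketch below recomputes the percentages that use the 20 included articles as the denominator (labels and counts are transcribed from the text above, not from the underlying dataset).

```python
# Sketch: recomputing the proportions reported above from the raw counts,
# using the 20 included articles as the denominator.
total = 20

counts = {
    "published after 2014": 14,
    "originated from the USA": 6,
    "original research": 11,
    "general ethical concerns focus": 10,
    "social media focus": 5,
    "social media as collection means": 9,
    "touch-screen technology": 6,
    "online forums": 4,
}

for label, n in counts.items():
    print(f"{label}: {n}/{total} = {n / total:.0%}")
```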

Table 2. https://doi.org/10.1371/journal.pone.0237875.t002

The thematic analysis generated six themes that addressed ethical considerations regarding digital data collection in research with minors, which included: consent, data handling, minors’ data rights, observing behaviors that may result in risk of harm to participants or others, private versus public conceptualizations of data generated with social media, and gatekeeping. Although these themes can generally be applied to any research that leverages digital data collection, two of the six themes, minors’ data rights and gatekeeping, warrant greater attention as they uniquely apply to conducting research when the population of interest is minors. The following sections detail the ethical issues raised according to each theme. For purposes of this review, the term minors is used interchangeably with any age descriptor identified in the reviewed literature (i.e., child/ren, kid/s, teen/s, adolescent/s, infant/s).

Consent

Consent is an ethical requirement when conducting research with human participants. General principles of consent under policies that inform research practices, like the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans [ 8 ], mandate that an individual’s consent must be given voluntarily and can be withdrawn at any time, along with their data or biological materials, should they choose to withdraw their participation. To make such a decision, policies like the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans recommend that consent be determined by an individual’s decision-making capacity, that is, their ability “to understand relevant information presented (e.g., purpose of the research, foreseeable risks, and potential benefits), and to appreciate the potential consequences of any decision they make based upon this information” [ 8 ], rather than relying on age.

Autonomy, or the capacity to appreciate and understand the relevant information presented about a study in order to make an informed, voluntary decision [ 8 ], is a necessary condition for consent. When it comes to involving minors in research, consent can be achieved in one of two ways: the researcher/s may obtain consent from the minor if it is determined that the minor has the capacity to make such an autonomous decision, or the researcher/s may obtain consent from an authorized third party, like a parent or guardian, if it is determined that the minor does not. If consent is obtained from an authorized third party, then assent to participate in the proposed research is sought from the minor. Consent by an authorized third party and assent are both needed for participation in research where it has been determined that a minor lacks autonomous decision-making capacity [ 8 ].
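For readers who find it helpful, the schematic sketch below expresses this consent pathway as a simple decision rule. The class, field, and function names are invented for illustration; this is not an implementation of the Tri-Council Policy Statement or of any research ethics board’s actual procedure.

```python
# Schematic sketch of the consent pathway described above. Names are
# illustrative only and do not come from any specific policy text.
from dataclasses import dataclass

@dataclass
class ProspectiveMinor:
    has_decision_making_capacity: bool  # judged by capacity criteria, not age alone
    minor_agrees: bool                  # the minor's own consent (if capable) or assent
    third_party_consents: bool          # authorized third party, e.g., parent/guardian

def may_participate(p: ProspectiveMinor) -> bool:
    """Return True if the consent conditions sketched above are met."""
    if p.has_decision_making_capacity:
        # A capable minor consents autonomously.
        return p.minor_agrees
    # Otherwise both third-party consent and the minor's assent are required.
    return p.third_party_consents and p.minor_agrees

print(may_participate(ProspectiveMinor(
    has_decision_making_capacity=False, minor_agrees=True, third_party_consents=True)))   # True
print(may_participate(ProspectiveMinor(
    has_decision_making_capacity=False, minor_agrees=True, third_party_consents=False)))  # False
```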

Numerous articles included in this review highlighted consent and assent as an ethical imperative when using digital data collection in research with minors [ 12 – 18 ]. Given the various geographical contexts of the research literature, the legal and ethical guidelines which determined the authors’ approaches to using digital data collection with minors varied accordingly. Parsons [ 17 ] drew attention to the use of digital technologies for data collection and the implications of this for participant recruitment and obtaining participant consent. Parsons [ 17 ] argued that digital technologies have the potential to support prospective participants’ autonomy, engagement, and decision making by: improving the accessibility of the research information; increasing motivation to take part in the study; and enhancing competency to inform decision-making. The author [ 17 ] further mentioned that digital technologies could be leveraged to make participant recruitment more inclusive, especially for minors with physical or learning disabilities. For example, using a laptop to present written text to prospective participants could give researchers the ability to tailor font size and color, background color, symbols, and computer-generated speech which can be paused, played, or slowed down, according to a minor’s needs [ 17 ]. Touch interfaces, like a tablet, were also a way to enhance recruitment and consent possibilities as they tended to be intuitive to users and did “not add unnecessary complexity to the learning process” [ 17 ].

Cowie and Khoo [ 12 ] approached issues of consent when using digital data collection in research with minors from a standpoint that recognizes minors as social actors and experts in their own lives. To this end, they informed prospective minor participants of the research aims and processes using multiple strategies to support minors’ decision to consent or dissent to participation in the research. They sent home a general newsletter about the study; created an interactive website about the study and research team for minors and their parents to view and discuss at home; and asked the minors’ teachers to explain the research aims in school [ 12 ]. Cowie and Khoo emphasized that once consent or assent is obtained from a minor participant in any study, researchers should treat it as provisional, that is to say, “ongoing and dependent on researcher-researched and inter-participant relationships…built upon sensitivity, reciprocal trust and collaboration” [ 12 ]. Treating consent as provisional and ongoing necessitates that researchers assess participants’ voluntariness at each point of contact and remind them of their right to withdraw their participation at any time [ 12 ].

Researchers [ 12 , 17 ] also advocated for the use of digital technologies to facilitate the consent process as they offer “possibilities for multimodal/multimedia communication that improves the accessibility of research information beyond the inclusion of different fonts, formatting styles and images” [ 12 ]. For example, the use of digital video clips to recruit, inform, and debrief minor participants in an interactive manner engaged participants more effectively than a paper printout [ 12 ]. When considering the integral role and ethical imperative of obtaining consent from minors, Cowie and Khoo emphasized that the “onus is on researchers to use appropriate methods to achieve…consent in a way that scaffolds children’s understanding and encourages and maintains their voluntary and positive participation” [ 12 ].

One scoping review conducted by Hokke et al. [ 19 ], which explored the ethical issues of digital data collection in research with minors (i.e., using the internet to recruit prospective participants for family and child research), also discussed the challenges of obtaining consent and assent. In their review, they concluded that minor consent and parental consent were more complex and ethically challenging when facilitated through online means rather than face-to-face interaction [ 19 ]. Many of the studies reviewed by the authors [ 19 ] noted that obtaining consent through online means posed a risk that minors would fraudulently complete their parents’ online consent form. To circumvent this risk, they [ 19 ] contacted prospective participants online, and then obtained verbal consent over the phone to assess parents’ and minors’ understanding of the research aims, procedures, and risks using back-questioning techniques.

Depending on the topic of interest, Hokke et al. [ 19 ] noted that waiving parental consent might in fact be a methodological and ethical necessity. For example, two US studies waived parental consent out of ethical necessity [ 20 , 21 ] to collect digital data from gay and bisexual minors as they were considered to be at risk if they had to disclose their sexual identity to their parents as part of the consent process. Further, where the law required parental consent for a minor’s participation in research, researchers identified this as a possible deterrent for minors’ participation as some minors may be reluctant to ask their parents for permission [ 19 ].

Data handling

One ethical imperative about the use of digital data collection with minors that was consistent throughout the reviewed literature was the importance of safeguarding data [ 8 ]. Safeguarding practices mentioned by researchers in the current review included data encryption, storage location, and secure server technology [ 14 , 15 , 22 – 24 ].
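As one concrete, hedged illustration of the first of these safeguards, the sketch below encrypts a collected record before it is written to storage using the third-party cryptography package. The library choice, field names, and file path are assumptions for illustration; key management, storage location, and server security remain separate concerns not covered here.

```python
# Minimal sketch: encrypting a collected record before it is written to disk,
# using the third-party `cryptography` package (pip install cryptography).
# Key management, storage location, and server hardening are separate concerns.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, held in a key-management service
cipher = Fernet(key)

record = b'{"participant_id": "P-017", "app_minutes": 42}'
token = cipher.encrypt(record)       # Fernet provides authenticated encryption

with open("record.enc", "wb") as fh:
    fh.write(token)

# Later, an authorized analyst with the key can recover the record.
assert cipher.decrypt(token) == record
```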

Some researchers [ 25 ] stressed that the ethical issues at stake when handling digitally derived data depend on whether the data are actively or passively collected. For Schwab-Reese et al. [ 25 ], active data collection methods closely align with traditional data collection methods as they consist of direct interaction with research participants, even if the interaction is facilitated through electronic means [ 25 ]. Active data collection methods require participants to actively engage in the data collection process, for example in direct conversation with the researcher or in responding to survey questions. A researcher may, for instance, ask a series of questions in real time through a social media app to which a participant can give an immediate response. Passive data collection methods, by contrast, align with secondary data analysis as they do not require direct interaction with participants but rather aggregate and analyze large sets of existing data [ 25 ]. For example, a researcher may monitor the number of hours a participant spends on a certain app, like Twitter, and therefore does not need to be in direct contact with the participant to collect those data in real time.

Since a researcher would have direct contact with participants when using active data collection methods, informed consent or assent processes follow suit as standard practice [ 25 ]; the participant would be made aware that whatever data they produced in that given research context would be given over to the researcher for analysis. Examples of active digital data collection methods include crowdsourcing (i.e., most commonly done through websites where an open call for participation is issued for users to complete a short survey) or online recruitment (i.e., most commonly done through websites or social media). Using active digital data collection methods in research with minors may not pose any additive ethical issues beyond those mentioned earlier when considering consent/assent processes as well as the degree to which the data are safeguarded through encryption, server technology, and storage location.

However, passive data collection methods proved to be more ethically challenging for researchers [ 25 ] when considering users’ privacy expectations about their digital data. Since passive data collection methods, according to Schwab-Reese et al. [ 25 ], are predicated on analyzing existing data, researchers may not necessarily have to seek consent or assent from users, which could be considered ethically problematic (e.g., does the user consider their digital data on a social media platform, like Twitter, private or public information?). Methods of passive digital data collection include internet search queries (e.g., Google Search Trends, which regularly updates a database of aggregated internet queries), forum postings (e.g., as found on a website like Reddit or a social media platform like Facebook), or social media user activity (e.g., analyzing Twitter users’ tweets, likes, retweets, use of certain hashtags, etc.).
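To make the distinction concrete, the sketch below contrasts the two modes with synthetic data: an active prompt delivered directly to a participant versus passive aggregation of pre-existing usage logs. The function names, log fields, and values are invented for illustration and are not drawn from any reviewed study’s pipeline.

```python
# Sketch contrasting active and passive collection as described above,
# using synthetic data rather than any study's actual pipeline.
from datetime import datetime

# Active: the participant responds directly to a researcher prompt.
def active_survey_response(prompt: str) -> str:
    return input(prompt)  # e.g., delivered via a social media app in real time

# Passive: aggregate pre-existing usage logs without contacting the participant.
usage_log = [
    {"app": "twitter", "start": datetime(2020, 3, 1, 21, 0), "minutes": 35},
    {"app": "twitter", "start": datetime(2020, 3, 2, 22, 15), "minutes": 50},
    {"app": "maps",    "start": datetime(2020, 3, 2, 8, 30),  "minutes": 10},
]

def passive_app_minutes(log, app_name: str) -> int:
    return sum(entry["minutes"] for entry in log if entry["app"] == app_name)

print(passive_app_minutes(usage_log, "twitter"))  # 85
```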

To address the ethical queries related to the use of passive digital data collection methods with minors, researchers consulted guidelines from the Association of Internet Researchers [ 26 ] that recommended consideration of the following: what constitutes ‘engaging with a human subject’; definitions of private and public space; and finally, tensions between regulatory and context specific ethical decision making [ 27 ].

Online forums and social media platforms offer vast research opportunities for publicly available and passively collected data. On the surface, research processes required for ethical research ‘engagement with a human subject’ would seemingly not apply to the collection of publicly available online data. However, Schwab-Reese et al. noted that while passive data collection studies typically dealt with “aggregated or de-identified data…recent research suggests de-identified datasets often contain sufficient personal information to potentially identify individuals” [ 25 ]. Current recommendations for the ethical practice of online and publicly available passive data collection may exempt researchers from the customary oversight of an institutional ethics review board or regulatory body. Instead, researchers are recommending the establishment of “an external advisory committee to critically process the potential harms, vulnerabilities, benefits, and so forth, even if the research does not directly engage with human subjects and thus, does not explicitly require institutional ethics review” [ 25 ]. They suggest this may be best practice moving forward in order to deal with research that digitally collects “person-based data without direct human contact” [ 25 ].
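The re-identification concern quoted above can be illustrated with a small, synthetic example: even after names are removed, a combination of quasi-identifiers may be unique to one person in a dataset. The sketch below assumes pandas is available and uses invented values; it simply counts how many records share each combination.

```python
# Sketch of the re-identification concern quoted above: even with names removed,
# combinations of quasi-identifiers can single out individuals. Synthetic data.
import pandas as pd

deidentified = pd.DataFrame({
    "age":      [14, 14, 15, 16, 16],
    "postcode": ["M5V", "M5V", "K1A", "M5V", "K1A"],
    "platform": ["facebook", "facebook", "twitter", "facebook", "twitter"],
})

quasi_identifiers = ["age", "postcode", "platform"]
group_sizes = deidentified.groupby(quasi_identifiers).size()

# Combinations that occur only once (k = 1) are at highest re-identification risk.
unique_combinations = group_sizes[group_sizes == 1]
print(unique_combinations)
```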

Ethical research practices pertaining to defining public and private space with respect to digital data collection methods should “carefully consider the expectations of the individuals who are creating the data” [ 25 ]. Unlike active digital data collection methods, where prospective participants would be informed as to how their digital data were to be used for research purposes, passive digital data collection methods may involve no such disclosure. For example, “the creator of a public blog or social media profile which is viewable by anyone may have a lower expectation of privacy than in a private forum that requires log-in by members” [ 25 ]. Therefore, if researchers seek to passively collect digital data from minor participants, a combined approach of both active and passive data collection methods may be appropriate to understand the privacy expectations of prospective participants (active) before using their preexisting data or monitoring their behavior (passive).

Lastly, Schwab-Reese et al. acknowledged that “regulations are often intended to encourage ethical research and practice, but when applied universally without consideration, regulations may inadvertently restrict important, necessary research” [ 25 ]. With the increase of digitally derived data collection, Schwab-Reese et al. concluded that “as technology-based research moves forward, it will be important to establish firm ethical boundaries for some clearly defined issues, while encouraging flexibility and situation-based ethical decision-making for ethically grey areas” [ 25 ]. Ethical decision making on a case-by-case basis appears to be the way forward when considering the multitude of grey areas that arise when involving minors in research that uses digital data collection, whether active or passive in nature.

Digital data collection was reported to be an efficient, low-cost, environmentally friendly, and convenient way to engage with a diverse range of participants across geographical boundaries [ 28 ]. However, challenges of digital data collection methods included technical aspects like data storage (servers) and security (encryption) measures, particularly as they related to the collection of personal health information [ 23 , 24 , 28 ]. Two studies mentioned a law or regulation with respect to the handling of personal health information collected through digital means [ 23 , 24 ]. One study provided an extensive discussion of the actions taken to comply with Ontario’s Personal Health Information Protection Act [ 29 ], which safeguards the collection and use of personal health information [ 23 ]. Martin-Ruiz et al. [ 24 ] reported on the use of a Data Protection Impact Assessment [ 30 ], a requirement under European data protection regulation for assessing a study’s potential risks to individual privacy [ 30 ]. In their report, the researchers addressed the six protection goals regarding the collection and use of participants’ personal data [ 24 ]: data availability, integrity, confidentiality, unlinkability, transparency, and intervenability [ 30 ]. As exemplified in both of these studies, addressing the ethical issues associated with handling digital data took both regulatory requirements and context-specific (personal health information) ethical decision making into consideration.

Minors’ data rights

The extent to which minors are included in discussions regarding the dissemination of their digital data for research purposes, or the sharing of digital data generated through research with adult actors such as their parents or guardians, was frequently flagged as an ethical issue by many authors [ 12 , 13 , 16 , 18 , 22 , 24 , 31 ]. The contemporary issue of minors’ (digital) data rights is part of a much larger rights movement influenced by the establishment of the United Nations Convention on the Rights of the Child in 1989 [ 13 , 22 ]. Historically, minors have been framed as a marginal group, with little, if any, opportunity to weigh in on matters (like research) that affect their social worlds [ 22 ]. The establishment of the convention signaled “the deconstruction of the protectionist paradigm of childhood [and] assigned [minors] the right to be co-producers of science and involved in all stages of research on and with them” [ 22 ] and consequently led to the emergence of childhood studies as a discipline. Opportunities for minors to be consulted and included in decision-making processes are recognized by childhood studies scholars as a way to advance minors’ agency and autonomy within the research landscape [ 13 ].

Another ethical consideration in relation to minors’ data rights included privacy expectations of their parents/guardians [ 18 ]. Cultural and legal practices tend to normalize parent/guardian access to minors’ data held within government, healthcare or educational institutions; as such, parents/guardians may expect to have access to minors’ data collected for research purposes [ 18 ]. Drawing attention to minors’ data rights acknowledges their right to privacy over their personal data, both in an online and offline context, along with the need to obtain their consent in order to share their data with third party adult actors like their parents/guardians [ 32 ].

Observing behaviors that may result in risk of harm to participants or others

Three studies mentioned the ethical challenges of reporting risky behaviors, or potential behaviors that could result in risk, when using digital data collection in research with minors [ 15 , 33 , 34 ]. In general, conducting research with human participants “includes the risk of observing or being informed about behavior that is illegal, amoral, immoral, or otherwise illicit” [ 33 ], for which researchers have to determine a course of action. Using digital data collection methods could potentially involve collecting information about minors’ online activity, for instance on social media, which may fall outside of a research study’s objectives but nonetheless be considered illegal, amoral, immoral, or otherwise illicit. Under ethical guidelines like the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans, such instances are known as material incidental findings, as “they are reasonably determined to have significant welfare implications for the participant or prospective participant” [ 8 ].

Researchers [ 15 , 33 ] highlighted the ongoing tension between existing laws and research ethics when conducting social media-related research with minors. One study recognized that observing and responding to behaviors through social media which may harm participants or others poses complex challenges for researchers, as it may be hard to decipher the parties involved (i.e., perpetrators, victims), the nature of the activity, or the proper contact to report the activity to [ 33 ]. For example, if a minor participant disclosed to a researcher that they illegally downloaded music through a social media platform, the researcher would need to decide whether to notify the participant and report the illegal activity, even if it meant jeopardizing the participant’s trust [ 33 ]. Similarly, researchers utilized the social media platform Facebook to contact underage women in Australia to explore the social influences of drinking alcohol [ 15 ]. Ethical challenges emerged for researchers regarding their responsibility to report incidents of illegal, amoral, immoral, or otherwise illicit behavior (i.e., photographs of inappropriate behaviors such as nudity, illegal activity) captured via online data collection against the promise of information confidentiality. Potentially reportable data was captured despite researchers having cautioned participants that “all information provided to the researcher would be treated in the strictest confidence [unless] the researcher was legally obliged to disclose information related to illegal activity if requested by relevant authorities” [ 15 ].

The ethical tension for researchers arises between the requirement to disclose reportable incidents and the ethical duty of participant confidentiality, whereby information obtained from a participant will remain confidential within and beyond a study [ 8 ]. While a researcher’s promise of confidentiality is central to developing and maintaining trusting relationships with participants, it “must, at times, be balanced against competing ethical considerations or legal or professional requirements that call for disclosure of information obtained or created in a research context” [ 8 ]. Depending on the topic and objective of a study, “researchers are expected to be aware of ethical codes…or laws…that may require disclosure of information they obtain in a research context” [ 8 ]. Breaching confidentiality by reporting confidential information risks losing the trust built between researcher and participant, but in some cases it may be necessary in order to serve the greater good by protecting “the health, life or safety of a participant or a third party, a community, or the general population” [ 8 ].

Digitally collected data highlights unique tensions for researchers among maintaining the integrity of the research purpose, honoring the confidentiality of participant information, and disclosing reportable events. For example, researchers [ 34 ] had to weigh the costs of constraining participants’ photographic activity against their study’s goal of equity and empowerment. They concluded that it was important “to allow youth to depict the reality of the challenges that they or their community faced without necessary constraint” [ 34 ]. Navigating the risk of harm to participants and their community proved to be complex, as the authors realized minors often took photos on their own mobile phones rather than the digital cameras provided to them and uploaded the photos to their personal social media accounts to be shared on those platforms [ 34 ].

Private versus public conceptualizations of social media

A large part of the reviewed literature discussed the ongoing debate over whether social media is considered private or public domain when using digital data collection in research with minors [ 13 , 15 , 19 , 25 , 33 – 37 ]. Many articles drew attention to the shifting nature of privacy within digital spaces, since “any information posted on the Internet [technically] enters a virtual public space” [ 15 ] even if it is posted with the intention of remaining private to certain audiences. Understandings of privacy depend on the context in which it is invoked as well as a user’s expectations within that context. Since there is no clear distinction between what counts as public and private domain on the internet, and on social media more specifically, maintaining user privacy and limiting potential harm surfaced as key ethical challenges [ 15 , 33 , 35 , 36 ].

When using social media for research purposes, one study recommended that researchers educate their participants about, and ensure they understand, the Terms of Service associated with the social media site being used for the research project, as these terms may lead to instances of public disclosure [ 33 ]. Another study stated that it is the researcher’s responsibility to also learn the nuances of the site’s privacy settings in order to inform their participants of the ways in which their information will be handled and, to a certain extent, jeopardized, since privacy settings are not always fail-safe [ 15 ]. It was also mentioned that researchers should consider the cultural norms of the participants creating the data as well as the norms surrounding the social media platform being utilized to gauge what privacy means within these specific contexts [ 25 ].

When considering the cultural norms of the participants involved in research with social media, two studies recognized that “digital natives” [ 19 ], in other words, minors who are born into digital environments and learn how to use digital technologies from a young age, may conceptualize privacy differently than adults. For example, one article queried whether minors disclose private information on social media “without understanding or considering the permanence or far-reaching nature of online content, and without intending for their information to be used by others” [ 19 ]. Another mentioned that minors may disclose their private information on social media out of “naivety or ignorance” [ 37 ] rather than an intended disregard for their privacy and personal information.

Gatekeeping

By far the most acknowledged ethical issue pertaining to the use of digital data collection in research with minors was gatekeeping by a parent/caregiver [ 12 , 17 , 19 , 37 ], relevant stakeholders like medical professionals or educators [ 14 , 18 , 31 ], and research ethics boards [ 23 , 25 , 33 , 36 , 38 ]. While the inherent power imbalance between minors and adult researchers within the research context necessitates gatekeeping by research ethics boards [ 12 , 14 ], it is important to note that “the power [parents have] as gatekeepers in the processes of…research participation…should not be under-estimated” [ 17 ]. Parents/guardians ultimately provide access to minor participants, and in some cases, consent for their participation.

Conducting research with minors, irrespective of data collection method, must account for the triadic nature of the prospective research relationship, which consists of the researcher, the minor, and the gatekeeper. Ultimately, it is the gatekeeper who grants a researcher access to a minor’s world [ 22 ]. Depending on the gatekeeper’s relationship to the minor, whether they are a parent/guardian, educator, or research ethics board (REB) member, a minor’s prospective participation in a study will hinge on whether it is found to be in their best interest by these gatekeepers, and whether these gatekeepers are willing to cooperate with a researcher to make the study happen. When it comes to making decisions within the research context between adults, researchers, and minors, one author suggests that research “decision-making needs to be understood as part of a discussion or dialogue between young people, parents/caregivers and the researchers” [ 17 ].

The reviewed literature identified numerous ethical issues related to conducting digital data collection in research with minors, including: consent, data handling, minors’ data rights, observing behaviors that may result in risk of harm to participants or others, private versus public conceptualizations of social media, and gatekeeping. While these ethical issues are pertinent to any discussion of using digital data collection in research, whether they are uniquely challenging when minors are the population of interest requires deeper consideration.

Although consent arose as a common theme among many of the studies, it is not an ethical concern unique to research with minors. Consent is, at the very least, a minimum requirement for research participation. Likewise, seeking assent is not an ethical practice unique to research with minors, as its tenets of accountability, reciprocal trust, and consultation should be considered an ethical necessity for all populations involved in research. When using digital data collection in research with minors, specifically within the context of social media, researchers [ 33 ] highlighted the importance of maintaining an open dialogue, or as they called it, a dialogic approach, throughout the research process. Following a dialogic approach with minors when using social media as an avenue for digital data collection allows researchers to assess minors’ feelings about their posts to social media and their understandings of how published research will “transform…their private information and interactions into public data” [ 33 ]. Maintaining a continual dialogue with minor participants reaffirms the notion of provisional consent and that minors’ participation and data can be withdrawn at any time upon request.

While data handling and ownership are not unique ethical challenges when conducting research, they do warrant greater attention in a digital context. Since digital data encryption, secure storage location, and access are all ethical challenges which equally apply to research with minor and adult populations, the main ethical concern with data handling in this review was the extent to which the digital environment creates concomitant opportunities for data breaches. As noted by some researchers, “the very nature of the internet introduces security and privacy issues, including potential privacy breaches through hacking and data corruption during transfer” [ 23 ]. Given that security risks within a digital context are ever-present, a top priority for researchers using digital data collection methods will be ensuring, as far as possible, that no manipulation (in other words, hacking) occurs during the transmission, encryption, and storage of participants’ digital data [ 24 ].
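One standard-library way to detect the kind of manipulation in transit described above is to verify an HMAC tag over the payload. The sketch below is illustrative only, with a placeholder shared key and field names; it says nothing about how the reviewed studies actually protected their transfers.

```python
# Minimal standard-library sketch: detecting manipulation of a data payload in
# transit by verifying an HMAC tag computed with a shared secret key.
import hmac
import hashlib

secret_key = b"shared-secret-held-by-both-parties"  # placeholder for illustration
payload = b'{"participant_id": "P-017", "sleep_hours": 6.5}'

tag = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()  # sent alongside payload

def verify(received_payload: bytes, received_tag: str) -> bool:
    expected = hmac.new(secret_key, received_payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_tag)

print(verify(payload, tag))                 # True: intact
print(verify(payload + b" tampered", tag))  # False: manipulation detected
```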

Observing behaviors which assume risk, or may result in risk to participants or others, is a conventional part of conducting research with human participants. While the reviewed literature acknowledged the ethical complexity of navigating such reporting in relation to participants’ social media data, deciphering what the activity is, what parties are involved, and whether there is an authority that the activity is required to be reported to, are ethical issues which can arise in any study with any population. As mentioned by some researchers, employing digital data collection methods warrants “an ongoing dialogue amongst researchers and ethics committees and between researchers and participants around…dilemmas, as well as processes to resolve them” [ 33 ]. They noted that the “constant evolution of technologies (such as social media and search capabilities) and social practices to which they are put…means that researchers and ethics committees are not necessarily equipped to understand the consequence or implications of their research practice” [ 33 ]. Therefore, within each individual research context that employs social media as a means to collect participants’ digital data, it is important for researchers to understand the social media practices of their population and to approach dilemmas by drawing on the perspectives of all those involved [ 33 ].

Whether content shared through social media falls within the public or private domain is conceptualized in varied ways, and this is an important ethical issue when examining digital data collection methods; however, it is not unique to conducting research with minors. Although some articles claimed that digital natives may be naive or ignorant of social media privacy settings and the implications of sharing private information within digital spaces, adult users face similar struggles. With the rapid development, uptake, and variety of social media platforms across the globe, the nuances of handling one’s personal information online affect users of all ages, backgrounds, and digital literacy levels; this is especially so given the lengthy and complex jargon of Terms of Service agreements [ 33 ]. Researchers will not necessarily face more struggles when conducting research with minors using digital data collection methods, such as social media, since users of all ages may not understand the intricacies and shifting nature of online contexts to the same degree [ 39 ].

Managing minors’ data rights

Although minors’ data rights seem to point toward a unique ethical issue when conducting research with minors using digital data collection, many of the issues raised can be extended to other populations. For instance, weighing in on the treatment and dissemination of personal data for research purposes can advance the agency and autonomy of participants of any age. This is especially so given that the figure of the child tends to be positioned as “exceptional…rather than part of the wider frame of rights and the digital” [ 32 ]. While there may not be unique ethical concerns, in that other populations, such as older, poorer, or disabled groups, also have their best interests dictated by others, one might argue that where minors might differ is in their potential to be involved with research decision-making [ 32 ].

Addressing gatekeeping in research involving minors and digital technology

Institutional review boards, or research ethics boards (REBs), dictate the scope of any research project for they are most concerned with risk mitigation [ 18 ]. Although REBs seek to reduce potential harm to participants while preserving the potential benefits of a research endeavor, they “can be a problematic gatekeeper for researchers, especially for those who are seeking to conduct research in new and contentious areas like online spaces” [ 33 ]. Since there is an additive degree of uncertainty that invariably exists when considering research that involves minors as well as digital technology, REBs themselves “may not even be equipped to best guide ethical practices concerning these new areas of inquiry” [ 33 ].

Gatekeeping is a unique ethical issue concerning research with minors that involves digital data collection. Considering that minors’ use of digital technology in general raises ambivalence among adults, it is no surprise that minors’ participation in research with digital technologies faces hypervigilant gatekeeping. As one author writes, research “decision-making needs to be understood as part of a discussion or dialogue between young people, parents/caregivers and the researchers” [ 17 ]. Perhaps the ethical issue at stake is not the population or topic of interest per se, but rather, the way in which gatekeeping may impede research efforts that can provide opportunities for minors to inform and shape understandings of our uncharted digital environments.

Limitations

This review has several limitations. First, although the search strategy was intended to be inclusive in its terminology, there may be relevant articles that were not captured. For example, articles may have described the context, subject, and population of interest without using ‘ethic’ or ‘moral’ in their title or abstract. This limitation also extends to the hand-searched reference lists, as only titles were screened for inclusion. Second, the search strategy was limited to articles written in English, so relevant work written in other languages that may have contributed to this ethical discussion was not included. Third, the search strategy did not focus on one type of digital data to be collected (e.g., health data, screen-time data) which, if refined, could have identified further ethical issues or gaps in the literature. However, this was intentionally left outside of the scoping review search strategy as we were primarily concerned with the ethical issues of using digital data collection in research with minors, rather than the type of data being collected through digital means.

As indicated at the outset of this review, our intention was to explore existing literature to understand and anticipate the ethical issues associated with collecting digitally derived research data with minors in order to forward any possible resolutions based on the reviewed literature. The reviewed literature indicated that there was no difference in ethical issues when collecting digitally derived research data with minors in comparison to other populations except for gatekeeping. Gatekeeping is a unique ethical issue when collecting digitally derived research data with minors given that it is both a necessary safety measure to ensure minors are not taken advantage of within the research context and a potential barrier to minors’ participation since digital technology is a contentious area of research. For our purposes, this ethical conundrum begets the following question: if gatekeeping is a necessary barrier to minors’ participation in research which specifically involves the collection of digitally derived research data, how do we resolve this as researchers based on the reviewed literature? Our resolution to this conundrum is to suggest that researchers co-produce ethical practice with minors.

At various points throughout the reviewed literature, it was continually suggested that any ethical issue associated with digital data collection in research with minors may be best addressed when minors are part of the research conversation and decision-making processes [ 12 , 13 , 16 – 18 , 22 , 24 , 31 , 33 ]. In this sense, co-producing ethical practice with minors is the most respectable resolution to approaching ethical dilemmas in an area of research where the technology itself, along with accompanying social practices, is constantly evolving. Co-production of ethical practice in research which collects digitally derived research data from minors could be supported by implementing a Child and Youth Advisory Committee (CYAC) [ 37 ]. CYACs seek to balance children’s protection with support for their participation in research [ 40 ]. In particular, CYACs have been implemented in research with minors which addressed similarly contentious topics including cyber safety [ 41 ], hazardous agricultural labor [ 42 ], self-advocacy for pediatric patients with chronic illness [ 43 ], and child rights [ 40 ].

The problem when using digital data collection in research with minors is not necessarily the minors, nor the digital technology, but the uncertainty surrounding it. Conducting research with minors, along with digital technology, compounds uncertainty and increases ethical scrutiny. Uncertainty should not lead to preclusion, but rather, to co-production of ethical practice between researchers and minors. Co-production is risk mitigation; it is not an antidote to risk but an approach to working in tandem with minors to foster best ethical practice when using digital means to collect their data within a research context.

Supporting information

https://doi.org/10.1371/journal.pone.0237875.s001

References

5. WMA Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects. World Medical Association. 2018 July. Available from: https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/

6. International Ethical Guidelines for Health-related Research Involving Humans, Fourth Edition. Geneva: Council for International Organizations of Medical Sciences (CIOMS); 2016. Available from: https://cioms.ch/wp-content/uploads/2017/01/WEB-CIOMS-EthicalGuidelines.pdf

7. Mapping minimum age requirements concerning the rights of children in the EU. European Union Agency for Fundamental Rights. 2017 Nov. Available from: https://fra.europa.eu/en/publication/2017/mapping-minimum-age-requirements/age-majority

8. Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, and Social Sciences and Humanities Research Council. Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans. 2018 Dec. Catalogue No: RR4-2/2019E-PDF. ISBN: 978-0-660-29942-6.
29. Personal Health Information Protection Act, 2004, S.O. 2004, c. 3, Sched. A. (March 25, 2020). Available from: https://www.ontario.ca/laws/statute/04p03

30. Data Protection Impact Assessment. (March 12, 2019). Available from: https://edpb.europa.eu/our-work-tools/our-documents/topic/data-protection-impact-assessment-dpia_en


National Academies of Sciences, Engineering, and Medicine; Health and Medicine Division; Board on Health Sciences Policy; Beachy SH, Wizemann T, Choi K, et al., editors. An Examination of Emerging Bioethical Issues in Biomedical Research: Proceedings of a Workshop. Washington (DC): National Academies Press (US); 2020 Jun 19.


2 Ethically Leveraging Digital Technology for Health

Highlights of key points made by individual speakers.

  • Digital technologies are being increasingly used in self-care, clinical care, and biomedical research, and it is important that developers consider ethical components in the design process. Potential risks such as exposure of private information will likely need to be addressed by both law and better system architecture. (Estrin)
  • There is a wide range of potential ethical risks associated with the use of digital technologies, including privacy exposure and re-identification of anonymized data; the use of one's data for purposes beyond the original intent without one's knowledge or consent, including selling to commercial entities; the discriminatory use of shared data; the collection and use of poor quality data; and inadvertent inclusion in research by association. (Mello)
  • Many of the ethical concerns associated with emerging digital technologies cannot be adequately addressed within the existing regulatory system and should take into account different views on data privacy and intergenerational shifts in privacy perceptions. (Mello)
  • Research is needed to explore how research participant literacy can be improved so that participants have a better understanding of how research is conducted; how data are collected, stored, and shared; and how a technology works. (Nebeker, Ossorio)
  • Machine learning algorithms can be inherently biased as a result of inadequate data, asking bad questions, a lack of robustness to dataset shift, dataset shift due to evolving health care practice, model blind spots, and human errors in design. Metrics and tests should be developed to measure whether an algorithm is biased. (Saria)
  • Safe and reliable machine learning in health care involves understanding how artificial intelligence tools work, being able to determine if they are working, and ensuring they continue to work as expected. (Saria)
  • Collaboration among subject-matter experts and algorithm developers is essential for the development and assessment of safe, reliable, useful tools, and standards are needed to help ensure responsible implementation in health care practice. (Ossorio)
  • Data scientists and digital technology developers operate under a very different set of cultural norms, ethical commitments, and incentive structures than those of biomedical research and health care practice. (Estrin, Mello, Ossorio)
  • Machine learning algorithms for use in health care should be held to rigorous standards, similar to those for the development of drugs. (Ossorio, Saria)

The use of digital health technologies, artificial intelligence (AI), and machine learning in biomedical research and clinical care was discussed during the first two panel sessions. A range of ethical concerns can emerge in the development and implementation of new science and technologies, said Bernard Lo of The Greenwall Foundation and moderator of the sessions. Deborah Estrin, an associate dean and the Robert V. Tishman '37 Professor at Cornell NYC Tech, provided an overview of the digital health technology landscape, and Michelle Mello, a professor of law and medicine at Stanford University, discussed ethical issues associated with emerging digital health technologies. Suchi Saria, the John C. Malone Assistant Professor in the Department of Computer Science and the Department of Health Policy and Management at Johns Hopkins University, reviewed the state of AI and machine learning in biomedical research, and Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin, discussed ethical issues associated with the use of machine learning, including deep learning neural networks, in health care.

DEVELOPING, TESTING, AND INTEGRATING DIGITAL TECHNOLOGIES INTO RESEARCH AND CLINICAL CARE

Overview of the Digital Health Technology Landscape

Estrin said current and emerging digital technologies are increasingly being used in self-care, clinical care, and biomedical research across four main categories: wearables, mobile applications (apps), conversational agents, and digital biomarkers. Moreover, technologies such as mobile phones have been used to support care delivery for more than a decade (e.g., by community health workers in resource-limited settings).

Wearables for Biometrics and Behavior

Wearables are mobile devices that measure and track biometric data such as heart rate, activity, temperature, or stages of sleep, Estrin said. Some examples of wearables include activity and sleep trackers and smart watches by Fitbit, Garmin, Apple, and Oura, to name a few. She noted that the availability and usability of wearables have increased dramatically since the early days of actigraphy (noninvasive monitoring of cycles of movement and rest). Even if current wearables do not meet clinical standards, she said, they can track trends; most wearables are used in association with a companion mobile app that provides the wearer access to data summaries.

The increasing ability of machine learning algorithms to interpret the data collected by wearables is enhancing the utility of those data for individuals in self-care decision making as well as for use in guiding clinical care and informing research. For example, Estrin suggested, a wearable might help an individual better understand how exercise, diet, and alcohol consumption contribute to his or her poor sleep patterns; the clinician might use the data to evaluate the effectiveness of interventions to reduce the impacts of poor sleep quality on cognition or metabolism; and the data can help inform research on interventions to improve sleep quality.
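As a hedged illustration of this kind of interpretation, the sketch below fits a simple linear model relating daily behaviors logged by a wearable and its companion app to a sleep score. The data are synthetic, scikit-learn is assumed to be available, and nothing here reflects a validated or clinical-grade model.

```python
# Sketch (synthetic data) of the kind of interpretation described above:
# relating behaviors logged by a wearable/companion app to sleep quality.
# Assumes scikit-learn is installed; illustrative only, not clinical-grade.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_days = 60
exercise_min = rng.uniform(0, 60, n_days)
alcohol_units = rng.integers(0, 4, n_days)
caffeine_mg = rng.uniform(0, 300, n_days)

# Synthetic "sleep score": better with exercise, worse with alcohol and caffeine.
sleep_score = (70 + 0.2 * exercise_min - 5 * alcohol_units - 0.03 * caffeine_mg
               + rng.normal(0, 3, n_days))

X = np.column_stack([exercise_min, alcohol_units, caffeine_mg])
model = LinearRegression().fit(X, sleep_score)

for name, coef in zip(["exercise_min", "alcohol_units", "caffeine_mg"], model.coef_):
    print(f"{name}: {coef:+.2f} points per unit")
```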

Mobile Apps

There are also stand-alone mobile apps that are used independently of a wearable digital device. These mobile apps are focused on an interaction with the patient for self-care, for clinical engagement (e.g., to encourage adherence to a treatment plan), or for research purposes. Estrin briefly described four categories:

  • Symptom Trackers—This category of mobile app allows individuals to enter symptoms and see how they change over time. One example, Estrin said, is the Memorial Sloan Kettering Cancer Center symptom tracker. Using the mobile app, patients recovering from surgery and undergoing treatment can track their symptoms and plot their data against expected results. This interactive approach allows patients to see their progress and better evaluate if they are progressing sufficiently to avoid an unnecessary emergency room visit.
  • Access to Clinical Health Records—Mobile apps are also used to provide individuals with access to their clinical health records. Estrin said that Apple HealthKit and Android CommonHealth are developer platforms that take advantage of data interoperability standards, such as Fast Healthcare Interoperability Resources, to provide access to electronic health records (EHRs). App developers can use these platforms to create apps that allow users to access and share their clinical health information securely. (A minimal sketch of a standards-based record request follows this list.)
  • Health Behavior Apps—Another category is health behavior apps that provide coaching and guidance for individuals on choosing healthy behaviors. Examples include diabetes prevention programs such as Omada, the Noom app for weight loss, and the Livongo apps, which support health goals across several conditions. Some health behavior apps have been shown to have a positive effect on behavior, Estrin said, but many others have not been vetted or tested.
  • Behavioral Health Apps—The final category, which involves behavioral health, is different from health behavior apps because of the focus on mental health support, Estrin said. PTSD Coach, developed by the U.S. Department of Veterans Affairs, was an early example of a behavioral health app, which provides “in-the-moment support” based on clinical guidelines. Other examples of behavioral health apps include Talkspace, LARK, and HealthRhythms.
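The sketch below illustrates the standards-based record request referenced in the Access to Clinical Health Records item above, using the FHIR RESTful search pattern. The base URL and patient ID are placeholders for a hypothetical sandbox server, the requests library is assumed, and a real app would also handle SMART on FHIR/OAuth2 authorization; this is not a description of how HealthKit or CommonHealth are implemented.

```python
# Minimal sketch of the FHIR RESTful search pattern mentioned in the list above.
# The base URL and patient ID are placeholders for a hypothetical sandbox server.
import requests

FHIR_BASE = "https://example.org/fhir"   # hypothetical sandbox endpoint
patient_id = "example-patient-id"        # placeholder

resp = requests.get(
    f"{FHIR_BASE}/Observation",
    params={"patient": patient_id, "category": "vital-signs", "_count": 10},
    headers={"Accept": "application/fhir+json"},
    timeout=30,
)
bundle = resp.json()  # a FHIR Bundle of Observation resources
for entry in bundle.get("entry", []):
    obs = entry["resource"]
    print(obs.get("code", {}).get("text"), obs.get("valueQuantity"))
```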

Conversational Agents

Conversational agents, such as chatbots and voice agents, many of which can be accessed via digital assistants such as Google Home and Alexa, are programmed to hold a conversation in a manner similar to a human. Examples of emerging health-specific conversational agents include Sugarpod for diabetes, Kids MD by Boston Children's Hospital, and other chatbots for use by patients, nurses, and home health aides. Some conversational agents are entirely automated, and others provide details to a human provider or coach, Estrin said, but starting with an automated interaction to address more routine concerns allows providers to better meet and manage client needs.
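As a toy sketch of the routing idea Estrin describes (automated handling of routine concerns, escalation of everything else to a human), the snippet below uses invented keyword rules; it is purely illustrative and does not correspond to any of the products named above.

```python
# Toy sketch of the routing idea described above: handle routine concerns with
# an automated response and escalate everything else to a human provider.
# The keyword rules are invented for illustration only.
ROUTINE_RESPONSES = {
    "refill": "Your refill request has been forwarded to the pharmacy.",
    "appointment": "You can book or change appointments in the patient portal.",
}
ESCALATION_KEYWORDS = {"chest pain", "suicide", "bleeding"}

def respond(message: str) -> str:
    text = message.lower()
    if any(keyword in text for keyword in ESCALATION_KEYWORDS):
        return "ESCALATE: routing this conversation to an on-call clinician."
    for keyword, reply in ROUTINE_RESPONSES.items():
        if keyword in text:
            return reply
    return "ESCALATE: no routine match, handing off to a human coach."

print(respond("Can I get a refill of my prescription?"))
print(respond("I have chest pain and feel dizzy."))
```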

Digital Biomarkers

Digital traces (i.e., records of online activity) are also being explored as digital biomarkers, Estrin said. For example, she said, researchers have collected data for mood analysis from social media interactions 1 and others have used individual Internet search data as indicators of health status. 2 Another example is an institutional review board (IRB)-approved retrospective study by Northwell Health of the Internet searches done by individuals prior to their first hospital admission for a psychotic episode. 3 Individuals in the study consented to sharing their Google search history (via Google's Takeout data download service), which is used by researchers to look for temporal patterns of online searching, location data, and other online activity that are associated with serious mental illness. Such research seeks to develop specific models for how such data can be used to inform care at the population and individual levels.
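To illustrate one simple temporal feature that such consented search-history data could yield, the sketch below computes the share of searches occurring late at night from a handful of synthetic timestamps; it is not the cited study's method, and the threshold hours are arbitrary assumptions.

```python
# Sketch of one kind of temporal feature such research might derive from a
# consented search-history export: the share of late-night searches.
# Timestamps are synthetic; this is not the cited study's actual method.
from datetime import datetime

search_timestamps = [
    datetime(2020, 2, 3, 2, 14), datetime(2020, 2, 3, 15, 40),
    datetime(2020, 2, 4, 1, 5),  datetime(2020, 2, 5, 23, 55),
    datetime(2020, 2, 6, 10, 20),
]

def late_night_share(timestamps, start_hour=0, end_hour=5) -> float:
    late = sum(1 for t in timestamps if start_hour <= t.hour < end_hour)
    return late / len(timestamps)

print(f"Late-night share: {late_night_share(search_timestamps):.0%}")  # 40%
```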

Risks and Concerns Related to Digital Technologies

Potential ethical risks and concerns associated with the use of digital technologies in research and clinical care include privacy exposure when these technologies are used for health-related surveillance, questions about how data are used, and transparency around AI-assisted agents, Estrin said. How the data should be controlled depends on the context of use, Estrin explained, and she said that laws and system architectures addressing how data are shared for surveillance need to take the context of use into account. Contextual integrity allows for a more nuanced view of privacy issues than traditional dichotomies ( Nissenbaum, 2018 ). It exposes the risks associated with how an individual's data flow and how they are used. The use of unquestioned consumer-app terms of service for health-related apps might allow the app provider to sell a user's health data. There are some concerns that health data should be protected differently in order to prevent its use in discriminatory ways related to insurance coverage, employment, issuing credit, or dating, for example. This may require legal, as well as technical, protections, Estrin said. Another concern is transparency with regard to when an individual using a digital health technology is interacting with an AI agent (i.e., a “softbot” or software robot) or a human agent. A question for consideration is whether or when it is the right of patients to know if they are interacting with a person or an AI-mediated agent, she said.

ETHICAL ISSUES FOR EMERGING DIGITAL TECHNOLOGIES

The increasing use of individuals' data traces in novel ways for both research and clinical care challenges the norms of human subjects research ethics and existing privacy laws, Mello said. Existing research ethics concerns have been heightened by the advent of new digital technologies, she said, and new concerns have also emerged as the use of digital technologies has expanded (summarized in Box 2-1 ).

Box 2-1. Existing and Emerging Bioethical Concerns Associated with Digital Health Technologies

Existing Concerns Compounded by Digital Technologies

Purpose and Repurpose

Existing concerns about purpose and repurpose center on the informed consent process and the extent to which data and biospecimens generated for one purpose may be used for other purposes without securing fresh consent. These concerns now encompass data generated by digital technologies, including whether such data can be shared or sold for research purposes. The digital data of interest for research might include data from user interactions with apps and websites and clinical data generated by digital technologies in the care setting (e.g., ambient listening devices such as surgical black boxes). 4 Data mining raises additional concerns since the research is often not hypothesis-driven but exploratory. It is also possible that unrelated datasets might be linked for research or clinical purposes. As highlighted by a legal complaint filed by a patient in 2019 against Google and The University of Chicago, EHR data collected for clinical purposes may be transferred to private companies for the purpose of developing new commercial products, 5 and even with direct identifiers removed they are potentially re-identifiable through linkages to other data (e.g., linking smartphone geolocation data to the EHR data could reveal which clinics a patient has visited, when, and for what purpose) ( Cohen and Mello, 2019 ). Once a patient is re-identified, Mello said, the EHR data could potentially be linked to other data such as social media and online browsing activity.
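The linkage risk described above can be made concrete with a toy example: joining nominally de-identified visit records to an identified geolocation trace on shared place and date re-attaches identities. The sketch below uses invented data and assumes pandas; it illustrates the mechanism only, not any actual dataset or the litigated conduct.

```python
# Sketch (synthetic data) of the linkage risk described above: joining
# "de-identified" clinic visit records with identified geolocation traces
# on shared place and date can re-identify the visit records.
import pandas as pd

deidentified_visits = pd.DataFrame({
    "visit_date": ["2019-05-02", "2019-05-02", "2019-05-03"],
    "clinic":     ["oncology",   "cardiology", "oncology"],
    "diagnosis":  ["C50",        "I25",        "C61"],
})

geolocation_trace = pd.DataFrame({
    "person":     ["A. Patient", "B. Patient"],
    "visit_date": ["2019-05-02", "2019-05-03"],
    "clinic":     ["oncology",   "oncology"],
})

linked = geolocation_trace.merge(deidentified_visits, on=["visit_date", "clinic"])
print(linked)  # names now attached to diagnoses that were nominally de-identified
```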

The three main solutions that have generally been used to address concerns about purpose and repurpose have been de-identification, waiver of consent, and blanket consent, Mello said, adding that there are issues with each approach. De-identification is “infinitely harder” for digital data than for tissue specimens. Consent waivers, granted when an IRB determines that the research meets certain requirements and therefore some or all consent elements can be waived, are a practical solution in the sense that securing fresh consent is often impracticable, Mello said, but they are not a principled solution to the problem of informed consent for repurposed digital information. 6 Blanket consent might be a more transparent solution, she continued, but it arguably is not meaningful consent if researchers cannot explain to participants the potential range of uses of their data and the potential for future data linkages. The field needs to think deliberately about the issue of informed consent for repurposed digital information, Mello said, and there may be real limits to using transparency as a strategy given the challenges with adequately describing what participants are consenting to and the lack of choice that many users of digital technologies have about accepting the terms of use.

Context Transgressions

Individual expectations of privacy vary depending on the context, Mello said, reiterating the point made by Estrin. Expectations are influenced by the relationship one has with whoever is receiving one's information and by how one expects that information to be used ( Nissenbaum, 2011 ; Sharon, 2016 ; Solove, 2020 ). Furthermore, she said, empirical research has found that willingness to provide one's information varies significantly depending on whether that information is expected to be used for noncommercial or commercial purposes. For example, a person may feel very differently about one of their doctors sharing highly sensitive clinical information with other health care providers (e.g., to coordinate care) than about a social media platform (e.g., Facebook) sharing much less sensitive information about them with other entities for commercial purposes.

The problem of transgressions of context is related to the problem of purpose and repurpose, but it is distinct, Mello said. Historical examples of context transgressions include the case of Henrietta Lacks 7 and the case of Moore v. Regents of University of California , 8 both of which involved an individual's property rights, or lack thereof, in relation to commercial products derived from the person's biospecimens. 9 For rapidly exchanged digital information, the potential for transgressions of context is very high, Mello said—in particular, via the shift in context from noncommercial to commercial uses of data. A current example is health care organizations transferring large volumes of EHR data to technology companies for use in developing commercial products and services.

Addressing potential context transgressions has generally involved clearly disclosing that individuals do not have any rights to a share of the profits from technologies developed from their biospecimens, Mello said, or removing any information identifying the individual, or both. Alternatively, commercial and noncommercial context transgressions could be avoided by simply not sharing information, but Mello said this strategy is neither feasible nor desirable because needed products and services stem from data sharing. Another approach could be to eliminate the expectation of privacy altogether and make individuals aware that they are relinquishing control of their information in exchange for a variety of current and future benefits (e.g., free and low-cost services, development of precision medicine technologies). This approach conflicts with current privacy laws and human subjects protections, she said, and would shift the data sharing model from one of individual control over data to one of group deliberation and benefit sharing.

Corporate Involvement

For-profit corporations, including pharmaceutical companies and others, have long been involved in biomedical research, Mello said, and concern about the influence that corporations have on research persists. Digital technology companies have now emerged as dominant forces in biomedical product development. When they are not partnering with academic researchers or government, digital technology companies operate outside the ambit of structures that traditionally have provided ethical oversight of biomedical research (e.g., IRBs), Mello said, and comparable ethics mechanisms are largely absent in the industrial sector. Furthermore, digital technology companies have developed sufficient analytic capacity that they no longer need to interact with academic biomedical researchers for anything except to acquire patient data. The need for that interaction is also declining since digital product developers can often obtain health information directly from consumers or from direct-to-consumer companies. Corporate involvement is essential for product development, she said, but there are many issues yet to be addressed.

Incidental Research Subjects

Incidental research subjects are individuals who have not consented to be research participants but who have inadvertently come under the observation of researchers by association with others who are sharing data. Incidental sharing of information is a concern in the field of genetics, for example, where one person's genomic data can reveal information about family members. The digital version of the problem is much broader, Mello said. For example, digital technologies such as ambient listening devices collect all conversations, not just those of the device owner, and digital traces such as social media posts can sweep in information about other identifiable individuals (e.g., geolocation data). The problem of incidental research subjects is not addressed by the current model of individual control of data through end user license agreements or informed consent.

Emerging Issues for Digital Technologies

The Scale of Data Collection

Mobile devices, ambient listening devices, and other passive data-collection technologies have the capability to collect vast amounts of data with minimal cost and effort, Mello said. There are benefits to this scale of data collection, but there are also concerns. Individual privacy is one such concern, but addressing this concern can raise other issues. For example, allowing surgical patients to opt out of having black box data collected during their procedures could impact quality improvement efforts. Data quality is also a concern, as mobile app users can “fudge” their data in ways that are not generally possible in clinical trials. There are also potential social consequences, such as health care providers stigmatizing or discriminating against noncompliant patients whose behaviors are detected through passive data collection.

The End of Anonymity

The de-identification of data is now recognized to be a temporary state, Mello said. Advances in computer science (e.g., hashing techniques, triangulation of data) have enabled the re-identification of individuals' unlinked data from anonymized datasets. Human research protections are based on the concept that de-identified individual patient data do not present a privacy risk and, therefore, transfers of de-identified data do not require oversight. The increasing potential for re-identification calls for reassessment of this thinking, she suggested.
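
A small, entirely synthetic sketch of the underlying linkage attack is shown below: records stripped of direct identifiers are joined to a public dataset on shared quasi-identifiers (ZIP code, birth year, sex), re-attaching names to diagnoses. The columns, data, and use of pandas are assumptions for illustration only; this is not a reconstruction of any technique discussed at the workshop.

```python
# Illustrative sketch (synthetic data): how "de-identified" records can be
# re-identified by joining on quasi-identifiers shared with a public dataset.
import pandas as pd

# A "de-identified" clinical extract: direct identifiers removed, but
# quasi-identifiers (ZIP code, birth year, sex) retained.
deidentified = pd.DataFrame({
    "zip": ["02139", "02139", "60637"],
    "birth_year": [1984, 1990, 1984],
    "sex": ["F", "M", "F"],
    "diagnosis": ["depression", "diabetes", "asthma"],
})

# A separate public dataset (e.g., a voter roll) containing names
# alongside the same quasi-identifiers.
public = pd.DataFrame({
    "name": ["A. Jones", "B. Smith"],
    "zip": ["02139", "60637"],
    "birth_year": [1984, 1984],
    "sex": ["F", "F"],
})

# Joining on the quasi-identifiers links names back to diagnoses.
reidentified = public.merge(deidentified, on=["zip", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```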

The Ethical Adolescence of Data Science

Traditional training in science and medicine imparts a set of cultural scientific norms and ethical commitments that may not yet be embedded in the training of computer scientists, Mello said. Digital technology companies currently have a high degree of freedom to self-regulate, yet they may lack a fully formed ethics framework to guide their work. Privacy laws do apply to some degree, though perhaps not to the extent people may think, she added. (The Health Insurance Portability and Accountability Act, for example, does not apply to companies that are not providing health care or supporting health care operations.) There is a need to “establish this profession as a distinct moral community,” she said, pointing to the work of Metcalf (2014) and Hand (2018) . The field of computer science has developed initial codes of ethics, which she said are a starting point, but more attention is needed.

Some of the ethical concerns associated with emerging digital technologies are new, Mello said, but many are long-standing concerns applied in a new context and with new implications. These ethical concerns cannot be adequately addressed within the existing regulatory system, she concluded. In addition, efforts to address these concerns need to engage people of younger generations and to take into consideration their perspectives on privacy and tradeoffs.

  • USING ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING IN RESEARCH AND CLINICAL CARE

Artificial Intelligence, Machine Learning, and Bias

The future of AI, Saria said, is in augmenting health care providers' capabilities to more efficiently offer higher-quality care. This includes, for example, reducing diagnostic errors, recognizing diagnoses or complications earlier, targeting therapies more precisely, and avoiding adverse events. Ideally, AI would increase the efficiency of care without increasing the burden on providers.

There has been much discussion and concern about bias in AI algorithms, Saria said. To address these concerns, it is necessary to understand the different underlying problems, but this is hindered by a lack of a taxonomy for understanding bias. Using facial recognition algorithms as an example, Saria discussed six potential errors that could introduce bias.

Inadequate Data

Saria presented a study by Buolamwini and Gebru (2018) that found that the performance of three different facial recognition algorithms in determining gender varied by skin tone. In particular, the algorithms frequently misclassified the gender of darker-skinned females, while the genders of lighter-skinned females and of both darker- and lighter-skinned males were classified with much greater accuracy. This weakness, Saria said, is a result of inadequate data. In this case, the underrepresentation of data from specific subpopulations can be addressed by augmenting the data or correcting the algorithms to account for the underrepresentation. Understanding the weakness allows for corrections to be made, but a lack of awareness of the weakness can lead to consequences downstream whose exact nature will depend on how these algorithms are used (e.g., for crime investigation, surveillance, employment).
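
To make this kind of audit concrete, the short sketch below (not from the workshop) computes a classifier's accuracy separately for each subgroup on synthetic labels and predictions; a large gap between subgroups is the kind of signal the Buolamwini and Gebru analysis surfaced. All names and numbers are illustrative placeholders.

```python
# Illustrative sketch: auditing a classifier's accuracy per subgroup to surface
# a performance gap that aggregate accuracy would hide. Synthetic data only.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1, 1, 1])
group = np.array(["darker_female", "lighter_male", "darker_female", "darker_female",
                  "lighter_male", "darker_female", "lighter_male", "darker_female",
                  "lighter_male", "lighter_male"])

for g in np.unique(group):
    mask = group == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"{g}: accuracy = {acc:.2f} (n = {mask.sum()})")
# A large gap between subgroups signals underrepresentation or another source
# of bias that the overall accuracy figure alone would conceal.
```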

Asking Bad Questions

Another type of error that can lead to bias is what Saria described as “bad questions.” As an example, she described the facial personality profiling offered by a startup technology company. The company claims to use machine learning algorithms for facial analysis to determine traits such as IQ, personality, and career prospects (e.g., whether a person might be a professional poker player or a terrorist). These questions cannot be answered using current observational datasets, Saria said, and no experimental or interventional datasets exist that could answer them. Furthermore, such an algorithm is not learning true causal relationships; it is simply mimicking the patterns already present in the dataset.
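
A toy simulation can illustrate why such correlations are not causal. In the hedged sketch below, a hidden confounder drives both the observed feature and the outcome, so the two are strongly correlated even though the feature has no causal effect; the data and magnitudes are invented for illustration only.

```python
# Illustrative sketch of why correlation in an observational dataset is not a
# causal relationship: a hidden confounder drives both the feature and the
# outcome, so the feature "predicts" the outcome despite having no causal effect.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
confounder = rng.normal(size=n)                        # unmeasured background factor
feature = confounder + rng.normal(scale=0.5, size=n)   # what the model sees
outcome = confounder + rng.normal(scale=0.5, size=n)   # what the model is asked to predict

corr = np.corrcoef(feature, outcome)[0, 1]
print(f"observed correlation: {corr:.2f}")             # strong, yet purely spurious
# Intervening on the feature would not change the outcome, so a model trained on
# these data can only mimic the dataset, not answer the causal question.
```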

Lack of Robustness to Dataset Shift 10

Error can also be introduced when an algorithm is not robust to dataset shift. To illustrate, Saria described the training and use of a deep learning algorithm for detecting pneumonia in chest X-rays ( Zech et al., 2018 ). The algorithm performed well when used by the same hospital from which the training data were obtained. However, the diagnostic performance deteriorated when the algorithm was then used by a different hospital. This lack of robustness when analyzing datasets from another site, Saria said, was found to be related to style features of the X-rays that varied by institution (e.g., inlaid text or metal tokens visible on the images). This potential source of bias could be corrected by adjusting the algorithm to account for those style features that are not generalizable across datasets.
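
One simple diagnostic for this kind of confounding, sketched below with synthetic data, is to check whether the inputs themselves can predict which site they came from. The feature construction and the use of scikit-learn here are assumptions made for illustration, not the method used in the study cited above.

```python
# Illustrative sketch: a check for site-specific signal. If a model can predict
# which hospital a case came from, site-dependent artifacts may be confounding
# the clinical prediction task. Feature 0 mimics an institution-specific style marker.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
site = rng.integers(0, 2, size=n)        # 0 = hospital A, 1 = hospital B
features = rng.normal(size=(n, 10))
features[:, 0] += 2.0 * site             # "style" feature that differs by site

auc = cross_val_score(LogisticRegression(max_iter=1000), features, site,
                      cv=5, scoring="roc_auc").mean()
print(f"Site is predictable from the features (cross-validated AUC = {auc:.2f})")
# An AUC well above 0.5 warns that a clinical model trained on these features
# may key on site artifacts and fail to generalize to other institutions.
```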

Evolving Health Care Practice

Provider practice patterns evolve over time, Saria said, and if predictive algorithms are not robust to this type of dataset shift, this can lead to false alerts. As an example, an algorithm for the early detection of sepsis based its predictions on the laboratory tests being ordered by providers and, in particular, on whether a measurement of serum lactate level was ordered. The model was trained on data from 2011 through 2013 and performed well when tested in 2014, she said. In 2015, however, predictive performance deteriorated significantly, which Saria explained was associated with a new Centers for Medicare & Medicaid Services requirement for public reporting of sepsis outcomes. As a result of the new regulation, health care institutions increased sepsis surveillance considerably, and laboratory testing for serum lactate levels increased correspondingly. Because the algorithm was not robust to this dataset shift, there were more false alerts.
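
A minimal monitoring sketch of the kind this example suggests is shown below: it compares the current rate of lactate orders against an assumed training-period baseline and raises an alert when the gap exceeds a threshold. The baseline, threshold, and data are placeholders, not figures from the sepsis model described above.

```python
# Illustrative sketch: monitoring an input that a deployed model depends on.
# The monitored quantity here is the fraction of encounters with a lactate
# order, which shifted after reporting requirements changed. Synthetic numbers.
import numpy as np

def order_rate(encounters):
    """Fraction of encounters in which a lactate test was ordered."""
    return np.mean([e["lactate_ordered"] for e in encounters])

baseline_rate = 0.12   # rate assumed for the 2011-2013 training data (placeholder)
rng = np.random.default_rng(1)
current_window = [{"lactate_ordered": bool(x)} for x in rng.random(500) < 0.35]

current_rate = order_rate(current_window)
if abs(current_rate - baseline_rate) > 0.10:   # alert threshold chosen for illustration
    print(f"Input drift detected: lactate order rate {current_rate:.2f} vs. "
          f"training baseline {baseline_rate:.2f}; recalibration may be needed.")
```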

Model Blind Spots

A small perturbation to a dataset can result in “blind spots” that can lead an algorithm to become “confidently wrong,” Saria said. She described a well-known example in which an image that an algorithm had correctly and confidently identified as a panda was minimally perturbed with a carefully constructed, noise-like change. Although the change was imperceptible to the human eye and the image appeared to be the same panda, the algorithm determined with high certainty that the image was now a gibbon ( Goodfellow et al., 2015 ). It is important to understand how a learning algorithm is performing so that errors can be addressed, she said.
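
The mechanics of such a perturbation can be sketched with a toy linear classifier: stepping each input a tiny amount in the direction that increases the loss (the fast gradient sign method described by Goodfellow et al.) collapses a confident prediction. The model, weights, and perturbation budget below are invented for illustration and are far simpler than the image networks discussed above.

```python
# Illustrative sketch of the "blind spot" idea on a toy logistic model: a small,
# targeted per-pixel change flips a confident prediction.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d = 784                                   # a flattened 28x28 "image"
w = rng.normal(scale=0.5, size=d)         # toy model weights
x = rng.uniform(0.0, 1.0, size=d)         # toy pixel intensities in [0, 1]
b = 3.0 - float(w @ x)                    # bias chosen so the clean logit is +3
y = 1                                     # true (and originally predicted) class

p = sigmoid(w @ x + b)

# Fast gradient sign method: step each pixel by epsilon in the direction that
# increases the loss. For cross-entropy, the input gradient is (p - y) * w.
epsilon = 0.02                            # 2% of the pixel range
x_adv = np.clip(x + epsilon * np.sign((p - y) * w), 0.0, 1.0)
p_adv = sigmoid(w @ x_adv + b)

print(f"clean confidence:     {p:.3f}")      # about 0.95
print(f"perturbed confidence: {p_adv:.3f}")  # collapses toward 0
```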

Human Error in Design

Human error can also lead to bias in models, Saria said. A recent study uncovered bias in an algorithm designed by Optum that is widely used to identify higher-risk patients in need of better care management ( Obermeyer et al., 2019 ). The algorithm was designed to use billing and insurance payment data to predict illness so that high-cost patients could be assigned case managers to help them more proactively manage their health conditions. However, the study found that the high users of health care identified by the algorithm tended to be white, with black individuals using health care less frequently. This resulted in health systems unknowingly offering more care to those already accessing care and thereby further widening the disparities in care.
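
The label-choice mechanism can be seen in a few lines of simulation, sketched below: two groups have identical distributions of health need, but one spends less at the same level of need, so selecting the top spenders under-selects that group. All numbers are synthetic and chosen only to expose the mechanism, not to model the Optum algorithm itself.

```python
# Illustrative sketch of the proxy-label problem: using future cost as a stand-in
# for future health need under-selects a group that spends less at the same need.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)               # 0 = group A, 1 = group B
need = rng.gamma(shape=2.0, scale=1.0, size=n)   # "true" need, same distribution in both groups

# Spending tracks need, but group B spends less at the same level of need
# (e.g., because of unequal access), mirroring the disparity described above.
cost = need * np.where(group == 1, 0.6, 1.0) + rng.normal(scale=0.1, size=n)

threshold = np.quantile(cost, 0.97)              # "refer the top 3% by predicted cost"
selected = cost >= threshold

for g in (0, 1):
    print(f"group {g}: selected {selected[group == g].mean():.1%} of members")
# Although both groups have the same distribution of need, the lower-spending
# group is selected far less often; ranking on need itself would remove the gap.
```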

Addressing Algorithm Biases

A common element across these scenarios, Saria said, is that the errors are generally fixable if the source of the error is known. Changing human behavior is difficult, she said, but point-of-care algorithms, corrected for the sources of bias discussed, can provide “real-time nudges” to influence health care provider decision making.

In developing AI for health care, there is a need for safe and reliable machine learning, Saria said, suggesting that the field could draw from engineering disciplines, which focus on both understanding how a system should behave and then ensuring that it behaves that way. There is excitement about the use of AI in the health care field and interest in downloading and deploying tools, she said, but the underlying “engineering” principles critical for building safe and reliable systems are currently often overlooked (i.e., understanding how these tools work, determining if they are working, and guaranteeing that they continue to work as expected). She described the three pillars of safe and reliable machine learning as failure prevention, failure identification and reliability monitoring, and maintenance. Engineering health care algorithms for safety and reliability involves ensuring that algorithms are more robust to sources of bias (e.g., dataset shift), are able to detect errors (e.g., inadequate data) and identify scenarios or cases that may be outliers in real time (test-time monitoring), and are updated as needed when shifts or drifts are detected. She referred participants to her tutorial on safe and reliable machine learning for further information ( Saria and Subbaswamy, 2019 ). In closing, Saria suggested that algorithms should be developed, deployed, and monitored post-deployment with the same rigor as prescription drugs (see Coravos et al., 2019 ).
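
As one hedged illustration of the test-time monitoring pillar, the sketch below flags incoming cases whose features fall far outside the training distribution so they can be routed for human review rather than scored blindly. The statistics, threshold, and class name are assumptions made for illustration, not part of the tutorial cited above.

```python
# Illustrative sketch of test-time monitoring: flag inputs that look unlike the
# training data before trusting the model's prediction on them.
import numpy as np

class InputMonitor:
    """Flags feature vectors that fall far outside the training distribution."""

    def __init__(self, training_features, z_threshold=4.0):
        self.mean = training_features.mean(axis=0)
        self.std = training_features.std(axis=0) + 1e-8
        self.z_threshold = z_threshold

    def is_outlier(self, x):
        z = np.abs((x - self.mean) / self.std)
        return bool(np.any(z > self.z_threshold))

rng = np.random.default_rng(0)
train = rng.normal(size=(5000, 8))        # features seen during development
monitor = InputMonitor(train)

routine_case = rng.normal(size=8)
unusual_case = routine_case.copy()
unusual_case[2] = 9.0                     # a value never seen in training

print(monitor.is_outlier(routine_case))   # False: safe to score
print(monitor.is_outlier(unusual_case))   # True: route for human review
```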

  • ETHICAL ISSUES IN MACHINE LEARNING AND DEEP LEARNING NEURAL NETWORKS

Sharing Health Care Data

Many of the ethical issues associated with machine learning involve concerns about data sharing, Ossorio said. There is governance in place for the sharing of research data, and she said that clinical trial participants are becoming increasingly better informed about the ways their data might be shared. However, there is less governance of the sharing of clinical care data. At the federal level there have been efforts to collect and use clinical care data for quality analysis purposes. The training of algorithms requires large amounts of data, which is why, for example, developers such as Alphabet and Microsoft seek to acquire millions of medical images and accompanying medical data from hospital picture archiving and communication systems.

Unlike the case with data collected as part of clinical research, patients interacting with the health care system do not expect their clinical care data to be shared (beyond what is needed to facilitate and coordinate their care). The commercial use of health data currently operates under a very different set of norms, professional commitments, and economic commitments than the clinical research enterprise, Ossorio said, reflecting earlier comments from Mello. While pharmaceutical companies are subject to regulations that protect research participants and the future users of their products, there is not yet comparable oversight of developers of AI for health care. Data are being transferred from the health care context, where the norm is to put the interests of the patient at the center of decision making, to a different context that is not patient-centered.

Price and Cohen (2019) have examined the sharing of health data for machine learning; their work, Ossorio said, discusses expanding the “circles of trust” to include entities that develop AI. Whether this should or could be done, given that the norms that govern these types of commercial entities (e.g., Google, Microsoft) are very different from the norms governing clinical research and health care, remains an open question, she said. For example, the norm for development by these types of commercial entities is often to deploy a technology as quickly as possible and then make corrections and updates based on additional data collected while the product is being used in the marketplace. This approach might be acceptable when developing apps that, for example, recommend books or movies for the user. In the health arena, however, drugs and devices generally require premarket assessment of safety and efficacy, she said.

Developing and Implementing Responsible Artificial Intelligence for Health Care

Based on her experience, Ossorio said that many of the companies developing AI do not fully understand the scope or context of health care data. For example, to improve a machine learning algorithm that aids in the interpretation of clinical laboratory test results after it is deployed, one would need data about how the clinical laboratory is using that test as well as patient clinical outcomes data. However, patient outcomes data generally reside with the health care provider (outcomes data are not usually maintained by the testing laboratory), and the outcomes of interest might appear only over the long term. In addition, not understanding the context in which the data were generated can result in the development of an algorithm that is inherently biased or lacks clinical utility.

Another concern, Ossorio continued, is the common perception that those who are expert in developing algorithms can do so using any type of data and that simply providing them with access to volumes of health data will transform the practice of medicine. Collaboration among subject matter experts and algorithm developers is essential for the development and assessment of safe, reliable, useful tools, she said. There is also a need for standards to help ensure the responsible implementation of machine learning algorithms in health care practice ( Wiens et al., 2019 ).

Regulating Machine Learning Algorithms

Algorithms should be held to rigorous standards that are similar to those necessary for the development of drugs, Ossorio said. Most algorithms are not regulated, and those that have been subject to regulation by the U.S. Food and Drug Administration (FDA) thus far have been treated as medical devices. Medical devices generally do not need to meet the same standard of evidence required of drugs before authorization for marketing. The validation of algorithms requires sharing of both code and datasets, Ossorio continued, and researchers are also being encouraged by journals to share code. Because some algorithms are in fact medical devices, Ossorio said, the data used for validation need to be shared according to current regulations and guidelines and need to be labeled as being for research purposes only.

FDA is currently considering how to regulate machine learning algorithms for health care. Algorithms that are cleared or approved by FDA as medical devices are trained, tested, and then locked down prior to implementation, Ossorio said. The challenge now, she said, is how to regulate unlocked or partially unlocked machine learning algorithms that might change over time, perhaps in unpredictable ways.

Ethics Training in Data Science and Artificial Intelligence

Are there efforts, Lo asked, to incorporate a discussion of ethical issues into the training of data scientists and AI researchers? Individuals in data science have learned norms and behaviors in the context of the companies they work for and the incentive structures they are presented with, Estrin said, and these do not translate to the health care context. Corrections will require a combination of professional ethics and law, she said. Saria agreed and expressed optimism that positive, corrective action is occurring. All of the leading conferences in the data science field now have discussions of ethics, bias, transparency, and fairness on the agenda, she said, and there are also meetings entirely devoted to these topics. Ossorio was also optimistic and said that, in her experience, data scientists are very interested in discussing ethical issues. Because of this interest she was asked to develop an ethics class for data scientists at her institution, which she said has been popular and is now required for many students in biomedical engineering, biostatistics, public health, and bioinformatics. The curriculum is built around case studies, and she said that engineers and computer scientists have skills in problem solving that translate to solving bioethical problems. A new presidential initiative at Stanford University to provide ethics training to students in computer science has also been very well received, Mello added.

Education on ethical issues in data science is increasing, Estrin said, but the growing interest in ethics in data science should be supported and should also align with laws, regulations, and a shift in incentive structures to help ensure that ethical products can reach the marketplace. Mello said a given field will go through three stages of ethical maturation: recognizing that there are ethical issues, developing a framework for solving those problems, and gaining traction and leadership buy-in so that those who are trained in ethics are supported in taking ethical actions. The field of data science is currently in the first stage, she said, and is just beginning to enter the second.

Transparency and Data Sharing

Transparency in the Absence of Choice

What is the value of transparency, Lo asked, if patients have no choice but to accept sharing of their data as an aspect of receiving services? The health care system where he receives his care, he said, is negotiating the transfer of patient data to a company for algorithm development and validation. Patients do not have a choice about whether their data will be shared, other than to choose not to receive care.

People often feel exploited when they do not feel they have a real choice about sharing their data and do not see a clear benefit of giving up their information, Ossorio said. For example, people might feel they have no choice but to use social media to be informed about work-related information or to stay in touch with family and therefore have no real choice about submitting to the collection and use of their data by the social media websites. They do not perceive that they have made a rational tradeoff of providing information to receive benefits. Transparency about data sharing, even in the absence of choice, she suggested, is better than no transparency because it allows people to engage in political activity to help shape laws and norms. Transparency is also important for building trust. However, transparency is not a solution to deeper ethical problems.

Data Governance

Transparency without data governance is also insufficient, Ossorio said. How data are transferred to the commercial context is important, and license agreements should not lead individuals to relinquish all control. There is a distinction, Estrin said, between institutions selling data and relinquishing responsibility, on the one hand, and institutions or providers collaborating with companies and bringing their own norms to that collaborative process, on the other. This idea, currently part of work being done by her colleagues at Cornell NYC Tech, may be an interesting way to think about data sharing, she added. Many academic institutions have initiated collaborations with for-profit companies, Ossorio said, but full collaboration is often not possible because the digital technology developer is often interested in using the data for an area of research in which the data-sharing department or institution has no interest or expertise. A challenge, she said, is to define the data governance approach that would be appropriate for the middle ground between a simple transfer of the data and a full research collaboration.

Allowing Patients to Consent or Opt Out of Data Sharing

Should patients be able to opt out of the sharing of their health care data for secondary purposes, Lo asked, and how might that impact the datasets and the ability of researchers to develop and validate digital technologies? Individuals should be given the choice to opt out of data sharing, Saria said, adding that having some patients choose not to share data should not create technical problems for researchers. Institutional infrastructure is the main barrier to implementing an opt-out choice for patients, she said. In her personal experience, she has been asked to consent to or decline the sharing of her own health data. Whether patients like and trust their providers, she noted, can influence their decisions about sharing their health data.

There are generational differences in culture and norms that affect the acceptance of information sharing, Saria observed. Generations that grew up using the Internet tend to be more skeptical of what they read online, while older generations are more likely to believe that what they read online is true. Furthermore, she said, she and many others who grew up with the Internet understand and accept that they are receiving services of value to them in exchange for websites collecting and using their data. Informed consent is important, but there is also a need for education to ensure that people understand what consenting means for them, she said. Many people perceive data sharing to be exploitative and do not understand or consider the benefits and costs of sharing or not sharing their data. Institutional leaders who are resistant to providing data to for-profit technology developers at no cost are less concerned about protecting patient privacy, Saria said, and more concerned about not leveraging a potential revenue stream. Instead, they should be thinking about how patients might benefit from more efficient and transparent use of their data.

Estrin agreed that opting out of data sharing should be allowed; however, she said, simply allowing opt-outs is not sufficient, and institutions still need to behave responsibly and establish ethical norms, independent of patient choice. Just because a technology or service is provided to consumers at no monetary cost does not necessarily mean it is acceptable, she added. In many cases it is difficult to avoid a free technology or service because it has become part of the digital technology infrastructure, and in a capitalist economy consumers cannot vote with their purchasing power when something is already free.

Unlike patients in integrated health systems, many people do not have the ability to transfer their health data from one provider to another, Estrin noted. Solutions, such as HealthKit (Apple) and CommonHealth (Android), have emerged to allow patients to download their own clinical and health data and to share it across apps and providers. A challenge, she said, is defining which apps or other data users are allowed access to an individual's data. It has been suggested, Estrin said, that consenting to share one's data should be included in the standard terms of service for apps, which advocates say would support frictionless development and innovation by startup companies. However, she said, there is empirical evidence that this type of consent is not sufficient. One option under discussion is that a health system could approve the use of apps that provide some oversight of data sharing and use (i.e., apps that do not sell or reuse patients' data).

Effects of Machine Learning and Artificial Intelligence–Based Tools on Clinician Practice

Is it possible, one workshop participant asked, that physicians might become dependent on digital technology–based interventions that propose interpretations and solutions, and could that dependence degrade provider expertise?

Physician integration with technology is not a new problem, Mello said, and providers have long used different types of decision-support tools (e.g., clinical practice guidelines, automated decision support within the EHR). Questions have been raised as to whether standards are needed for how physicians should interact with digital technologies, she continued, and whether codes of conduct in the medical professions need to address this explicitly. Clinical providers are currently using laboratory tests that employ algorithms for race correction of results (e.g., the calculation of estimated glomerular filtration rate adjusted for race), said a workshop participant. Existing race-based corrections in medicine need to be examined along with new and emerging algorithms, the participant added.

Clinicians and clinical laboratory professionals need training on how to properly use algorithms, Ossorio said. Before that can happen, the algorithms need to be studied to better understand their characteristics and role in practice (e.g., generalizability, indications, contra-indications). These types of studies, however, are not incentivized by the current regulatory system for medical devices, she said.

To what extent, a workshop participant asked, should patients be made aware that provider decisions are being assisted by AI? Providers do not generally discuss with patients the specific resources they use in the course of practice, Mello said, and it is not clear that a patient encounter needs to include discussion of any algorithms used by the provider.

Structural Inequalities in Datasets Used for Algorithm Development

There are structural inequalities embedded in the data being used to develop and train machine learning algorithms, Dorothy Roberts, the George A. Weiss University Professor of Law and Sociology and the director of the Penn Program on Race, Science, and Society at the University of Pennsylvania, said, which can result in the outcomes of predictive analytics being biased (racially biased in particular). Predictive policing, which uses arrest data to predict who in a community is likely to commit crimes in the future, is an example, she said. Discriminatory law enforcement practices (e.g., racially biased stop-and-frisk programs, policing efforts focused on African American neighborhoods) result in racially skewed arrest data that then lead algorithms to predict that those who have the characteristics of black people are likely to commit crimes in the future, she said. There are similar examples in medicine of existing structural inequalities being perpetuated by algorithms, Roberts continued, such as the study of the Optum algorithm discussed by Saria. In that case, an algorithm designed to identify high-risk users of health care in need of additional services was trained using payment data. In choosing health care costs as the dataset, the developers of the algorithm did not take into account the fact that less money is spent caring for black patients, who are often sicker, she said.

Greater collaboration is needed, Roberts said, but that collaboration needs to extend beyond medical professionals and algorithm developers. Collaborations also need to include sociologists and others who understand structural inequality in society and who can recognize errors in datasets that could lead to bias, a point with which Saria agreed.

Structural inequality patterns reflected in datasets can be due to social inequalities that exist outside of the health care system, inequalities in access to health care (e.g., insurance coverage, proximity to providers), and inequalities that have been created within the health care system, Ossorio said. Machine learning algorithms could be helpful in identifying inequalities so that they can be addressed, she suggested; however, assessing the performance of commercial algorithms can be hampered by the fact that these products are frequently licensed, often with restrictions on how they can be studied, rather than sold outright. In some cases, for example, the data used for development are considered a trade secret and are not disclosed.

Potential Research Questions for Funding

Are there research topics in the areas of bioethics, data science, computer science, and digital technology development that should be funded for study? This was the next question Lo posed.

Views on Health Data Sharing and Privacy

Research is needed to better understand how patients would respond if given the choice to opt out of having their clinical data shared with digital technology companies, said Benjamin Wilfond, the director of the Treuman Katz Center for Pediatric Bioethics at Seattle Children's Hospital and Research Institute and a professor in and the chief of the Division of Bioethics and Palliative Care in the Department of Pediatrics at the University of Washington School of Medicine. Mello agreed that how people think through a choice to opt out could be better understood. Studies have used administrative data to assess how many people opt out of programs such as electronic health information exchanges, but these studies do not differentiate between those who have made an informed decision to not opt out (i.e., to participate) and those who simply take no action and participate by default. The role of education in understanding the benefits and risks of participation versus opting out could be studied, she said. It would also be helpful to understand the higher rate of opting out among certain racial and ethnic groups and how the health care enterprise can build trust with these communities. When presented with the choice to opt out, most people will not do anything, Estrin said, so a better question might be how people respond to the choice to opt in (i.e., asking patients to share their data). Most patients presented with an opt-out choice do not fully understand what they are being asked to decide, Saria added. In particular, they do not understand the potential ramifications of not participating (e.g., products of value to them that might not be developed). There is an initiative in the United Kingdom to educate the public about the benefits and risks of sharing or not sharing health data, she said, and this could be a good initiative to replicate in the United States in order to help individuals move from a general fear of data sharing to an understanding of the good that can result. 11 Investment should go beyond informed consent research to studies of better ways to use data to improve people's lives, she added.

Improving Stakeholder Literacy

Pilot studies could be conducted to explore alternative approaches to individual informed consent, Mello said. Some institutions, for example, have established data use committees to evaluate the proposed uses of health data. Studies could be undertaken to identify the benefits and drawbacks of this approach, compare how the decisions made by the data use committee align with what individuals would choose for themselves, and assess the extent to which committee deliberations reflect the views of minority communities. Understanding intergenerational shifts in perceptions of privacy is another area in need of further research, Mello said. This includes understanding different views on the acceptability of trade-offs (e.g., sharing personal information in return for receiving goods and services at low or no cost). Privacy rules being established now might not be relevant for the next generation, she added. Research could be done, Lo said, to assess patients' understanding of their options regarding data sharing, to identify effective approaches for informing them of their options, and to determine if educating patients about their options changes their behavior.

Research is also needed on how to improve stakeholder literacy, said Camille Nebeker, an associate professor in the University of California, San Diego, School of Medicine. This includes, for example, ensuring that research participants have an adequate understanding of research, data, and technology; that researchers have sufficient literacy in data management; and that students in technology fields gain literacy in ethics. This is an important area for research, Estrin agreed. Developing ethics training programs for computer scientists and educational materials for consumers should not be difficult, Mello said; the challenge is gaining and holding the attention of consumers who are already bombarded with opportunities to consider information and make decisions about data sharing. Ossorio said that an educational approach being developed at Duke provides information about an algorithm in the form of prescribing information (e.g., recommended use, contraindications). This approach quickly and concisely communicates the most important information about an algorithm to users. Research could be done to understand the impact of this and other types of educational interventions on outcomes of interest, Lo suggested.

Assessing Algorithms for Bias and Fairness

Developing metrics and tests that can measure whether an algorithm is biased is another area for research, Saria said. Studies could explore different scenarios in which bias might be present and be used to design tests and metrics to assess the likelihood of bias. Automated approaches to detecting, diagnosing, and correcting bias are needed, she explained, because access to proprietary code and datasets might not be provided, and significant time and resources are needed to conduct in-depth analyses. Metrics are needed for assessing the datasets used for algorithm training, Ossorio agreed, and she noted the importance of understanding the impact of data cleaning on the fairness of datasets. Researchers at the University of Wisconsin have written algorithms that can assess the fairness of other algorithms and can provide input during algorithm training to increase fairness, Ossorio said. This approach is more challenging in the health care context than in many other contexts, she added. There is value in getting researchers and scholars to collaborate in considering different theories of fairness, how they apply in a given context, why one theory might be chosen over another, and how the theories can be built into a software product, she said.
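
As a hedged illustration of what such tests might compute, the sketch below reports two common gaps on synthetic predictions: the difference in positive prediction rates between groups (demographic parity) and the difference in true positive rates (one component of equalized odds). The metric choices and data are illustrative, not a prescription for any particular clinical setting or a description of the Wisconsin work.

```python
# Illustrative sketch of two simple bias metrics on synthetic predictions:
# demographic parity difference and the gap in true positive rates.
import numpy as np

def selection_rate(y_pred, mask):
    return y_pred[mask].mean()

def true_positive_rate(y_true, y_pred, mask):
    positives = mask & (y_true == 1)
    return y_pred[positives].mean()

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, size=n)
y_true = rng.integers(0, 2, size=n)
# A deliberately skewed "model": more likely to predict positive for group 0.
y_pred = (rng.random(n) < np.where(group == 0, 0.6, 0.4) * (0.5 + 0.5 * y_true)).astype(int)

g0, g1 = group == 0, group == 1
print("demographic parity difference:",
      selection_rate(y_pred, g0) - selection_rate(y_pred, g1))
print("true positive rate gap:",
      true_positive_rate(y_true, y_pred, g0) - true_positive_rate(y_true, y_pred, g1))
# Values near zero suggest parity on that metric; which metric matters depends
# on the clinical context and the theory of fairness chosen.
```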

It is also important to learn from the cases of algorithms that did not perform as expected, Estrin said. Working backward to see how the implementation of regulations, laws, or incentives might have altered the outcomes (e.g., prevented the biased outcomes) could be one option, she suggested. In the case of the Optum algorithm discussed by Saria, for example, the company was seeking to optimize patient care in order to control costs. The research question in this case, Estrin said, could be: What laws and regulations might have allowed for this optimization function while ensuring ethical outcomes?

Moving Forward

In closing, the panelists reiterated the need for funding to support broad interdisciplinary research in the areas of bioethics and digital technology development. Potential ethical issues need to be addressed up front, Mello said, before digital technologies are released for use, while Estrin underscored the need to understand the incentive structures that currently drive digital technology development and deployment.

For more information, see Saha et al., 2017 .

For more information, see White et al., 2018 .

For more information, see Kirschenbaum et al., 2019 .

Surgical black boxes can record a range of data during surgical procedures, including videos of the procedure, conversations in the room, and ambient conditions for the purpose of identifying intraoperative errors that may have led to adverse events.

For more information on Dinerstein v. Google , see https://edelson.com/wp-content/uploads/2016/05/Dinerstein-Google-DKT-001-Complaint.pdf (accessed April 20, 2020).

A waiver of informed consent (45 CFR 46.116) can be granted by an IRB if research involves minimal risk to participants, if research cannot be conducted practically without a waiver, if the waiver does not negatively affect the rights of the participant, and if participants will be provided additional information about their participation following the study (when applicable). Blanket consent refers to a research participant consenting to all uses of their data with no restrictions.

For more information, see https://www.hopkinsmedicine.org/henriettalacks/upholding-the-highest-bioethical-standards.html (accessed April 20, 2020).

For more information, see https://law.justia.com/cases/california/supreme-court/3d/51/120.html (accessed April 20, 2020).

In each case, cancer cells collected from patients Henrietta Lacks and John Moore in the course of their clinical care were used to develop cell lines that were later commercialized, without the patients' knowledge or consent.

Dataset shift is a condition that occurs when data inputs and outputs differ between the training and testing stages. When this occurs, a model's performance on its training data may not generalize, undermining its ability to make reliable predictions on new data ( Quiñonero-Candela et al., 2009 ; Subbaswamy et al., 2020 ).

For more information about the United Kingdom's Understanding Patient Data initiative, see https://understandingpatientdata.org.uk (accessed April 21, 2020).

Source: National Academies of Sciences, Engineering, and Medicine; Health and Medicine Division; Board on Health Sciences Policy; Beachy SH, Wizemann T, Choi K, et al., editors. An Examination of Emerging Bioethical Issues in Biomedical Research: Proceedings of a Workshop. Washington (DC): National Academies Press (US); 2020 Jun 19. Chapter 2, Ethically Leveraging Digital Technology for Health.

Teaching Students Cyber Ethics



Student safety, student behavior, and student learning are a trio of educator responsibilities. Educators have always operated within, and held students to, a standard of behavior memorialized in a code of conduct. Ethical behavior, good manners, and certainly legal behavior are taught in homes, houses of worship, and community groups, and are reinforced by teachers and leaders throughout students' thirteen years in school. But the cyber world is introducing new questions, and more amorphous answers, for all of the adults and certainly for the children as well.

Now, with the ability to participate in unsupervised online environments, the stakes are higher for children and the responsibilities are greater for adults. Educators must understand the capabilities of the digital tools their students can access so that they can maximize their use in education. They must also have this understanding to uphold their obligation to keep students safe and to foster moral, ethical, and legal behavior. It is no longer acceptable to shrug off the use of technology; the consequences are increasingly high. It is as much a part of educators' responsibility as the teaching of subject matter, and as much a responsibility of the educators as of the parents and guardians who place the technology on the desks and in the hands of the students. Ethical behavior, doing when no one is looking what you would do if someone were, is essential to good digital citizenship.

The unfortunate news from Colorado is evidence of the need for this. Without dedicated and ongoing attention to digital citizenship, more children will certainly wander into dangerous waters. We learned that this past week from students at a Colorado high school. Hundreds of nude photos were shared among students at both middle and high schools and stored on their cell phones in a disguised app. The articles reporting the event include words like “sexting ring” and “felony,” phrases we would never want to associate with students in schools.

A New York Times article reported:

Because it is a felony to possess or distribute child pornography, the charges could be serious. But because most of the people at fault are themselves minors and, in some cases, took pictures of themselves and sent them to others, law enforcement officials are at a loss as to how to proceed.

Because this sometimes happened during the school day, the school is considered at least partially responsible. It gets worse. In that same article, it was reported that a “parent had spoken to a counselor about her concerns and the counselor responded, there was nothing the school could do because half the school was sexting.”


The intersection of technology knowledge, adolescent behavior, and leading and teaching with values and ethics is the intersection where educators must operate, or else more adolescents will find themselves in the same horrible situation as these Colorado students.

ISTE Guidelines

The International Society for Technology in Education (ISTE) offers sets of essential standards for students, teachers, administrators, coaches, and computer science educators. One of the domains of the standards is called “Digital Citizenship.” Teachers are called to

  • advocate, model, and teach safe, legal, and ethical use of digital information and technology, including respect for copyright, intellectual property, and the appropriate documentation of sources.

And administrators are called to:

  • Promote, model, and establish policies for safe, legal, and ethical use of digital information and technology, and
  • Promote and model responsible social interactions related to the use of technology and information.

Keeping Children Safe

We cannot ignore the wonders or the perils presented by current and future technologies. We can no longer allow educators, leaders, or teachers to dismiss technology as unimportant and social media as a nuisance. Those views, if held personally, must not be shared or lived within schools. A new level of alertness is required of the adults: careful observation and acute listening are imperative when students are circulating illegal photos of themselves and others, changing their lives forever. Educators cannot stop students from enjoying the advantages of current and future technologies outside of school, but they can make students aware of the legal and ethical implications of their actions. They can also help educate parents and community leaders.

All Students Are Not the Same

As with all gaps in student knowledge, it is a mistake to assume that all students are equally knowledgeable about technology. Just as gaps in literacy, math, study, and thinking skills exist, disparities in income can and do contribute to a gap in technology use ( digitalresponsibility.org ). Both students swimming in the deep waters of social media and those who are simply wading need teaching and modeling. No different from other moral and ethical behavior taught in schools, the values and dangers of technology use must be part of the educational process. One Colorado student interviewed said no one had told the students that what they were doing was illegal. And we can never underestimate the power of peer pressure, even for the 21st-century student in the cyber world. What goes into that world almost always remains there, delete or not. Digital Citizenship references the need for “Respect, Educate and Protect” as themes for developing digital citizenship among students, simple and powerful messages for changing the behaviors revealed in Colorado ( digitalcitizenship.net ).

From kindergarten to 12th grade, the integration of this information into teaching and learning is essential. No high school elective will do the trick; this is something to be embedded into the work of students throughout their school careers.

Successful Technology Integration

No matter whether the push for expansion of technology in schools comes from a critical mass of faculty, parents, community, or boards of education, the vision for the way technology is used and the way students and teachers are prepared is, ultimately, in the hands of school leaders. We watched as the superintendent and the athletic department staff made the decision about forfeiting the last game of the season; it wasn't an everyday moment, and they entered the extraordinary new day together. This is not a question of changing curriculum or directing teachers. Preparing students to safely take advantage of the technology that is in their hands and on their desks is a school responsibility. Helping students find their way through this unlimited world, teaching them how to incorporate it into their learning, and guiding them through the new challenges and questions that arise out there is part of what we must do. Whether they are still students in our schools, like those in Colorado, or have graduated and are beginning their careers or in college, the results of our efforts will be evident. Will they be informed, creative, critical-thinking communicators? Or will they be off course, uninformed, and potentially felons? Part of this choice begins now, with us.


Connect with Ann and Jill on Twitter or by Email.

The opinions expressed in Leadership 360 are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.



Essay on Digital Technology

Students are often asked to write an essay on Digital Technology in their schools and colleges. And if you’re also looking for the same, we have created 100-word, 250-word, and 500-word essays on the topic.

Let’s take a look…

100 Words Essay on Digital Technology

What Is Digital Technology?

Digital technology refers to any system, device, or process that uses digital information. This includes computers, smartphones, and the internet. It’s a part of our daily lives.

Benefits of Digital Technology

Digital technology makes our lives easier. It helps us communicate, learn, and work. For example, we can send emails, learn online, and create digital art.

Challenges of Digital Technology

However, digital technology also has challenges. It can lead to less physical activity and face-to-face interaction. Plus, it can be hard to protect personal information online.

The Future of Digital Technology

The future of digital technology is exciting. We can expect more advancements that will continue to change our lives.

250 Words Essay on Digital Technology

Introduction

Digital technology, a term encapsulating a wide array of software, hardware, and services, has revolutionized our world. It has altered how we communicate, learn, work, and entertain ourselves, shaping a new societal landscape.

The Evolution of Digital Technology

Digital technology has evolved exponentially over the past few decades. From the advent of personal computers and the internet, to the ubiquity of smartphones and the rise of artificial intelligence, each wave of technology has brought profound changes. This evolution has led to the digitization of various sectors, including education, healthcare, and commerce, thereby facilitating efficiency and convenience.

Impact on Society

The impact of digital technology on society is significant. It has democratized information, breaking down geographical and socio-economic barriers. Moreover, it has fostered global connectivity, enabling collaboration and interaction on an unprecedented scale. However, it also presents challenges such as privacy concerns and digital divide, necessitating thoughtful policy-making and ethical considerations.

Future Prospects

The future of digital technology is exciting, with emerging fields like quantum computing, virtual reality, and blockchain promising to further transform our lives. Nonetheless, it is crucial to ensure that this digital revolution is inclusive and sustainable, balancing technological advancement with societal well-being.

In conclusion, digital technology, while presenting certain challenges, offers immense potential to reshape our world. As we navigate this digital age, it is incumbent upon us to harness this potential responsibly, ensuring that the benefits of digital technology are accessible to all.

500 Words Essay on Digital Technology

Introduction to Digital Technology

Digital technology, an umbrella term encompassing a myriad of devices, systems, and platforms, has revolutionized the world. It has transformed how we communicate, work, learn, and entertain ourselves, influencing every facet of our lives. This essay delves into the essence, benefits, and challenges of digital technology.

Understanding Digital Technology

Digital technology refers to any system, device, or process that uses a binary, numeric, or digital approach to create, store, process, and communicate information. It includes a broad range of technologies, such as computers, smartphones, digital televisions, email, robots, artificial intelligence, the Internet, and more. It is the cornerstone of the Information Age, underpinning the rapid exchange of information globally.

Digital technology has brought about numerous benefits. Firstly, it has enhanced communication. Digital platforms like email, social media, and instant messaging allow for instantaneous, affordable, and efficient communication across the globe. Secondly, digital technology has revolutionized education. Online learning platforms, digital textbooks, and educational apps have made education more accessible and personalized.

Furthermore, digital technology has transformed the business landscape. E-commerce, digital marketing, and remote working tools have opened new avenues for business growth and flexibility. Lastly, digital technology has also made significant strides in healthcare, with telemedicine, electronic health records, and digital diagnostic tools improving healthcare delivery.

Despite its numerous benefits, digital technology also poses significant challenges. Privacy and security concerns are at the forefront, with cybercrime, data breaches, and identity theft becoming increasingly prevalent. Additionally, the digital divide, the gap between those with access to digital technology and those without, exacerbates social and economic inequalities.

Moreover, the over-reliance on digital technology can lead to health issues, including digital eye strain and mental health problems. The rapid pace of technological change also presents challenges, as individuals and businesses struggle to keep up with the latest trends and developments.

Conclusion: A Balanced Perspective on Digital Technology

In conclusion, digital technology, while transformative and beneficial, also presents significant challenges that society must address. It is crucial to approach digital technology with a balanced perspective, acknowledging its immense potential to drive progress and innovation, while also recognizing and mitigating its risks. As digital technology continues to evolve at a rapid pace, fostering digital literacy and promoting responsible digital citizenship will be key to harnessing its potential responsibly and equitably.

In the future, we must strive to create a digital world that is secure, inclusive, and beneficial for all. This will require concerted efforts from all stakeholders, including individuals, businesses, governments, and international organizations. The journey is complex, but the potential rewards are immense, promising a future where digital technology serves as a tool for empowerment, progress, and prosperity.

Institute bans use of Playboy test image in engineering journals

Lena Forsén picture used as reference photo since 1970s now breaches code of ethics, professional association says

Cropped from the shoulders up, the Playboy centrefold of Swedish model Lena Forsén looking back at the photographer is an unlikely candidate for one of the most reproduced images ever.

Shortly after it was printed in the November 1972 issue of the magazine, the photograph was digitised by Alexander Sawchuk, an assistant professor at the University of Southern California, using a scanner designed for press agencies. Sawchuk and his engineering colleagues needed new images to test their processing algorithms. Bored with TV test images, they turned to the centrefold, defending the choice by noting that it featured a face and a mixture of light and dark colours. Fortunately, the limits of the scanner meant that only the top five inches were scanned, with just Forsén’s bare shoulder hinting at the nature of the original picture.

From that beginning, the photo became a standard reference image, used countless times over the 50-plus years since to demonstrate advances in image compression technology, to test new hardware and software, and to explain image-editing techniques.

Now, though, Lena’s days may be numbered. The Institute of Electrical and Electronics Engineers (IEEE), a large global professional association, has issued a notice to its members warning against the continued use of the photo in academic articles.

“Starting 1 April, new manuscript submissions will no longer be allowed to include the Lena image,” wrote Terry Benzel, vice-president of the IEEE Computer Society’s technical and conference wing. Benzel cited a motion passed by the group’s publishing board, which reads: “IEEE’s diversity statement and supporting policies such as the IEEE code of ethics speak to IEEE’s commitment to promoting an inclusive and equitable culture that welcomes all. In alignment with this culture and with respect to the wishes of the subject of the image, Lena Forsén, IEEE will no longer accept submitted papers which include the ‘Lena image’.”

The IEEE isn’t the first organisation to call time on the photo. In 2018, the scientific journal Nature issued its own edict, blocking the image in all its research journals. “We believe that the history of the Lena image clashes with the extensive efforts to promote women undertaking higher education in science and engineering and therefore have decided to adopt this policy,” the publisher wrote in an unsigned editorial.

Plenty of reasons have been given for the image’s durability, including its “dynamic range”, the centrality of a human face, the fine detail on Lena’s hair and the feather in the hat she is wearing. But as far back as 1996, the outgoing editor in chief of one IEEE journal said, by way of explaining why he hadn’t taken action against the picture, that “the Lena image is a picture of an attractive woman”. He added: “It is not surprising that the [mostly male] image processing research community gravitated toward an image that they found attractive.”

One organisation that could have put an end to the spread of Lena’s image in an instant, but never did, was Playboy itself. In 1992, the magazine wrote to one academic journal threatening action, but never pushed the matter. A few years later, the company changed its mind. “We decided we should exploit this, because it is a phenomenon,” Playboy’s vice-president of new media said in 1997.

Forsén herself has also suggested that the photo should be retired. In 2019, she said she was “really proud” of the picture and she re-created the shot for Wired magazine, which called her “the patron saint of JPEGs”. But later that year, the documentary Losing Lena spearheaded the latest effort to encourage computer science to move on. “I retired from modelling a long time ago,” Forsén said on its release. “It’s time I retired from tech, too. We can make a simple change today that creates a lasting change for tomorrow. Let’s commit to losing me.”

Business and the Ethical Implications of Technology: Introduction to the Symposium

  • Published: 13 June 2019
  • Volume 160, pages 307–317 (2019)

  • Kirsten Martin
  • Katie Shilton
  • Jeffery Smith

While the ethics of technology is analyzed across disciplines from science and technology studies (STS), engineering, computer science, critical management studies, and law, less attention is paid to the role that firms and managers play in the design, development, and dissemination of technology across communities and within their firm. Although firms play an important role in the development of technology, and make associated value judgments around its use, it remains open how we should understand the contours of what firms owe society as the rate of technological development accelerates. We focus here on digital technologies: devices that rely on rapidly accelerating digital sensing, storage, and transmission capabilities to intervene in human processes. This symposium focuses on how firms should engage ethical choices in developing and deploying these technologies. In this introduction, we, first, identify themes the symposium articles share and discuss how the set of articles illuminate diverse facets of the intersection of technology and business ethics. Second, we use these themes to explore what business ethics offers to the study of technology and, third, what technology studies offers to the field of business ethics. Each field brings expertise that, together, improves our understanding of the ethical implications of technology. Finally we introduce each of the five papers, suggest future research directions, and interpret their implications for business ethics.

Mobile phones track us as we shop at stores and can infer where and when we vote. Algorithms based on commercial data allow firms to sell us products they assume we can afford and avoid showing us products they assume we cannot. Drones watch our neighbors and deliver beverages to fishermen in the middle of a frozen lake. Autonomous vehicles will someday communicate with one another to minimize traffic congestion and thereby energy consumption. Technology has consequences, tests norms, changes what we do or are able to do, acts for us, and makes biased decisions (Friedman and Nissenbaum 1996 ). The use of technology can also have adverse effects on people. Technology can threaten individual autonomy, violate privacy rights (Laczniak and Murphy 2006 ), and directly harm individuals financially and physically. Technologies can also be morally contentious by “forcing deep reflection on personal values and societal norms” (Cole and Banerjee 2013 , p. 555). Technologies have embedded values or politics, as they make some actions easier or more difficult (Winner 1980 ), or even work differently for different groups of people (Shcherbina et al. 2017 ). Technologies also have political consequences by structuring roles and responsibilities in society (Latour 1992 ) and within organizations (Orlikowski and Barley 2001 ), many times with contradictory consequences (Markus and Robey 1988 ).

While the ethics of technology is analyzed across disciplines from science and technology studies (STS), engineering, computer science, critical management studies, and law, less attention is paid to the role that firms and managers play in the design, development, and dissemination of technology across communities and within their firm. As emphasized in a recent Journal of Business Ethics article, Johnson (2015) notes the possibility of a responsibility gap: the abdication of responsibility around decisions that are made as technology takes on roles and tasks previously afforded to humans. Although firms play an important role in the development of technology, and make associated value judgments around its use, it remains open how we should understand the contours of what firms owe society as the rate of technological development accelerates. We focus here on digital technologies: devices that rely on rapidly accelerating digital sensing, storage, and transmission capabilities to intervene in human processes. Within the symposium, digital technologies are conceptualized to include applications of machine learning, information and communications technologies (ICT), and autonomous agents such as drones. This symposium focuses on how firms should engage ethical choices in developing and deploying these technologies. How ought organizations recognize, negotiate, and govern the values, biases, and power uses of technology? How should the inevitable social costs of technology be shouldered by companies, if at all? And what responsibilities should organizations take for designing, implementing, and investing in technology?

This introduction is organized as follows. First, we identify themes the symposium articles share and discuss how the set of articles illuminate diverse facets of the intersection of technology and business ethics. Second, we use these themes to explore what business ethics offers to the study of technology and, third, what technology studies offers to the field of business ethics. Each field brings expertise that, together, improves our understanding of the ethical implications of technology. Finally we introduce each of the five papers, suggest future research directions, and interpret their implications for business ethics.

Technology and the Scope of Business Ethics

For some it may seem self-evident that the use and application of digital technology is value-laden in that how technology is commercialized conveys a range of commitments on values ranging from freedom and individual autonomy, to transparency and fairness. Each of the contributions to this special issue discusses elements of this starting point. They also—implicitly and explicitly—encourage readers to explore the extent to which technology firms are the proper locus of scrutiny when we think about how technology can be developed in a more ethically grounded fashion.

Technology as Value-Laden

The articles in this special issue largely draw from a long tradition in computer ethics and critical technology studies that sees technology as ethically laden: technology is built from various assumptions that—either implicitly or explicitly—express certain value commitments (Johnson 2015 ; Moor 1985 ; Winner 1980 ). This literature argues that, through affordances—properties of technologies that make some actions easier than others—technological artifacts make abstract values material. Ethical assumptions in technology might take the form of particular biases or values accidentally or purposefully built into a product’s design assumptions, as well as unforeseen outcomes that occur during use (Shilton et al. 2013 ). These issues have taken on much greater concern recently as forms of machine learning and various autonomous digital systems drive an increasing share of decisions made in business and government. The articles in the symposium therefore consider ethical issues in technology design including sources of data, methods of computation, and assumptions in automated decision making, in addition to technology use and outcomes.

A strong example of value-laden technology is the machine learning (ML) algorithms that power autonomous systems. ML technology underlies much of the automation driving business decisions in marketing, operations, and financial management. The algorithms that make up ML systems “learn” by processing large corpora of data. The data on which algorithms learn, and on which they ultimately render decisions, are a source of ethical challenges. For example, biased data can lead to decisions that discriminate against individuals due to morally arbitrary characteristics, such as race or gender (Danks and London 2017; Barocas and Selbst 2016). One response to this problem is for companies to think more deliberately about how the data driving automation are selected and assessed to understand discriminatory effects. However, the view that an algorithm or computer program can ever be ‘clean’ feeds into the (mistaken) idea that technology can be neutral. An alternative approach is to frame AI decisions—like all decisions—as biased and capable of making mistakes (Martin 2019). The biases can come from the design, the training data, or the application to human contexts.

Corporate Responsibility for the Ethical Challenges of Technology

It is becoming increasingly accepted that the firms that design and implement technology have moral obligations to proactively address problematic assumptions behind, and outcomes of, new digital technologies. There are two general reasons why this responsibility rests with the firms that develop and commercialize digital technologies. First, in a nascent regulatory environment, the social costs and ethical problems associated with new technologies are not addressed through other institutions. We do not yet have agencies of oversight, independent methods of assessment or third parties that can examine how new digital technologies are designed and applied. This may change, but in the interim, the non-ideal case of responsible technological development is internal restraint, not external oversight. Obvious examples of this are the numerous efforts put forth by large firms, such as Microsoft and Google, focused on developing principles or standards for the responsible use of artificial intelligence (AI). There are voices of skepticism that such industry efforts will genuinely focus on the public’s interest; however, it is safe to say that the rate of technological development carries an expectation that firms responsible for innovation are also responsible for showing restraint and judgment in how technology is developed and applied (cf. Smith and Shum 2018).

A second reason that new technologies demand greater corporate responsibility is that technologies require attention to ethics during design, and design choices are largely governed by corporations. Design is the projection of how a technology will work in use and includes assumptions as to which users and uses matter and which do not, and how the technology will be used. As STS scholar Akrich notes, “…A large part of the work of innovators is that of ‘inscribing’ this vision of (or prediction about) the world in the technical content of the new object” (Akrich 1992, p. 208). Engineers and operations directors need to be concerned about how certain values—like transparency, fairness, and economic opportunity—are translated into design decisions.

Because values are implicated during technology design, developers make value judgments as part of their corporate roles. Engineers and developers of technology inscribe visions or preferences of how the world works (Akrich 1992; Winner 1980). This inscription manifests in choices about how transparent, easy to understand and fix, or inscrutable a technology is (Martin 2019), as well as who can use it easily or how it might be misused (Friedman and Nissenbaum 1996). Ignoring the value-laden decisions in design does not make them disappear. Philosopher Richard Rudner addresses this in the realm of science: for Rudner, scientists as scientists make value judgments, and ignoring value-laden decisions means those decisions are made badly because they are made without much thought or consideration (Rudner 1953). In other words, if firms ignore the value implications of design, engineers still make moral decisions; they simply do so without an ethical analysis.

Returning to the example of bias-laden ML algorithms illustrates ways that organizations can work to acknowledge and address those biases through their business practices. For example, acknowledging bias aligns with calls for algorithms to be “explainable” or “interpretable”: capable of being deployed in ways that allow users and affected parties to more fully understand how an algorithm rendered its decisions, including potential biases (cf. Kim and Routledge 2018; Kim 2018; Selbst and Barocas 2018). Explainable and interpretable algorithms require design decisions that carry implications for corporate responsibility. If a design team creates an impenetrable AI decision, where users are unable to judge or address potential bias or mistakes, then the firm in which that team works can be seen to have responsibility for those decisions (Martin forthcoming).

It follows from these two observations—technology firms operate with nascent external oversight and designers are making value-laden decisions as part of their work in firms—that the most direct means of addressing ethical challenges in new technology is through management decisions within technology firms. The articles in this special issue point out many ways this management might take place. For example, in their paper “A Micro-Ethnographic Study of Big Data Innovation in the Financial Services Sector,” authors Richard Owen and Keren Naa Abeka Arthur give a descriptive account focusing on how an organization makes ethics a selling point of a new financial services platform. Ulrich Leicht-Deobald and his colleagues take a normative tack, writing in “The Challenges of Algorithm-Based HR Decision-Making for Personal Integrity” that firms designing technologies to replace human decision making with algorithms should consider their impact on the personal integrity of humans. Tae Wan Kim and Allan Scheller-Wolf present a case for increased corporate responsibility for what they call technological unemployment: the job losses that will accompany an accelerated pace of automation in the workplace. Their discussion, “Technological Unemployment, Meaning in Life, Purpose of Business and the Future of Stakeholders,” asks what corporations owe not only to employees who directly lose their jobs to technology, but what corporations owe to a future society when they pursue workerless production strategies.

The Interface of Business and Technology Ethics

One of the central insights discussed in the pages of this special issue is that technology-driven firms assume a role in society that demands a consideration of ethical imperatives beyond their financial bottom line. How does a given technology fit within a broader understanding of the purpose of a firm as value creation for the firm and its stakeholders? The contributions to this special issue, directly or indirectly, affirm that neither the efficiencies produced by the use of digital technology nor enhanced financial returns to equity investors are, on their own, sufficient to justify the development, use, or commercialization of a technology. These arguments will not surprise business ethicists, who routinely debate the purpose and responsibilities of for-profit firms. Still, the fact that for-profit firms use new technology and profit from the development of technology raises the question of how the profit-motive impacts the ethics of new digital technology.

One way of addressing this question is to take a cue from other, non-digital technologies. For example, the research, development, and commercialization necessary for pharmaceutical products carry ethical considerations for associated entities, whether individual scientists, government agencies, non-governmental organizations, or for-profit companies. Ethical questions include: how are human test subjects treated? How is research data collected and analyzed? How are research efforts funded, and are there any conflicts of interest that could corrupt the scientific validity of that research? Do medical professionals fully understand the costs and benefits of a particular pharmaceutical product? How should new drugs be priced? The special set of ethical issues related to pharmaceutical technology financed through private capital markets includes the ones raised above plus a consideration of how the profit-motive, first, creates competing ethical considerations unrelated to pharmaceutical innovation itself, and second, produces social relationships within firms that may compromise the standing responsibilities that individuals and organizations have to the development of pharmaceutical products that support the ideal of patient health.

A parallel story can be told for digital technology. There are some ethical issues that are closely connected to digital technology, such as trust, knowledge, privacy, and individual autonomy. These issues, however, take on a heightened concern when the technologies in question are financed through the profit-motive. We have to be attentive to the extent to which a firm’s inclination to show concern for customer privacy, for instance, can be marginalized when its business model relies on using predictive analytics for advertising purposes (Roose 2019). A human resource algorithm that possibly diminishes employee autonomy may be less scrutinized if its use cuts operational expenses in a large, competitive industry. The field of business ethics contributes to the discussion about the responsible use of new technology by illustrating how the interface of the market, the profit-motive, and the values of technology can be brought into a more stable alignment. Taken together, the contributions in this special issue provide a blueprint for this task. They place technology firmly within the scope of business ethics: managers and firms can (and should) create and implement technology in a way that remains attentive to value creation for the firm and its stakeholders, including employees, users, customers, and communities.

At the same time, those studying the social aspects of technology need to remain mindful of the special nature—and benefits—of business. Business is a valuable social mechanism to finance large-scale innovation and economic progress. It is hard to imagine that some of the purported benefits of autonomous vehicles, for example, would be on our doorstep if it were not for the presence of nimble, fast-paced private markets in capital and decentralized transportation services. Business is important in the development of technology even if we are concerned about how well it upholds the values of responsible use and application of technology. The challenge taken up by the discussions herein is to explore how we want to configure the future and the role that business can play in that future. Are firms exercising sufficient concern for privacy in the use of technology? What are the human costs associated with relegating more and more decisions to machines, rather than ourselves? Is there an opportunity for further regulatory oversight? If so, in what technological domain? Business ethicists interested in technology need to pay attention to the issues raised by this symposium’s authors and those that study technology need to appreciate the special role that business can play in financing the realization of technology’s potential.

In addition, the articles in this symposium illustrate how the intersection of business ethics and technology ethics illuminates how our conceptions of work—and working—shape the ethics of new technology. The symposium contributions herein have us think critically about how the employment relationship is altered by the use and application of technology. Again, Ulrich Leicht-Deobald and his co-authors prompt an examination of how the traditional HR function is altered by the assistance of machine-learning platforms. Kim and Scheller-Wolf force an examination of what firms using job-automation technologies owe to both displaced and prospective employees, which expands our conventional notions of responsibility to employees beyond those who happen to be employed by a particular firm, in a particular industry. Although not exclusively focused on corporate responsibility within the domain of employment, Aurelie Leclercq-Vandelannoitte’s contribution “Is Technological ‘Ill-Being’ Missing from Corporate Responsibility?” encourages readers to think about the implications of “ubiquitous” uses of information technology for future individual well-being and social meaning. There are clear connections between her examination of how uses of technology can adversely impact freedom, privacy, and respect and how ethicists and policy makers might re-think firms’ social responsibilities to employees. And, even more pressing, these discussions provide a critical lens for how we think through more fundamental problems such as the rise of work outside of the confines of the traditional employment relationship in the so-called “gig economy” (Kondo and Singer 2019).

How Business Ethics Informs Technology Ethics

Business ethics can place current technology challenges into perspective by considering the history of business and markets behaving outside the norms, and the corrections made over time. For example, the online content industry’s claim that changes to the digital marketing ecosystem will kill the industry echoes claims made by steel companies fighting environmental regulation in the 1970s (IAB 2017 ; Lomas 2019 ). Complaints that privacy regulation would curtail innovation echo the automobile industry’s complaints about safety regulation in the 1970s. Here we highlight two areas where business ethics’ understanding of the historical balance between industry desires and pro-social regulation can offer insights on the ethical analysis of technology.

Human Autonomy and Manipulation

There are a host of market actors impacted by the rise of digital technology. Consumers are an obvious case. What we buy and how our identities are created through marketing is, arguably, ground zero for many of the ethical issues discussed by the articles in this symposium. Recent work has begun to examine how technology can undermine the autonomy of consumers or users. For example, many games and online platforms are designed to encourage a dopamine response that makes users want to come back for more (“Technology Designed for Addiction” n.d.). Similar to the high produced by gambling machines, which have long been designed for maximum addiction (Schüll 2014), games and social media products encourage users to seek the interaction’s positive feedback to the point where their lives can be disrupted. Through addictive design patterns, technology firms create a vulnerable consumer (Brenkert 1998). Addictive design manipulates consumers and takes advantage of human proclivities to threaten their autonomy.

A second example of manipulation and threatened autonomy is the use of aggregated consumer data to target consumers. Data aggregators can frequently gather enough information about consumers to infer their concerns and desires, and use that information to narrowly and accurately target ads. By pooling diverse information on consumer behavior, such as location data harvested from a phone and Internet browsing behavior tracked by data brokers, marketers can target consumers in ways that undermine individuals’ ability to make a different decision (Susser et al. 2019). If marketers infer you are worried about depression based on what you look up or where you go, they can target you with herbal remedies. If marketers guess you are dieting or recently stopped gambling, they can target you with food or casino ads. Business ethics has a long history of examining the ways that marketing strategies target vulnerable populations in a manner that undermines autonomy. A newer, interesting twist on this problem is that these tactics have been extended beyond marketing products into politics and the public sphere. Increasingly, social media and digital marketing platforms are being used to inform and sway debate in the public sphere. The Cambridge Analytica scandal is a well-known example of the use of marketing tactics, including consumer profiling and targeting based on social media data, to influence voters. Such tactics have serious implications for autonomy, because individuals’ political choices can now be influenced as powerfully as their purchasing decisions.

More generally, the articles in this symposium help us understand how the creation and implementation of new technology fits alongside the other pressures experienced within businesses. The articles give us lenses on the relationship between an organization’s culture—its values, processes, commitments, and governance structures—and the challenge of developing and deploying technology in a responsible fashion. There has been some work on how individual developers might or might not make ethical decisions, but very little work on how pressures from organizations and management matter to those decisions. Recent work by Spiekermann et al., for example, set out to study developers, but discovered that corporate cultures around privacy had large impacts on privacy and security design decisions (Spiekermann et al. 2018 ). Studying corporate cultures of ethics, and the complex motivations that managers, in-house lawyers and strategy teams, and developers bring to ethical decision making, is an important area in business ethics, and one upon which the perspectives collected here shed light.

Much of the current discussion around AI, big data, algorithms, and online platforms centers on trust. How can individuals (or governments) trust AI decisions? How do online platforms reinforce or undermine the trust of their users? How is privacy related to trust in firms and trust online? Trust, defined as someone’s willingness to become vulnerable to someone else, is studied at three levels in business ethics: an individual’s general trust disposition, an individual’s trust in a specific firm, and an individual’s institutional trust in a market or community (Pirson et al. 2016 ). Each level is critical to understanding the ethical implications of technology. Trust disposition has been found to impact whether consumers are concerned about privacy: consumers who are generally trusting may have high privacy expectations but lower concerns about bad acts by firms (Turow et al. 2015 ).

Users’ trust in firms can be influenced by how technology is designed and deployed. In particular, design may inspire consumers to overly trust particular technologies. This problem arguably creates a fourth level of trust unique to businesses developing new digital technologies. More and more diagnostic health care decisions, for example, rely upon automated data analysis and algorithmic decision making. Trust is a particularly pressing topic for such applications. Similar concerns exist for autonomous systems in domains such as financial services and transportation. Trust in AI is not simply about whether a system or decision making process will “do” what it purportedly states it will do; rather, trust is about having confidence that when the system does something that we do not fully understand, it will nevertheless be done in a manner that supports our interests. David Danks (2016) has argued that such a conception of trust moves beyond mere predictability—which artificial intelligence, by definition, makes difficult—and toward a deeper sense of confidence in the system itself (cf. LaRosa and Danks 2018). Finally, more work is needed to identify how technology—e.g., AI decisions, sharing and aggregating data, online platforms, hyper-targeted ads—impacts consumers’ institutional trust online. Do consumers see questionable market behavior and begin to distrust an overall market? For example, hearing about privacy violations—such as the use of a data aggregator—impacts individuals’ institutional trust online and makes consumers less likely to engage with market actors online (Martin 2019).

Stakeholder Relations

Technology firms face difficult ethical choices in their supply chains and in how products should be developed and sold to customers. For example, technology firms such as Google and Microsoft are openly struggling with whether to create technology for immigration and law enforcement agencies and U.S. and international militaries. Search engines and social networks must decide the type of relationship to have with foreign governments. Device companies must decide where gadgets will be manufactured, under what working conditions, and where components will be mined and recycled.

Business ethics offers a robust discussion about whether and how to prioritize the interests of various stakeholders. For example, oil companies debate whether and how to include the claims of environmental groups. Auto companies face claims from unions, suppliers, and shareholders and must navigate all three simultaneously. Clothing manufacturers decide who to partner with for outsourcing. So when cybersecurity firms consider whether to take on foreign governments as clients, their analysis need not be completely new. An ethically attuned approach to cybersecurity will inevitably face the difficult choice of how technology, if at all, should be limited in development, scope, and sale. Similarly, firms developing facial recognition technologies have difficult questions to ask about the viability of those products, if they take seriously the perspective of stakeholders who may find those products an affront to privacy. More research in the ethics of new digital technology should utilize existing work on the ethics of managing stakeholder interests to shed light on the manner in which technology firms should appropriately balance the interests of suppliers, financiers, employees, and customers.

How Technology Ethics Informs Business

Just as business ethics can inform the study of recent challenges in technology ethics, scholars who have studied technology, particularly scholars of sociotechnical systems, can add to the conversation in business ethics. Scholarship in values in design—how social and political values become design decisions—can inform discussions about ethics within firms that develop new technologies. And research in the ethical implications of technology—the social impacts of deployed technologies—can inform discussions of downstream consequences for consumers.

Values in Design

Values in design (ViD) is an umbrella term for research in technology studies, computer ethics, human–computer interaction, information studies, and media studies that focuses on how human and social values ranging from privacy to accessibility to fairness get built into, or excluded from, emerging technologies. Some values in design scholarship analyzes technologies themselves to understand values that they do, or don’t, support well (Brey 2000; Friedman and Nissenbaum 1996; Winner 1980). Other ViD scholars study the people developing technologies to understand their human and organizational motivations and the ways those relate to design decisions (Spiekermann et al. 2018; JafariNaimi et al. 2015; Manders-Huits and Zimmer 2009; Shilton 2018; Shilton and Greene 2019). A third stream of ViD scholarship builds new technologies that purposefully center particular human values or ethics (Friedman et al. 2017).

Particularly relevant to business ethics is the way this literature examines how both individually and organizationally held values become translated into design features. The values in design literature points out that the material outputs of technology design processes belong alongside policy and practice decisions as an ethical impact of organizations. In this respect, the values one sees in an organization’s culture and practices are reflected in its approach to the design of technology, either in how that technology is used or how it is created. Similarly, an organization’s approach to technology is a barometer of its implicit and explicit ethical commitments. Apple and Facebook make use of similar data-driven technologies in providing services to their customers; but how those technologies are put to use—within what particular domain and for what purpose—exposes fundamental differences in the ethical commitments to which each company subscribes. As Apple CEO Tim Cook has argued publicly, unlike Facebook, Apple’s business model does not “traffic in your personal life” and will not “monetize [its] customers” (Wong 2018 ). How Facebook and Apple managers understand the boundaries of individual privacy and acceptable infringements on privacy is conveyed in the manner in which their similar technologies are designed and commercialized.

Ethical Implications of Technology and Social Informatics

Technology studies has also developed a robust understanding of technological agency—how technology acts in the world—while also acknowledging the agency of technology users. Scholars who study the ethical implications of technology and social informatics focus on the ways that deployed technology reshapes power relationships, creates moral consequences, reinforces or undercuts ethical principles, and enables or diminishes stakeholder rights and dignity (Martin forthcoming; Kling 1996). Importantly, technology studies talks about the intersecting roles of material and non-material actors (Latour 1992; Law and Callon 1988). Technology, when working in concert with humans, impacts who does what. For example, algorithms influence the delegation of roles and responsibilities within a decision. Depending on how an algorithm is deployed in the world, humans working with its results may have access to the training data (or not), understand how the algorithm reached a conclusion (or not), and have an ability to see the decision relative to similar decisions (or not). Choices about the delegation of tasks between algorithms and individuals may have moral import, as humans with more insight into the components of an algorithmic decision may be better equipped to spot systemic unfairness. Technology studies offers a robust vocabulary for describing where ethics intersect with technology, ranging from design to deployment decisions. While business ethics includes an ongoing discussion about human autonomy, as noted above, technology studies adds a conversation about technological agency.

Navigating the Special Issue

The five papers that comprise this thematic symposium range in their concerns from AI and the future of work to big data to surveillance to online cooperative platforms. They explore ethics in the deployment of future technologies, ethics in the relationship between firms and their workers, ethics in the relationship between firms and other firms, and ethical governance of technology use within a firm. All five articles place the responsibility for navigating these difficult ethical issues directly on firms themselves.

Technology and the Future of Employment

Tae Wan Kim and Allan Scheller-Wolf raise a number of important issues related to technologically enabled job automation in their paper “Technological Unemployment, Meaning in Life, Purpose of Business, and the Future of Stakeholders.” They begin by emphasizing what they call an “axiological challenge” posed by job automation. The challenge, simply put, is that trends in job automation (including in manufacturing, the service sector and knowledge-based professions) will likely produce a “crisis in meaning” for individuals. Work—apart from the economic means that it provides—is a deep source of meaning in our lives and a future where work opportunities are increasingly unavailable means that individual citizens will be deprived of the activities that heretofore have defined their social interactions and given their life purpose. If such a future state is likely, as Kim and Scheller-Wolf speculate, what do we expect of corporations who are using the automation strategies that cause “technological unemployment”?

Their answer to this question is complicated, yet instructive. They argue that neither standard shareholder nor stakeholder conceptions of corporate responsibility provide the necessary resources to fully address the crisis in meaning tied to automation. Both approaches fall short because they conceive of corporate responsibility in terms of what is owed to the constituencies that make up the modern firm. But these approaches have little to say about whether there is any entitlement to employment opportunities or whether society is made better off with employment arrangements that provide meaning to individual employees. As such, Kim and Scheller-Wolf posit that there is a second, “teleological challenge” posed by job automation. The moral problem of a future without adequate life-defining employment is something that cannot straightforwardly be answered by existing conceptions of the purpose of the corporation.

Kim and Scheller-Wolf encourage us to think about the future of corporate responsibility with respect to “technological unemployment” by going back to the “Greek agora,” which they take to be in line with some of the premises of stakeholder theory. Displaced workers are neither “employees” nor “community” members in the standard senses of the terms. So, as in ancient Greece, the authors imagine a circumstance where meaningful social interactions are facilitated by corporations that offer “university-like” communities where would-be employees and citizens can participate and collectively deliberate about aspects of the common good, including, but not limited to, how corporations conduct business and how to craft better public policy. This would add a new level of “agency” to their lives and allow them to play an integral role in how business takes place. The restoration of this agency allows individuals to maintain another important sense of meaning in their lives, apart from the work that may have helped define their sense of purpose in prior times. This suggestion is prescriptive and, at times, seems idealistic. But, as with other proposals, such as the recent discussion of taxing job automation, it is part of an important set of conversations that need to be had to creatively imagine the future in light of technological advancement (Porter 2019).

The value in this discussion, which frames a distinctive implication for future research, is that it identifies how standard accounts of corporate responsibility are inadequate to justify responsibilities to future workers displaced by automation. It changes the way scholars should understand meaningful work, moving beyond meaning at work to meaning in place of work, and sketches an alternative to help build a more comprehensive social response to the changing nature of employment that technology will steadily bring.

Technology and Human Well-Being

Aurelie Leclercq-Vandelannoitte’s “Is Employee Technological ‘Ill-Being’ Missing From Corporate Responsibility? The Foucauldian Ethics of Ubiquitous IT Uses in Organizations” explores the employment relationship more conceptually by introducing the concept of “technological ill-being” that can accompany the adoption of ubiquitous information technology in the workplace. Leclercq-Vandelannoitte defines technological ill-being as the tension or disconnect between an individual’s social attributes and aspirations when using modern information technology (IT) and the system of norms, rules, and values within the organization. Leclercq-Vandelannoitte asks a series of research questions as to how technological ill-being is framed in organizations, the extent to which managers are aware of the idea, and who is responsible for employees’ technological ill-being.

Leclercq-Vandelannoitte leverages Foucauldian theory and a case study to answer these questions. Foucault offers a rich narrative about the need to protect an individual’s ability to enjoy “free thought from what it silently thinks and so enable it to think differently” (Foucault 1983 , p. 216). The Foucauldian perspective offers an ethical frame by which to analyze ubiquitous IT, where ethics “is a practice of the self in relation to others, through which the self endeavors to act as a moral subject.” Perhaps most importantly, the study, through the lens of Foucault, highlights the importance of self-reflection and engagement as necessary to using IT ethically. An international automotive company provides a theoretically important case of the deployment of ubiquitous IT contemporaneous with strong engagement with corporate social responsibility. The organization offers a unique case in that the geographically dispersed units adopted unique organizational patterns and working arrangements for comparison.

The results illustrate that technological ill-being is not analyzed within broader CSR initiatives but rather framed as “localized, individual, or internal consequences for some employees.” Further, the blind spot toward employees’ ill-being constitutes an abdication of responsibility, which benefits the firm. The paper has important implications for the corporate responsibility of organizations with regard to the effects of ubiquitous IT on employee well-being—an underexamined area. The author brings to the foreground the value-laden-ness of technology that is deployed within an organization and centers the conversation on employees in particular. Perhaps most importantly, ethical self-engagement becomes a goal for ethical IT implementation and a critical concept for understanding technological ill-being. Leclercq-Vandelannoitte frames claims of “unawareness” of the value-laden implications of ubiquitous IT as “the purposeful abdication of responsibility,” thereby placing the responsibility for technological ill-being squarely on the firm that deploys the IT. Future work could take the same critical lens toward firms that sell (rather than internally deploy) ubiquitous IT and their responsibility to their consumers.

Technology and Governance

Richard Owen and Keren Naa Abeka Arthur’s “A Micro-Ethnographic Study of Big Data-Based Innovation in the Financial Services Sector: Governance, Ethics and Organisational Practices” uses a case study of a financial services firm to illustrate how organizations might responsibly govern their uses of big data. This topic is timely, as firms in numerous industries struggle to self-regulate their use of sensitive data about their users. The focus on how a firm achieves ethics-oriented innovation is unusual in the literature and provides important evidence of the factors that influence a firm’s ability to innovate ethically.

The authors describe a company that governs its uses of big data on multiple levels, including through responses to legislation, industry standards, and internal controls. The authors illustrate the ways in which the company strives for ethical data policies that support mutual benefit for their stakeholders. Though the company actively uses customer data to develop new products, the company’s innovation processes explicitly incorporate both customer consent mechanisms, and client and customer feedback. The company also utilizes derived, non-identifiable data for developing new insights and products, rather than using customers’ identifiable data for innovation. The authors describe how national regulation, while not directly applicable to the big data innovations studied, guided the company’s data governance by creating a culture of compliance with national data privacy protections. This has important consequences for both regulators and consumers. This finding implies that what the authors refer to as “contextual” legislation—law that governs other marginally related data operations within the firm—can positively influence new innovations, as well. The authors write that contextual data protection legislation was internalized by the company and “progressively embedded” into future innovation.

The authors also found that company employees directly linked ethical values with the success of the company, highlighting consumer trust as critical to both individual job security and organizational success. This finding speaks to the importance of corporate culture in setting the values incorporated into technology design. Owen & Arthur use the company’s practices as a case study to begin to define ethical and responsible financial big data innovation. Their evidence supports frameworks for responsible innovation that emphasize stakeholder engagement, anticipatory ethics, reflexivity on design teams, and deliberative processes embedded in development practice.

Technology and Personal Integrity

Ulrich Leicht-Deobald and his colleagues unpack the responsibilities organizations have to their workers when adopting and implementing new data collection and behavior analysis tools in “The Challenges of Algorithm-based HR Decision-making for Personal Integrity.” It unites theory from business ethics and the growing field of critical algorithm and big data studies to examine the topical issue of algorithmic management of workers by human resource departments. The authors focus on tools for human resources decision making that monitor employees and use algorithms and machine learning to make assessments, such as algorithmic hiring and fraud monitoring tools. They argue that, in addition to well-documented problems with bias and fairness, such algorithmic tools have the potential to undermine employees’ personal integrity, which they define as consistency between convictions, words, and actions. The authors argue that algorithmic hiring technologies threaten a fundamental human value by shifting employees to a compliance mindset. Their paper demonstrates how algorithmic HR tools undermine employees’ personal integrity by encouraging blind trust in rules and discouraging moral imagination. The consequences of such undermining include increased information asymmetries between management and employees. The authors classify HR decision making as an issue of corporate responsibility and suggest that companies that wish to use predictive HR technologies must take mitigation measures. They suggest participatory design of algorithms, in which employees would be stakeholders in the design process, as one possible mitigative tactic. They also advocate for critical data literacy for managers and workers, and adherence to private regulatory regimes such as the Association for Computing Machinery’s (ACM) code of ethics and professional conduct and the Toronto Declaration of Machine Learning.

This paper makes an important contribution to the scoping of corporate responsibility for the algorithmic age. By arguing that companies using hiring algorithms have a moral duty to protect their workers’ personal integrity, it places the ethical dimensions of the design and deployment of algorithms alongside more traditional corporate duties such as responsibility for worker safety and wellness. And like Owen and Arthur, the authors believe that attention to ethics in design—here framed as expanding employees’ capacity for moral imagination—will open up spaces for reflection and ethical discourse within companies.

Technology and Trust

Livia Levine’s “Digital Trust and Cooperation with an Integrative Digital Social Contract” focuses on digital business communities and the role of the members in creating communities of trust. Levine notes that digital business communities, such as online markets or business social networking communities, have all the markers of a moral community as conceived by Donaldson and Dunfee in their Integrative Social Contracts Theory (ISCT) (Donaldson and Dunfee 1999): individuals in these communities form relationships that generate authentic ethical norms. Digital business communities differ from traditional communities, however, in that participants cannot always identify each other and do not always have the legal or social means to punish participant businesses that renege on the community’s norms.

By identifying the hypernorm of “the efficient pursuit of aggregate economic welfare,” which would transcend communities and provide guidance for the development of micronorms in a community, Levine then focuses on trust and cooperation micronorms. Levine shows that trust and cooperation are “an instantiation of the hypernorm of necessary social efficiency and that authentic microsocial norms developed for the ends of trust and cooperation are morally binding for members of the community.” Levine uses a few examples, such as Wikipedia, open-source software, online reviews, and Reddit, to illustrate micronorms at play. In addition, Levine illustrates how the ideas of community and moral free space should be applied in new arenas, including online ones.

The paper has important implications for both members of the social contract community and platforms that host the community to develop norms focused on trust and cooperation. First, the idea of community has traditionally been applied to people who know each other. However, Levine makes a compelling case as to why community can and should be applied to online groups of strangers—strangers in real life, but known online. Future research could explore the responsibilities of platforms that facilitate or hinder the development of authentic norms for communities on their service. For example, if a gaming platform is seen as a community of gamers, then what are the obligations of the gaming platform to enforce hypernorms and support the development of authentic micronorms within communities? Levine’s approach opens up many avenues to apply the ideas behind ISCT in new areas.

While each discussion in this symposium offers a specific, stand-alone contribution to the ongoing debate about the ethics of the digital economy, the five larger themes addressed by the articles—the future of employment, personal identity and integrity, governance and trust—will likely continue to occupy scholars’ attention for the foreseeable future. More importantly, the diversity of theoretical perspectives and methods represented within this issue is illustrative of how the ethical challenges presented by new information technologies are likely best understood through continued cross-disciplinary conversations with engineers, legal theorists, philosophers, organizational behaviorists, and information scientists.

Akrich, M. (1992). The de-scription of technological objects. In W. Bijker & J. Law (Eds.), Shaping technology/building society: Studies in sociotechnical change (pp. 205–224). Cambridge, MA: MIT Press.

Barocas, S. I., & Selbst, A. W. (2016). Big data’s disparate impact. California Law Review, 104, 671–733.

Brenkert, G. G. (1998). Marketing and the vulnerable. The Ruffin Series of the Society for Business Ethics , 1 , 7–20.

Brey, P. (2000). Method in computer ethics: Towards a multi-level interdisciplinary approach. Ethics and Information Technology , 2 (2), 125–129.

Cole, B. M., & Banerjee, P. M. (2013). Morally contentious technology-field intersections: The case of biotechnology in the United States. Journal of Business Ethics, 115 (3), 555–574.

Danks, D. (2016). Finding trust and understanding in autonomous systems. The Conversation. Retrieved from https://theconversation.com/finding-trust-and-understanding-in-autonomous-technologies-70245

Danks, D., & London, A. J. (2017). Algorithmic bias in autonomous systems. Proceedings of the 26th International Joint Conference on Artificial Intelligence. Retrieved from https://www.cmu.edu/dietrich/philosophy/docs/london/IJCAI17-AlgorithmicBias-Distrib.pdf

Donaldson, T., & Dunfee, T. W. (1999). Ties that bind: A social contracts approach to business ethics. Harvard Business Press.

Foucault, M. (1983). The subject and power. In H. Dreyfus & P. Rabinow (Eds.), Michel Foucault: Beyond structuralism and hermeneutics (2nd ed., pp. 208–228). Chicago: University of Chicago Press.

Friedman, B., Hendry, D. G., & Borning, A. (2017). A survey of value sensitive design methods. Foundations and Trends® in Human–Computer Interaction, 11 (2), 63–125.

Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems (TOIS), 14 (3), 330–347.

IAB. (2017). The economic value of the advertising-supported internet ecosystem. https://www.iab.com/insights/economic-value-advertising-supported-internet-ecosystem/

JafariNaimi, N., Nathan, L., & Hargraves, I. (2015). Values as hypotheses: Design, inquiry, and the service of values. Design Issues, 31 (4), 91–104.

Johnson, D. G. (2015). Technology with no human responsibility? Journal of Business Ethics, 127 (4), 707.

Kim, T. W. (2018). Explainable artificial intelligence, the goodness criteria and the grasp-ability test. Retrieved from https://arxiv.org/abs/1810.09598

Kim, T. W., & Routledge, B. R. (2018). Informational privacy, a right to explanation and interpretable AI. 2018 IEEE Symposium on Privacy-Aware Computing. https://doi.org/10.1109/pac.2018.00013

Kling, R. (1996). Computerization and controversy: Value conflicts and social choices. San Diego: Academic Press.

Kondo, A., & Singer, A. (2019 April 3). Labor without employment. Regulatory Review. Retrieved from https://www.theregreview.org/2019/04/03/kondo-singer-labor-without-employment/

Laczniak, G. R., & Murphy, P. E. (2006). Marketing, consumers and technology. Business Ethics Quarterly, 16 (3), 313–321.

LaRosa, E., & Danks, D. (2018). Impacts on trust of healthcare AI. Proceedings of the 2018 AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society. https://doi.org/10.1145/3278721.3278771

Latour, B. (1992). Where are the missing masses? The sociology of a few mundane artifacts. In W. Bijker & J. Law (Eds.), Shaping technology/building society: Studies in sociotechnical change (pp. 225–258). Cambridge, MA: MIT Press.

Law, J., & Callon, M. (1988). Engineering and sociology in a military aircraft project: A network analysis of technological change. Social Problems, 35 (3), 284–297. https://doi.org/10.2307/800623.

Lomas, N. (2019). Even the IAB warned adtech risks EU privacy rules. TechCrunch. https://techcrunch.com/2019/02/21/even-the-iab-warned-adtech-risks-eu-privacy-rules/

Manders-Huits, N., & Zimmer, M. (2009). Values and pragmatic action: The challenges of introducing ethical intelligence in technical design communities. International Review of Information Ethics, 10 (2), 37–45.

Markus, M. L., & Robey, D. (1988). Information technology and organizational change: Causal structure in theory and research. Management Science, 34 (5), 583–598.

Martin, K. (2019). Designing ethical algorithms. MIS Quarterly Executive, June.

Martin, K. (Forthcoming). Ethics and accountability of algorithms. Journal of Business Ethics.

Moor, J. H. (1985). What is computer ethics? Metaphilosophy, 16 (4), 266–275.

Orlikowski, W. J., & Barley, S. R. (2001). Technology and institutions: What can research on information technology and research on organizations learn from each other? MIS Quarterly, 25 (2), 145–165.

Pirson, M., Martin, K., & Parmar, B. (2016). Public trust in business and its determinants. Business & Society. https://doi.org/10.1177/0007650316647950.

Porter, E. (2019 February 23). Don’t fight the robots, tax them. New York Times. Retrieved from https://www.nytimes.com/2019/02/23/sunday-review/tax-artificial-intelligence.html

Rose, K. (2019 January 30). Maybe only Tim Cook can fix Facebook’s privacy problem. New York Times. Retrieved from https://www.nytimes.com/2019/01/30/technology/facebook-privacy-apple-tim-cook.html

Rudner, R. (1953). The scientist qua scientist makes value judgments. Philosophy of Science, 20 (1), 1–6.

Schüll, N. D. (2014). Addiction by design: Machine gambling in Las Vegas (Reprint edition). Princeton: Princeton University Press.

Selbst, A. D., & Barocas, S. I. (2018). The intuitive appeal of explainable machines. Fordham Law Review, 87, 1085–1140.

Shcherbina, A., Mattsson, C. M., Waggott, D., Salisbury, H., Christle, J. W., Hastie, T., … Ashley, E. A. (2017). Accuracy in wrist-worn, sensor-based measurements of heart rate and energy expenditure in a diverse cohort. Journal of Personalized Medicine, 7 (2), 3. https://doi.org/10.3390/jpm7020003

Shilton, K. (2018). Engaging values despite neutrality: Challenges and approaches to values reflection during the design of internet infrastructure. Science, Technology and Human Values, 43 (2), 247–269.

Shilton, K., & Greene, D. (2019). Linking platforms, practices, and developer ethics: Levers for privacy discourse in mobile application development. Journal of Business Ethics, 155 (1), 131–146.

Shilton, K., Koepfler, J. A., & Fleischmann, K. R. (2013). Charting sociotechnical dimensions of values for design research. The Information Society, 29 (5), 259–271.

Smith, B., & Shum, H. (2018). The future computed: Artificial intelligence and its role in society. Retrieved from https://blogs.microsoft.com/blog/2018/01/17/future-computed-artificial-intelligence-role-society/

Spiekermann, S., Korunovska, J., & Langheinrich, M. (2018). Inside the organization: Why privacy and security engineering is a challenge for engineers. Proceedings of the IEEE, PP (99), 1–16.

Susser, D., Roessler, B., & Nissenbaum, H. (2019). Online manipulation: Hidden influences in a digital world. Available at SSRN 3306006. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3306006

Turow, J., Hennessy, M., & Draper, N. (2015). The tradeoff fallacy: How marketers are misrepresenting American consumers and opening them up to exploitation. Annenberg School for Communication. https://www.asc.upenn.edu/sites/default/files/TradeoffFallacy_1.pdf

Winner, L. (1980). Do artifacts have politics? Daedalus, 109 (1), 121–136.

Wong, J. (2018 March 28). Apple’s Tim Cook rebukes Zuckerberg over Facebook’s business model. The Guardian. Retrieved from https://www.theguardian.com/technology/2018/mar/28/facebook-apple-tim-cook-zuckerberg-business-model

Zuckerberg, M. (2019 March 30). The internet needs new rules. Washington Post. Retrieved from https://www.washingtonpost.com/opinions/mark-zuckerberg-the-internet-needs-new-rules-lets-start-in-these-four-areas/2019/03/29/

Author information

Authors and affiliations

George Washington University, Washington, DC, USA

Kirsten Martin

University of Maryland, College Park, MD, USA

Katie Shilton

Seattle University, Seattle, WA, USA

Jeffery Smith

Corresponding author

Correspondence to Kirsten Martin.

Ethics declarations

Animal and human rights

The authors conducted no research on human participants or animals.

Conflict of interest

The authors declare that they have no conflict of interest.

Informed Consent

Informed consent was not required, as the article involved no empirical research with human participants.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Martin, K., Shilton, K. & Smith, J. Business and the Ethical Implications of Technology: Introduction to the Symposium. J Bus Ethics 160, 307–317 (2019). https://doi.org/10.1007/s10551-019-04213-9

Download citation

Received: 22 May 2019

Accepted: 28 May 2019

Published: 13 June 2019

Issue Date: December 2019

DOI: https://doi.org/10.1007/s10551-019-04213-9

Keywords

  • Socio-technical systems
  • Science and technology studies
  • Values in design
  • Social contract theory
