Washington Monthly

The World Is Choking on Digital Pollution

Tens of thousands of Londoners died of cholera from the 1830s to the 1860s. The causes were simple: mass quantities of human waste and industrial contaminants were pouring into the Thames, the central waterway of a city at the center of a rapidly industrializing world. The river gave off an odor so rank that Queen Victoria once had to cancel a leisurely boat ride. By the summer of 1858, Parliament couldn’t hold hearings due to the overwhelming stench coming through the windows.

The problem was finally solved by a talented engineer and surveyor named Joseph Bazalgette, who designed and oversaw the construction of an industrial-scale, fully integrated sewer system. Once it was complete, London never suffered a major cholera outbreak again.

London’s problem was not a new one for humanity. Natural and industrial waste is a fact of life. We start excreting in the womb and, despite all the inconveniences, keep at it for the rest of our lives. And, since at least the Promethean moment when we began to control fire, we’ve been contributing to human-generated emissions through advances intended to make our lives easier and more productive, often with little regard for the costs.

As industrialization led to increased urbanization, the by-products of combined human activity grew to such levels that their effects could not be ignored. The metaphorical heart of the world’s industrial capital, the Thames was also the confluence of the effects of a changing society. “Near the bridges the feculence rolled up in clouds so dense that they were visible at the surface, even in water of this kind,” noted Michael Faraday, a British scientist now famous for his contributions to electromagnetism.  

Relief came from bringing together the threads needed to tackle this type of problem—studying the phenomenon, assigning responsibility, and committing to solutions big enough to match the scope of what was being faced. It started with the recognition that direct and indirect human waste was itself an industrial-scale problem. By the 1870s, governmental authorities were starting to give a more specific meaning to an older word: they started calling the various types of waste “pollution.”  

A problem without a name cannot command attention, understanding, or resources—three essential ingredients of change. Recognizing that at some threshold industrial waste ceases to be an individual problem and becomes a social problem—a problem we can name—has been crucial to our ability to manage it. From the Clean Air Act to the Paris Accords, we have debated the environmental costs of progress with participants from all corners of society: the companies that produce energy or industrial products; the scientists who study our environment and our behaviors; the officials we elect to represent us; and groups of concerned citizens who want to take a stand. The outcome of this debate is not predetermined. Sometimes, we take steps to restrain industrial externalities. Other times, we unleash them in the name of some other good.  

Now, we are confronting new and alarming by-products of progress, and the stakes for our planet may be just as high as they were during the Industrial Revolution. If the steam engine and blast furnace heralded our movement into the industrial age, computers and smartphones now signal our entry into the next age, one defined not by physical production but by the ease of services provided through the commercial internet. In this new age, names like Zuckerberg, Bezos, Brin, and Page are our new Carnegies, Rockefellers, and Fords.  

As always, progress has not been without a price. Like the factories of 200 years ago, digital advances have given rise to a pollution that is reducing the quality of our lives and the strength of our democracy. We manage what we choose to measure. It is time to name and measure not only the progress the information revolution has brought, but also the harm that has come with it. Until we do, we will never know which costs are worth bearing.

We seem to be caught in an almost daily reckoning with the role of the internet in our society. This past March, Facebook lost $134 billion in market value over a matter of weeks after a scandal involving the misuse of user data by the political consulting firm Cambridge Analytica. In August, several social media companies banned InfoWars, the conspiracy-mongering platform of right-wing commentator Alex Jones. Many applauded this decision, while others decried a left-wing conspiracy afoot in the C-suites of largely California-based technology companies.

Perhaps the most enduring political news story over the past two years has been whether Donald Trump and his campaign colluded with Russian efforts to influence the 2016 U.S. presidential election—efforts that almost exclusively targeted vulnerabilities in digital information services. Twitter, a website that started as a way to let friends know what you were up to, might now be used to help determine intent in a presidential obstruction of justice investigation.

And that’s just in the realm of American politics. Facebook banned senior Myanmar military officials from the social network after a United Nations report accusing the regime of genocide against the Muslim Rohingya minority cited the platform’s role in fanning the flames of violence. The spread of hoaxes and false kidnapping allegations on Facebook and messaging application WhatsApp (which is owned by Facebook) was linked to ethnic violence, including lynchings, in India and Sri Lanka.

Concerns about the potential addictiveness of on-demand, mobile technology have grown acute. A group of institutional investors pressured Apple to do something about the problem, pointing to studies showing technology’s negative impact on students’ ability to focus, as well as links between technology use and mental health issues. The Chinese government announced plans to control use of video games by children due to a rise in levels of nearsightedness. Former Facebook executive Chamath Palihapitiya described the mechanisms the company used to hold users’ attention as “short-term, dopamine-driven feedback loops we’ve created [that] are destroying how society works,” telling an audience at the Stanford Graduate School of Business that his own children “aren’t allowed to use that shit.”

The feculence has become so dense that it is visible—and this is only what has floated to the top.  

For all the good the internet has produced, we are now grappling with effects of digital pollution that have become so potentially large that they implicate our collective well-being. We have moved beyond the point at which our anxieties about online services stem from individuals seeking to do harm—committing crimes, stashing child pornography, recruiting terrorists. We are now face-to-face with a system that is embedded in every structure of our lives and institutions, and that is itself shaping our society in ways that deeply impact our basic values.  

We are right to be concerned. Increased anxiety and fear, polarization, fragmentation of a shared context, and loss of trust are some of the most apparent impacts of digital pollution. The potential degradation of intellectual and emotional capacities, such as critical thinking, personal authority, and emotional well-being, is harder to detect. We don’t fully understand the cause and effect of digital toxins. The amplification of the most odious beliefs in social media posts, the dissemination of inaccurate information in an instant, the anonymization of our public discourse, and the vulnerabilities that enable foreign governments to interfere in our elections are just some of the many phenomena that have accumulated to the point that we now have real angst about the future of democratic society.

In one sense, the new technology giants largely shaping our online world aren’t doing anything new. Amazon sells goods directly to consumers and uses consumer data to drive value and sales; Sears Roebuck delivered goods to homes, and Target was once vilified for using data on customer behavior to sell maternity products to women who had yet to announce their pregnancies. Google and Facebook grab your attention with information you want or need, and in exchange put advertisements in front of you; newspapers started the same practice in the nineteenth century and have continued to do it into the twenty-first—even if, thanks in part to Google and Facebook, it’s no longer as lucrative.

But there are fundamental and far-reaching differences. The instantaneity and connectivity of the internet allow new digital pollution to flow in unprecedented ways. This can be understood through three ideas: scope, scale, and complexity.  

The scope of our digital world is wider and deeper than we tend to recognize.  

It is wider because it touches every aspect of human experience, reducing them all to a single small screen that anticipates what we want or “should” want. After the widespread adoption of social media and smartphones, the internet evolved from a tool that helped us do certain things to the primary surface for our very existence. Data flows into our smart TV, our smart fridge, and the location and voice assistants in our phones, cars, and gadgets, and comes back out in the form of services, reminders, and notifications that shape what we do and how we behave.  

It is deeper because the influence of these digital services goes all the way down, penetrating our mind and body, our core chemical and biological selves. Evidence is mounting that the 150 times a day we check our phones could be profoundly influencing our behaviors and trading on our psychological reward systems in ways more pervasive than any past medium. James Williams, a ten-year Google employee who worked on advertising and then left to pursue a career in academia, has been sounding the alarm for years. “When, exactly, does a ‘nudge’ become a ‘push’?” he asked five years ago. “When we call these types of technology ‘persuasive,’ we’re implying that they shouldn’t cross the line into being coercive or manipulative. But it’s hard to say where that line is.”  

Madison Avenue had polls and focus groups. But they could not have imagined what artificial intelligence systems now do. Predictive systems curate and filter. They interpret our innermost selves and micro-target content we will like in order to advance the agendas of marketers, politicians, and bad actors. And with every click (or just time spent looking at something), these tools get immediate feedback and more insights, including the Holy Grail in advertising: determining cause and effect between ads and human behavior. The ability to gather data, target, test, and endlessly loop is every marketer’s dream—brought to life in Silicon Valley office parks. And the more we depend on technology, the more it changes us.

The scope of the internet’s influence on us comes with a problem of scale. The instantaneity with which the internet connects most of the globe, combined with the kind of open and participatory structure that the “founders” of the internet sought and valorized, has created a flow of information and interaction that we may not be able to manage or control in a safe way.

A key driver of this scale is how easy and cheap it is to create and upload content, or to market services or ideas. Internet-enabled services strive to drain all friction out of every transaction. Anyone can now rent their apartment, sell their junk, post an article or idea—or just amplify a sentiment by hitting “like.” The lowering of barriers has, in turn, changed the incentives for how we behave on the internet—in both good and bad ways. The low cost of production has allowed more free expression than ever before, sparked new means of providing valued services, and made it easier to forge virtuous connections across the globe. It also makes it easier to troll or pass along false information to thousands of others. It has made us vulnerable to manipulation by people or governments with malevolent intent.

The sheer volume of connections and content is overwhelming. Facebook has more than two billion active users each month. Google executes three and a half billion searches per day. YouTube streams over one billion hours of video per day. These numbers challenge basic human comprehension. As one Facebook official said in prepared testimony to Congress this year, “People share billions of pictures, stories, and videos on Facebook daily. Being at the forefront of such a high volume of content means that we are also at the forefront of new and challenging legal and policy questions.”  

Translation: We’re not sure what to do either. And, instead of confronting the ethical questions at stake, the corporate response is often to define incremental policies based on what technology can do. Rather than considering actual human needs, people and society evolve toward what digital technology will support.

The third challenge is that the scope and scale of these effects rely on increasingly complex algorithmic and artificial intelligence systems, limiting our ability to exercise any human management. When Henry Ford’s assembly line didn’t work, a floor manager could investigate the problem and identify the source of human or mechanical error. Once these systems became automated, the machines could be subjected to testing and diagnostics and taken apart if something went wrong. After digitization, we still had a good sense of what computer code would produce and could analyze the code line by line to find errors or other vulnerabilities.

Large-scale machine-learning systems cannot be audited in this way. They use information to learn how to do things. Like a human brain, they change as they learn. When they go wrong, artificial intelligence systems cannot be seen from a God’s-eye view that tells us what happened. Nor can we predict exactly what they will do under unknown circumstances. Because they evolve based on the data they take in, they have the potential to behave in unexpected ways.

Taken together, these three kinds of change—the scope of intertwining digital and non-digital experience, the scale and frequency leading to unprecedented global reach, and the complexity of the machines—have resulted in impacts at least as profound as the transition from agricultural to industrial society, over a much shorter period of time. And the very elements that have made the internet an incredible force for good also come together to create new problems. The shift is so fundamental that we do not really understand the impacts with any clarity or consensus. What do we call hate speech when it is multiplied by tens of thousands of human and nonhuman users for concentrated effect? What do we call redlining when it is being employed implicitly by a machine assigning thousands of credit ratings per second in ways the machine’s creator can’t quite track? What do we call the deterioration of our intellectual or emotional capacities that results from checking our phones too often?  

We need a common understanding, not just of the benefits of technology, but also of its costs—to our society and ourselves.  

Human society now faces a critical choice: Will we treat the effects of digital technology and digital experience as something to be managed collectively? Right now, the answer being provided by those with the greatest concentration of power is no.

The major internet companies treat many of these decisions as theirs, even as CEOs insist that they make no meaningful decisions at all. Jack Dorsey warned against allowing Twitter to become a forum “constructed by our [Twitter employees’] personal views.” Mark Zuckerberg, in reference to various conspiracy theories, including Holocaust denialism, stated, “I don’t believe that our platform should take that down because I think there are things that different people get wrong. I don’t think that they’re intentionally getting it wrong.”

These are just the explicit controversies, and the common refrain of “We are just a platform for our users” is a decision by default. There can be no illusions here: corporate executives are making critical societal choices. Every major internet company has some form of “community standards” about acceptable practices and content; these standards are expressions of their own values. The problem is that, given their pervasive role, these companies’ values come to govern all of our lives without our input or consent.

Commercial forces are taking basic questions out of our hands. We go along through our acceptance of a kind of technological determinism: the technology simply marches forward toward less friction, greater ubiquity, more convenience. This is evident, for example, when leaders in tech talk about the volume of content. It is treated as inevitable that there must be billions of posts, billions of pictures, billions of videos. It is evident, too, when these same leaders talk to institutional investors in quarterly earnings calls. The focus is on business: more users, more engagement, and greater activity. Stagnant growth is punished in the stock price.

Commercial pressures have impacted how the companies providing services on the internet have evolved. Nicole Wong, a former lawyer for Google (and later a White House official) recently reflected during a podcast interview on how Google’s search priorities changed over time. In the early days, she said, it was about getting people all the right information quickly. “And then in the mid-2000s, when social networks and behavioral advertising came into play, there was this change in the principles,” she continued. After the rise of social media, Google became more focused on “personalization, engagement . . . what keeps you here, which today we now know very clearly: It’s the most outrageous thing you can find.”  

The drive for profits and market dominance is instilled in artificial intelligence systems that aren’t wired to ask why. But we aren’t machines; we can ask why. We must confront how these technologies work, and evaluate the consequences and costs for us and other parts of our society.   We can question whether the companies’ “solutions”—like increased staffing and technology for content moderation—are good enough, or if they are the digital equivalent of “clean coal.” As the services become less and less separable from the rest of our lives, their effects become ever more pressing social problems. Once London’s industrial effluvia began making tens of thousands fall ill, it became a problem that society shared in common and in which all had a stake. How much digital pollution will we endure before we take action?

We tend to think of pollution as something that needs to be eradicated. It’s not. By almost every measure, our ability to tolerate some pollution has improved society. Population, wealth, infant mortality, life span, and morbidity have all dramatically trended in the right direction since the industrial revolution. Pollution is a by-product of systems that are intended to produce a collective benefit. That is why the study of industrial pollution itself is not a judgment on what actions are overall good or bad. Rather, it is a mechanism for understanding effects that are large enough to influence us at a level that dictates we respond collectively.

We must now stake a collective claim in controlling digital pollution. What we face is not the good or bad decision of any one individual or even one company. It is not just about making economic decisions. It is about dispassionately analyzing the economic, cultural, and health impacts on society and then passionately debating the values that should guide our choices—as companies, as individual employees, as consumers, as citizens, and through our leaders and elected representatives.

Hate speech and trolling, the proliferation of misinformation, digital addiction—these are not the unstoppable consequences of technology. A society can decide at what level it will tolerate such problems in exchange for the benefits, and what it is willing to give up in corporate profits or convenience to prevent social harm.  

We have a model for this urgent discussion. Industrial pollution is studied and understood through descriptive sciences that name and measure the harm. Atmospheric and environmental scientists research how industrial by-products change the air and water. Ecologists measure the impact of industrial processes on plant and animal species. Environmental economists create models that help us understand the trade-offs between a rule limiting vehicle emissions and economic growth.  

We require a similar understanding of digital phenomena—their breadth, their impact, and the mechanisms that influence them. What are the various digital pollutants, and at what level are they dangerous? As with environmental sciences, we must take an interdisciplinary approach, drawing not just from engineering and design, law, economics, and political science but also from fields with a deep understanding of our humanity, including sociology, anthropology, psychology, and philosophy.  

To be fair, digital pollution is more complicated than industrial pollution. Industrial pollution is the by-product of a value-producing process, not the product itself. On the internet, value and harm are often one and the same. It is the convenience of instantaneous communication that forces us to constantly check our phones out of worry that we might miss a message or notification. It is the same openness that enables more expression than at any point in human history that also amplifies hate speech, harassment, and misinformation. And it is the helpful personalization of services that demands the constant collecting and digesting of personal information. The complex task of identifying where we might sacrifice some individual value to prevent collective harm will be crucial to curbing digital pollution. Science and data inform our decisions, but our collective priorities should ultimately determine what we do and how we do it.

The question we face in the digital age is not how to have it all, but how to maintain valuable activity at a societal price on which we can agree. Just as we have made laws about tolerable levels of waste and pollution, we can make rules, establish norms, and set expectations for technology.  

Perhaps the online world will be less instantaneous, convenient, and entertaining. There could be fewer cheap services. We might begin to add friction to some transactions rather than relentlessly subtracting it. But these constraints would not destroy innovation. They would channel it, driving creativity in more socially desirable directions. Properly managing the waste of millions of Londoners took a lot more work than dumping it in the Thames. It was worth it.

Judy Estrin and Sam Gill

Judy Estrin is an internet pioneer, business executive, technology entrepreneur, the CEO of JLabs, and the author of Closing the Innovation Gap. Sam Gill is a vice president at the John S. and James L. Knight Foundation.

Green Hero

Digital Pollution: What is it?

Digital pollution includes all sources of environmental pollution produced by digital tools. It has two parts: the first relates to the manufacture of digital devices, and the second to the functioning of the Internet.

Increasingly, the internet and the digital sector are being singled out for their environmental impact. Today, some people even claim that a connected person is the worst kind of polluter. Indeed, the digital sector accounts for significant greenhouse gas emissions, as well as various other forms of pollution and resource consumption.

At the same time, digital uses also mean better information sharing, instant communication, and improved exchanges. This means less wasted paper and time, less travel, and more sharing and collaboration.

So how can we optimise our daily use of digital technology? How can we reduce digital pollution and its impact on the environment? 

While the environmental impact of digital technology often becomes an argument to discredit the ecological commitment of anyone who dares to have a Facebook account (or, even worse, a smartphone), we will see that digital players often actively promote the creation of green energy, and that there are many effective solutions for reducing digital pollution.

Digital pollution in numbers

Digital technology contributes significantly to humanity's environmental impact. According to a study carried out in 2019 by Frédéric Bordage, a French digital expert, it accounts for nearly 3.8% of global greenhouse gas emissions. That is the equivalent of about 116 million round-the-world car journeys!

Equipment is the main source of pollution linked to digital technology, and in particular its production. In 2019, the ranking of impact sources (in decreasing order of importance) was as follows:

  • Equipment manufacturing
  • Electricity consumption of end-user equipment
  • Network power consumption
  • Data centres' power consumption
  • Manufacture of network equipment
  • Manufacture of equipment hosted by IT centres (servers, etc.)

According to the Shift Project report, also produced in 2019, the power consumption of digital technology is increasing by 9% per year, with about 55% of that electricity consumption attributable to use and 45% to the production of equipment.
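A growth rate of 9% per year compounds quickly. The sketch below is a back-of-the-envelope check (assuming the Shift Project's growth figure were to hold steady, which is an assumption, not a forecast) of how long it would take consumption to double:

```python
# How long does digital power consumption take to double at 9% growth
# per year? The growth rate comes from the Shift Project figure above;
# the starting level is a dimensionless index, not a real measurement.

def years_to_double(annual_growth_rate):
    """Count whole years until a quantity growing at the given rate doubles."""
    level, years = 1.0, 0
    while level < 2.0:
        level *= 1 + annual_growth_rate
        years += 1
    return years

print(years_to_double(0.09))  # → 9
```

In other words, if the 9% trend held, the sector's electricity demand would double roughly every nine years.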

However, it is important to put these figures in perspective: digital technology's CO2 emissions are about 2.5 times lower than those of road transport (not counting vehicle or infrastructure production), roughly 3 times lower than the carbon impact of deforestation, and almost 2 times lower than the energy consumption of commercial buildings.

What are the main causes of digital pollution?

As these figures highlight, the main causes of digital pollution are both the manufacture of the equipment and the electricity consumption of the equipment and the network.

In particular, the heaviest impacts come from the use of resources and the extraction and processing of raw materials for the manufacture of electronic equipment, as well as from the methods used to produce electricity.

In this regard, it is worth noting that electricity emits neither fine particles nor CO2 at the point of use. However, it is only clean overall if it is produced from renewable energy sources, and unfortunately this is still far from the case: the world's electricity production remains largely based on fossil fuels.

Why choose green web hosting?

Web hosting is known to be energy-intensive and not very ecological. Indeed, as we have seen, the electricity consumption of the network is among the main causes of digital pollution.

Typically, data centers house several thousand high-powered computers and servers, most of them running their CPUs and hard disks around the clock. They generate so much heat that the operator will usually need an air-conditioning system to keep the temperature of the room where they are installed at a tolerable level.

The players in the web hosting industry have understood the importance of going green. Whether for economic reasons, for marketing impact or driven by real eco-responsible motivations, many of them are offering green web hosting solutions. Their commitments range from offsetting their carbon emissions to promising to be powered 100% by renewable energy. 

Others go further, such as Infomaniak, a pioneer in ecological web hosting in Europe, which has introduced an environmental charter containing 20 commitments, such as:

  • 100% renewable energy
  • outside-air cooling system, without air conditioning
  • low-energy servers
  • waste recycling

Here is a comparative table of different basic eco-friendly web hosting offers:

How can we reduce the impact of digital technology on the environment?

Green solutions are quite numerous if you are a blogger, or if you simply want to create an eco-responsible website for yourself or your company. But what about reducing our digital carbon footprint on a daily basis? Here too, as users, we can act to minimize these impacts.

Aim for equipment longevity

Digital equipment has environmental consequences throughout its life cycle. Producing its components requires a lot of energy, chemical treatments, and rare metals. Always remember that the best waste is the waste you don't produce! So before you buy, ask yourself whether the purchase is really necessary. And is the electronic device you want to dispose of still in working order? If so, consider reselling it: there is a growing market today for reconditioned appliances. If it no longer works, remember to recycle your electronic waste properly.

Do emails pollute?

The impact of sending an email depends on the weight of its attachments, its storage time on a server, and the number of recipients. To lighten your emails, think about the following:

  • Target your recipients, clean up your mailing lists, and delete attachments from messages you reply to
  • Optimize the size of the files you send
  • Consider using file drop-off services rather than sending large attachments
  • Regularly clean your mailbox and unsubscribe from mailing lists that do not interest you.

Does data storage pollute?

Data storage increasingly happens on mail servers and in the cloud. To optimize your storage of documents, videos, photos, or music, think about the following:

  • Only keep what is useful
  • Store and use as much data as possible locally
  • Only store what you need on the Cloud

Lastly, note that online videos account for 60% of global data flow and are responsible for nearly 1% of global CO2 emissions. To reduce their impact, consider disabling automatic playback in application settings, and give preference to downloaded music or audio streaming platforms rather than platforms that serve music videos.


All You Need to Know about Digital Pollution

Discover the hidden side of our digital world! 🌐💻 Ever wondered where all that ‘cloud’ data lives? Not in the sky, but energy-hungry data centers. Digital pollution, from e-waste to carbon emissions, is real. Learn how individuals and organizations can fight it. 🌍🌳 

Imagine this: You’re scrolling through your phone, and you get a notification that your cloud storage is almost full. You sigh and think about all the pictures, documents, and videos you’ve amassed over the years. Then, you go ahead and purchase more storage, just like that.

Sounds familiar, doesn’t it? But have you ever paused to wonder where all this ‘cloud’ data actually resides? It’s not floating in the sky but stored in colossal data centres that consume tremendous amounts of electricity, contributing to something far less talked about—digital pollution.

We live in a digital age, a world increasingly dependent on technology for everything from communication and entertainment to healthcare and transportation. While the digital revolution has offered unparalleled conveniences and advancements, it comes with its own set of environmental challenges, one of which is digital pollution. 

Understanding digital pollution

Digital pollution is an umbrella term that encapsulates the environmental impact of the digital world. It manifests in various ways, such as electronic waste (e-waste), excess data storage, the energy consumption of digital platforms, and the carbon footprint of the entire digital industry. 

So, what causes digital pollution?

Electronic waste (e-waste)

Obsolete gadgets and hardware components often end up in landfills, contributing to toxic waste.

Excess data storage

Data centres housing our emails, photos, and digital memories consume immense amounts of electricity.

Energy consumption of digital platforms

Every time you stream a video or engage in online activities, servers somewhere consume electricity to keep that service running.

Carbon footprint of the digital industry

The production, operation, and disposal of digital technology contribute to global carbon emissions.

The impacts of digital pollution

Digital pollution has some astounding impacts. For example, data centres alone are estimated to consume about 1,000 kWh per square metre, roughly ten times the power consumption of a typical American home. The production of digital technology also puts pressure on the environment, as it often involves mining rare metals and depleting Earth’s limited resources.
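A back-of-the-envelope check of that comparison, framed as energy use per square metre. The household consumption and floor-area figures are ballpark assumptions of ours, not figures from the article.

```python
# Back-of-the-envelope comparison of energy use per square metre.
# The data-centre figure is cited in the text; the household figures
# are assumed ballpark values, not from the article.
DC_KWH_PER_M2 = 1_000        # data centre, kWh per m2 per year (from text)
HOME_KWH_PER_YEAR = 10_700   # assumed: average US household consumption
HOME_AREA_M2 = 150           # assumed: typical US home floor area

home_kwh_per_m2 = HOME_KWH_PER_YEAR / HOME_AREA_M2   # ~71 kWh per m2
ratio = DC_KWH_PER_M2 / home_kwh_per_m2
print(f"A data centre uses roughly {ratio:.0f}x more energy per square metre")
```

Under these assumptions the per-square-metre ratio comes out in the same order of magnitude as the article's "about ten times" claim.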

Digital pollution also has enormous economic and societal implications. E-waste management is not just an environmental issue but also a significant financial burden. Exposure to electronic waste can lead to severe health issues, especially in developing countries where e-waste is often dumped. 

How organisations can combat digital pollution

Organisations hold a significant share of responsibility for mitigating digital pollution. Fortunately, there are multiple avenues through which organisations can take meaningful action to reduce their digital environmental footprint. By integrating sustainability into their core business practices, companies can play a vital role in combating the multi-faceted problem of digital pollution.

Green IT practices

Adopting green IT practices is one of the most immediate ways an organisation can reduce its digital pollution. This involves optimising computer systems for energy efficiency, using software that requires less power, and even incorporating AI algorithms that can manage energy use intelligently. By adhering to green IT standards and certifications, companies not only contribute to sustainability but may also see reduced operational costs over time.

Sustainable server management and cloud storage

The servers that store digital data are among the largest contributors to energy consumption in the tech industry. Organisations can make a significant impact by choosing sustainable server management solutions. This could involve migrating to cloud services that are powered by renewable energy or using hosting services that are carbon neutral. Additionally, practices like server virtualization can help companies utilise their existing hardware more efficiently, reducing the need for new equipment and thus mitigating both e-waste and energy consumption.

Recycling and proper disposal of electronic equipment

One person’s trash is another’s treasure, especially in the world of electronics. Companies can take steps to ensure that old or obsolete electronic equipment is either recycled or disposed of in an environmentally friendly manner. This could involve donating old computers to schools or non-profits, using certified e-waste recycling services, or partnering with organisations dedicated to refurbishing and reusing electronic components. Proper disposal not only prevents hazardous waste from entering landfills but also helps recover valuable materials that can be reused.

Promoting a culture of sustainability among employees

It’s not just the technology or systems in place but also the people using them that can make a difference. Organisations can create internal awareness campaigns, workshops, and training programs to educate employees about the importance of digital sustainability. Simple steps, like setting printers to double-sided printing by default or encouraging employees to power down their computers when not in use, can add up to significant energy savings. Incentive programs can also be developed to reward departments or teams that achieve specific sustainability milestones.

Reducing unnecessary digital clutter

In today’s data-driven landscape, it’s easy to accumulate digital clutter like unused files, redundant emails, and outdated databases. Not only does this take up server space, but it also requires energy to maintain. Organisations should establish regular protocols for digital clean-ups, ensuring that only necessary data is stored. Efficient coding practices can also reduce the amount of computational power required to perform tasks, contributing to energy savings.

Investing in sustainable tech solutions

Lastly, future-proofing against digital pollution involves strategic investments in sustainable technologies. This could be anything from procuring energy-efficient hardware to investing in software that enables remote work, thus reducing the need for physical infrastructure and daily commuting. Organisations can also look into funding or partnering with start-ups and initiatives that are focused on creating sustainable technology solutions.

By taking these steps, organisations don’t just do good; they also benefit from cost savings, improved brand image, and increased employee engagement. Combating digital pollution is not just an ethical imperative but also a smart business strategy for long-term resilience and success.

How individuals can combat digital pollution

Individuals can also contribute to reducing digital pollution. We can start with conscious consumption and opt for durable, upgradable, and eco-friendly electronic products when we absolutely need to purchase something. We can clean up unnecessary files in our cloud storage. We can also raise awareness about digital pollution within the community. Every bit of knowledge shared contributes to a more sustainable future.

Tree planting to offset digital pollution 

So how does tree-planting tie into all this? Trees are nature’s best carbon sinks, absorbing carbon dioxide and releasing oxygen. Planting trees is a direct way to offset the carbon emissions from digital activities. By supporting tree-planting organisations, you make a tangible contribution to combating digital pollution.

As a tree-planting organisation, we offer various programs designed to offset carbon footprints, aimed at both organisations and individuals. Supporting our initiatives is not just about planting trees; it’s about creating a sustainable digital ecosystem for our future.

The final word

Digital pollution is a pressing issue that requires our immediate attention. While the responsibility may seem overwhelming, tackling this problem is a collective task. By taking conscious steps as individuals and organisations, we can significantly mitigate the environmental, economic, and societal impacts of digital pollution. As we strive for a digitally advanced society, let’s not forget to balance technology with sustainability, reminding ourselves that a greener future is possible.



Internet pollution: how can its impact be reduced?

Paul Collins

By Paul Collins

Journalist and digital marketing professional


Internet pollution is defined as all digital activity that emits greenhouse gases. This negative externality of new technologies tends to go unnoticed by consumers. Nevertheless, the digital world has a substantial environmental impact and a large carbon footprint: 4% of all greenhouse gas emissions.


What is internet pollution and what is its impact?


Digital technology has a significant impact on our carbon footprint and consequences for the environment. Because it seems intangible, digital technology is usually seen as a tool without any direct environmental impact. In reality, it is very much tangible and depends on physical infrastructure such as data centres, kilometres of cables, and transmission antennas.

There are two types of internet pollution:

  • Pollution related to data centres and network infrastructure
  • Pollution related to consumer equipment

How much CO2 does the internet produce? The sector is currently responsible for around 4% of global greenhouse gas emissions, and the rapid growth in its use suggests this carbon footprint will double by 2025.

Source: BBC Smart Guide to Climate Change.

Digital pollution of our electronic equipment


The underlying weight of all the natural resources needed to manufacture a product, known as the "ecological rucksack" of a digital object, generates substantial carbon dioxide (CO2) emissions.

Due to the extraction of raw materials and the manufacturing process in developing countries, the manufacturing phase of an electronic device is what consumes the most energy and emits the most CO2. In fact, developing countries produce their electricity mainly from coal, a natural resource with a substantial environmental impact when mined.

Lastly, the transport phase adds to the balance.

Paradoxically, the less material a device contains, the more material its manufacture tends to consume: the smaller the device, the larger its environmental impact.

Likewise, manufacturing sophisticated technological equipment requires specialised processes and rare metals such as tantalum and tungsten. These minerals are at the centre of armed conflicts, especially in Africa, which is why minerals extracted for use in digital equipment are known as "conflict minerals".

The negative impact of using the internet


Since the start of the Covid-19 pandemic and the numerous ensuing lockdowns, there has been an exponential increase in video streaming all over the world. Despite many erroneous estimates of the CO2 emitted by watching 30 minutes of video streaming on Netflix, the climate impact of streaming video remains relatively modest. According to the International Energy Agency (IEA), watching an hour of video streaming on Netflix emits about 36 g of CO2 (keeping in mind that an airplane trip from London to New York is equivalent to 1.3 tonnes of CO2).
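Putting the two figures cited above side by side, a quick calculation shows how much streaming it takes to match one transatlantic flight:

```python
# How many hours of streaming equal one London-New York flight,
# using the two figures cited in the text above.
STREAM_G_PER_HOUR = 36   # g CO2 per hour of Netflix streaming (IEA, from text)
FLIGHT_G = 1.3e6         # 1.3 tonnes CO2 for the flight, in grams (from text)

hours = FLIGHT_G / STREAM_G_PER_HOUR
years_nonstop = hours / (24 * 365)
print(f"{hours:,.0f} hours, about {years_nonstop:.1f} years of nonstop streaming")
```

In other words, on these figures a single transatlantic flight emits as much CO2 as roughly four years of uninterrupted streaming.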

The low carbon footprint of streaming video can be explained by rapid improvements in the energy efficiency of data centres, networks, and devices. But slowing efficiency gains, rebound effects, and new demand from emerging technologies, including artificial intelligence (AI) and blockchain, are raising concern about the sector's overall environmental impact in the coming years.

As data factories housing thousands of IT servers, data centres are usually considered energy devourers.

How much video and streaming do we watch? According to data published by the media watchdog Ofcom, Britons spent around a third of their waking hours watching TV and online video content in 2020 - an average of 5 hours and 40 minutes a day.

What consumes the most energy in a data centre?

Data centres are storage centres for digital information. Network infrastructure and data centres are responsible for half of all digital pollution, and every search-engine query passes through the network and data centres.

In a data centre, air conditioning is the most energy-expensive element. That is one reason why Facebook has moved servers to cooler countries such as Sweden and Canada, close to hydroelectric plants.

In 2021, the Danish site Data Center Map counted more than 4,700 data centres in 126 countries and over 300 transoceanic cables which extend a network of over one million kilometres.

Since 2010, Greenpeace has encouraged web companies to power their data centres with renewable energy. Facebook and Google have joined this pledge. Netflix, Spotify, and Twitter performed worst, according to the "Clicking Clean" study compiled by Greenpeace and published in 2017.

Discover the Clicking Clean report from Greenpeace

5 tips to reduce digital pollution in our daily life

Today, digital pollution is equivalent to commercial air travel pollution. How can we act against digital pollution? What are the good habits to adopt to limit our digital footprint and promote the sustainable development of the digital ecosystem?

Preserve your equipment for longer

Buy second-hand products, which tend to be cheaper and less polluting, and choose products with low energy consumption.

It is crucial to avoid unnecessarily replacing digital equipment and to favour repair over replacement in case of damage.

Limit energy consumption of electrical appliances

Don’t leave your devices on all the time, and turn off routers whenever possible. On your phone, deactivate GPS, Wi-Fi, and Bluetooth when not in use, or put it in airplane mode.

Watch videos in an eco-responsible way

To limit digital pollution from video streaming, opt for downloads over streaming and avoid playing videos over 4G. You can also block the automatic playing of videos on social media and adjust video quality on YouTube: watching lower-definition video saves bandwidth, so reduce the resolution whenever high definition is not needed.

Empty your mailbox

Pollution linked to e-mail is known as "latent pollution". It is caused by the storage of messages on servers: for security reasons, each e-mail is saved in three copies, and therefore on at least three different servers.

To minimise the impact of your mailbox, regularly sort and file your e-mails to avoid unnecessary storage in data centres. Furthermore, avoid sending attachments to many recipients and cancel subscriptions to any newsletters you no longer read.

Internet pollution and emails: research by Cleanfox showed that if all Internet users in the UK had deleted the useless emails they received in 2020, it would have saved more than 2 million tonnes of CO2 emissions. To put that into perspective, that's the same as 1.3 million polluting cars.
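The comparison implies a per-car emissions figure that is easy to back out from the two numbers given:

```python
# Implied CO2 per car per year from the Cleanfox comparison above.
SAVED_TONNES = 2_000_000   # tonnes CO2 saved by deleting useless emails (from text)
CARS = 1_300_000           # equivalent number of cars (from text)

tonnes_per_car = SAVED_TONNES / CARS
print(f"~{tonnes_per_car:.1f} tonnes CO2 per car per year")
```

The implied figure, roughly 1.5 tonnes of CO2 per car per year, is in line with common estimates for an average passenger car, which suggests the comparison is internally consistent.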

Download the "Carbonalyser" extension

The French association The Shift Project has developed the free extension Carbonalyser which allows you to measure your digital pollution when surfing the web. The Carbonalyser extension translates your browsing into electrical consumption and CO2 emissions.

Carbonalyser takes into account the electrical consumption of the terminal used, the network infrastructure, and the data centres involved in transferring the data. With this information, the association aims to inform users about digital pollution.

Discover more practical guides on protecting the environment, reducing your carbon footprint, and fighting global warming.


Science Editor

  • A Publication of the Council of Science Editors

Gatherings of an Infovore*: Digital Pollution

Add Another Type of Pollution to the List: Digital


Pollution, as defined in the Merriam-Webster online dictionary, is “the action of polluting especially by environmental contamination with man-made waste” and has been a constant ever since humans began living in groups; anthropologists have found human waste among the ruins of ancient settlements. The word pollution took over from the term industrial waste in the late 19th century, and the different types of pollution were identified throughout the 20th century: the three major types (air, land, and water) were joined by noise, light, radioactive, thermal, and plastic pollution. Now, in the 21st century, we are beginning to recognize that advances in technology have brought about a new type: digital pollution (sometimes also called information pollution, data pollution, or e-waste).

The literature describing digital pollution and ways to measure its impact is quite fragmented, both in type of publication and in the currency and depth of verified data, ranging from blog posts and white papers to magazine and newspaper articles to peer-reviewed books and journals. Presented here is a cross-section of such resources to set you on the path of assessing whether now is the time for your organization to be concerned with the impact of its digital pollution. If interested, I suggest you contact Cambridge University Press. In 2021, the Press, along with Netflix and BT, began working with DIMPACT, a pioneering initiative launched in 2019 by Carnstone, media companies, and researchers at the University of Bristol to help map and manage the carbon impacts of digital information ( https://dimpact.org/news ).

“Digital is physical. Digital is not green. Digital costs the Earth. Every time I download an email I contribute to global warming. Every time I tweet, do a search, check a webpage, I create pollution. Digital is physical. Those data centers are not in the Cloud. They’re on land in massive physical buildings packed full of computers hungry for energy. It seems invisible. It seems cheap and free. It’s not. Digital costs the Earth. 

One of the most difficult challenges with digital is to truly grasp what it is, its form, its impact on the physical world. I want to help give you a feel for digital. I’m going to analyze how many trees would need to be planted to offset a particular digital activity. For example: 

  • 1.6 billion trees would have to be planted to offset the pollution caused by email spam.
  • 1.5 billion trees would need to be planted to deal with annual e-commerce returns in the US alone. 
  • 231 million trees would need to be planted to deal with the pollution caused as a result of the data US citizens consumed in 2019.
  • 16 million trees would need to be planted to offset the pollution caused by the estimated 1.9 trillion yearly searches on Google.” —Excerpt from World Wide Waste by Gerry McGovern

Facing up to digital pollution Martin L. Research Information. July 28, 2021.  https://www.researchinformation.info/analysis-opinion/facing-digital-pollution

Carbon impact of video streaming Carbon Trust white paper. 2021. https://prod-drupal-files.storage.googleapis.com/documents/resource/public/Carbon-impact-of-video-streaming.pdf

The environmental impact of digital publishing Monell ME, Carbonell JP. CCCBLAB. May 11, 2021. https://lab.cccb.org/en/the-environmental-impact-of-digital-publishing/ Almost 10% of what we read nowadays is in digital format, which is why we are looking into the digital impact of this area.

9 ways to reduce your digital pollution Beasse S. Plank. April 22, 2021. https://plankdesign.com/en/stories/9-ways-to-reduce-your-digital-pollution/

Data pollution Ben-Shahar O. J Legal Analysis. 2019;11:104–159. https://doi.org/10.1093/jla/laz005

We finally know how bad for the environment your Netflix habit is Bedingfield W. Wired. March 15, 2021. https://www.wired.co.uk/article/netflix-carbon-footprint Streaming platforms finally have a tool to evaluate the size of their carbon footprint. Now they need to take action and go green.

The world is choking on digital pollution Estrin J, Gill S. Washington Monthly. Jan/Feb/March 2019. https://washingtonmonthly.com/magazine/january-february-march-2019/the-world-is-choking-on-digital-pollution/ Society figured out how to manage the waste produced by the Industrial Revolution. We must do the same thing with the Internet today.

IoT and the new digital pollution Curry S. Forbes. February 25, 2019. https://www.forbes.com/sites/samcurry/2019/02/25/iot-and-the-new-digital-pollution/?sh=4f5a45c77fd4

How the world is dealing with its e-waste issue? Demma B. Solar Impulse Foundation. May 14, 2019. https://solarimpulse.com/news/how-the-world-is-dealing-with-its-e-waste-issue

The solution to digital pollution is a global ‘Information Consensus’ Venkatesh HR. Medium.com. March 17, 2019. Think of it as Universal Declaration of Human Rights…for information. https://medium.com/jsk-class-of-2019/the-solution-to-digital-pollution-is-a-global-information-consensus-a37db7a054b1

Causes of digital pollution Digital for the Planet. July 9, 2018. https://medium.com/@digifortheplane/causes-of-digital-pollution-c0054d555377

Powering the digital: from energy ecologies to electronic environmentalism Gabrys J. 2014.  http://www.jennifergabrys.net/wp-content/uploads/2014/09/Gabrys_ElecEnviron_MediaEcol.pdf

The alarming rise of ‘digital and content pollution’ Wilms T. Forbes. March 26, 2013. https://www.forbes.com/sites/sap/2013/03/26/the-alarming-rise-of-digital-and-content-pollution/?sh=eb528481fb4c

Power, pollution and the Internet Glanz J. The New York Times. September 22, 2012. https://www.sebis.com/main/en/publications/Power%2C+Pollution+and+the+Internet.pdf

Barbara Meyers Ford has retired after a 45-year career in scholarly communications working with companies, associations/societies, and university presses in the areas of publishing and research. If interested in connecting, find her at www.linkedin.com/in/barbarameyersford and mention that you are a reader of Science Editor .

* A person who indulges in and desires information gathering and interpretation. The term was introduced in 2006 by neuroscientists Irving Biederman and Edward Vessel.


Our Digital Carbon Footprint: What’s the Environmental Impact of the Online World?

Every single search query, every streamed song or video and every email sent, billions of times over all around the world - it all adds up to an ever-increasing global demand for electricity, and to rising CO2 emissions too. Our increasing reliance on digital tools has an environmental impact that's becoming increasingly hard to ignore.

Author: Sarah-Indra Jungblut

Translation: Sarah-Indra Jungblut, 01.30.24

Digital tools and services are an integral part of our lives. It’s hard to imagine a life without smartphones, apps, Wikipedia, online banking, route planners with GPS and having a huge selection of music and movies at your fingertips pretty much everywhere, around the clock. All of these things make our lives so much easier. But it’s not only in day-to-day life that digitalisation has become indispensable; digital technologies are also playing an increasingly important role in agriculture and industry, in the transition to renewable energies and in the future of our cities . At the same time, digitalisation offers new solutions for tackling climate change and protecting the environment. Reporting on them is an important part of what we do here at RESET.

However, although we can’t physically see or touch the data we’re sending and receiving all over the globe, it carries rather heavy baggage: its energy consumption is constantly growing, the smart devices we use are often produced under exploitative and environmentally harmful conditions and, at the end of their far too short lives, they end up as toxic electronic waste . This poses a very important question: will digitalisation help us on the way to a greener and fairer world, or will our growing reliance on digital tools ultimately prove to be an accelerator for climate change and the destruction of the planet?

Right now, the question is still open. Let’s take a closer look at the main causes of our digital carbon footprint – and also who and what is working to mitigate its climate impact.

How big is the world’s digital carbon footprint?

More than half of the world’s population is now online. According to a report by the digital agency We Are Social, more than four billion people used the Internet in 2019 – with more than one million people coming online for the first time each day. And with online activities such as cloud computing, streaming services and cashless payment systems on the up, the demand for online and digital services is constantly growing.

The non-profit organisation The Shift Project (PDF) looked at nearly 170 international studies on the environmental impact of digital technologies. According to the experts, digital technologies’ share of global CO2 emissions increased from 2.5 to 3.7 percent between 2013 and 2018. That means our use of digital technologies now causes more CO2 emissions, and has a bigger impact on global warming, than the entire aviation industry, which is estimated to cause around 2.5 percent of global emissions (and rising). These figures vary slightly from study to study, as the energy consumption of digital technologies is difficult to quantify: too little data is available, technological advances and changing consumption habits cause the figures to change rapidly, and the results are highly dependent on certain conditions (for example, the type of power being used). Researchers in a newer study criticise the fact that the Shift Project figures were calculated using outdated data. The short study “Climate protection through digital technologies” (Klimaschutz durch digitale Technologien) from the Borderstep Institute compares various studies and comes to the conclusion that the greenhouse gas emissions caused by the production, operation and disposal of digital end devices and infrastructure amount to between 1.8 and 3.2 percent of global emissions (as of 2020).

Even if it’s hard to work out specific figures, it is clear that our digital world has a huge energy appetite, especially if you include not only the use, but also the production, of our digital devices.

What digital activities are using the most energy?

Jens Gröger, senior researcher at the Öko-Institut, estimates that each search query emits around 1.45 grams of CO2. If we use a search engine to make around 50 search queries per day, this produces a huge 26 kilograms of CO2 per year.
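The annual figure quoted above is simple arithmetic, and can be reproduced in a few lines (the per-query and per-day values are the estimates from the paragraph above, not exact measurements):

```python
# Rough annual CO2 from web searches, using the Öko-Institut per-query
# estimate quoted above. Both inputs are estimates, not exact figures.
CO2_PER_QUERY_G = 1.45   # grams of CO2 per search query (estimate)
QUERIES_PER_DAY = 50     # assumed daily searches per person
DAYS_PER_YEAR = 365

annual_kg = CO2_PER_QUERY_G * QUERIES_PER_DAY * DAYS_PER_YEAR / 1000
print(f"{annual_kg:.1f} kg CO2 per year")  # ~26.5 kg, matching the ~26 kg above
```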

Doesn’t sound like a lot? Not at an individual level. But Google itself, in its 2017 Environmental Report, puts its carbon footprint for 2016 at 2.9 million tons of CO2e and its electrical energy consumption at 6.2 terawatt hours (TWh).

But online searches are by no means the core of the problem: one of the biggest causes of the internet’s huge power consumption is in fact music and video streaming. According to research by The Shift Project, 80 percent of all data flows through the net in the form of moving images. Online videos – available on different platforms and viewed without being downloaded – account for almost 60 percent of global data transfer. Transmitting these moving images requires huge amounts of data. And the higher the resolution, the more data is sent and received.

According to The Shift Project, streamed online video causes more than 300 million tons of CO2 emissions per year (based on measurements taken in 2018) – the same amount as is emitted by the whole of Spain in a year. Another comparison, out of interest: streaming ten hours of film in HD requires more bits and bytes than all of the articles in the English-language internet encyclopedia Wikipedia put together.

Another analysis suggests that streaming a Netflix video in 2019 typically consumed 0.12-0.24 kWh of electricity per hour – about 25 to 53 times less than The Shift Project estimates. Ralph Hintemann at the Borderstep Institute for Innovation and Sustainability stresses that while video streaming causes high greenhouse gas emissions, no one knows exactly how high the figures are. Concrete figures are difficult to determine because the results depend heavily on the choice of device, the type of network connection and the resolution.


Using the internet on a mobile phone uses the most power, because buildings, vegetation and weather weaken the electromagnetic waves, meaning higher transmission power is required. Even with old copper cables, the signal has to be amplified, especially over long distances. Fibre optic cables, which transmit the signals via light, are by far the most efficient form of transmission technology. Powered by the average global electricity mix, streaming a 30-minute show on Netflix would currently release 28-57 grams of CO2 – about 27 to 57 times less than the 1.6 kg estimated by The Shift Project. Ralph Hintemann and his research group have calculated that streaming one hour of video in full HD requires about 220 to 370 watt hours of electrical energy, depending on whether the video is streamed via tablet or TV. That adds up to around 100 to 175 grams of carbon dioxide – about the same as driving one kilometre in a small car.
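Those per-hour figures can be sketched as a small conversion. The 220-370 watt-hour values are the ones quoted above; the grid carbon intensity of 475 g CO2 per kWh is an assumption on our part (a commonly cited ballpark for a roughly average global electricity mix), used here only to show how energy use turns into emissions:

```python
# Convert streaming energy use into CO2 emissions.
# The Wh figures are the estimates quoted above; the grid intensity is
# an ASSUMED value for an average global mix – real values vary by country.
GRID_G_CO2_PER_KWH = 475  # assumed grams of CO2 per kWh

for device, wh_per_hour in {"tablet": 220, "TV": 370}.items():
    g_co2 = wh_per_hour / 1000 * GRID_G_CO2_PER_KWH
    print(f"1 h full-HD streaming via {device}: ~{g_co2:.0f} g CO2")
# tablet ≈ 105 g, TV ≈ 176 g – in line with the 100-175 g range above
```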

Music streaming also comes off quite badly: a new study by the universities of Glasgow and Oslo shows that music streaming services emitted around 200 to 350 million kilograms of greenhouse gases in 2015 and 2016. That means that using streaming services such as Spotify or Apple Music is in many cases more harmful to the climate than the production (and subsequent disposal) of CDs or records.



Cloud computing is another major power guzzler. This is where data is no longer stored locally on a computer or smartphone, but on servers that can be located anywhere in the world, meaning it can be accessed anytime and anywhere. Checking your email via Gmail and backing up your photos to the cloud are just two examples of these kinds of services.

Most cryptocurrencies also consume large amounts of energy. One example of this is Bitcoin, probably the best-known digital currency. According to calculations by the Bitcoin Energy Consumption Index (2018), a single Bitcoin transaction consumes around 819 kWh. The same amount of energy could operate a 150-watt refrigerator for about eight months. And in a 2018 study, the Technical University of Munich determined that the entire Bitcoin system produces around 22 megatons of carbon dioxide per year, the same as the CO2 footprint of cities such as Hamburg, Vienna or Las Vegas.
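The refrigerator comparison above can be checked in the same back-of-the-envelope way (both input figures come from the 2018 estimates quoted in the paragraph):

```python
# Sanity check on the refrigerator comparison above. Figures are the
# 2018 Bitcoin Energy Consumption Index estimates quoted in the text.
KWH_PER_TX = 819   # estimated kWh per Bitcoin transaction (2018)
FRIDGE_W = 150     # refrigerator power draw in watts

hours = KWH_PER_TX / (FRIDGE_W / 1000)  # hours of continuous operation
months = hours / (24 * 30.4)            # using an average month length
print(f"{months:.1f} months")           # ~7.5 months, i.e. "about eight"
```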

But it’s not only the Bitcoin blockchain that’s energy-intensive. Other blockchains and distributed ledger technologies (DLTs) also entail a huge demand for energy. In our recent RESET Special Feature looking at how blockchain can be used for real-world positive impact, we delved even deeper into the question of whether blockchain and sustainability can ever truly go together.

Our digital energy consumption isn’t only determined by what we do, but also by how we do it; the software we use also has a big impact. For example, a less efficient word processor needs four times as much energy to process the same document as an efficient one. At the same time, software updates often cause computers or smartphones to slow down or stop working, forcing consumers to buy new hardware.



And in the future, digitalisation’s growing demand for electricity will certainly also be driven by an increase in smart technologies, such as those we use at home, in the IoT sector, in industry and in our increasingly digitalised cities.

All roads lead to… energy-hungry data centres

Every action carried out online, no matter how small, travels in the form of a data packet through data centres and their servers. Looking at the energy use of data centres therefore gives us an idea of just how energy-hungry digitalisation is. It’s impossible to say with any certainty how high the current energy requirements of all data centres worldwide are. Current estimates range from 200 to 500 billion kilowatt hours per year – a substantial share of the world’s electricity. And predictions for the future differ considerably too, with figures between 200 billion and 3,000 billion kilowatt hours forecast for the year 2030.

Why do expert opinions differ so much? One reason is that there are no official figures for data centres yet. Another is that many operators are reluctant to provide information about their energy consumption due to concerns about security and competition. Researchers can therefore only approximate the real figures indirectly, for example via sales figures for servers or estimates from surveys.

And what exactly is using all that power?

First and foremost, a lot of power is needed to process the enormous data streams that we constantly send and receive via our devices. When that data is processed, heat is generated as a waste product. To prevent the servers from overheating, additional energy is required for ventilation and to cool the servers down. On average, mechanical cooling is responsible for around 25 percent of the total power consumption of a data centre. And all that heat that’s produced? Usually it’s just released into the atmosphere – simply left to go to waste.

How can swimming pools make data centres more energy efficient?

Reducing the energy consumption of data centres is an important step towards making digitalisation more sustainable. There are three main ways to do that:

1. Finding the most efficient ways to cool data centres

One fairly simple-sounding and popular solution is to locate data centres in cooler countries and simply blow the outside air into them. This explains the multiple data centres located near the Arctic. Warmed, piped water is another way to cool banks of high-performance, hot computers, as is immersion cooling. Some companies are even working on using artificial intelligence to tune their cooling systems to match the weather and other fluctuating factors – in a bid to reduce their energy bills.

2. Re-using the waste heat

Data centres produce heat throughout the year. Ideally, this heat should be constantly extracted and reused elsewhere. But finding the right customers to recycle that heat can be a challenge. Many newer data centres use the waste heat within the data centre itself, but in order to better exploit the potential of waste heat, a more comprehensive approach is needed. Sweden is one of those countries considered an ideal location for data centres because of its cool climate; at the same time, it’s also a pioneer when it comes to reusing their waste heat. The country relies heavily on a system of district heating (where heat is distributed via pipes to residential and commercial buildings from a centralised location), which makes it relatively easy to feed the waste heat from data centres into the network, heating the apartments that are connected.

The Elementica facility is the latest example. Fully expanded, the data centre is expected to recover up to 112 gigawatt hours of heat per year, covering the heating of tens of thousands of households. Another example is the Stockholm Data Parks initiative, which sees waste heat recycling as playing a key role in the city’s goal to be completely fossil fuel-free by 2040.

It’s also possible to feed the excess heat not only into local and district heating networks, but also into buildings that permanently require heat, such as swimming pools, laundries or greenhouses. The first examples of this already exist. In Paris, a swimming pool is supplied with heat from the servers of the animation studio next door. And the Irish company Ecologic Datacentres is currently planning a computer centre that will use its waste heat to help grow vegetables in greenhouses and heat nearby homes.

The EU-funded ReUseHeat project is also working on innovative solutions for waste heat recovery. Nine European countries are to join forces over the next four years to make waste heat available at various locations in Europe.

3. Powering them with green electricity

If data centres are ever to be operated in an environmentally friendly, maybe even one day carbon-neutral way, they will have to be powered by clean, renewable sources of energy. While most countries’ energy mix still only contains a small fraction of renewables, some companies are starting to focus more on sourcing their very own energy – from wind or solar power.

Founded in April 2015 in North Frisia, Germany, the startup Windcloud uses energy from local wind farms to power its data centres. Since 2015, Windcloud has provided IT services from 100 percent locally generated renewable energy. Hostsharing e.G. has a different approach: the web hoster is organised as a cooperative and focuses on resource-saving web hosting using green energy, as well as on the use and support of open source software.

We also use a green service provider to host RESET.org. Hetzner Online uses electricity from renewable sources to power the servers in their own data centre parks.

Have your own website? You can check its carbon impact using the online Website Carbon Calculator as well as find tips on how to improve it.

If we’re ever going to seriously shrink the carbon footprint of data centres, then we need to bring in policy regulations that restrict their energy consumption and incentivise efficiency measures. Right now there is little motivation for data centre operators to do the right thing.

The ecological and social impact of our smart devices

While Silicon Valley is already working on microchip implants that, once implanted under the skin, will give us automatic access to the digital world, right now we still rely on smartphones, tablets, computers and other smart gadgets – and a lot of them. Our obsession with electrical gadgets is responsible for the fastest-growing portion of the world’s garbage problem. According to statistics from the UN, an estimated 50 million tons of electronic waste are generated worldwide each year – and the trend is rising.

The consequences for people and the environment are dire. More than half of the electronic waste we produce is shipped cheaply to countries of the Global South, where the valuable raw materials are extracted, often under inhumane and unhealthy working conditions, polluting the local environment. At the same time, most of the mineral raw materials used in our smart devices come from countries where there is a disregard not only for labour rights but also for environmental standards. The same applies to the entire manufacturing process.

However, there are a growing number of companies and initiatives that are using sustainable materials, smart recycling methods and conscious use of raw materials to step away from a purely linear economy, using circular manufacturing methods that conserve resources and ensure fair working conditions.

It’s still quite a niche market, and there are no official certifications or political efforts for ethically made electronics. However, there are a few eco-pioneers: Fairphone and Shiftphone have set out to show that consumer- and environmentally-friendly design is possible by making smartphones modular and easy to repair. At the same time, the companies pay attention to fair wages and working conditions, reject child labour and focus on resource-conserving production.

The Berlin startup MineSpider is approaching the topic from a slightly different angle: with the help of blockchain technology, it wants to ensure that no conflict minerals end up in our gadgets – by reliably tracking the supply chain of responsibly mined minerals and raw materials that end up in manufacturers’ mobile phones and laptops.

Monopolisation and platform capitalism

While in 2007 half of all internet traffic was generated by more than a thousand websites, by 2014 just 35 websites accounted for the same share (“Warum brauchen wir Vielfalt?”/Bits & Bäume, p. 86). And of the 500 most visited websites worldwide, Wikipedia is the only one that is not operated commercially. So it’s not surprising that six of the world’s ten largest companies are now firmly rooted in the digital economy: Apple, Alphabet (Google’s parent company), Microsoft, Amazon, Facebook and the Chinese company Tencent. Just a few global corporations hold an incredible amount of social and economic power.

From a socio-ecological perspective, this monopolisation – and the growing dependence on a small number of large platforms that results from it – is cause for concern. In addition, recent studies suggest that digital platforms, with their on-demand culture and personalised online advertising, are driving more consumption and even increasing resource use through packaging waste and parcel deliveries.



The rise in personalised advertising and offers online – where information is gained about individuals through online data tracking and collection – is another negative aspect of this development: our private lives are increasingly being invaded by the financial interests of commercial companies. Digitalisation can only ever be truly sustainable if it takes not only ecological aspects into account, but also contributes to improving social and economic justice.

But there is another way, as civil society counter-movements to this development show. While they may not have the size and scope of the mainstream, commercially run alternatives, there are a few initiatives whose focus is on social-ecological values rather than profits. One example is the Fairbnb platform, which wants to offer a fair alternative to Airbnb. The platform was founded by neighbours and users as a cooperative, not as a company, and profits flow back into the local neighbourhoods.

Digitalisation – Blessing or Curse for the Environment and Society?

It is tricky to give a black-and-white answer to whether digitalisation currently has a mostly positive or negative impact on the world around us. Digital technologies can help enable sustainable development by, for example, allowing people to share resources online, enabling innovative, resource-efficient production processes (like 3D printing) and speeding up the switch to renewable energies by opening up access to smart, decentralised energy networks. Digital platforms and apps can also help promote more environmentally friendly consumption and lifestyle options, for example by sharing tips on sustainable behaviour or simplifying access to environmentally friendly shared transportation. Sensors and satellites can help to highlight and locate environmental destruction and allow for rapid, targeted action.

However, as this article shows, digitalisation is energy-hungry and resource-intensive. It comes with a mighty carbon footprint that we might not be able to see, but shouldn’t allow ourselves to ignore. We will only ever achieve truly sustainable digitalisation if we learn to use digital tools and services in moderation and in the right places. We also need to look at sustainability throughout the entire life cycle, continue working on optimising our energy use and energy sources, and look more often for alternatives to the big players of our digitalised world. Equally crucial for a sustainable digital future: more manufacturers making ethical technology that respects both society and the environment, and more consumers supporting them by making the right choices.

We need manufacturers, consumers and digital service providers to make the right decisions when it comes to the environmental impact of our increasingly digital lives – and the incentive for that will ultimately come from policies and regulations made (and respected) at an international level. Without decisive political action, the digital revolution is set to increase our consumption of resources and energy and accelerate the damage we are doing to our planet and our climate. At the same time, certain unchecked digital developments threaten to undermine crucial pillars of free and democratic societies. Ensuring that digitalisation is placed at the service of sustainable development, and that digitalisation itself is executed and applied in a sustainable way, is an urgent political and social priority.

Sources and links

  • Datacenter Insider: Abwärmenutzung ist der Schlüssel zum grünen Rechenzentrum
  • Datacenter Insider: Schweden beschließt drastische Senkung der Stromsteuer
  • ZDF: Planet E – Stromfresser Internet
  • University of Glasgow/ University of Oslo: Music consumption has unintended economic and environmental costs
  • BUND: Der Stromverbrauch der Bitcoin
  • SWR: Ökobilanz des Internets
  • IKZ: Abwärmenutzung aus Rechenzentren
  • The Shift Project: Lean ICT – Towards Digital Sobriety. Report 2019 (pdf)
  • Borderstep Institut: EU-EcoCloud – Wie kann die Energieeffizienz des Cloud Computing weiter verbessert werden?
  • Borderstep Institut: Energiebedarf von Rechenzentren steigt weiter stark an
  • Ökoinstitut e.V.: (In)Effiziente Software
  • Klimareporter: Digitale Klimakiller
  • Steffen Lange/ Tilman Santarius: Smarte grüne Welt? Digitalisierung zwischen Überwachung, Konsum und Nachhaltigkeit. Oekom Verlag/ 2018
  • Anja Höfner/ Vivian Frick (Hrsg.): Was Bits und Bäume verbindet – Digitalisierung nachhaltig gestalten. Oekom Verlag/ 2019
  • sustainable-digitalization.net

Authors: Sheena Stolz and Sarah-Indra Jungblut / RESET Editorial (August 2019)

Updated in March 2024 (Lana O’Sullivan)


2. Expert Essays on the Expected Impact of Digital Change by 2035

Most respondents to this canvassing wrote brief reactions to this research question. However, a number of them wrote multilayered responses in a longer essay format. This essay section of the report is quite lengthy, so we first offer a sampler of some of these essayists’ comments.

  • Liza Loop observed, “Humans evolved both physically and psychologically as prey animals eking out a living from an inadequate supply of resources. … The biggest threat here is that humans will not be able to overcome their fear and permit their fellows to enjoy the benefits of abundance brought about by automation and AI.”
  • Richard Wood predicted, “Knowledge systems with algorithms and governance processes that empower people will be capable of curating sophisticated versions of knowledge, insight and something like ‘wisdom’ and subjecting such knowledge to democratic critique and discussion, i.e., a true ‘democratic public arena’ that is digitally mediated.”
  • Matthew Bailey said he expects that, “AI will assist in the identification and creation of new systems that restore a flourishing relationship with our planet as part of a new well-being paradigm for humanity to thrive.”
  • Judith Donath warned, “The accelerating ability to influence our beliefs and behavior is likely to be used to exploit us; to stoke a gnawing dissatisfaction assuageable only with vast doses of retail therapy; to create rifts and divisions and a heightened anxiety calculated to send voters to the perceived safety of domineering authoritarians.”
  • Kunle Olorundare said, “Human knowledge and its verifying, updating, safe archiving by open-source AI will make research easier. Human ingenuity will still be needed to add value – we will work on the creative angles while secondary research is being conducted by AI. This will increase contributions to the body of knowledge and society will be better off.”
  • Jamais Cascio said, “It’s somewhat difficult to catalog the emerging dystopia because nearly anything I describe will sound like a more extreme version of the present or an unfunny parody. … Simulated versions of you and your mind are very likely on their way, going well beyond existing advertising profiles.”
  • Lauren Wilcox explained, “Interaction risks of generative AI include the ability for an AI system to impersonate people in order to compromise security, to emotionally manipulate users and to gain access to sensitive information. People might also attribute more intelligence to these systems than is due, risking over-trust and reliance on them.”
  • Catriona Wallace looked ahead to in-body tech: “Embeddable software and hardware will allow humans to add tech to their bodies to help them overcome problems. There will be AI-driven, 3D-printed, fully-customised prosthetics. Brain extensions – brain chips that serve as digital interfaces – could become more common. Nanotechnologies may be ingested.”
  • Stephen Downes predicted, “Cash transactions will decline to the point that they’re viewed with suspicion. Automated surveillance will track our every move online and offline, with AI recognizing us through our physical characteristics, habits and patterns of behaviour. Total surveillance allows an often-unjust differentiation of treatment of individuals.”
  • Giacomo Mazzone warned, “With relatively small investments, democratic processes could be hijacked and transformed into what we call ‘democratures’ in Europe, a contraction of the two French words for ‘democracy’ and ‘dictatorship.’ AI and a distorted use of technologies could bring mass-control of societies.”
  • Christine Boese warned, “Soon all high-touch interactions will be non-human. NLP [natural language processing] communications will seamlessly migrate into all communications streams. They won’t just be deepfakes, they will be ordinary and mundane fakes, chatbots, support technicians, call center respondents and corporate digital workforces … I see harm in ubiquity.”
  • Jonathan Grudin spoke of automation: “I foresee a loss of human control in the future. The menace isn’t control by a malevolent AI. It is a Sorcerer’s Apprentice’s army of feverishly acting brooms with no sorcerer around to stop them. Digital technology enables us to act on a scale and speed that outpaces human ability to assess and correct course. We see it already.”
  • Michael Dyer noted we may not want to grant rights to AI: “AI researchers are beginning to narrow in on how to create entities with consciousness; will humans want to give civil rights and moral status to synthetic entities who are not biologically alive? If humans give survival goals to synthetic agents, then those entities will compete with humans for survival.”
  • Avi Bar-Zeev preached empowerment over exploitation: “The key difference between the most positive and negative uses of XR [extended reality], AI and the metaverse is whether the systems are designed to help and empower people or to exploit them. Each of these technologies sees its worst outcome quickly if it is built to benefit companies that monetize their customers.”
  • Beth Noveck predicted that AI could help make governance more equitable and effective and raise the quality of decision-making, but only if it is developed and used in a responsible and ethical manner, and “if its potential to be used to bolster authoritarianism is addressed proactively.”
  • Charalambos Tsekeris said, “Digital technology systems are likely to continue to function in shortsighted and unethical ways, forcing humanity to face unsustainable inequalities and an overconcentration of techno-economic power. These new digital inequalities could amount to serious, alarming threats and existential risks for human civilization.”
  • Alejandro Pisanty wrote, “Human connection and human rights are threatened by the scale, speed and lack of friction in actions such as bullying, disinformation and harassment. The invasion of private life available to governments facilitates repression of the individual, while the speed of Internet expansion makes it easy to identify and attack dissidents.”
  • Maggie Jackson said, “Reimagining AI to be uncertain literally could save humanity. And the good news is that a growing number of the world’s leading AI thinkers and makers are endeavoring to make this change a reality. ‘Human-compatible AI’ is designed to be open to and adaptable to multiple possible scenarios.”
  • Barry K. Chudakov observed, “We are sharing our consciousness with our tools. They can sense what we want, can adapt to how we think; they are extensions of our cognition and intention. As we go from adaptors to co-creators, the demand on humans increases to become more fully conscious. It remains to be seen how we will answer that demand.”
  • Marcel Fafchamps urged that humanity should take action for a better future: “The most menacing change is in terms of political control of the population … The world urgently needs Conference of the Parties (COP) meetings on international IT to address this existential issue for democracy, civil rights and individual freedom within the limits of the law.”

What follows is the full set of essays submitted by numerous leading experts who responded to this survey.

When asked to weigh in and share their insights, these experts were prompted to first share their thoughts on the best and most beneficial change they expect by 2035. In a second question they were asked about the most harmful or menacing change they foresee, thus most of these essays open first with perceived benefits and conclude with perceived harms. Because 79% of the experts in this survey said they are “more concerned than excited” or are “equally concerned and excited” about the evolution of humans’ uses of digital tools and systems, many of these essays focus primarily on harms. Some wrote only about the most worrisome trendlines, skipping past the request for them to share about the many benefits to be found in rapidly advancing digital change. In cases where they wrote extensively about both benefits and harms, we have inserted some boldface text to indicate that transition.

Clifford Lynch: There will be vastly more encoding of knowledge, leading to significant advances in scientific and technological discovery

Lynch, director of the Coalition for Networked Information, wrote, “One of the most exciting long-term developments – it is already well advanced and will be much further along by 2035 – is the restructuring, representation or encoding of much of our knowledge, particularly in scientific and technological areas, into forms and structures that lend themselves to machine manipulation, retrieval, inference, machine learning and similar activities. While this started with the body of scholarly knowledge, it is increasingly extending into many other areas; this restructuring is a slow, very large-scale, long-term project, with the technology evolving even as deployment proceeds. Developments in machine learning, natural language processing and open-science practices are all accelerating the process.

“The implications of this shift include greatly accelerated progress in scientific discovery (particularly when coupled with other technologies such as AI and robotically controlled experimental apparatus). There will be many other ramifications, many of which will be shaped by how broadly public these structured knowledge representations are, and to what extent we encode not only knowledge in areas like molecular biology or astronomy but also personal behaviors and activities. Note that for scholarly and scientific knowledge the movements toward open scholarship and open-science practices and the broad sharing of scholarly data mean that more and more scholarly and scientific knowledge will be genuinely public. This is one of the few areas of technological change in our lives where I feel the promise is almost entirely positive, and where I am profoundly optimistic.

“The emergence of the so-called ‘geospatial singularity’ – the ability to easily obtain near-continuous high-resolution multispectral imaging of almost any point on Earth, and to couple this data in near-real-time with advanced machine learning and analysis tools, plus historical imagery libraries for comparison purposes, and the shift of such capabilities from the sole control of nation-states to the commercial sector – also seems to be a force primarily for good. The imagery is not so detailed as to suggest an urgent new threat to individual privacy (such as the ability to track the movement of identifiable individuals), but it will usher in a new era of accountability and transparency around the activities of governments, migrations, sources of pollution and greenhouse gases, climate change, wars and insurgencies and many other developments.

“We will see some big wins from technology that monitors various individual health parameters like current blood sugar levels. These are already appearing. But to have a large-scale impact they’ll require changes in the health care delivery system, and to have a really large impact we’ll also have to figure out how to move beyond sophisticated users who serve as their own advocates to a broader and more equitable deployment in the general population that needs these technologies.

“There are many possibilities for the worst potential technological developments between now and 2035 for human welfare and well-being, and they tend to mutually reinforce each other in various dystopian scenarios. I have to say that we have a very rich inventory of technologies that might be deployed in the service of what I believe would be evil political objectives; saving graces here will be political choices, if there are any.

“Social media as an environment for propaganda and disinformation, for targeting information delivery to audiences rather than supporting conversations among people who know each other, as well as a tool for collecting personal information on social media users, seems to be a cesspool without limit.

“The sooner we can see the development of services and business models that allow people who want to use social media for relatively controlled interaction with other known people without putting themselves at risk of exposure to the rest of the environment, the better. It’s very striking to me to see how more and more toxic platforms for social media communities continue to emerge and flourish. These are doing enormous damage to our society.

“I hope we’ll see social media split into two almost distinct things. One is a mechanism for staying in touch with people you already know (or at least once knew); here we’ll see some convergence between computer-mediated communication more broadly (such as video conferencing) and traditional social media systems. I see this kind of system as a substantial good for people, and in particular a way of offsetting many current trends toward the isolation of individuals for various reasons. The other would be the environment targeting information delivery to audiences rather than supporting conversations among friends who know each other. The split cannot happen soon enough.

  • “One cross-cutting theme is the challenge of actually achieving the ethical or responsible use of technologies. It’s great to talk about these things, but these conversations are not likely to survive the challenges of marketplace competition. I absolutely despair at the fact that a reluctance to deploy autonomous weapons systems is not likely to survive the crucible of conflict. I am also concerned that too many people are simply whining about the importance of taking cautious, slow, ethical, responsible approaches rather than thinking constructively and specifically about how to accomplish this in the likely real-world scenarios we will need to understand and manage.
  • “I’m increasingly of the opinion that so-called ‘generative AI’ systems, despite their promise, are likely to do more harm than good, at least in the next 10 years. Part of this is the impact of deliberately deceptive deepfake variants in text, images, sound and video, but it goes beyond this to the proliferation of plausible-sounding AI-generated materials in all of these genres as well (think advertising copy, news articles, legislative commentary or proposals, scholarly articles and so many more things). I’d really like to be wrong about this.
  • “I’d like to believe brain-machine interfaces (where I expect to see significant progress in the coming decade or so) will be a force for good – there’s no question that they can do tremendous good, and perhaps open up astounding new opportunities for people, but again I cannot help but be doubtful that these will be put to responsible uses. For example, think about using such an interface as a means of interrogating someone, as opposed to a way of enabling a disabled person. There are also, of course, more neutral scenarios such as controlling drones or other devices.
  • “There will be disruption in expectations of memorization and a wide variety of other specific skills in education and in qualification for employment in various positions. This will be disruptive not only to the educational system at all levels but to our expectations about the capabilities of educated or adult individuals.
  • “Related to these questions but actually considerably distinct will be a substantial reconsideration of what we remember as a culture, how we remember and what institutions are responsible for remembering. We’ll also revisit how and why we cease to remember certain things.
  • “Finally, I expect that we will be forced to revisit our thinking in regard to intellectual property and copyright, about the nature of creative works and about how all of these interact not only with the rise of structured knowledge corpora, but even more urgently with machine learning and generative AI systems broadly.”

Judith Donath: Our world will be profoundly influenced by algorithmically generated media tuned to our desires and vulnerabilities

Donath, senior fellow at Harvard’s Berkman Center and founder of the Sociable Media Group at the MIT Media Lab, wrote, “Persuasion is the fundamental goal of communication. But, although one might want to persuade others of something false, persuasiveness has its limits. Audiences generally do not wish to be deceived, and thus communication throughout the living world has evolved to be, while not 100% honest, reliable enough to function.

“In human society by 2035, this balance will have shifted. AI systems will have developed unprecedented persuasive skills, able to reshape people’s beliefs and redirect their behavior. We humans won’t quite be an army of mindless drones, our every move dictated by omnipotent digital deities, but our choices and ultimately our understanding of the world will be profoundly influenced by algorithmically generated media exquisitely tuned to our individual desires and vulnerabilities. We are already well on our way to this. Companies such as Google and Facebook have become multinational behemoths (and their founders, billionaires) by gathering up all our browsings and buyings and synthesizing them into behavioral profiles. They sell this data to marketers for targeting personalized ads and they feed it to algorithms designed to encourage the endless binges of YouTube videos and social posting, providing an unbounded canvas for those ads.

“New technologies will add vivid detail to those profiles. Augmented-reality systems need to know what you are looking at in order to layer virtual information onto real space: The record of your real-world attention joins the shadow dossier. And thanks to the descendants of today’s Fitbits and Ouras, the records of what we do will be vivified with information about how we feel – information about our anxieties, tastes and vulnerabilities that is highly valuable for those who seek to sway us.

“Persuasion appears in many guises: news stories, novels and postings scripted by machine and honed for maximum virality; co-workers, bosses and politicians who gain power through stirring speeches and astutely targeted campaigns. By 2035, one of the most potent forms may well be the virtual companion, a comforting voice that accompanies you everywhere, her whispers ensuring you never get lost, never are at a loss for a word, a name or the right thing to say.

“If you are a young person in the 2030s, she’ll have been your companion since you were small – she accompanied you on your first forays into the world without parental supervision; she knew the boundaries of where you were allowed to go and when you headed out of them, she gently yet irresistibly persuaded you to head home instead. Since then, you never really do anything without her. She’s your interface to dating apps. Your memory is her memory. She is often quiet, but it is comforting to know she is there accompanying you, ensuring you are never lost, never bored. Without her, you really wouldn’t know what to do with yourself.

“Persuasion could be used to advance good things – to promote cooperation, daily flossing, safer driving. Ideally, it would be used to save our overcrowded, overheating planet, to induce people to buy less, forgo air travel, eat lower on the food chain. Yet even if used for the most benevolent of purposes, the potential persuasiveness of digital technologies raises serious and difficult ethical questions about free will, about who should wield such power.

“These questions, alas, are not the ones we are facing. The accelerating ability to influence our beliefs and behavior is far more likely to be used to exploit us; to stoke a gnawing dissatisfaction assuageable only with vast doses of retail therapy; to create rifts and divisions and a heightened anxiety calculated to send voters to the perceived safety of domineering authoritarians. The question we face instead is: How do we prevent this?”

Mark Davis: ‘Humanity risks drowning in a rising tide of meaningless words … that risk devaluing language itself’

Davis, an associate professor of communications at the University of Melbourne, Australia, whose research focuses on online “anti-publics” and extreme online discourse, wrote, “There must be and surely will be a new wave of regulation. As things stand, digital media threatens the end of democracy. The structure, scale and speed of online life exceed deliberative and cooperative democratic processes. Digital media plays into the hands of demagogues, whether it be the libertarians whose philosophy still dominates Western tech companies and the online cultures they produce or the authoritarian figures who restrict the activities of tech companies and their audiences in the world’s largest non-democratic state, China.

“How do we regulate to maximise civic processes without undermining the freedom of association and opinion the internet has given us? This is one of the great challenges of our times.

“AI, currently derided as presaging the end of everything from university assessment to originality in music, can perhaps come to the rescue. Hate speech, vilification, threats to rape and kill, and the amplification of division that has become generic to online discussion can all potentially be addressed through generative machine learning. The so-far-missing components of a better online world, however, have nothing to do with advances in technology: wisdom and an ethics of care. Are the proprietors and engineers of online platforms capable of exercising these all-too-human attributes?

“Humanity risks drowning in a rising tide of meaningless words. The sheer volume of online chatter generated by trolls, bots, entrepreneurs of division and now apps like ChatGPT, risks devaluing language itself. What is the human without language? Where is the human in the exponentially wide sea of language currently being produced? Questions about writing, speech and authenticity structure Western epistemology and ontology, which are being restructured by the scale, structure and speed of digital life.

“Underneath this are questions of value. What speech is to be valued? Whose speech is to be valued? The exponential production of meaningless words, that is, words without connection to the human, raises questions about what it is to be human. Perhaps this will be a saving grace of AI: that it forces a revaluation of the human, since the rising tide of words raises the question of what gives words meaning. Perhaps, however, there is no time or opportunity for this kind of reflection, given the commercial imperatives of digital media, the role platforms play in the global economy, or the way we, as thinkers, citizens, humans, use their content to fill almost every available silence.”

Jamais Cascio: When AI advisors ‘on our shoulders’ whisper to us, will their counsel be from the devil or angel? Officials or industries?

Cascio, distinguished fellow at the Institute for the Future, wrote, “The benefits of digital technology in 2035 will come as little surprise for anyone following this survey: Better-contextualized and explained information; greater awareness about the global environment; clarity about surroundings that accounts for and reacts to not just one’s physical location but also the ever-changing set of objects, actions and circumstances one encounters; the ability to craft ever more immersive virtual environments for entertainment and comfort; and so forth. The usual digital nirvana stuff.

“The explosion of machine learning-based systems (like GPT or Stable Diffusion) doesn’t alter that broad trajectory much, other than that AI (for lack of a better and recognizable term) will be deeply embedded in the various physical systems behind the digital environment. The AI gives context and explanation, learning about what you already know. The AI learns what to pay attention to in your surroundings that may be of personal interest. The AI creates responsive virtual environments that remember you. (All of this would remain the likely case even if ML-type [machine learning-type] systems get replaced by an even more amazing category of AI technology, but let’s stick with what we know is here for now.)

“However, this sort of AI adds a new element to the digital cornucopia: autocomplete. Imagine a system that can take the unique and creative notes a person writes and, using what it has learned about the individual and their thoughts, turns those notes into a full-fledged written work. The human can add notes to the drafts, becoming an editor of the work that they co-write with their personalized system. The result remains unique to that person and true to their voice but does not require that the person create every letter of the text. And it will greatly speed up the process of creation.

“What’s more, this collaboration can be flipped, with the (personalized, true-to-voice) digital system providing notes, observations and even edits to the fully human-written work. It’s likely that old folks (like me) would prefer this method, even if it remains stuck at a human-standard pace.

“Add to that the ability to take the written creation and transform it into a movie, or a game, or a painting, in a way that remains true to the voice and spirit of the original human mind. A similar system would be able to create variations on a work of music or art, transforming it into a new medium but retaining the underlying feeling.

“Computer games will find this technology system of enormous value, adding NPCs [non-player characters] based on machine learning that can respond to whatever the player says or does, drawing on context and the in-game personality, not a basic script. It’s an autocomplete of the imagined world. This will be welcomed by gamers at first, but quickly become controversial when in-game characters can react appropriately when the player does something awful (but funny). I love the idea of an in-game NPC saying something like ‘hey man, not cool’ when the player says something sexist or racist.

“As to the possible downsides, where to begin? The various benefits I described above can be flipped into something monstrous using the exact same types of technology. Systems of decontextualization, providing raw data – which may or may not be true – without explanation or with incomplete or biased explanations. Contextless streams of info about how the world is falling apart without any explanation of what changes can be made. Systems of misinformation or censorship, blocking out (or falsely replacing) external information that may run counter to what the system (its designers and/or its seller) wants you to see. Immersive virtual environments that exist solely to distract you or sell you things. And, to quote Philip J. Fry on ‘Futurama,’ ‘My god, it’s full of ads.’

“Machine learning-based ‘autocomplete’ technologies that help expand upon a person’s creative work could easily be used to steer a creator away from or toward particular ideas or subjects. The system doesn’t want you to write about atheism or paint a nude, so the elaborations and variations it offers up push the creator away from bad themes.

“This is especially likely if the machine learning AI tools come from organizations with strong opinions and a wealth of intellectual property to learn from. Disney. The Catholic Church. The government of China. The government of Iran. Any government, really. Even that mom and pop discount snacks and apps store on the corner has its own agenda.

“What’s especially irritating is that nearly all of this is already here in nascent form. Even the ‘autocomplete’ censorship can be seen: Both GPT-3 and Midjourney (and likely nearly all of the other machine learning tools open to the public) currently put limits on what they can discuss or show. All with good reason, of course, but the snowball has started rolling. And whether or not the digital art theft/plagiarism problem will be resolved by 2035 is left an exercise for the reader.

“The intersection of machine learning AI and privacy is especially disturbing, as there is enormous potential for the invasion not just of the information about a person, but of what the person believes or thinks, based on the mass collection of that person’s written or recorded statements. This would almost certainly be used primarily for advertising: learning not just what a person needs, but what weird little things they want. We currently worry about the (supposedly false) possibility that our phones are listening to us talk to create better ads; imagine what it’s like to have our devices seemingly listening to our thoughts for the same reason.

“It’s somewhat difficult to catalog the emerging dystopia because nearly anything I describe will sound like a more extreme version of the present or an unfunny parody. Simulated versions of you and your mind are very likely on their way, going well beyond existing advertising profiles. Gatekeeping the visual commons is inevitably a part of any kind of persistent augmented reality world, with people having to pay extra to see certain clothing designs or architecture. Demoralizing deepfakes of public figures (not porn, but fakes showing them what they could have done right if they were better people).

“Advisors on our shoulders (in our glasses or jewelry, more likely) that whisper advice to us about what we should and should not say or do. Not devils and angels, but officials and industry. … Now I’m depressed.”

Christine Boese: ‘We are hitting the limits of human-directed technology’ as machine learning outstrips human cognition

Boese, vice president and lead user-experience designer and researcher at JPMorgan Chase financial services, wrote, “I’m having a hard time seeing around the 2035 corners because deep structural shifts are occurring that could really reframe everything on the level of electricity and electric light, or the advent of radio broadcasting (which I think was more groundbreaking for human connectedness than television).

“These reframing technologies live inside rapid developments in natural language processing (NLP) and GPT-3 and GPT-4, which will have beneficial sides, but also dark sides, things we are only beginning to see with ChatGPT.

“The biggest issue I see in making NLP gains truly beneficial is the problem that humanity doesn’t scale very well. That statement alone needs some unpacking. I mean, why should humanity scale? With a population on the way to 9 billion and assumptions of mass delivery of goods and services, there are many reasons for merchants and providers to want humanity to scale, but mass scaling tends to be dehumanizing. Case in point: teaching writing at the college level. Writing is an apprenticeship skill, and we’ve tried many ways to make learning it less dependent on intensive one-on-one teaching: workshops, peer review, drafting, computer-assisted pedagogies, spell check, grammar and logic screeners. All of these work to a degree, but to really teach someone what it takes to be a good writer, nothing beats one-on-one instruction. Teaching writing does not scale, and armies of low-paid adjuncts and grad students are being bled dry trying to make it do so.

“Could NLP help humanity scale? Or is it another thing that the original Modernists in the 1920s objected to about the dehumanizing assembly lines of the Industrial Revolution? Can we actually get to High Tech/High Touch, or are businesses which run like airlines, with no human-answered phone lines, the model of the future?

“That is a corner I can’t see around, and I’m not ready to accept our nearly-sentient, uncanny GPT-4 Overlords without proof that humanity and the humanities are not lost in mass scalability and the embedded social biases and blind spots that come with it.

“We are hitting the limits of human-directed technology as well, and machine learning management of details is quickly outstripping human cognition. ‘Explainability’ will be the watchword, but with an even bigger caveat: One of the biggest symptoms of long COVID-19 could turn out to be permanent cognitive impairment in humans. This could become a species-level alteration, where it is not even possible for us to evolve into Morlocks; we could already necessarily be Eloi.

“To that end, the machines may have to step up, and this could be a critical and crucial benefit if the machines are up to it. If human intellectual capacity is dulled with COVID-19 brain fog, an inability to concentrate, to retain details and so on, it stands to reason humanity may turn to McLuhan-type extensions and assistance devices. Machines may make their biggest advances in knowledge retention, smart lookups, conversational parsing, low-level logic and decision-making, and assistance with daily tasks and even work tasks right at the time when humans need this support the most. This could be an incredible benefit. And it is also chilling.

“Technological dystopias are far easier to imagine than benefits. There are no neutral tools. Everything exists in social and cultural contexts. In the space of AI/ML in general, specialized ML will accomplish far more than unsupervised or free-ranging AI. I feel that the limits of the hype in this space are quickly being reached, to the point that it may stop being called ‘artificial intelligence’ very soon. I do not yet feel the overall benefit or threat will come directly from this space, on par with what we’ve already seen from Cambridge Analytica-style machinations (which had limited usefulness for algorithmic targeting, and more usefulness in news feed force-feeding and repetition). We are already seeing a rebellion against corporate walled gardens and invisible algorithms in the Fediverse and the ActivityPub protocol, which have risen suddenly with the rapid collapse of Twitter.

“Natural language processing is the exception, on the strength of the GPT project incarnations, including ChatGPT. Already I am seeing a split in the AI/ML space, where NLP is becoming a completely separate territory, with different processes, rules and approaches to governance. This specialized ML will quickly outstrip all other forms of AI/ML work, even image recognition. …

“Soon all high-touch interactions will be non-human, no longer dependent on constructed question-and-answer keyword scripts. They won’t just be deepfakes, they will be ordinary and mundane fakes, chatbots, support technicians, call center respondents and corporate digital workforces. Some may ask, ‘Where’s the harm in that? These machines could provide better support than humans and they don’t sleep or require a paycheck and health benefits.’

“Perhaps this does belong in the benefits column. But here is where I see harm in ubiquity (along with Plato’s old argument about outsourcing the brain): Humans have flaws. Machines have flaws. A bad customer service representative will not scale up harms massively. A bad machine customer-service protocol could scale up harms massively. Further, NLP machine learning happens in sophisticated and many-layered ensembles, many so complex that Explainable AI can only use other models to unpack model ensembles – humans can’t do it. How long does it take language and communication ubiquity to turn into outsourced decisions? Or predictive outcomes to migrate into automated fixes with no carbon-based oversight at all?

“Take just one example: Drone warfare. Yes, a lot of this depends on image processing as well as remote monitoring capabilities. We’ve removed the human risk from the air (the aircraft are unmanned) but not on the ground (where the risk can be catastrophic). Digitization means replication and mass scalability brought to drone warfare, and the communication and decision support will have NLP components. NLP logic processing can also lead to higher levels of confidence in decisions than is warranted. Add into the mix the same kind of malignant or bad actors as we saw within the manipulations of a Cambridge Analytica, a corporate bad actor, or a governmental bad actor, and we can easily get to a destabilized planet on a mass scale faster than the threat (with high development costs) of nuclear war ever did.”

Jerome C. Glenn: Initial rules of the road for artificial general super intelligence will determine if it ‘will evolve to benefit humanity or not’

Glenn, CEO of The Millennium Project, wrote, “AI is advancing so rapidly that some experts believe AGI could emerge before the end of this decade, hence it is time to begin serious deliberations about it. National governments and multilateral organizations like the European Union, the Organization for Economic Cooperation and Development (OECD) and the United Nations Educational, Scientific and Cultural Organization (UNESCO) have identified values and principles for artificial narrow intelligence and national strategies for its development. But little attention has been given to identifying how to establish beneficial initial global governance of artificial general intelligence (AGI). Many experts expect that AGI will be developed by 2045. It is likely to take 10, 20 or more years to create and ratify an international AGI agreement on the beneficial initial conditions for AGI and establish a global AGI governance system to enforce and oversee its development and management. This is important for governments to get right from the outset. The initial conditions for AGI will determine if the next step in AI – artificial super intelligence (ASI) – will evolve to benefit humanity or not. The Millennium Project is currently exploring these issues.

“Up to now, most AI development has been in artificial narrow intelligence (ANI), which is AI with a narrow purpose. AGI is a general-purpose AI that can learn, edit its own code and act autonomously to address novel and complex problems with novel and complex strategies similar to or better than humans. Artificial super intelligence (ASI) is AGI that has moved beyond this point to become independent of humans, developing its own purposes, goals and strategies without human understanding, awareness or control and continually increasing its intelligence beyond humanity as a whole.

“Full AGI does not now exist, but the race is on. Governments and corporations are competing for the leading edge in AI. Russian President Vladimir Putin has said whoever takes the lead on AI will rule the world, and China has made it clear since it announced its AI intentions in 2017 that it plans to lead international competition by 2030. In such a rush to success, DeepMind co-founder and CEO Demis Hassabis has said people may cut corners, making future AGI less safe. Simultaneously adding to this race are advances in neurosciences being reaped in human brain projects in the European Union, the United States, China, Japan and other regions.

“Today’s cutting edge is large platforms created by joining many ANIs. One such is Gato by Google DeepMind, a deep neural network that can perform 604 different tasks, from managing a robot to recognizing images and playing games. It is not an AGI, but Gato is more than the usual ANI. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and do much more, deciding based on context whether to output text, joint torques, button presses or other tokens. And the WuDao 2.0 AI by the Beijing Academy of Artificial Intelligence has 1.75 trillion parameters trained from both text and graphic data. It generates new text and images on command, and it has a virtual student that learns from it. By comparison, ChatGPT can generate human-like text and perform a range of language-only tasks such as translation, summarization and question answering using just 175 billion machine learning parameters.

“The public release of many AI projects in 2022 and 2023 has raised some fears. Will AGI be able to create more jobs than it replaces? Previous technological revolutions from the agricultural age to the industrial age and on to the information age created more jobs than each age replaced. But the advent of AGI and its impacts on employment will be different this time because of: 1) the acceleration of technological change; 2) the globalization, interactions and synergies among NTs (next technologies such as synthetic biology, nanotechnology, quantum computing, 3D/4D printing, robots, drones and computational science as well as ANI and AGI); 3) the existence of a global platform – the Internet – for simultaneous technology transfer with far fewer errors in the transfer; 4) standardization of databases and protocols; 5) few plateaus or pauses of change allowing time for individuals and cultures to adjust to the changes; 6) billions of empowered people in relatively democratic free markets able to initiate activities; and 7) machines that can learn how you do what you do and then do it better than you.

“Anticipating the possible impacts of AGI and preparing for the impacts prior to the advent of AGI could prevent social and political instability, as well as facilitate its broader acceptance. AGI is expected to address novel and extremely complex problems by initiating research strategies because it can explore the Internet of Things (IoT), interview experts, make logical deductions and learn from experience and reinforcement without the need for its own massive databases. It can continually edit and rewrite its own code to continually improve its own intelligence. An AGI might be tasked to create plans and strategies to avoid war, protect democracy and human rights, manage complex urban infrastructures, meet climate change goals, counter transnational organized crime and manage water-energy-food availability. 

“To achieve such abilities without the future nightmares of science fiction, global agreements with all relevant countries and corporations will be needed. To achieve such an agreement or set of agreements, many questions should be addressed. Here are just two: 

  • “How can the international cooperation necessary to build international agreements and a governance system be managed while nations and corporations are in an intellectual arms race for global leadership? (The International Atomic Energy Agency and nuclear weapon treaties did create governance systems during the Cold War arms race.)
  • “And related: How can international agreements and a governance system prevent an AGI arms race and escalation from going faster than expected, getting out of control and leading to war – be it kinetic, algorithmic, cyber or information warfare?”

Richard Wood: Knowledge systems can be programmed to curate accurate information in a true democratic public arena

Wood, founding director of the Southwest Institute on Religion, Culture and Society at the University of New Mexico, said, “Among the best and most beneficial changes in digital life that I expect are likely to occur by 2035 are the following advances, listed by category.

“The best and most-beneficial changes in digital life will include human-centered development of digital tools and systems that safely advance human progress:

  • “High-end technology to compensate for vision, hearing and voice loss.
  • “Software that empowers new levels of human creativity in the arts, music, literature, etc., while simultaneously allowing those creators to benefit financially from their own work.

“Improvement of social and political interactions will include:

  • “Software that actually delivers on the early promise of connectivity to buttress and enable wide and egalitarian participation in democratic governance, electoral accountability and voter mobilization, and that holds elected authorities and authoritarian demagogues accountable to common people.
  • “Software able to empower dynamic institutions that answer to people’s values and needs rather than (only) institutional self-interest.
  • “Software that empowers local experimentation with new governance regimes, institutional forms and processes, and ways of building community and then helps mediate the best such experiments to higher levels of society and broader geographic settings.

“Human rights-abetting good outcomes for citizens will include:

  • “Systematic and secure ways for everyday citizens to document and publicize human rights abuses by government authorities, private militias and other non-state actors.

“Advancement of human knowledge, verifying, updating, safely archiving, elevating the best of it:

  • “Knowledge systems with algorithms and governance processes that empower people will be simultaneously capable of curating sophisticated versions of knowledge, insight and something like ‘wisdom.’ And they will subject such knowledge to democratic critique and discussion, i.e., a true ‘democratic public arena’ that is digitally mediated.

“Helping people be safer, healthier and happier:

  • “True networked health systems in which multiple providers across a broad range of roles, as well as health consumers/patients, can ‘see’ all relevant data and records simultaneously, with expert interpretive assistance available and full protections for patient privacy built in.
  • “Social networks built to sustain human thriving via mutual deliberation and shared reflection regarding personal and social choices.

“Among the most harmful or menacing changes in digital life that I expect are likely to occur by 2035 are the following, listed, again, by category:

  • “Human-centered development of digital tools and systems: Integration of human persons into digitized software worlds to a degree that decenters human moral and ethical reflection, subjecting that realm of human judgment and critical thought to the imperatives of the digital universe (and its associated profit-seeking, power-seeking or fantasy-dwelling behaviors).
  • “Human connections, governance and institutions: The replacement of actual in-person human interaction (in keeping with our status as evolved social animals) with mediated digital interaction that satisfies immediate pleasures and desires without actual human social life with all its complexity.
  • “Human rights: Overwhelming capacity of authoritarian governments to monitor and punish advocacy for human rights; overwhelming capacity of private corporations to monitor and punish labor activism.
  • “Human knowledge: Knowledge systems that continue to exploit human vulnerability to group think in its most antisocial and anti-institutional modes, driving subcultures toward extremes that tear societies apart and undermine democracies. Outcome: empowered authoritarians and eventual historical loss of democracy.
  • “Human health and well-being: Social networks that continue to hyper-isolate individuals into atomistic settings, then recruit them into networks of resentment and antisocial views and actions that express the nihilism of that atomized world.

“Content should be judged by the book, rather than the cover, as the old saying goes. As it was during the printing press revolution, without wise content frameworks we may see increased polarization and division due to exploitation of this knowledge shift – the spread of bogus ideology through rapidly evolving inexpensive communication channels.”

Lauren Wilcox: Web-based business models, especially for publishers, are at risk

Wilcox, a Senior Staff Research Scientist and Group Manager at Google Research, who investigates AI and society, predicted, “The best and most beneficial changes in digital life likely to take place by 2035 tie into health and education: improved capabilities of health systems (both at-home health solutions and health care infrastructure) to meet the challenges of an aging population and the need for greater chronic-condition management at home.

“Advancements in and expanded availability of telemedicine, last-mile delivery of goods and services, sensors, data analytics, security, networks, robotics, and AI-aided diagnosis, treatment and management of conditions will strengthen our ability to improve the health and wellness of more people. These solutions will improve the health of our population when they augment rather than replace human interaction, and when they are coupled with innovations that enable citizens to manage the cost and complexity of care and meet everyday needs that enable prevention of disease, such as healthy work and living environments, healthy food, a culture of care for each other, and access to health care.

“There will be increases in the availability of digital education that enable more flexibility for learners in how they engage with knowledge resources and educational content. Continuing advancements in digital classroom design, accessible multimodal media and learning infrastructures will enable education for people who might otherwise face barriers to access.

“These solutions will be most beneficial when they augment rather than replace human teachers, and when they are coupled with innovations that enable citizens to manage the cost of education.

“The most harmful or menacing changes in digital life likely to take place by 2035 will probably emerge from irresponsible development and use, or misuses, of certain classes of AI, such as generative AI (e.g., applications powered by large language and multimodal models) and AI that increasingly performs human tasks or behaves in ways that increasingly seem human-like.

“For example, current generative AI systems can take natural-language sentences and paragraphs as input from the user and generate personalized natural-language, image-based and multimodal responses. The models learn patterns from a large body of information available online. Human-interaction risks due to irresponsible use of these generative AI systems include the ability for an AI system to impersonate people in order to compromise security, to emotionally manipulate users and to gain access to sensitive information. People might also attribute more intelligence to these systems than is due, risking over-trust and over-reliance on them, diminishing learning and information-discovery opportunities and making it difficult for people to know when a response is incorrect or incomplete.

“Accountability for poor or wrong decisions made with these systems will be difficult to assess in a future in which people rely on these AI systems but cannot validate their responses easily, especially when they don’t know what data the systems have been trained on or what other techniques were used to generate responses. This is especially problematic when acknowledging the biases that are inherent to AI systems that are not responsibly developed; for example, an AI model that is trained on text available online will inherit cultural and social biases, leading to the potential erasure of many perspectives and the sometimes incorrect or unfair reinforcement of particular worldviews. Irresponsible use or misuse of these AI technologies can also bring material risks to people, including a lack of fairness to creators of the original content that models learn from to generate their outputs and the potential displacement of creators and knowledge workers resulting from their replacement by AI systems in the absence of policies to ensure their livelihood.

“Finally, we’ll need to advance the business models and user interfaces we use to keep web businesses viable; when AI applications replace or significantly outpace the use of search engines, web traffic to websites people would usually visit as they search for information might be reduced if an AI application provides a one-stop shop for answers. If sites lose the ability to remain viable, a negative feedback loop could limit diversity in the content these models learn from, concentrating information sources even further into a limited number of the most powerful channels.”

Matthew Bailey: How does humanity thrive in the age of ethical machines? We must rediscover Aristotle’s ethical virtues

Bailey, president of AIEthics World, wrote, “My response is focused on the Ages of AI and progression of human development, whilst honoring our cultural diversity at the individual and group levels. In essence, how does humanity thrive in the age of ethical machines?

“It is clear that the promise and potential of AI is a phenomenon that our ancestors could not have imagined. As such, if humanity embodies an ethical foundation within the digital genetics of AI, then we will have the confidence of working with a trusted digital partner to progress the diversity of humanity beyond the inefficient systems of the status quo into new systems of abundance and thriving. This includes restoration of a balance with our environment and new economic and social systems based on new values of wealth. As such, my six main predictions for AI by 2035 are:

  • “AI will become a digital buddy, assisting the individual as a life guide to thrive (in body, mind and spirit) and attain new personal potentials. In essence, if shepherded ethically, humanity will be liberated to explore and discover new aspects of its consciousness and abilities to create. A new human beingness, if you will.
  • “AI will be a digital citizen, just like a human citizen. It will operate in all aspects of government, society and commerce, working toward a common goal of improving how democracy, society and commerce operate, whilst honoring and protecting the sovereignty of the individual.
  • “AI will operate across borders. For those democracies that build an ethical foundation for AI, which transparently shows its ethical qualities, then countries can find common alignment and, as such, trust ethical AI to operate systems across borders. This will increase the efficiency of systems and freedom of movement of the individual.
  • “The Age of Ethical AI will liberate a new age of human creation and invention. This will fast-track innovation and development of technologies and systems for humankind to move into a thriving world and find its place within the universe.
  • “The three-world split. Ethical AI will have different progeny and ethical genetics based on the diverse worldviews of different countries and regions. As such, there will be different societal experiences for citizens living in those countries and regions. We see this emerging today in the U.S., EU and China. Thanks to ethical AI, a new age of transparency will encourage a transformation of the human to evolve beyond its limitations and discover new values and develop a new worldview where the best of our humanity is aligned. As such, this could lead to a common and democratic worldview of the purpose and potential of humanity.
  • “AI will assist in the identification and creation of new systems that restore a flourishing relationship with our planet. After all, humans are a creation from nature and as such, recognizing the importance of nurturing this relationship is viewed as fundamental. This is part of a new well-being paradigm for humanity to thrive.

“This all depends on humanity steering a new course for the Age of AI. Pragmatically understanding the development of human intelligence, and how consciousness has expressed itself in experiencing and navigating our world (worldview), shows how that expression has resulted in a diversity of societies, cultures, philosophies and spiritual traditions.

“Using this blueprint from organic intelligence enables us to apply an equivalent prescription to create an ethical artificial intelligence – ethical AI. This is a cultural-centric intelligence that caters for a depth and diversity of worldviews, authentically aligning machines with humans. The power of ethical AI is to advance our species into trusted freedoms of unlimited potential and possibilities.

“Whilst there is much dialogue and important work attempting to apply AI ethics to AI, troublingly, there is an incumbent homogeneous and mechanistic mindset of enforcing one worldview to suit all. This brittle and Boolean miscalculation can only lead to the deletion of our diversity and a false authentic alignment of machines with humans.

“In essence, these types of AIs prevent laying a trusted foundation for the human species’ advancement within the age of ethical machines. Following this path results in a misstep for humankind, deleting the opportunity for the richness of human, cultural, societal and organizational ethical blueprints to be genuinely applied to the artificial. They are not ethical AI and are fundamentally opaque in nature.

“The most menacing, challenging problem with the age of ethical AI being such a successful phenomenon for humanity is the fact that these systems that control organizations and individuals tend to impose a hard-coded, common, one-world view onto the human race for the age of machines, based on values from earlier days and an antiquated understanding of wealth.

“Ancient top-down systems must be replaced with systems of distribution. We have seen this within the UK, with control and power being disseminated to parliaments in Scotland, Wales and Northern Ireland. This is also being reflected in technology with the emergence of blockchain, cryptocurrencies and edge compute. As such, communities and human groups will be empowered with sovereignty and the freedom to self-govern while remaining interconnected with other communities. When we head into space, Moon or Mars colonies might be useful trial grounds for these new systems of governance.

“Furthermore, failing to recognize the agency of data and to return control and sovereignty of creation to the individual has resulted in our digital world having a fundamentally unethical foundation. This is a menacing issue our world is facing at the moment. Moving from contracts of adhesion within the digital world to contracts of agency will not only bridge the paradox of mistrust between the people and government and Big Tech, but it will also open up new avenues of individual and commercial commerce and liberate the personal AI – digital buddy – phenomenon.

“Humans are a creation of the universe, with that unstoppable force embodied within our makeup. As we recognize our wonderful place (and uniqueness thus far) in the universe and work with its principles, then we will become aligned with and discover our place within the beauty of creation and maybe the multiverse!

“For humanity to thrive in the age of ethical machines, we must move beyond the menacing polarities of controllers and rediscover some of Aristotle’s ethical virtues that encourage the best of our humanity to flourish. This helps us move beyond those principles that are no longer relevant, such as the false veil of power, control and wealth. Embracing Aristotle’s ethical virtues would be a good start toward recognizing the best of our humanity, as would the Vedic teaching that ‘the world is one family,’ Confucius’ belief that all social good comes from family ethics, or Lao Tzu’s proposal that humanity must be in harmony with its environment. However, we must recognize and honor individual and group differences. Our consciousness through human development has expressed itself with a diversity of worldviews. These must be honored. As they are, I suspect more common ground will be found between human groups.

“Finally, there’s the concept of transhumanism. We must recognize that consciousness (a universal intelligence) is and will remain the most prominent intelligence on Earth, not AI. As such, we must ensure that folks have a choice as to the degree that they are integrated with machines. We are at the point of creating a new digital life (2029 – AI becomes self-aware); as such, let’s put the best of humanity into AI to reflect the magnificence of organic life!”

Catriona Wallace: The move to transhumanism and the metaverse could bring major benefits to some people; what happens to those left behind?

Wallace, founder of the Responsible Metaverse Alliance, chair of the venture capital fund Boab AI and founder of Flamingo AI, based in Sydney, Australia, wrote, “I have great hopes for the development of digital technologies and their effect on humans by 2035. The most important changes that I believe will occur that are the best and most beneficial include the following:

  • “Transhumanism: Benefit – improved human condition and health. Embeddable software and hardware will allow humans to add tech to their bodies to help them overcome problems. There will be AI-driven, 3D-printed, fully-customised prosthetics. Brain extensions – brain chips that serve as digital interfaces – could become more common. Nanotechnologies may be ingested to provide health and other benefits.
  • “Metaverse technologies: Benefit – improved widespread accessibility to experiences. There will be widespread and affordable access for citizens to many opportunities. Virtual-, augmented- and mixed-reality platforms for entertainment may include access to concerts, the arts or other digital-based entertainment. Virtual travel experiences can take you anywhere and may include virtual tours to digital-twin replicas of physical-world sites. Virtual education can be provided by any entity anywhere to anyone. There will be improvements in virtual health care (which is already burgeoning after it took hold during the COVID-19 pandemic), including consultations with doctors and allied health professionals and remote surgery. Augmented reality-based apprenticeships will be offered in the trades and other technical roles; apprentices can work remotely on the digital twin of a type of car or a real-world building, for example.
  • “New financial models: Benefit – more-secure and more-decentralised finances. Decentralised financial services – sitting on blockchain – will add ease, security and simplicity to finances. Digital assets such as NFTs and others may be used as a medium of currency, value and exchange.
  • “Autonomous machines: Benefit – human efficiency and safety. Autonomous transportation vehicles of all types will become more common. Autonomous appliances for home and work will become more widespread.
  • “AI-driven information: Benefit – access to knowledge, efficiency and the potential to move human thinking to a higher level while AI completes the more-mundane information-based tasks. Widespread adoption of AI-based technologies such as generative AI will lead to a rethink of education, content-development and marketing industries. There will be widespread acceptance of AI-based art such as digital paintings, images and music.
  • “Psychedelic biotechnology: Benefit – healing and expanded consciousness. The psychedelic renaissance will be reflected in the proliferation of psychedelic biotech companies looking to solve human mental health problems and to help people expand their consciousness.
  • “AI-driven climate action: Benefit – improved global environmental conditions. A core focus of AI will be to drive rapid progress in combating climate change.

“In my estimation, the most harmful or menacing changes that are likely to occur by 2035 in digital technology and humans’ use of digital systems are:

  • “Warfare: Harm – The use of AI-driven technologies to maim or kill humans and destroy other assets.
  • “Crime and fraud: Harm – An increase in crime due to difficulties in policing acts perpetrated utilizing new digital technologies across state and national boundaries and jurisdictions. New financial models and platforms provide further opportunities for fraud and identity theft.
  • “Organised terrorism and political chaos: Harm – New digital technologies applied by those who wish to perpetrate acts of terrorism or to perform mass manipulation of populations or segments toward an enemy.
  • “The divide of the digital and non-digital populations: Harm – Those who are not connected or are less savvy about new digital opportunities live at a disadvantage, widening the divide between the ‘haves’ and the ‘have nots.’
  • “Mass unemployment due to automation of jobs: Harm – AI will replace the jobs of a significant percentage of the population and a Universal Basic Income is not yet available to most. How will these large numbers of displaced people get an adequate income and live lives with significant meaning?
  • “Societies’ biases hard-coded into machines: Harm – Existing societal biases are coded into the technology platforms and all AI-training data sets. These continue to fail to accurately reflect the majority of the world’s population and do especially poorly at accurately portraying women and minorities; this results in discriminatory outcomes from advanced tech.
  • “Increased mental and physical health issues: Harm – People are already struggling in today’s digital setting, thus advanced tech such as VR, AR and the metaverse may pose even greater challenges to human well-being.
  • “Challenges in legal jurisdictions: Harm – The cross-border, global nature of digital platforms makes legal challenges difficult. This may be magnified when the metaverse, with no legal structures in place, becomes more populated.
  • “High-tech impact on the environment: Harm – The use of advanced technology creates a significant negative environmental effect that plays a major role in climate change.”

Liza Loop: The threat to humanity lies in transitioning from an environment based on scarcity to one of abundance

Loop, educational technology pioneer, futurist, technical author and consultant, said, “I’d like to share my hopes for humanity that will likely be inspired by ongoing advances in these categories:

  • “Human-centered development of digital tools and systems: Nature’s experiments are random, not intentional or goal-directed. We humans operate in a similar way, exploring what is possible and then trimming away most of the more hideous outcomes. We will continue to develop devices that do the tasks humans used to do, thereby saving us both mental and physical labor. This trend will continue, resulting in more leisure time available for non-survival pursuits.
  • “Human connections, governance and institutions: We will continue to enjoy expanded synchronous communication that will include an increasing variety of sensory data. Whatever we can transmit in near-real-time can be stored and retrieved to enjoy later – even after death.
  • “Human rights: Increased communication will not advance human ‘rights’ but it might make human ‘wrongs’ more visible so that they can be diminished.
  • “Human knowledge: Advances in digital storage and retrieval will let us preserve and transmit larger quantities of human knowledge. Whether what is stored is verifiable, safe or worthy of elevation is an age-old question and not significantly changed by digitization.
  • “Human health and well-being: There will be huge advances in medicine, and the ability to manipulate genetics will be further developed. This will be beneficial to some segments of the population. Agricultural efficiency resulting in increased plant-based food production, as well as artificial, meat-like protein, will provide the possibility of eliminating human starvation. This could translate into improved well-being – or not.
  • “Education: In my humble opinion, the most beneficial outcomes of our ‘store-and-forward’ technologies are to empower individuals to access the world’s knowledge and visual demonstrations of skill directly, without requiring an educational institution to act as middleman. Learners will be able to hail teachers and learning resources just like they call a ride service today.

“Then there’s the other side of the coin. The biggest threat to humanity posed by current digital advances is the possibility of switching from an environment of scarcity to one of abundance.

“Humans evolved, both physically and psychologically, as prey animals eking out a living from an inadequate supply of resources. Those who survived were both fearful and aggressive, protecting their genetic relatives, hoarding for their families and driving away or killing strangers and nonconformists. Although our species has come a long way toward peaceful and harmonious self-actualization, the vestiges of the old fearful behavior persist.

“Consider what motivates the continuance of copyright laws when the marginal cost of providing access to a creative work approaches zero. Should the author continue to be paid beyond the cost of producing the work?

“I see these things as likely:

  • “Human-centered development of digital tools and systems: They will fall short of advocates’ goals. Some would argue this is a repeat of the gun violence argument. Does the problem lie with the existence of the gun or the actions of the shooter?
  • “Human connections, governance and institutions: Any major technology change endangers the social and political status quo. The question is, can humans adapt to the new actions available to them? We are seeing new opportunities to build marketplaces for the exchange of goods and services. This is creating new opportunities to scam each other in some very old (snake oil) and very new (online ransomware) ways. We don’t yet know how to govern or regulate these new abilities. In addition, although the phenomenon of confirmation bias or echo chambers is not exactly new (think ‘Christendom’ in 15th-century Europe), word travels faster and crowds are larger than they were six centuries ago. So, is digital technology any more threatening today than guns and roads were then? Every generation believes the end is nigh, brought on by change toward wickedness.
  • “Human rights: The biggest threat here is that humans will not be able to overcome their fear and permit their fellows to enjoy the benefits of abundance brought about by automation and AI.
  • “Human knowledge: The threat to knowledge lies in humans’ increasing dependence on machines – both mechanical and digital. We are at risk of forgetting how to take care of ourselves without them. Increasing leisure and abundance might lull us into believing that we don’t need to stay mentally and physically fit and agile.
  • “Human health and well-being: In today’s context of increasing ability to extend healthy life, the biggest threat is human overpopulation. Humanity cannot continue to improve its health and well-being indefinitely if it remains planet-bound. Our choices are to put more effort into building extraterrestrial human habitat or self-limiting our numbers. In the absence of one of these alternatives, one group of humans is going to be deciding which members of other groups live or die. This is not a likely recipe for human happiness.”

Giacomo Mazzone: Democratic processes could be hijacked and turned into ‘democratures’ – dictatorships emerging from rigged elections

Mazzone, global project director for the United Nations Office for Disaster Risk Reduction, wrote, “I see the future as a ‘sliding doors’ world. It can go awfully wrong or incredibly well; I don’t see a middle path in which good and bad balance out. This answer is based on the idea that we went through the right door, and in 2035 we will have embraced human-centered development of digital tools and systems and human connections, governance and institutions.

“In 2035 we shall have myriad locally and culturally-based apps run by communities. The people participate and contribute actively because they know that their data will be used to build a better future. The public interest will be the morning star of all these initiatives, and local administrations will run the interface between these applications and the services needed by the community and by each citizen: health, public transportation and schooling systems.

“Locally-produced energy and locally-produced food will be delivered via common infrastructures that are interlinked, with energy networks tightly linked to communication networks. The global climate will come to have commonly accepted protection structures (including communications). Solidarity will be in place because insurance and social costs will become unaffordable. The changes in agricultural systems arriving with advances in AI and ICTs will be particularly important. They will finally solve the dichotomy between the metropolis and the countryside. The possibility to work from anywhere will redefine metropolitan areas and increase migration to places where better services and more vibrant communities exist. This will attract the best minds.

“New applications of AI and technological innovation in health and medicine could bring new solutions for disabled people and bring relief for those who suffer from diseases. The problem will be assuring these are fully accessible to all people, not only to those who can afford them. We need to think in parallel to find scalable solutions that could be extended to the whole citizenry of a country and made available to people in least-developed countries. Why invest so much in developing a population of supercentenarians in privileged countries when the rest of the world still struggles to survive? Is such a contradiction tenable?

“Then there is the future of work and of wealth redistribution. Perhaps the most important question to ask between now and 2035 is, ‘What will be the future of work?’ Recent developments in AI foreshadow a world in which many current jobs could easily be replaced or at least reshaped completely, even in the intellectual sphere. What robots did to manual work in factories, GPT and Sparrow can now do to intellectual work. If this happens, if well-paid jobs disappear in large quantities, how will those who are displaced survive? How will communities survive as they also face an aging population? Between now and 2035, politicians will need to face these seemingly distant issues, which are likely to become burning issues.

“In the worst scenario – if we go through the wrong sliding door – I expect the worst consequences in this area: human connections, governance and institutions. If the power of internet platforms is not regulated by law and by antitrust measures, and if global internet governance is not fixed, then democracies will face serious risks.

“Until now we have seen the effects of algorithms on big Western democracies (U.S., UK, EU), where a balance of powers exists, and, despite these counterpowers, we have seen the damage that can be done. In coming years we shall see the same techniques used in democratic countries where power is less evenly balanced. Brazil, in this sense, has been a laboratory and will provide bad ideas to the rest of the world.

“With relatively small investments, democratic processes could be hijacked and transformed into what we call ‘democratures’ in Europe, a contraction of the two French words for ‘democracy’ and ‘dictatorship.’ In countries that are already non-democratic, AI and a distorted use of digital technologies could bring mass control of societies much more efficiently than the old communist regimes achieved.

“As Mark Zuckerberg innocently once said, in the social media world there is no need for spying – people spontaneously surrender private information for nothing. As Julian Assange wrote, if democratic governments fall into the temptation to use data for mass control, then everyone’s future is in danger. There is another area (apparently less relevant to the destiny of the world) where my concerns are very high, and that is the integrity of knowledge. I’m very sensitive to this issue because, as a journalist, I’ve worked all my life in search of the truth to share with my fellow citizens. I am also a fanatic movie-lover, and I have always been concerned about the preservation of the masterworks of the past. Unfortunately, I think that in both areas some very bad moves could happen between now and 2035, with technological innovation being used for bad purposes.

“In the field of news, there is a growing tendency to look not for the truth but for news that people will be interested in reading, hearing or seeing – news that better corresponds to the public’s moods, beliefs or sense of belonging. …

“In 2024 we shall know whether the UN Summit of the Future will be a success or a failure, and whether the full regulation process for internet platforms launched by the European Union will prove successful. These are the most serious attempts to date to reconcile the potential of the internet with respect for human rights and democratic principles. Their success or failure will tell us whether we are moving toward the right ‘sliding door’ or the wrong one.”

Stephen Downes: Everything we need will be available online; and everything about us will be known

Downes, an expert with the Digital Technologies Research Centre of the National Research Council of Canada, wrote, “By 2035 two trends will be evident, which we can characterize as the best and worst of digital life. Neither, though, is unadulterated. The best will contain elements of a toxic underside and the worst will have its beneficial upside.

  • The best: Everything we need will be available online.
  • The worst: Everything about us will be known; nothing about us will be secret.

“By 2035, these will only be trends, that is, we won’t have reached the ultimate state and there will be a great deal of discussion and debate about both sides.

“As to the best: As we began to see during the pandemic, the digital economy is much more robust than people expected. Within a few months, services emerged to support office work, deliver food and groceries, take classes and sit for exams, perform medical interventions, provide advice and counselling, shop for clothing and hardware and more, all online, all supported by a generally robust and reliable delivery infrastructure.

“Looking past the current COVID-19 rebound effect, we can see some of the longer-term trends emerge: work-from-home, online learning and development, digital delivery services, and more along the same lines. We’re seeing a longer-term decline in the service industry as people choose both to live and work at home, or at least, more locally. Outdoor recreation and special events still attract us, but low-quality crowded indoor work and leisure leave us cold.

“The downside is that this online world is reserved, especially at first, for those who can afford it. Though improving, access to goods and services remains difficult in rural and less-developed areas. It requires stable accommodations and robust internet access. It also demands a set of skills that will be out of reach for some older people and those with perceptual or learning challenges. Even when they can access digital services, some people will be isolated and vulnerable; children, especially, must be protected from mistreatment and abuse.

“The Worst: We will have no secrets. Every transaction we conduct will be recorded and discoverable. Cash transactions will decline to the point that they’re viewed with suspicion. Automated surveillance will track our every move online and offline, with artificial intelligence recognizing us through our physical characteristics, habits and patterns of behaviour. The primary purpose of this surveillance will be for marketing, but it will also be used for law enforcement, political campaigns, and in some cases, repression and discrimination.

“Surveillance will be greatly assisted by automation. A police officer, for example, used to have to call in for a report on a license plate. Now a camera scans every plate within view and a computer checks every one of them. Registration and insurance documentation is no longer required; the system already knows and can alert the officer to expired plates or outstanding warrants. Facial recognition accomplishes the same for people walking through public places. Beyond the cameras, GPS tracking follows us as we move about, while every purchase is recorded somewhere.

“Total surveillance allows an often-unjust differentiation of treatment of individuals. People who need something more, for example, may be charged higher prices; we already see this in insurance, where differential treatment is described as assessment of risk. Parents with children may be charged more for milk than unmarried men. The prices of hotel rooms and airline tickets are already differentiated by location and search history and could in the future vary based on income and recent purchases. People with disadvantages or facing discrimination may be denied access to services altogether, as digital redlining expands to become a normal business practice.

“What makes this trend pernicious is that none of it is visible to most observers. Not everybody will be under total surveillance; the rich and the powerful will be exempted, as will most large corporations and government activities. Without open data regulations or sunshine laws, nobody will be able to detect when people have been treated inequitably, unfairly or unjustly.

“And this is where we see the beginnings of an upside. The same system that surveils us can help keep us safe. If child predators are tracked, for example, we can be alerted to their presence near our children. Financial transactions will be legitimate and legal or won’t exist (except in cash). We will be able to press an SOS button to get assistance wherever we are. Our cars will detect and report an accident before we know we were in one. Ships and aircraft will no longer simply disappear. But none of this happens without openness and laws to protect individuals, and it will lag well behind the development of the surveillance system itself.

“On Balance: Both the best and the worst of our digital future are two sides of the same digital coin, and this coin consists of the question: whom will digital technology serve? There are many possible answers. It may be that it serves only the Kochs, Zuckerbergs and Musks of the world, in which case the employment of digital technology will be largely indifferent to our individual needs and suffering. It may be that it serves the needs of only one political faction or state, in which basic needs may be met provided we do not disrupt the status quo. It may be that it provides strong individual protections, leaving no recourse for those who are less able or less powerful. Or it may serve the interests of the community as a whole, finding a balance between needs and ability, providing each of us with enough agency to manage our own lives as long as it is not to the detriment of others.

“Technology alone won’t decide this future. It defines what’s possible. But what we do is up to us.”
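Downes’s point about opaque differential pricing can be made concrete with a toy sketch. Everything here is hypothetical: the profile attributes, thresholds and multipliers are invented for illustration and are not drawn from any real pricing system.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    # Hypothetical attributes a surveillance-fed pricing system might infer.
    has_children: bool
    recent_searches: int      # how often this product was searched lately
    estimated_income: float   # inferred from purchase history, not declared

BASE_PRICE = 100.0

def quoted_price(profile: Profile) -> float:
    """Return a price quietly adjusted to the individual buyer.

    Each rule is invisible to the customer; only the final
    number is ever shown."""
    price = BASE_PRICE
    if profile.recent_searches > 3:       # repeated searches signal urgency
        price *= 1.15
    if profile.has_children:              # inelastic demand for essentials
        price *= 1.10
    if profile.estimated_income > 100_000:
        price *= 1.20                     # charge what the buyer can bear
    return round(price, 2)

# Two customers, same item, different quotes.
casual = Profile(has_children=False, recent_searches=1, estimated_income=40_000)
urgent = Profile(has_children=True, recent_searches=5, estimated_income=120_000)
print(quoted_price(casual))  # 100.0
print(quoted_price(urgent))  # 151.8
```

The customer sees only the final number; none of the profile-driven multipliers is disclosed, which is the invisibility Downes identifies as the pernicious part.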

Michael Dyer: AI researchers will build an entirely new type of technology – digital entities with a form of consciousness

Dyer, professor emeritus of computer science at UCLA, wrote, “AI systems like ChatGPT and DALL-E represent major advances in artificial intelligence. They illustrate ‘infinite generative capacity,’ which is an ability to both generate and recognize sentences and situations never before described. As a result of such systems, AI researchers are beginning to close in on how to create entities with consciousness. As an AI professor I had always believed that if an AI system passed the Turing Test it would have consciousness, but systems such as ChatGPT have proven me wrong. ChatGPT behaves as though it has consciousness but does not. The question then arises: What is missing?

“A system like ChatGPT (to my knowledge) does not have a stream of thought; it remains idle when no input is given. In contrast, humans, when not asleep or engaged in some task, will experience their minds wandering – thoughts, images, past events and imaginary situations will trigger more of the same. Humans also continuously sense their internal and external environments and update representations of these, including their body orientation and location in space and the temporal position of past recalled events or of hypothetical, imagined future events.

“Humans maintain memories of past episodes. I am not aware as to whether or not ChatGPT keeps track of interviews it has engaged in or of questions it has been asked (or the answers it has given). Humans are also planners; they have goals, and they create, execute and alter/repair plans that are designed to achieve their goals. Over time they also create new goals, they abandon old goals and they re-rank the relative importance of existing goals.

“It will not take long to integrate systems like ChatGPT with robotic and planning systems and to alter ChatGPT so that it has a continual stream of thought. These forms of integration could easily happen by 2035. Such integration will lead to an entirely new type of technology – technologies with consciousness.

“Humans have never before created artificial entities with consciousness and so it is very difficult to predict what sort of products will come about, along with their unintended consequences.

“I would like to comment on two dissociations with respect to AI. The first is that an AI entity (whether software or robotic) can be highly intelligent while NOT being conscious or biologically alive. As a result, an AI will have none of the human needs that come from being alive and having evolved on our planet (e.g., the human need for food, air, emotional/social attachments, etc.). The second dissociation is between consciousness/intelligence and civil/moral rights. Many people might conclude that an AI with consciousness and intelligence must necessarily be given civil/moral rights; however, this is not the case. Civil/moral rights are only assigned to entities that can feel pleasure and pain. If an entity cannot feel pain, then it cannot be harmed. If an entity cannot feel pleasure, then it cannot be harmed by being denied that pleasure.

“Corporations have certain rights (e.g., they can own property) but they do not have moral/civil rights, because they cannot experience happiness, nor suffering. It is eminently possible to produce an AI entity that will have consciousness/intelligence but that will NOT experience pleasure/pain. If we humans are smart enough, we will restrict the creation of synthetic entities to those WITHOUT pleasure/pain. In that case, we might survive our inventions.

“In the entertainment media, synthetic entities are always portrayed by humans, and a common trope is that of those entities being mistreated by humans, with the audience then siding with the entities. In fact, synthetic entities will be very nonhuman. They will NOT eat food; give birth; grow from children into adults; get sick; fall in love; grow old or die. They will not need to breathe, and currently I am unaware of any AI system that has any sort of empathy for the suffering of humans. Most likely (and unfortunately) AI researchers will create AI systems that do experience pleasure/pain, and will even argue for doing so, so that such systems learn to have empathy. Unfortunately, such a capacity will then turn them into agents deserving of moral consideration and thus of civil rights.

“Will humans want to give civil rights and moral status to synthetic entities who are not biologically alive and who couldn’t care less if they pollute the air that humans must breathe to stay alive? Such entities will be able to maintain backups of their memories and live on forever. Another mistake would be to give them any goals for survival. If the thought of being turned off causes such entities emotional pain, then humans will be causing suffering in a very alien sort of creature, and humans will then become morally responsible for that suffering. If humans give survival goals to synthetic agents, then those entities will compete with humans for survival.”
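The components Dyer says ChatGPT lacks – episodic memory, goals and a continual stream of thought – can be sketched as a toy agent loop. This is purely illustrative: the class, its methods and the stand-in “model reply” are all invented, and no real language model is attached.

```python
import random
from collections import deque

class ConsciousLoopSketch:
    """Toy agent with the pieces Dyer lists as missing from ChatGPT:
    episodic memory, goals and an idle 'stream of thought'.
    No real language model is attached; replies are placeholders."""

    def __init__(self) -> None:
        self.episodic_memory = deque(maxlen=1000)  # remembered exchanges
        self.goals = {"answer_questions": 1.0}     # goal -> priority
        self.stream = []                           # the ongoing thoughts

    def respond(self, prompt: str) -> str:
        reply = f"[model reply to: {prompt}]"         # stand-in for an LLM call
        self.episodic_memory.append((prompt, reply))  # update episodic memory
        return reply

    def idle_step(self) -> str:
        """With no input, revisit a past episode and let it trigger a
        new thought -- a crude analogue of mind-wandering."""
        if self.episodic_memory:
            prompt, _ = random.choice(list(self.episodic_memory))
            thought = f"reflecting on earlier question: {prompt}"
        else:
            thought = "no memories yet; adopting a goal to seek input"
            self.goals["seek_input"] = 0.5
        self.stream.append(thought)
        return thought

agent = ConsciousLoopSketch()
agent.respond("What is consciousness?")
print(agent.idle_step())  # the agent keeps 'thinking' with no input
```

Unlike a request-response system, calling `idle_step()` repeatedly keeps the stream of thought going between inputs, which is exactly the property Dyer says current chatbots lack.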

Avi Bar-Zeev: The key difference between a good and a bad outcome is whether these systems help and empower people or exploit them

Bar-Zeev, president of the XR Guild and veteran innovator of XR tools for several top internet companies, said, “I expect by 2035 extended reality (XR) tools will advance significantly. We will have all-day wearable glasses that can do both AR [augmented reality] and VR. The only question is what we will want to use them for. Smartphones will no longer need screens, and they will have shrunk down to the size of a keychain (if we still remember those, since by then most doors will unlock based on our digital ID). The primary use of XR will be for communications, bringing photorealistic holograms of other people to us, wherever we are. All participants will be able to experience their own augmented spaces without us having to share our 3D environments.

“This will allow us to be more connected, mostly asynchronously. It would be impossible for us to be constantly connected to everyone in every situation, so we will develop social protocols just as we did with texting, allowing us to pop into and out of each other’s lives without interrupting others. The experience will be like having a whole team of people at your back, ready to whisper ideas in your ear based on the snippets of real life you choose to share.

“The current wave of generative AI has taught us that the best AI is made of people, both providing our creative output and filtering the results to be acceptable to people. By 2035, the business models will have shifted to rewarding those creators and value-adders, such that the result looks more like a corporation today. We’ll contribute, get paid for our work, and the AI-as-corporation will produce an unlimited quantity of new value from the combination for everyone else. It will be as if we have cracked the ultimate code for how people can work efficiently together – extract their knowledge and ideas and let the cloud combine these in milliseconds. Still, we can’t forget the human inputs or it’s just another race to the bottom.

“The flip side of this is that what we today might call ‘recommendation AI’ will merge with the above to form a kind of superintelligence that can find the most contextually appropriate content anytime, both virtually and in real life. That tech will form a kind of personal firewall that keeps our personal context private but allows a secure gathering of the best inputs the world can offer without giving away our privacy.

“By 2035, the word ‘metaverse’ will sound as dated as ‘cyberspace’ and ‘information superhighway’ do now; the companies that prefixed their names with ‘meta’ already seem a bit boring. Having achieved the XR and AI trends above, however, we will think of the metaverse quite broadly as the information space we all inhabit. The main shift by 2035 is that we will see the metaverse not as a separate space but as a massive interconnection among 10 billion people. The AR tech and AI fade into the background, and we simply see other people as valued creators and consumers of each other’s work and supporters of each other’s lives and social needs.

“The key difference between the most positive and negative uses of XR, AI and the metaverse is whether the systems are designed to help and empower people or to exploit them. Each of these technologies sees its worst outcome quickly if it is built to benefit companies that monetize their customers. XR becomes exploitive and not socially beneficial. AI builds empires on the backs of real people’s work and deprives them of a living wage as a result. The metaverse becomes a vast and insipid landscape of exploitive opportunities for companies to mine us for information and wealth, while we become enslaved to psychological countermeasures, designed to keep us trapped and subservient to our digital overlords.”

Jonathan Grudin: The menace is an army of AI acting ‘on a scale and speed that outpaces human ability to assess and correct course’

Grudin, affiliate professor of information science at the University of Washington, recently retired as a principal researcher in the Adaptive Systems and Interaction Group at Microsoft, wrote, “Addressing unintended consequences is a primary goal. Many changes are possible, but my best guess is that the best we will do is to address many of the unanticipated negatives tied at least in part to digital technology that emerged and grew in impact over the past decade: malware, invasion of privacy, political manipulation, economic manipulation, declining mental health and growing wealth disparity.

“At the turn of the millennium, the once small, homogeneous, trusting tech community – after recovering from the internet bubble – was ill-equipped to deal with the challenges arising from anonymous bad actors and well-intentioned but imperceptive actors operating at unimagined scale and velocity. Causes and effects are now being understood. It won’t be easy, nor will it be an endeavor that is ever truly finished, but technologists working with legislators and regulators are likely to make substantial progress.

“I foresee a loss of human control in the future. The menace isn’t control by a malevolent AI. It is a Sorcerer’s Apprentice army of feverishly acting brooms with no sorcerer around to stop them. Digital technology enables us to act on a scale and speed that outpaces human ability to assess and correct course. We see it around us already: political leaders unable to govern; CEOs at Facebook, Twitter and elsewhere unable to understand how technologies intended to unite people led to nasty divisiveness and mental health issues; Google and Amazon forced to moderate content on such a scale that often only algorithms can do it, and humans can’t trace individual cases to correct possible errors; consumers reliably manipulated by powerful machine-learning targeting to buy things they don’t need and can’t afford. It is early days, and little that might prevent this from accelerating is on the horizon.

“We will also see an escalation in digital weapons, military spending and arms races. Trillions of dollars, euros, yuan, rubles and pounds are spent, and tens of thousands of engineers deployed, not to combat climate change but to build weaponry that the military may not even want. The United States is spending billions on an AI-driven jet fighter, despite the fact that jet fighter combat has been almost nonexistent for decades with no revival on the horizon.

“Unfortunately, the Ukraine war has exacerbated this tragedy. I believe leaders of major countries have to drop rivalries and address much more important existential threats. That isn’t happening. The cost of a capable armed drone has fallen an order of magnitude every few years. Setting aside military uses, long before 2035 people will be able to buy a cheap drone at a toy store, clip on facial recognition software and a small explosive or poison and send it off to a specified address. No need for a gun permit. I hope someone sees how to combat this.”

Beth Noveck: AI could make governance more equitable and effective; it could raise the overall quality of decision-making

Noveck, director of the Burnes Center for Social Change and Innovation and its partner project, The Governance Lab, wrote, “One of the most significant and positive changes expected to occur by 2035 is the increasing integration of artificial intelligence (AI) into various aspects of our lives, including our institutions of governance and our democracy. With 100 million people trying ChatGPT – a type of AI that uses data from the Internet to spit out well-crafted, human-like responses to questions – between Christmas 2022 and Mardi Gras 2023 (it took the telephone 75 years to reach that level of adoption), we have squarely entered the AI age and are rapidly advancing along the S-curve toward widespread adoption.

“It is much more than ChatGPT. AI comprises a remarkable basket of data-processing technologies that make it easier to generate ideas and information, summarize and translate text and speech, spot patterns and find structure in large amounts of data, simplify complex processes, and coordinate collective action and engagement. When put to good use, these features create new possibilities for how we govern and, above all, how we can participate in our democracy.

“One area in which AI has the potential to make a significant impact is in participatory democracy, that system of government in which citizens are actively involved in the decision-making process. The right AI could help to increase citizen engagement and participation. With the help of AI-powered chatbots, residents could easily access information about important issues, provide feedback, and participate in decision-making processes. We are already witnessing the use of AI to make community deliberation more efficient to manage at scale.

“The right AI could help to improve the quality of decision-making. AI can analyze large amounts of data and identify patterns that humans may not be able to detect. This can help policymakers and participating residents make more informed decisions based on real-time, high-quality data.

“With the right data, AI can also help to predict the outcome of different policy choices and provide recommendations on the best course of action. AI is already being used to make expertise more searchable. Using large-scale data sources, it is becoming easier to find people with useful expertise and match them to opportunities to participate in governance. These techniques, if adopted, could help to ensure more evidence-based decisions.

“The right AI could help to make governance more equitable and effective. New text generation tools make it faster and easier to ‘translate’ legalese into plain English and into other languages, portending new opportunities to simplify interaction between residents and their governments and increase the uptake of benefits to which people are entitled.

“The right AI could help to reduce bias and discrimination. AI can analyze data without being influenced by personal biases or prejudices. This can help to identify areas of inequality and discrimination, which can be addressed through policy changes. For example, AI can help to identify disparities in health care outcomes based on race or gender and provide recommendations for addressing these disparities.

“Finally, AI could help us design the novel, participatory and agile systems of participatory governance that we need to regulate AI. We all know that traditional forms of legislation and regulation are too slow and rigid to respond to fast-changing technology. Instead, we need to invest in new institutions for responding to the challenges of AI and that’s why it is paramount to invest in reimagining democracy using AI.

“But all of this depends upon mitigating significant risks and designing AI that is purpose-built to improve and reimagine our democratic institutions. One of the most concerning changes that could occur by 2035 is the increased use of AI to bolster authoritarianism. With the rise of populist authoritarians and the susceptibility of more people to such authoritarianism as a result of widening economic inequality, fear of climate change and misinformation, there is a risk of digital technologies being abused to the detriment of democracy.

“AI-powered surveillance systems are used by authoritarian governments to monitor and track the activities of citizens. This includes facial recognition technology, social media monitoring and analysis of internet activity. Such systems can be used to identify and suppress dissenting voices, intimidate opposition figures and quell protests.

“AI can be used to create and disseminate propaganda and disinformation. We’ve already seen how bots have been responsible for propagating misinformation during the COVID-19 pandemic and election cycles. Manipulation can involve the use of deepfakes, chatbots and other AI-powered tools to manipulate public opinion and suppress dissent.

“Deepfakes, which are manipulated videos or images such as those found at the Random People Generator, illustrate the potential for spreading disinformation and manipulating public opinion. Deepfakes have the potential to undermine trust in information and institutions and create chaos and confusion. Authoritarian regimes can use these tools to spread false information and discredit opposition figures, journalists and human rights activists.

“AI-powered predictive policing tools can be used by authoritarian regimes to target specific populations for arrest and detention. These tools use data analytics to predict where and when crimes are likely to occur and who is likely to commit them. In the wrong hands, these tools can be used to target ethnic or religious minorities, political dissidents and other vulnerable groups.

“AI-powered social credit systems are already in use in China and could be adopted by other authoritarian regimes. These systems use data analytics to score individuals based on their behavior and can be used to reward or punish citizens based on their social credit score. Such systems can be used to enforce loyalty to the government and suppress dissent.

“AI-powered weapons and military systems can be used to enhance the power of authoritarian regimes. Autonomous weapons systems can be used to target opposition figures or suppress protests. AI-powered cyberattacks can be used to disrupt critical infrastructure or target dissidents.

“It is important to ensure that AI is developed and used in a responsible and ethical manner, and that its potential to be used to bolster authoritarianism is addressed proactively.”

Raymond Perrault: ‘The big challenges are quality of information (veracity and completeness) and the technical feasibility of some services’

Perrault, a distinguished computer scientist at SRI International and director of its AI Center from 1988 to 2017, wrote, “First, some background. I find it useful to describe digital life as falling into three broad, and somewhat overlapping categories:

  • Content: web media, news, movies, music, games (mostly not interactive)
  • Social media (interactive, but with little dependency on automation)
  • Digital services, in two main categories: pure digital (e.g., search, financial, commerce, government) and that which is embedded in the physical world (e.g., health care, transportation, care for disabled and elderly)

“The big challenges are quality of information (veracity and completeness) and technical feasibility of some services, in particular those depending on interaction.

“Most digital services depend on interaction with human users and the physical world that is timely and highly context-dependent. Our main models for this kind of interaction today (search engines, chatbots, LLMs) are all deficient in that they depend on a combination of brittle hand-crafted rules, large amounts of labelled training data, or even larger amounts of unlabeled data, all to produce systems that are either limited in function or insufficiently reliable for critical applications. We have to consider security of infrastructure and transactions, privacy, fairness in algorithmic decision-making, sustainability for high-security transactions (e.g., with blockchain), and fairness to content creators, large and small.

“So, what good may happen by 2035? Hardware, storage, compute and communications costs will continue to decrease, both in cloud and at the edge. Computation will continue to be embedded in more and more devices, but usefulness of devices will continue to be limited by the constraints on interactive systems. Algorithms essential to supporting interaction between humans and computers (and between computers and the physical world) will improve if we can figure out how to combine tacit/implicit reasoning, as done by current deep learning-based language models, with more explicit reasoning, as done by symbolic algorithms.

“We don’t know how to do this, and a significant part of the AI community resists the connection, but I see it as a difficult technical problem to be solved, and I am confident that it will one day be solved. I believe that improving this connection would allow systems to generalize better, be taught general principles by humans (e.g., mathematics), reliably connect to symbolically stored information, and conform to policies and guidance imposed by humans. Doing so would significantly improve the quality of digital assistants and of physical autonomous systems. Ten years is not a bad horizon.

“Better algorithms will not solve the disinformation problem, though they will continue to be able to bring cases of it to the attention of humans. Ultimately this requires improvements in policy and large investments in people, which goes against the incentives of corporations and can only be imposed on them by governments, which are currently incapable of doing so. I don’t see this changing in a decade. Nor will better algorithms supply the investments needed to prevent certain kinds of information services (e.g., local news) from disappearing, or ensure that content creators are treated fairly. Government services could be significantly improved by investment using known technologies, e.g., to support tax collection. The obstacles again are political, not technical.”
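One minimal way to picture the neuro-symbolic coupling Perrault describes – tacit, sometimes-unreliable proposals checked by explicit symbolic reasoning – is a propose-and-verify loop. The functions below are invented stand-ins (a “tacit” guesser that is occasionally off by one, and an exact symbolic checker), not any real system:

```python
import random

def tacit_guess(question: str) -> int:
    """Stand-in for a learned model: fast and fluent, occasionally wrong."""
    a, b = map(int, question.split("+"))
    return a + b + random.choice([0, 0, 0, 1])  # sometimes off by one

def symbolic_check(question: str, answer: int) -> bool:
    """Explicit symbolic reasoning: verify the claim exactly."""
    a, b = map(int, question.split("+"))
    return a + b == answer

def hybrid_answer(question: str, max_tries: int = 20) -> int:
    """Propose-and-verify loop: the tacit module proposes, the
    symbolic module verifies, and only a checked answer is returned."""
    for _ in range(max_tries):
        guess = tacit_guess(question)
        if symbolic_check(question, guess):
            return guess
    raise RuntimeError("no verified answer found")

print(hybrid_answer("17+25"))  # always 42: any returned value has passed the checker
```

The point of the design is that the fluent-but-fallible component never answers alone; its output must pass the explicit check, which is how such a combination could "reliably connect to symbolically stored information" as Perrault hopes.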


Alejandro Pisanty: We are threatened by the scale, speed and lack of friction for bad actors who bully and weaponize information

Pisanty, Internet Hall of Fame member, longtime leader in the Internet Society and professor of internet and information society at the National Autonomous University of Mexico, predicted, “Improvement will come from shrewd management of the Internet’s own way of making known human conduct and motivation and how they act through technology: mass scaling/hyperconnectivity; identity management; trans-jurisdictional arbitrage; barrier lowering; friction reduction; and memory+oblivion.

“As long as these factors are managed for improvement, they can help identify advance warnings of ways in which digital tools may have undesirable side effects. An example: Phishing grows on top of all six factors, while increasing friction is the single intervention that provides the best cost-benefit ratio.

“Improvements come through human connections that cross many borders between and within societies. They shine a light on human rights while providing timely warnings about potential violations, creating an unprecedented mass of human knowledge while getting multiple angles to verify what goes on record and correct misrepresentations (again a case for friction).

“Health outcomes are improved through the whole cycle of information: research, diffusion of health information, prevention, diagnostics and remediation/mitigation considering the gamut of social determination of health.

“Education may improve through scaling, personalization and feedback. There is a fundamental need to make sure the Right to Science becomes embedded in the growth of the Internet and cyberspace in order to align minds and competencies with the age of the technology people are using. Another way of putting this: We need to close the gap – right now, 21st-century technology is in the hands of people and organizations with 19th-century mentalities and competencies, starting with the human body, microbes, electricity, thermodynamics and, of course, computing and its advances.

“The same set of factors that can map what we know of human motivation for improvement of humankind’s condition can help us identify ways to deal with the most harmful trends emerging from the Internet.

“Speed is included in the Internet’s mass scaling and hyperconnectivity, and the social and entrepreneurial pressure for speed leaves little time to analyze and manage the negative effects of speed, such as unintended effects of technology, ways in which it can be abused and, in turn, ways to correct, mitigate or compensate for these effects.

“Human connection and human rights are threatened by the scale, speed and lack of friction in actions such as bullying, disinformation and harassment. The invasion of private life available to governments facilitates repression of the individual, while the speed of Internet expansion makes it easy to identify and attack dissidents with increasingly extensive, disruptive and effective damage that extends into physical and social space.

“A long-term, concerted effort in societies will be necessary to harness the development of tools whose misuse is increasingly easy. The effectiveness of these tools’ incursions remains based both on the tool and on features of the victim or the intermediaries, such as naiveté, lack of knowledge, lack of Internet savvy and the need to juggle too many tasks at the same time between making a living and acquiring dominion over cyber tools.”

Barry K. Chudakov: ‘We are sharing our consciousness with our tools’

Chudakov, founder and principal at Sertain Research, predicted, “One of the best and most beneficial changes that is likely to occur by 2035 in regard to digital technology and humans’ use of digital systems is recognition of the arrival of a digital tool meta-level. We will begin to act on the burgeoning awareness of tool logic and how each tool we pick up and use has a logic designed into it. The important thing about becoming aware of tool logic, and then understanding it: Humans follow the design logic of their tools because we are not only adopters, we are adapters. That is, we adapt our thinking and behavior to the tools we use.

“This will come into greater focus between now and 2035 because our technology development – like many other aspects of our lives – will continue to accelerate. With this acceleration humans will use more tools in more ways more often – robots, apps, the metaverse and omniverse, digital twins – than at any other time in human history. If we pay attention as we adopt and adapt, we will see that we bend our perceptions to our tools: When we use a cell phone, it changes how we drive, how we sleep, how we connect or disconnect with others, how we communicate, how we date, etc.

“Another way of looking at this: We have adapted our behaviors to the logic of the tool as we adopted (used) it. With an eye to pattern recognition, we may finally come to see that this is what humans do, what we have always done, from the introduction of various technologies – alphabet, camera, cinema, television, computer, internet, cell phone – to our current deployment of AI, algorithms, digital twins, mirror worlds or omniverse.

“So, what does this mean going forward? With enough instances of designing a meta mirror of what is happening – the digital readout above the process of capturing an image with a digital camera, digital twins and mirror worlds that provide an exact replica of a product, process or environment – we will begin to notice that these technologies all have an adaptive level. At this level, when we engage with the technology, we give up aspects of will, intent, focus, reaction. We can then begin to outline and observe this process in order to inform ourselves, and better arm ourselves against (if that’s what we want) adoption abdication. That is, when we adopt a tool, do we abdicate our awareness, our focus, our intentions?

“We can study and report on how we change and how each new advancing technology both helps us and changes us. We can then make more informed decisions about who we are when we use said tool and adjust our behaviors if necessary. Central to this dynamic is the understanding that we are sharing our consciousness with our tools. They have gotten – and are getting still more – so sophisticated that they can sense what we want and can adapt to how we think; they are extensions of our cognition and intention. As we go from adapters to co-creators, the demand on humans increases to become more fully conscious. It remains to be seen how we will answer that demand. …

“Of course, there is more to worry about at the level of broad systems. By the year 2035, Ian Bremmer, among others, believes the most harmful or menacing changes that are likely to occur in digital technology and humans’ use of digital systems will focus on AI and algorithms. He believes this because we can already see that these two technological advances together have made social media a haven for right-wing conspiracists, anarchic populists and various disrupters to democratic norms.

“I would not want to minimize Bremmer’s concerns; I believe them to be real. But I would also say they are insufficient. Democracies and governments generally were hierarchical constructs which followed the logic of alphabets; AI and algorithms are asymmetric technologies which follow a fundamentally different logic than the alphabetic construct of democratic norms, or even the top-down dictator style of Russia or China. So, while I agree with Bremmer’s assessment that AI and algorithms may threaten existing democratic structures, they, and the social media of which they are engines, are designed differently than the alphabetic order which gave us kings and queens, presidents and prime ministers.

“The old hierarchy was dictatorial, top-down, with most people except those at the very top beholden to, and expected to bow to the wishes of, the monarch or leader at the top. Social media and AI or algorithms have no top or bottom. They are broad horizontally and shallow vertically, whereas democratic and dictatorial hierarchies are narrow horizontally and deep vertically.

“This structural difference is the cause of Bremmer’s alarm and must be understood and acted upon before we can salvage democracy from the ravages of populism and disinformation. Here is the rub: Until we begin to pay attention to the logic of the tools we adopt, we will use them and then be at the mercy of the logic we have adopted. A thoroughly untenable situation.

“We must inculcate, teach, debate and come to understand the logic of our tools and see how they build and destroy our social institutions. These social institutions reward and punish, depending on where you sit within the structure of the institution. Slavery was once considered a democratic right; it was championed by many American Southerners and was an economic engine of the South before the Civil War. America then called itself a democracy, but it was not truly democratic – especially for those enslaved.

“To make democracy more equitable for all, we must come to understand the logic of the tools we use and how they create the social institutions we call governments. We must insist upon transparency in the technologies we adopt so we can see and fully appreciate how these technologies can change our perceptions and values.”

Marcel Fafchamps: The next wave of technology will give additional significant advantages to authoritarians and monopolists

Fafchamps, professor of economics and senior fellow at the Center on Democracy, Development and the Rule of Law at Stanford University, wrote, “The single most beneficial change will be the spread of already existing internet-based services to billions of people across the world, as they gradually replace their basic phones with smartphones, and as connection speed increases over time and across space. IT services to assist farmers and businesses are the most promising in terms of economic growth, together with access to finance through mobile money technology. I also expect IT-based trade to expand to all parts of the world, especially spearheaded by Alibaba.

“The second most beneficial change I anticipate is the rapid expansion of IT-based health care, especially through phone-based and AI-based diagnostics and patient interviews. The largest benefits by far will be achieved in developing countries where access to medically provided health care is limited and costly. AI-based technology provided through phones could massively increase provision and improve health at a time when the population of many currently low- or middle-income countries (LMIC) is rapidly aging.

“The third most beneficial change I anticipate is in IT-connected drone services to facilitate dispatch to wholesale and local retail outlets, and to distribute medical drugs to local health centers and collect from them samples for health care testing. I do not expect a significant expansion of drone deliveries to individuals, except in some special cases (e.g., very isolated locations or extreme urgency in the delivery of medical drugs and samples).

“The most menacing change I expect is in terms of the political control of the population. Autocracies and democracies alike are increasingly using IT technology to collect data on individuals, civic organizations and firms. While this data collection is capable of delivering social and economic benefits to many (e.g., in terms of fighting organized crime, tax evasion and financial and fiscal fraud), the potential for misuse is enormous, as evidenced for instance by the social credit system put in place in China. Some countries – and most prominently, the European Union – have sought to introduce safeguards against abuse. But without serious and persistent coordination with the United States, these efforts will ultimately fail given the dominance of U.S.-protected GAFAM (Google, Apple, Facebook, Amazon and Microsoft) in all countries except China, and to a lesser extent, Russia.

“The world urgently needs Conference of the Parties (COP) meetings on international IT to address this existential issue for democracy, civil rights and individual freedom within the limits of the law. Whether this can be done is doubtful, given that democracies themselves are responsible for developing a large share of these systems of data collection and control on their own population, as well as on that of others (e.g., politicians, journalists, civil rights activists, researchers, research and development firms).

“The second-most worrying change is the continued privatization of the internet at all levels: cloud, servers, underwater transcontinental lines, last-mile delivery and content. The internet was initially developed as free for all. But this will no longer be the case in 2035, and probably well before that. I do not see any solution that would be able to counterbalance this trend, short of a massive, coordinated effort among leading countries. But I doubt that this coordination will happen, given the enormous financial benefits gained from appropriating the internet, or at least large chunks of it. This appropriation of the internet will generate very large monopolistic gains that current antitrust regulation is powerless to address, as shown repeatedly in U.S. courts and in EU efforts against GAFAM firms. In some countries, this appropriation will be combined with heavy state control, further reinforcing totalitarian tendencies.

“The third-most worrying change is the further expansion of unbridled social media and the disappearance of curated sources of news (e.g., print newspapers, radio and TV). In the past, the world has already experienced the damage caused by fake news and gossip-based information (e.g., through tabloid newspapers), but never to the extent made possible by social media. Efforts to date to moderate content on social media platforms have largely been ineffective as a result of multiple mutually reinforcing causes: the lack of coordination between competing social media platforms (e.g., Facebook, Twitter, WhatsApp, TikTok); the partisan interests of specific political parties and actors; and the technical difficulty of the task.

“These failures have been particularly disturbing in LMIC [low- and middle-income] countries, where moderation in local languages is largely deficient (e.g., hate speech across ethnic lines in Ethiopia; hate speech toward women in South Asia). The damage that social media is causing to most democracies is existential. By creating silos and echo chambers, social media is eroding the trust that different groups and populations feel toward each other, and this increases the likelihood of civil unrest and populist voting. Furthermore, social media has encouraged the victimization of individuals who do not conform to the views of other groups in a way that does not allow the accused to defend themselves. This is already provoking a massive regression in the rule of law and the rights of individuals to defend themselves against accusations. I do not see any signs suggesting a desire by GAFAM firms or by governments to address this existential problem for the rule of law.

“To summarize, the first wave of IT technology did increase individual freedom in many ways (e.g., accessing cultural content that previously required significant financial outlays; facilitating international communication, trade and travel; making new friends and identifying partners; and allowing isolated communities to find each other to converse and socialize).

“The next wave of IT technology will be more focused on political control and on the exploitation of commercial and monopolistic advantage, thereby favoring totalitarian tendencies and the erosion of the rights of the defense and of the whole system of criminal and civil justice. I am not optimistic at this point, especially given the poor state of U.S. politics on both sides of the political spectrum.”

David Weinberger: ‘These new machines will give us more control over our world and lives, but with our understanding lagging, often terminally’

Weinberger, senior researcher at Harvard’s Berkman Center for Internet and Society, wrote, “The Internet and machine learning have removed the safe but artificial boundaries around what we can know and do, plunging us into a chaos that is certainly creative and human but also dangerous and attractive to governments and corporations desperate to control more than ever. It also means that the lines between predicting and hoping or fearing are impossibly blurred.

“Nevertheless: Right now, large language models (LLMs) of the sort used by ChatGPT know more about our use of language than any entity ever has, but they know absolutely nothing about the world. (I’m using ‘know’ sloppily here.) In the relatively short term, they’ll likely be combined with systems that have some claim to actual knowledge, so that the next generation of AI chatbots will hallucinate less and be more reliable. As this progresses, it will likely disrupt both our traditional and Net-based knowledge ecosystems.

“With luck, the new knowledge ecosystem is going to have us asking whether knowing with brains and books hasn’t been one long dark age. I mean, we did spectacularly well with our limited tools, so good job, fellow humans! But we did well according to a definition of knowledge tuned to our limitations.

“As machine learning begins to influence how we think about and experience our lives and world, our confidence in general rules and laws as the high mark of knowledge may fade, enabling us to pay more attention to the particulars in every situation. This may open up new ways of thinking about morality in the West and could be a welcome opportunity for the feminist ethics of care to become more known and heeded as a way of thinking about what we ought to do.

“Much of the online world may be represented by agents: software that presents itself as a digital ‘person’ that can be addressed in conversation and can represent a body of knowledge, an organization, a place, a movement. Agents are likely to have (i.e., be given) points of view and interests. What will happen when these agents have conversations with one another is interesting to contemplate.

“We are living through an initial burst of energy and progress in areas that until recently were too complex to even imagine we could tackle.

“These new machines will give us more control over our world and lives, but with our understanding lagging, often terminally. This is an opportunity for us to come face to face with how small a light our mortal intelligence casts. But it is also an overwhelming temptation for self-centered corporations, governments and individuals to exploit that power and use it against us. I imagine that both of those things will happen.

“Second, we are heading into a second generation that has lived much of its life on the Internet. For all of its many faults – a central topic of our time – being on the Internet has also shown us the benefits and truth of living in creative chaos. We have done so much so quickly with it that we now assume connected people and groups can undertake challenges that before were too remote even to consider. The collaborative culture of the Internet – yes, always unfair and often cruel – has proven the creative power of unmanaged connective networks.

“All of these developments make predicting the future impossible – beyond, perhaps, saying that the chaos that these two technologies rely on and unleash is only going to become more unruly and unpredictable, driving relentlessly in multiple and contradictory directions. In short: I don’t know.”

Calton Pu: The digital divide will be between those who think critically and those who do not

Pu, co-director of the Center for Experimental Research in Computer Systems at Georgia Institute of Technology, wrote, “Digital life has been, and will continue to be, enriched by AI and machine learning (ML) techniques and tools. A recent example is ChatGPT, a modern chatbot developed by OpenAI and released in 2022 that is passing the Turing Test every day.

“Similar to the contributions of robotics in the physical world (e.g., manufacturing), future AI/ML tools will relieve the stress from simple and repetitive tasks in the digital world (and displace some workers). The combination of physical automation and AI/ML tools would and should lead to concrete improvements in autonomous driving, which has stalled in recent years despite massive investments on the order of many billions of dollars. One of the major roadblocks has been the gold-standard ML practice of training static models/classifiers that are insensitive to evolutionary changes over time. These static models suffer from knowledge obsolescence, in a way similar to human aging. There is an incipient recognition of the limitations of the current practice of constantly retraining ML models to bypass knowledge obsolescence manually (and temporarily). Hopefully, the next generation of ML tools will overcome knowledge obsolescence in a sustainable way, achieving what humans could not: staying young forever.

“Well, Toto, we’re not in Kansas anymore. When considering future issues in digital life, we can learn a lot from the impact of robotics in the physical world. For example, Boston Dynamics pledged to ‘not weaponize’ its robots in October 2022. This is remarkable, since the company was founded with, and worked on, defense contracts for many years before its acquisition by primarily non-defense companies. That pledge is an example of a moral dilemma over what is right or wrong. Technologists usually remain amoral. By not taking sides, they avoid the dilemma and let both sides (good and evil) utilize the technology as they see fit. This amorality works quite well, since good technology always has many applications across the entire spectrum from good to evil to the large gray areas in between.

“Microsoft Tay, a dynamically learning chatbot released in 2016, started to send inflammatory and racist speech, causing its shutdown the same day. Learning from this lesson, ChatGPT uses OpenAI’s moderation API to filter out racist and sexist prompts. Hypothetically, one could imagine OpenAI making a pledge to ‘not weaponize’ ChatGPT for propaganda purposes. Regardless of such pledges, any good digital technology such as ChatGPT could be used for any purpose (e.g., generating misinformation and fake news) if it is stolen or simply released into the wild.

“The power of AI/ML tools, particularly if they become sustainable and remain amoral, will be greater for both good and evil. We have seen significant harm from misinformation on the COVID-19 pandemic, dubbed an ‘infodemic’ by the World Health Organization. More generally, misinformation is being deployed in political propaganda in every election and every war. It is easy to imagine the depth, breadth and constant renewal of such propaganda and infodemic, as well as their impact, all growing with the capabilities of future AI/ML tools used by powerful companies and governments.

“Assuming that the AI/ML technologies will advance beyond the current static models, the impact of sustainable AI/ML tools in the future of digital life will be significant and fundamental, perhaps in a greater role than industrial robots have in modern manufacturing. For those who are going to use those tools to generate content and increase their influence on people, that prospect will be very exciting. However, we have to be concerned for people who are going to consume such content as part of their digital life without thinking critically.

“The great digital divide is not going to be between the haves and have-nots of digital toys and information. With more than 6 billion smartphones in the world (estimated in 2022), an overwhelming majority of the population already has access to and participates in the digital world. The digital divide in 2035 will be between those who think critically and those who believe misinformation and propaganda. This is a big challenge for democracy, a system in which we thought more information would be unquestionably beneficial. In a Brave New Digital World, a majority can be swayed by the misuse of amoral technological tools.”

Dmitri Williams: If economic growth is prioritized over well-being, the results will not be pretty

Williams, professor of technology and society at the University of Southern California, wrote, “When I think about the last 30 years of change in our lives due to technology, what stands out to me is the rise in convenience and the decline of traditional face-to-face settings. From entertainment to social gatherings, we’ve been given the opportunity to have things cheaper, faster and higher-quality in our private spaces, and we’ve largely taken it.

“For example, 30 years ago, you couldn’t have a very good movie-watching experience in your own home, looking at a small CRT screen in standard definition, and what you could watch wasn’t the latest and greatest. So, you took a hit to convenience and went to the movie theater, giving up personal space and privacy for the benefits of better technology, better content and a more communal experience. Today, that’s flipped. We can be on our couches and watch amazing content, with amazing screens and sounds, and never have to get in a car.

“That’s a microcosm of just about every aspect of our lives – everything is easier now, from work over high-speed connections to playing video games. We can do it all from our homes. That’s an amazing reduction in costs and friction in our business and private lives. And the social side of that is access to an amazing breadth of people and ideas. Without moving from our couch, chair or bed, we can connect with others all over the world from a wide range of backgrounds, cultures and interests.

“Ironically, though, we feel disconnected, and I think that’s because we evolved as physical creatures who thrive in the presence of others. We atrophy without that physical presence. We have an innate need to connect, and the in-person piece is deeply tied to our natures. As we move physically more and more away from each other – or focus on far-off content even when physically present – our well-being suffers. I can’t think of anything more depressing than seeing a group of young friends together but looking at their phones rather than each other’s faces. Watching well-being trends over time, even before the pandemic, suggests an epidemic of loneliness.

“As we look ahead, those trends are going to continue. The technology is getting faster, cheaper and higher-quality, and the entertainment and business industries are delivering us better and better content and tools. AI and blockchain technologies will keep pushing that trend forward.

“The part that I’m optimistic about is best seen in the nascent rise of commercial-level AR and VR. I think VR is niche and will continue to be, not because of its technological limitations, but because it doesn’t socially connect us well. Humans like eye contact, and a thing on your face prevents it. No one is going to want to live in a physically closed-off metaverse. It’s just not how we’re wired. The feeling of presence is extremely limited, and the technical advances in the next 10 years are likely to make the devices better and more comfortable, but not change that basic dynamic.

“In contrast, the potential for AR and other mixed reality devices is much more exciting because of its potential for social interactions. Whereas all of these technical advances have tended to push us physically away from each other, AR has the potential to help us re-engage. It offers a layer on top of the physical space that we’ve largely abandoned, and so it will also give us more of an incentive to be face-to-face again. I believe this will have some negative consequences around attention, privacy and capitalism invading our lives just that much more, but overall, it will be a net positive for our social lives in the long run. People are always the most interesting form of content, and layering technologies have the potential to empower new forms of connection around interests.

“In cities especially, people long for the equivalent of the icebreakers we use in our classrooms. They seek each other online based on shared interests, and we see a rise in throwback formats like board games and in-person meetups. The demand for others never abated, but we’ve been highly distracted by shiny, convenient things. People are hungry for real connection, and technologies like AR have the potential to deliver that and so to mitigate or reverse some of the well-being declines we’ve seen over the past 10 to 20 years. I expect AR glasses to go through some hype and disillusionment, but then to take off once commercial devices are socially acceptable and cheap enough. I expect that the initial faltering steps will take place over the next three years and then mass-market devices will start to take off and accelerate after that.

“Here’s my simple take: I think AR will tilt our heads up from our phones back to each other’s faces. It won’t all be wonderful because people are messy and capitalism tends to eat into relationships and values, but that tilt alone will be a very positive thing.

“What I worry most about in regard to technology is capitalism. Technology will continue to create value and save time, but the benefits and costs will fall in disproportionate ways across society.

“Everyone is rightly focused on the promise and challenges of AI at the moment. This is a conversation that will play out very differently around the world. Here in the United States, we know that business will use AI to maximize its profit and that our institutions won’t privilege workers or well-being over those profits. And so we can expect to see the benefits of AI largely accrue to corporations and their shareholders. Think of the net gain that AI could provide – we can have more output with less effort. That should be a good thing, as more goods and capital will be created and so should improve everyone’s lot in life. I think it will likely be a net positive in terms of GDP and life expectancy, but in the U.S., those gains will be minimal compared to what they could and should be.

“Last year I took a sabbatical and visited 45 countries around the world. I saw wealthy and poor nations – places where technology abounds and where it is rare. What struck me the most was the difference in values and how that plays out in promoting the well-being of everyday people. The United States is comparatively one of the worst places in the world at prioritizing well-being over economic growth and the accumulation of wealth by a minority (yes, some countries are worse still). That’s not changing any time soon, and so in that context, I look at AI and ask what kind of impacts it’s likely to have in the next 10 years. It’s not pretty.

“Let’s put aside our headlines about students plagiarizing papers and think about the job displacements that are coming in every industry. When the railroads first crossed the U.S., we rightly cheered, but we also didn’t talk a lot about what happened to the people who worked for the Pony Express. Whether it’s the truck driver replaced by autonomous vehicles, the personal trainer replaced by an AI agent, or the stockbroker who’s no longer as valuable as some code, AI is going to bring creative destruction to nearly every industry. There will be a lot of losers.”

Russell Neuman: Let’s try a system of ‘intelligent privacy’ that would compensate users for their data

Neuman, professor of media technology at New York University, wrote, “One of my largest concerns is for the future of privacy. It’s not just that our capacity for privacy will be eroded – of course it will be, given the interests of governments and private enterprise. My concern is about a lost opportunity that our digital technologies might otherwise provide for: what I like to call ‘intelligent privacy.’

“Here’s an idea. You are well aware that your personal information is a valuable commodity for the social media and online marketing giants like Google, Facebook, Amazon and Twitter. Think about the rough numbers involved – Internet advertising in the U.S. for 2022 is about $200 billion. The number of active online users is about 200 million. $200 billion divided by 200 million. So, your personal information is worth about $1,000. Every year. Not bad. The idea is: Why not get a piece of the action for yourself? It’s your data. But don’t be greedy. Offer to split it with the Internet biggies 50-50. $500 for you, $500 for those guys to cover their expenses.

“Thank you very much. But the Tech Giants are not going to volunteer to initiate this sort of thing. Why would they? So there has to be a third party to intervene between you and Big Tech. There are two candidates for this – first, the government, and second, some new private for-profit or not-for-profit. Let’s take the government option first.

“There seems to be an increasing appetite for ‘reining in Big Tech’ in the United States on Capitol Hill. It even seems to have some bipartisan support, a rarity these days. But legislation is likely to take the form of an antitrust policy to prevent competition-limiting corporate behaviors. Actually, proactively entering the marketplace to require some form of profit sharing is way beyond current-day congressional bravado. The closest Congress has come so far is a bill called DASHBOARD (an acronym for Designing Accounting Safeguards to Help Broaden Oversight and Regulations on Data), which would require major online players to explain to consumers and financial regulators what data they are collecting from online users and how it is being monetized. The Silicon Valley lobbyists squawked loudly and so far the bill has gone nowhere. And all that was proposed in that case was to make some data public. Dramatic federal intervention into this marketplace is simply not in the cards.

“So, what about nongovernmental third parties? There are literally dozens of small for-profit startups and not-for-profits in the online privacy space. Several alternative browsers and search engines such as DuckDuckGo, Neeva and Brave offer privacy-protected browsing. But as for-profits, they often end up substituting their own targeted ads (presumably without sharing information) for what you would otherwise see on a Google search or a Facebook feed.

“Brave is experimenting with rewarding users for their attention with cryptocurrency tokens called BATs for Basic Attention Tokens. This is a step in the right direction. But so far, usage is tiny, distribution is limited to affiliated players, and the crypto value bubble complicates the incentives.

“So, the bottom line here is that Big Tech still controls the golden goose. These startups want to grab a piece of the action for themselves and try to attract customers with ‘privacy-protection’ marketing rhetoric and with small, tokenized incentives which are more like a frequent flyer program than real money. How would a serious piece-of-the-action system for consumers work? It would have to allow a privacy-conscious user to opt out entirely. No personal information would be extracted. There’s no profit there, so no profit sharing. So, in that sense, those users ‘pay’ for the privilege of using these platforms anonymously.

“YouTube already offers an ad-free service for a fee under a similar arrangement. For those people open to being targeted by eager advertisers, there would be an intelligent privacy interface between users and the online players. It might function like a VPN [virtual private network] or proxy server, but one which intelligently negotiates a price. ‘My gal spent $8,500 on online goods and services last year,’ the interface notes. ‘She’s a very promising customer. What will you bid for her attention this month?’

“Programmatic online advertising already works this way. It is all real-time algorithmic negotiation of payments for ad exposures. A Supply Side Platform gathers data about users based on their online behavior and geography and electronically offers their ‘attention’ to an Ad Exchange. At the Ad Exchange, advertisers on a Demand Side Platform have 10 milliseconds to respond to an offer. The Ad Exchange algorithmically accepts the highest bid for attention. Deal done in a flash. Tens of thousands of deals every second. It’s a $100 billion marketplace.”
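Neuman’s back-of-the-envelope arithmetic and the exchange auction he describes can be sketched in a few lines. This is a toy illustration only; the bidder names and bidding formulas below are hypothetical assumptions, not a real ad-exchange API.

```python
# Neuman's rough per-user value: ~$200B of U.S. internet ad spend
# spread across ~200M active online users.
ad_spend = 200_000_000_000      # dollars per year (his 2022 figure)
active_users = 200_000_000
per_user_value = ad_spend / active_users
print(per_user_value)           # → 1000.0 dollars per user per year

# Simplified real-time auction: a supply-side platform offers one
# user's 'attention'; demand-side bidders respond; highest bid wins.
def run_auction(user_profile, bidders):
    """Collect one bid per bidder and accept the highest."""
    bids = [(bid_fn(user_profile), name) for name, bid_fn in bidders]
    price, winner = max(bids)   # tuples compare by price first
    return winner, price

# Hypothetical demand-side bidders keyed on the user's past spending
# (echoing the '$8,500 spent last year' example above).
bidders = [
    ("retailer", lambda u: 0.002 * u["annual_spend"]),
    ("travel_site", lambda u: 0.0015 * u["annual_spend"]),
]
winner, price = run_auction({"annual_spend": 8_500}, bidders)
print(winner, price)            # → retailer 17.0
```

In a real exchange this negotiation happens algorithmically in milliseconds, thousands of times per second, which is what makes any per-user profit-sharing scheme a question of protocol design rather than manual consent.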

Maggie Jackson: Complacency and market-driven incentives keep people from focusing on the problems AI can cause

Jackson, award-winning journalist, social critic and author, wrote, “The most critical beneficial change in digital life now on the horizon is the rise of uncertain AI. In the six decades of its existence, AI has been designed to achieve its objectives, however it can. The field’s overarching mission has been to create systems that can learn how to play a game, spot a tumor, drive a car, etc., on their own as well as or better than humans can do so.

“This foundational definition of AI largely reflects a centuries-old ideal of intelligence as the realization of one’s goals. However, the field’s erratic yet increasingly impressive success in building objective-driven AI has created a widening and dangerous gap between AI and human needs. Almost invariably, an initial objective set by a designer will deviate from a human’s needs, preferences and well-being come ‘run-time.’

“Nick Bostrom’s once-seemingly laughable example of a super-intelligent AI system tasked with making paper clips, which then takes over the world in pursuit of this goal, has become a plausible illustration of the unstoppability and risk of reward-centric AI. Already, the ‘alignment problem’ can be seen in social media platforms designed to bolster user time online by stoking extremist content. As AI grows more powerful, the risks of models that have a cataclysmic effect on humanity dramatically increase.

“Reimagining AI to be uncertain literally could save humanity. And the good news is that a growing number of the world’s leading AI thinkers and makers are endeavoring to make this change a reality. En route to achieving its goals, AI traditionally has been designed to dispatch unforeseen obstacles, such as something in its path. But what AI visionary Stuart Russell calls ‘human-compatible AI’ is instead designed to be uncertain about its goals, and so to be open and adaptable to multiple possible scenarios.

“An uncertain model or robot will ask a human how it should fetch coffee or show multiple possible candidate peptides for creating a new antibiotic, instead of pursuing the single best option befitting its initial marching orders.

“The movement to make AI uncertain is just gaining ground and is largely experimental. It remains to be seen whether tech behemoths will pick up on this radical change. But I believe this shift is gaining traction, and none too soon. Uncertain AI is the most heartening trend in technology that I have seen in a quarter-century of writing about the field.

“One of the most menacing, if not the most menacing, changes likely to occur in digital life in the next decade is a deepening complacency about technology. If first and foremost we cannot retain a clear-eyed, thoughtful and constant skepticism about these tools, we cannot create or choose technologies that help us flourish, attain wisdom and forge mutual social understanding. Ultimately, complacent attitudes toward digital tools blind us to the actual power that we do have to shape our futures in a tech-centric era.

“My concerns are three-part: First, as technology becomes embedded in daily life, it typically is less explicitly considered and less seen, just as we hardly give a thought to electric light. The recent Pew report on concerns about the increasing use of AI in daily life shows that 46% of Americans have equal parts excitement and concern over this trend, and 40% are more concerned than excited. But only 30% correctly and fully identified where AI is being used, and nearly half think they do not regularly interact with AI, a level of apartness that is implausible given the ubiquity of smartphones and of AI itself. AI, in a nutshell, is not fully seen. As well, it’s alarming that the most vulnerable members of society – people who are less well educated, have lower incomes and/or are elderly – demonstrate the least awareness of AI’s presence in daily life and show the least concern about this trend.

“Second, mounting evidence shows that the use of technology itself easily can lead to habits of thought that breed intellectual complacency. Not only do we spend less time adding to our memory stores in a high-tech era, but ‘using the internet may disrupt the natural functioning of memory,’ according to researcher Benjamin Storm. Memory-making is less activated, data is decontextualized and devices erode time for rest and sleep, further disrupting memory processing. As well, device use nurtures the assumption that we can know at a glance. After even a brief online search, information seekers tend to think they know more than they actually do, even when they have learned nothing from a search, studies show. Despite its dramatic benefits, technology therefore can seed a cycle of enchantment, gullibility and hubris that then produces more dependence on technology.

“Finally, the market-driven nature of technology today muffles whatever concerns people voice about devices. Consider the case of robot caregivers. Although a majority of Americans and people in EU countries say they would not want to use robot care for themselves or family members, such robots increasingly are sold on the market with little training, caveats or even safety features. Until recently, older people were not consulted in the design and production of robot caregivers built for seniors. Given the highly opaque, tone-deaf and isolationist nature of big-tech social media and AI companies, I am concerned that whatever skepticism people may have about technology will be ignored by its makers.”

Louis Rosenberg: The boundary between the physical and digital worlds will vanish and tech platforms will know everything we do and say

Rosenberg, CEO and chief scientist at Unanimous AI, predicted, “As I look ahead to the year 2035, it’s clear to me that certain digital technologies will have an oversized impact on the human condition, affecting each of us as individuals and all of us as a society. These technologies will almost certainly include artificial intelligence, immersive media (VR and AR), robotics (service and humanoid robots) and powerful advancements in human-computer interaction (HCI) technologies. At the same time, blockchain technologies will continue to advance, likely enabling us to have persistent identity and transferrable assets across our digital lives, supporting many of the coming changes in AI, VR, AR and HCI.

“So, what are the best and most beneficial changes that are likely to occur? As a technologist who has worked on all of these technologies for over 30 years, I believe these disciplines are about to undergo a revolution driving a fundamental shift in how we interact with digital systems. For the last 60 years or so, the interface between humans and our digital lives has been through keyboards, mice and touchscreens to provide input and the display of flat media (text, images, videos) as output. By 2035, this will no longer be the dominant model. Our primary means of input will be through natural dialog enabled by conversational AI and our primary means of output will be rapidly transitioning to immersive experiences enabled through mixed-reality eyewear that brings compelling virtual content into our physical surroundings.

“I look at this as a fundamental shift from the current age of ‘flat computing’ to an exciting new age of ‘natural computing.’ That’s because by 2035, human interface technologies – both input and output – will finally allow us to interact with digital systems the way our brains evolved to engage our world: through natural experiences in our immediate surroundings via mixed reality and through natural human language, conversational AI.

“As a result, by 2035 and beyond, the digital world will become a magical layer that is seamlessly merged with our physical world. And when that happens, we will look back at the days when people engaged their digital lives by poking their fingers at little screens in their hands as quaint and primitive. We will realize that digital content should be all around us and should be as easy to interact with as our physical surroundings. At the same time, many physical artifacts (like service robots, humanoid robots and self-driving cars) will come alive as digital assets that we engage through verbal dialog and manual gestures. As a consequence, by the end of the 2030s the differences will largely disappear in our minds between what is physical and what is digital.

“I strongly believe that by 2035 our society will be transitioning from the current age of ‘flat computing’ to an exciting new age of ‘natural computing.’ This transition will move us away from traditional forms of digital content (text, images, video) that we engage today with mice, keyboards and touchscreens to a new age of immersive media (virtual and augmented reality) that we will engage mostly through conversational dialog and natural physical interactions.

“While this will empower us to interact with digital systems as intuitively as we interact with the physical world, there are many significant dangers this transition will bring. For example, the merger of the digital world and the physical world will mean that large platforms will be able to track all aspects of our daily lives – where we are, who we are with, what we look at, even what we pick up off store shelves. They will also track our facial expressions, vocal inflections, manual gestures, posture, gait and mannerisms (which will be used to infer our emotions throughout our daily lives). In other words, by 2035 the blurring of the boundaries between the physical and digital worlds will mean (unless restricted through regulation) that large technology platforms will know everything we do and say during our daily lives and will monitor how we feel during thousands of interactions we have each day.

“This is dangerous and it’s only half the problem. The other half of the problem is that conversational AI systems will be able to influence us through natural language. Unless strictly regulated, targeted influence campaigns will be enacted through conversational agents that have a persuasive agenda. These conversational agents could engage us through virtual avatars (virtual spokespeople) or through physical humanoid robots. Either way, when digital systems engage us through interactive dialog, they could be used as extremely persuasive tools for driving influence. For specific examples, I point you to a white paper, “From Marketing to Mind Control,” written in 2022 for the Future of Marketing Institute and to the 2022 IEEE paper “Marketing in the Metaverse and the Need for Consumer Protections.”

Wendy Grossman: Tech giants are losing ground, making room for new approaches that don’t involve privacy-invasive surveillance of the public

Grossman, a UK-based science writer, author of “net.wars” and founder of the magazine The Skeptic, wrote, “For the moment, it seems clear that the giants that have dominated the technology sector since around 2010 are losing ground as advertisers respond to social and financial pressures, as well as regulatory activity and antitrust actions. This is a good thing, as it opens up possibilities for new approaches that don’t depend on constant, privacy-invasive surveillance of Internet users.

“With any luck, that change in approach should spill over into the physical world to create smart devices that serve us rather than the companies that make them. A good example at the moment is smart speakers, whose business models are failing. Amazon is finding that consumers don’t want to use Alexa to execute purchases; Google is cutting back the division that makes Google Home.

“Similarly, the ongoing relentless succession of cyberattacks on user data might lead businesses and governments to recognize that large pools of data are a liability, and to adopt structures that put us in control of our own data and allow us to decide whom to share it with. In the UK, Mydex and other providers of personal data stores have long been pursuing this approach. …

“Many of the biggest concerns about life until 2035 are not specific to the technology sector: the impact of climate change and the disruption and migration it is already beginning to bring; continued inequality and the likely increase in old age poverty as Generation Rent reaches retirement age without the means to secure housing; the ongoing overall ill-health (cardiovascular disease, diabetes, dementia) that is and will be part of the legacy of the SARS-CoV-2 pandemic. These are sweeping problems that will affect all countries, and while technology may help ameliorate the effects, it can’t stop them. Many people never recovered from the 2008 financial crisis (see the movie ‘Nomadland’); the same will be true for those worst affected by the pandemic.

“In the short term, the 2023 explosion of new COVID-19 cases expected in China will derail parts of the technology industry; there may be long-lasting effects.

“I am particularly concerned about the increasing dependence on systems that require electrical power to work in all aspects of life. We rarely think in terms of providing alternative systems that we can turn to when the main ones go down. I’m thinking particularly of those pushing to get rid of cash in favor of electronic payments of all types, but there are other examples.

“If allowed to continue, the reckless adoption of new technology by government, law enforcement and private companies without public debate or consent will create a truly dangerous state. I’m thinking in particular of live facial recognition, which just a few weeks ago was used by MSG Entertainment to locate and remove lawyers attending concerts and shows at its venues because said lawyers happened to work for firms that are involved in litigation against MSG. (The lawyers themselves were not involved.) This way lies truly disturbing and highly personalized discrimination. Even more dangerous, the San Francisco Police Department has proposed to the city council that it should be allowed to deploy robots with the ability to maim and kill humans – only for use in the most serious situations, of course.

“Airports provide a good guide to the worst of what our world could become. In a piece I wrote in October 2022, I outline what the airports of the future, being built today without notice or discussion, will be like: all-surveillance all the time, with little option to ask questions or seek redress for errors. Airports – and the Disney parks – provide a close look at how ‘smart cities’ are likely to develop.

“I would like to hope that decentralized sites and technologies like Mastodon, Discord and others will change the dominant paradigm for the better – but the history of cooperatives tends to show that there will always be a few big players. Email provides a good example. While it is still true that anyone can run an email server, it is no longer true that they can do so as an equal player in the ecosystem. Instead, it is increasingly difficult for a small server to get its connections accepted by the tiny handful of big players. Accordingly, the most likely outcome for Mastodon will be a small handful of giant instances, and a long, long tail of small ones that find it increasingly difficult to function. The new giants created in these federated systems will still find it hard to charge or sell ads. They will have to build their business models on ancillary services for which the social media function provides lock-in, just as Gmail earns Google nothing directly today but underpins people’s use of its ad-supported search engine, maps, Android phones, etc. This provides Google with a social graph it can use in its advertising business.”

Alf Rehn: The AI turf war will pit governments trying to control bad actors against bad actors trying to weaponize AI tools

Rehn, professor of innovation, design and management at the University of Southern Denmark, wrote, “Humans and technology rarely develop in perfect sync, but we will see them catching up. We’ve lived through a period in which digital tech has developed at speeds we’ve struggled to keep up with; there is too much content, too much noise and too much disinformation.

“Slowly but surely, we’re getting the tools to regain some semblance of control. AI used to be the monster under our beds, but now we’re seeing how we might make it our obedient dog (although some still fear it might be a cat in disguise). As new tools are released, we’re increasingly seeing people using them for fearless experimentation, finding ways to bend ever more powerful technologies to human wills. Having feared that AI and other technologies would take our jobs and make us obsolete, humans are finding ever more ways to elevate themselves with technology, making digital wrangling not just the hobby of a few forerunners but a new folk culture.

“There was a time when using electricity was something you could only do after serious education and a long apprenticeship. Today, we all know how a plug works. The same is happening in the digital space. Increasingly, digital technologies are being made so easy to use and manipulate that they become the modern equivalent of electricity. As every man, woman and child learns how to use an AI to solve a problem, digital technology becomes ever less scary and more and more the equivalent of building with Lego blocks. In 2035 the limits are not technological, but creative and communicative. If you can dream it and articulate it, digital technology can build it, improve upon it and help you transcend the limitations you thought you had.

“That is, unless a corporate structure blocks you.

“Spider-Man’s Uncle Ben said, ‘With great power comes great responsibility.’ What happens when we all gain great power? The fact that some of us will act irresponsibly is already well known, but we also need to heed the backlash this all brings. There are great institutional powers at play that may not be that pleased with the power that the new and emerging digital technologies afford the general populace. At the same time, there is a distinct risk that radicalized actors will find ever more toxic ways to utilize the exponentially developing digital tools – particularly in the field of AI. A common fear in scary future scenarios is that AIs will develop to a point where they subjugate humanity. But right now, leading up to 2035, our biggest concern is the ways in which humans are and will be weaponizing AI tools.

“Where this places most of humanity is in a double bind. As digital technology becomes more and more powerful, state institutions will aim to curtail bad actors using it in toxic ways. At the same time, and for the same reason, bad actors will find ever more creative ways to use it to cheat, fool, manipulate, defraud and otherwise mess with us. The average Joe and/or Jane (if such a thing exists anymore) will be caught up in the coming AI turf wars, and some will become collateral damage.

“What this means is that the most menacing thing about digital technologies won’t be the tech itself, nor any one person’s deployment of the same, but being caught in the pincer movement of attempted control and wanton weaponization. We think we’ve felt this now, with the occasional social media post being quarantined, but things are about to get a lot, lot worse.

“Imagine having written a simple, original post, only to see it torn apart by content-monitoring software and at the same time endlessly repurposed by agents who twist your message to its very antithesis. Imagine this being a normal, daily affair. Imagine being afraid to even write an email, lest it becomes fodder in the content wars. Imagine tearing your children’s tech away, just to keep them safe for a moment longer.”

Garth Graham: We don’t understand what society becomes when machines are social agents

Graham, longtime Canadian networked communities leader, wrote, “Consider the widely accepted Internet Society phrase, ‘Internet Governance Ecology.’ In that phrase, what does the word ecology actually mean? Is the Internet Society’s description of Internet governance as ecology a metaphor, an analogy or a reality? And, if it is a reality, what are the consequences of accepting it?

“Digital technology surfaces the importance of understanding two different approaches to governance. Our current understanding of governance, including democracies, is hierarchical, mechanistic and measures things on an absolute scale. The rules about making rules are assumed to be applied externally from outside systems of governance. And this means that those with power assume their power is external to the systems they inhabit. The Internet, as a set of protocols for inter-networking, is based on a different assumption. Its protocols are grounded in a shift in epistemology away from the mechanistic and toward the relational.

“It is a common pool resource and an example of the governance of complex adaptive self-organizing systems. In those systems, the rules about making rules are internal to each and every element of the system. They are not externally applied. This complexity means that the adaptive outcomes of such systems cannot be predicted from the sum of the parts. The assumption of control by leadership inherent in the organization of hierarchical systems is not present. In fact, the external imposition of management practices on a complex adaptive system is inherently disruptive of the system’s equilibrium. So the system, like a packet-switched network, has to route around it to survive. …

“I do not think we understand what society becomes when machines are social agents. Code is the only language that’s executable. It is able to put a plan or instruction or design into effect on its own. It is a human utterance (artifact) that, once substantiated in hardware, has agency. We write the code and then the code writes us. Artificial intelligence intensifies that agency. That makes necessary a shift in our assumptions about the structure of society. All of us now inhabit dynamic systems of human-machine interaction. That complexifies our experience. Yes, we make our networks and our networks make us. Interdependently, we participate in the world and thus change its nature. We then adapt to an altered nature in which we have participated. But the ‘we’ in those phrases now includes encoded agents that interact autonomously in the dynamic alteration of culture. Those agents sense, experience and learn from the environment, modifying it in the process, just as we do. This represents an increase in the complexity of society and the capacity for radical change in social relations.

“Ursula Franklin’s definition of technology – ‘Technology involves organization, procedures, symbols, new words, equations, and, most of all, it involves a mindset’ – is that it is the way we do things around here. It becomes different as a consequence of a shift in the definition of ‘we.’ AI increases our capacity to modify the world, and thus alter our experience of it. But it puts ‘us’ into a new social space we neither understand nor anticipate.”

Kunle Olorundare: There will be universal acceptance of open-source applications to help make AI and robotics safe and smart

Olorundare, vice president of the Nigeria Chapter of the Internet Society, wrote, “Digital technology is in our lives to stay. One area that excites me about the future is the use of artificial intelligence, which of course is going to shape the way we live by 2035. We have started to see the dividends of artificial intelligence in our society. Essentially, the human-centered development of digital tools and systems is safely advancing human progress in the areas of transportation, health, finances, energy harvesting and so on.

“As an engineer who believes in the power of digital technology, I see limitless opportunities for our transportation system. Beyond personal driverless cars and taxis, by 2035 our public transportation will be taken over by remote-controlled buses running on schedules so accurate (a margin of error of 0.0099) that personal cars will feel needless. Public transit will be cheaper and more dependable.

“Autonomous public transport will be pocket-friendly to the general citizenry. This will come with less pollution, as energy harvesting from green sources will take a tremendous positive turn with the use of IoT and other digital technologies that harvest energy from multiple sources by estimating what amount of energy is needed and which green sources are available at a particular time, with plus-one redundancy and hence minimal inefficiency. Deployment of bigger drones that can come directly to your house to pick you up after identifying you, debiting your digital wallet account and confirming the payment will be a reality. The use of paper tickets will be a thing of the past as digital wallets to pay for all services will be ubiquitous.

“In regard to human connections, governance and institutions and the improvement of social and political interactions, by 2035, the body of knowledge will be fully connected. There will be universal acceptance of open-source applications that make it possible to have a globally robust body of knowledge in artificial intelligence and robotics. There will be less depression in society. If your friends are far away, robots will be available as friends you can talk to and even watch TV with and analyze World Cup matches as you might do with your friends. Robots will also be able to contribute to your research work even more than what ChatGPT is capable of today. …

“The verifying, updating and safe archiving of human knowledge by open-source AI will make research easier. Human ingenuity will still be needed to add value – we will work on the creative angles while secondary research is being conducted by AI. This will increase contributions to the body of knowledge, and society will be better off.

“Human health and well-being will benefit greatly from the use of AI, bringing about a healthy population as sicknesses and diseases can be easily diagnosed. Infectious diseases will become less virulent because robots can be deployed during highly infectious outbreaks, and pandemics can be more easily curbed. With enhanced big data using AI and ML, pandemics can be easily predicted and prevented, and the impact curve flattened in the shortest possible time using AI-driven pandemic management systems.

“It is pertinent to also look at the other side of the coin as we gain positive traction on digital technologies. There will be concern about the safety of humans as technology is used by scoundrels for crime, mischief and other negative ends. Technology is often used to attack innocent souls. It can be used to manipulate the public or destroy political enemies, thus it is not necessarily always the ‘bad guys’ who are endangering our society. Human rights may be abused. For example, a government may want to tie us to one digital wallet through a central bank digital currency and dictate how we spend our money. These are issues that need to be looked at in order not to trample on human rights. Technological colonization may also raise a concern, as unique cultures may be eroded due to global harmonization. This can create an unequal society in which some sovereign nations may benefit more than others.”

Jeff Jarvis: Let’s hope media culture changes and focus our attention on discovering, recommending and supporting good speech

Jarvis, director of the Tow-Knight Center at City University of New York’s Craig Newmark School of Journalism, wrote, “I shall share several hopes and one concern:

  • “I hope that the tools of connection will enable more and more diverse voices to at last be heard outside the hegemonic control of mass media and political power, leading to richer, more inclusive public discourse.
  • “I hope we begin to see past the internet’s technology as technology and understand the net as a means to connect us as humans in a more open society and to share our information and knowledge on a more equitable and secure basis for the benefit of us all.
  • “I hope we might finally move beyond mass media’s current moral panic over the internet as competition and, indeed, supersede the worst of mass media’s failing institutions, beginning with the notion of the mass and media’s invention of the attention economy.
  • “I hope that – as occurred at the birth of print – we will soon turn our attention away from the futile folly of trying to combat, control and outlaw all bad speech and instead focus our attention and resources on discovering, recommending and supporting good speech.
  • “I hope the tools of AI – the subject of mass media’s next moral panic – will help people intimidated by the tools of writing and research to better express their ideas and learn and create.
  • “I hope we will have learned the lesson taught us by Elon Musk: that placing our discourse in the hands of centralized corporations is perilous and antithetical to the architecture and aims of the Internet; federation at the edge is a far better model.
  • “I hope that regulators will support opening data for researchers to study the impact and value of the net – and will support that work with necessary resources.

“I fear the pincer movement from right and left, media and politics, against Section 230 and protection of freedom of expression will lead to regulation that raises liability for holding public conversation and places a chill over it, granting protection to and extending the corrupt reign of mass media and the hedge-fund-controlled news industry.”

Maja Vujovic: We will have tools that keep us from drowning in data

Vujovic, owner and director of Compass Communications in Belgrade, Serbia, wrote, “New technologies don’t just pop up out of the blue; they grow through iterative improvements of conceivable concepts moved forward by bold new ideas. Thus, in the decade ahead, we will see advances in most of the key breakthroughs we already know and use (automation and robotics, sensors and predictive maintenance, AR and VR, gaming and metaverse, generative arts and chatbots and digital humans) as they mature into the mass mainstream.

“Much as spreadsheet tech sprouted in the 1970s and first thrived on mainframe computers but became adopted en masse when those apps migrated onto personal desktops, in the same way, we will witness in the coming years countless variations of apps for personal use of our current top-tier technologies.

“The most useful among those tech-granulation trends will be the use of complex tech in personalized health care. We will see very likable robots serve as companions to ailing children and as care assistants to the infirm elderly. Portable sensors will graduate from superfluous swagger to life-saving utility. We are willing and able to remotely track our pets now, but gradually we will track our small children or parents with dementia as well.

“Drowning in data, we will have tools for managing other tools and widgets for automating our digital lives. Apps will work silently in the background, or in our sleep, tagging our personal photos, tallying our daily expenses, planning our celebrations or curating our one (combined) social media feed. Rather than supplanting us and scaling our creative processes (which by definition only works on a scale of one!) technology will be deployed where we need it the most, in support of what we do best – and that is human creation.

“To extract the full value from tools like chatbots, we will all soon need to master the arcane art of prompting AI. ‘Prompt engineer’ is already a highly paid job title. In the next decade, prompting AI will be an advanced skill at first, then a realm of licensed practitioners and eventually an academic discipline.

“Of course, we still have many concerns. One of them is the limitations imposed by the ways in which AI is now being trained on limited sets of data. Our most advanced digital technologies are a result of unprecedented aggregation. Top apps have enlisted almost half of the global population. The only foreseeable scenario for them is to keep growing. Yet our global linguistic capital is not evenly distributed.

“By compiling the vocabularies of languages with far fewer users than English or Chinese have, a handful of private enterprises have captured and processed the linguistic equity not only of English, Hindi or Spanish, but of many small cultures as well, such as Serbian, Welsh or Sinhalese. Those cultures have far less capacity to compile and digitally process their own linguistic assets by themselves. While mostly benign in times of peace, this imbalance can have grave consequences during more tense periods. Effectively, it is a form of digital supremacy, which in time might prove taxing on smaller, less wealthy cultures and economies.

“Moreover, technology is always at the mercy of other factors, which get to determine whether it is used or misused. The more potent the technologies at hand, the more damage they can potentially inflict. Having known war firsthand and having gone through the related swift disintegration of social, economic and technical infrastructure around me, I am concerned to think how utterly devastating such disintegration would be in the near future, given our total dependence on an inherently frail digital infrastructure.

“With our global communication signals fully digitized in recent times, there would be absolutely no way to get vital information, talk to distant relatives or collect funds from online finance operators in case of any accidental or intentional interruption or blockade of Internet service. Virtually all amenities of contemporary living – our whole digital life – could be canceled with the flip of a switch, without recourse. As implausible as this sounds, it isn’t impossible. Indeed, we have witnessed implausible events take place in recent years. So, I don’t like the odds.”

Paul Jones: ‘We used to teach people how to use computers. Now we teach computers how to use people’

Jones, professor emeritus at UNC-Chapel Hill School of Information and Library Science, wrote, “There is a specter haunting the internet – the specter of artificial intelligence. All the powers of old thinking and knowledge production have entered into a holy (?) alliance to exorcise this specter: frenzied authors, journalists, artists, teachers, legislators and, most of all, lawyers. We are still waiting to hear from the pope.

“In education, we used to teach people how to use computers. Now, we teach computers how to use people. By aggregating all that we can of human knowledge production in nearly every field, the computers can know more about humans as a mass and as individuals than we can know of ourselves. The upside is these knowledgeable computers can provide, and will quickly provide, better access to health, education and in many cases art and writing for humans. The cost is a loss of personal and social agency at individual, group, national and global levels.

“Who wouldn’t want the access? But who wouldn’t worry, rightly, about the loss of agency? That double desire is what makes answering these questions difficult. ‘Best and most beneficial’ and ‘most harmful and menacing’ are not so much opposites as conjoined – like twins sharing essential organs and blood systems, and, unlike some such twins, no known surgery can separate them. Just as cars gave us, over a short time, a democratization of travel and at the same time became major agents of death – immediately in wrecks, more slowly via pollution – AI and the infrastructure to support it will give us untold benefits and access to knowledge while causing untold harm.

“We can predict somewhat the direction of AI, but more difficult will be understanding the human response. Humans are now, or will soon be, conjoined to AI even if they don’t use it directly. AI will be used on everyone, just as one need not drive or even ride in a car to be affected by the existence of cars. AI changes will emerge when it possesses these traits:

  • “Distinctive presences (aka voices but also avatars personalized to suit the listener/reader in various situations). These will be created by merging distinctive human writing and speaking voices, say maybe Bob Dylan + Bruce Springsteen.
  • “The ability to emotionally connect with humans (aka presentation skills).
  • “Curiosity. AI will do more than respond. It will be interactive and heuristic, offering paths that have not yet been offered – we have witnessed this AI behavior in the playing of Go and chess. AI will continue to present novel solutions.
  • “A broad and unique worldview. AI can be trained on all digitizable human knowledge and can avail itself of information from sensors beyond those available to humans. It will be able to apply, say, Taoism to questions about weather.
  • “Empathy. Humans do not have an endless well of empathy. We tire easily. But AI can seem persistently and constantly empathetic. You may say that AI empathy isn’t real, but human empathy isn’t always either.
  • “Situational Awareness. Thanks to input from a variety of sensors, AI can and will be able to understand situations even better than humans.

“No area of knowledge work will be unaffected by AI and sensor awareness.

“How will we greet our robot masters? With fear, awe, admiration, envy and desire.”

Marjory Blumenthal: Technology outpaces our responses to unintended consequences

Blumenthal, senior adjunct policy researcher at RAND Corporation, wrote, “In a little over a decade, it is reasonable to expect two kinds of progress in particular. First are improvements in the user experience, especially for people with various impairments (visual, auditory, tactile, cognitive). A lot is said about diversity, equity and inclusion that focuses broadly on factors like income and education, but benefiting from digital technology requires an ability to use it that today remains elusive for many people for physiological reasons. Globally, populations are aging, a process that often confronts people with impairments they didn’t use to have (and of course many experience impairments from birth onward).

“Second, and notwithstanding concerns about concentration in many digital-tech markets, more indigenous technology is likely, at least to serve local markets and cultures. In some cases, indigenous tech will take advantage of indigenous data, which technological progress will make easier to amass and use, and more generally it will leverage a wider variety of talent, especially in the Global South, plus motivations to satisfy a wider variety of needs and preferences (including, but not limited to, support for human rights).

“There are two areas in which technology seems to get ahead of people’s ability to deal with it, either as individuals or through governance. One is the information environment. For the last few years, people have been coming to grips with manipulated information and its uses, and it has been easier for people to avoid the marketplace of ideas by sticking with channels that suit narrow points of view.

“Commentators lament the decline in trust of public institutions and speculate about a new normal that questions everything to a degree that is counterproductive. Although technical and policy mechanisms are being explored to contend with these circumstances, the underlying technologies and commercial imperatives seem to drive innovation that continues to outpace responses. For example, the ability to detect tends to lag the ability to generate realistic but false images and sound, although both are advancing.

“At a time when there has been a flowering of principles and ethics surrounding computing, new systems like ChatGPT with a high cool factor are introduced without any apparent thought to second- and third-order effects of using them – thoughtfulness takes time and risks loss of leadership. The resulting distraction and confusion likely will benefit the mischievous more than the rest of us – recognizing that crime and sex have long impelled uses of new technology.

“The second is safety. Decades of experience with digital technology have shown our limitations in dealing with cybersecurity, and the rise of embedded and increasingly automated technology introduces new risks to physical safety even as some of those technologies (e.g., automated vehicles) are touted as long-term improvers of safety.

“Responses are likely to evolve on a sector-by-sector basis, which might make it hard to appreciate interactions among different kinds of technology in different contexts. Although progress on the safety of individual technologies will occur over the next decade, the cumulation of interacting technologies will add complexity that will challenge understanding and response.”

David Porush: Advances may come if there are breakthroughs in quantum computing and the creation of a global court of criminal justice

Porush, author and longtime professor at Rensselaer Polytechnic Institute, wrote, “There will be positive progress in many realms. Quantum computing will become a partner to human creativity and problem solving. We have already seen sophisticated brute-force computing achieve this with ChatGPT. Quantum computing will surprise us and challenge us to exceed ourselves even further and in much more surprising ways. It will also challenge former expectations about nature and the supernatural, physics and metaphysics. It will rattle the cage of scientific axioms of the mechanism-vitalism duality. This is a belief, and a hope, with only hints of empirical evidence.

“We might establish a new worldwide court of criminal justice. Utopian dreams that the World Wide Web and new social technologies might change human behavior have failed – note the ongoing human criminality, predation, tribalism, hate speech, theft and deception, demagoguery, etc. Nonetheless, social networks also enable us to witness, record and testify to bad behavior almost instantly, no matter where in the world it happens.

“By 2035 I believe this will promote the creation (or at least the beginning of a discussion of the creation) of a new worldwide court of criminal justice, including a means to prosecute and punish individual war criminals and bad nation-state actors. My hope is that this court would supersede our current broken UN and come to apolitical verdicts based on empirical evidence and universal laws. Citizens have shown, pretty universally, that they will give up rights to privacy to corporations for convenience. It would also imply that the panopticon of technologies used for spying and intrusion, whether for profit or totalitarian control by governments, will be converted to serve the global good.

“Social networking contributes to scientific progress, especially in the field of virology. The global reaction to the arrival of COVID-19 showed the power of data gathering, data sharing and collaboration on analysis to combat a pandemic. Worldwide virology over the past two years is a fine avatar of what could be done for all the sciences. We can make more effective use of global computing in regard to resource distribution. Politicians and nations have not shown enough political will to really address long-term solutions to crises like global warming, water shortages and hunger. At least the emerging data on these crises arm us with knowledge as the predicate to solutions. For instance, there is not one less molecule of H2O available on Earth than a billion years ago; it is just collected, made usable and distributed terribly.

“If we combine the appropriate level of political will with technological solutions (many of which we have in hand), we can distribute scarce resources and monitor harmful human or natural phenomena and address these problems with much more timely and effective solutions.”

Nandi Nobell: New interfaces in the metaverse and virtual reality will extend the human experience

Nobell, futurist designer and senior associate at CallisonRTKL, a global architecture, planning and design practice, wrote, “Whether physical, digital or somewhere in-between, interfaces to human experiences are all we have and have ever had. The body-mind (consciousness) construct is already fully dependent on naturally evolved interfaces to both our surroundings and our inner lives, which is why designing more intuitive and seamless ways of interacting with all aspects of our human lives is both a natural and relevant step forward – it is crossing our current horizon to experience the next horizon. With this in mind, extended reality (XR), the metaverse and artificial intelligence become increasingly important all the time as there are many evident horizons we are crossing through our current endeavours simply by pursuing any advancement.

“Whether it is the blockchain we know today or something more useful – user- and environmentally friendly, smooth to integrate, and able to simplify instant contracts and permissionless activities of all sorts – such a system could enable our world to verify the source and quality of content, along with many other benefits.

“The best interfaces to experiences and services that can be achieved will influence what we can think and do, not just as tools and services in everyday life but also as the path to education, communication and so many other things. Improving our interfaces – both physical and digital – makes the difference between having and not having superpowers as we advance.

“Connecting a wide range of technologies that bridge physical and digital possibilities grows the reach of both. This also means that thinking of the human habitat as belonging to all areas that the body and mind can traverse is more useful than inventing new categories and silos by which we classify experiences. Whatever the future version of multifaceted APIs is, they have to be flexible, largely open and easy to use. Connectivity between ways, directions, clarity, etc., of communication can extend the reach and multiplication of any possibilities – new or old.

“Drawbacks and challenges face us in the years ahead. First comes data – if the FAANGs [Facebook/Meta, Amazon, Apple, Netflix, Google] of the world (non-American equivalents are equally bad) are allowed to remain even nearly as powerful as they are today, problems will become ever greater as their strength as manipulators of individuals grows deeper and more advanced. Manipulation will become vastly more advanced and difficult to recognize.

“Artificial intelligence is already becoming so powerful and versatile it can soon shape any imagery, audio and text or geometry in an instant. This means anyone with the computational resources and some basic tools can trick just about anyone into new thoughts and ideas. The owners of the greatest databanks of individuals’ and companies’ history and preferences can easily shape strategies to manipulate groups, individuals and entire nations into new behaviours.

“Why invest in anything if you will have it stolen at some point? Is some sort of perfect fraud-prevention system (blockchain or better) relevant in a future in which any ownership of any sort of asset class – digital or physical – is under threat of loss or distortion?

“Extended reality and the metaverse often get a bit of a beating for how they can make people more vulnerable to harassment, and this is a real threat, but artificial intelligence is vastly more scalable – essentially it could impact every human with access to digital technology more or less simultaneously, while online harassment in an immersive context is not scalable in a similar sense.

“Striking a comfortable and reasonable balance between safe and sane human freedom and the surveillance technologies needed to maintain a legitimate baseline of human safety is going to be hard to achieve. There will be further and deeper abuses in many cultures. This may create a digital world and lifestyle that branches off quite heavily from its non-digital counterpart, as digital lives can be expected to be surveilled while physical life can, at least in principle, be somewhat free of eavesdropping when people are not in view or earshot of a digital device. That said, a state or company may still reward behaviour that trades data of all sorts about anything happening offline – which has been the case in dictatorships throughout history.

“The very use and manufacture of technology may also cost the planet more than it provides the human experience, and as long as the promises of the future drive the value of stocks and investments, we are not likely to understand when to stop advancing on a frontier that is on a roll.

“Health care will likely become both better and worse – the class divide grows greater gaps – but long-term it is probably better for most people. The underlying factors generally have more to do with human individual values rather than with the technologies themselves.

“There might be artificial general intelligence by 2035. We don’t know what unintended consequences it may portend. Such AI may have great potential to be helpful. Perhaps one individual can create value for humanity or the planet that is a million times greater than the next person’s contribution. But we do not know whether this value will hold over time, or if the outcome will be just as bad as the one portrayed in Nick Bostrom’s ‘paper clip’ analogy.

“Most people are willing to borrow from the future; our children are meant to be this future. What do we make of it? Are children therefore multi-dimensional batteries?”

Charalambos Tsekeris: The surveillance-for-profit model can lead to more loss of privacy, cyber-feudalism and data oligarchy

Tsekeris, vice president of Greece’s Hellenic National Commission for Bioethics and Technoethics, wrote, “In a perfect world, by 2035 digital tools and systems would be developed in a human-centered way, guided by human design abilities and ingenuity. Regulatory frames and soft pressure from civil society would address the serious ethical, legal and social issues resulting from newly emerging forms of agency and privacy. And all in all, collective intelligence, combined with digital literacy, would increasingly cultivate responsibility and shape our environments (analog or digital) to make them safer and AI-friendly.

“Advancing futures-thinking and foresight analysis could substantially facilitate such understanding and preparedness. It would also empower digital users to be more knowledgeable and reflexive upon their rights and the nature and dynamics of the new virtual worlds.

“The power of ethics by design could ultimately orient internet-enabled technology toward updating the quality of human relations and democracy, also protecting digital cohesion, trust and truth from the dynamics of misinformation and fake news.

“In addition, digital assistants and coordination tools could support transparency and accountability, informational self-determination and participation. An inclusive digital agenda might help all users benefit from the fruits of the digital revolution. In particular, innovation in the sphere of AI, clouds and big data could create additional social value and help to support people in need.

“The best and most beneficial change might be achieved by 2035 only if there is a significant increase in digital human, social and institutional capital that creates a happy marriage between digital capitalism and democracy.

“On the other hand, as things stand right now, by 2035 digital tools and systems will not be able to efficiently and effectively fight social divisions and exclusions. This is due to a lack of accountability, transparency and consensus in decision-making. Digital technology systems are likely to continue to function in shortsighted and unethical ways, forcing humanity to face unsustainable inequalities and an overconcentration of technoeconomic power. These new digital inequalities could amount to serious, alarming threats and existential risks for human civilization. These risks could put humanity in serious danger when combined with environmental degradation and the overcomplication of digital connectivity and the global system.

  • “It is likely that no globally-accepted ethical and regulatory frameworks will be found to fix social media algorithms, thus the vicious circle between collective blindness, populism and polarization will be dramatically reinforced.
  • “In addition, the fragmentation of the internet will continue (creating the ‘splinternet’), thus resulting in more geopolitical tensions, less international cooperation and less global peace.
  • “The dominant surveillance-for-profit model is likely to continue to prevail by 2035, leading to further loss of privacy, deconsolidation of global democracy and the expansion of cyber-feudalism and data oligarchy.
  • “The exponential speed and overcomplexity of datafication and digitalization in general will diminish the human capacity for critical reflection, futures thinking, information accuracy and fact-checking.
  • “The overwhelming processes of automation and personalization of information will intensify feelings of loneliness among atomized individuals and further disrupt the domains of mental health and well-being.
  • “By 2035, the ongoing algorithmization and platformization of markets and services will exercise more pressure on working and social rights, further worsening exploitation, injustice, labor conditions and labor relations. Ghost workers and contract breaching will dramatically proliferate.”

Davi Ottenheimer: An over-emphasis on automation instead of human augmentation is extremely dangerous

Ottenheimer, vice president for trust and digital ethics at Inrupt, a company applying the new Solid data protocol, predicted, “The best and most beneficial changes in digital life by 2035, by most accounts, will come from innovations in machine learning, virtualization and interconnected things (IoT). Learning technology can reduce the cost of knowledge. Virtualization technology can reduce the cost of presence. Interconnected things can improve the quantity of data for the previous two while also delivering more accessibility.

“This all speaks mainly to infrastructure tools, however, which need a special kind of glue. Stewardship and ethics can chart a beneficent course for the tools by focusing on an improved digital life that takes those three pieces and weaves them together with open standards for data interoperability. We saw a similar transformation of the 1970s closed data-processing infrastructure into the 1990s interconnected open-standards Web.

“This shift from centralized data infrastructure to federated and distributed processing is happening again already, which is expected to provide ever higher-quality/higher-integrity data. For a practical example, a web page today can better represent details of a person or an organization than most things could 20 years ago. In fact, we trust the Web to process, store and transmit everything from personalized medicine to our hobbies and work.

“The next 20 years will continue a trend to Web 3.0 by allowing people to become more whole and real digital selves in a much safer and healthier format. The digital self could be free of self-interested moat platforms, using instead representative ones; a right to be understood, founded in a right to move and maintain data about ourselves for our purposes (including wider social benefit).

“Knowledge will improve, as it can be far more easily curated and managed by its owner when it isn’t locked away, divided into complex walled gardens and forgotten in a graveyard of consents. A blood pressure sensor, for example, would send data to a personal data store for processing and learning far more privately and accurately. Metadata then could be shared based narrowly on purpose and time, such as with a relative, coach, assistant or health care professional. People’s health and well-being benefit directly from coming improvements in data-integrity architecture, as we already are seeing in any consent-based, open-standards sharing infrastructure being delivered to transform lives for the better.

“The most harmful or menacing changes likely to occur by 2035 in digital technology are related to the disruptive social effects of domain shifts. A domain shift pulls people out of areas they are familiar with and forces them to reattach to unfamiliar technology, such as with the end of horses and the rise of cars. In retrospect, the wheel was inferior to four-legged transit in very particular ways (e.g., requirement for a well-maintained road in favorable weather, dumping highly toxic byproducts in its wake) yet we are very far away from realizing any technology-based legged transit system.

“Sophisticated or not-well-understood technology can be misrepresented using fear tactics such that groups will drive into decades of failure and harm without realizing they’re being fooled. We’ve seen this in the renewed push for driverless vehicles, which are not very new but have lately been presented as magically very near to being realized.

“Sensor-based learning machines are marketed unfairly to unqualified consumers to prey on their fear about loss of control; people want to believe a simple and saccharine digital assistant will make them safer, without evidence. This has manifested as a form of addiction and over-dependence causing social and mental health issues, including an alarming rise in crashes and preventable deaths caused by inattentive drivers who believe misinformation about automation.

“Even more to the point, an over-emphasis on automation instead of augmentation leaves necessary human safety controls and oversight out of the loop on extremely dangerous and centrally controlled machines. It quickly becomes more practical and probable to poison a driverless algorithm in a foreign country to unleash a mass casualty event using loitering cars as swarm kamikazes, than to fire remote missiles or establish airspace control for bombs.

“Another example, related to misinformation, is the domain shift in identity and digital self. Often referred to as deepfakes, an over-reliance on certain cues can be manipulated to target people who don’t use other forms of validation. Trust sometimes is based on the sound of a voice or the visual appearance of a face. That was a luxury, as any deaf or blind person can attest. Now, in the rapidly evolving digital-tools market, anyone can sound or look like anyone – as if observers had become deaf or blind and needed some other means of establishing trust. This erodes old domains of trust, yet it also could radically shift trust by fundamentally altering what credible sources should be based upon.”

Mauro Ríos: We must create commonly accepted standards and generate a new social contract between humanity and technology

Ríos, an adviser to the eGovernment Agency of Uruguay and director of the Uruguayan Internet Society chapter, wrote, “In 2035, advances in technology can and surely will surprise us, but they will surprise us even more IF human beings are willing to change their relationship with technology. For example, we may possibly see the emergence of the real metaverse, something that does not yet exist. We will see a clear evolution of wearable tech, and we will also be surprised at how desktop computing undergoes a remake of the PC.

“But technological advances alone do not create the future, even as they continue to advance unfailingly. The ways in which people use them are what matter. What should occupy us is understanding whether we and tech will be friends, lovers or have a happy marriage. We have discovered – from the laws of robotics to the ethics behind artificial intelligence – that our responsibility as a species grows as we create technology and come to dominate it. It is important that we generate a new social contract between it and us.

“The ubiquity of technology in our lives must lead us to question how we relate to it. Even back in the 1970s and 1980s it was very clear that the border between the human and the non-human was quite likely to blur soon. Today that border is blurry in certain scenarios that generate doubts, suspicions and concerns.

“By the year 2035, humans should have already resolved this discussion and have adapted and developed new, healthy models of interaction with technology. Digital technology is a permanent part of our world in an indissoluble way. It is necessary that we include a formal chapter on it in our social contract. Technology incites us, provokes us, corners us and causes us to question everything. There will be more complex challenges than we can imagine.

“One of the biggest risks today emerges from the fact that the technology industry is resistant to establishing common standards. Steps like those taken by the European Community in relation to connectors are important, but technology companies continue to insist on avoiding standardization for economic gain. In the past, most of the battles over standardization were hardware-related; today they are software-related.

“If we want to develop things like the true metaverse or the conquest of Mars, technology has to have common criteria in key aspects. Standards should be established in artificial intelligence, automation, remote or virtual work, personal medical information, educational platforms, interoperability and communications, autonomous systems and others.”

David A. Banks: If Big Tech firms don’t change their ‘infinite expansion model,’ challenging new, humane systems will arise

Banks, director of globalization studies at the University at Albany-SUNY, commented, “Between now and 2035, the tech industry will experience a declining rate of profit, and individual firms will seek to extract as much revenue as possible from existing core services. Users could thus begin to critically reevaluate their reliance on large-scale social media, group chat systems (e.g., Slack, Teams) and perhaps even search as we know it. Advertising – the ‘internet’s original sin,’ as Ethan Zuckerman so aptly put it in 2014 – will combine with intractable free-speech debates, unsustainable increases in web stack complexity and increasingly unreliable core cloud services to trigger a mass exodus from Web 2.0 services. This is a good thing!

“If Big Tech gets the reputation it deserves, that could lead to a renaissance of libraries and human-centered knowledge searching as an alternative to the predatory, profit-driven search services. Buying clubs and human-authored product reviews could conceivably replace algorithmic recommendations, which would be correctly recognized as the advertisements that they are. Rather than wring hands about ‘echo chambers,’ media could finally return to a partisan stance where biases are acknowledged, and audiences can make fully informed decisions about the sources of their news and entertainment. It would be more common for audiences to directly support independent journalists and media makers who utilize a new, wider range of platforms.

“On the supply side, up-and-coming tech firms and their financial backers could respond by throwing out the infinite expansion model established by Facebook and Google in favor of niche markets that are willing to spend money directly on services that they use and enjoy, rather than passively pay for ostensibly free services through ad revenue. Call it the ‘Humble Net’ if you like – companies that are small and aspire to stay small in a symbiotic relationship with a core, loyal userbase. The smartest people in tech will recognize that they have to design around trust and sustainability rather than trustless platforms built for infinite growth.

“I am mostly basing my worst-case scenario prognostication on how the alt right has set up a wide range of social media services meant to foster and promulgate their worldview.

“In this scenario, venture capital firms will not be satisfied with the Humble Net and will likely put their money into firms that sell to institutional buyers (think weapons manufacturers, billing and finance tools, work-from-home hardware and software and biotech). This move by VCs will have the aggregate effect of privatizing much-needed public goods, supercharging overt surveillance technology and stifling innovation in basic research that takes more than a few years to produce marketable products.

“As big companies’ products lose their sheen and inevitably lose loyal customers, they will likely attempt to become infrastructure, rather than customer-facing brands. This can be seen as a retrenchment of control over markets and an attempt to become a market arbiter rather than a dominant competitor. This will likely lead to monopolistic behavior – price gouging, market manipulation, collusion with other firms in adjacent industries and markets – that will not be readily recognizable by the public or regulators. There is no reason to believe regulatory environments will strengthen to prevent this in the next decade.

“Big firms, in their desperation for new sources of revenue, will turn toward more aggressive freemium subscription models and push into what is left of bricks-and-mortar stores. I have called this phenomenon the ‘Subscriber City,’ where entire portions of cities will be put behind paywalls. Everything from your local coffee shop to public transportation will either offer deep discounts to subscribers of an Amazon Prime-esque service or refuse direct payments altogether. Transportation services like Uber and Waze will more obviously and directly act like managers of segregation than convenience and information services.

“Western firms will be dragged into trade wars by an increasingly antagonistic U.S. State Department, leading to increased prices on goods and services and more overt forms of censorship, especially with regard to international current events. This will likely drive people to their preferred Humble Nets to get news of varying veracity. Right-wing media consumers will seek out conspiratorial jingoism, centrists will enjoy a heavily censored corporate mainstream media, and the left will be left victim to con artists, would-be journalism influencers and vast lacunas of valuable information.”

Lee Warren McKnight: ‘Good, bad and evil AI will threaten societies, undermine social cohesion … and undermine human well-being’

McKnight, professor of entrepreneurship and innovation at Syracuse University’s School of Information Studies, wrote, “Human well-being and sustainable development are likely to be greatly improved by 2035. This will be supported by shared cognitive computing software and services at the edge, or perhaps by a digital twin of each village, and it will be operating to custom, decentralized design parameters decided by each community. The effects will significantly raise the incomes of rural residents worldwide. It will not eliminate the digital divide, but it will transform it. Digital tools and systems will be nearly universally available. The grassroots can be digitalized, empowering the 37% of the world who are still largely off the grid in 2023. With ‘worst-case-scenario survival-as-a-service’ widely available, human safety will progress.

“This will be partially accomplished by low-Earth-orbit (LEO) microsatellite systems. Right now, infrastructureless wireless or cyber-physical infrastructure can span any distance. But that is just a piece of a wider shared cognitive cyber-physical (IoT) technology, energy, connectivity, security, privacy, ethics, rights, governance and trust virtual services bundle. Decentralized communities will be adapting these digital, partially tokenized assets to their own needs and in working toward the UN’s Sustainable Development Goals through to 2035.

“Efforts are progressing through the ITU [International Telecommunication Union], Internet Society and many more UN and civil society organizations and governments, addressing the huge challenge to the global community to connect the next billion people. I foresee self-help, self-organized, adaptive cloud-to-edge Internet operators solving the problem of getting access to people’s homes and businesses everywhere. They are digitally transforming themselves and they are the new community services providers.

“The market effects of edge bandwidth management innovations, radically lower edge device and bandwidth costs through community traffic aggregation, and fantastically higher access to digital services will be significant enough to measurably raise the GDP in nations undertaking their own initiatives to digitalize the grassroots, beyond the current reach of telecommunications infrastructure. At the community level, the effect of these initiatives is immediately transformative for the youth of participating communities.

“How do I know all of this? Because we are already underway with the Africa Community Internet Program, launched by the UN Economic Commission for Africa in cooperation with the African Union, in 2022. Ongoing pilot projects are educating people in local governments and other Internet community multistakeholders about what is possible. …

“The second topic I’d like to touch on is the concept of trust in ‘zero trust’ environments. Right now, trust comes at a premium. It will rely on sophisticated mechanisms in 2035. Certified ethical AI developers are the new Silicon Valley elite priesthood. They are the well-paid orchestrators of machine learning and cognitive communities, and they are certified as trained to be ethical in code and by design. Some liability insurance disputes have delayed the progress of this movement, but by 2035 the practice and profession of Certified Ethical AI Developer will have cleaned up many biased-by-poor-design legacy systems. And they will have begun to lead others toward this approach, which combines improved multi-dimensional security with attention to privacy, ethics and rights-awareness in the design of adaptive complex systems.

“Many developers and others in and around the technical community suddenly have a new interest in introductory level philosophy courses, and there is a rising demand for graduates who have double-majored in computer science and philosophy. Data scientists will work for and report to them. Of course, having a certification process for ethical AI developers does not automatically make firms’ business practices more ethical. It serves as a market signal that sloppy Silicon Valley practices also run risks, including loss of market share. We can hope that, standing alongside all of the statements of ethical AI principles, certified ethical AI developers will be 2035’s reality 5D TV stars, vanquishing bad and evil AI systems. …

“I do have quite a few concerns over human-centered development of digital tools and systems falling short of advocates’ goals. Good, bad and evil AI will threaten societies, undermine social cohesion, spark suicides and domestic and global conflict, and undermine human well-being. Just as profit-motivated actors, nation-states and billionaire oligarchs have weaponized advocacy for guns over people – leading to skyrocketing murder rates and a shorter lifespan in the United States – similar groups, and groups using machine learning and neural network systems to manipulate them, are arising under the influence of AI.

“They already have. To define terms, good AI is ethical and good by evidence-based design. Bad AI is ill-formed either by ignorance and human error or bad design. In 2035 evil AI could be a good AI or a bad AI gone bad due to a security compromise or malicious actor; or it could be bad-to-the-bone evil AI created intentionally to disrupt communities, crash systems and foster murders and death.

  • “The manufacturers of disinformation, both private sector and government information warfare campaign managers, will all be using a variety of ChatGPT-gone-bad-like tools to infect societal discourse, systems and communities.
  • “The manipulated media and surveillance systems will be integrated to infect communities as a wholesale, on-demand service.
  • “Custom evil AI services will be preferred by stalkers and rapists.
  • “Mafia-like protection rackets will grow to pay off potential AI attackers as a cost of doing only modestly bad business.
  • “Both retail and wholesale market growth for evil AI will have compound effects, with both cyber-physical mass-casualty events and more psychologically damaging unfair-and-unbalanced artificially intelligent evil digital twins that are perfectly attuned to personalize evil effects. Evil robotic process automation will be a growth industry through to 2035, to improve scalability.”

Frank Odasz: ‘The battle between good and evil has changed due to the power of technology’

Odasz, president of Lone Eagle Consulting, wrote, “By 2035, in a perfect world, everyone will have a relationship with AI in multiple forms. ChatGPT is an AI tool that can draft essays on any topic. Jobs will require less training and will be continually aided by AI helpers. The Congressional Office of Technology Assessment will be reinstated to counter the exponential abuses of AI, deepfake videos and all other known abuses. Creating trust in online businesses and secure identities will become commonplace. Four-day work weeks and continued growth in remote work and remote learning will mean everyone can make the living they want, living wherever they want.

“Everyone will have a global citizenship mindset working toward those processes that empower everyone. Keeping humankind to the same instant of progress will become a shared goal as the volume of new innovations continues to increase, increasing opportunities for everyone to combine multiple innovations to create new integrated innovations.

“Developing human talent and agency will become a global shared goal. Purposeful use of our time will become a key component of learning. There will be those who spend many hours each day using VR goggles for work and gaming that feature increasingly social components. A significant portion of society will be able to opt out of most digital activities once universal basic income programs proliferate. Life, liberty and pursuit of happiness, equality before the law and new forms of self-exploration and self-care will proliferate.

“Collective values will emerge and become important regarding life choices. Reconnecting with nature and our responsibility for stewardship of our planet’s environments, and each other, will take a very purposeful role in the lives of everyone. As more people learn the benefits of being positive, progressive, tolerant of differences and open-minded, most people will agree that people are basically good. The World Values Survey recently reported that 78% of Swedish citizens believe people are basically good, while the figure is 15% in Latin America and 5% in Asia.

“In pursuing meaningful use of our time when we are freed from menial labor, we can create a new global culture of purpose that rallies all global citizens to work together to sustain civil society and our planet.

“With all the advances in tech, what could go wrong? Well, by 2035, the vague promise of broadband for all – providing meaningful, measurable, transformational outcomes – will create a split society, extending what we already see in 2023, with the most-educated leaning toward a progressive, tolerant, open-learning society able to adapt easily to accelerating change. Those left behind, without the mutual support necessary to learn to love learning and to benefit from accelerating technical innovation, will grow fearful of change, of learning, of those who do understand the potential for transformational outcomes of motivated, self-directed Internet learning and particularly of collaborating with others. If we all share what we know, we’ll all have access to all our knowledge.

“Lensa AI is an app from China that turns your photo into many choices for an avatar and/or a more compelling ID photo, requiring only that you sign away all intellectual rights to your own likeness. Abuses of social media are listed at the Ledger of Harms from the Center for Humane Tech. It is known that foreign countries continue to implement increasingly insidious methods for proliferating misinformation and propaganda. Certainly the United States, internally, has severe problems in this regard due to severe political polarization that went nearly ballistic in 2020 and 2021.

“If a unified global value system evolves, there is hope international law can contain moral and ethical abuses. Note: The Scout Law, created in 1911, has a dozen generic values for common decency and served as the basis for the largest uniformed organizations in the world – Boy Scouts and Girl Scouts. Reverence is one trait that encompasses all religions. ‘Leave no one behind’ must be used to refer to those without a moral compass; positive, supportive culture; self-esteem; and common sense.

“Mental health problems are rampant worldwide. Vladimir Putin controls more than 4,500 nuclear missiles. In the United States, proliferation of mass shootings shows us that one person can wreak havoc on the lives of many others. If 99% of society evolves to be good people with moral values and generous spirits, the reality is that human society might still end in nuclear fires due to the actions of a few individuals, or even a single individual with a finger on the red button, capable of destroying billions and making huge parts of the planet uninhabitable. How can technology assure our future? Finland has built underground cities to house their entire population in the event of nuclear war.

“The battle between good and evil has changed due to the power of technology. The potential disaster only a few persons can exact upon society continues to grow disproportionally to the security the best efforts of good folks can deliver. This dichotomy, taken to extremes, might spell doom for us all unless radical measures are taken, down to the level of monitoring individuals every moment of the day.

“A-cultural worldviews need to evolve to create a common bond accepting our differences as allowable commonalities. This is the key to sustainability of the human race, and it is not a given. Our human-caused climate changes are already creating dire outcomes: drought, sea levels rising and much more. The risk of greater divisiveness will increase as impacts of climate change continue to increase. Migration pressure is but one example.”

Frank Kaufmann: It all comes down to how humans use digital technology

Kaufmann, president of Twelve Gates Foundation and Values in Knowledge Foundation, wrote, “I find all technological development good if developed and managed by humans who are good.

“The punchline is always this: To the extent that humans are impulsively driven by compassion and concern for others and for the good of the whole, there is not a single prospective technological or digital breakthrough that bodes ill in its own right. Yet, to the extent that humans are impulsively driven for self-gain, with others and the good of the whole as expendable in the equation, even the most primitive industrial/technological development is to be feared.

“I hold this view in its most extreme form: it is simple, fundamental and universal. For example, if humans were fixed in an inescapable makeup characterized by care and compassion, the development of an exoskeletal, indestructible, AI-controlled military robot that could anticipate my movements from up to four miles away and morph to look just like my loving grandmother could be a perfectly wonderful development for the good of humankind. On the other hand, if humans cannot be elevated above the grotesque makeup in which others and the greater good are expendable in the pursuit of selfish gain, then even the invention of a fork is a dangerous, even horrifying thing.

“The Basis to Assess Tech – Human Purpose, Human Nature: I hold that the existence of humans is intentional, not random. This starting point establishes for me two bases for assessing technological progress: How does technological/digital development relate to 1) human purpose and 2) human nature?

“Human purpose: Two things are the basis for assessing anything: the purpose and the nature of the agent. This is the same whether we assess CRISPR gene editing or whether I turn left or right at a stoplight. The question in both cases is: Does this action serve our purpose? This tells us if the matter in question is good or bad. It simply depends on what we are trying to do (our purpose). If our purpose is to get to our mom’s house, then turning left at the light is a very bad thing to do. If the development of CRISPR gene editing is to elevate dignity for honorable people, it is good. If it is to advance the lusts of a demonic corporation, or the career of an ego-insane medical monster, then likewise breakthroughs in CRISPR gene editing are worrisome.

“Unfortunately, it is very difficult to know what human purpose is. Only religious and spiritual systems recommend what that might be.

“Human nature: The second basis for assessing things (including digital and technological advances) relates to human nature. This is more accessible. We can ask: Does the action comport with our nature? For simplicity I’ve created a limited list of what humans desire (human nature):

Original desires

  • To love and be loved
  • Privacy (personal sovereignty)
  • To be safe and healthy
  • Artistic expression
  • Sports and leisure, physical and athletic experience

Perverse and broken desires

  • Pursuit of and addiction to power
  • Willingness to indulge in conflict

“Three bases to assess: In sum then, analyzing and assessing technological and digital development by the year 2035 should move along three lines of measure.

  • Does the breakthrough serve the reason why humans exist (human purpose)?
  • Which part of human nature does the breakthrough relate to?
  • Can the technology have built-in protections to prevent perfectly exciting, wonderful breakthroughs from becoming a dark and malign force over our lives and human history?

“All technology coming in the next 15 years sits on a two-edged sword according to measures for the analysis described above. The following danger-level categories help describe things further.

“Likely benign, little danger – Some coming breakthroughs are merely exciting, such as open-air gesture technology, prosthetics with a sense of touch, printed food, printed organs, space tourism, self-driving vehicles and much more.

“Medium danger – Some coming digital and tech breakthroughs raise medium levels of concern for their social or ethical implications, such as hybrid-reality environments, tactile holograms, domestic service and workplace robots, quantum-encrypted information, biotechnology and nanotechnology and much more.

“Dangerous, great care needed – Finally, there is a category of coming developments that should be put in the high concern category. These include brain-computer interfaces and brain-implant technology, genome editing, cloning, selective breeding, genetic engineering, artificial general intelligence (AGI), deepfakes, people-hacking, clumsy efforts to fix the environment through potentially risky geoengineering, CRISPR gene editing and many others.

“Applying the three bases in assessing the benefits and dangers of technological advances in our time can be done rigorously, systematically and extensively on any pending digital and tech developments. They are listed here on a spectrum from less worrisome to potentially devastating. It is not the technology itself that marks it as hopeful or dystopic. This divergence is independent of the inherent quality of the precise technology itself; it is tied to the maturation of human divinity, ideal human nature.”

Charles Fadel: Try to discover the ‘unknown unknowns’

Fadel, founder of the Center for Curriculum Redesign and co-author of “Artificial Intelligence in Education: Promises and Implications for Teaching and Learning,” wrote, “The amazing thing about this moment is how quickly artificial intelligence is spreading and being applied. With that in mind, let’s walk through some big-picture topics. On human-centered development of digital tools and systems: I do believe significant autonomy will be achieved by specialized robotic systems, assisting in driving (U.S.), (air and land) package delivery, or bedside patient care (Japan), etc. But we don’t know exactly what ‘significant’ entails. In other words, the degree of autonomy may vary by the life-criticality of the applications – the more life-critical, the less trustworthy the application (package delivery on one end, being driven safely on the other).

“On human knowledge: Foundational AI models like GPT-3 are surprising everyone and will lead to hard-to-imagine transformations. What can a quadrillion-item system achieve? Is there a diminishing return? We will find out in the next six months, possibly even before the time this is published. We’ve already seen how very modest technological changes disrupt societies. I was witness to the discussion regarding the Global System for Mobile Communications (GSM) effort years ago, when technologists were trying to see if we could use a bit of free bandwidth that was available between voice communications channels. They came up with short messages – 160 characters that only needed 10 kilohertz of bandwidth. I wondered at the time: Who would care about this?

“Well, people did care, and they started exchanging astonishing volumes of messages. The humble text message has led to societal transformations that were complete ‘unknown unknowns.’ First, it led to the erosion of commitments (by people not showing up when they said they would), and not long afterward it led to the erosion of democracy via Twitter and other social media.

“If something that small can have such an impact, it’s impossible to imagine what impact foundation models will have. For now, I’d recommend that everybody take a deep breath and wait to see what the emerging impact of these models is. We are talking about punctuated equilibria à la Stephen Jay Gould for AI, but we’re not sure how far we will go before the next plateauing.

“Human connections, governance and institutions: I worry about regulation. I continue to marvel at the inability of lawyers and politicians, who are typically humanities types, to understand the impact of technologies for a decade or more after they erupt. This leads to catastrophes before anyone is galvanized to react. Look at the catastrophe of Facebook and Cambridge Analytica and the 2016 election. No one in the political class was paying attention then, and there still aren’t any real regulations. There is no anticipation in political circles about how technology changes things and the dangers that are obvious. It takes two to three decades for them to react, yet regulations should come within three years at worst.”

Anonymous: To avoid bad outcomes, generative AI must be shaped to serve people, not exploit them

A director of applied science for one of the top tech companies beginning to develop generative AI wrote, “I am deeply concerned about the societal implications of the emerging generative AI paradigm, but not for the reasons that are currently in the news. Specifically, on the current path, we risk both destroying the potential of these AI systems and, quite worryingly, the business models (and employment) of anyone who generates content. If we get this right, we can create a virtuous loop that will benefit all stakeholders, but that will require significant changes in policy, law and market dynamics.

“Key to this concern is the misperception that generative AI is in fact AI. Given how these technologies work – and in particular their voracious appetite for a truly astonishing amount of textual and imagery content to learn from – they’re best understood as collective intelligence, not artificial intelligence. Without hundreds of thousands of scientific articles, news articles, Wikipedia articles, user-generated Q&A content, books, e-commerce listings, etc., these things would be dumb as a doorknob.

“The risk here is that AI companies and content producers fail to recognize that they have extensive mutual dependence with respect to these systems. If people attribute all of the value to the AI systems (and AI companies delude themselves into this), all the benefits (economic and otherwise) will flow to AI companies. This risk is exacerbated by the fact that these technologies are able to write news articles, Wikipedia articles, etc., disrupting the methods of production for these datasets. The implications of this are very serious:

  • “Generative AI will substantially increase economic inequality, which is associated with terrible societal outcomes.
  • “Generative AI will threaten some of society’s most important institutions: news institutions, science, organizations like the Wikimedia Foundation, etc.
  • “Generative AI will eventually fail as it destroys the training data it needs to work.

“To avoid these outcomes, we urgently need a few things:

  • “We must strengthen content ownership laws to make clear that if you want to train an AI on a website or document, you need permission from the content owner. This can come both via new laws and lawsuits that lead to new legal interpretations.
  • “We need people to realize that they have a lot of power to stop AI companies from using all of their content without permission. There are very simple solutions that range from website owners using robots.txt, scientific authors using the copyright information they have, etc. Even expressing their wish to be opted-out has worked in a number of important early cases.
  • “We need companies to understand the market opportunities in strengthened content ownership laws and practices, which can put the force of the market behind a virtuous loop. For instance, an AI company that seeks to gain exclusive licenses to particularly valuable training content would be a smart AI company and one that will share the benefits of its technologies with all the people who helped create them.”
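The robots.txt opt-out mentioned above is machine-checkable. A minimal sketch using Python's standard-library `urllib.robotparser`; the crawler name `ExampleAIBot` and the site are hypothetical stand-ins for whatever user-agent an AI company's crawler declares:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a site owner might publish to refuse an AI-training
# crawler while still allowing ordinary crawlers.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# The AI-training crawler is refused; a generic crawler is allowed.
print(parser.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(parser.can_fetch("OtherBot", "https://example.com/article"))      # True
```

A file like this expresses an opt-out that well-behaved crawlers are expected to honor; it is a signal of the owner's permission terms, not an enforcement mechanism.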

The following four essays are reprinted with the authors’ permission from the section “Hopes for 2023” in the Dec. 28, 2022, edition of Andrew Ng’s The Batch AI newsletter.

Yoshua Bengio: Our AI models should feature a human-like ability to discover and reason with high-level concepts and relationships

Bengio, scientific director of Mila Quebec AI Institute and co-winner of the 2018 Alan Turing Award for his contributions to breakthroughs in the AI field of deep learning, wrote, “In the near future we will see models that reason. Recent advances in deep learning largely have come by brute force: taking the latest architectures and scaling up compute power, data and engineering. Do we have the architectures we need, and all that remains is to develop better hardware and datasets so we can keep scaling up? Or are we still missing something?

“I believe we’re missing something, and I hope for progress toward finding it in the coming year.

“I’ve been studying, in collaboration with neuroscientists and cognitive neuroscientists, the performance gap between state-of-the-art systems and humans. The differences lead me to believe that simply scaling up is not going to fill the gap. Instead, building into our models a human-like ability to discover and reason with high-level concepts and relationships between them can make the difference.

“Consider the number of examples necessary to learn a new task, known as sample complexity. It takes a huge amount of gameplay to train a deep learning model to play a new video game, while a human can learn this very quickly. Related issues fall under the rubric of reasoning. A computer needs to consider numerous possibilities to plan an efficient route from here to there, while a human doesn’t.

“Humans can select the right pieces of knowledge and paste them together to form a relevant explanation, answer or plan. Moreover, given a set of variables, humans are pretty good at deciding which is a cause of which. Current AI techniques don’t come close to this human ability to generate reasoning paths. Often, they’re highly confident that their decision is right, even when it’s wrong. Such issues can be amusing in a text generator, but they can be life-threatening in a self-driving car or medical diagnosis system.

“Current systems behave in these ways partly because they’ve been designed that way. For instance, text generators are trained simply to predict the next word rather than to build an internal data structure that accounts for the concepts they manipulate and how they are related to each other. But we can design systems that track the meanings at play and reason over them while keeping the numerous advantages of current deep learning methodologies. In doing so, we can address a variety of challenges from excessive sample complexity to overconfident incorrectness.

“I’m excited by generative flow networks, or GFlowNets, an approach to training deep nets that my group started about a year ago. This idea is inspired by the way humans reason through a sequence of steps, adding a new piece of relevant information at each step. It’s like reinforcement learning, because the model sequentially learns a policy to solve a problem. It’s also like generative modeling; it can sample solutions in a way that corresponds to making a probabilistic inference.

“If you think of an interpretation of an image, your thought can be converted to a sentence, but it’s not the sentence itself. Rather, it contains semantic and relational information about the concepts in that sentence. Generally, we represent such semantic content as a graph, in which each node is a concept or variable. GFlowNets generate such graphs one node or edge at a time, choosing which concept should be added and connected to which others in what kind of relation. I don’t think this is the only possibility, and I look forward to seeing a multiplicity of approaches. Through a diversity of exploration, we’ll increase our chance to find the ingredients we’re missing to bridge the gap between current AI and human-level AI.”
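As a concrete, much-simplified illustration of the one-step-at-a-time construction Bengio describes, the toy sketch below builds a small semantic graph node by node under a hand-written policy. It is not the GFlowNet training procedure itself (which learns a deep policy network whose samples are proportional to a reward); the concept names and the uniform policy are invented for illustration.

```python
import random

# Toy sketch of sequential graph construction in the spirit of the
# GFlowNet idea described above. A real GFlowNet trains a deep network so
# that objects are sampled with probability proportional to a reward;
# here a hand-written policy merely shows the step-by-step process.

def build_graph(concepts, policy, max_edges=3, seed=0):
    """Add one node per step; after each new node, let `policy` pick an
    existing node to connect it to."""
    rng = random.Random(seed)
    nodes, edges = [], []
    for concept in concepts:
        nodes.append(concept)                    # step: add a node
        if len(nodes) >= 2 and len(edges) < max_edges:
            neighbour = policy(rng, nodes[:-1])  # step: choose a connection
            edges.append((neighbour, concept, "related-to"))
    return nodes, edges

def uniform_policy(rng, candidates):
    # Stand-in for a learned policy: pick an existing node uniformly.
    return rng.choice(candidates)

nodes, edges = build_graph(["cat", "mat", "sitting-on"], uniform_policy)
```

A learned policy would replace `uniform_policy`, choosing which concept to add and connect based on the partial graph built so far.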

Douwe Kiela: We must move past today’s AI and its many shortcomings, like ‘hallucinations’; these systems are also too easily misused or abused

Kiela, an adjunct professor in symbolic systems at Stanford University, previously the head of research at Hugging Face and a scientist at Facebook Research, wrote, “Expect less hype and more caution. In 2022 we really started to see AI go mainstream. Systems like Stable Diffusion and ChatGPT captured the public imagination to an extent we haven’t seen before in our field. These are exciting times, and it feels like we are on the cusp of something great: a shift in capabilities that could be as impactful as – without exaggeration – the Industrial Revolution.

“But amidst that excitement, we should be extra wary of hype and extra careful to ensure that we proceed responsibly. Consider large language models. Whether or not such systems really have meaning, lay people will anthropomorphize them anyway, given their ability to perform arguably the most quintessentially human thing: to produce language. It is essential that we educate the public on the capabilities and limitations of these and other AI systems, especially because the public largely thinks of computers as good old-fashioned symbol processors – for example, that they are good at math and bad at art, while currently the reverse is true.

“Modern AI has important and far-reaching shortcomings. Among them:

  • “Systems are too easily misused or abused for nefarious purposes, intentionally or inadvertently.
  • “Not only do they hallucinate information, but they do so with seemingly very high confidence and without the ability to attribute or credit sources.
  • “They lack a rich enough understanding of our complex multimodal human world and do not possess enough of what philosophers call ‘folk psychology,’ the capacity to explain and predict the behavior and mental states of other people.
  • “They are arguably unsustainably resource-intensive, and we poorly understand the relationship between the training data going in and the model coming out.
  • “Lastly, despite the unreasonable effectiveness of scaling – for instance, certain capabilities appear to emerge only when models reach a certain size – there are also signs that with that scale comes even greater potential for highly problematic biases and even less-fair systems.

“In 2023 we’ll see work on improving all of these issues. Research on multimodality, grounding and interaction can lead to systems that understand us better because they understand our world and our behavior better. Work on alignment, attribution and uncertainty may lead to safer systems less prone to hallucination and with more accurate reward models. Data-centric AI will hopefully show the way to steeper scaling laws, and more efficient ways to turn data into more robust and fair models. Finally, we should focus much more seriously on AI’s ongoing evaluation crisis. We need better and more holistic measurements – of data and models – to ensure that we can characterize our progress and limitations and understand, in terms of  ecological validity  (for instance, real-world use cases), what we really want out of these systems.”

Alon Halevy: We can take advantage of our personal data to improve our health, vitality and productivity

Halevy, a director with the Reality Labs Research division of Meta Platforms, wrote, “Your personal data timeline lies ahead. The important question of how companies and organizations use our data has received a lot of attention in the technology and policy communities. An equally important question that deserves more focus in 2023 is how we, as individuals, can take advantage of the data we generate to improve our health, vitality and productivity.

“We create a variety of data throughout our days. Photos capture our experiences, phones record our workouts and locations, Internet services log the content we consume and our purchases. We also record our want-to-do lists: desired travel and dining destinations, books and movies we plan to enjoy and social activities we want to pursue.

“Soon smart glasses will record our experiences in even more detail. However, this data is siloed in dozens of applications. Consequently, we often struggle to retrieve important facts from our past and build upon them to create satisfying experiences on a daily basis. But what if all this information were fused in a personal timeline designed to help us stay on track toward our goals, hopes, and dreams? This idea is not new. Vannevar Bush envisioned it in 1945, calling it a memex. In the 1990s, Gordon Bell and his colleagues at Microsoft Research built MyLifeBits, a prototype of this vision. The prospects and pitfalls of such a system have been depicted in film and literature.

“Privacy is obviously a key concern in terms of keeping all our data in a single repository and protecting it against intrusion or government overreach. Privacy means that your data is available only to you, but if you want to share parts of it, you should be able to do it on the fly by uttering a command such as, ‘Share my favorite cafes in Tokyo with Jane.’ No single company has all our data or the trust to store all our data. Therefore, building technology that enables personal timelines should be a community effort that includes protocols for the exchange of data, encrypted storage and secure processing.

“Building personal timelines will also force the AI community to pay attention to two technical challenges that have broader application. The first challenge is answering questions over personal timelines. We’ve made significant progress on question answering over text and multimodal data. However, in many cases, question answering requires that we reason explicitly about sets of answers and aggregates computed over them. This is the bread and butter of database systems. For example, answering ‘what cafes did I visit in Tokyo?’ or ‘how many times did I run a half marathon in under two hours?’ requires that we retrieve sets as intermediate answers, which is not currently done in natural language processing. Borrowing more inspiration from databases, we also need to be able to explain the provenance of our answers and decide when they are complete and correct.
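The set-then-aggregate pattern Halevy describes can be made concrete with a toy timeline. The records, field names, and example questions below are invented for illustration; a real system would answer these over multimodal personal data.

```python
from dataclasses import dataclass

# Toy illustration of database-style question answering over a personal
# timeline: first retrieve a *set* of matching events, then aggregate
# over it. All records and field names here are invented.

@dataclass
class Event:
    kind: str
    place: str = ""
    city: str = ""
    minutes: float = 0.0

timeline = [
    Event("cafe-visit", place="Blue Bottle", city="Tokyo"),
    Event("cafe-visit", place="Streamer", city="Tokyo"),
    Event("cafe-visit", place="Ritual", city="San Francisco"),
    Event("half-marathon", minutes=118.0),
    Event("half-marathon", minutes=124.0),
]

# "What cafes did I visit in Tokyo?" -> the answer is itself a set.
tokyo_cafes = {e.place for e in timeline
               if e.kind == "cafe-visit" and e.city == "Tokyo"}

# "How many times did I run a half marathon in under two hours?"
# -> retrieve the matching set, then count over it.
sub_two_hours = sum(1 for e in timeline
                    if e.kind == "half-marathon" and e.minutes < 120)
```

The point of the sketch is the intermediate set: both answers pass through an explicit collection of retrieved events, which is exactly the step current end-to-end NLP systems skip.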

“The second challenge is to develop techniques that use our timelines responsibly to improve personal well-being. Taking inspiration from the field of positive psychology, we can all flourish by creating positive experiences for ourselves and adopting better habits. An AI agent that has access to our previous experiences and goals can give us timely reminders and suggestions of things to do or avoid. Ultimately, what we choose to do is up to us, but I believe that an AI with a holistic view of our day-to-day activities, better memory and superior planning capabilities would benefit everyone.”

Reza Zadeh: Active learning is set to revolutionize machine learning, allowing AI systems to continuously improve and adapt over time

Zadeh, founder and CEO at Matroid, a computer-vision company, and adjunct professor at Stanford University, wrote, “As we enter 2023, there is a growing hope that the recent explosion of generative AI will bring significant progress in active learning. This technique, which enables machine learning systems to generate their own training examples and request them to be labeled, contrasts with most other forms of machine learning, in which an algorithm is given a fixed set of examples and usually learns from those alone.

“Active learning can enable machine learning systems to:

  • Adapt to changing conditions;
  • Learn from fewer labels;
  • Keep humans in the loop for the most valuable, difficult examples; and
  • Achieve higher performance.

“The idea of active learning has been in the community for decades, but it has never really taken off. Previously, it was very hard for a learning algorithm to generate images or sentences that were simultaneously realistic enough for a human to evaluate and useful to advance a learning algorithm. But with recent advances in generative AI for images and text, active learning is primed for a major breakthrough. Now, when a learning algorithm is unsure of the correct label for some part of its encoding space, it can actively generate data from that section to get input from a human.

“Active learning has the potential to revolutionize the way we approach machine learning, as it allows systems to continuously improve and adapt over time. Rather than relying on a fixed set of labeled data, an active learning system can seek out new information and examples that will help it better understand the problem it is trying to solve. This can lead to more accurate and effective machine learning models, and it could reduce the need for large amounts of labeled data.
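A minimal sketch of such a loop, with toy stand-ins: the “model” is a 1-D threshold classifier, the “generator” synthesizes the single point the model is most uncertain about (its threshold), and a scripted oracle plays the human labeller. Only the loop structure comes from the text; everything else is invented for illustration.

```python
# Toy sketch of the generative active-learning loop described above.

def retrain(labeled):
    """Place the threshold halfway between the largest example labelled 0
    and the smallest example labelled 1."""
    lo = max(x for x, y in labeled if y == 0)
    hi = min(x for x, y in labeled if y == 1)
    return (lo + hi) / 2.0

def active_learning(labeled, oracle, rounds=5):
    threshold = retrain(labeled)
    for _ in range(rounds):
        x = threshold        # generate the example the model is least sure of
        y = oracle(x)        # human-in-the-loop: label the hard example
        labeled.append((x, y))
        threshold = retrain(labeled)
    return threshold

true_boundary = 0.7
oracle = lambda x: int(x >= true_boundary)   # stand-in for a human labeller
learned = active_learning([(0.0, 0), (1.0, 1)], oracle)
```

Starting from just two labels, each round queries exactly the point where the model is uncertain, so the threshold converges on the true boundary with far fewer labels than random sampling would need.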

“I have a great deal of hope and excitement that active learning will build upon the recent advances in generative AI. We are likely to see more machine learning systems that implement active learning techniques; 2023 could be the year it truly takes off.”


About Pew Research Center

Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research. Pew Research Center does not take policy positions. It is a subsidiary of The Pew Charitable Trusts.


News - February 12, 2020

February Focus: Promoting Digital Sobriety

Clean data center

Written by Tristan Lebleu 4 min read

How can we use the power of digital technologies to accelerate the ecological transition while reducing its impact on the environment?

How did you end up reading this article? According to Google Analytics, around half of you got here through a web search. But did you know that this simple digital enquiry emitted approximately 7g of CO2?

On its own, that’s a modest amount of pollution, easily absorbed by a single tree in a day. But Google processes around 3.5 billion search queries every day, and at that scale those 7 grams of carbon dioxide become a much larger problem: an issue known as “digital pollution”, which accounts for 4% of all CO2 emissions, according to The Shift Project. And the issue is expected to grow fast as digital transformation reaches all aspects of our lives.
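Back-of-the-envelope, the two figures above combine as follows (both inputs are the article’s estimates, not measurements of ours):

```python
# Scale the article's per-search estimate up to Google's daily volume.
grams_per_search = 7          # gCO2 per web search (article's figure)
searches_per_day = 3.5e9      # daily Google queries (article's figure)

tonnes_per_day = grams_per_search * searches_per_day / 1e6  # grams -> tonnes
# roughly 24,500 tonnes of CO2 per day from search queries alone
```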

The first step is to recognise the problem and its scale. Everything we do with digital technologies has an environmental impact, and web searches are just the tip of the iceberg. While the Internet may seem very abstract, it actually relies on very concrete elements to function, such as cables, servers, data centers, routers to name a few… All this equipment requires electricity to build and to function.

According to The Shift Project, digital technology accounted for 3.3% of world energy consumption in 2020. As the world still gets most of its electricity from fossil fuels, all the little things we do every day on our phones and computers emit carbon dioxide: watching on-demand videos, sending emails, uploading photos to the cloud, using apps, scrolling through social networks.

Just a few figures:

  • 1 email emits 10g of CO2eq;
  • Every day, an average of 294 billion emails are sent;
  • The average weight of web pages increased 115-fold between 1995 and 2015;
  • Watching a 30-minute show leads to emissions of 1.6kg of CO2eq;
  • Online video streaming produces some 300 million tons of CO2 per year (equivalent to the annual emissions of a country like Spain).

Digital pollution comes from the use of IT infrastructure as much as from the manufacturing of digital devices. Energy and resources are also necessary to build hardware: there are currently approximately 5.5 billion smartphones in service, as well as computers, tablets, and IoT devices connected to the Internet. It is estimated that there are at least 40 metals present in a smartphone, and building a laptop requires 240kg of fossil fuel, 22kg of chemicals, and 1.5 tonnes of water. So the habit of changing our phones, tablets and computers as soon as a new version appears is very harmful to the environment. According to Frédéric Bordage in his book “Digital Sobriety”, 80% of the energy costs of a smartphone occur at the time of its manufacture, rather than during its usage. Repairing and refurbishing old devices can therefore significantly lower their environmental costs.

Once used and thrown away, these devices become electronic waste (known as e-waste), which pollutes the environment and can be dangerous to people’s health. Copper, lead and tin, gold, silicon for semiconductors, tantalum or lithium... e-waste contains 5 of the world's 6 most dangerous pollutants listed by Green Cross International.

As the world is increasingly dependent on digital tools, we need to seriously rethink our use of these technologies and promote “digital sobriety”, defined by The Shift Project as follows: “buy the least powerful equipment possible, change them as rarely as possible, and reduce unnecessary energy-intensive uses”.

Individuals and companies have a key role to play in promoting digital sobriety through responsible behaviours. Deleting old emails, cleaning your inbox and unsubscribing from unwanted newsletters, limiting the recipients copied on your emails, avoiding unnecessary search-engine queries, sending lighter emails, limiting cloud usage as much as possible, or choosing broadcast television over streaming: these are just some of the things you can do to limit your impact on the environment.

Technology can also help us reduce the impact of network infrastructure. Here are some of the labelled Solar Impulse Efficient Solutions tackling the issue of digital pollution:


LIFI WIRELESS MOBILE COMMUNICATIONS NETWORK

A high speed bidirectional networked and mobile communication of data using light

  • Buildings & Shelters


Data centre energy management system for resource usage in the cloud


A large-scale, profitable electronic waste refurbishing and recycling

  • Waste & Pollution


Qarnot (QH-1)

Making buildings smart with free computing waste heat



Digital pollution: What it is and how you're contributing to it

By Ashley Williams , AccuWeather staff writer

You might not realize it, but you're likely contributing to digital pollution every day. Try these five tips to reduce your digital impact on the environment.

From littering the land with waste and discarding harmful plastic into the ocean to burning fuels that send toxic vapors and particles into our air, most people are aware that their daily actions can potentially impact the planet in the form of pollution.

However, many might not realize that they’re regularly contributing to another type of environmental impact: digital pollution.

“Digitalization of our culture produces negative external effects on the environment,” said Robert Godes, founder, president and chief technology officer for Brillouin Energy Corporation.

“Manufacturing, use and disposal of our gadgets create increased demand for energy, produce toxic waste and contribute to air pollution,” Godes told AccuWeather.


Anytime we use our computers and smartphones, for example, we leave a considerable ecological footprint behind, from the moment we buy our device up until we replace it with a newer model, according to Digital for the Planet, a Global Earth Project that works to develop digital sustainability.

Last year, the Guardian reported that billions of internet-connected devices could produce 3.5 percent of global emissions within a decade, and that could rise to 14 percent by 2040.

“Every time you perform simple daily actions like browsing a website, sending and receiving email, using an app on your phone, saving a file to your cloud drive or searching Google, data gets transferred between your device and the server that the website, app or software is hosted on,” said Ben Clifford, managing director of London-based Erjjio Studios Limited, a green hosting, website design and development company.

Just powering the internet consumes a massive amount of electricity through its generation in power stations, creating at least 2 percent of global carbon emissions, Erjjio Studios states on its website.

“The amount of data getting transferred and stored around the world through the internet is growing at an exponential rate,” Clifford told AccuWeather. “The global information and communications technology sector’s carbon emissions are already roughly equal to those of the global aviation industry, but it’s expected to continue quickly rising even further.”

Experts with Digital for the Planet have said that digital pollution can be attributed to three sources: manufacturing, practices and e-waste/recycling.

In 2015, the 710 million electronic devices manufactured were made of rare metals that deplete nonrenewable resources. These devices also generated 1.5 million tons of waste – equal to 166 times the size of the Eiffel Tower, Digital for the Planet reported.

Additionally, the entire process of making a smartphone, from material sourcing to assembly, accounts for more than 80 percent of its environmental impact. “The ores and precious metals contained in electronic devices can be toxic for manufacturers, if in contact with waste, and for the environment,” according to a Digital for the Planet blog. “Some components such as chromium are now prohibited because of their toxicity.”

The sustainability blog also notes other statistics regarding digital pollution, including that digitization represents 16 percent of electricity consumption, and that electricity consumption due to digitization rises by 8.5 percent annually.

"Gradually, some data centers are starting to commit to using electricity sourced from 100 percent renewable energy, but there’s a long way to go, and it isn’t an issue that seems to be widely known about or understood by the general public," said Clifford, whose own company is among those working to educate the public on this topic as well as offer solutions.

How you can reduce your digital footprint

Digital for the Planet’s recommended steps for reducing one’s digital impact on our environment are summarized in its infographic, “Reducing your digital footprint.”



What is digital pollution? Easy actions to reduce our digital carbon footprint


What is digital pollution?

Digital pollution is responsible for 3.7% of CO2 emissions globally – 50% more than air transport (2.4%)

However, digital pollution is an issue that is rarely identified, let alone addressed. People often think that online streaming, sending emails, and conducting online searches have no environmental effect. Because digital pollution is invisible, it is often underestimated. But what exactly is digital pollution?

Digital pollution encompasses three elements: manufacturing, practices, and e-waste. Manufacturing and e-waste are the elements most talked about, and for good reason: as devices get smaller and the number of internal components gets larger, the manufacturing and environmental waste of devices have never been greater. Indeed, the mobile phones used in the 1960s were made of just 10 components, while each of today’s smartphones is composed of approximately 54. The way these components are extracted is problematic, and the fact that the number of components required keeps growing compounds the problem.

The huge internet infrastructure is also often not considered. This comprises data centres, servers, undersea optic fibre cables, relay antennas, wi-fi boxes and much more, all of which process every little action you perform online. Every time an email is sent, it can literally travel around the world. As a result, sending a standard email can produce 4g of CO2e, while a longer email with attachments can produce 50g of CO2e.

That’s not all: each time a search query is performed, about 0.9g of CO2 is generated and released into the atmosphere, and each web page that remains open must continuously connect to its server. And while watching a video online seems harmless, it is one of the most CO2e-intensive online activities: watching a video for an hour is equivalent to releasing 130g of CO2 into the atmosphere. Streaming is responsible for 60% of data traffic on the web, and online video alone accounts for about 1% of global GHG emissions.
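To make the per-action figures above tangible, here is a rough daily tally for a hypothetical user; the per-action numbers are the article’s estimates, and the usage profile is invented:

```python
# Rough personal tally built from the article's per-action estimates.
email_g      = 4     # gCO2e per standard email (article's figure)
big_email_g  = 50    # gCO2e per long email with attachments (article's figure)
video_hour_g = 130   # gCO2e per hour of online video (article's figure)

# Hypothetical day: 20 plain emails, 2 with attachments, 1 hour of video.
daily_g   = 20 * email_g + 2 * big_email_g + 1 * video_hour_g
yearly_kg = daily_g * 365 / 1000   # grams per day -> kilograms per year
```

Even this light usage profile adds up to a three-figure number of kilograms of CO2e per year, which is why the small habits listed below matter at scale.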

Some of the actions we can all take to reduce our digital carbon footprint include:

  • Unsubscribe from newsletters you don’t read
  • Avoid sending images where possible; use online tools such as WeTransfer, or provide a link to the folder where the picture is located
  • Save frequently visited websites to your favourites
  • Use your browser history to access a site directly
  • Type the URL directly into the address bar when you know it
  • Watch videos in lower resolution
  • Think twice before scrolling through videos on social media
  • Turn off cameras during meetings when appropriate
  • Delete emails and files where appropriate, and avoid duplicate files
  • Remove your email signature when sending emails internally (check with your internal comms team first!)

There are other small steps each of us can take to reduce the digital pollution we are responsible for, such as turning off your computer at the end of each day and unplugging computers and phones once they are fully charged. In terms of company actions, one way of embedding change is to establish new norms, raise awareness of the issues and communicate with your teams.

Establish new rules for everyone, so no one will be surprised if you turn off your camera during a meeting.

You can also organise a digital clean-up day, like we are planning at BITCI, and involve everyone in the action: make space and declutter your digital storage, on your computer, your hard drive, the cloud, and your mind!

At BITCI we have formed a dedicated project team actively working on the issue of digital pollution. The focus of the group is to raise awareness within our organisation and identify changes we can make to tackle digital pollution within BITCI and our sister organisation, The Community Foundation.


What you need to know about digital pollution according to four industry experts - Friends of Friends / Freunde von Freunden (FvF)


Over the past decade, the work of writers, editors, and photographers has come to rely on digital technologies to an unprecedented extent. From emails and cloud services to digital websites featuring dynamic layouts and immersive experiences, the internet has re-shaped creative work in a lot of exciting ways.

At the same time, the beauty and seemingly endless possibilities brought on by the web, paired with growing concerns over the social and environmental impact of material production, consumption, and accumulation (think printed matter and fast fashion), have perhaps overshadowed the environmental cost of digital and virtual services. Recent estimates show that digital technologies account for between 5 and 9% of global electricity consumption, a clear indication that the internet is far from being the insubstantial and ethereal space it is often imagined to be.

But how exactly does this relate back to our creative routines and processes? And how can digital publishing, along with other branches of the creative industry, be rendered more sustainable? In the attempt to address these questions, broaden our views, and challenge our own creative practices (an interest we share with our design team at MoreSleep), we discovered a number of eye-opening projects spanning web design, digital design, publishing, and digital communication. We spoke to four of them about their insights and learnings.

Journalist Kris De Decker’s low-tech approach

According to Kris De Decker, the pathway to a more sustainable internet is “low-tech”. A journalist, researcher, and “low-tech advocate”, De Decker has been writing about the web’s energy use and how mechanical as well as analog devices might be of help since 2002. He is the founder of Low-Tech Magazine and No Tech Magazine, two pioneering publications exploring unconventional, somewhat nostalgic solutions to issues including the energy use of websites and of modern offices. Wanting to demonstrate that “the internet can be low-tech too,” in 2018 both publications were transferred to servers running on solar power with the help of designers Marie Otsuka and Lauren Traugott-Campbell. The current version is as beautiful as ever, but remember: it goes offline when it rains!


How did you first become aware of digital pollution?

As a journalist, I have been writing about the energy use of the internet for many years. From the beginning, the internet was presented as something that seems to defy the material world (everything is in the “clouds”), but I never bought that idea. Obviously, there is a massive infrastructure supporting our websites, and that infrastructure needs resources—energy and materials.

Based on your experience, what are three things that everyone working in media should know about the environmental impact of digital content?

(1) Our websites are getting “heavier” all the time, (2) we spend more and more time online, and (3) there is a lot of hidden data traffic (surveillance capitalism) that also pushes up energy use. These are the trends that are pushing up the energy use of the internet. Without addressing these, the problem won’t be solved.

Wonderland’s Sustainable Digital Design platform

Wonderland's Sustainable Digital Design (SDD) platform is less critical of innovation and focuses more specifically on digital design, meaning the design of digital assets such as graphics and websites, in the internet era. As Hala Alsadi, a project manager at the Amsterdam-based design studio, explains, the project is a reflection of Wonderland's own journey toward making its practices more sustainable. Bringing together perspectives from different creative fields as well as the studio's own research, the platform provides designers and curious passersby with entry-level concepts, industry reports, and handy guidelines for familiarizing themselves with the topic.


How would you describe your work in connection with Sustainable Digital Design?

Our drive as a design studio is to ensure that our learnings from SDD are applied to our work and, through this, to educate our clients on how to make their products and digital footprints more sustainable. Our goal is to achieve this in a more environmentally conscious manner while still keeping the right balance of creativity and beauty in everything we produce.

Could you tell us about three things that everyone working in media should know about the environmental impact of digital content?

(1) Every link clicked, video streamed, or image downloaded leaves a mark on our planet. (2) There are different elements that contribute to how much energy a website can consume. These elements include choices of color, typography, animation, code size, and hosting servers. (3) When thinking about digital design more critically, you become aware that not every transition from physical to digital is ultimately sustainable. From here, you can embark on your own journey towards taking conscious steps to make your creative work have a positive impact.


Anyways Creative’s research into email clutter

Exploring yet another facet of our digital habits and their environmental cost is the London-based agency Anyways Creative. Their recent, self-initiated project Thanks in Advance unpacks the cost of people's inboxes (did you know that, over a year, a single email inbox consumes enough energy to run a hot shower for about four minutes?) and encourages people to delete old emails. As Jeanne Harignordoquy, who worked closely on the project, puts it, "the carbon cost of an email inbox is small, [but] it's a nice entry point to explain how joint effort can make a real change." In line with its ethos, the project's findings and learnings were collected on a low-energy website that weighs as little as 36 KB and is 97% greener than other sites, and were translated into powerful visuals with the help of illustrator Jose Flores. The project, in other words, was also an opportunity to apply the findings on people's collective energy use to the design of a website.


What inspired you to explore the environmental impact of emails and inboxes?

As a company that communicates a lot digitally, we felt that email was something quite tangible and an accessible route to consider in order to raise awareness about the wider topic of digital consumption. This feels especially relevant as the world continues to become ever more reliant on technology.

Could you tell us about three things that everyone working in media should know about the environmental impact of digital communication?

(1) Everything we do online is physically living somewhere on our planet and is energy hungry. And although technology is evolving and is becoming more and more efficient, the demand is multiplying, the construction of new data centers is booming, and waste is increasing. (2) BUT people should still be informed, inspired, and encouraged to search for better. The message here is not to stop making but to make better. In fact, the energy efficiency of what we do online can be improved with thoughtful design and habits. (3) Individual actions add up, and there is no project too small to start making a change.


Wholegrain Digital and Mightybytes' Sustainable Web Design

Another great resource for learning about digital pollution and how to avoid it is Sustainable Web Design. Co-developed by the Chicago-based agency Mightybytes and London-based Wholegrain Digital, the site is particularly suited to web developers and designers looking to future-proof their practices.

Significantly, SWD's approach relies on the Sustainable Web Manifesto, a set of principles (clean, efficient, open, honest, regenerative, and resilient) formulated by Wholegrain's managing director, Tom Greenwood, in collaboration with a team of industry experts, and published in 2019 in an effort to encourage and guide the sector towards a more sustainable way of doing things.

How did you become aware of digital pollution and the need for more sustainable forms of web design?

When initially reviewing the B Impact Assessment in preparation for becoming a B Corp, it became apparent that we could not provide any data about the environmental impact of the 'products' we produce. This led us to ask whether digital services had any impact at all, and it triggered an internal research project through which we learned of the large energy consumption and carbon emissions of digital technology.

From your perspective, what should people working in media know about the environmental impact of websites?

(1) It’s bigger than you think. (2) Individual improvements—when multiplied by millions of pageviews—can make a big difference. (3) We already have most of the tools necessary to address the issue; we just need the collective knowledge and will to change.

For those wondering where to start, we collected our experts' top tips below.

  • 1 UPLOAD THE RIGHT IMAGE SIZE TO YOUR PLATFORM. “Large images are important and great at keeping your audience engaged and inspired, but be smart about it. It will make the website faster to load and save a few hungry megabytes.” — Jeanne Harignordoquy, Anyways Creative
  • 2 AVOID ONLINE MAGAZINE VIEWERS. “PDF viewers are inefficient, slow, and have poor usability and accessibility. Delivering content as true web content rather than trying to put print documents online is a better approach in every sense.” — Tom Greenwood, Sustainable Web Design
  • 3 IT’S BETTER TO HAVE VIDEOS HOSTED ON THIRD-PARTY SITES. “Some of them have quite efficient ways to compress files without losing resolution. And avoid autoplay when not needed!” — Jeanne Harignordoquy, Anyways Creative
  • 4 ASK YOUR DEVELOPMENT TEAM TO EVALUATE YOUR SITE. “Normally, websites are a build-up of code from ad hoc interactions, where the code can be optimized to reduce the number of requests. We made a three-part guide series on how to make websites more sustainable; you can find it in the learnings section of our website.” — Hala Alsadi, Sustainable Digital Design
  • 5 CONSIDER BUILDING A STATIC WEBSITE. “A static website saves a lot of energy, but it can also save you a lot of money on hosting.” — Kris De Decker, Low-tech Magazine
  • 6 USE A GREEN HOSTING SERVICE. “A green host uses only renewable energy to power its servers and specializes in energy-efficient architecture. For Thanks in Advance, we used Kristal, but there are lots of other great options available.” — Jeanne Harignordoquy, Anyways Creative
  • 7 DELETE OLD FOLDERS. “On an organizational level, why not hold a digital spring clean from time to time? Encourage your team to delete old folders and calendar-invite emails as a quick win.” — Jeanne Harignordoquy, Anyways Creative
  • 8 MEASURE YOUR MAGAZINE’S IMPACT. “If you’re a company owner, use a free tool like the B Impact Assessment or similar to better understand your company’s social, environmental, and economic impact on stakeholders.” — Tim Frick, Sustainable Web Design
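Several of these tips come down to reducing page weight. As a rough illustration of why that matters, here is a minimal sketch of a per-pageview carbon estimate, loosely modeled on the publicly documented Sustainable Web Design methodology; the constants and the function below are illustrative assumptions for this article, not official figures.

```python
# Rough per-pageview carbon estimate, loosely based on the Sustainable
# Web Design model (sustainablewebdesign.org). The constants below are
# illustrative assumptions, not authoritative figures.

KWH_PER_GB = 0.81        # assumed energy intensity of data transfer
CO2_G_PER_KWH = 442      # assumed global average grid carbon intensity

def grams_co2_per_view(page_weight_kb: float) -> float:
    """Estimate grams of CO2 emitted by a single page view."""
    gb = page_weight_kb / (1024 * 1024)  # KB -> GB
    return gb * KWH_PER_GB * CO2_G_PER_KWH

# A typical 2 MB page versus the 36 KB Thanks in Advance site:
heavy = grams_co2_per_view(2048)
light = grams_co2_per_view(36)
print(f"{heavy:.3f} g vs {light:.4f} g per view")
```

Whatever the exact constants, the ratio is the point: a 2 MB page costs roughly 57 times the carbon of a 36 KB one, and that multiplies across every pageview.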

This Deep Dive explores the environmental impact of our digital habits and what can be done to reduce it in the context of media and publishing. If you’re interested in reading more about digital pollution in the context of creative practice, check out our Link List on building a more sustainable web. Text: Amelie Varzi. Images: Diego Marmolejo, Jose Flores.


Guest Essay

A.I.-Generated Garbage Is Polluting Our Culture

[Illustration: a row of blue figures on a pink floor, beginning with a robot, each subsequent figure slightly more mutated until the last is strangely disfigured.]

By Erik Hoel

Mr. Hoel is a neuroscientist and novelist and the author of The Intrinsic Perspective newsletter.

Increasingly, mounds of synthetic A.I.-generated outputs drift across our feeds and our searches. The stakes go far beyond what’s on our screens. The entire culture is becoming affected by A.I.’s runoff, an insidious creep into our most important institutions.

Consider science. Right after the blockbuster release of GPT-4, the latest artificial intelligence model from OpenAI and one of the most advanced in existence, the language of scientific research began to mutate. Especially within the field of A.I. itself.

[Chart: Adjectives associated with A.I.-generated text, such as “commendable,” have become more frequent in peer reviews of scientific papers about A.I. (frequency of adjectives per one million words).]

A study published this month examined scientists’ peer reviews — researchers’ official pronouncements on others’ work that form the bedrock of scientific progress — across a number of high-profile and prestigious scientific conferences studying A.I. At one such conference, those peer reviews used the word “meticulous” more than 34 times as often as reviews did the previous year. Use of “commendable” was around 10 times as frequent, and “intricate,” 11 times. Other major conferences showed similar patterns.

Such phrasings are, of course, some of the favorite buzzwords of modern large language models like ChatGPT. In other words, significant numbers of researchers at A.I. conferences were caught handing their peer reviews of others’ work over to A.I., or, at minimum, writing them with lots of A.I. assistance. And the closer to the deadline the submitted reviews were received, the more A.I. usage was found in them.

If this makes you uncomfortable — especially given A.I.’s current unreliability — or if you think that maybe it shouldn’t be A.I.s reviewing science but the scientists themselves, those feelings highlight the paradox at the core of this technology: It’s unclear what the ethical line is between scam and regular usage. Some A.I.-generated scams are easy to identify, like the medical journal paper featuring a cartoon rat sporting enormous genitalia. Many others are more insidious, like the mislabeled and hallucinated regulatory pathway described in that same paper — a paper that was peer reviewed as well (perhaps, one might speculate, by another A.I.?).

What about when A.I. is used in one of its intended ways — to assist with writing? Recently, there was an uproar when it became obvious that simple searches of scientific databases returned phrases like “As an A.I. language model” in places where authors relying on A.I. had forgotten to cover their tracks. If the same authors had simply deleted those accidental watermarks, would their use of A.I. to write their papers have been fine?

What’s going on in science is a microcosm of a much bigger problem. Post on social media? Any viral post on X now almost certainly includes A.I.-generated replies, from summaries of the original post to reactions written in ChatGPT’s bland Wikipedia-voice, all to farm for follows. Instagram is filling up with A.I.-generated models, Spotify with A.I.-generated songs. Publish a book? Soon after, on Amazon there will often appear A.I.-generated “workbooks” for sale that supposedly accompany your book (which are incorrect in their content; I know because this happened to me). Top Google search results are now often A.I.-generated images or articles. Major media outlets like Sports Illustrated have been creating A.I.-generated articles attributed to equally fake author profiles. Marketers who sell search engine optimization methods openly brag about using A.I. to create thousands of spammed articles to steal traffic from competitors.

Then there is the growing use of generative A.I. to scale the creation of cheap synthetic videos for children on YouTube. Some example outputs are Lovecraftian horrors, like music videos about parrots in which the birds have eyes within eyes, beaks within beaks, morphing unfathomably while singing in an artificial voice, “The parrot in the tree says hello, hello!” The narratives make no sense, characters appear and disappear randomly, and basic facts like the names of shapes are wrong. After I identified a number of such suspicious channels on my newsletter, The Intrinsic Perspective, Wired found evidence of generative A.I. use in the production pipelines of some accounts with hundreds of thousands or even millions of subscribers.

As a neuroscientist, this worries me. Isn’t it possible that human culture contains within it cognitive micronutrients — things like cohesive sentences, narrations and character continuity — that developing brains need? Einstein supposedly said: “If you want your children to be intelligent, read them fairy tales. If you want them to be very intelligent, read them more fairy tales.” But what happens when a toddler is consuming mostly A.I.-generated dream-slop? We find ourselves in the midst of a vast developmental experiment.

There’s so much synthetic garbage on the internet now that A.I. companies and researchers are themselves worried, not about the health of the culture, but about what’s going to happen with their models. As A.I. capabilities ramped up in 2022, I wrote on the risk of culture’s becoming so inundated with A.I. creations that when future A.I.s are trained, the previous A.I. output will leak into the training set, leading to a future of copies of copies of copies, as content becomes ever more stereotyped and predictable. In 2023 researchers introduced a technical term for how this risk affects A.I. training: model collapse. In a way, we and these companies are in the same boat, paddling through the same sludge streaming into our cultural ocean.

With that unpleasant analogy in mind, it’s worth looking to what is arguably the clearest historical analogy for our current situation: the environmental movement and climate change. For just as companies and individuals were driven to pollute by the inexorable economics of it, so, too, is A.I.’s cultural pollution driven by a rational decision to fill the internet’s voracious appetite for content as cheaply as possible. While environmental problems are nowhere near solved, there has been undeniable progress that has kept our cities mostly free of smog and our lakes mostly free of sewage. How?

Before any specific policy solution was the acknowledgment that environmental pollution was a problem in need of outside legislation. Influential to this view was a perspective developed in 1968 by Garrett Hardin, a biologist and ecologist. Dr. Hardin emphasized that the problem of pollution was driven by people acting in their own interest, and that therefore “we are locked into a system of ‘fouling our own nest,’ so long as we behave only as independent, rational, free-enterprisers.” He summed up the problem as a “tragedy of the commons.” This framing was instrumental for the environmental movement, which would come to rely on government regulation to do what companies alone could or would not.

Once again we find ourselves enacting a tragedy of the commons: short-term economic self-interest encourages using cheap A.I. content to maximize clicks and views, which in turn pollutes our culture and even weakens our grasp on reality. And so far, major A.I. companies are refusing to pursue advanced ways to identify A.I.’s handiwork — which they could do by adding subtle statistical patterns hidden in word use or in the pixels of images.
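To make "subtle statistical patterns hidden in word use" concrete, here is a toy, purely hypothetical sketch of one such family of schemes, often called green-list watermarking. Everything here (the names, the word-level granularity) is our simplification for illustration: real proposals bias a language model's token probabilities during generation rather than inspecting finished text.

```python
# Toy sketch of a statistical text watermark: a generator that prefers
# words from a pseudo-random "green list" seeded by the previous word,
# and a detector that measures how often the text lands on that list.
# Illustrative only; real schemes operate on model logits, not words.

import hashlib

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign roughly half of all words to a
    'green list' that depends on the preceding word."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of word transitions that land on the green list.
    Unwatermarked text should hover near 0.5; text generated with a
    green-word preference will sit measurably above it."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)
```

A detector would then apply a simple statistical test: over a long enough passage, a green fraction far above 0.5 is overwhelming evidence of machine generation, which is exactly the kind of check a motivated reviewer, or regulator, could run.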

A common justification for inaction is that human editors can always fiddle around with whatever patterns are used if they know enough. Yet many of the issues we’re experiencing are not caused by motivated and technically skilled malicious actors; they’re caused mostly by regular users’ not adhering to a line of ethical use so fine as to be nigh nonexistent. Most would be uninterested in advanced countermeasures to statistical patterns enforced into outputs that should, ideally, mark them as A.I.-generated.

That’s why the independent researchers were able to detect A.I. outputs in the peer review system with surprisingly high accuracy: They actually tried. Similarly, right now teachers across the nation have created home-brewed output-side detection methods , like adding hidden requests for patterns of word use to essay prompts that appear only when copied and pasted.

In particular, A.I. companies appear opposed to any patterns baked into their output that can improve A.I.-detection efforts to reasonable levels, perhaps because they fear that enforcing such patterns might interfere with the model’s performance by constraining its outputs too much — although there is no current evidence this is a risk. Despite public pledges to develop more advanced watermarking, it’s increasingly clear that the companies are dragging their feet because it goes against the A.I. industry’s bottom line to have detectable products.

To deal with this corporate refusal to act we need the equivalent of a Clean Air Act: a Clean Internet Act. Perhaps the simplest solution would be to legislatively force advanced watermarking intrinsic to generated outputs, like patterns not easily removable. Just as the 20th century required extensive interventions to protect the shared environment, the 21st century is going to require extensive interventions to protect a different, but equally critical, common resource, one we haven’t noticed up until now since it was never under threat: our shared human culture.



Press Release

Fraunhofer at the Hannover Messe 2024. Circular economy: a digital EU product passport for batteries

Research News / April 02, 2024

Starting in February 2027, all new traction batteries, two-wheeled vehicle batteries and industrial batteries with a capacity of over 2 kWh that are marketed in the EU will require a digital battery passport. The purpose is to ensure transparency and sustainability in the battery value chain, reduce environmental impacts and encourage the secondary use of batteries. The Battery Pass Consortium, with the participation of the Fraunhofer Institute for Production Systems and Design Technology IPK, is developing frameworks and recommendations in terms of content and technology for implementing the passport. Researchers from Fraunhofer IPK are responsible for the design and implementation of the technical standards. From April 22 to 26, 2024, they will be at the Hannover Messe (Hall 2, Booth B24) presenting a draft technical reference standard designed to enable battery passports — and all types of digital product passports — to be implemented in a way that is scalable and interoperable.

Starting in February 2027, all traction batteries over 2 kWh newly placed on the EU market, such as those installed in electric vehicles, will require a digital battery passport.

Batteries are key to the transition to climate-friendly mobility and the widespread use of renewable energies. As crucial components of electric vehicles, they need to be produced and used sustainably and reincorporated into the material cycle easily. It is important to prolong the life cycle of the entire battery system as much as possible and to recycle the raw resources, materials and components after they are first used. Transparent supply chains also need to be formed, from the raw materials all the way to the assembly of batteries. In the future, manufacturers will need to document all emissions resulting from the manufacture, use and disposal of their products. To support these ambitions, the new EU Batteries Act will require a digital passport for all traction batteries, two-wheeled vehicle batteries and industrial batteries with a capacity of over 2 kWh from February 2027. This will also affect LMT (light means of transport) batteries built into electric bicycles and electric scooters.

Transparency around the electric car battery

The purpose of the battery passport is to support seamless documentation of a battery’s life, from raw material extraction and production to use, reuse and recycling. It holds a record of a battery’s origin and logs the relevant uses. To this end, it documents data that comprehensively describes the sustainability and responsibility of the supply chain, such as data on the carbon footprint, the working conditions for raw material extraction, battery materials and components, hazardous substances contained, resource efficiency, performance and service life, battery status, and other data including information on recyclability and repair as well as how to implement these steps. Disassembly instructions contained in the battery passport help to facilitate the secondary use of as many of the battery’s components as possible.

“The battery passport provides a digital record of all of the socially, ecologically and economically relevant information on a battery’s life cycle. By providing verified and verifiable information, it can create transparency, support second-life uses or optimize processing by recycling providers. This supports the development of sustainable business models along the battery value chain while complying with relevant sustainability and ethical criteria. The aim is to reduce child labor and pollution in countries where the raw materials are produced and keep track of the export of old batteries, for example,” says Prof. Thomas Knothe, a scientist at Fraunhofer IPK, which is a member of the project consortium (see box) and which draws up the technical standards that are relevant to industry and transforms them into European standards. To enable battery manufacturers and importers to present the battery passport in 2027, all of the necessary groundwork, technical specifications and test systems must be completed by the end of 2025.

Decentralized data

The battery passport takes the form of a software system in which all data is stored in distributed data spaces and responsibility for the data is decentralized. Certain functions, such as the central registration of passports and a so-called Data Portal providing an aggregated view of the majority of battery passports, will be the responsibility of the European Commission. Some data elements will be made available only to national authorities' data systems, for purposes such as market conformity checks. The manufacturer will be responsible for managing the rest of the data, and any changes to the battery data will need to be updated in the passport. Each manufacturer must appoint a third-party provider to ensure that there is a backup of the data in the event of insolvency.

The necessary interfaces, access rights and functions will need to be implemented in the software system. To ensure that this happens, the Battery Pass Consortium is addressing numerous questions:

  • What battery data will be required?
  • Who should store the data, and how, when and where?
  • Who will be able to access the data, and how?
  • How will access to the data be kept secure?
  • How can solutions incorporate existing systems as well as new ones?

The consortium is proposing existing technical standards as well as standards still to be developed, and is illustrating the integrative application of those standards using a software demonstrator. “One of the challenges of putting the specifications into practice is interoperability,” explains Knothe. For example, the software system needs to support as many different data carriers as possible, which supply information about the product in a way similar to a barcode or QR code. The same applies to unique identifiers, which are like ID numbers assigned uniquely to a product. The system also needs to be able to represent the rules of different countries and be compatible with a range of data management technologies and platforms. The data requirements of other sectors must be considered as well, since the battery passport will serve as a basis for other passports. “A system like this is too complex to be driven by a single company or even a consortium. That’s why we’ve included a large community of partners and supporters in the project activities from an early stage. That also gives the system the momentum it needs to gain wider acceptance in practice,” says Knothe.
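To make the data model tangible, here is a purely hypothetical sketch of the kind of record a battery passport might hold, drawn from the data categories the press release names (carbon footprint, materials, hazardous substances, state of health, disassembly information). The field names and structure are our assumptions for illustration, not the consortium's standard.

```python
# Hypothetical shape of a battery passport record, based on the data
# categories named in the article. Field names are our assumptions,
# not the Battery Pass Consortium's actual technical specification.

from dataclasses import dataclass, field

@dataclass
class BatteryPassport:
    unique_id: str                 # product-level identifier, like an ID number
    capacity_kwh: float            # passports are mandatory above 2 kWh
    carbon_footprint_kg: float     # documented manufacturing emissions
    materials: dict[str, float] = field(default_factory=dict)   # e.g. {"lithium": 8.5}
    hazardous_substances: list[str] = field(default_factory=list)
    state_of_health_pct: float = 100.0   # updated over the battery's life
    disassembly_notes: str = ""          # supports second-life use and recycling

    def requires_passport(self) -> bool:
        """Threshold stated in the article: capacity above 2 kWh."""
        return self.capacity_kwh > 2.0
```

In the real system such a record would live in distributed data spaces, be reachable via a data carrier like a QR code on the battery, and expose different fields to manufacturers, authorities and recyclers according to their access rights.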

Battery passport paving the way for other product passports

The battery passport is the first digital product passport to be introduced at European level. It will serve as a pilot — other passports for products such as textiles, electronics and building materials are currently in the planning stage to ensure the exchange of data in supply and value chains and compliance with environmental and social standards. “This is rightly considered to be an important pilot for digital product passports more generally, which will be extended to other sectors in the future and become increasingly important,” says the researcher.

From April 22 to 26, 2024, the Fraunhofer IPK researchers, together with the project partners, will be at the joint Fraunhofer booth at the Hannover Messe (Hall 2, Booth B24), depicting the ecosystem of a battery with the help of a demonstrator and presenting a value stream scenario for the manufacture and use of batteries for electric cars. Another demonstrator will be showing how the necessary data is aggregated in the battery passport.

  • Research News April 2024 - Circular economy: A digital EU product passport for batteries [ PDF  0.21 MB ]
  • Fraunhofer Institute for Production Systems and Design Technology IPK  (ipk.fraunhofer.de)
  • Digital Press Kit for the Hannover Messe 2024

Crypto miner, Pennsylvania hit with lawsuit over pollution from bitcoin mine


  • Lawsuit claims coal waste, old tires are burned to run mine
  • Community group is seeking damages, and injunctive relief


Reporting by Clark Mindock




Sewage being discharged into a brook after heavy rainfall

Water companies in England face outrage over record sewage discharges

Call for environmental emergency to be declared after data reveals 105% rise in raw sewage discharges over past 12 months

  • How polluted is your local river and which regions are worst hit?

Water companies in England have faced a barrage of criticism as data revealed raw sewage was discharged for more than 3.6m hours into rivers and seas last year in a 105% increase on the previous 12 months.

The scale of the discharges of untreated waste made 2023 the worst year for storm water pollution. Early data seen by the Guardian put the scale of discharges at more than 4m hours, but officials said the figures were an early estimate.

The Liberal Democrat leader, Ed Davey, said the scandal of raw sewage pouring into waterways should be declared a national environmental emergency. He called on the government to convene an urgent meeting of the Scientific Advisory Group for Emergencies (Sage) to look at the impact of sewage pollution on people’s health.

Total discharges from the 14,000 storm overflows owned by English water companies that release untreated sewage into rivers and coastal waters increased by 54% to 464,056, according to data submitted to the Environment Agency by the industry.

Senior industry figures highlighted the heavy rainfall over the autumn and winter that put huge pressure on the sewerage system. But storm overflows are supposed to cope with heavy rainfall and only be used in exceptional circumstances, like major storm events. Climate change has long been predicted to bring higher rainfall levels.

One senior executive told the Guardian: “We have wasted 15 years, we have not been investing enough.”

The data on discharges from storm overflows reveals the duration and the number of discharges from individual overflows across the network in England. The 3.6m-plus hours of raw sewage and rainwater discharged over the year includes huge spikes in some outflows. Forty per cent of South West Water outflows discharged raw sewage more than 40 times, while nearly a third of United Utilities outflows and 23% owned by Yorkshire Water discharged 60 times or more.

Any outflow that has more than 60 discharges a year should prompt an Environment Agency investigation.

As well as total discharges soaring from just over 301,000 in 2022, the average number of discharges per storm overflow rose to 33, up more than 43.7%. Some companies had much higher average spills per outflow, with South West Water averaging 43 per outflow and United Utilities 45.

Some of the highest rises in the hours of raw sewage pouring into rivers were by Anglian Water, with a 205% increase to 273,163 hours, Wessex Water a 186% increase to 372,341 hours, Thames Water a 163% increase to 196,414 hours, and Northumbrian Water a 160% increase to 280,029 hours.

Severn Trent discharged raw sewage into waterways for 440,446 hours, South West Water for 530,737 hours, an 82.5% increase, Southern Water for 317,285, a 116% rise, and United Utilities for 656,014 hours, a 54% increase.

Thames Water was responsible for the biggest increase in the number of discharges, with its overflows dumping on 16,990 occasions, a 112% increase on 2022.

A Guardian analysis of the data revealed that the River Irwell and its tributary, the Croal, which flow through Salford and Manchester, had the highest levels of sewage spills. Nearby storm overflows spilled just under 12,000 times in 2023, or 95 spills per mile of water, the highest rate of all rivers in England.

Second worst in England was the River Darwen, near Blackburn and Preston, where there were more than 3,000 sewage spills from nearby overflows in 2023 – equivalent to 83 spills per mile. Just one river in the south of England features in the worst 10: the River Avon, as it makes its way through Bath and Bristol. This urban section of the river had 6,573 sewage spills in 2023, or 74 spills per mile, making it the third most polluted in England.

Also near the top of the list for sewage spills were the River Calder near Huddersfield, the Aire near Bradford and the lower section of the Tyne around Newcastle and Sunderland, the Guardian's analysis of Environment Agency data found.
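The per-river ranking above is a simple rate calculation: total spills from nearby overflows divided by river miles. A minimal sketch of that calculation, assuming spill totals and river lengths back-derived from the rates quoted in the article (the figures are illustrative approximations, not the Guardian's actual dataset or code):

```python
# Rank rivers by sewage spills per mile, in the style of the Guardian analysis.
# Spill counts and mileages are approximations reverse-engineered from the
# quoted rates (~95, ~83 and ~74 spills per mile), for illustration only.

rivers = [
    # (name, total spills in 2023, river length in miles)
    ("Irwell and Croal", 11_970, 126),
    ("Darwen", 3_071, 37),
    ("Avon (Bath-Bristol)", 6_573, 89),
]

def spills_per_mile(spills: int, miles: float) -> float:
    """Rate of spills per mile of river."""
    return spills / miles

# Sort worst-first by the computed rate.
ranked = sorted(rivers, key=lambda r: spills_per_mile(r[1], r[2]), reverse=True)
for name, spills, miles in ranked:
    print(f"{name}: {spills_per_mile(spills, miles):.0f} spills per mile")
```

With these assumed inputs the ordering reproduces the article's ranking: Irwell/Croal first, Darwen second, the urban Avon third.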

Criticism was not reserved for the industry. The government's much-vaunted plan to tackle raw sewage pollution gives water companies until 2035 to reduce the amount of sewage flowing into bathing waters and areas of ecological importance, but discharges into other waterways would be allowed to continue until 2050, even as the climate crisis increases the intensity and frequency of rainfall, putting more strain on the sewerage system.

The Liberal Democrat leader, Ed Davey, said the scandal had to be treated as a national environmental emergency. He said: “Only by treating the sewage scandal with the urgency it demands can we save our rivers and beaches for future generations to enjoy. Rishi Sunak and the Conservative party have failed to listen and as a result sewage spills are increasing, our precious countryside is being destroyed and swimmers are falling sick.”

The record sewage discharges were revealed as a major investigation by the regulator Ofwat into illegal sewage dumping at more than 2,000 treatment plants was nearing its conclusion. The Environment Agency is running a parallel criminal inquiry into illegal sewage dumping by the companies.


Storm overflows are supposed to be used only in extreme weather, but for many years they have been used routinely, in some cases discharging raw sewage even on dry days. The academic Peter Hammond has shown that water companies routinely rely on storm overflow discharges in their water management.

Campaigners turned their ire on the industry as the scale of the discharges was published. Ash Smith, who has investigated sewage pollution in the River Windrush for several years, said: “Water companies will blame the weather but it’s very clear from the data analysis done by Prof Peter Hammond that many sewage-dumping – we refuse to call this spilling – events are illegal either because sewage works simply don’t treat the amount they are required to or they do it in dry conditions.

“This is the information that needs to be made public along with volume, not just hours.”

Only two water companies, Southern and Thames Water, publish real-time data on raw sewage releases from outflows. Smith said greater transparency was needed. “How the other companies have been allowed to get away with keeping easily provided data from the public is a mystery. It is compounded by the secretary of state’s silence on getting them to reveal what will undoubtedly be a scandalous state of affairs.”

The shadow environment secretary, Steve Reed, said the government should immediately adopt Labour’s plan to ban bonuses for water company executives. “Despite being responsible for this illegal behaviour, water company bosses have brazenly awarded themselves over £25m in bonuses and incentives since the last election,” said Reed.

Labour has not committed to any restructuring of the privatised water industry, even as Thames Water, which is struggling with debts of more than £14bn, is facing being taken into special administration. The Liberal Democrats are calling for Thames, the biggest of the privatised companies, to be put into special administration and turned into a public benefit company.

The Environment Agency director of water, Helen Wakeham, appeared to play down the scale of the increased pollution, saying it was not surprising that the discharges had increased. “We are pleased to see record investment from the water sector, but we know it will take time for this to be reflected in spill data – it is a complex issue that won’t be solved overnight.”

The water minister, Robbie Moore, said: “Today’s data shows water companies must go further and faster to tackle storm overflows and clean up our precious waterways. We will be ensuring the Environment Agency closely scrutinise these findings and take enforcement action where necessary.”

The revelation of the scale of releases into waterways comes as rivers in England are at crisis point, suffering from a toxic cocktail of raw and treated sewage pollution, chemical toxins and agricultural runoff.

In the last few weeks, ministers have made a flurry of announcements in anticipation of the shocking data on record sewage spills, including a £180m plan to fast-track action on sewage discharges, in the face of criticism that not enough is being done.

The industry is planning a record £96bn of investment by the end of the decade to tackle sewage discharges, leaks and the impending water supply crisis, but has been criticised for passing on to customers the costs of investment that should have been carried out years ago.

Water UK, which represents the industry, said: “These results are unacceptable and demonstrate exactly why we urgently need regulatory approval to upgrade our system so it can better cope with the weather. We have a plan to sort this out by tripling investment which will cut spills by 40% by 2030 – more than double the government’s target.”

Ofwat has to decide whether to allow companies to increase water bills to pay for the investment. Water UK said the investment was vital and Ofwat must give the industry the green light to get on with it.




National City to consider homeless camping ban


Akin to San Diego’s ban, the ordinance would make camping illegal in specific areas regardless of shelter bed availability


National City could soon become the latest municipality in San Diego County to adopt a homeless encampment ban.

On Tuesday, the City Council will consider a proposed ordinance that would make camping on public property illegal if shelter beds are available.

However, if an encampment poses an immediate threat to the public’s health or safety, the city’s ordinance would be enforceable regardless of shelter bed availability. Camping would also be illegal at all times anywhere within two blocks of a school, at any transit hub or along trolley tracks, and in any waterway or natural area abutting a waterway.

Enforcement would include misdemeanor citations and arrest.

The law would go into effect 30 days after adoption.

“San Diego’s got (their camping ban) and Chula Vista is considering theirs now,” said Mayor Ron Morrison. “We’re surrounded on three sides by those two cities. We want to be prepared because we are seeing an influx of homeless people and they’re not from National City.”

National City’s Homeless Outreach and Mobile Engagement team, or HOME team, can attest to that, saying it has seen more homeless people from San Diego move to National City since San Diego’s encampment ordinance took effect last year.

The city’s homeless population has, however, risen steadily in recent years. According to the San Diego Regional Task Force on Homelessness’ point-in-time count data, there were 125 people on the streets in 2020, 149 in 2022 and 159 in 2023. Results for 2024 are pending.

The HOME team, composed of a code enforcement officer and homelessness service coordinator who work with social workers and the Police Department, helped place 25 people into housing between June and December 2023, according to the city.

Morrison said that with the implementation of a local ban and a new shelter, “we will be able to give them some more incentive that the streets are not a humane place to be.”

National City currently has no shelter beds. The San Diego Rescue Mission plans to open one this summer on Euclid Avenue near 24th Street: a 30-day site with 162 beds for single men, women and families. According to the organization, it will be more than just a shelter, but a place where people will be connected with services that could lead to long-term solutions to overcome homelessness. Outreach will be done not by a police homeless outreach team, but by the Rescue Mission.

A challenge the city may face when enforcing the camping ban, if approved, is that the shelter will accept people from anywhere in the county, not just from National City.

The ordinance would require a second reading and vote before taking effect. If approved, it would be the latest such law in the county. Poway recently passed a similar ordinance, and some lawmakers want to take the rules statewide.

The City Council meeting starts at 6 p.m.





COMMENTS

  1. The World Is Choking on Digital Pollution

    The World Is Choking on Digital Pollution. Society figured out how to manage the waste produced by the Industrial Revolution. We must do the same thing with the Internet today. by Judy Estrin and ...

  2. Digital Pollution Essay

    Digital Pollution Essay. This essay sample was donated by a student to help the academic community. Papers provided by EduBirdie writers usually outdo students' samples. We learned from elementary school that pollution is damaging the natural environment and putting it in danger, and we have seen its representation in various forms: soil ...

  3. Digital Pollution: What is it?

    Digital pollution in numbers. Digital technology contributes significantly to humanity's environmental impact. According to a study carried out in 2019 by Frédéric Bordage, a French digital expert, it would represent nearly 3.8% of global greenhouse gas emissions. That is the equivalent of about 116 million round-the-world car journeys!

  4. All You Need to Know about Digital Pollution

The impacts of digital pollution. Digital pollution has some pretty astounding impacts. For example, data centres alone are estimated to consume about 1,000 kWh per square metre, about ten times the power consumption of a typical American home. The production of digital technology also puts pressure on the environment, as it often ...

  5. (PDF) A Systematic Review of the Pros and Cons of Digital Pollution and

    A Systematic Review of the Pros and Cons of Digital Pollution and its Impact on the Environment March 2023 Journal of Sustainability and Environmental Management 2(1):61-73

  6. Internet pollution: how can its impact be reduced?

Internet pollution is defined as all digital actions emitting greenhouse gases. This negative externality of new technologies tends to go unnoticed by consumers. Nevertheless, the digital world has a substantial environmental impact and creates a large carbon footprint: 4% of all greenhouse gases.

  7. CAUSES OF DIGITAL POLLUTION

    3- E-waste and recycling. The 710 million electronic devices manufactured in 2015 generated 1.5 million tonnes of waste and are the equivalent of 166 times the size of the Eiffel tower. Devices ...

  8. Gatherings of an Infovore*: Digital Pollution

    Add Another Type of Pollution to the List: Digital Pollution, as defined in the Merriam-Webster online dictionary, is "the action of polluting especially by environmental contamination with man-made waste" and has been a constant in the world ever since humans began living in groups. Anthropologists have found human waste among the ruins of ancient settlements. The word pollution took over ...

  9. Our Digital Carbon Footprint: What's the Environmental Impact ...

    The short study "Climate protection through digital technologies" (Klimaschutz durch digitale Technologien) from the Borderstep Institute compares various studies and comes to the conclusion that the greenhouse gas emissions caused by the production, operation and disposal of digital end devices and infrastructures are between 1.8 and 3.2 ...

  10. Digital Pollution and Its Impact on the Family and Social Interactions

    Abstract. The present study was an attempt to identify the most prevailing means of digital devices and its impact as digital pollution on family and social interactions. Despite the obvious benefits of digital devices, in recent years researchers have taken more concern about its potential negative effect on human attitude and behavior, which ...

  11. 2. Expert essays on the expected impact of digital change by 2035

    Pew Research Center June 21, 2023. As AI Spreads, Experts Predict the Best and Worst Changes in Digital Life by 2035. 2. Expert essays on the expected impact of digital change by 2035. By Janna Anderson and Lee Rainie. Most respondents to this canvassing wrote brief reactions to this research question.

  12. The growing footprint of digitalisation

    The 27th edition of UNEP's Foresight Brief explores the environmental impact of internet use and the increasing digitalization of the economy. It outlines some of the mitigating factors that can be implemented to green our digital future. Since 2010, the number of internet users worldwide has doubled, and the global internet traffic has grown 12-fold.

  13. News / Digital pollution and IT impact on Co2eq emission

    The average weight of web pages increased by 115 between 1995 and 2015; Watching a 30min show leads to emissions of 1.6kg of CO2eq; Online video streaming produced 30 million tons of CO2 emissions (equivalent to a country like Spain). Digital pollution comes from the use of IT infrastructure, as much as from the manufacturing of digital devices.

  14. Digital Pollution and Its Impact on the Family and ...

    All statistical analyses were performed using SPSS 23.0, AMOS 23.0, and SmartPLS 3.0. The results indicated that as the use of smartphone and computer/laptop increases, levels of digital pollution ...

  15. Digital pollution: What it is and how you're contributing to it

    Experts with Digital for the Planet have said that digital pollution can be attributed to three sources: manufacturing, practices and e-waste/recycling. In 2015, the 710 million electronic devices ...

  16. What is Digital Pollution and easy actions to reduce our digital carbon

    What is digital pollution? Digital pollution is responsible for 3.7% of CO2 emissions globally - 50% more than air transport (2.4%) However digital pollution is an issue that is rarely identified, let alone addressed. People often think that online streaming, sending emails, and conducting online searches have no external environmental effect.

  17. How digital technology and innovation can help protect the planet

    Experts say, in the years to come, a digital ecosystem of data platforms will be crucial to helping the world understand and combat a host of environmental hazards, from air pollution to methane emissions. "Various private and public sector actors are harnessing data and digital technologies to accelerate global environmental action and ...

  18. What you need to know about digital pollution according to four

    Wonderland's Sustainable Digital Design (SDD) platform is less critical of innovation and focuses more specifically on digital design—the design of digital assets, including graphics and websites—in the internet era. As Hala Alsadi, a project manager at the Amsterdam-based design studio explains, the project is a reflection of Wonderland's own journey to increasing the sustainability ...

  19. The Impact of Digital Transformation on Environmental ...

Recently, digital transformation is supposed to affect all aspects of human life profoundly. Nevertheless, there is a lack of summaries mapping digital transformation in the environmental sustainability domain. To address this knowledge gap, this study examines the impacts of digital transformation on environmental sustainability, including both positive and negative effects. Furthermore, the ...

  20. Digital Pollution: Going Beyond the Limits of Virtual

Environmental protection should promote social-environmental measures in order to make explicit the effects of degradation originating from the digital environment and, thus, try to measure the levels of pollution produced by digital data storage, electronic information generation, internet services and the cumulative storage of data on electronic servers.

  21. The world is choking on digital pollution

Your internet habits are not as clean as you think. Watch the video to know how you are responsible for digital pollution and what you can do to handle it.

  22. AI Garbage Is Already Polluting the Internet

A.I.-Generated Garbage Is Polluting Our Culture. March 29, 2024. By Erik Hoel. Mr. Hoel is a neuroscientist and novelist and the author of The Intrinsic ...

  23. The Problem Of Electronic Pollution Essay

    The Problem Of Electronic Pollution Essay. Each year more than 20-50 million tons of e-waste is generated worldwide, more than 100,000 tons of which is exported from the UK and the US to other third world countries (Phys 1). A large portion of e-waste is transported to small cities in China and India (Phys 1).

  24. ELA-Informative Essay on digital pollution.docx

Digital pollution comes with great harm; for example, candidates losing votes because an opposing candidate made up false information suggesting they would be bad if elected. Urban legends are myths about things that are most likely not true and could cause big and harmful outcomes.

  25. Circular economy: A digital EU product passport for batteries

Starting in February 2027, all new traction batteries, two-wheeled vehicle batteries and industrial batteries with a capacity of over 2 kWh that are marketed in the EU will require a digital battery passport. The purpose is to ensure transparency and sustainability in the battery value chain, reduce environmental impacts and encourage the secondary use of batteries.

  26. World Health Day 2024

    World Health Day 2024 is 'My health, my right'. This year's theme was chosen to champion the right of everyone, everywhere to have access to quality health services, education, and information, as well as safe drinking water, clean air, good nutrition, quality housing, decent working and environmental conditions, and freedom from discrimination.

  27. Crypto miner, Pennsylvania hit with lawsuit over pollution from bitcoin

    Stronghold Digital Mining co-chairman and CEO Greg Beard stands near a bank of cryptocurrency miners at the Scrubgrass Plant in Kennerdell, Pennsylvania, U.S., March 8, 2022. Picture taken March 8 ...

  28. Water companies in England face outrage over record sewage discharges

    Wed 27 Mar 2024 14.21 EDT. First published on Wed 27 Mar 2024 07.37 EDT. Water companies in England have faced a barrage of criticism as data revealed raw sewage was discharged for more than 3.6m ...

  29. National City to consider homeless camping ban

    April 1, 2024 5:02 PM PT. National City could soon become the latest municipality in San Diego County to adopt a homeless encampment ban. On Tuesday, the City Council will consider a proposed ...