Springer Nature - PMC COVID-19 Collection

Information technologies of 21st century and their impact on the society

Mohammad Yamin

Department of MIS, Faculty of Economics and Admin, King Abdulaziz University, Jeddah, Saudi Arabia

Abstract

The twenty-first century has witnessed the emergence of some ground-breaking information technologies that have revolutionised our way of life. The revolution began late in the 20th century with the arrival of the internet in 1995, which has given rise to methods, tools and gadgets with astonishing applications in all academic disciplines and business sectors. In this article we shall provide a design for a ‘spider robot’ which may be used for efficient cleaning of deadly viruses. In addition, we shall examine some of the emerging technologies that are producing remarkable breakthroughs and improvements which were inconceivable earlier. In particular, we shall look at the technologies and tools associated with the Internet of Things (IoT), Blockchain, Artificial Intelligence, Sensor Networks and Social Media, and we shall analyse the capabilities and business value of these technologies and tools. As we recognise, most technologies, after completing their commercial journey, are utilised by the business world in physical as well as virtual marketing environments. We shall also look at the social impact of some of these technologies and tools.

Introduction

The internet, which started in 1989 [1], now holds 1.2 million terabytes of data from Google, Amazon, Microsoft and Facebook alone [2]. It is estimated that the internet contains over four and a half billion websites on the surface web; the deep web, about which we know very little, is at least four hundred times bigger than the surface web [3]. Soon afterwards, in 1990, the email platform emerged, followed by many applications. From 1995 to the early 21st century we then saw a chain of Web 2.0 technologies: E-commerce, social media platforms, E-Business, E-Learning, E-government, Cloud Computing and more [4]. Now we have a large number of internet-based technologies with countless applications in many domains, including business, science and engineering, and healthcare [5]. The impact of these technologies on our personal lives is such that we are compelled to adopt many of them whether we like it or not.

In this article we shall study the nature, usage and capabilities of emerging and future technologies, including Big Data Analytics, the Internet of Things (IoT), sensor networks (RFID, location-based services), Artificial Intelligence (AI), Robotics, Blockchain, mobile digital platforms (digital streets, towns and villages), Cloud (Fog and Dew) computing, social networks and business, and Virtual Reality.

With ever-increasing computing power and declining costs of data storage, many government and private organizations are gathering enormous amounts of data. The data accumulated over years of acquisition and processing in many organizations has become so large that it can no longer be analyzed by traditional tools within a reasonable time. Familiar disciplines that create Big Data include astronomy, atmospheric science, biology, genomics, nuclear physics, biochemical experiments, medical records, and scientific research. Some of the organizations responsible for creating enormous data are Google, Facebook, YouTube, hospitals, parliaments, courts, newspapers and magazines, and government departments. Because of its size, analysis of Big Data is not a straightforward task and often requires advanced methods and techniques. Lack of timely analysis of Big Data in certain domains may have devastating results and pose threats to societies, nature and the ecosystem.

Big medic data

The healthcare field is generating Big Data which has the potential to surpass other fields in the growth of data. Big Medic Data usually refers to a considerably bigger pool of health, hospital and treatment records, administrative medical claims, and data from clinical trials, smartphone applications, wearable devices such as RFID tags and heart-rate monitors, various kinds of social media, and omics research. In particular, omics research (genomics, proteomics, metabolomics, etc.) is leading the charge in the growth of Big Data [6, 7]. The analytics challenges in omics research include data cleaning, normalization, biomolecule identification, data dimensionality reduction, biological contextualization, statistical validation, data storage and handling, and sharing and archiving of data. These tasks are required for the Big Data in omics datasets such as genomics, transcriptomics, proteomics, metabolomics, metagenomics and phenomics [6].
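Two of the preprocessing tasks listed above, normalization and handling of constant features, can be illustrated with a minimal sketch. The code below is not from the paper; the toy matrix and function name are invented for illustration. It applies column-wise z-score normalization to a small samples-by-biomolecules matrix so that measurements on very different scales become comparable before further analysis:

```python
import math

def zscore_columns(matrix):
    """Column-wise z-score normalization: each biomolecule measurement is
    rescaled to zero mean and unit variance so samples are comparable."""
    cols = list(zip(*matrix))
    normalized_cols = []
    for col in cols:
        mu = sum(col) / len(col)
        var = sum((v - mu) ** 2 for v in col) / len(col)
        sd = math.sqrt(var) or 1.0  # constant feature: keep centred values at 0
        normalized_cols.append([(v - mu) / sd for v in col])
    return [list(row) for row in zip(*normalized_cols)]

# 4 samples x 3 biomolecule measurements (toy values; third feature is constant)
X = [[1.0, 10.0, 5.0],
     [2.0, 20.0, 5.0],
     [3.0, 30.0, 5.0],
     [4.0, 40.0, 5.0]]
Xn = zscore_columns(X)
col0 = [row[0] for row in Xn]
print(round(sum(col0), 9), round(max(col0), 3))  # 0.0 1.342
```

Real omics pipelines would of course add the remaining steps (biomolecule identification, dimensionality reduction, statistical validation) on top of this kind of basic rescaling.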

According to [8], in 2011 alone the data in the United States healthcare system amounted to one hundred and fifty Exabytes (one Exabyte = one billion Gigabytes, or 10^18 bytes), and it is expected soon to reach 10^21 and later 10^24 bytes. Some scientists have classified medical data into three categories: (a) a large number of samples but a small number of parameters; (b) a small number of samples and a small number of parameters; (c) a large number of samples and a large number of parameters [9]. Although the data in the first category may be analyzed by classical methods, it may be incomplete, noisy, and inconsistent, requiring data cleaning. The data in the third category could be big and may require advanced analytics.

Big data analytics

Big Data cannot be analyzed in real time by traditional analytical methods. The analysis of Big Data, popularly known as Big Data Analytics, often involves a number of technologies, sophisticated processes and tools, as depicted in Fig. 1. Big Data can provide smart decision making and business intelligence to businesses and corporations; unless it is analyzed, however, it is impractical and a burden to the organization. Big Data Analytics involves mining and extracting useful associations (knowledge discovery) for intelligent decision-making and forecasting. The challenges in Big Data Analytics are computational complexity, scalability and visualization of the data. Moreover, information security risk increases with the surge in the amount of data, which is precisely the case with Big Data.

Fig. 1 Big Data Analytics

The aim of data analytics has always been knowledge discovery to support smart and timely decision making. With Big Data, the knowledge base becomes wider and sharper, providing greater business intelligence and helping businesses become market leaders. Conventional processing paradigms and architectures are inefficient for the large datasets of Big Data; in particular, their size demands parallel processing. Recent technologies such as Spark, Hadoop, MapReduce, R, Data Lakes and NoSQL have emerged to provide Big Data analytics. With these and other data analytics technologies, it is also advantageous to invest in designing superior storage systems.
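The MapReduce model named above can be sketched on a single machine. The snippet below illustrates only the programming pattern (map, shuffle, reduce), not Hadoop's distributed implementation, and the log-line records are invented for the example:

```python
from collections import defaultdict
from itertools import chain

def map_phase(record):
    """Map: emit (key, 1) pairs — here, word occurrences in a log line."""
    return [(word, 1) for word in record.split()]

def shuffle(pairs):
    """Shuffle: group intermediate values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate each key's values — here, a simple sum."""
    return {key: sum(values) for key, values in groups.items()}

records = ["error disk full", "error network down", "warning disk slow"]
pairs = chain.from_iterable(map_phase(r) for r in records)
counts = reduce_phase(shuffle(pairs))
print(counts["error"], counts["disk"])  # 2 2
```

In a real cluster the map and reduce phases run in parallel across many nodes, which is what makes the pattern suitable for datasets too large for a single machine.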

Health data predominantly consists of visual, graph, audio and video data, and analysing such data to gain meaningful insights and diagnoses may depend on the choice of tools. Medical data has traditionally been scattered across organizations, often not organized properly. What we usually find are medical record-keeping systems consisting of heterogeneous data, requiring extra effort to reorganize the data onto a common platform. As discussed before, the health profession produces enormous data, so analysing it in an efficient and timely manner can potentially save many lives.

Cloud computing

Commercial operation of Clouds began in 1999 [10]. Initially, clouds complemented and empowered outsourcing. At earlier stages there were some privacy concerns associated with Cloud computing, as the owners of data had to give custody of their data to the Cloud owners. However, as time passed and Cloud owners took confidence-building measures, the technology became so prevalent that most of the world's SMEs started using it in one form or another. More information on Cloud computing can be found in [11, 12].

Fog computing

As faster processing became the need of some critical applications, clouds gave rise to Fog, or Edge, computing. As can be seen from the Gartner hype cycles in Figs. 2 and 3, Edge computing, as an emerging technology, also peaked in 2017–18. As shown in the Cloud computing architecture in Fig. 4, the middle (second) layer of the cloud configuration is represented by Fog computing. For some applications, delay in communication between computing devices in the field and data in a Cloud (often physically thousands of miles apart) is detrimental to timing requirements, as it may cause considerable delay in time-sensitive applications. For example, processing and storage for early warning of disasters (stampedes, tsunamis, etc.) must happen in real time. For these kinds of applications, computing and storage resources should be placed close to where the computing is needed (application areas like the digital street); in such scenarios Fog computing is considered suitable [13]. Clouds are an integral part of many IoT applications and play a central role in ubiquitous computing systems in health-related cases like the one depicted in Fig. 5. Some applications of Fog computing can be found in [14–16]; more results on Fog computing are available in [17–19].

Fig. 2 Emerging Technologies 2018

Fig. 3 Emerging Technologies 2017

Fig. 4 Relationship of Cloud, Fog and Dew computing

Fig. 5 Snapshot of a Ubiquitous System

Dew computing

When the Fog layer is overloaded and cannot cater for peaks of high-demand applications, it offloads some of its data and/or processing to the associated cloud. In such a situation, Fog exposes its dependency on a complementary bottom layer of the cloud architecture, shown in Fig. 4. This bottom layer of the hierarchical resource organization is known as the Dew layer. The purpose of the Dew layer is to serve tasks by exploiting resources near the end-user, with minimal internet access [17, 20]. Dew computing determines when to call upon the services of the different layers of the Cloud architecture. It is also important to note that Dew computing [20] belongs to the distributed computing hierarchy and is integrated with Fog computing services, as is evident in Fig. 4. In summary, the Cloud architecture has three layers: the first being Cloud, the second Fog and the third Dew.
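The three-layer placement logic described above can be sketched as a simple dispatcher. The layer names follow the paper, but the latency figures, capacities and deadlines below are illustrative assumptions, not values from the paper:

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    round_trip_ms: float  # typical latency to reach this layer
    capacity: int         # concurrent tasks the layer can absorb
    load: int = 0

def dispatch(task_deadline_ms, layers):
    """Place a task on the nearest layer that meets its deadline and has
    spare capacity. Layers are ordered nearest-first (Dew, Fog, Cloud);
    overload on a nearer layer causes offloading to the next one."""
    for layer in layers:
        if layer.round_trip_ms <= task_deadline_ms and layer.load < layer.capacity:
            layer.load += 1
            return layer.name
    return "rejected"  # no layer can meet the deadline

tiers = [Layer("dew", 2, 1), Layer("fog", 20, 4), Layer("cloud", 150, 1000)]
print(dispatch(50, tiers))  # dew
print(dispatch(50, tiers))  # fog (the single dew slot is now full)
print(dispatch(5, tiers))   # rejected (only dew is fast enough, and it is full)
```

This mirrors the offloading behaviour in the text: time-sensitive tasks stay near the user, and overflow cascades outward to Fog and then Cloud.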

Internet of things

The definition of the Internet of Things (IoT), as depicted in Fig. 6, has been changing with the passage of time. With the growing number of internet-based applications, which use many technologies, devices and tools, the meaning of IoT has evolved accordingly: things (technologies, devices and tools) are used together in internet-based applications to generate data and to provide assistance and services to users from anywhere, at any time. The internet can be considered a uniform technology from any location, as it provides the same service of ‘connectivity’; its speed and security, however, are not uniform. The IoT as an emerging technology peaked during 2017–18, as is evident from Figs. 2 and 3, and it is expanding at a very fast rate. According to [21–24], the number of IoT devices could be in the millions by the year 2021.

Fig. 6 Internet of Things

IoT is providing some amazing applications in tandem with wearable devices, sensor networks, Fog computing and other technologies to improve critical facets of our lives such as healthcare management, service delivery and business improvement. Applications of IoT in the field of crowd management are discussed in [14], and applications of IoT in the context of privacy and security are discussed in [15, 16]. Key devices and technologies associated with IoT include RFID tags [25], the internet, computers, cameras, mobile devices, coloured lights, sensors, sensor networks, drones, and Cloud, Fog and Dew computing.
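The wearable-to-gateway data flow described above can be sketched as a toy simulation. Everything here is hypothetical (sensor names, the alert threshold, the simulated readings); a real deployment would publish readings over a protocol such as MQTT rather than build a JSON payload locally:

```python
import json
import random
import time

def read_sensor(sensor_id):
    """Simulate one reading from a wearable heart-rate sensor."""
    return {"sensor": sensor_id,
            "bpm": random.randint(55, 120),
            "ts": time.time()}

def gateway_filter(readings, alert_bpm=100):
    """Fog-side gateway: forward only readings that need attention,
    reducing the volume of data sent on to the cloud."""
    return [r for r in readings if r["bpm"] >= alert_bpm]

random.seed(42)  # deterministic toy run
readings = [read_sensor(f"patient-{i}") for i in range(10)]
alerts = gateway_filter(readings)
payload = json.dumps(alerts)  # what would be uploaded to the cloud
print(len(readings), len(alerts) <= len(readings))
```

Filtering at the gateway is the same latency-and-bandwidth argument made in the Fog computing section: only the data that matters travels the long path to the cloud.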

Applications of blockchain

Blockchain is usually associated with cryptocurrencies like Bitcoin (currently there are over one and a half thousand cryptocurrencies, and the number is still rising), but the technology can also be used for many more critical applications of our daily lives. The Blockchain is a distributed ledger technology in the form of a distributed transactional database, secured by cryptography, and governed by a consensus mechanism. A Blockchain is essentially a record of digital events [26]. A block represents a completed transaction or ledger entry. Subsequent and prior blocks are chained together, displaying the status of the most recent transaction; the role of the chain is to link records in chronological order. The chain continues to grow as further transactions take place, which are recorded by adding new blocks. User security and ledger consistency in the Blockchain are provided by asymmetric cryptography and distributed consensus algorithms. Once a block is created, it cannot be altered or removed. The technology eliminates the need for a bank statement to verify the availability of funds, or for a lawyer to certify the occurrence of an event. The benefits of Blockchain technology are inherent in its characteristics of decentralization, persistency, anonymity and auditability [27, 28].
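The chaining and tamper-evidence properties described above can be demonstrated with a minimal hash chain. This is a sketch of the core idea only; real blockchains add distributed consensus, digital signatures and timestamps, none of which are modelled here:

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Create a block whose hash covers its contents and the previous
    block's hash, chaining records in chronological order."""
    block = {"data": data, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    block["hash"] = digest
    return block

def verify_chain(chain):
    """Any alteration of an earlier block breaks every later link."""
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev"] != prev["hash"]:
            return False
    return True

genesis = make_block("genesis", "0" * 64)
chain = [genesis, make_block("pay Alice 5", genesis["hash"])]
chain.append(make_block("pay Bob 3", chain[-1]["hash"]))
print(verify_chain(chain))  # True

chain[1]["data"] = "pay Alice 500"  # tamper with a recorded transaction
chain[1]["hash"] = make_block(chain[1]["data"], chain[1]["prev"])["hash"]
print(verify_chain(chain))  # False — the next block's link no longer matches
```

This is why a block "cannot be altered or removed" in practice: rewriting one record would require recomputing every subsequent block, which consensus among the other participants prevents.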

Blockchain for business use

Blockchain, being the technology behind cryptocurrencies, started as an open-source Bitcoin community effort to allow reliable peer-to-peer financial transactions. Blockchain technology has made it possible to build a globally functional currency relying on code, without any bank or third-party platform [28]. These features have made Blockchain technology secure and transparent for business transactions of any kind, involving any currency. The literature contains many applications of Blockchain. Nowadays, applications of Blockchain technology involve various kinds of transactions requiring verification and automated payment through smart contracts. The concept of Smart Contracts [28] has virtually eliminated the role of intermediaries. The technology is most suitable for businesses requiring high reliability and honesty, and because of its security and transparency features it would benefit businesses trying to attract customers. Blockchain can also be used to eliminate fake permits, as can be seen in [29].

Blockchain for healthcare management

As discussed above, Blockchain is an efficient and transparent way of digital record keeping, a feature highly desirable in efficient healthcare management. The medical field is still struggling to manage its data efficiently in digital form. As usual, the issues of disparate and non-uniform record storage methods are hampering digitization, data warehousing and Big Data analytics, which would allow efficient management and sharing of the data. We can gauge the magnitude of these problems from examples such as the target of the United Kingdom's National Health Service (NHS) to digitize UK healthcare by 2023 [30]. These problems lead to inaccuracies in the data, which can cause many issues in healthcare management, including clinical and administrative errors.

Use of Blockchain in healthcare can bring revolutionary improvements. For example, smart contracts can be used to make it easier for doctors to access patients' data held by other organisations. The current consent process often involves bureaucratic steps and is far from simple or standardised, which creates many problems for patients and the specialists treating them. The cost associated with transferring medical records between locations can be significant; with Blockchain it can be reduced to virtually zero. More information on the use of Blockchain for healthcare data can be found in [30, 31].

Environment cleaning robot

One of the ongoing healthcare issues is the eradication of deadly viruses and bacteria from hospitals and healthcare units. Nosocomial infections are a common problem for hospitals and are currently treated using various techniques [32, 33]. Historically, cleaning hospital wards and operating rooms with chlorine has been an effective method. In the face of some deadly viruses like Ebola, HIV/AIDS, swine influenza (H1N1, H1N2), various strains of flu, Severe Acute Respiratory Syndrome (SARS) and Middle East Respiratory Syndrome (MERS), however, this method has dangerous implications [14]. A more advanced approach used in US hospitals employs “robots” to purify the space, as can be seen in [32, 33]. However, the current “robots” have limitations. Most of these devices require a human to place them in the infected area. They cannot move effectively (they just revolve around themselves), so the UV light does not reach all areas but only a very limited area within range of the UV light emitter. Finally, the robot itself may become infected, as the light does not reach most of the robot's surfaces. There is therefore an emerging need for a robot that does not require the physical presence of humans, can purify an entire room by covering all its surfaces with UV light and, at the same time, does not become infected itself.

Figure 7 shows an overview of the design of a fully motorized spider robot with six legs. The robot supports Wi-Fi connectivity for control and is able to move around the room and clean the entire area. The spider design allows the robot to move on any surface, including climbing steps; most importantly, the robot will use its legs to move the UV light emitter as well as to clean its own body before leaving the room. This substantially reduces the risk of the robot transmitting any infections.

Fig. 7 Spider Robot for virus cleaning

Additionally, the robot will be equipped with a motorized camera allowing the operator to monitor the space and to stop the emission of UV light in unpredicted situations. The operator can control the robot via a networked graphical user interface and/or from an augmented-reality environment utilizing technologies such as the Oculus Touch. In more detail, the user will wear the Oculus Rift virtual-reality helmet and use the Oculus Touch hand controllers to remote-control the robot. This will provide the user with the robot's vision in a natural manner. It will also allow the user to control the two front robotic arms of the spider robot via the Oculus Touch controllers, making advanced movements easy to conduct simply by moving the hands. The physical movements of the human hand will be captured by the sensors of the Oculus Touch and transmitted to the robot. The robot will then use inverse kinematics to translate the actions and position of the human hand into movements of the robotic arm. This technique will also be used during the training phase of the robot, where the human user will teach the robot how to clean various surfaces and then purify itself, simply by moving their hands accordingly. The design of the spider robot was proposed in a project proposal submitted to King Abdulaziz City for Science and Technology ( https://www.kacst.edu.sa/eng/Pages/default.aspx ) by the author and George Tsaramirsis ( https://www.researchgate.net/profile/George_Tsaramirsis ).
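The inverse-kinematics step, translating a target hand position into joint angles, can be illustrated for a simplified planar two-link arm. The link lengths and the 2-D simplification are assumptions for illustration; the actual robot arm is not specified at this level of detail in the text:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Inverse kinematics for a planar two-link arm: given a target (x, y)
    for the end effector, return shoulder and elbow angles in radians."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle from the target distance
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def forward(shoulder, elbow, l1, l2):
    """Forward kinematics, used here to check the solution."""
    x = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    y = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return x, y

s, e = two_link_ik(0.5, 0.4, 0.4, 0.3)  # 0.4 m and 0.3 m links (assumed)
x, y = forward(s, e, 0.4, 0.3)
print(round(x, 6), round(y, 6))  # 0.5 0.4
```

A real controller would solve the same kind of problem per arm, in three dimensions and with joint limits, to reproduce the operator's captured hand pose.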

Conclusions

We have presented details of some emerging technologies and real-life applications that are providing businesses with remarkable opportunities, which were previously unthinkable. Businesses are continuously trying to increase their use of new technologies and tools to improve processes and benefit their clients. The IoT and associated technologies are now able to provide real-time, ubiquitous processing that eliminates the need for human surveillance. Similarly, Virtual Reality, Artificial Intelligence and robotics are finding remarkable applications in the field of medical surgery. As discussed, with the help of technology we can now predict and mitigate some disasters, such as stampedes, using sensor networks and other associated technologies. Finally, the growth of Big Data Analytics is providing businesses and government agencies with smarter decision making to achieve their targets and expectations.

Promises and Pitfalls of Technology

How Is Technology Changing the World, and How Should the World Change Technology?

Josephine Wolff

Josephine Wolff; How Is Technology Changing the World, and How Should the World Change Technology? Global Perspectives 1 February 2021; 2 (1): 27353. doi: https://doi.org/10.1525/gp.2021.27353


Technologies are becoming increasingly complicated and increasingly interconnected. Cars, airplanes, medical devices, financial transactions, and electricity systems all rely on more computer software than they ever have before, making them seem both harder to understand and, in some cases, harder to control. Government and corporate surveillance of individuals and information processing relies largely on digital technologies and artificial intelligence, and therefore involves less human-to-human contact than ever before and more opportunities for biases to be embedded and codified in our technological systems in ways we may not even be able to identify or recognize. Bioengineering advances are opening up new terrain for challenging philosophical, political, and economic questions regarding human-natural relations. Additionally, the management of these large and small devices and systems is increasingly done through the cloud, so that control over them is both very remote and removed from direct human or social control. The study of how to make technologies like artificial intelligence or the Internet of Things “explainable” has become its own area of research because it is so difficult to understand how they work or what is at fault when something goes wrong (Gunning and Aha 2019).

This growing complexity makes it more difficult than ever—and more imperative than ever—for scholars to probe how technological advancements are altering life around the world in both positive and negative ways and what social, political, and legal tools are needed to help shape the development and design of technology in beneficial directions. This can seem like an impossible task in light of the rapid pace of technological change and the sense that its continued advancement is inevitable, but many countries around the world are only just beginning to take significant steps toward regulating computer technologies and are still in the process of radically rethinking the rules governing global data flows and exchange of technology across borders.

These are exciting times not just for technological development but also for technology policy—our technologies may be more advanced and complicated than ever but so, too, are our understandings of how they can best be leveraged, protected, and even constrained. The structures of technological systems are determined largely by government and institutional policies, and those structures have tremendous implications for social organization and agency, ranging from open-source, open systems that are highly distributed and decentralized to those that are tightly controlled and closed, structured according to stricter and more hierarchical models. And just as our understanding of the governance of technology is developing in new and interesting ways, so, too, is our understanding of the social, cultural, environmental, and political dimensions of emerging technologies. We are realizing both the challenges and the importance of mapping out the full range of ways that technology is changing our society, what we want those changes to look like, and what tools we have to try to influence and guide those shifts.

Technology can be a source of tremendous optimism. It can help overcome some of the greatest challenges our society faces, including climate change, famine, and disease. For those who believe in the power of innovation and the promise of creative destruction to advance economic development and lead to better quality of life, technology is a vital economic driver (Schumpeter 1942). But it can also be a tool of tremendous fear and oppression, embedding biases in automated decision-making processes and information-processing algorithms, exacerbating economic and social inequalities within and between countries to a staggering degree, or creating new weapons and avenues for attack unlike any we have had to face in the past. Scholars have even contended that the emergence of the term technology in the nineteenth and twentieth centuries marked a shift from viewing individual pieces of machinery as a means to achieving political and social progress to the more dangerous, or hazardous, view that larger-scale, more complex technological systems were a semiautonomous form of progress in and of themselves (Marx 2010). More recently, technologists have sharply criticized what they view as a wave of new Luddites, people intent on slowing the development of technology and turning back the clock on innovation as a means of mitigating the societal impacts of technological change (Marlowe 1970).

At the heart of fights over new technologies and their resulting global changes are often two conflicting visions of technology: a fundamentally optimistic one that believes humans use it as a tool to achieve greater goals, and a fundamentally pessimistic one that holds that technological systems have reached a point beyond our control. Technology philosophers have argued that neither of these views is wholly accurate and that a purely optimistic or pessimistic view of technology is insufficient to capture the nuances and complexity of our relationship to technology (Oberdiek and Tiles 1995). Understanding technology and how we can make better decisions about designing, deploying, and refining it requires capturing that nuance and complexity through in-depth analysis of the impacts of different technological advancements and the ways they have played out in all their complicated and controversial messiness across the world.

These impacts are often unpredictable as technologies are adopted in new contexts and come to be used in ways that sometimes diverge significantly from the use cases envisioned by their designers. The internet, designed to help transmit information between computer networks, became a crucial vehicle for commerce, introducing unexpected avenues for crime and financial fraud. Social media platforms like Facebook and Twitter, designed to connect friends and families through sharing photographs and life updates, became focal points of election controversies and political influence. Cryptocurrencies, originally intended as a means of decentralized digital cash, have become a significant environmental hazard as more and more computing resources are devoted to mining these forms of virtual money. One of the crucial challenges in this area is therefore recognizing, documenting, and even anticipating some of these unexpected consequences and providing mechanisms to technologists for how to think through the impacts of their work, as well as possible other paths to different outcomes (Verbeek 2006). And just as technological innovations can cause unexpected harm, they can also bring about extraordinary benefits—new vaccines and medicines to address global pandemics and save thousands of lives, new sources of energy that can drastically reduce emissions and help combat climate change, new modes of education that can reach people who would otherwise have no access to schooling. Regulating technology therefore requires a careful balance of mitigating risks without overly restricting potentially beneficial innovations.

Nations around the world have taken very different approaches to governing emerging technologies and have adopted a range of different technologies themselves in pursuit of more modern governance structures and processes (Braman 2009). In Europe, the precautionary principle has guided much more anticipatory regulation aimed at addressing the risks presented by technologies even before they are fully realized. For instance, the European Union’s General Data Protection Regulation focuses on the responsibilities of data controllers and processors to provide individuals with access to their data and information about how that data is being used not just as a means of addressing existing security and privacy threats, such as data breaches, but also to protect against future developments and uses of that data for artificial intelligence and automated decision-making purposes. In Germany, Technische Überwachungsvereine, or TÜVs, perform regular tests and inspections of technological systems to assess and minimize risks over time, as the tech landscape evolves. In the United States, by contrast, there is much greater reliance on litigation and liability regimes to address safety and security failings after the fact. These different approaches reflect not just the different legal and regulatory mechanisms and philosophies of different nations but also the different ways those nations prioritize rapid development of the technology industry versus safety, security, and individual control. Typically, governance innovations move much more slowly than technological innovations, and regulations can lag years, or even decades, behind the technologies they aim to govern.

In addition to this varied set of national regulatory approaches, a variety of international and nongovernmental organizations also contribute to the process of developing standards, rules, and norms for new technologies, including the International Organization for Standardization and the International Telecommunication Union. These multilateral and NGO actors play an especially important role in trying to define appropriate boundaries for the use of new technologies by governments as instruments of control for the state.

At the same time that policymakers are under scrutiny both for their decisions about how to regulate technology as well as their decisions about how and when to adopt technologies like facial recognition themselves, technology firms and designers have also come under increasing criticism. Growing recognition that the design of technologies can have far-reaching social and political implications means that there is more pressure on technologists to take into consideration the consequences of their decisions early on in the design process (Vincenti 1993; Winner 1980). The question of how technologists should incorporate these social dimensions into their design and development processes is an old one, and debate on these issues dates back to the 1970s, but it remains an urgent and often overlooked part of the puzzle because so many of the supposedly systematic mechanisms for assessing the impacts of new technologies in both the private and public sectors are primarily bureaucratic, symbolic processes rather than carrying any real weight or influence.

Technologists are often ill-equipped or unwilling to respond to the sorts of social problems that their creations have—often unwittingly—exacerbated, and instead point to governments and lawmakers to address those problems (Zuckerberg 2019). But governments often have few incentives to engage in this area. This is because setting clear standards and rules for an ever-evolving technological landscape can be extremely challenging, because enforcement of those rules can be a significant undertaking requiring considerable expertise, and because the tech sector is a major source of jobs and revenue for many countries that may fear losing those benefits if they constrain companies too much. This indicates not just a need for clearer incentives and better policies for both private- and public-sector entities but also a need for new mechanisms whereby the technology development and design process can be influenced and assessed by people with a wider range of experiences and expertise. If we want technologies to be designed with an eye to their impacts, who is responsible for predicting, measuring, and mitigating those impacts throughout the design process? Involving policymakers in that process in a more meaningful way will also require training them to have the analytic and technical capacity to more fully engage with technologists and understand more fully the implications of their decisions.

At the same time that tech companies seem unwilling or unable to rein in their creations, many also fear they wield too much power, in some cases all but replacing governments and international organizations in their ability to make decisions that affect millions of people worldwide and control access to information, platforms, and audiences (Kilovaty 2020). Regulators around the world have begun considering whether some of these companies have become so powerful that they violate the tenets of antitrust laws, but it can be difficult for governments to identify exactly what those violations are, especially in the context of an industry where the largest players often provide their customers with free services. And the platforms and services developed by tech companies are often wielded most powerfully and dangerously not directly by their private-sector creators and operators but instead by states themselves for widespread misinformation campaigns that serve political purposes (Nye 2018).

Since the largest private entities in the tech sector operate in many countries, they are often better poised to implement global changes to the technological ecosystem than individual states or regulatory bodies, creating new challenges to existing governance structures and hierarchies. Just as it can be challenging to provide oversight for government use of technologies, so, too, oversight of the biggest tech companies, which have more resources, reach, and power than many nations, can prove to be a daunting task. The rise of network forms of organization and the growing gig economy have added to these challenges, making it even harder for regulators to fully address the breadth of these companies’ operations (Powell 1990). The private-public partnerships that have emerged around energy, transportation, medical, and cyber technologies further complicate this picture, blurring the line between the public and private sectors and raising critical questions about the role of each in providing critical infrastructure, health care, and security. How can and should private tech companies operating in these different sectors be governed, and what types of influence do they exert over regulators? How feasible are different policy proposals aimed at technological innovation, and what potential unintended consequences might they have?

Conflict between countries has also spilled over significantly into the private sector in recent years, most notably in the case of tensions between the United States and China over which technologies developed in each country will be permitted by the other and which will be purchased by other customers, outside those two countries. Countries competing to develop the best technology is not a new phenomenon, but the current conflicts have major international ramifications and will influence the infrastructure that is installed and used around the world for years to come. Untangling the different factors that feed into these tussles as well as whom they benefit and whom they leave at a disadvantage is crucial for understanding how governments can most effectively foster technological innovation and invention domestically as well as the global consequences of those efforts. As much of the world is forced to choose between buying technology from the United States or from China, how should we understand the long-term impacts of those choices and the options available to people in countries without robust domestic tech industries? Does the global spread of technologies help fuel further innovation in countries with smaller tech markets, or does it reinforce the dominance of the states that are already most prominent in this sector? How can research universities maintain global collaborations and research communities in light of these national competitions, and what role does government research and development spending play in fostering innovation within its own borders and worldwide? How should intellectual property protections evolve to meet the demands of the technology industry, and how can those protections be enforced globally?

These conflicts between countries sometimes appear to challenge the feasibility of truly global technologies and networks that operate across all countries through standardized protocols and design features. Organizations like the International Organization for Standardization, the World Intellectual Property Organization, the United Nations Industrial Development Organization, and many others have tried to harmonize these policies and protocols across different countries for years, but have met with limited success when it comes to resolving the issues of greatest tension and disagreement among nations. For technology to operate in a global environment, there is a need for a much greater degree of coordination among countries and the development of common standards and norms, but governments continue to struggle to agree not just on those norms themselves but even the appropriate venue and processes for developing them. Without greater global cooperation, is it possible to maintain a global network like the internet or to promote the spread of new technologies around the world to address challenges of sustainability? What might help incentivize that cooperation moving forward, and what could new structures and process for governance of global technologies look like? Why has the tech industry’s self-regulation culture persisted? Do the same traditional drivers for public policy, such as politics of harmonization and path dependency in policy-making, still sufficiently explain policy outcomes in this space? As new technologies and their applications spread across the globe in uneven ways, how and when do they create forces of change from unexpected places?

These are some of the questions that we hope to address in the Technology and Global Change section through articles that tackle new dimensions of the global landscape of designing, developing, deploying, and assessing new technologies to address major challenges the world faces. Understanding these processes requires synthesizing knowledge from a range of different fields, including sociology, political science, economics, and history, as well as technical fields such as engineering, climate science, and computer science. A crucial part of understanding how technology has created global change and, in turn, how global changes have influenced the development of new technologies is understanding the technologies themselves in all their richness and complexity—how they work, the limits of what they can do, what they were designed to do, how they are actually used. Just as technologies themselves are becoming more complicated, so are their embeddings and relationships to the larger social, political, and legal contexts in which they exist. Scholars across all disciplines are encouraged to join us in untangling those complexities.

Josephine Wolff is an associate professor of cybersecurity policy at the Fletcher School of Law and Diplomacy at Tufts University. Her book You’ll See This Message When It Is Too Late: The Legal and Economic Aftermath of Cybersecurity Breaches was published by MIT Press in 2018.



How has technology changed - and changed us - in the past 20 years?

Madeleine Hillyer

  • Since the dotcom bubble burst back in 2000, technology has radically transformed our societies and our daily lives.
  • From smartphones to social media and healthcare, here's a brief history of the 21st century's technological revolution.

Just over 20 years ago, the dotcom bubble burst, causing the stocks of many tech firms to tumble. Some companies, like Amazon, quickly recovered their value – but many others were left in ruins. In the two decades since this crash, technology has advanced in many ways.

Many more people are online today than they were at the start of the millennium. In 2000, just half of Americans had broadband access at home. Today, that number sits at more than 90%.

More than half the world's population has internet access today

This broadband expansion was certainly not just an American phenomenon. Similar growth can be seen on a global scale; while less than 7% of the world was online in 2000, today over half the global population has access to the internet.

Similar trends can be seen in cellphone use. At the start of the 2000s, there were 740 million cell phone subscriptions worldwide. Two decades later, that number has surpassed 8 billion, meaning there are now more cellphones in the world than people.


At the same time, technology was also becoming more personal and portable. Apple sold its first iPod in 2001, and six years later it introduced the iPhone, which ushered in a new era of personal technology. These changes led to a world in which technology touches nearly everything we do.

Technology has changed major sectors over the past 20 years, including media, climate action and healthcare. The World Economic Forum’s Technology Pioneers, which just celebrated its 20th anniversary, gives us insight into how emerging tech leaders have influenced and responded to these changes.

Media and media consumption

The past 20 years have greatly shaped how and where we consume media. In the early 2000s, many tech firms were still focused on expanding communication for work, building the advanced bandwidth that now supports video streaming and other media consumption that is common today.

Others followed the path of expanding media options beyond traditional outlets. Early Tech Pioneers such as PlanetOut did this by providing an outlet and alternative media source for LGBTQIA communities as more people got online.

Following on from these first new media options, new communities and alternative media came the massive growth of social media. In 2004, fewer than 1 million people were on Myspace; Facebook had not even launched. By 2018, Facebook had more than 2.26 billion users, with other sites also growing to hundreds of millions of users.

The precipitous rise of social media over the past 15 years

While these new online communities and communication channels have offered great spaces for alternative voices, their increased use has also brought issues of increased disinformation and polarization.

Today, many tech start-ups are focused on preserving these online media spaces while also mitigating the disinformation which can come with them. Recently, some Tech Pioneers have also approached this issue, including TruePic – which focuses on photo identification – and Two Hat, which is developing AI-powered content moderation for social media.

Climate change and green tech

Many scientists today are looking to technology to lead us towards a carbon-neutral world. Though renewed attention is being given to climate change today, these efforts to find a solution through technology are not new. In 2001, green tech offered a new investment opportunity for tech investors after the crash, leading to a boom of investing in renewable energy start-ups including Bloom Energy, a Technology Pioneer in 2010.

In the past two decades, tech start-ups have only expanded their climate focus. Many today are focused on initiatives far beyond clean energy to slow the impact of climate change.

Different start-ups, including Carbon Engineering and Climeworks from this year’s Technology Pioneers, have started to roll out carbon capture technology. These technologies remove CO2 from the air directly, enabling scientists to alleviate some of the damage from fossil fuels which have already been burned.

Another expanding area for young tech firms today is food systems innovation. Many firms, like Aleph Farms and Air Protein, are creating innovative meat and dairy alternatives that are much greener than their traditional counterparts.

Biotech and healthcare

The early 2000s also saw the culmination of a biotech boom that had started in the mid-1990s. Many firms focused on advancing biotechnologies through enhanced tech research.

An early Technology Pioneer, Actelion Pharmaceuticals was one of these companies. Actelion’s tech researched the single layer of cells separating every blood vessel from the bloodstream. Like many other biotech firms at the time, their focus was on precise disease and treatment research.

While many tech firms today still focus on disease and treatment research, many others have been focusing on healthcare delivery. Telehealth has been on the rise in recent years, with many young tech firms expanding virtual healthcare options. New technologies such as virtual visits and chatbots are being used to deliver healthcare to individuals, especially during Covid-19.

Many companies are also focusing their healthcare tech on patients, rather than doctors. For example, Ada, a symptom checker app, used to be designed for doctors’ use but has now shifted its language and interface to prioritize giving patients information on their symptoms. Other companies, like 7 cups, are focused on offering mental healthcare support directly to users through their app instead of going through existing offices.

The past two decades have seen healthcare tech get much more personal and use tech for care delivery, not just advancing medical research.

The World Economic Forum was the first to draw the world’s attention to the Fourth Industrial Revolution, the current period of unprecedented change driven by rapid technological advances. Policies, norms and regulations have not been able to keep up with the pace of innovation, creating a growing need to fill this gap.

The Forum established the Centre for the Fourth Industrial Revolution Network in 2017 to ensure that new and emerging technologies will help—not harm—humanity in the future. Headquartered in San Francisco, the network launched centres in China, India and Japan in 2018 and is rapidly establishing locally-run Affiliate Centres in many countries around the world.

The global network is working closely with partners from government, business, academia and civil society to co-design and pilot agile frameworks for governing new and emerging technologies, including artificial intelligence (AI), autonomous vehicles, blockchain, data policy, digital trade, drones, internet of things (IoT), precision medicine and environmental innovations.


In the early 2000s, many companies were at the start of their recovery from the bursting dotcom bubble. Since then, we’ve seen a large expansion in the way tech innovators approach areas such as new media, climate change, healthcare delivery and more.

At the same time, we have also seen tech companies rise to the occasion of combating issues which arose from those same innovations, such as moderating internet content and expanding climate change solutions.

The Technology Pioneers' 2020 cohort marks the 20th anniversary of this community - and looking at the latest awardees can give us a snapshot of where the next two decades of tech may be heading.





Science, technology and innovation in a 21st century context

  • Published: 27 August 2011
  • Volume 44, pages 209–213 (2011)

John H. Marburger III


This editorial essay was prepared by John H. “Jack” Marburger for a workshop on the “science of science and innovation policy” held in 2009 that was the basis for this special issue. It is published posthumously.

Linking the words “science,” “technology,” and “innovation,” may suggest that we know more about how these activities are related than we really do. This very common linkage implicitly conveys a linear progression from scientific research to technology creation to innovative products. More nuanced pictures of these complex activities break them down into components that interact with each other in a multi-dimensional socio-technological-economic network. A few examples will help to make this clear.

Science has always functioned on two levels that we may describe as curiosity-driven and need-driven, and they interact in sometimes surprising ways. Galileo’s telescope, the paradigmatic instrument of discovery in pure science, emerged from an entirely pragmatic tradition of lens-making for eye-glasses. And we should keep in mind that the industrial revolution gave more to science than it received, at least until the last half of the nineteenth century when the sciences of chemistry and electricity began to produce serious economic payoffs. The flowering of science during the era we call the Enlightenment owed much to its links with crafts and industry, but as it gained momentum science created its own need for practical improvements. After all, the frontiers of science are defined by the capabilities of instrumentation, that is, of technology. The needs of pure science are a huge but poorly understood stimulus for technologies that have the capacity to be disruptive precisely because these needs do not arise from the marketplace. The innovators who built the World Wide Web on the foundation of the Internet were particle physicists at CERN, struggling to satisfy their unique need to share complex information. Others soon discovered “needs” of which they had been unaware that could be satisfied by this innovation, and from that point the Web transformed the Internet from a tool for the technological elite into a broad platform for a new kind of economy.

Necessity is said to be the mother of invention, but in all human societies, “necessity” is a mix of culturally conditioned perceptions and the actual physical necessities of life. The concept of need, of what is wanted, is the ultimate driver of markets and an essential dimension of innovation. And as the example of the World Wide Web shows, need is very difficult to identify before it reveals itself in a mass movement. Why did I not know I needed a cell phone before nearly everyone else had one? Because until many others had one I did not, in fact, need one. Innovation has this chicken-and-egg quality that makes it extremely hard to analyze. We all know of visionaries who conceive of a society totally transformed by their invention and who are bitter that the world has not embraced their idea. Sometimes we think of them as crackpots, or simply unrealistic about what it takes to change the world. We practical people necessarily view the world through the filter of what exists, and fail to anticipate disruptive change. Nearly always we are surprised by the rapid acceptance of a transformative idea. If we truly want to encourage innovation through government policies, we are going to have to come to grips with this deep unpredictability of the mass acceptance of a new concept. Works analyzing this phenomenon are widely popular under titles like “The Tipping Point” by Gladwell (2000) or, more recently, the book by Taleb (2007) called The Black Swan, among others.

What causes innovations to be adopted and integrated into economies depends on their ability to satisfy some perceived need by consumers, and that perception may be an artifact of marketing, or fashion, or cultural inertia, or ignorance. Some of the largest and most profitable industries in the developed world—entertainment, automobiles, clothing and fashion accessories, health products, children’s toys, grownups’ toys!—depend on perceptions of need that go far beyond the utilitarian and are notoriously difficult to predict. And yet these industries clearly depend on sophisticated and rapidly advancing technologies to compete in the marketplace. Of course, they do not depend only upon technology. Technologies are part of the environment for innovation, or in a popular and very appropriate metaphor—part of the innovation ecology.

This complexity of innovation and its ecology is conveyed in Chapter One of a currently popular best-seller in the United States called Innovation Nation by the American innovation guru, Kao (2007), formerly on the faculty of the Harvard Business School:

“I define it [innovation],” writes Kao, “as the ability of individuals, companies, and entire nations to continuously create their desired future. Innovation depends on harvesting knowledge from a range of disciplines besides science and technology, among them design, social science, and the arts. And it is exemplified by more than just products; services, experiences, and processes can be innovative as well. The work of entrepreneurs, scientists, and software geeks alike contributes to innovation. It is also about the middlemen who know how to realize value from ideas. Innovation flows from shifts in mind-set that can generate new business models, recognize new opportunities, and weave innovations throughout the fabric of society. It is about new ways of doing and seeing things as much as it is about the breakthrough idea.” (Kao 2007, p. 19).

This is not your standard government-type definition. Gurus, of course, do not have to worry about leading indicators and predictive measures of policy success. Nevertheless, some policy guidance can be drawn from this high level “definition,” and I will do so later.

The first point, then, is that the structural aspects of “science, technology, and innovation” are imperfectly defined, complex, and poorly understood. There is still much work to do to identify measures, develop models, and test them against actual experience before we can say we really know what it takes to foster innovation. The second point I want to make is about the temporal aspects: all three of these complex activities are changing with time. Science, of course, always changes through the accumulation of knowledge, but it also changes through revolutions in its theoretical structure, through its ever-improving technology, and through its evolving sociology. The technology and sociology of science are currently impacted by a rapidly changing information technology. Technology today flows increasingly from research laboratories but the influence of technology on both science and innovation depends strongly on its commercial adoption, that is, on market forces. Commercial scale manufacturing drives down the costs of technology so it can be exploited in an ever-broadening range of applications. The mass market for precision electro-mechanical devices like cameras, printers, and disk drives is the basis for new scientific instrumentation and also for further generations of products that integrate hundreds of existing components in new devices and business models like the Apple iPod and video games, not to mention improvements in old products like cars and telephones. Innovation is changing too as it expands its scope beyond individual products to include all or parts of systems such as supply chains and inventory control, as in the Wal-Mart phenomenon. Apple’s iPod does not stand alone; it is integrated with iTunes software and novel arrangements with media providers.

With one exception, however, technology changes more slowly than it appears because we encounter basic technology platforms in a wide variety of relatively short-lived products. Technology is like a language that innovators use to express concepts in the form of products, and business models that serve (and sometimes create) a variety of needs, some of which fluctuate with fashion. The exception to the illusion of rapid technology change is the pace of information technology, which is no illusion. It has fulfilled Moore’s Law for more than half a century, and it is a remarkable historical anomaly arising from the systematic exploitation of the understanding of the behavior of microscopic matter following the discovery of quantum mechanics. The pace would be much less without a continually evolving market for the succession of smaller, higher capacity products. It is not at all clear that the market demand will continue to support the increasingly expensive investment in fabrication equipment for each new step up the exponential curve of Moore’s Law. The science is probably available to allow many more capacity doublings if markets can sustain them. Let me digress briefly on this point.

Many science commentators have described the twentieth century as the century of physics and the twenty-first as the century of biology. We now know that is misleading. It is true that our struggle to understand the ultimate constituents of matter has now encompassed (apparently) everything of human scale and relevance, and that the universe of biological phenomena now lies open for systematic investigation and dramatic applications in health, agriculture, and energy production. But there are two additional frontiers of physical science, one already highly productive, the other very intriguing. The first is the frontier of complexity, where physics, chemistry, materials science, biology, and mathematics all come together. This is where nanotechnology and biotechnology reside. These are huge fields that form the core of basic science policy in most developed nations. The basic science of the twenty-first century is neither biology nor physics, but an interdisciplinary mix of these and other traditional fields. Continued development of this domain contributes to information technology and much else. I mentioned two frontiers. The other physical science frontier borders the nearly unexploited domain of quantum coherence phenomena. It is a very large domain and potentially a source of entirely new platform technologies not unlike microelectronics. To say more about this would take me too far from our topic. The point is that nature has many undeveloped physical phenomena to enrich the ecology of innovation and keep us marching along the curve of Moore’s Law if we can afford to do so.

I worry about the psychological impact of the rapid advance of information technology. I believe it has created unrealistic expectations about all technologies and has encouraged a casual attitude among policy makers toward the capability of science and technology to deliver solutions to difficult social problems. This is certainly true of what may be the greatest technical challenge of all time—the delivery of energy to large developed and developing populations without adding greenhouse gases to the atmosphere. The challenge of sustainable energy technology is much more difficult than many people currently seem to appreciate. I am afraid that time will make this clear.

Structural complexities and the intrinsic dynamism of science and technology pose challenges to policy makers, but they seem almost manageable compared with the challenges posed by extrinsic forces. Among these are globalization and the impact of global economic development on the environment. The latter, expressed quite generally through the concept of “sustainability,” is likely to be a component of much twenty-first century innovation policy. Measures of development, competitiveness, and innovation need to include sustainability dimensions to be realistic over the long run. Development policies that destroy economically important environmental systems, contribute to harmful global change, and undermine the natural resource basis of the economy are bad policies. Sustainability is now an international issue because the scale of development and the globalization of economies have environmental and natural resource implications that transcend national borders.

From the policy point of view, globalization is not a new phenomenon. Science has been globalized for centuries, and we ought to be studying it more closely as a model for effective responses to the globalization of our economies. What is striking about science is the strong imperative to share ideas through every conceivable channel to the widest possible audience. If you had to name one chief characteristic of science, it would be empiricism. If you had to name two, the other would be open communication of data and ideas. The power of open communication in science cannot be overestimated. It has established, uniquely among human endeavors, an absolute global standard. And it effectively recruits talent from every part of the globe to labor at the science frontiers. The result has been an extraordinary legacy of understanding of the phenomena that shape our existence. Science is the ultimate example of an open innovation system.

Science practice has received much attention from philosophers, social scientists, and historians during the past half-century, and some of what has been learned holds valuable lessons for policy makers. It is fascinating to me how quickly countries that provide avenues to advanced education are able to participate in world science. The barriers to a small but productive scientific activity appear to be quite low and whether or not a country participates in science appears to be discretionary. A small scientific establishment, however, will not have significant direct economic impact. Its value at early stages of development is indirect, bringing higher performance standards, international recognition, and peer role models for a wider population. A science program of any size is also a link to the rich intellectual resources of the world scientific community. The indirect benefit of scientific research to a developing country far exceeds its direct benefit, and policy needs to recognize this. It is counterproductive to base support for science in such countries on a hoped-for direct economic stimulus.

Keeping in mind that the innovation ecology includes far more than science and technology, it should be obvious that within a small national economy innovation can thrive on a very small indigenous science and technology base. But innovators, like scientists, do require access to technical information and ideas. Consequently, policies favorable to innovation will create access to education and encourage free communication with the world technical community. Anything that encourages awareness of the marketplace and all its actors on every scale will encourage innovation.

This brings me back to John Kao’s definition of innovation. His vision of “the ability of individuals, companies, and entire nations to continuously create their desired future” implies conditions that create that ability, including most importantly educational opportunity (Kao 2007, p. 19). The notion that “innovation depends on harvesting knowledge from a range of disciplines besides science and technology” implies that innovators must know enough to recognize useful knowledge when they see it, and that they have access to knowledge sources across a spectrum that ranges from news media and the Internet to technical and trade conferences (2007, p. 19). If innovation truly “flows from shifts in mind-set that can generate new business models, recognize new opportunities, and weave innovations throughout the fabric of society,” then the fabric of society must be somewhat loose-knit to accommodate the new ideas (2007, p. 19). Innovation is about risk and change, and deep forces in every society resist both of these. A striking feature of the US innovation ecology is the positive attitude toward failure, an attitude that encourages risk-taking and entrepreneurship.

All this gives us some insight into what policies we need to encourage innovation. Innovation policy is broader than science and technology policy, but the latter must be consistent with the former to produce a healthy innovation ecology. Innovation requires a predictable social structure, an open marketplace, and a business culture amenable to risk and change. It certainly requires an educational infrastructure that produces people with a global awareness and sufficient technical literacy to harvest the fruits of current technology. What innovation does not require is the creation by governments of a system that defines, regulates, or even rewards innovation except through the marketplace or in response to evident success. Some regulation of new products and new ideas is required to protect public health and environmental quality, but innovation needs lots of freedom. Innovative ideas that do not work out should be allowed to die so the innovation community can learn from the experience and replace the failed attempt with something better.

Do we understand innovation well enough to develop policy for it? If the policy addresses very general infrastructure issues such as education, economic, and political stability and the like, the answer is perhaps. If we want to measure the impact of specific programs on innovation, the answer is no. Studies of innovation are at an early stage where anecdotal information and case studies, similar to John Kao’s book—or the books on Business Week’s top ten list of innovation titles—are probably the most useful tools for policy makers.

I have been urging increased attention to what I call the science of science policy—the systematic quantitative study of the subset of our economy called science and technology—including the construction and validation of micro- and macro-economic models for S&T activity. Innovators themselves, and those who finance them, need to identify their needs and the impediments they face. Eventually, we may learn enough to create reliable indicators by which we can judge the health of our innovation ecosystems. The goal is well worth the sustained effort that will be required to achieve it.

Gladwell, M. (2000). The tipping point: How little things can make a big difference. Boston: Little, Brown and Company.

Kao, J. (2007). Innovation nation: How America is losing its innovation edge, why it matters, and what we can do to get it back. New York: Free Press.

Taleb, N. N. (2007). The black swan: The impact of the highly improbable. New York: Random House.


Author information

Authors and Affiliations

Stony Brook University, Stony Brook, NY, USA

John H. Marburger III


Additional information

John H. Marburger III—deceased

About this article

Marburger, J.H. Science, technology and innovation in a 21st century context. Policy Sci 44, 209–213 (2011). https://doi.org/10.1007/s11077-011-9137-3

Published : 27 August 2011

Issue Date : September 2011

DOI : https://doi.org/10.1007/s11077-011-9137-3



Visual Life



June 9, 2023

The Digital Revolution: How Technology is Changing the Way We Communicate and Interact

This article examines the impact of technology on human interaction and explores the ever-evolving landscape of communication. With the rapid advancement of technology, the methods and modes of communication have undergone a significant transformation. This article investigates both the positive and negative implications of this digitalization. Technological innovations, such as smartphones, social media, and instant messaging apps, have provided unprecedented accessibility and convenience, allowing people to connect effortlessly across distances. However, concerns have arisen regarding the quality and authenticity of these interactions. The article explores the benefits of technology, including improved connectivity, enhanced information sharing, and expanded opportunities for collaboration. It also discusses potential negative effects, including a decline in in-person interactions, a loss of empathy, and an increase in online anxiety. This article seeks to expand our comprehension of the changing nature of communication in the digital age by examining the many ways that technology affects interpersonal interactions. It emphasizes the necessity of intentional and thoughtful communication techniques to preserve meaningful connections in a society that is becoming more and more reliant on technology.

Introduction:

Technology has significantly transformed our modes of communication and interaction over the past few decades, revolutionizing the way we connect with one another. However, the COVID-19 pandemic has acted as a catalyst, expediting this transformative process and necessitating our exclusive reliance on digital tools for socializing, working, and learning. Platforms like social media and video conferencing have emerged in recent years, expanding our options for virtual communication. The impact of these changes on our lives cannot be ignored. In this article, we will delve into the ways in which technology has altered our communication and interaction patterns and explore the consequences of these changes for our relationships, mental well-being, and society.

To gain a deeper understanding of this topic, I have conducted interviews and surveys, allowing us to gather firsthand insights from individuals of various backgrounds. Additionally, we will compare this firsthand information with the perspectives shared by experts in the field. By drawing on both personal experiences and expert opinions, we seek to provide a comprehensive analysis of how technology influences our interpersonal connections. Through this research, we hope to get a deeper comprehension of the complex interactions between technology and people, enabling us to move mindfully and purposefully through the rapidly changing digital environment.

The Evolution of Communication: From Face-to-Face to Digital Connections:

In the realm of communication, we have various mediums at our disposal, such as face-to-face interactions, telephone conversations, and internet-based communication. According to Nancy Baym, an expert in the field of technology and human connections, face-to-face communication is often regarded as the most personal and intimate, while the phone provides a more personal touch than the internet. She explains this in her book Personal Connections in the Digital Age by stating, “Face-to-face is much more personal; phone is personal as well, but not as intimate as face-to-face… Internet would definitely be the least personal, followed by the phone (which at least has the vocal satisfaction) and the most personal would be face-to-face” (Baym 2015).  These distinctions suggest that different communication mediums are perceived to have varying levels of effectiveness in conveying emotion and building relationships. This distinction raises thought-provoking questions about the impact of technology on our ability to forge meaningful connections. While the internet offers unparalleled convenience and connectivity, it is essential to recognize its limitations in reproducing the depth of personal interaction found in face-to-face encounters. These limitations may be attributed to the absence of nonverbal cues, such as facial expressions, body language, and tone of voice, which are vital elements in understanding and interpreting emotions accurately.

Traditionally, face-to-face interactions held a prominent role as the primary means of communication, facilitating personal and intimate connections. However, the rise of technology has brought about significant changes, making communication more convenient but potentially less personal. The rise of phones, instant messaging, and social media platforms has revolutionized how we connect with others. While these digital tools offer instant connectivity and enable us to bridge geographical distances, they introduce a layer of blockage that may impact the depth and quality of our interactions. It is worth noting that different communication mediums have their strengths and limitations. Phone conversations, for instance, retain a certain level of personal connection through vocal interactions, allowing for the conveyance of emotions and tones that text-based communication may lack. However, even with this advantage, phone conversations still fall short of the depth and richness found in face-to-face interactions, as they lack visual cues and physical presence.

Internet-based communication, on the other hand, is considered the least personal medium. Online interactions often rely on text-based exchanges, which may not fully capture the nuances of expression, tone, and body language. While the internet offers the ability to connect with a vast network of individuals and share information on a global scale, it may not facilitate the same depth and authenticity that in-person or phone conversations can provide. As a result, establishing meaningful connections and building genuine relationships in an online setting can be challenging. Research and observations support these ideas. Figure 1. titled “Social Interaction after Electronic Media Use,” shows the potential impact of electronic media on social interaction (source: ResearchGate). This research highlights the need to carefully consider the effects of technology on our interpersonal connections. While technology offers convenience and connectivity, it is essential to strike a balance, ensuring that we do not sacrifice the benefits of face-to-face interactions for the sake of digital convenience.

Figure 1: Hours per day of face-to-face social interaction decline as use of electronic media increases, indicating that greater reliance on electronic media corresponds with less social interaction (Karunaratne et al. 2011).

The Limitations and Effects of Digital Communication

In today’s digital age, the limitations and effects of digital communication are becoming increasingly evident. While the phone and internet offer undeniable benefits such as convenience and the ability to connect with people regardless of geographical distance, they fall short in capturing the depth and richness of a face-to-face conversation. The ability to be in the same physical space as the person we’re communicating with, observing their facial expressions, body language, and truly feeling their presence, is something unique and irreplaceable.

Ulrike Schultze, in her thought-provoking TED Talk titled “How Social Media Shapes Identity,” delves further into the impact of digital communication on our lives by stating, “we construct the technology, but the technology also constructs us. We become what technology allows us to become” (Schultze 2015). This concept highlights how our reliance on digital media for interaction has led to a transformation in how we express ourselves and relate to others.

The influence of social media has been profound in shaping our communication patterns and interpersonal dynamics. Research conducted by Kalpathy Subramanian (2017) examined the influence of social media on interpersonal communication, highlighting the changes it brings to the way we interact and express ourselves. The study found that online communication often involves the use of abbreviations, emoticons, and hashtags, which have become embedded in our online discourse. These digital communication shortcuts prioritize speed and efficiency, but they also contribute to a shift away from the physical act of face-to-face conversation, where nonverbal cues and deeper emotional connections can be fostered.

Additionally, the study emphasizes the impact of social media on self-presentation and identity construction. With the rise of platforms like Facebook, Instagram, and Twitter, individuals have a platform to curate and present themselves to the world. This online self-presentation can influence how we perceive ourselves and how others perceive us, potentially shaping our identities in the process. The study further suggests that the emphasis on self-presentation and the pressure to maintain a certain image on social media can lead to increased stress and anxiety among users.

Interviews:

I conducted interviews with individuals from different age groups to gain diverse perspectives on how technology and social media have transformed the way we connect with others. By exploring the experiences of a 21-year-old student and an individual in their 40s, we can better understand the evolving dynamics of interpersonal communication in the digital age. These interviews shed light on the prevalence of digital communication among younger generations, their preference for convenience, and the concerns raised by individuals from older age groups regarding the potential loss of deeper emotional connections.

When I asked the 21-year-old classmate about how technology has changed the way they interact with people in person, they expressed, “To be honest, I spend more time texting, messaging, or posting on social media than actually talking face-to-face with others. It’s just so much more convenient.” This response highlights the prevalence of digital communication among younger generations and their preference for convenience over traditional face-to-face interactions. It suggests that technology has significantly transformed the way young people engage with others, with a greater reliance on virtual interactions rather than in-person conversations. Additionally, the mention of convenience as a driving factor raises questions about the potential trade-offs in terms of depth and quality of interpersonal connections.

To gain insight from an individual in their 40s, I conducted another interview. When asked about their experiences with technology and social media, they shared valuable perspectives. They mentioned that while they appreciate the convenience and accessibility offered by technology, they also expressed concerns about its impact on interpersonal connections. They emphasized the importance of face-to-face interactions in building genuine relationships and expressed reservations about the potential loss of deeper emotional connections in digital communication. Additionally, they discussed the challenges of adapting to rapid technological advancements and the potential generational divide in communication preferences.

Comparing the responses from both interviews, it is evident that there are generational differences in the perception and use of technology for communication. While the 21-year-old classmate emphasized convenience as a primary factor in favor of digital communication, the individual in their 40s highlighted the importance of face-to-face interactions and expressed concerns about the potential loss of meaningful connections in the digital realm. This comparison raises questions about the potential impact of technology on the depth and quality of interpersonal relationships across different age groups. It also invites further exploration into how societal norms and technological advancements shape individuals’ preferences and experiences.

Overall, the interviews revealed a shift towards digital communication among both younger and older individuals, with varying perspectives. While convenience and connectivity are valued, concerns were raised regarding the potential drawbacks, including the pressure to maintain an idealized online presence and the potential loss of genuine connections. It is evident that technology and social media have transformed the way we communicate and interact with others, but the interviews also highlighted the importance of maintaining a balance and recognizing the value of face-to-face interactions in fostering meaningful relationships.

I have recently conducted a survey with my classmates to gather insights on how technology and social media have influenced communication and interaction among students in their daily lives. Although the number of responses is relatively small, the collected data allows us to gain a glimpse into individual experiences and perspectives on this matter.

One of the questions asked in the survey was how often students rely on digital communication methods, such as texting, messaging, or social media, in comparison to engaging in face-to-face conversations. The responses indicated a clear trend towards increased reliance on digital communication, with 85% of participants stating that they frequently use digital platforms as their primary means of communication. This suggests a significant shift away from traditional face-to-face interactions, highlighting the pervasive influence of technology in shaping our communication habits.

Furthermore, the survey explored changes in the quality of interactions and relationships due to the increased use of technology and social media. Interestingly, 63% of respondents reported that they had noticed a decrease in the depth and intimacy of their connections since incorporating more digital communication into their lives. Many participants expressed concerns about the difficulty of conveying emotions effectively through digital channels and the lack of non-verbal cues that are present in face-to-face interactions. It is important to note that while the survey results provide valuable insights into individual experiences, they are not representative of the entire student population. The small sample size limits the generalizability of the findings. However, the data collected does shed light on the potential impact of technology and social media on communication and interaction patterns among students.

Expanding on the topic, I found an insightful figure from Business Insider that sheds light on how people utilize their smartphones (Ballve 2013). Figure 2 illustrates the average smartphone owner’s daily time spent on various activities. Notably, communication activities such as texting, talking, and social networking account for a significant portion, comprising 59% of phone usage. This data reinforces the impact of digital communication on our daily lives, indicating the substantial role it plays in shaping our interactions with others. Upon comparing this research with the data I have gathered, a clear trend emerges, highlighting that an increasing number of individuals primarily utilize their smartphones for communication and interaction purposes.

Figure 2: The breakdown of daily smartphone usage among average users clearly demonstrates that the phone is primarily used for interactions.

The Digital Make Over:

In today’s digital age, the impact of technology on communication and interaction is evident, particularly in educational settings. As a college student, I have witnessed the transformation firsthand, especially with the onset of the COVID-19 pandemic. The convenience of online submissions for assignments has led to a growing trend of students opting to skip physical classes, relying on the ability to submit their work remotely. Unfortunately, this shift has resulted in a decline in face-to-face interactions and communication among classmates and instructors.

The decrease in physical attendance raises concerns about the potential consequences for both learning and social connections within the academic community. Classroom discussions, collaborative projects, and networking opportunities are often fostered through in-person interactions. By limiting these experiences, students may miss out on valuable learning moments, diverse perspectives, and the chance to establish meaningful connections with their peers and instructors.

Simon Lindgren, in his thought-provoking TED Talk, “Media Are Not Social, but People Are,” delves deeper into the effects of technology and social media on our interactions. Lindgren highlights a significant point by suggesting that while technology may have the potential to make us better individuals, we must also recognize its potential pitfalls. Social media, for instance, can create filter bubbles that limit our exposure to diverse viewpoints, making us less in touch with reality and more narrow-minded. This cautionary reminder emphasizes the need to approach social media thoughtfully, seeking out diverse perspectives and avoiding the pitfalls of echo chambers. Furthermore, it is crucial to strike a balance between utilizing technology for educational purposes and embracing the benefits of in-person interactions. While technology undoubtedly facilitates certain aspects of education, such as online learning platforms and digital resources, we must not overlook the importance of face-to-face communication. In-person interactions allow for nuanced non-verbal cues, deeper emotional connections, and real-time engagement that contribute to a more comprehensive learning experience.

A study conducted by Times Higher Education delved into this topic, providing valuable insights. Figure 3. from the study illustrates a significant drop in attendance levels after the pandemic’s onset. Undeniably, technology played a crucial role in facilitating the transition to online learning. However, it is important to acknowledge that this shift has also led to a decline in face-to-face interactions, which have long been regarded as essential for effective communication and relationship-building. While technology continues to evolve and reshape the educational landscape, it is imperative that we remain mindful of its impact on communication and interaction. Striking a balance between digital tools and in-person engagement can help ensure that we leverage the benefits of technology while preserving the richness of face-to-face interactions. By doing so, we can foster a holistic educational experience that encompasses the best of both worlds and cultivates meaningful connections among students, instructors, and the academic community.

Figure 3: In-person class attendance dropped sharply after the onset of the COVID-19 pandemic, a period that coincided with the widespread adoption of online submission options (source: Times Higher Education).

When asked about the impact of online submissions for assignments on physical attendance in classes, the survey revealed mixed responses. While 73% of participants admitted that the convenience of online submissions has led them to skip classes occasionally, 27% emphasized the importance of in-person attendance for better learning outcomes and social interactions. This finding suggests that while technology offers convenience, it also poses challenges in maintaining regular face-to-face interactions, potentially hindering educational and social development and, in particular, the way we communicate and interact with one another. Students adopt these habits from a young age, and the effects become apparent once they try to enter the workforce and interact with others. Examining the survey data alongside the findings from Times Higher Education reveals striking similarities in how students approach attending classes in person, with the overall conclusion being a marked decrease in attendance that reduces opportunities for real-life interaction and communication. Moreover, the convenience and instant gratification provided by technology can create a sense of detachment and impatience in interpersonal interactions. Online platforms allow for quick and immediate responses, and individuals can easily disconnect or switch between conversations. This can result in a lack of attentiveness and reduced focus on the person with whom one is communicating, leading to a superficial engagement that may hinder the establishment of genuine connections.

Conclusion:

Ultimately, the digital revolution has profoundly transformed the way we communicate and interact with one another. The COVID-19 pandemic has accelerated this transformation, leading to increased reliance on digital tools for socializing, working, and learning. While technology offers convenience and connectivity, it also introduces limitations and potential drawbacks. The shift towards digital communication raises concerns about the depth and quality of our connections, as well as the potential loss of face-to-face interactions. However, it is essential to strike a balance between digital and in-person engagement, recognizing the unique value of physical presence, non-verbal cues, and deeper emotional connections that face-to-face interactions provide. By navigating the digital landscape with mindfulness and intentionality, we can harness the transformative power of technology while preserving and nurturing the essential elements of human connection.

Moving forward, it is crucial to consider the impact of technology on our relationships, mental well-being, and society. As technology continues to evolve, we must be cautious of its potential pitfalls, such as the emphasis on self-presentation, the potential for increased stress and anxiety, and the risk of forgetting how to interact in person. Striking a balance between digital and face-to-face interactions can help ensure that technology enhances, rather than replaces, genuine human connections. By prioritizing meaningful engagement, valuing personal interactions, and leveraging the benefits of technology without compromising the depth and quality of our relationships, we can navigate the digital revolution in a way that enriches our lives and fosters authentic connections.

References:

Ballve, M. (2013, June 5). How much time do we really spend on our smartphones every day? Business Insider. Retrieved April 27, 2023. https://www.businessinsider.com/how-much-time-do-we-spend-on-smartphones-2013-6

Baym, N. (2015). Personal Connections in the Digital Age (2nd ed.). Polity.

Karunaratne, I., Atukorale, A., & Perera, H. (2011). Surveillance of human-computer interactions: A way forward to detection of users' psychological distress. 2011 IEEE Colloquium on Humanities, Science and Engineering (CHUSER). 10.1109/CHUSER.2011.6163779. https://www.researchgate.net/figure/Social-interaction-vs-electronic-media-use-Hours-per-day-of-face-to-face-social_fig1_254056654

Lindgren, S. (2015, May 20). Media are not social, but people are | Simon Lindgren | TEDxUmeå . YouTube. Retrieved April 27, 2023, from https://www.youtube.com/watch?v=nQ5S7VIWE6k

Ross, J., McKie, A., Havergal, C., Lem, P., & Basken, P. (2022, October 24). Class attendance plummets post-Covid . Times Higher Education (THE). Retrieved April 27, 2023, from https://www.timeshighereducation.com/news/class-attendance-plummets-post-covid

Schultze, U. (2015, April 23). How social media shapes identity | Ulrike Schultze | TEDxSMU . YouTube. Retrieved April 27, 2023, from https://www.youtube.com/watch?v=CSpyZor-Byk

Subramanian, K. R. Influence of social media in interpersonal communication. ResearchGate. www.researchgate.net/profile/Kalpathy-Subramanian/publication/319422885_Influence_of_Social_Media_in_Interpersonal_Communication/links/59a96d950f7e9b2790120fea/Influence-of-Social-Media-in-Interpersonal-Communication.pdf. Accessed 12 May 2023.

information technology in 21st century essay

Author: Anonymous

Published: June 9, 2023

Creative Commons CC-BY-ND Attribution-NoDerivs License



Leverage Edu


Essay on Information Technology in 400 Words


  • Updated on Dec 2, 2023


Essay on Information Technology: Information Technology is the study of computer systems and telecommunications for storing, retrieving, and transmitting information using the Internet. Today, we rely on information technology to collect and transfer data from and on the internet. Say goodbye to the conventional lifestyle and hello to the realm of augmented reality (AR) and virtual reality (VR).



Scientific discoveries have given birth to Information Technology (IT), which has revolutionized our way of living. Rapid developments in technology have boosted the growth of IT, which has changed the entire world. Students are taught online using smartboards, virtual meetings are conducted between countries to enhance diplomatic ties, online surveys are conducted to spread social awareness, e-commerce platforms are used for online shopping, and so on.

Information Technology has made sharing and collecting information easier, putting it at our fingertips. We can learn new things with just a click. IT tools have enhanced global communication, through which we can foster economic cooperation and innovation. Almost every business in the world relies on Information Technology for growth and development. Dependence on information technology is growing throughout the world.


Advantages

  • Everyday activities like texting, calling, and video chatting have made communication more efficient.
  • E-commerce platforms like Amazon and Flipkart have become a source of online shopping.
  • E-learning platforms have made education more accessible.
  • The global economy has significantly improved.
  • The healthcare sector has been revolutionized by the introduction of Electronic Health Records (EHR) and telemedicine.
  • Local businesses have expanded into global businesses.
  • Access to any information on the internet in real time.


Disadvantages

Apart from the above-mentioned advantages, Information Technology also has some disadvantages.

  • Cybersecurity and data breaches are among the most serious issues.
  • There is a digital divide between people who have access to information technology and those who do not.
  • Over-reliance on the IT sector makes us vulnerable to technical glitches, system failures, and cyber-attacks.
  • Excessive use of electronic devices and exposure to screens contribute to health issues.
  • Electronic devices have short lifecycles due to rapid technological change.
  • Challenges like copyright infringement and intellectual property disputes rise with the ease of digital reproduction and distribution.
  • Our traditional forms of entertainment have been transformed by online streaming platforms, where we can watch movies and play games online.

The modern world heavily relies on information technology. It has fundamentally reshaped the way we live and work, but we also need to strike a balance between its use and overuse. We must pay attention to the challenges it brings in order to build a sustainable and equitable society.




FAQs

Ans: Information technology is an indispensable part of our lives and has revolutionized the way we connect, work, and live. The IT sector involves the use of computers and electronic gadgets to store, transmit, and retrieve data. In recent years, there have been rapid changes in the IT sector, which have transformed the world into a global village where information can be exchanged in real time across vast distances.

Ans: The IT sector is one of the fastest-growing sectors in the world. It includes IT services, e-commerce, the Internet, and software and hardware products. The IT sector helps boost productivity and efficiency: computer applications and digital systems allow people to perform multiple tasks at a faster rate. The IT sector creates new opportunities for businesses, professionals, and consumers alike.

Ans: There are four basic concepts of the IT sector: Information security, business software development, computer technical support, and database and network management.



Shiva Tyagi

With over a year of experience, I have developed a passion for writing blogs on a wide range of topics. I am mostly inspired by topics related to social and environmental fields, where writing can lead to a positive outcome.



  • Open access
  • Published: 04 December 2018

The computer for the 21st century: present security & privacy challenges

  • Leonardo B. Oliveira 1 ,
  • Fernando Magno Quintão Pereira 2 ,
  • Rafael Misoczki 3 ,
  • Diego F. Aranha 4 ,
  • Fábio Borges 5 ,
  • Michele Nogueira 6 ,
  • Michelle Wangham 7 ,
  • Min Wu 8 &
  • Jie Liu 9  

Journal of Internet Services and Applications, volume 9, Article number: 24 (2018)


Decades have gone by since Mark Weiser published his influential work on the computer of the 21st century. Over the years, some of the UbiComp features presented in that paper have been gradually adopted by industry players in the technology market. While this technological evolution has brought many benefits to our society, it has also posed, along the way, countless challenges that we have yet to overcome. In this paper, we address major challenges from the areas that most afflict the UbiComp revolution:

Software Protection: weakly typed languages, polyglot software, and networked embedded systems.

Long-term Security: recent advances in cryptanalysis and quantum attacks.

Cryptography Engineering: lightweight cryptosystems and their secure implementation.

Resilience: issues related to service availability and the paramount role of resilience.

Identity Management: requirements to identity management with invisibility.

Privacy Implications: sensitivity data identification and regulation.

Forensics: trustworthy evidence from the synergy of digital and physical world.

We point out directions towards the solutions of those problems and claim that if we get all this right, we will turn the science fiction of UbiComp into science fact.

1 Introduction

In 1991, Mark Weiser described a vision of the Computer for the 21st Century [ 1 ]. Weiser, in his prophetic paper, argued the most far-reaching technologies are those that allow themselves to disappear, vanish into thin air. According to Weiser, this oblivion is a human – not a technological – phenomenon: “Whenever people learn something sufficiently well, they cease to be aware of it,” he claimed. This event is called “tacit dimension” or “compiling” and can be witnessed, for instance, when drivers react to street signs without consciously having to process the letters S-T-O-P [ 1 ].

A quarter of a century later, however, Weiser's dream is far from becoming true. Over the years, many of his concepts regarding pervasive and ubiquitous computing (UbiComp) [2, 3] have been materialized into what today we call Wireless Sensor Networks [4, 5], the Internet of Things [6, 7], Wearables [8, 9], and Cyber-Physical Systems [10, 11]. The applications of these systems range from traffic-accident and CO2 emission monitoring to autonomous automobiles and in-home patient care. Nevertheless, besides all their benefits, the advent of these systems has also brought about some drawbacks. And, unless we address them appropriately, the continuity of Weiser's prophecy will be at stake.

UbiComp poses new drawbacks because, vis-à-vis traditional computing, it exhibits an entirely different outlook [12]. Computer systems in UbiComp, for instance, feature sensors, CPUs, and actuators. Respectively, this means they can hear (or spy on) the user, process her/his data (and possibly find out something confidential about her/him), and respond to her/his actions (or, ultimately, expose her/him by revealing some secret). These capabilities, in turn, make proposals for conventional computers ill-suited to the UbiComp setting and present new challenges.

In the above scenarios, some of the most critical challenges lie in the areas of Security and Privacy [13]. This is so because the market and users often pursue systems full of features at the expense of proper operation and protection, even as computing elements pervade our daily lives and the demand for stronger security schemes becomes greater than ever. Notably, there is a dire need for a secure mechanism able to encompass all aspects and manifestations of UbiComp, across time as well as space, in a seamless and efficient manner.

In this paper, we discuss contemporary security and privacy issues in the context of UbiComp (Fig.  1 ). We examine multiple research problems still open and point to promising approaches towards their solutions. More precisely, we investigate the following challenges and their ramifications.

figure 1

Current security and privacy issues in UbiComp

Software protection in Section 2 : we study the impact of the adoption of weakly typed languages by resource-constrained devices and discuss mechanisms to mitigate this impact. We go over techniques to validate polyglot software (i.e., software based on multiple programming languages), and revisit promising methods to analyze networked embedded systems.

Long-term security in Section 3 : we examine the security of today’s widely used cryptosystems (e.g., RSA and ECC-based), present some of the latest threats (e.g., the advances in cryptanalysis and quantum attacks), and explore new directions and challenges to guarantee long-term security in the UbiComp setting.

Cryptography engineering in Section 4 : we restate the essential role of cryptography in safeguarding computers, discuss the status quo of lightweight cryptosystems and their secure implementation, and highlight challenges in key management protocols.

Resilience in Section 5 : we highlight issues related to service availability and we reinforce the importance of resilience in the context of UbiComp.

Identity Management in Section 6 : we examine the main requirements to promote identity management (IdM) in UbiComp systems to achieve invisibility, revisit the most used federated IdM protocols, and explore open questions, research opportunities to provide a proper IdM approach for pervasive computing.

Privacy implications in Section 7 : we explain why security is necessary but not sufficient to ensure privacy, go over important privacy-related issues (e.g., sensitivity data identification and regulation), and discuss some tools of the trade to fix those (e.g., privacy-preserving protocols based on homomorphic encryption).

Forensics in Section 8: we present the benefits of the synergistic use of physical and digital evidence to facilitate trustworthy operation of cyber systems.

We believe that only if we tackle these challenges right can we turn the science fiction of UbiComp into science fact.

In particular, we chose to address the areas above because they represent promising research directions and cover different aspects of UbiComp security and privacy.

2 Software protection

Modern UbiComp systems are rarely built from scratch. Components developed by different organizations, with different programming models and tools, and under different assumptions are integrated to offer complex capabilities. In this section, we analyze the software ecosystem that emerges from such a world. Figure 2 provides a high-level representation of this ecosystem. In the rest of this section, we shall focus especially on three aspects of this environment, which pose security challenges to developers: the security shortcomings of C and C++, the dominant programming languages among cyber-physical implementations; the interactions between these languages and other programming languages; and the consequences of these interactions on the distributed nature of UbiComp applications. We start by diving deeper into the idiosyncrasies of C and C++.

figure 2

A UbiComp System is formed by modules implemented as a combination of different programming languages. This diversity poses challenges to software security

2.1 Type safety

A great deal of the software used in UbiComp systems is implemented in C or in C++. This fact is natural, given the unparalleled efficiency of these two programming languages. However, if, on the one hand, C and C++ yield efficient executables, on the other hand, their weak type systems give origin to a plethora of software vulnerabilities. In programming-language argot, we say that a type system is weak when it does not support two key properties: progress and preservation [14]. The formal definitions of these properties are immaterial for the discussion that follows. It suffices to know that, as a consequence of weak typing, neither C nor C++ ensures, for instance, bounded memory accesses. Therefore, programs written in these languages can access invalid memory positions. As an illustration of the dangers incurred by this possibility, it suffices to know that out-of-bounds accesses are the principle behind buffer overflow exploits.

The software security community has been developing different techniques to deal with the intrinsic vulnerabilities of C/C++/assembly software. Such techniques can be fully static, fully dynamic or a hybrid of both approaches. Static protection mechanisms are implemented at the compiler level; dynamic mechanisms are implemented at the runtime level. In the rest of this section, we list the most well-known elements in each category.

Static analyses provide a conservative estimate of program behavior, without requiring the execution of the program. This broad family of techniques includes, for instance, abstract interpretation [15], model checking [16] and guided proofs [17]. The main advantages of static analyses are their low runtime overhead and their soundness: inferred properties are guaranteed to always hold true. However, static analyses also have disadvantages. In particular, most of the interesting properties of programs lie in undecidable territory [18]. Furthermore, the verification of many formal properties, even when decidable, incurs a prohibitive computational cost [19].

Dynamic analyses come in several flavors: testing (KLEE [20]), profiling (Aprof [21], Gprof [22]), symbolic execution (DART [23]), emulation (Valgrind [24]), and binary instrumentation (Pin [25]). The virtues and limitations of dynamic analyses are exactly the opposite of those found in static techniques. Dynamic analyses usually do not raise false alarms: bugs are described by examples, which normally lead to consistent reproduction [26]. However, they are not guaranteed to find every security vulnerability in software. Furthermore, the runtime overhead of dynamic analyses still makes them prohibitive to deploy in production software [27].

As a middle point, several research groups have proposed ways to combine static and dynamic analyses, producing different kinds of hybrid approaches to secure low-level code. This combination might yield security guarantees that are strictly more powerful than what could be obtained by either the static or the dynamic approaches, when used separately [ 28 ]. Nevertheless, negative results still hold: if an attacker can take control of the program, usually he or she can circumvent state-of-the-art hybrid protection mechanisms, such as control flow integrity [ 29 ]. This fact is, ultimately, a consequence of the weak type system adopted by languages normally seen in the implementation of UbiComp systems. Therefore, the design and deployment of techniques that can guard such programming languages, without compromising their efficiency to the point where they will no longer be adequate to UbiComp development, remains an open problem.

In spite of the difficulties of bringing formal methods to play a larger role in the design and implementation of programming languages, much has already been accomplished in this field. Testimony to this statement is the fact that today researchers are able to ensure the safety of entire operating system kernels, as demonstrated by Gerwin et al. [ 30 ], and to ensure that compilers meet the semantics of the languages that they process [ 31 ]. Nevertheless, it is reasonable to think that certain safety measures might come at the cost of performance and therefore we foresee that much of the effort of the research community in the coming years will be dedicated to making formal methods not only more powerful and expressive, but also more efficient to be used in practice.

2.2 Polyglot programming

Polyglot programming is the art and discipline of writing source code that involves two or more programming languages. It is common among implementations of cyber-physical systems. As an example, Ginga, the Brazilian protocol for digital TV, is mostly implemented in Lua and C [ 32 ]. Figure  3 shows an example of communication between a C and a Lua program. Other examples of interactions between programming languages include bindings between C and Python [ 33 ], C and Elixir [ 34 ] and the Java Native Interface [ 35 ]. Polyglot programming complicates the protection of systems. Difficulties arise due to a lack of multi-language tools and due to unchecked memory bindings between C/C++ and other languages.

figure 3

Two-way communication between a C and a Lua program

An obstacle to the validation of polyglot software is the lack of tools that analyze source code written in different programming languages, under a unified framework. Returning to Fig.  3 , we have a system formed by two programs, written in different programming languages. Any tool that analyzes this system as a whole must be able to parse these two distinct syntaxes and infer the connection points between them. Work has been performed towards this end, but solutions are still very preliminary. As an example, Maas et al. [ 33 ] have implemented automatic ways to check if C arrays are correctly read by Python programs. As another example, Furr and Foster [ 36 ] have described techniques to ensure type-safety of OCaml-to-C and Java-to-C bindings.

A promising direction to analyze polyglot systems is based on the idea of compilation of source code that is only partially available. This feat consists of reconstructing the missing syntax and the missing declarations necessary to produce a minimal version of the original program that can be analyzed by typical tools. The analysis of partially available code makes it possible to test parts of a polyglot program separately, in a way that produces a cohesive view of the entire system. This technique has been demonstrated to yield analyzable Java source code [37] and compilable C code [38]. Notice that this type of reconstruction is not restricted to high-level programming languages. Testimony to this fact is the notion of micro execution, introduced by Patrice Godefroid [39]. Godefroid's tool allows the testing of x86 binaries, even when object files are missing. Nevertheless, in spite of these developments, the reconstruction is still restricted to the static semantics of programs. The synthesis of behavior is a thriving discipline in computer science [40], but still far away from enabling the certification of polyglot systems.

2.3 Distributed programming

Ubiquitous computing systems tend to be distributed. It is even difficult to conceive any use for an application in this world that does not interact with other programs. And it is common knowledge that distributed programming opens up several doors to malicious users. Therefore, to make cyber-physical technology safer, security tools must be aware of the distributed nature of such systems. Yet, two main challenges stand in front of this requirement: the difficulty to build a holistic view of the distributed application, and the lack of semantic information bound to messages exchanged between processes that communicate through a network.

To be accurate, the analysis of a distributed system needs to account for the interactions between the several program parts that constitute the system [41]. Discovering such interactions is difficult, even if we restrict ourselves to code written in a single programming language. Difficulties stem from the lack of semantic information associated with operations that send and receive messages. In other words, such operations are defined as part of a library, not as part of the programming language itself. Notwithstanding this fact, there are several techniques that infer communication channels between different pieces of source code. As examples, we have the algorithms of Greg Bronevetsky [42] and Teixeira et al. [43], which build a distributed view of a program's control flow graph (CFG). Classic static analyses work without further modification on this distributed CFG. However, the distributed CFG is still a conservative approximation of program behavior. Thus, it forces already imprecise static analyses to deal with communication channels that might never exist during the execution of the program. The rising popularity of actor-based libraries, like those available in languages such as Elixir [34] and Scala [44], is likely to mitigate the channel-inference problem. In the actor model, channels are explicit in the messages exchanged between the different processing elements that constitute a distributed system. Nevertheless, whether this model will be widely adopted by the IoT community remains to be seen.

Tools that perform automatic analyses of programs rely on static information to produce more precise results. In this sense, types are core to the understanding of software. For instance, in Java and other object-oriented languages, the type of an object determines how information flows along the program code. Despite this importance, however, messages exchanged in the vast majority of distributed systems are not typed. The reason is that such messages, at least in C, C++ and assembly software, are arrays of bytes. There have been two major efforts to mitigate this problem: the addition of messages as first-class values to programming languages, and the implementation of points-to analyses able to deal with pointer arithmetic in languages that lack such a feature. Concerning the first front, several programming languages, such as Scala, Erlang and Elixir, incorporate messages as basic constructs, providing developers with very expressive ways to implement the actor model [45], a core foundation of distributed programming. Even though the construction of programming abstractions around the actor model is not a new idea [45], their rising popularity seems to be a phenomenon of the 2000s, boosted by increasingly more expressive abstractions [46] and increasingly more efficient implementations [47]. On the second front, researchers have devised analyses that infer the contents [48] and the size of arrays [49] in weakly typed programming languages. More importantly, recent years have seen a new flurry of algorithms designed to analyze C/C++-style pointer arithmetic [50 – 53]. The wide adoption of higher-level programming languages, coupled with the construction of new tools to analyze lower-level languages, is exciting. This trend seems to indicate that the programming languages community is dedicating ever more attention to the task of implementing safer distributed software. Therefore, even though the design of tools able to analyze the very fabric of UbiComp still poses several challenges to researchers, we can look to the future with optimism.

3 Long-term security

Various UbiComp systems are designed for a lifespan of many years, even decades [54, 55]. Systems in the context of critical infrastructure, for example, often require an enormous financial investment to be designed and deployed in the field [56], and therefore offer a better return on investment if they remain in use for a longer period of time. The automotive area is a field of particular interest. Vehicles are expected to be reliable for decades [57], and renewing vehicle fleets or updating features (recalls) increases costs for their owners. Note that modern vehicles are part of the UbiComp ecosystem, as they are equipped with embedded devices with Internet connectivity. In the future, vehicles are expected to depend even more on data collected and shared across other vehicles and infrastructure through wireless technologies [58] in order to enable enriched driving experiences such as autonomous driving [59].

It is also worth mentioning that systems designed to endure a lifespan of several years or decades might suffer from a lack of future maintenance. Competition among players able to innovate is very aggressive, leading to a high rate of companies going out of business within a few years [60]. A world inundated with devices without proper maintenance will pose serious challenges in the future [61].

From the few aforementioned examples, it is already evident that there is an increasing need for UbiComp systems to be reliable for a longer period of time while, whenever possible, requiring as few updates as possible. These requirements have a direct impact on the security features of such systems: comparatively speaking, they offer fewer opportunities for patching eventual security breaches than conventional systems. This is a critical situation given the rapid and dynamic progress in devising and exploiting new security breaches. Therefore, it is of utmost importance to understand the scientific challenges of ensuring long-term security from the early stages of the design of a UbiComp system, instead of resorting to palliative measures a posteriori.

3.1 Cryptography as the core component

Ensuring long-term security is a quite challenging task for any system, not only for UbiComp systems. At a minimum, it requires that every single security component is future-proof by itself and also when connected to other components. To simplify this excessively large attack surface and still be able to provide helpful recommendations, we will focus our attention on the main ingredient of most security mechanisms, as highlighted in Section 4 , i.e. Cryptography.

There are numerous types of cryptographic techniques. The most traditional ones rely on the hardness of computational problems such as integer factorization [62] and discrete logarithm problems [63, 64]. These problems are believed to be intractable by current cryptanalysis techniques and the available technological resources. Because of that, cryptographers were able to build secure instantiations of cryptosystems based on such computational problems. For various reasons (to be discussed in the following sections), however, the future-proof condition of such schemes is at stake.

3.2 Advancements in classical cryptanalysis

The first threat to the future-proof condition of any cryptosystem refers to potential advancements in cryptanalysis, i.e., in techniques aiming at solving the underlying security problem in a more efficient way (with less processing time, memory, etc.) than originally predicted. Widely-deployed schemes have a long track record of academic and industrial scrutiny, and therefore one would expect little or no progress on the cryptanalysis techniques targeting such schemes. Yet, the literature has recently shown some interesting and unexpected results that may suggest the opposite.

In [ 65 ], for example, Barbulescu et al. introduced a new quasi-polynomial algorithm to solve the discrete logarithm problem in finite fields of small characteristic. The discrete logarithm problem is the underlying security problem of the Diffie-Hellman Key Exchange [ 66 ], the Digital Signature Algorithm [ 67 ] and their elliptic curve variants (ECDH [ 68 ] and ECDSA [ 67 ], respectively), just to mention a few widely-deployed cryptosystems. This cryptanalytic result is restricted to finite fields of small characteristic, an important limitation for attacking real-world implementations of the aforementioned schemes. However, any sub-exponential algorithm that solves a longstanding problem should be seen as a relevant indication that the cryptanalysis literature might still be subject to eventual breakthroughs.

This situation should be considered by architects designing UbiComp systems that have long-term security as a requirement. Implementations that support multiple (i.e., higher than usual) security levels are preferable to those fixed to a single key size. The same approach used for keys should be applied to other quantities in the scheme that impact its overall security. In this way, UbiComp systems can consciously accommodate future cryptanalytic advancements or, at the very least, reduce the costs of security upgrades.
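One way to realize this crypto-agility is to look parameters up by security level rather than hard-coding them, so that a future upgrade is a table entry rather than a redesign. The sketch below illustrates the idea; the profile names and sizes follow common classical guidance but are assumptions of this example, not prescriptions from the text.

```python
# Sketch of crypto-agility: algorithm parameters are selected through
# a table keyed by security level, so new levels can be added without
# touching the calling code. Sizes follow common classical guidance.
SECURITY_PROFILES = {
    128: {"sym_key_bits": 128, "hash_bits": 256, "ecc_bits": 256},
    192: {"sym_key_bits": 192, "hash_bits": 384, "ecc_bits": 384},
    256: {"sym_key_bits": 256, "hash_bits": 512, "ecc_bits": 521},
}

def profile(level_bits):
    """Return the parameter set for a requested security level."""
    if level_bits not in SECURITY_PROFILES:
        raise ValueError(f"unsupported security level: {level_bits}")
    return SECURITY_PROFILES[level_bits]

# A device negotiating 192-bit security picks all sizes consistently:
assert profile(192)["hash_bits"] == 384
```

If a cryptanalytic advance weakens one primitive, only the corresponding table entries need to grow, which keeps the upgrade path cheap.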

3.3 Future disruption due to quantum attacks

Quantum computers are expected to offer dramatic speedups to solve certain computational problems, as foreseen by Daniel R. Simon in his seminal paper on quantum algorithms [ 69 ]. Some of these speedups may enable significant advancements to technologies currently limited by their algorithmic inefficiency [ 70 ]. On the other hand, to our misfortune, some of the affected computational problems are the ones currently being used to secure widely-deployed cryptosystems.

As an example, Lov K. Grover introduced a quantum algorithm [ 71 ] able to find an element in the domain of a function (of size N ) which leads, with high probability, to a desired output in only \(O(\sqrt {N})\) steps. This algorithm can be used to speed up the cryptanalysis of symmetric cryptography. Block ciphers with n -bit keys, for example, would offer only n /2 bits of security against a quantum adversary. Hash functions would be affected in ways that depend on the expected security property. In more detail, hash functions with n -bit digests would offer only n /3 bits of security against collision attacks and n /2 bits of security against pre-image attacks. Table  1 summarizes this assessment. In this context, AES-128 and SHA-256 (collision resistance) would not meet the minimum acceptable security level of 128 bits (of quantum security). Note that both block ciphers and hash function constructions will remain secure if longer keys and digest sizes are employed. However, this leads to important performance challenges. AES-256, for example, is about 40% less efficient than AES-128 (due to its 14 rounds, instead of 10).
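The arithmetic behind this assessment can be made explicit. The short sketch below applies the generic reductions stated above (Grover halves key-search security; quantum collision finding drops to one third of the digest size) to the examples from the text:

```python
# Effective security (in bits) against a quantum adversary, following
# the generic speedups discussed in the text.
def quantum_key_security(key_bits):
    return key_bits // 2          # Grover: O(sqrt(N)) key search

def quantum_collision_security(digest_bits):
    return digest_bits // 3       # quantum collision finding

def quantum_preimage_security(digest_bits):
    return digest_bits // 2       # Grover over the input space

# AES-128 and SHA-256 (collisions) fall below a 128-bit target:
assert quantum_key_security(128) == 64
assert quantum_collision_security(256) == 85
# SHA-256 pre-image resistance and AES-256 still meet 128 bits:
assert quantum_preimage_security(256) == 128
assert quantum_key_security(256) == 128
```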

Even more critical than the scenario for symmetric cryptography, quantum computers will offer an exponential speedup to attack most of the widely-deployed public-key cryptosystems. This is due to Peter Shor’s algorithm [ 72 ] which can efficiently factor large integers and compute the discrete logarithm of an element in large groups in polynomial time. The impact of this work will be devastating to RSA and ECC-based schemes as increasing the key sizes would not suffice: they will need to be completely replaced.

In the field of quantum-resistant public-key cryptosystems, i.e. alternative public-key schemes that can withstand quantum attacks, several challenges need to be addressed. The first one refers to establishing a consensus in both academia and industry on how to defeat quantum attacks. In particular, there are two main techniques considered capable of withstanding quantum attacks, namely: post-quantum cryptography (PQC) and quantum cryptography (QC). The former is based on different computational problems believed to be so hard that not even quantum computers would be able to tackle them. One important benefit of PQC schemes is that they can be implemented and deployed on the computers currently available [ 73 – 77 ]. The latter (QC) depends on the existence and deployment of a quantum infrastructure, and is restricted to key-exchange purposes [ 78 ]. The limited capabilities and the very high costs of deploying quantum infrastructure should eventually lead to a consensus towards the post-quantum cryptography trend.

There are several PQC schemes available in the literature. Hash-Based Signatures (HBS), for example, are the most widely accepted solutions for digital signatures. The most modern constructions [ 76 , 77 ] represent improvements of the Merkle signature scheme [ 74 ]. One important benefit of HBS is that their security relies solely on certain well-known properties of hash functions (thus they are secure against quantum attacks, assuming appropriate digest sizes are used). Regarding other security features, such as key exchange and asymmetric encryption, the academic and industry communities have not reached a consensus yet, although both the code-based and lattice-based cryptography literatures have already presented promising schemes [ 79 – 85 ]. Isogeny-based cryptography [ 86 ] is a much more recent approach that enjoys certain practical benefits (such as fairly small public key sizes [ 87 , 88 ]), although it has just started to benefit from a more comprehensive understanding of its cryptanalytic properties [ 89 ]. Regarding standardization efforts, NIST has recently started a Standardization Process on Post-Quantum Cryptography schemes [ 90 ] which should take at least a few more years to be concluded. The current absence of standards represents an important challenge. In particular, future interoperability problems might arise.

Finally, another challenge in the context of post-quantum public-key cryptosystems refers to potentially new implementation requirements or constraints. As mentioned before, hash-based signatures are very promising post-quantum candidates (given the efficiency and security of hash functions) but also lead to a new set of implementation challenges, such as the task of keeping the scheme state secure. In more detail, most HBS schemes have private keys (their state ) that evolve over time. If rigid state management policies are not in place, a signer can re-use the same private key twice, something that would void the security guarantees offered by the scheme. Recently, initial works addressing these new implementation challenges have appeared in the literature [ 91 ]. A recently introduced HBS construction [ 92 ] showed how to get rid of the state management issue at the price of much larger signatures. These examples indicate potentially new implementation challenges for PQC schemes that must be addressed by UbiComp systems architects.
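The one-time nature of HBS building blocks, and the state that must guard it, can be seen in a toy Lamport one-time signature, the basic primitive underlying Merkle-style schemes. This is a simplified sketch for illustration (full HBS schemes combine many such keys in a tree); the `used` flag plays the role of the state that stateful schemes must manage.

```python
import hashlib
import secrets

class LamportOTS:
    """Toy Lamport one-time signature. The '_used' flag is the state
    that stateful HBS schemes must manage: signing two different
    messages with one key leaks enough secrets to forge signatures."""

    def __init__(self):
        self._sk = [(secrets.token_bytes(32), secrets.token_bytes(32))
                    for _ in range(256)]
        self.pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest())
                   for a, b in self._sk]
        self._used = False

    def sign(self, msg):
        if self._used:
            raise RuntimeError("one-time key already used")
        self._used = True
        digest = hashlib.sha256(msg).digest()
        bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
        # Reveal one secret per message bit.
        return [self._sk[i][b] for i, b in enumerate(bits)]

def verify(pk, msg, sig):
    digest = hashlib.sha256(msg).digest()
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return all(hashlib.sha256(s).digest() == pk[i][b]
               for i, (s, b) in enumerate(zip(sig, bits)))

signer = LamportOTS()
sig = signer.sign(b"firmware-update-1.2")
assert verify(signer.pk, b"firmware-update-1.2", sig)
assert not verify(signer.pk, b"tampered", sig)
```

Security here rests only on the pre-image resistance of the hash function, which is why such constructions survive quantum attacks given large enough digests.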

4 Cryptographic engineering

UbiComp systems involve building blocks of very different natures: hardware components such as sensors and actuators, embedded software implementing communication protocols and interface with cloud providers, and ultimately operational procedures and other human factors. As a result, pervasive systems have a large attack surface that must be protected using a combination of techniques.

Cryptography is a fundamental part of any modern computing system, but unlikely to be the weakest component in its attack surface. Networking protocols, input parsing routines and even interface code with cryptographic mechanisms are components much more likely to be vulnerable to exploitation. However, a successful attack on cryptographic security properties is usually disastrous due to the risk concentrated in cryptographic primitives. For example, violations of confidentiality may cause massive data breaches involving sensitive information. Adversarial interference on communication integrity may allow command injection attacks that deviate from the specified behavior. Availability is crucial to keep the system accessible by legitimate users and to guarantee continuous service provisioning, thus cryptographic mechanisms must also be lightweight to minimize potential for abuse by attackers.

Physical access by adversaries to portions of the attack surface is a particularly challenging aspect of deploying cryptography in UbiComp systems. By assumption, adversaries can recover long-term secrets and credentials that provide some control over a (hopefully small) portion of the system. Below we will explore some of the main challenges in deploying cryptographic mechanisms for pervasive systems, including how to manage keys and realize efficient and secure implementation of cryptography.

4.1 Key management

UbiComp systems are by definition heterogeneous platforms, connecting devices of massively different computation and storage power. Designing a cryptographic architecture for any heterogeneous system requires assigning clearly defined roles and corresponding security properties to the tasks under the responsibility of each entity in the system. Resource-constrained devices should receive less computationally intensive tasks, and their lack of tamper-resistance protections indicates that long-term secrets should not reside on these devices. More critical tasks involving expensive public-key cryptography should be delegated to more powerful nodes. A careful trade-off between security properties, functionality and cryptographic primitives must then be addressed per device or class of devices [ 93 ], following a set of guidelines for pervasive systems:

Functionality: key management protocols must manage the lifetime of cryptographic keys and ensure accessibility to currently authorized users, but handling key management and authorization separately may increase complexity and vulnerabilities. A promising way of combining the two services into a cryptographically-enforced access control framework is attribute-based encryption [ 94 , 95 ], where keys have sets of capabilities and attributes that can be authorized and revoked on demand.

Communication: components should minimize the amount of communication, at risk of being unable to operate if communication is disrupted. Non-interactive approaches for key distribution [ 96 ] are recommended here, but advanced protocols based on bilinear pairings should be avoided due to recent advances on solving the discrete log problem (in the so called medium prime case [ 97 ]). These advances forcedly increase the parameter sizes, reduce performance/scalability and may be improved further, favoring more traditional forms of asymmetric cryptography.

Efficiency: protocols should be lightweight and easy to implement, mandating that traditional public key infrastructures (PKIs) and expensive certificate handling operations are restricted to the more powerful and connected nodes in the architecture. Alternative models supporting implicit certification include identity-based [ 98 ] (IBC) and certificate-less cryptography [ 99 ] (CLPKC), the former implying inherent key escrow. The difficulties with key revocation still impose obstacles for their wide adoption, despite progress [ 100 ]. A lightweight pairing and escrow-less authenticated key agreement based on an efficient key exchange protocol and implicit certificates combines the advantages of the two approaches, providing high performance while saving bandwidth [ 101 ].

Interoperability: pervasive systems are composed of components originating from different manufacturers. Supporting a cross-domain authentication and authorization framework is crucial for interoperability [ 102 ].

Cryptographic primitives involved in joint functionality must then be compatible with all endpoints and respect the constraints of the less powerful devices.

4.2 Lightweight cryptography

The emergence of huge collections of interconnected devices in UbiComp motivates the development of novel cryptographic primitives, under the moniker lightweight cryptography . The term lightweight does not imply weaker cryptography, but application-tailored cryptography that is especially designed to be efficient in terms of resource consumption such as processor cycles, energy and memory footprint [ 103 ]. Lightweight designs aim to meet common security requirements for cryptography but may adopt less conservative choices or more recent building blocks.

As a first example, many new block ciphers were proposed as lightweight alternatives to the Advanced Encryption Standard (AES) [ 104 ]. Important constructions are LS-Designs [ 105 ], modern ARX and Feistel networks [ 106 ], and substitution-permutation networks [ 107 , 108 ]. A notable candidate is the PRESENT block cipher, with a 10-year maturity of resisting cryptanalytic attempts [ 109 ], and whose performance recently became competitive in software [ 110 ].
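The ARX (add-rotate-xor) pattern mentioned above can be shown in a few lines. The sketch below uses a Speck-style round function with a simplified round-key derivation — deliberately *not* the official Speck key schedule — so it illustrates the structure (cheap 32-bit operations, no S-box tables) rather than an interoperable cipher.

```python
# Speck-style ARX round structure on 32-bit words. Round keys below
# come from a simplified derivation (an assumption of this sketch),
# NOT the official Speck key schedule.
MASK = 2**32 - 1

def ror(x, r):
    return ((x >> r) | (x << (32 - r))) & MASK

def rol(x, r):
    return ((x << r) | (x >> (32 - r))) & MASK

def encrypt(x, y, round_keys):
    for k in round_keys:
        x = (ror(x, 8) + y) & MASK   # add
        x ^= k                       # xor round key
        y = rol(y, 3) ^ x            # rotate + xor
    return x, y

def decrypt(x, y, round_keys):
    for k in reversed(round_keys):   # invert each round in reverse
        y = ror(y ^ x, 3)
        x = rol((x ^ k) - y & MASK, 8)
    return x, y

keys = [0x1A2B3C4D ^ i for i in range(27)]   # illustrative round keys
ct = encrypt(0x12345678, 0x9ABCDEF0, keys)
assert decrypt(*ct, keys) == (0x12345678, 0x9ABCDEF0)
```

Only additions, rotations and XORs appear, which is what makes ARX designs fast in software on small microcontrollers.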

In the case of hash functions, a design may even trade-off advanced security properties (such as collision resistance) for simplicity in some scenarios. A clear case is the construction of short Message Authentication Codes (MAC) from non-collision resistant hash functions, such as in SipHash [ 111 ], or digital signatures from short-input hash functions [ 112 ]. In conventional applications, BLAKE2 [ 113 ] is a stronger drop-in replacement to recently cryptanalyzed standards [ 114 ] and faster in software than the recently published SHA-3 standard [ 115 ].
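BLAKE2's keyed mode can be exercised directly from Python's standard `hashlib`, which supports it natively: a MAC is obtained without the extra HMAC construction, and the digest size can be shortened when bandwidth matters, in the spirit of lightweight MACs like SipHash. A minimal sketch (the key and message are placeholders):

```python
import hashlib
import hmac

# BLAKE2b keyed hashing as a MAC. A short digest trades security
# margin for bandwidth, as lightweight designs often do.
key = b"\x00" * 16   # demo key; use a random secret in practice

def mac(message, key, size=16):
    return hashlib.blake2b(message, key=key, digest_size=size).digest()

tag = mac(b"sensor-reading:23.5C", key)
assert len(tag) == 16
# Verification must use a constant-time comparison:
assert hmac.compare_digest(tag, mac(b"sensor-reading:23.5C", key))
assert not hmac.compare_digest(tag, mac(b"sensor-reading:99.9C", key))
```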

Another trend is to provide confidentiality and authentication in a single step, through Authenticated Encryption with Associated Data (AEAD). This can be implemented with a block cipher operation mode (like GCM [ 116 ]) or a dedicated design. The CAESAR competition Footnote 1 selected new AEAD algorithms for standardization across multiple use cases, such as lightweight and high-performance applications and a defense-in-depth setting. NIST has followed through and started its own standardization process for lightweight AEAD algorithms and hash functions Footnote 2 .
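What an AEAD mode guarantees — confidentiality for the payload plus integrity for both payload and associated data — can be sketched with a generic encrypt-then-MAC composition. The SHA-256-based keystream below is a stand-in for a real cipher and the function names are this sketch's own; deployed systems should use a vetted AEAD such as AES-GCM or one of the CAESAR selections.

```python
import hashlib
import hmac
import secrets

def keystream(key, nonce, length):
    """Counter-mode keystream from SHA-256 (toy stand-in for a cipher)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce
                              + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key, mac_key, nonce, plaintext, aad):
    ct = bytes(p ^ k for p, k in
               zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    # The tag covers nonce, associated data AND ciphertext.
    tag = hmac.new(mac_key, nonce + aad + ct, hashlib.sha256).digest()
    return ct, tag

def open_(enc_key, mac_key, nonce, ct, aad, tag):
    expect = hmac.new(mac_key, nonce + aad + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("authentication failed")   # never decrypt
    return bytes(c ^ k for c, k in
                 zip(ct, keystream(enc_key, nonce, len(ct))))

ek, mk = secrets.token_bytes(32), secrets.token_bytes(32)
n = secrets.token_bytes(12)
ct, tag = seal(ek, mk, n, b"open door 7", b"device-id=42")
assert open_(ek, mk, n, ct, b"device-id=42", tag) == b"open door 7"
```

Note that the associated data (`device-id=42`) travels in the clear but is still authenticated: changing it makes verification fail before any decryption happens.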

In terms of public-key cryptography, Elliptic Curve Cryptography (ECC) [ 63 , 117 ] continues to be the main contender in the space against factoring-based cryptosystems [ 62 ], due to an underlying problem conjectured to be fully exponential on classical computers. Modern instantiations of ECC enjoy high performance and implementation simplicity and are very well suited for embedded systems [ 118 – 120 ]. The dominance of number-theoretic primitives is, however, threatened by quantum computers, as described in Section 3 .

The plethora of new primitives must be rigorously evaluated from both the security and performance point of views, involving both theoretical work and engineering aspects. Implementations are expected to consume smaller amounts of energy [ 121 ], cycles and memory [ 122 ] in ever decreasing devices and under more invasive attacks.

4.3 Side-channel resistance

If implemented without care, an otherwise secure cryptographic algorithm or protocol can leak critical information which may be useful to an attacker. Side-channel attacks [ 123 ] are a significant threat against cryptography and may use timing information, cache latency, power and electromagnetic emanations to recover secret material. These attacks emerge from the interaction between the implementation and underlying computer architecture and represent an intrinsic security problem to pervasive computing environments, since the attacker is assumed to have physical access to at least some of the legitimate devices.

Protecting against intrusive side-channel attacks is a challenging research problem, and countermeasures typically promote some degree of regularity in computation. Isochronous or constant time implementations were among the first strategies to tackle this problem in the case of variances in execution time or latency in the memory hierarchy. The application of formal methods has enabled the first tools to verify isochronicity of implementations, such as information flow analysis [ 124 ] and program transformations [ 125 ].
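The isochronicity requirement is easiest to see in secret comparison, the classic textbook case. A naive byte-by-byte check returns as soon as bytes differ, so response time leaks how many leading bytes of an attacker's guess are correct; the constant-time version accumulates differences and never exits early:

```python
import hmac

def naive_equal(a, b):
    """Timing side channel: returns at the first mismatching byte."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False   # leaks the position of the first mismatch
    return True

def constant_time_equal(a, b):
    """Runtime independent of where (or whether) the bytes differ."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y      # accumulate differences, never exit early
    return diff == 0

# Python ships a vetted implementation: hmac.compare_digest.
assert constant_time_equal(b"secret-tag", b"secret-tag")
assert not constant_time_equal(b"secret-tag", b"secret-taX")
assert hmac.compare_digest(b"secret-tag", b"secret-tag")
```

In practice one should prefer the library routine (`hmac.compare_digest` here) over a hand-rolled loop, since interpreters and compilers can silently reintroduce data-dependent branches.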

While there is a recent trend towards constructing and standardizing cryptographic algorithms with some embedded resistance against the simpler timing and power analysis attacks [ 105 ], more powerful attacks such as differential power analysis [ 126 ] or fault attacks [ 127 ] are very hard to prevent or mitigate. Fault injection became a much more powerful attack methodology after it was demonstrated in software [ 128 ].

Masking techniques [ 129 ] are frequently investigated as a countermeasure to decorrelate leaked information from secret data, but frequently require robust entropy sources to achieve their goal. Randomness recycling techniques have been useful as a heuristic, but formal security analysis of such approaches is an open problem [ 130 ]. Modifications in the underlying architecture in terms of instruction set extensions, simplified execution environments and transactional mechanisms for restarting faulty computation are another promising research direction but may involve radical and possibly cost-prohibitive changes to current hardware.
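The core idea of first-order Boolean masking can be sketched in a few lines: the secret is split into random shares whose XOR recovers it, so computation touches only shares and instantaneous leakage correlates with random values rather than the secret. Note the direct dependence on a good entropy source, as discussed above.

```python
import secrets

# First-order Boolean masking of a secret byte.
def mask(secret_byte):
    r = secrets.randbelow(256)       # random share (needs good entropy)
    return r, secret_byte ^ r        # (share1, share2); XOR recovers

def unmask(s1, s2):
    return s1 ^ s2

s1, s2 = mask(0x3A)
assert unmask(s1, s2) == 0x3A
# Linear (XOR) operations can be computed on a single share, so the
# secret is never recombined during processing:
assert unmask(s1 ^ 0xFF, s2) == (0x3A ^ 0xFF)
```

Non-linear operations (like a cipher's S-box) are the hard part of masking and require dedicated, and costlier, shared implementations.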

5 Resilience

UbiComp relies on essential services such as connectivity, routing and end-to-end communication. Advances in these essential services make Weiser’s envisioned pervasive applications possible, as they can count on transparent communication while meeting the expectations and requirements of end users in their daily activities. Among users’ expectations and requirements, the availability of services – not only communication services, but all services provided to users by UbiComp – is paramount. Users increasingly expect, and pay for, services that are available 24/7. This is even more relevant for critical UbiComp systems, such as those related to healthcare, emergency response and vehicular embedded systems.

Resilience is highlighted in this article because it is one of the pillars of security. Resilience aims at identifying, preventing, detecting and responding to process or technological failures in order to recover from, or mitigate, the damages and financial losses resulting from service unavailability [ 131 ]. In general, service unavailability has been associated with non-intentional failures; however, the intentional exploitation of service availability breaches is becoming increasingly disruptive and out of control, as seen in the recent Distributed Denial of Service (DDoS) attack against DYN, a leading DNS provider, and the DDoS attack against OVH, the French website hosting giant [ 132 , 133 ]. The latter reached a volume of malicious traffic of approximately 1 Tbps, generated from a large number of geographically distributed and infected devices, such as printers, IP cameras, residential gateways and baby monitors. These devices are directly related to the modern concept of UbiComp systems [ 134 ] and are intended to provide ubiquitous services to users.

What attracts the most attention here, however, is the negative side effect of exploiting ubiquity against service availability. It is a fact today that Mark Weiser’s idea of the Computer for the 21st Century has opened doors to new kinds of highly disruptive attacks. Those attacks generally build on the invisibility of, and our unawareness of, the devices in our homes, workplaces, cities and countries. Precisely because of this, people seem not to pay enough attention to basic practices, such as changing the default passwords of Internet-connected devices like CCTV cameras, baby monitors and smart TVs. This simple fact has been pointed out as the main cause of the two DDoS attacks mentioned above, and a report by the global professional services company Deloitte suggests that DDoS attacks, which compromise precisely service availability, increased in size and scale in 2017, thanks in part to the growing multiverse of connected things Footnote 3 . The report also predicts that DDoS attacks will become more frequent, with an estimated 10 million attacks over the course of a year.

As there is no way to completely prevent these attacks, resilient solutions become a means to mitigate damages and quickly restore the availability of services. Resilience is thus necessary and complementary to the other solutions discussed in the previous sections of this article. Hence, this section focuses on highlighting the importance of resilience in the context of UbiComp systems. We overview the state of the art regarding resilience in UbiComp systems and point out future directions for research and innovation [ 135 – 138 ]. We acknowledge that resilience in these systems still requires much investigation, but we believe it is our role to raise this point for discussion in this article.

In order to contextualize resilience in the scope of UbiComp, it is important to observe that improvements in information and communication technologies, such as wireless networking, have increased the use of distributed systems in our everyday lives. Network access is becoming ubiquitous through portable devices and wireless communications, making people more and more dependent on them. This rising dependence demands simultaneously high levels of reliability and availability. Current networks are composed of heterogeneous portable devices, generally communicating among themselves in a wireless multi-hop manner [ 139 ]. These wireless networks can autonomously adapt to changes in their environment such as device position, traffic pattern and interference. Each device can dynamically reconfigure its topology, coverage and channel allocation in accordance with such changes.

UbiComp poses nontrivial challenges to resilience design due to the characteristics of the current networks, such as shared wireless medium, highly dynamic network topology, multi-hop communication and low physical protection of portable devices [ 140 , 141 ]. Moreover, the absence of central entities in different scenarios increases the complexity of resilience management, particularly, when it is associated with access control, node authentication and cryptographic key distribution.

Network characteristics, as well as the limitations of other kinds of defenses against attacks that disrupt service availability, reinforce the fact that no network is totally immune to attacks and intrusions. Therefore, new approaches are required to promote the availability of network services, which motivates the design of resilient network services. In this work, we focus on the delivery of data from one UbiComp device to another as a fundamental network functionality, and we emphasize three essential services: physical and link-layer connectivity, routing and end-to-end logical communication. Resilience has also been examined from other perspectives, however. We follow the claim that resilience is achieved through a cross-layer security solution that integrates preventive (e.g., cryptography and access control), reactive (e.g., intrusion detection systems) and tolerant (e.g., packet redundancy) defense lines in a self-adaptive and coordinated way [ 131 , 142 ].

However, what open challenges remain to achieve resilience in the UbiComp context? First of all, we emphasize the heterogeneity of the devices and technologies that compose UbiComp environments. The integration of large-scale systems, such as Cloud data centers, with tiny devices, such as wearable and implantable sensors, is a huge challenge in itself due to the complexity resulting from it. In addition, integrating preventive, reactive and tolerant solutions, and adapting them, is even harder in the face of the different requirements of these devices, their capabilities in terms of memory and processing, and application requirements. Further, heterogeneity in communication technologies and protocols makes it challenging to analyze network behavior and topologies, an analysis which in conventional systems is employed to assist the design of resilient solutions.

Another challenge is how to deal with scale. First, UbiComp systems tend to be hyper-scale and geographically distributed. How, then, to cope with the complexity resulting from that? How to define and construct models to understand these systems and offer resilient services? Finally, we also point out uncertainty and speed as challenges. If, on the one hand, it is hard to model, analyze and define resilient services in such complex systems, on the other hand uncertainty is the norm in them, while speed and low response time are strong requirements for their applications. Hence, how to address all these elements together? How to manage them in order to offer resilient services that satisfy the diverse requirements of the various applications?

All these questions call for deep investigation and pose real challenges. However, they also present opportunities for applied research in designing and engineering resilient systems, particularly in the UbiComp context, and especially if we advocate designing resilient systems that manage the three defense lines in an adaptive way. We believe that such management can greatly advance both applied research and resilience.

6 Identity management

Authentication and Authorization Infrastructure (AAI) is the central element for providing security in distributed applications, and a way to fulfill the security requirements of UbiComp systems. Such an infrastructure can provide identity management (IdM) to prevent users and devices, legitimate or not, from accessing non-authorized resources. IdM can be defined as a set of processes, technologies and policies used for assurance of identity information (e.g., identifiers, credentials, attributes), assurance of the identity of an entity (e.g., users, devices, systems), and enabling businesses and security applications [ 143 ]. Thus, IdM allows these identities to be used by authentication, authorization and auditing mechanisms [ 144 ]. A proper identity management approach is necessary for pervasive computing to be invisible to users [ 145 ]. Figure  4 provides an overview of the topics discussed in this section.

figure 4

Pervasive IdM Challenges

According to [ 143 ], electronic identity (eID) comprises a set of data about an entity that is sufficient to identify that entity in a particular digital context. An eID may be comprised of:

Identifier - a series of digits, characters and symbols or any other form of data used to uniquely identify an entity (e.g., UserID, e-mail addresses, URI and IP addresses). IoT requires a global unique identifier for each entity in the network;

Credentials - an identifiable object that can be used to authenticate the claimant (e.g., digital certificates, keys, tokens and biometrics);

Attributes - descriptive information bound to an entity that specifies its characteristics.
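The three-part eID structure described above can be sketched as a simple record; the field names and the sample URN are illustrative only, not taken from any standard schema.

```python
from dataclasses import dataclass, field

# Minimal sketch of an electronic identity (eID): a unique identifier,
# credentials for authentication, and descriptive attributes.
@dataclass
class ElectronicIdentity:
    identifier: str                                    # globally unique
    credentials: dict = field(default_factory=dict)    # certs, keys, ...
    attributes: dict = field(default_factory=dict)     # descriptive data

sensor = ElectronicIdentity(
    identifier="urn:dev:ops:32473-sensor-17",          # illustrative URN
    credentials={"x509": "<certificate bytes>"},
    attributes={"type": "temperature", "owner": "plant-3"},
)
assert sensor.identifier.startswith("urn:dev:")
```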

In UbiComp systems, identity has both a digital and a physical component. Some entities might have only an online or physical representation, whereas others might have a presence in both planes. IdM requires relationships not only between entities in the same planes but also across them [ 145 ].

6.1 Identity management system

An IdM system deals with the lifecycle of an identity, which consists of registration, storage, retrieval, provisioning and revocation of identity attributes [ 146 ]. Note that managing a device’s identity lifecycle is more complicated than managing a person’s, due to the complexity of the operational phases of a device (i.e., from manufacturing until it is removed and re-commissioned) in the context of a given application or use case [ 102 , 147 ].

For example, consider a given device life-cycle. In the pre-deployment, some cryptographic material is loaded into the device during its manufacturing process. Next, the owner of the device purchases it and gets a PIN that grants the owner the initial access to the device. The device is later installed and commissioned within a network by an installer during the bootstrapping phase. The device identity and the secret keys used during normal operation are provided to the device during this phase. After being bootstrapped, the device is in operational mode. During this operational phase, the device will need to prove its identity (D2D communication) and to control the access to its resources/data. For devices with lifetimes spanning several years, maintenance cycles should be required. During each maintenance phase, the software on the device can be upgraded, or applications (running on the device) can be reconfigured. The device continues to loop through the operational phase until the device is decommissioned at the end of its lifecycle. Furthermore, the device can also be removed and re-commissioned to be used in a different system under a different owner thereby starting the lifecycle all over again. During this phase, the cryptographic material held by the device is wiped, and the owner is unbound from the device [ 147 ].

An IdM system involves two main entities: identity provider (IdP - responsible for authentication and user/device information management in a domain) and service provider (SP - also known as relying party, which provides services to user/device based on their attributes). The arrangement of these entities in an IdM system and the way in which they interact with each other characterize the IdM models, which can be traditional (isolated or silo), centralized, federated or user-centric [ 146 ].

In the traditional model, IdP and SP are grouped into a single entity whose role is to authenticate and control access for its users or devices without relying on any other entity. In this model, providers have no mechanism to share identity information with other organizations or entities. This makes identity provisioning cumbersome for end users and devices, since they need to replicate their sensitive data across different providers [ 146 , 148 ].

The centralized model emerged as a possible solution to avoid the redundancies and inconsistencies of the traditional model and to give users and devices a seamless experience. Here, a central IdP becomes responsible for collecting and provisioning the user’s or device’s identity information in a manner that enforces the preferences of the user or device. The centralized model allows the sharing of identities among SPs and provides Single Sign-On (SSO). This model has several drawbacks, as the IdP not only becomes a single point of failure but also may not be trusted by all users, devices and service providers [ 146 ]. In addition, a centralized IdP must provide different mechanisms to authenticate either users or autonomous devices in order to meet UbiComp system requirements [ 149 ].

UbiComp systems are composed of heterogeneous devices that need to prove their authenticity to the entities they communicate with. One of the problems in this scenario is the possibility of devices being located in different security domains using distinct authentication mechanisms. An approach for providing IdM in a scenario with multiple security domains is through an AAI that uses the federated IdM model (FIM) [ 150 , 151 ]. In a federation, trust relationships are established among IdPs and SPs to enable the exchange of identity information and service sharing. Existing trust relationships guarantee that users/devices authenticated in home IdP may access protected resources provided by SPs from other federation security domains [ 148 ]. Single Sign-On (SSO) is obtained when the same authentication event can be used to access different federated services [ 146 ].
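The federation mechanics — an IdP issuing an authentication assertion that SPs in other domains accept because of a pre-established trust relationship — can be sketched as follows. Real federations use SAML or OpenID Connect with asymmetric signatures; a shared HMAC trust key and the claim names below are simplifying assumptions of this sketch.

```python
import hashlib
import hmac
import json
import time

# Assumption: IdP and federation SPs share this trust key out of band.
TRUST_KEY = b"federation-shared-trust-key"

def idp_issue(subject, home_domain, ttl=300):
    """IdP signs an authentication statement after one login event."""
    claims = {"sub": subject, "idp": home_domain,
              "exp": int(time.time()) + ttl}
    body = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(TRUST_KEY, body, hashlib.sha256).hexdigest()
    return body, sig

def sp_accept(body, sig):
    """Any federated SP verifies the IdP signature -- this is SSO."""
    expect = hmac.new(TRUST_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expect):
        return None                       # untrusted or forged assertion
    claims = json.loads(body)
    return claims if claims["exp"] > time.time() else None

token = idp_issue("device-42", "idp.example.org")
assert sp_accept(*token)["sub"] == "device-42"       # accepted at any SP
assert sp_accept(token[0] + b" ", token[1]) is None  # tampering rejected
```

The same assertion can be presented to several SPs without re-authenticating, which is exactly the SSO property the federated model provides.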

From the user authentication perspective, the weaknesses of the centralized and federated models center on the IdP, which has full control over the user's data [ 148 ]. Moreover, the user depends on an online IdP to provide the required credentials. In the federated model, users cannot guarantee that their information will not be disclosed to third parties without their consent [ 146 ].

The user-centric model gives the user full control over transactions involving his or her identity data [ 148 ]. In this model, the user identity can be stored on a Personal Authentication Device, such as a smartphone or a smartcard. Users have the freedom to choose which IdPs will be used and to control the personal information disclosed to SPs. The IdPs continue acting as a trusted third party between users and SPs; however, they act according to the user's preferences [ 152 ]. The major drawback of the user-centric model is that it cannot handle delegation. Several solutions that adopted this model combine it with the FIM or centralized model, although recent solutions tend to prefer the federated model.

6.1.1 Authentication

User and device authentication within an integrated authentication infrastructure (where the IdP is responsible for both user and device authentication) might use a centralized IdM model [ 149 , 153 ] or a traditional model [ 154 ]. Other works [ 155 – 157 ] proposed AAIs for IoT using the federated model, but only for user authentication, not for device authentication. Kim et al. [ 158 ] propose a centralized solution in which the authentication mechanism for each device is chosen based on its energy autonomy; however, user authentication is not provided.

Based on the traditional model, an AAI composed of a suite of protocols that incorporate authentication and access control during the entire IoT device lifecycle is proposed in [ 102 ]. Domenech et al. [ 151 ] propose an AAI for the Web of Things, which is based on the federated IdM model (FIM) and enables SSO for users and devices. In this AAI, IdPs may be implemented as a service in a Cloud (IdPaaS - Identity Provider as a Service) or on premise. Some IoT platforms provide IdPaaS for user and device authentication, such as Amazon Web Services (AWS) IoT, Microsoft Azure IoT and the Google Cloud IoT platform.

Authentication mechanisms and protocols consume computational resources. Thus, integrating an AAI into a resource-constrained embedded device can be challenging. As mentioned in Section 4.2 , a set of lightweight cryptographic algorithms, which do not impose certificate-related overheads on devices, can be used to provide device authentication in UbiComp systems. There is a recent trend investigating the benefits of identity-based cryptography (IBC) to provide cross-domain authentication for constrained devices [ 102 , 151 , 159 ]. However, some IoT platforms, such as Azure IoT and WSO2, still provide certificate-based device authentication, while others, such as the Google Cloud IoT Platform and WSO2, use per-device public/private key authentication (RSA and Elliptic Curve algorithms) with JSON Web Tokens.

Identity theft has been one of the fastest growing crimes in recent years. Currently, password-based credentials are the most commonly used by user authentication mechanisms, despite their weaknesses [ 160 ]. There are multiple opportunities for impersonation and other attacks that fraudulently claim another subject's identity [ 161 ]. Multi-factor authentication (MFA) was created to improve the robustness of the authentication process and generally combines two or more authentication factors ( something you know , something you have , and something you are ) for successful authentication [ 161 ]. In this type of authentication, an attacker needs to compromise two or more factors, which makes the task more complex. Several IdPs and SPs already offer MFA to authenticate their users; however, device authentication is still an open question.
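As an illustration of the *something you have* factor, a time-based one-time password (TOTP, RFC 6238) can be derived from a secret shared between the IdP and the user's device. The sketch below uses only the Python standard library; the secret is the RFC 6238 test value, not a production key:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, at: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, then dynamic truncation."""
    counter = struct.pack(">Q", at // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both the IdP and the user's token compute the same code from the shared
# secret; possession of the secret-holding device is the second factor.
secret = b"12345678901234567890"  # RFC 6238 test secret
print(totp(secret, at=59))        # → 287082 (RFC 4226/6238 test vector)
```

A verifier on the IdP side would accept the code if it matches the one computed for the current (or an adjacent) time step.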

6.1.2 Authorization

In a UbiComp system, a security domain can contain both client devices and SP devices (embedded SPs). In this context, physical devices and online providers can offer services. Devices join and leave, SPs appear and disappear, and access control must adapt itself to maintain the user's perception of being continuously and automatically authenticated [ 145 ]. The data access control provided by an AAI embedded in the device is also a significant requirement. Since these devices are cyber-physical systems (CPS), a security threat against them will likely impact the physical world. Thus, if a device is improperly accessed, there is a chance that this violation will affect the physical world, risking people's well-being and even their lives [ 151 ].

Physical access control systems (PACS) provide access control to physical resources, such as buildings, offices or any other protected areas. Current commercial PACS are based on the traditional IdM model and usually use low-cost devices such as smart cards. However, there is a trend to treat PACS as an IT service, i.e., to unify physical and digital access control [ 162 ]. Considering IoT scenarios, the translation of SSO authentication credentials for PACS across multiple domains (in a federation) is also a challenge due to interoperability, assurance and privacy concerns.

In the context of IoT, authorization mechanisms are based on access control models used in the classic Internet, such as the Discretionary model (for example, Access Control Lists (ACL) [ 163 ]), Capability Based Access Control (CapBAC) [ 164 , 165 ], Role Based Access Control (RBAC) [ 156 , 166 , 167 ] and Attribute Based Access Control (ABAC) [ 102 , 168 , 169 ]. ABAC and RBAC are the models best aligned with federated IdM and UbiComp systems. As proposed in [ 151 ], an IdM system that supports different access control models, such as RBAC and ABAC, can more easily adapt to the needs of the administration processes in the context of UbiComp.

Regarding policy management models for accessing devices, there are two approaches: provisioning [ 151 , 170 ] and outsourcing [ 150 , 151 , 171 , 172 ]. In provisioning, the device is responsible for the authorization decision making, which requires the policy to be stored locally. In this approach, the Policy Enforcement Point (PEP), which controls access to the device, and the Policy Decision Point (PDP) are both in the same device. In outsourcing, the decision making takes place outside the device, in a centralized external service that replies to all policy evaluation requests from all devices (PEPs) of a domain. In this case, the decision making can be offered as a service (PDPaaS) in the cloud or on premise [ 151 ].

For constrained devices, the provisioning approach is robust since it does not depend on an external service. However, in this approach, the decision making and the access policy management can be costly for the device. The outsourcing approach simplifies policy management, but it introduces communication overhead and a single point of failure (the centralized PDP).
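The separation between PEP and PDP described above can be sketched as follows. The rule, attributes and class names are hypothetical; in the provisioning model the PDP object lives on the device itself, while in the outsourcing model the PEP would instead call a remote PDPaaS:

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    """Hypothetical ABAC rule: attributes a subject must present for an action."""
    action: str
    required: dict  # attribute name -> expected value

@dataclass
class PDP:
    """Policy Decision Point: evaluates requests against a local policy base."""
    rules: list = field(default_factory=list)

    def decide(self, subject_attrs: dict, action: str) -> bool:
        return any(r.action == action and
                   all(subject_attrs.get(k) == v for k, v in r.required.items())
                   for r in self.rules)

class PEP:
    """Policy Enforcement Point: intercepts the access request and asks a PDP.
    In provisioning, `pdp` is co-located on the device; in outsourcing it
    would be a stub forwarding the request to a centralized PDPaaS."""
    def __init__(self, pdp: PDP):
        self.pdp = pdp

    def access(self, subject_attrs: dict, action: str) -> str:
        return "permit" if self.pdp.decide(subject_attrs, action) else "deny"

pdp = PDP([Rule("read_temperature", {"role": "nurse", "domain": "hospital-a"})])
pep = PEP(pdp)
print(pep.access({"role": "nurse", "domain": "hospital-a"}, "read_temperature"))  # permit
print(pep.access({"role": "visitor"}, "read_temperature"))                        # deny
```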

6.2 Federated identity management system

The IdM models guide the construction of policies and business processes for IdM systems but do not indicate which protocols or technologies should be adopted. The SAML (Security Assertion Markup Language) [ 173 ], OAuth2 [ 174 ] and OpenID Connect specifications stand out in the federated IdM context [ 175 , 176 ] and are adequate for UbiComp systems. SAML, developed by OASIS, is an XML-based framework for describing and exchanging security information between business partners. It defines syntax and rules for requesting, creating, communicating and using SAML Assertions, which enable SSO across domain boundaries. Moreover, SAML can describe authentication events that use different authentication mechanisms [ 177 ]. These characteristics are very important for achieving interoperability between security technologies of different administrative domains. According to [ 151 , 178 , 179 ], the first step toward achieving interoperability is the adoption of SAML. However, XML-based SAML is not a lightweight standard and has a high computational cost for resource-constrained IoT devices [ 176 ].

Enhanced Client and Proxy (ECP), a SAML profile, defines the security information exchange for clients that do not use a web browser and consequently allows device SSO authentication. Nevertheless, ECP requires the SOAP protocol, which is not suitable due to its high computational cost [ 180 ]. Presumably because of this cost, the profile is still not widely used in IoT devices.

OpenID Connect (OIDC) is an open framework that adopts the user-centric and federated IdM models. It is decentralized, which means no central authority approves or registers SPs. With OpenID, a user can choose the OpenID Provider (IdP) he or she wants to use. OpenID Connect is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients (SPs) to verify user or device identity based on the authentication performed by an Authorization Server (OpenID Provider), as well as to obtain basic profile information about the user or device in an interoperable and REST-like manner [ 181 ]. OIDC uses a JSON-based security token (JWT) that enables identity and security information to be shared across security domains; consequently, it is a lightweight standard suitable for IoT. Nevertheless, it is a developing standard that requires more time and enterprise acceptance to become established [ 176 ].
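To make the token format concrete, the sketch below builds and verifies a minimal HS256-signed JWT using only the Python standard library. The claims and key are illustrative; production OIDC deployments typically prefer asymmetric algorithms (RS256/ES256) with published verification keys:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url without padding, as required by the JWT compact serialization."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token: str, key: bytes) -> dict:
    header, payload, sig = token.split(".")
    expected = b64url(hmac.new(key, f"{header}.{payload}".encode(),
                               hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

key = b"shared-secret"  # illustrative only
token = sign_jwt({"sub": "device-42", "iss": "https://idp.example"}, key)
print(verify_jwt(token, key)["sub"])  # device-42
```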

An IoT architecture based on OpenID, which handles authentication and access control in a federated environment, was proposed in [ 156 ]. Devices and users may register at a trusted third party of the home domain, which supports the user authentication process. In [ 182 ], OpenID Connect is used for authentication and authorization of users and devices and to establish trust relationships among entities in an ambient assisted living environment (with medical devices acting as SPs), in a federated approach.

SAML and OIDC are used for user authentication in Cloud platforms (Google, AWS, Azure). The FIWARE platform Footnote 4 (an open source IoT platform) supports SAML- and OAuth2-based user authentication via its Keyrock Identity Management Generic Enabler. However, platforms usually rely on certificate-based or token-based device authentication under a centralized or traditional model. In future work, it may be interesting to carry out practical investigations of SAML (the ECP profile with different lightweight authentication mechanisms) and OIDC for various types of IoT devices and cross-domain scenarios, and to compare them with current authentication solutions.

The OAuth protocol Footnote 5 is an open authorization framework that allows a user or application to delegate access to Web resources to a third party without sharing its credentials. With OAuth, it is possible to use a JSON Web Token or a SAML assertion as a means for requesting an OAuth 2.0 access token as well as for client authentication [ 176 ]. Fremantle et al. [ 150 ] discuss the use of OAuth for IoT applications that use the MQTT protocol, a lightweight message queue protocol (publish/subscribe model) for small sensors and mobile devices.

A known standard for authorization in distributed systems is XACML (eXtensible Access Control Markup Language). XACML is an XML-based language for describing authorization policies and the request/response exchange for access control decisions. Authorization decisions may be based on user/device attributes, on the requested actions, and on environment characteristics. Such features enable the building of flexible authorization mechanisms. Furthermore, XACML is generic with respect to the access control model used (RBAC, ABAC) and supports authorization decision making either locally (provisioning model) or by an external service provider (outsourcing model). Another important aspect is that there are profiles and extensions that provide interoperability between XACML and SAML [ 183 ].

6.3 Pervasive IdM challenges

Current federation technologies rely on preconfigured static agreements, which are not well suited for the open environments of UbiComp scenarios. These limitations negatively impact scalability and flexibility [ 145 ]. Trust establishment is key to scalability. Although FIM protocols cover security aspects, the dynamic establishment of trust relationships remains an open question [ 145 ]. Some requirements, such as usability, device authentication and the use of lightweight cryptography, have not been properly considered in federated IdM solutions for UbiComp systems.

Interoperability is another key requirement for a successful IdM system. UbiComp systems integrate heterogeneous devices that interact with humans, with systems on the Internet, and with other devices, which raises interoperability concerns. These systems can span heterogeneous domains (organizations) that go beyond the boundaries of a federation sharing the same AAI. Interoperability between federations that use different federated identity protocols (SAML, OpenID and OAuth) is still a problem and thus a research opportunity.

Lastly, IdM systems for UbiComp systems must appropriately protect user information and adopt proper personal data protection policies. Section 7 discusses the challenges to provide privacy in UbiComp systems.

7 Privacy implications

UbiComp systems tend to collect a lot of data and generate a lot of information. Used correctly, information generates innumerable benefits that have improved our lives over the years. However, information can also be used for illicit purposes, just as computer systems can be used for attacks. Protecting private information is a great challenge that can often seem impractical; consider, for instance, protecting customers' electrical consumption data from their electricity distribution company [ 184 – 186 ].

Ensuring security is a necessary condition for ensuring privacy: for instance, if the communication between clients and a service provider is not secure, then privacy is not guaranteed. However, it is not a sufficient condition: the communication may be secure while the service provider uses the data in a way that was not authorized. We can use cryptography to ensure security as well as privacy. Nevertheless, even when communication is encrypted, the metadata of the network traffic might reveal private information. The first challenge is to determine the extent of the data's relevance and the impact of its leakage.

7.1 Application scenario challenges

Finding which data might be sensitive is a challenging task. Some cultures classify certain data as sensitive, while others classify the same data as public. Another challenge is to handle regulations from different countries.

7.1.1 Identifying sensitive data

Classifying what may be sensitive data can be a challenging task. Article 12 of the Universal Declaration of Human Rights, proclaimed by the United Nations General Assembly in Paris on 10 December 1948, states: No one shall be subjected to arbitrary interference with his privacy, family, home, or correspondence, nor to attacks upon his honor and reputation. Everyone has the right to the protection of the law against such interference or attacks. Lawmakers have improved privacy laws around the world. However, there is still plenty of room for improvement, especially when we consider data about people, animals, and products. Providers can use such data to profile and manipulate people and markets. Unfair competitors might use private industrial data to gain advantages over other industries.

7.1.2 Regulation

UbiComp systems tend to run worldwide. Thus, their developers need to deal with several laws from distinct cultures. The abundance of laws is a challenge for international institutions, and so is their absence. On the one hand, an excess of laws compels institutions to handle a huge bureaucracy in order to comply with them all. On the other hand, the absence of laws causes unfair competition, because unethical companies can use private data to gain advantages over ethical companies. Business models must use privacy-preserving protocols to ensure democracy and avoid a surveillance society (see [ 187 ]). Such protocols are the solution to the dilemma between privacy and information. However, they come with their own technological challenges.

7.2 Technological challenges

We can deal with already collected data from legacy systems, or with private-by-design data collected by privacy-preserving protocols; for instance, databases used in old systems and messages from privacy-preserving protocols, respectively. If a scenario can be classified as both, we can simply treat it as already collected data in the short term.

7.3 Already collected data

One may use a dataset for information retrieval while keeping the anonymity of the data owners. One may also apply data mining techniques over a private dataset; several such techniques are used in privacy-preserving data mining [ 188 ]. The ARX Data Anonymization Tool Footnote 6 is a very interesting tool for anonymizing already collected data. In the following, we present several techniques used to provide privacy for already collected data.

7.3.1 Anonymization

Currently, we have several techniques for anonymization and for evaluating the level of anonymization, for instance, k -anonymity, l -diversity, and t -closeness [ 189 ]. They operate on a set E of records in a table that are indistinguishable with respect to a (quasi-)identifier.

The method k -anonymity suppresses table columns or replaces their values by generalizations so that each E contains at least k records. It seems safe, but only four spatio-temporal points are enough to uniquely identify 95% of cellphone users [ 190 ].
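A minimal check of the k -anonymity property can be sketched as follows; the table, with generalized zip and age columns as quasi-identifiers, is hypothetical:

```python
from collections import Counter

def is_k_anonymous(rows, quasi_ids, k):
    """Each combination of quasi-identifier values (an equivalence class E)
    must occur in at least k rows."""
    classes = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    return all(count >= k for count in classes.values())

# Hypothetical table with suppressed/generalized quasi-identifiers.
rows = [
    {"zip": "476**", "age": "2*", "disease": "flu"},
    {"zip": "476**", "age": "2*", "disease": "cancer"},
    {"zip": "479**", "age": "4*", "disease": "flu"},
    {"zip": "479**", "age": "4*", "disease": "flu"},
]
print(is_k_anonymous(rows, ["zip", "age"], k=2))  # True
print(is_k_anonymous(rows, ["zip", "age"], k=3))  # False
```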

The method l -diversity requires that each E have at least l "well-represented" values for each sensitive column. "Well-represented" can be defined in three ways:

at least l distinct values for each sensitive column;

for each E , the Shannon entropy is limited, such that \(H(E)\geqslant \log _{2} l\) , where \(H(E)=-\sum _{s\in S}\Pr (E,s)\log _{2}(\Pr (E,s)),\) S is the domain of the sensitive column, and Pr( E , s ) is the probability of the lines in E that have sensitive values s ;

the most common values cannot appear frequently, and the most uncommon values cannot appear infrequently.

Note that some tables do not have l distinct sensitive values. Furthermore, the whole-table entropy must then be at least log 2 l . Moreover, the frequencies of common and uncommon values are usually not close to each other.
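The entropy variant of l -diversity can be checked directly from the definition above; the sensitive values in the example are hypothetical:

```python
import math
from collections import Counter

def entropy_l_diverse(sensitive_values, l):
    """Entropy l-diversity: H(E) >= log2(l), with H(E) computed from the
    empirical distribution of the sensitive column within class E."""
    counts = Counter(sensitive_values)
    n = len(sensitive_values)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h >= math.log2(l)

# Three distinct values, H(E) = 1.5 bits >= log2(2): entropy 2-diverse.
print(entropy_l_diverse(["flu", "cancer", "hiv", "flu"], l=2))  # True
# One dominant value, H(E) ≈ 0.81 bits < 1: not entropy 2-diverse.
print(entropy_l_diverse(["flu", "flu", "flu", "cancer"], l=2))  # False
```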

We say that E has t -closeness if the distance between the distribution of a sensitive column within E and the distribution of that column over the whole table is no more than a threshold t . Thus, we say that a table has t -closeness if every E in the table has t -closeness. In this case, the method generates a trade-off between data usefulness and privacy.

7.3.2 Differential privacy

The idea of differential privacy is similar to the idea of indistinguishability in cryptography. To define it, let ε be a positive real number and \(\mathcal {A}\) be a probabilistic algorithm that takes a dataset as input. We say that \(\mathcal {A}\) is ε -differentially private if for every pair of datasets D 1 and D 2 that differ in one element, and for every subset S of the image of \(\mathcal {A}\) , we have \(\Pr \left [{\mathcal {A}}\left (D_{1}\right)\in S\right ]\leq e^{\epsilon }\times \Pr \left [{\mathcal {A}}\left (D_{2}\right)\in ~S\right ],\) where the probability is taken over the algorithm's randomness.

Differential privacy is not a metric in the mathematical sense. However, if the algorithms keep the probabilities based on the input, we can construct a metric d to compare the distance between two algorithms with \(d\left (\mathcal {A}_{1},\mathcal {A}_{2}\right)=|\epsilon _{1}-\epsilon _{2}|.\) In this way, we can consider two algorithms equivalent when ε 1 = ε 2 , and we can determine the distance from an ideal algorithm (for which ε = 0) by computing \(d\left (\mathcal {A}_{1},\mathcal {A}_{\text {ideal}}\right)=|\epsilon _{1}-0|=\epsilon _{1}.\)
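Randomized response is a classical mechanism that satisfies this definition: answering truthfully with probability p and lying otherwise yields ε = ln( p /(1 − p )) for p > 1/2. The sketch below, with hypothetical parameters, also illustrates the metric d (A 1 , A 2 ) = |ε 1 − ε 2 |:

```python
import math
import random

def randomized_response(truth: bool, p: float, rng=random.random) -> bool:
    """Answer truthfully with probability p, lie otherwise. For p > 1/2 this
    mechanism is eps-differentially private with eps = ln(p / (1 - p))."""
    return truth if rng() < p else not truth

p = 0.75
eps = math.log(p / (1 - p))  # ln 3 ≈ 1.0986
# The DP bound holds: Pr[A(true) = yes] <= e^eps * Pr[A(false) = yes]
assert p <= math.exp(eps) * (1 - p)

# Distance between two mechanisms under the metric d(A1, A2) = |eps1 - eps2|:
eps2 = math.log(0.9 / 0.1)   # a less private mechanism, eps2 = ln 9
print(round(abs(eps - eps2), 4))  # 1.0986

# One noisy answer (non-deterministic, so no fixed expected output):
print(randomized_response(True, p))
```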

7.3.3 Entropy and the degree of anonymity

The degree of anonymity g can be measured with the Shannon entropy \(H(X)=\sum _{{i=1}}^{{N}}\left [p_{i}\cdot \log _{2} \left ({\frac {1}{p_{i}}}\right)\right ],\) where H ( X ) is the network entropy, N is the number of nodes, and p i is the probability associated with each node i . The entropy is maximal when the distribution is uniform, i.e., every node is equiprobable with probability 1/ N , hence H M = log 2 ( N ). Therefore, the anonymity degree g is defined by \(g=1-{\frac {H_{M}-H(X)}{H_{M}}}={\frac {H(X)}{H_{M}}}.\)

Similar to differential privacy, we can construct a metric to compare the distance between two networks computing d ( g 1 , g 2 )=| g 1 − g 2 |. Similarly, we can compare if they are equivalent g 1 = g 2 . Thus, we can determine the distance from an ideal anonymity network computing d ( g 1 , g ideal )=| g 1 −1|.

The network can be replaced by a dataset, but in this model each record must have a probability.
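The degree of anonymity defined above can be computed directly from a probability distribution over nodes; the distributions below are hypothetical:

```python
import math

def degree_of_anonymity(probs):
    """g = H(X) / H_M, where H_M = log2(N) is the entropy of the uniform case."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    h_max = math.log2(len(probs))
    return h / h_max

# Uniform distribution over 4 nodes: maximal anonymity.
print(degree_of_anonymity([0.25] * 4))  # 1.0
# A skewed distribution leaks information, so g drops below 1.
print(round(degree_of_anonymity([0.7, 0.1, 0.1, 0.1]), 3))  # 0.678
```

The distance to the ideal network is then d ( g 1 , g ideal ) = | g 1 − 1|, as in the text.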

7.3.4 Complexity

Complexity analysis can also be used as a metric to measure the time required, in the best case, to retrieve information from an anonymized dataset. It can likewise be applied to private-by-design data, as the time required to break a privacy-preserving protocol. The time can be measured with asymptotic analysis or by counting the number of steps needed to break the method.

All techniques have their advantages and disadvantages. However, even if the complexity prevents the leakage, even if the algorithm has differential privacy, and even if the degree of anonymity is maximal, privacy might still be violated. For example, in an election with 3 voters, if 2 collude, the third voter's privacy is violated regardless of the algorithm used. In [ 191 ], we find how to break noise-based protocols for smart grids, even when they provide the property of differential privacy.

Cryptography should ensure privacy in the same way that it ensures security: an encrypted message should achieve the maximum value of the privacy metrics, just as it does for the security guarantees. To assess a scheme, we should consider the best known algorithm that breaks privacy and compute its worst-case complexity.

7.3.5 Probability

We can use probabilities to measure the chances of leakage. This approach is independent of the algorithm used to protect privacy.

For example, consider an election with 3 voters. If 2 voters cast yes and 1 voter casts no, an attacker knows that the probability of a given voter having cast yes is 2/3 and no is 1/3. The same logic applies as the number of voters and candidates grows.

Beyond the yes/no case, we may want to keep measured values private. Suppose attackers want to discover a time series of three points whose sum is known. They can represent each point by a number of stars, i.e., symbols ⋆ , and split the total number of stars into three boxes. If the sum of the series is 7, one possibility would be ⋆ ⋆ ⋆ ⋆ ⋆ ⋆ ⋆ . For simplicity, attackers can separate the stars with bars instead of boxes; hence, ⋆ ⋆ ⋆ ⋆ | ⋆ | ⋆ ⋆ denotes one possible solution. With this notation, the binomial coefficient of 7 stars plus 2 bars choose 7 stars gives the number of possible solutions, i.e., \( {7+2 \choose 7}=\frac {9!}{7!(9-7)!}=36.\)

Generalizing, if t is the number of points in a time series and s is its sum, then the number of possible time series among which the attackers must identify the correct one is given by s plus t − 1 choose s , i.e., \({s+t-1 \choose s}.\)
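This count can be computed directly; for the three-point series with sum 7 discussed above, it recovers the 36 possibilities:

```python
from math import comb

def possible_series(t: int, s: int) -> int:
    """Number of length-t non-negative integer series summing to s:
    C(s + t - 1, s), by the stars-and-bars argument."""
    return comb(s + t - 1, s)

print(possible_series(t=3, s=7))  # 36, as in the example above
```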

If we collect multiple time series, we can form a table, e.g., a list of candidates with the number of votes by state. The tallyman could reveal only the total number of voters by state and the total number of votes by candidate, from which an attacker could infer the possible number of votes by state [ 191 ]. Data from previous elections may help the estimation. The result of the election could be computed over encrypted data in a much more secure way than anonymization by k -anonymity, l -diversity, and t -closeness. Still, depending on the size of the table and its values, the time series can be found.

In general, we can consider measurements instead of values. Anonymization techniques try to reduce the number of measurements in the table. Counterintuitively, the smaller the number of measurements, the bigger the chances of discovering them [ 191 ].

If we consider privacy by design, we do not have already collected data.

7.4 Private-by-design data

Message is the common term for private-by-design data. Messages are transmitted, processed, and stored. In privacy-preserving protocols, individual messages should not be leaked. CryptDB Footnote 7 is an interesting tool that allows us to run queries over encrypted datasets. Although messages are stored in a dataset, they are encrypted with the users' keys. To keep performance reasonable, privacy-preserving protocols aggregate or consolidate messages and solve a specific problem.

7.4.1 Computing all operators

In theory, we can compute a Turing machine over encrypted data, i.e., we can use a technique called fully homomorphic encryption [ 192 ] to compute any operator over encrypted data. The big challenge of fully homomorphic encryption is performance. Hence, constructing a fully homomorphic encryption scheme for many application scenarios is a herculean task. The most usual operation is addition. Thus, most privacy-preserving protocols use additive homomorphic encryption [ 193 ] and DC-Nets (from “Dining Cryptographers”) [ 194 ]. Independent of the operation, the former generates functions, and the latter generates families of functions. We can construct an asymmetric DC-Net based on an additive homomorphic encryption scheme [ 194 ].
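As an illustration of additive homomorphic encryption, the sketch below implements a toy Paillier cryptosystem with deliberately tiny primes (insecure, for exposition only): multiplying two ciphertexts modulo n² decrypts to the sum of the plaintexts:

```python
import math
import random

# Toy Paillier parameters; real deployments use primes of >= 1024 bits.
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1  # standard simplification for the generator

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse, Python 3.8+

def enc(m: int, rng=random) -> int:
    """Probabilistic encryption: c = g^m * r^n mod n^2 with random r."""
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = rng.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: Enc(a) * Enc(b) mod n^2 decrypts to a + b (mod n).
a, b = enc(20), enc(22)
print(dec((a * b) % n2))  # 42
```

Note that, as discussed next, anyone holding two ciphertexts can combine them: this malleability is exactly what makes aggregation easy and privacy enforcement hard.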

7.4.2 Trade-off between enforcement and malleability

Privacy enforcement has a high cost. With DC-Nets, we can enforce privacy. However, every encrypted message needs to be included in the computation for users to decrypt and access the protocol output. This is good for privacy but bad for fault tolerance; for illustration, consider an election where all voters need to vote. Homomorphic encryption enables protocols to decrypt and produce output even with a missing encrypted message; indeed, it enables the decryption of a single encrypted message. Therefore, homomorphic encryption cannot enforce privacy; for illustration, consider an election where one can read and change all votes. Homomorphic encryption techniques are malleable, and DC-Nets are non-malleable. On the one hand, malleability simplifies the process and improves fault tolerance but prevents privacy enforcement. On the other hand, non-malleability enforces privacy but complicates the process and diminishes fault tolerance. In addition, key distribution with homomorphic encryption is easier than with DC-Net schemes.

7.4.3 Key distribution

Homomorphic encryption needs a public-private key pair, and whoever owns the private key controls all the information. Assume that a receiver generates the key pair and sends the public key to the senders over a secure communication channel. The senders then use the same key to encrypt their messages. Since homomorphic encryption schemes are probabilistic, senders can encrypt the same message under the same key and still produce ciphertexts that differ from each other. However, the receiver does not know who sent the encrypted messages.

A DC-Net needs a private key for each user and a public key for the protocol. Since DC-Nets do not distinguish senders from receivers, the users are usually named participants. They generate their own private keys. Practical symmetric DC-Nets require that participants send a key to each other over a secure communication channel. Afterward, each participant has a private key given by the list of shared keys. Hence, each participant encrypts by computing \(\mathfrak {M}_{i,j}\leftarrow \text {Enc}\left (m_{i,j}\right)=m_{i,j}+\sum _{o\in \mathcal {M}-\{i\}}\, \text {Hash}\left (s_{i,o}\ || \ j\right)-\text {Hash}\left (s_{o,i}\ || \ j\right),\) where m i , j is the message sent by participant i at time j , Hash is a secure hash function predefined by the participants, s i , o is the secret key sent from participant i to participant o , similarly, s o , i is the secret key received by i from o , and || is the concatenation operator. Each participant i can send the encrypted message \(\mathfrak {M}_{i,j}\) to every other participant. Thus, participants can decrypt the aggregated encrypted messages by computing \(\text {Dec}=\sum _{i\in \mathcal {M}}\, \mathfrak {M}_{i,j}=\sum _{i\in \mathcal {M}}\, m_{i,j}.\) Note that if one or more messages are missing, decryption is infeasible. Asymmetric DC-Nets do not require a private key based on shared keys: each participant simply generates a private key. Subsequently, they use a homomorphic encryption scheme or a symmetric DC-Net to add their private keys, generating the decryption key.
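The symmetric DC-Net encryption above can be sketched directly from the formula. The pairwise keys and messages below are hypothetical, and truncated SHA-256 plays the role of the predefined Hash function, with arithmetic modulo 2³²:

```python
import hashlib
from itertools import permutations

MOD = 2 ** 32

def h(key: bytes, round_no: int) -> int:
    """Hash(s || j): truncated SHA-256 of the shared key and the round number."""
    data = key + b"|" + str(round_no).encode()
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

# Pairwise shared keys: s[(i, o)] is the key participant i sent to participant o.
participants = [0, 1, 2]
s = {(i, o): f"key-{i}-{o}".encode() for i, o in permutations(participants, 2)}

def enc(i: int, m: int, j: int) -> int:
    """Enc(m_ij) = m_ij + sum_o Hash(s_io || j) - Hash(s_oi || j)  (mod 2^32)."""
    mask = sum(h(s[(i, o)], j) - h(s[(o, i)], j)
               for o in participants if o != i)
    return (m + mask) % MOD

j = 7  # round / time slot
messages = {0: 10, 1: 20, 2: 12}
ciphertexts = [enc(i, messages[i], j) for i in participants]
# Pairwise masks cancel only in the full aggregate, so decryption needs
# every ciphertext: Dec = sum of all encrypted messages.
print(sum(ciphertexts) % MOD)  # 42
```

Removing any one ciphertext from the sum leaves an unopened mask, which concretely shows why a missing message makes decryption infeasible.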

Homomorphic encryption schemes have lower overhead than DC-Nets for setting up and distributing keys. Symmetric DC-Nets need O ( I 2 ) messages to set up the keys, where I is the number of participants. Figure  5 depicts the messages needed to set up keys using (a) symmetric DC-Nets and (b) homomorphic encryption. Asymmetric DC-Nets can be set up more easily than symmetric DC-Nets, at the price of trusting the homomorphic encryption scheme.

figure 5

Setting up the keys. a Symmetric DC-Nets b Homomorphic encryption

7.4.4 Aggregation and consolidation

Aggregation and consolidation with DC-Nets are easier than with homomorphic encryption. Using DC-Nets, participants can simply broadcast their encrypted messages or send them directly to an aggregator. Using homomorphic encryption, senders cannot send encrypted messages directly to the receiver, who could decrypt individual messages. Instead, the senders must somehow aggregate the encrypted messages so that the receiver obtains only the encrypted aggregation, which is a challenge in homomorphic encryption and trivial in DC-Nets due to the trade-off described in Section 7.4.2 . In this work, we refer to DC-Nets as fully connected DC-Nets. For non-fully connected DC-Nets, aggregation is based on trust and raises new challenges. Sometimes, aggregation and consolidation are used as synonyms. However, consolidation is more complicated and generates more elaborate information than aggregation. For example, the aggregation of encrypted textual messages simply joins them, while the consolidation of encrypted textual messages generates a speech synthesis.

7.4.5 Performance

Fully homomorphic encryption tends to have big keys and requires prohibitive processing time. By contrast, asymmetric DC-Nets and partially homomorphic encryption normally use modular multi-exponentiations, which can be computed in logarithmic time [ 195 ]. Symmetric DC-Nets are efficient only for a small number of participants, because each participant needs to iterate over the other participants to encrypt a message. The number of participants is not relevant for asymmetric DC-Nets or for homomorphic encryption.
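The logarithmic-time modular exponentiation mentioned above is the classic square-and-multiply method, sketched below; Python's built-in three-argument pow implements the same idea:

```python
def mod_exp(base: int, exp: int, mod: int) -> int:
    """Square-and-multiply: O(log exp) modular multiplications."""
    result = 1
    base %= mod
    while exp:
        if exp & 1:               # multiply in the current square when the bit is set
            result = (result * base) % mod
        base = (base * base) % mod  # square once per bit of the exponent
        exp >>= 1
    return result

print(mod_exp(7, 560, 561) == pow(7, 560, 561))  # True
```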

8 Forensics

Digital forensics is a branch of forensic science addressing the recovery and investigation of material found in digital devices. Evidence collection and interpretation play a key role in forensics. Conventional forensic approaches separately address issues related to computer forensics and information forensics. There is, however, a growing trend in security and forensics research that utilizes interdisciplinary approaches to provide a rich set of forensic capabilities to facilitate the authentication of data as well as the access conditions including who, when, where, and how.

In this trend, there are two major types of forensic evidence [ 196 ]. One type is intrinsic to the device, the information processing chain, or the physical environment, taking such forms as the special characteristics associated with specific types of hardware or software processing or environments, the unique noise patterns that act as a signature of a specific device unit, certain regularities or correlations related to a particular device, processing step, or their combinations, and more. The other type is extrinsic, whereby specially designed data are proactively injected into the signals/data or into the physical world and later extracted and examined to infer or verify the hosting data's origin, integrity, processing history, or capturing environment.

Amid the convergence of digital and physical systems, with sensors, actuators and computing devices becoming closely tied together, an emerging framework called Proof-Carrying Sensing (PCS) has been proposed [ 197 ]. It was inspired by Proof-Carrying Code, a trusted-computing framework that associates foreign executables with a model proving that they have not been tampered with and that they function as expected. In the new UbiComp context of cyber-physical systems, where mobility and resource constraints are common, the physical world can be leveraged as a channel that encapsulates properties difficult to tamper with remotely, such as proximity and causality, in order to create a challenge-response function. The PCS framework can help authenticate devices, collected data, and locations. Compared to traditional multifactor or out-of-band authentication mechanisms, it has the unique advantage that authentication proofs are embedded in sensor data and can be continuously validated over time and space without running complicated cryptographic algorithms.

From the intrinsic/extrinsic viewpoint above, the physical data available to establish mutual trust in the PCS framework can be intrinsic to the physical environment (such as temperature, luminosity, noise, or electrical frequency) or extrinsic to it, i.e., actively injected by the device into the physical world. By monitoring the propagation of intrinsic or extrinsic data, a device can confirm its reception by other devices located within its vicinity. The challenges in designing and securely implementing such protocols can be addressed by combining expertise in signal processing, statistical detection and learning, cryptography, software engineering, and electronics.

To help appreciate the role of intrinsic and extrinsic evidence in addressing security and forensics in UbiComp, which involves both digital and physical elements, we now discuss two examples. Consider first an intrinsic signature of power grids. The electric network frequency (ENF) is the supply frequency of power distribution grids, with a nominal value of 60 Hz (North America) or 50 Hz (Europe). At any given time, the instantaneous ENF fluctuates around its nominal value as a result of the dynamic interaction between load variations in the grid and the control mechanisms for power generation. Owing to the interconnected nature of the grid, these variations are nearly identical at all locations of the same grid at a given time. The changing values of instantaneous ENF over time form an ENF signal, which can be intrinsically captured by audio/visual recordings (Fig.  6 ) or other sensors [ 198 , 199 ]. This has led to recent forensic applications, such as validating the time-of-recording of an ENF-containing multimedia signal and estimating its recording location using concurrent reference signals from power grids.

figure 6

An example of intrinsic evidence related to the power grid. Shown here are spectrograms of ENF signals in concurrent recordings of (a) audio, (b) visual, and (c) power mains. A cross-correlation study can show the similarity between the media and the power-line reference at different time lags, with a strong peak appearing at the temporal alignment of the matching grid
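The following Python sketch illustrates, on synthetic data, how such a time-of-recording application can work: the ENF trace extracted from a recording is slid along a concurrent grid reference, and the normalized cross-correlation peaks at the true temporal alignment. All signal parameters are invented for the example; real ENF extraction requires spectrogram analysis of actual recordings.

```python
# Matching a media ENF trace against a power-grid reference (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
# One ENF sample per second: nominal 60 Hz plus a slow random fluctuation.
reference = 60 + 0.02 * np.cumsum(rng.standard_normal(3600))

true_lag = 250                       # recording starts 250 s into the hour
media = reference[true_lag:true_lag + 600] + 0.001 * rng.standard_normal(600)

def ncc(a, b):
    """Normalized cross-correlation of two mean-removed segments."""
    a, b = a - a.mean(), b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

scores = [ncc(media, reference[lag:lag + media.size])
          for lag in range(reference.size - media.size + 1)]
best = int(np.argmax(scores))
assert best == true_lag              # strong peak at the temporal alignment
```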

Next, consider the recent work by Satchidanandan and Kumar [ 200 ], which introduces a notion of watermarking in cyber-physical systems that can be viewed as a class of extrinsic signatures. If an actuator injects into the system a properly designed probing signal that is unknown in advance to the other nodes, then, based on knowledge of the cyber-physical system’s dynamics and other properties, it can examine the sensors’ reports about the signals at various points and potentially infer whether there is malicious activity in the system and, if so, where and how.
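A toy version of this idea can be sketched as follows, assuming a trivial (identity) plant model and hypothetical noise levels: the actuator superimposes a secret zero-mean probing signal on its control input, and a reported measurement stream that lacks correlation with that probe betrays tampering.

```python
# Sketch of dynamic watermarking: honest sensor reports carry the secret
# probing signal; spoofed reports, however plausible, do not.
import numpy as np

rng = np.random.default_rng(1)
T = 5000
control = np.sin(0.01 * np.arange(T))        # publicly known nominal actuation
probe = 0.1 * rng.standard_normal(T)         # secret watermark, actuator-only

# Honest sensor: measures the actuated plant (identity dynamics + noise).
honest = control + probe + 0.05 * rng.standard_normal(T)
# Malicious sensor: replays plausible measurements without the watermark.
spoofed = control + 0.05 * rng.standard_normal(T) + 0.1 * rng.standard_normal(T)

def watermark_score(reported):
    """Correlation of the reported residual with the secret probe (~1 if present)."""
    residual = reported - control            # strip the publicly known part
    return float(np.dot(residual, probe) / (T * probe.var()))

assert watermark_score(honest) > 0.8         # watermark recovered
assert watermark_score(spoofed) < 0.2        # watermark absent: flag the sensor
```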

A major challenge and research opportunity lies in discovering and characterizing suitable intrinsic and extrinsic evidence. Although qualitative properties of some signatures are known, it is important to develop quantitative models that characterize normal and abnormal behavior in the context of the overall system. Along this line, exploring physical models may yield analytic approximations of such properties, while data-driven learning approaches can gather statistical characterizations of normal and abnormal behavior. Building on these elements, a strong synergy across the traditionally separate domains of computer forensics, information forensics, and device forensics should be developed to achieve comprehensive system-forensics capabilities in UbiComp.

9 Conclusion

In the words of Mark Weiser, Ubiquitous Computing is “the idea of integrating computers seamlessly into the world at large” [ 1 ]. Thus, far from being a phenomenon of our time, the design and practice of UbiComp systems were already being discussed a quarter of a century ago. In this article, we have revisited this notion, which permeates the most varied levels of our society, from a security and privacy point of view. In the coming years, these two topics will occupy much of the time of researchers and engineers. In our opinion, the use of this time should be guided by a few observations, which we list below:

UbiComp software is often produced as a combination of different programming languages, sharing a common core frequently implemented in a type-unsafe language such as C, C++, or assembly. Applications built in this domain tend to be distributed, and their analysis, e.g., via static analysis tools, needs to consider a holistic view of the system.

The long life span of some of these systems, coupled with the difficulty (both operational and cost-wise) of updating and re-deploying them, makes them vulnerable to the inexorable progress of technology and cryptanalysis techniques. This brings new (and possibly disruptive) players to this discussion, such as quantum adversaries.

Key management is a critical component of any secure or private real-world system. After security roles and key management procedures are clearly defined for all entities in the framework, a set of matching cryptographic primitives must be deployed. Physical access and constrained resources complicate the design of efficient and secure cryptographic algorithms, which are often vulnerable to side-channel attacks. Hence, current research challenges in this space include more efficient key management schemes, in particular supporting some form of revocation; the design of lightweight cryptographic primitives that facilitate correct and secure implementation; and cheaper side-channel countermeasures made available through advances in algorithms and embedded architectures.

Given the increasing popularization of UbiComp systems, people are becoming more and more dependent on their services for performing commercial, financial, medical, and social transactions. This rising dependence demands simultaneously high levels of reliability, availability, and security, which strengthens the importance of designing and implementing resilient UbiComp systems.

One of the main challenges in providing pervasive IdM is to ensure the authenticity of devices and users, and to provide adaptive authorization, in scenarios with multiple heterogeneous security domains.

Several databases currently store sensitive data, and a vast number of sensors are constantly collecting new sensitive data and storing them in clouds. Privacy-preserving protocols are being designed and perfected to enhance users’ privacy in specific scenarios. Cultural interpretations of privacy, the variety of laws, big data from legacy systems in clouds, processing time, latency, and key distribution and management, among the other challenges mentioned above, must all be addressed in developing such protocols.

The convergence between physical and digital systems poses both challenges and opportunities in offering forensic capabilities that facilitate the authentication of data as well as of access conditions, including who, when, where, and how; a synergistic use of intrinsic and extrinsic evidence, combined with interdisciplinary expertise, will be the key.

Given these observations, and the importance of ubiquitous computing, it is easy to conclude that the future holds fascinating challenges awaiting the attention of academia and industry.

Finally, note that the observations and predictions presented in this work regarding how UbiComp may evolve represent our view of the field based on today’s technology landscape. New scientific discoveries and technology inventions, as well as economic, social, and policy factors, may lead to new or different trends in the technology’s evolutionary paths.

https://competitions.cr.yp.to/caesar.html

https://csrc.nist.gov/projects/lightweight-cryptography

Deloitte’s annual Technology, Media and Telecommunications Predictions 2017 report: https://www2.deloitte.com/content/dam/Deloitte/global/Documents/Technology-Media-Telecommunications/gx-deloitte-2017-tmt-predictions.pdf

https://www.fiware.org .

OAuth 2.0 core authorization framework is described by IETF in RFC 6749 and other specifications and profiles.

https://arx.deidentifier.org/

https://css.csail.mit.edu/cryptdb/

Abbreviations

AAI: Authentication and Authorization Infrastructure

ABAC: Attribute-Based Access Control

ACL: Access Control List

AES: Advanced Encryption Standard

CBAC: Capability-Based Access Control

CFG: Control flow graph

CLC: Certificateless cryptography

DDoS: Distributed Denial of Service

ECC: Elliptic Curve Cryptography

ECP: Enhanced Client and Proxy

eID: Electronic identity

ENF: Electric network frequency

FIM: Federated Identity Management Model

HBS: Hash-Based Signatures

IB: Identity-based

IdM: Identity Management

IdP: Identity Provider

IdPaaS: Identity Provider as a Service

IoT: Internet of Things

MAC: Message Authentication Codes

MFA: Multi-factor authentication

PACS: Physical access control systems

PCS: Proof-Carrying Sensing

PDP: Policy Decision Point

PDPaaS: Policy Decision Point as a Service

PEP: Policy Enforcement Point

PKI: Public key infrastructures

PQC: Post-quantum cryptography

QC: Quantum cryptography

RBAC: Role-Based Access Control

SP: Service Provider

SSO: Single Sign-On

UbiComp: Pervasive and ubiquitous computing

XACML: eXtensible Access Control Markup Language

References

Weiser M. The computer for the 21st century. Sci Am. 1991; 265(3):94–104.

Weiser M. Some computer science issues in ubiquitous computing. Commun ACM. 1993; 36(7):75–84.

Lyytinen K, Yoo Y. Ubiquitous computing. Commun ACM. 2002; 45(12):63–96.

Estrin D, Govindan R, Heidemann JS, Kumar S. Next century challenges: Scalable coordination in sensor networks. In: MobiCom’99. New York: ACM: 1999. p. 263–70.

Pottie GJ, Kaiser WJ. Wireless integrated network sensors. Commun ACM. 2000; 43(5):51–8.

Ashton K. That ’Internet of Things’ Thing. RFiD J. 2009; 22:97–114.

Atzori L, Iera A, Morabito G. The internet of things: a survey. Comput Netw. 2010; 54(15):2787–805.

Mann S. Wearable computing: A first step toward personal imaging. Computer. 1997; 30(2):25–32.

Martin T, Healey J. 2006’s wearable computing advances and fashions. IEEE Pervasive Comput. 2007; 6(1):14–6.

Lee EA. Cyber-physical systems-are computing foundations adequate. In: NSF Workshop On Cyber-Physical Systems: Research Motivation, Techniques and Roadmap, volume 2. Citeseer: 2006.

Rajkumar RR, Lee I, Sha L, Stankovic J. Cyber-physical systems: the next computing revolution. In: 47th Design Automation Conference. ACM: 2010.

Abowd GD, Mynatt ED. Charting past, present, and future research in ubiquitous computing. ACM Trans Comput Human Interact (TOCHI). 2000; 7(1):29–58.

Stajano F. Security for ubiquitous computing.Hoboken: Wiley; 2002.

Pierce BC. Types and programming languages, 1st edition. Cambridge: The MIT Press; 2002.

Cousot P, Cousot R. Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints. In: POPL. New York: ACM: 1977. p. 238–52.

McMillan KL. Symbolic model checking. Norwell: Kluwer Academic Publishers; 1993.

Leroy X. Formal verification of a realistic compiler. Commun ACM. 2009; 52(7):107–15.

Rice HG. Classes of recursively enumerable sets and their decision problems. Trans Amer Math Soc. 1953; 74(1):358–66.

Wilson RP, Lam MS. Efficient context-sensitive pointer analysis for c programs. In: PLDI. New York: ACM: 1995. p. 1–12.

Cadar C, Dunbar D, Engler D. KLEE: Unassisted and automatic generation of high-coverage tests for complex systems programs. In: OSDI. Berkeley: USENIX: 2008. p. 209–24.

Coppa E, Demetrescu C, Finocchi I. Input-sensitive profiling. In: PLDI. New York: ACM: 2012. p. 89–98.

Graham SL, Kessler PB, McKusick MK. gprof: a call graph execution profiler (with retrospective). In: Best of PLDI. New York: ACM: 1982. p. 49–57.

Godefroid P, Klarlund N, Sen K. Dart: directed automated random testing. In: PLDI. New York: ACM: 2005. p. 213–23.

Nethercote N, Seward J. Valgrind: a framework for heavyweight dynamic binary instrumentation. In: PLDI. New York: ACM: 2007. p. 89–100.

Luk C-K, Cohn R, Muth R, Patil H, Klauser A, Lowney G, Wallace S, Reddi VJ, Hazelwood K. Pin: Building customized program analysis tools with dynamic instrumentation. In: PLDI. New York: ACM: 2005. p. 190–200.

Rimsa AA, D’Amorim M, Pereira FMQ. Tainted flow analysis on e-SSA-form programs. In: CC. Berlin: Springer: 2011. p. 124–43.

Serebryany K, Bruening D, Potapenko A, Vyukov D. Addresssanitizer: a fast address sanity checker. In: ATC. Berkeley: USENIX: 2012. p. 28.

Russo A, Sabelfeld A. Dynamic vs. static flow-sensitive security analysis. In: CSF. Washington: IEEE: 2010. p. 186–99.

Carlini N, Barresi A, Payer M, Wagner D, Gross TR. Control-flow bending: On the effectiveness of control-flow integrity. In: SEC. Berkeley: USENIX: 2015. p. 161–76.

Klein G, Elphinstone K, Heiser G, Andronick J, Cock D, Derrin P, Elkaduwe D, Engelhardt K, Kolanski R, Norrish M, Sewell T, Tuch H, Winwood S. sel4: Formal verification of an os kernel. In: SOSP. New York: ACM: 2009. p. 207–20.

Jourdan J-H, Laporte V, Blazy S, Leroy X, Pichardie D. A formally-verified c static analyzer. In: POPL. New York: ACM: 2015. p. 247–59.

Soares LFG, Rodrigues RF, Moreno MF. Ginga-NCL: the declarative environment of the brazilian digital tv system. J Braz Comp Soc. 2007; 12(4):1–10.

Maas AJ, Nazaré H, Liblit B. Array length inference for c library bindings. In: ASE. New York: ACM: 2016. p. 461–71.

Fedrecheski G, Costa LCP, Zuffo MK. ISCE. Washington: IEEE: 2016.

Rellermeyer JS, Duller M, Gilmer K, Maragkos D, Papageorgiou D, Alonso G. The software fabric for the internet of things. In: IOT. Berlin, Heidelberg: Springer-Verlag: 2008. p. 87–104.

Furr M, Foster JS. Checking type safety of foreign function calls. ACM Trans Program Lang Syst. 2008; 30(4):18:1–18:63.

Dagenais B, Hendren L. OOPSLA. New York: ACM: 2008. p. 313–28.

Melo LTC, Ribeiro RG, de Araújo MR, Pereira FMQ. Inference of static semantics for incomplete c programs. Proc ACM Program Lang. 2017; 2(POPL):29:1–29:28.

Godefroid P. Micro execution. In: ICSE. New York: ACM: 2014. p. 539–49.

Manna Z, Waldinger RJ. Toward automatic program synthesis. Commun ACM. 1971; 14(3):151–65.

López HA, Marques ERB, Martins F, Ng N, Santos C, Vasconcelos VT, Yoshida N. Protocol-based verification of message-passing parallel programs. In: OOPSLA. New York: ACM: 2015. p. 280–98.

Bronevetsky G. Communication-sensitive static dataflow for parallel message passing applications. In: CGO. Washington: IEEE: 2009. p. 1–12.

Teixeira FA, Machado GV, Pereira FMQ, Wong HC, Nogueira JMS, Oliveira LB. Siot: Securing the internet of things through distributed system analysis. In: IPSN. New York: ACM: 2015. p. 310–21.

Lhoták O, Hendren L. Context-sensitive points-to analysis: Is it worth it? In: CC. Berlin, Heidelberg: Springer: 2006. p. 47–64.

Agha G. An overview of actor languages. In: OOPWORK. New York: ACM: 1986. p. 58–67.

Haller P, Odersky M. Actors that unify threads and events. In: Proceedings of the 9th International Conference on Coordination Models and Languages. COORDINATION’07. Berlin, Heidelberg: Springer-Verlag: 2007. p. 171–90.

Imam SM, Sarkar V. Integrating task parallelism with actors. In: OOPSLA. New York: ACM: 2012. p. 753–72.

Cousot P, Cousot R, Logozzo F. A parametric segmentation functor for fully automatic and scalable array content analysis. In: POPL. New York: ACM: 2011. p. 105–18.

Nazaré H, Maffra I, Santos W, Barbosa L, Gonnord L, Pereira FMQ. Validation of memory accesses through symbolic analyses. In: OOPSLA. New York: ACM: 2014.

Paisante V, Maalej M, Barbosa L, Gonnord L, Pereira FMQ. Symbolic range analysis of pointers. In: CGO. New York: ACM: 2016. p. 171–81.

Maalej M, Paisante V, Ramos P, Gonnord L, Pereira FMQ. Pointer disambiguation via strict inequalities. In: Proceedings of the 2017 International Symposium on Code Generation and Optimization, CGO ’17 . Piscataway: IEEE Press: 2017. p. 134–47.

Maalej M, Paisante V, Pereira FMQ, Gonnord L. Combining range and inequality information for pointer disambiguation. Sci Comput Program. 2018; 152(C):161–84.

Sui Y, Fan X, Zhou H, Xue J. Loop-oriented pointer analysis for automatic simd vectorization. ACM Trans Embed Comput Syst. 2018; 17(2):56:1–56:31.

Poovendran R. Cyber-physical systems: Close encounters between two parallel worlds [point of view]. Proc IEEE. 2010; 98(8):1363–6.

Conti JP. The internet of things. Commun Eng. 2006; 4(6):20–5.

Rinaldi SM, Peerenboom JP, Kelly TK. Identifying, understanding, and analyzing critical infrastructure interdependencies. IEEE Control Syst. 2001; 21(6):11–25.

US Bureau of Transportation Statistics BTS. Average age of automobiles and trucks in operation in the united states. 2017. Accessed 14 Sept 2017.

U.S. Department of Transportation. IEEE 1609 - Family of Standards for Wireless Access in Vehicular Environments WAVE. 2013.

Maurer M, Gerdes JC, Lenz B, Winner H. Autonomous driving: technical, legal and social aspects.Berlin: Springer; 2016.

Patel N. 90% of startups fail: Here is what you need to know about the 10%. 2015. https://www.forbes.com/sites/neilpatel/2015/01/16/90-of-startups-will-fail-heres-what-you-need-to-know-about-the-10/ . Accessed 09 Sept 2018.

Jacobsson A, Boldt M, Carlsson B. A risk analysis of a smart home automation system. Futur Gener Comput Syst. 2016; 56(Supplement C):719–33.

Rivest RL, Shamir A, Adleman LM. A method for obtaining digital signatures and public-key cryptosystems. Commun ACM. 1978; 21(2):120–6.

Miller VS. Use of elliptic curves in cryptography. In: CRYPTO, volume 218 of Lecture Notes in Computer Science. Berlin: Springer: 1985. p. 417–26.

Koblitz N. Elliptic curve cryptosystems. Math Comput. 1987; 48(177):203–9.

Barbulescu R, Gaudry P, Joux A, Thomé E. A heuristic quasi-polynomial algorithm for discrete logarithm in finite fields of small characteristic. In: EUROCRYPT 2014. Berlin: Springer: 2014. p. 1–16.

Diffie W, Hellman M. New directions in cryptography. IEEE Trans Inf Theor. 2006; 22(6):644–54.

Barker E. Federal Information Processing Standards Publication (FIPS PUB) 186-4 Digital Signature Standard (DSS). 2013.

Barker E, Johnson D, Smid M. Special publication 800-56A recommendation for pair-wise key establishment schemes using discrete logarithm cryptography. 2006.

Simon DR. On the power of quantum computation. In: Symposium on Foundations of Computer Science (SFCS 94). Washington: IEEE Computer Society: 1994. p. 116–23.

Knill E. Physics: quantum computing. Nature. 2010; 463(7280):441–3.

Grover LK. A fast quantum mechanical algorithm for database search. In: Proceedings of ACM STOC 1996. New York: ACM: 1996. p. 212–19.

Shor PW. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J Comput. 1997; 26(5):1484–509.

McEliece RJ. A public-key cryptosystem based on algebraic coding theory. Deep Space Netw. 1978; 44:114–6.

Merkle RC. Secrecy, authentication and public key systems / A certified digital signature. PhD thesis, Stanford. 1979.

Regev O. On lattices, learning with errors, random linear codes, and cryptography. In: Proceedings of ACM STOC ’05. STOC ’05. New York: ACM: 2005. p. 84–93.

Buchmann J, Dahmen E, Hülsing A. Xmss - a practical forward secure signature scheme based on minimal security assumptions In: Yang B-Y, editor. PQCrypto. Berlin: Springer: 2011. p. 117–29.

McGrew DA, Curcio M, Fluhrer S. Hash-Based Signatures. Internet Engineering Task Force (IETF). 2017. https://datatracker.ietf.org/doc/html/draft-mcgrew-hash-sigs-13 . Accessed 9 Sept 2018.

Bennett CH, Brassard G. Quantum cryptography: public key distribution and coin tossing. In: Proceedings of IEEE ICCSSP’84. New York: IEEE Press: 1984. p. 175–9.

Bos J, Costello C, Ducas L, Mironov I, Naehrig M, Nikolaenko V, Raghunathan A, Stebila D. Frodo: Take off the ring! practical, quantum-secure key exchange from LWE. Cryptology ePrint Archive, Report 2016/659. 2016. http://eprint.iacr.org/2016/659 .

Alkim E, Ducas L, Pöppelmann T, Schwabe P. Post-quantum key exchange - a new hope. Cryptology ePrint Archive, Report 2015/1092. 2015. http://eprint.iacr.org/2015/1092 .

Misoczki R, Tillich J-P, Sendrier N, PBarreto LSM. MDPC-McEliece: New McEliece variants from moderate density parity-check codes. In: IEEE International Symposium on Information Theory – ISIT’2013. Istambul: IEEE: 2013. p. 2069–73.

Hoffstein J, Pipher J, Silverman JH. Ntru: A ring-based public key cryptosystem. In: International Algorithmic Number Theory Symposium. Berlin: Springer: 1998. p. 267–88.

Bos J, Ducas L, Kiltz E, Lepoint T, Lyubashevsky V, Schanck JM, Schwabe P, Stehlé D. Crystals–kyber: a CCA-secure module-lattice-based KEM. IACR Cryptol ePrint Arch. 2017; 2017:634.

Aragon N, Barreto PSLM, Bettaieb S, Bidoux L, Blazy O, Deneuville J-C, Gaborit P, Gueron S, Guneysu T, Melchor CA, Misoczki R, Persichetti E, Sendrier N, Tillich J-P, Zemor G. BIKE: Bit flipping key encapsulation. Submission to the NIST Standardization Process on Post-Quantum Cryptography. 2017. https://csrc.nist.gov/Projects/Post-Quantum-Cryptography/Round-1-Submissions .

Barreto PSLM, Gueron S, Gueneysu T, Misoczki R, Persichetti E, Sendrier N, Tillich J-P. Cake: Code-based algorithm for key encapsulation. In: IMA International Conference on Cryptography and Coding. Berlin: Springer: 2017. p. 207–26.

Jao D, De Feo L. Towards quantum-resistant cryptosystems from supersingular elliptic curve isogenies. In: International Workshop on Post-Quantum Cryptography. Berlin: Springer: 2011. p. 19–34.

Costello C, Jao D, Longa P, Naehrig M, Renes J, Urbanik D. Efficient compression of sidh public keys. In: Annual International Conference on the Theory and Applications of Cryptographic Techniques. Berlin: Springer: 2017. p. 679–706.

Jao D, Azarderakhsh R, Campagna M, Costello C, DeFeo L, Hess B, Jalali A, Koziel B, LaMacchia B, Longa P, Naehrig M, Renes J, Soukharev V, Urbanik D. SIKE: Supersingular isogeny key encapsulation. Submission to the NIST Standardization Process on Post-Quantum Cryptography. 2017. https://csrc.nist.gov/Projects/Post-Quantum-Cryptography/Round-1-Submissions .

Galbraith SD, Petit C, Shani B, Ti YB. On the security of supersingular isogeny cryptosystems. In: International Conference on the Theory and Application of Cryptology and Information Security. Berlin: Springer: 2016. p. 63–91.

National Institute of Standards and Technology (NIST). Standardization Process on Post-Quantum Cryptography. 2016. http://csrc.nist.gov/groups/ST/post-quantum-crypto/ . Accessed 9 Sept 2018.

McGrew D, Kampanakis P, Fluhrer S, Gazdag S-L, Butin D, Buchmann J. State management for hash-based signatures. In: International Conference on Research in Security Standardization. Springer: 2016. p. 244–60.

Bernstein DJ, Hopwood D, Hülsing A, Lange T, Niederhagen R, Papachristodoulou L, Schneider M, Schwabe P, Wilcox-O’Hearn Z. SPHINCS: Practical Stateless Hash-Based Signatures. Berlin, Heidelberg: Springer Berlin Heidelberg; 2015. p. 368–97.

Barker E, Barker W, Burr W, Polk W, Smid M. Recommendation for key management part 1: General (revision 3). NIST Spec Publ. 2012; 800(57):1–147.

Waters B. Ciphertext-policy attribute-based encryption: An expressive, efficient, and provably secure realization. In: Public Key Cryptography. LNCS, 6571 vol.Berlin: Springer: 2011. p. 53–70.

Liu Z, Wong DS. Practical attribute-based encryption: Traitor tracing, revocation and large universe. Comput J. 2016; 59(7):983–1004.

Oliveira LB, Aranha DF, Gouvêa CPL, Scott M, Câmara DF, López J, Dahab R. Tinypbc: Pairings for authenticated identity-based non-interactive key distribution in sensor networks. Comput Commun. 2011; 34(3):485–93.

Kim T, Barbulescu R. Extended tower number field sieve: A new complexity for the medium prime case. In: CRYPTO (1). LNCS, 9814 vol.Berlin: Springer: 2016. p. 543–71.

Boneh D, Franklin MK. Identity-based encryption from the weil pairing. SIAM J Comput. 2003; 32(3):586–615.

Al-Riyami SS, Paterson KG. Certificateless public key cryptography. In: ASIACRYPT. LNCS, 2894 vol.Berlin: Springer: 2003. p. 452–73.

Boldyreva A, Goyal V, Kumar V. Identity-based encryption with efficient revocation. IACR Cryptol ePrint Arch. 2012; 2012:52.

Simplício Jr. MA, Silva MVM, Alves RCA, Shibata TKC. Lightweight and escrow-less authenticated key agreement for the internet of things. Comput Commun. 2017; 98:43–51.

Neto ALM, Souza ALF, Cunha ÍS, Nogueira M, Nunes IO, Cotta L, Gentille N, Loureiro AAF, Aranha DF, Patil HK, Oliveira LB. Aot: Authentication and access control for the entire iot device life-cycle. In: SenSys. New York: ACM: 2016. p. 1–15.

Mouha N. The design space of lightweight cryptography. IACR Cryptol ePrint Arch. 2015; 2015:303.

Daemen J, Rijmen V. The Design of Rijndael: AES - The Advanced Encryption Standard. Information Security and Cryptography. Berlin: Springer; 2002.

Grosso V, Leurent G, Standaert F-X, Varici K. Ls-designs: Bitslice encryption for efficient masked software implementations. In: FSE. LNCS, 8540 vol.Berlin: Springer: 2014. p. 18–37.

Dinu D, Perrin L, Udovenko A, Velichkov V, Großschädl J, Biryukov A. Design strategies for ARX with provable bounds: Sparx and LAX. In: ASIACRYPT (1). LNCS, 10031 vol.Berlin: Springer: 2016. p. 484–513.

Albrecht MR, Driessen B, Kavun EB, Leander G, Paar C, Yalçin T. Block ciphers - focus on the linear layer (feat. PRIDE). In: CRYPTO (1). LNCS, 8616 vol.Berlin: Springer: 2014. p. 57–76.

Beierle C, Jean J, Kölbl S, Leander G, Moradi A, Peyrin T, Sasaki Y, Sasdrich P, Sim SM. The SKINNY family of block ciphers and its low-latency variant MANTIS. In: CRYPTO (2). LNCS, 9815 vol.Berlin: Springer: 2016. p. 123–53.

Bogdanov A, Knudsen LR, Leander G, Paar C, Poschmann A, Robshaw MJB, Seurin Y, Vikkelsoe C. PRESENT: an ultra-lightweight block cipher. In: CHES. LNCS, 4727 vol.Berlin: Springer: 2007. p. 450–66.

Reis TBS, Aranha DF, López J. PRESENT runs fast - efficient and secure implementation in software. In: CHES, volume 10529 of Lecture Notes in Computer Science. Berlin: Springer: 2017. p. 644–64.

Aumasson J-P, Bernstein DJ. Siphash: A fast short-input PRF. In: INDOCRYPT. LNCS, 7668 vol.Berlin: Springer: 2012. p. 489–508.

Kölbl S, Lauridsen MM, Mendel F, Rechberger C. Haraka v2 - efficient short-input hashing for post-quantum applications. IACR Trans Symmetric Cryptol. 2016; 2016(2):1–29.

Aumasson J-P, Neves S, Wilcox-O’Hearn Z, Winnerlein C. BLAKE2: simpler, smaller, fast as MD5. In: ACNS. LNCS, 7954 vol.Berlin: Springer: 2013. p. 119–35.

Stevens M, Karpman P, Peyrin T. Freestart collision for full SHA-1. In: EUROCRYPT (1). LNCS, 9665 vol.Berlin: Springer: 2016. p. 459–83.

NIST Computer Security Division. SHA-3 Standard: Permutation-Based Hash and Extendable-Output Functions. FIPS Publication 202, National Institute of Standards and Technology, U.S. Department of Commerce, May 2014.

McGrew DA, Viega J. The security and performance of the galois/counter mode (GCM) of operation. In: INDOCRYPT. LNCS, 3348 vol.Berlin: Springer: 2004. p. 343–55.

Koblitz N. A family of jacobians suitable for discrete log cryptosystems. In: CRYPTO, volume 403 of LNCS. Berlin: Springer: 1988. p. 94–99.

Bernstein DJ. Curve25519: New diffie-hellman speed records. In: Public Key Cryptography. LNCS, 3958 vol.Berlin: Springer: 2006. p. 207–28.

Bernstein DJ, Duif N, Lange T, Schwabe P, Yang B-Y. High-speed high-security signatures. J Cryptographic Eng. 2012; 2(2):77–89.

Costello C, Longa P. FourQ: Four-dimensional decompositions on a Q-curve over the Mersenne prime. In: ASIACRYPT (1). LNCS, 9452 vol. Berlin: Springer: 2015. p. 214–35.

Banik S, Bogdanov A, Regazzoni F. Exploring energy efficiency of lightweight block ciphers. In: SAC. LNCS, 9566 vol.Berlin: Springer: 2015. p. 178–94.

Dinu D, Corre YL, Khovratovich D, Perrin L, Großschädl J, Biryukov A. Triathlon of lightweight block ciphers for the internet of things. NIST Workshop on Lightweight Cryptography. 2015.

Kocher PC. Timing attacks on implementations of diffie-hellman, rsa, dss, and other systems. In: CRYPTO. LNCS, 1109 vol.Berlin: Springer: 1996. p. 104–13.

Rodrigues B, Pereira FMQ, Aranha DF. Sparse representation of implicit flows with applications to side-channel detection In: Zaks A, Hermenegildo MV, editors. Proceedings of the 25th International Conference on Compiler Construction, CC 2016, Barcelona, Spain, March 12-18, 2016. New York: ACM: 2016. p. 110–20.


Acknowledgments

We would like to thank Artur Souza for fruitful discussions that contributed to this work.

This work was partially supported by the CNPq, NSF, RNP, FAPEMIG, FAPERJ, and CAPES.

Availability of data and materials

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

Author information

Authors and affiliations

UFMG, Av. Antônio Carlos, 6627, Prédio do ICEx, Anexo U, sala 6330 Pampulha, Belo Horizonte, MG, Brasil

Leonardo B. Oliveira

Federal University of Minas Gerais, Belo Horizonte, Brasil

Fernando Magno Quintão Pereira

Intel Labs, Hillsboro, OR, USA

Rafael Misoczki

University of Campinas, Campinas, Brasil

Diego F. Aranha

National Laboratory for Scientific Computing, Petrópolis, Brasil

Fábio Borges

Federal University of Paraná, Curitiba, Brasil

Michele Nogueira

Universidade do Vale do Itajaí, Florianópolis, Brasil

Michelle Wangham

University of Maryland, College Park, MD, USA

Microsoft Research, Redmond, WA, USA


Contributions

All authors wrote and reviewed the manuscript. Mainly, LBO focused on the introduction and the whole paper conception, FM focused on Software Protection, RM focused on Long-Term Security, DFA focused on Cryptographic Engineering, MN focused on Resilience, MSW focused on Identity Management, FB focused on Privacy, MW focused on Forensics, and JL focused on the conclusion and the whole paper conception. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Leonardo B. Oliveira.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional information

Authors' information

Leonardo B. Oliveira is an associate professor of the CS Department at UFMG, a visiting associate professor of the CS Department at Stanford, and a research productivity fellow of the Brazilian Research Council (CNPq). Leonardo has been awarded the Microsoft Research Ph.D. Fellowship Award, the IEEE Young Professional Award, and the Intel Strategic Research Alliance Award. He published papers on the security of IoT/Cyber-Physical Systems in publication venues like IPSN and SenSys, and he is the (co)inventor of an authentication scheme for IoT (USPTO Patent Application No. 62287832). Leonardo served as General Chair and TPC Chair of the Brazilian Symposium on Security (SBSeg) in 2014 and 2016, respectively, and as a member in the Advisory Board of the Special Interest Group on Information and Computer System Security (CESeg) of the Brazilian Computer Society. He is a member of the Technical Committee of Identity Management (CT-GId) of the Brazilian National Research and Education Network (RNP).

Fernando M. Q. Pereira is an associate professor at UFMG's Computer Science Department. He received his Ph.D. from the University of California, Los Angeles, in 2008, and has since done research in the field of compilers. He seeks to develop techniques that let programmers produce safe, yet efficient code. Fernando's portfolio of analyses and optimizations is available at http://cuda.dcc.ufmg.br/ . Some of these techniques have found their way into important open-source projects, such as LLVM, PHC and Firefox.

Rafael Misoczki is a Research Scientist at Intel Labs, USA. His work is focused on post-quantum cryptography and conventional cryptography. He contributes to international standardization efforts on cryptography (expert member of the USA delegation for ISO/IEC JTC1 SC27 WG2, expert member of INCITS CS1, and submitter to the NIST standardization competition on post-quantum cryptography). He holds a PhD degree from Sorbonne Universités (University of Paris - Pierre et Marie Curie), France (2013). He also holds an MSc. degree in Electrical Engineering (2010) and a BSc. degree in Computer Science (2008), both from the Universidade de São Paulo, Brazil.

Diego F. Aranha is an Assistant Professor in the Institute of Computing at the University of Campinas (Unicamp). He holds a PhD degree in Computer Science from the University of Campinas and worked as a visiting PhD student for one year at the University of Waterloo. His professional experience is in Cryptography and Computer Security, with a special interest in the efficient implementation of cryptographic algorithms and security analysis of real-world systems. He coordinated the first team of independent researchers capable of detecting and exploiting vulnerabilities in the software of the Brazilian voting machine during controlled tests organized by the electoral authority. He received the Google Latin America Research Award for research on privacy twice, and the MIT TechReview's Innovators Under 35 Brazil Award for his work on electronic voting.

Fábio Borges is a Professor in the doctoral program at the Brazilian National Laboratory for Scientific Computing (LNCC in Portuguese). He holds a doctoral degree (Dr.-Ing.) from the Department of Computer Science at TU Darmstadt, a master's degree in Computational Modeling from LNCC, and a bachelor's degree in Mathematics from Londrina State University (UEL). Currently, he is conducting research at LNCC in the fields of algorithms, security, privacy, and smart grids. Further information can be found at http://www.lncc.br/~borges/ .

Michele Nogueira is an Associate Professor of the Computer Science Department at Federal University of Paraná. She received her doctorate in Computer Science from the UPMC — Sorbonne Universités, Laboratoire d'Informatique de Paris VI (LIP6) in 2009. Her research interests include wireless networks, security and dependability. She has been working for many years on providing resilience to self-organized, cognitive and wireless networks through adaptive and opportunistic approaches. Dr. Nogueira was one of the pioneers in addressing survivability issues in self-organized wireless networks, with her works "A Survey of Survivability in Mobile Ad Hoc Networks" and "An Architecture for Survivable Mesh Networking" among her prominent scientific contributions. She is an Associate Technical Editor for the IEEE Communications Magazine and the Journal of Network and Systems Management. She serves as Vice-chair for the IEEE ComSoc - Internet Technical Committee. She is an ACM and IEEE Senior Member.

Michelle S. Wangham is a Professor at University of Vale do Itajaí (Brazil). She received her M.Sc. and Ph.D. in Electrical Engineering from the Federal University of Santa Catarina (UFSC) in 2004. Recently, she was a Visiting Researcher at the University of Ottawa. Her research interests are vehicular networks, security in embedded and distributed systems, identity management, and network security. She is a consultant of the Brazilian National Research and Education Network (RNP), acting as the coordinator of the Identity Management Technical Committee (CT-GID) and a member of the Network Monitoring Technical Committee. Since 2013, she has been coordinating the GIdLab project, a testbed for R&D in Identity Management.

Min Wu received the B.E. degree (Highest Honors) in electrical engineering - automation and the B.A. degree (Highest Honors) in economics from Tsinghua University, Beijing, China, in 1996, and the Ph.D. degree in electrical engineering from Princeton University in 2001. Since 2001, she has been with the University of Maryland, College Park, where she is currently a Professor and a University Distinguished Scholar-Teacher. She leads the Media and Security Team, University of Maryland, where she is involved in information security and forensics and multimedia signal processing. She has coauthored two books and holds nine U.S. patents on multimedia security and communications. Dr. Wu coauthored several papers that won awards from the IEEE, ACM, and EURASIP, respectively. She also received an NSF CAREER award in 2002, a TR100 Young Innovator Award from the MIT Technology Review Magazine in 2004, an ONR Young Investigator Award in 2005, a ComputerWorld “40 Under 40” IT Innovator Award in 2007, an IEEE Mac Van Valkenburg Early Career Teaching Award in 2009, a University of Maryland Invention of the Year Award in 2012 and in 2015, and an IEEE Distinguished Lecturer recognition in 2015–2016. She has served as the Vice President-Finance of the IEEE Signal Processing Society (2010–2012) and the Chair of the IEEE Technical Committee on Information Forensics and Security (2012–2013). She is currently the Editor-in-Chief of the IEEE Signal Processing Magazine. She was elected IEEE Fellow for contributions to multimedia security and forensics.

Dr. Jie Liu is a Principal Researcher at Microsoft AI and Research, Redmond, WA. His research interests are rooted in sensing and interacting with the physical world through computing. Examples include time, location, and energy awareness, and Internet/Intelligence of Things. He has published broadly in areas such as sensor networking, embedded devices, mobile and ubiquitous computing, and data center management. He has received 6 best paper awards in top academic conferences in these fields. In addition, he holds more than 100 patents. He is the Steering Committee chair of Cyber-Physical Systems (CPS) Week and ACM/IEEE IPSN, and a Steering Committee member of ACM SenSys. He is an Associate Editor of ACM Trans. on Sensor Networks, was an Associate Editor of IEEE Trans. on Mobile Computing, and has chaired a number of top-tier conferences. Among other recognitions, he received the Leon Chua Award from UC Berkeley in 2001, the Technology Advance Award from (Xerox) PARC in 2003, and a Gold Star Award from Microsoft in 2008. He received his Ph.D. degree in Electrical Engineering and Computer Sciences from UC Berkeley in 2001, and his Master's and Bachelor's degrees from the Department of Automation, Tsinghua University, Beijing, China. From 2001 to 2004, he was a research scientist at the Palo Alto Research Center (formerly Xerox PARC). He is an ACM Distinguished Scientist and an IEEE Senior Member.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Cite this article

Oliveira, L., Pereira, F., Misoczki, R. et al. The computer for the 21st century: present security & privacy challenges. J Internet Serv Appl 9, 24 (2018). https://doi.org/10.1186/s13174-018-0095-2

Download citation

Received : 13 April 2018

Accepted : 27 August 2018

Published : 04 December 2018

DOI : https://doi.org/10.1186/s13174-018-0095-2


Keywords: Cryptography


Operationalizing the Information Age, Knowledge Economy, 21st Century


Though only a few weeks into my doctoral work, I have already started to finalize the Problem of Practice that I plan to address through my applied dissertation. Based on my ongoing work with EdTechTeacher, I am exploring the scalability of high-quality, sustained professional development that would help teachers design student-centric, inquiry-based, technology-rich curricula to facilitate the development of the problem-solving and critical-thinking skills required for success in the knowledge economy/Information Age/21st Century.

My professors, in reviewing some of my initial work, issued a challenge. While I did my best to define all of those terms, I need to operationalize the concept of the Information Age, Knowledge Economy, or 21st Century (whichever term I choose) so that it becomes more concrete. To compare these three terms and see which might have more associated resources, I used the Google Ngram Viewer as well as the New York Times Chronicle to track their usage over time. Based on this analysis, I discovered that, judging purely by the frequency of each term in books and articles, I would have far more success researching the 21st Century than the Information Age or Knowledge Economy.

Chart created with Google Ngram Viewer

Chart created with Chronicle

Information Age

In terms of historical and educational context, the idea of preparing students for the Information Age featured prominently in the 1983 report A Nation at Risk. According to Mehta (2013), A Nation at Risk intrinsically linked educational success to economic success and called for students to become globally competitive in a postindustrial Information Age. I have started to think of this term as a historical descriptor. Think about the concept of the Enlightenment, the Renaissance, or the Industrial Revolution. Each of these eras possessed unique qualities that manifested themselves within society, culture, economics, and education.

The Renaissance can be characterized by advances in art and literature. Assembly lines, mass production, and a migration to urban areas illustrate the Industrial Revolution. With the Information Age, individuals have started to move online, rapidly adopting new technologies and communication strategies that allow them unprecedented access to content and data as well as the ability to connect to individuals around the world.

Knowledge Economy

The Knowledge Economy, at least for my purposes, can then be viewed as the economic principle of the Information Age, evolving from advances in technology. In this new economy, "human work will increasingly shift toward two kinds of tasks: solving problems for which standard operating procedures do not currently exist, and working with new information -- acquiring it, making sense of it, communicating it to others" (Levy & Murnane, 2013). Combine these work requirements with the increasing globalization and connectedness of the Information Age, and new skills will be required if people hope to remain competitive in the marketplace.

21st Century

21st Century may be the most prolific buzzword as it encompasses both the concepts of Information Age and Knowledge Economy. Essentially, the 21st Century will be marked in time from January 1, 2000 until December 31, 2099. Within that time frame, the Information Age and Knowledge Economy will exist.

Where the 20th Century was characterized by the end of the Industrial Revolution, the rise of Nationalism, and the dawn of the Digital Age, the 21st Century will possess its own unique qualities as it builds upon the tenets of the Information Age.

As evidenced by a recent report from the Pew Research Center, since the start of the century, Internet usage among Americans has increased from 52% to 84%, and adults between the ages of 18-49 have over a 93% adoption rate. As this younger demographic continues to drive the economy, and more early adopters enter the workforce, ubiquitous technology will become a hallmark of at least the first part of the century. Considering that most people communicated via mail or telegram at the turn of the 20th century and by cellphone at its end, who knows what may exist by 2099.

Operationalizing my Problem of Practice

The 21st Century spans 100 years. Currently, it encompasses the Information Age - an era marked by rapid adoption of new technologies. This Information Age is being fueled by a Knowledge Economy that values problem solving and critical thinking over the rote skills of the Industrial era. That said, the 21st Century may also ultimately include other ages and economies that have not yet come into existence. For this reason, teachers need to help students acquire the critical-thinking, creativity, communication, and collaboration skills that will benefit them as they enter the workforce. In order to successfully do this, teachers will need to shift their classroom practices. That shift will require professional development and support, which does not currently exist for a majority of K-12 public school teachers.

The opinions expressed in EdTech Researcher are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.


BUILDING SKILLS FOR LIFE

This report makes the case for expanding computer science education in primary and secondary schools around the world, and outlines the key challenges standing in the way. Through analysis of regional and national education systems at various stages of progress in implementing computer science education programs, the report offers transferable lessons learned across a wide range of settings with the aim that all students—regardless of income level, race, or sex—can one day build foundational skills necessary for thriving in the 21st century.


Introduction

Access to education has expanded around the world since the late 1990s through the combined efforts of governments, bilateral and multilateral agencies, donors, civil society, and the private sector, yet education quality has not kept pace. Even before the COVID-19 pandemic led to school closures around the world, many young people were not developing the broad suite of skills they need to thrive in work, life, and citizenship (Filmer, Langthaler, Stehrer, & Vogel, 2018).

The impact of the pandemic on education investment, student learning, and longer-term economic outcomes threatens not only to dial back progress to date in addressing this learning crisis in skills development but also to further widen learning gaps within and between countries. Beyond the immediate and disparate impacts of COVID-19 on students’ access to quality learning, the global economic crisis it has precipitated will shrink government budgets, potentially resulting in lower education investment and impacting the ability to provide quality education (Vegas, 2020). There is also a concern that as governments struggle to reopen schools and/or provide sufficient distance-learning opportunities, many education systems will focus on foundational skills, such as literacy and numeracy, neglecting a broader set of skills needed to thrive in a rapidly changing, technologically-advanced world.

Among these broader skills, knowledge of computer science (CS) is increasingly relevant. CS is defined as “the study of computers and algorithmic processes, including their principles, their hardware and software designs, their [implementation], and their impact on society” (Tucker, 2003). 1 CS skills enable individuals to understand how technology works, and how best to harness its potential to improve lives. The goal of CS education is to develop computational thinking skills, which refer to the “thought processes involved in expressing solutions as computational steps or algorithms that can be carried out by a computer” (K-12 Computer Science Framework Steering Committee, 2016). CS education is also distinct from computer or digital literacy, in that it is more concerned with computer design than with computer use. For example, coding is a skill one would learn in a CS course, while creating a document or slideshow presentation using an existing program is a skill one would learn in a computer or digital literacy course.
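To make the distinction above concrete, here is a minimal, hypothetical Python sketch of the kind of artifact a CS course teaches students to design: a solution expressed as explicit computational steps, i.e., an algorithm. Editing the same text in a word processor would instead be an exercise in digital literacy. The function name and the example text are illustrative, not taken from any curriculum.

```python
# A simple algorithm: count how often each word occurs in a text.
# Designing the steps (tokenize, normalize, tally) is an exercise
# in computational thinking.
def word_frequencies(text):
    counts = {}
    for word in text.lower().split():
        word = word.strip(".,;:!?")  # normalize: drop trailing punctuation
        if word:
            counts[word] = counts.get(word, 0) + 1
    return counts

print(word_frequencies("To be, or not to be."))
# {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

Coding exercises of this kind foreground the thought process of decomposing a problem into steps a computer can carry out, which is the core of computational thinking as defined above.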

Research has shown that students benefit from CS education through increased college enrollment rates and stronger problem-solving abilities (Brown & Brown, 2020; Salehi et al., 2020). Research has also shown that lessons in computational thinking improve student response inhibition, planning, and coding skills (Arfé et al., 2020). Importantly, CS skills pay off in the labor market through a higher likelihood of employment and better wages (Hanson & Slaughter, 2016; Nager & Atkinson, 2016). As these skills take preeminence in the rapidly changing 21st century, CS education promises to significantly enhance student preparedness for the future of work and active citizenship.

The benefits of CS education extend beyond economic motivations. Given the increasing integration of technology into many aspects of daily life in the 21st century, a functional knowledge of how computers work—beyond the simple use of applications—will help all students.

Why expand CS education?

By this point, many countries have begun making progress toward offering CS education more universally for their students. The specific reasons for offering it will be as varied as the countries themselves, though economic arguments often top the list of motivations. Other considerations beyond economics, however, are also relevant, and we account for the most common of these varied motives here.

The economic argument

At the macroeconomic level, previous research has suggested that countries with more workers with ICT (information and communications technology) skills will have higher economic growth through increases in productivity (Maryska, Doucek, & Kunstova, 2012; Jorgenson & Vu, 2016). Recent global data indicate that there is a positive relationship between the share of a country’s workforce with ICT skills and its economic growth. For example, using data from the Organisation for Economic Co-operation and Development (OECD), we find that countries with a higher share of graduates from an ICT field tend to have higher growth rates of per capita GDP (Figure 1). The strength of the estimated relationship here is noteworthy: A one percentage point increase in the share of ICT graduates correlates with nearly a quarter percentage point increase in recent economic growth, though we cannot determine the causal nature of this relationship (if any). Nonetheless, this figure supports the common view that economic growth follows from greater levels of investment in technological education.

At the microeconomic level, CS skills pay off for individuals—both for those who later choose to specialize in CS and those who do not. Focusing first on the majority of students who pursue careers outside of CS, foundational training in CS is still beneficial. Technology is becoming more heavily integrated across many industrial endeavors and academic disciplines—not just those typically included under the umbrella of science, technology, engineering, and mathematics (STEM) occupations. Careers from law to manufacturing to retail to health sciences all use computing and data more intensively now than in decades past (Lemieux, 2014). For example, using data from Germany, researchers showed that higher education programs in CS compared favorably against many other fields of study, producing a relatively high return on investment for lower risk (Glocker & Storck, 2014). Notably, completing advanced training in CS is not necessary to attain these benefits; rather, even short introductions to foundational skills in CS can increase young students’ executive functions (Arfé et al., 2020). Further, those with CS training develop better problem-solving abilities compared to those with more general training in math and sciences, suggesting that CS education offers unique skills not readily developed in other more common subjects (Salehi et al., 2020).

For those who choose to pursue advanced CS studies, specializing in CS pays off both in employment opportunities and earnings. For example, data from the U.S. show workers with CS skills are less likely to be unemployed than workers in other occupations (Figure 2). Moreover, the average earnings for workers with CS skills are higher than for workers in other occupations (Figure 3). These results are consistent across multiple studies using U.S. data (Carnevale et al., 2013; Altonji et al., 2012) and international data (Belfield et al., 2019; Hastings et al., 2013; Kirkeboen et al., 2016). Further, the U.S. Bureau of Labor Statistics has projected that the market for CS professionals will continue to grow at twice the speed of the rest of the labor market between 2014 and 2024 (National Academies of Sciences, 2018).

A common, though inaccurate, perception about the CS field is that anybody with a passion for technology can succeed without formal training. There is a nugget of truth in this view, as many leaders of major technology companies including Bill Gates, Elon Musk, Mark Zuckerberg, and many others have famously risen to the top of the field despite not having bachelor’s degrees in CS. Yet, it is a fallacy to assume that these outliers are representative of most who are successful in the field. This misconception could lead observers to conclude that investments in universal CS education are, at best, ineffective: providing skills to people who would learn them on their own regardless, and spending resources on developing skills in people who will not use them. However, such conclusions are not supported by empirical evidence. Rather, across STEM disciplines, including CS, those with higher levels of training and educational attainment enjoy stronger employment outcomes, on average, than those with lesser levels of training in the same fields (Altonji et al., 2016; Altonji & Zhong, 2021).

The inequality argument

Technology—and particularly unequal access to its benefits—has been a key driver of social and economic inequality within countries. That is, those with elite social status or higher wealth have historically gotten access to technology first for their private advantages, which tends to reinforce preexisting social and economic inequalities. Conversely, providing universal access to CS education and computing technologies can enable those with lower access to technological resources the opportunity to catch up and, consequently, mitigate these inequalities. Empirical studies have shown how technological skills or occupations, in particular, have reduced inequalities between groups or accelerated the assimilation of immigrants (Hanson and Slaughter, 2017; DeVol, 2016).

Technology and CS education are likewise frequently considered critical in narrowing income gaps between developed and developing countries. This argument can be particularly compelling for low-income countries, as global development gaps can only be expected to widen if low-income countries’ investments in these domains falter while high-income countries continue to move ahead. Rather, strategic and intensive technological investment is frequently seen as a key strategy for less-developed countries to leapfrog stages of economic development to quickly catch up to more advanced countries (Fong, 2009; Lee, 2019).

CS skills enable adaptation in a quickly changing world, and adaptability is critical to progress in society and the economy. Perhaps there is no better illustration of the ability to thrive and adapt than the COVID-19 pandemic. The pandemic has forced closures of many public spaces across the globe, though those closures’ impacts have been disproportionately felt across workers and sectors. Workers with the skills and abilities to move their job functions online have generally endured the pandemic more comfortably than those without those skills. And even more importantly, the organizations and private companies that had the human capacity to identify how technology could be utilized and applied to their operations could adapt in the face of the pandemic, while those without the resources to pivot their operations have frequently been forced to close in the wake of pandemic-induced restrictions. Thus, the pandemic bestowed comparative benefits on those with access to technology, the skills to use it, and the vision to recognize and implement novel applications quickly, while often punishing those with the least access and resources (OECD, 2021).

Failing to invest in technology and CS education may result in constrained global competitiveness, leaving governments less able to support their citizens. We recognize that efforts to expand CS education will demand time and money of public officials and school leaders, often in the face of other worthy competing demands. Though the contemporary costs may even seem prohibitive in some cases, the costs of inaction (while less immediately visible) are also real and meaningful in most contexts.

Beyond economics

We expect the benefits of CS education to extend beyond economic motivations, as well. Many household activities that were previously performed in real life are now often performed digitally, ranging from banking and shopping to travel planning and socializing. A functional knowledge of how computers work—beyond the simple use of applications—should benefit all students as they mature into adults given the increasing integration of technology into many aspects of daily life in the 21st century. For example, whether a person wants to find a job or a romantic partner, these activities frequently occur through the use of technology, and understanding how matching algorithms work makes for more sophisticated technology users. Familiarity with basic CS principles can provide users more flexibility in the face of constant innovation and make them less vulnerable to digital security threats or predators (Livingstone et al., 2011). Many school systems now provide lessons in online safety for children, and those lessons will presumably be more effective if children have a foundational understanding of how the internet works.
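
The core idea behind such matching can be sketched in a few lines of Python. This is a toy model: the profiles, names, and scoring rule below are invented for illustration, and real job or dating platforms use far more elaborate, proprietary algorithms:

```python
# Toy matching sketch: rank candidates by overlap of recorded interests.
# All data here is hypothetical; the scoring rule (Jaccard similarity)
# is a standard textbook measure, not any platform's actual method.

def match_score(profile_a: set, profile_b: set) -> float:
    """Jaccard similarity: shared items divided by all distinct items."""
    if not profile_a and not profile_b:
        return 0.0
    return len(profile_a & profile_b) / len(profile_a | profile_b)

def best_match(user: set, candidates: dict) -> str:
    """Return the candidate whose profile scores highest against the user."""
    return max(candidates, key=lambda name: match_score(user, candidates[name]))

user = {"python", "music", "hiking"}
candidates = {
    "ada":   {"python", "chess"},
    "grace": {"python", "music", "running"},
}
print(best_match(user, candidates))  # → grace (two shared interests vs. one)
```

Even this simple sketch conveys the insight that matters for users: matches are driven by overlap in recorded data, so what a platform knows about a person shapes what it shows them.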

Global advances in expanding CS education

To better understand what is needed to expand CS education, we first took stock of the extent to which countries around the world have integrated CS education into primary and secondary schools, and how this varied by region and income level. We also reviewed the existing literature on integrating CS into K-12 education to gain a deeper understanding of the key barriers and challenges to expanding CS education globally. Then, we selected jurisdictions at various stages of progress in implementing CS education programs from multiple regions of the world and income levels, and drafted in-depth case studies on the origins, key milestones, barriers, and challenges of CS expansion.

Progress in expanding CS education across the globe

As shown in Figure 4, the extent to which CS education is offered in primary and secondary schools varies across the globe. Countries with mandatory CS education are geographically clustered in Eastern Europe and East Asia. Most states and provinces in the U.S. and Canada offer CS on a school-to-school basis or as an elective course. Multiple countries in Western Europe offer CS education as a cross-curricular topic integrated into other subjects. Latin America and Central and Southeast Asia have the most countries that have announced CS education programs or pilot projects. Countries in Africa and the Middle East have integrated the least amount of CS education into school curricula. Nevertheless, the number of countries piloting programs or adopting CS curricula indicates a global trend of more education systems integrating the subject into their curricula.

As expected, students living in higher-income countries generally have better access to CS education. As Figure 5 shows, 43 percent of high-income countries require students to learn CS in primary and/or secondary schools. High-income countries also offer CS as an elective course to the largest share of the population. A further 35 percent of high-income countries offer CS on a school-to-school basis while not making it mandatory for all schools. Interestingly, upper-middle income countries host the largest share of students (62 percent) who are required to learn CS at any point in primary or secondary schools. Presumably, many upper-middle income countries have national economic development strategies focused on expanding tech-related jobs, and thus see the need to expand the labor force with CS skills. By contrast, only 5 percent of lower-middle income countries require CS during primary or secondary school, while 58 percent may offer CS education on a school-to-school basis.

Key barriers and challenges to expand CS education globally

To expand quality CS education, education systems must overcome enormous challenges. Many countries do not have enough teachers who are qualified to teach CS, and even though there is growing interest among students in pursuing CS, relatively few students pursue more advanced training like CS testing certifications (Department for Education, 2019) or CS undergraduate majors compared to other STEM fields like engineering or biology (Hendrickson, 2019). This is especially true for girls and underrepresented minorities, who generally have fewer opportunities to develop an interest in CS and STEM more broadly (Code.org & CSTA, 2018). Our review of the literature identified four key challenges to expanding CS education:

1. Providing access to ICT infrastructure to students and educators

Student access to ICT infrastructure, including both personal access to computing devices and an internet connection, is critical to a robust CS education. Without this infrastructure, students cannot easily integrate CS skills into their daily lives, and they will have few opportunities to experiment with new approaches on their own.

However, some initiatives have succeeded by introducing elements of CS education in settings without adequate ICT infrastructure. For example, many educators use alternative learning strategies like CS Unplugged to teach CS and computational thinking when computers are unavailable (Bell & Vahrenhold, 2018). One study shows that analog lessons can help primary school students develop computational thinking skills (Harris, 2018). Even without laptops or desktop computers, it is still possible for teachers to use digital tools for computational thinking. In South Africa, Professor Jean Greyling of Nelson Mandela University Computing Sciences co-created Tanks, a game that uses puzzle pieces and a mobile application to teach coding to children (Ellis, 2021). This is an especially useful concept, as many households and schools in South Africa and other developing countries have smartphones and access to analog materials but do not have access to personal computers or broadband connectivity (McCrocklin, 2021).

Taking a full CS curriculum to scale, however, requires investing in adequate access to ICT infrastructure for educators and students (Lockwood & Cornell, 2013). Indeed, as discussed in Section 3, our analysis of numerous case studies indicates that ICT infrastructure in schools provides a critical foundation to expand CS education.

2. Ensuring qualified teachers through teacher preparation and professional development

Many education systems encounter shortages of qualified CS teachers, contributing to a major bottleneck in CS expansion. A well-prepared and knowledgeable teacher is the most important component for instruction in commonly taught subjects (Chetty et al., 2014a, 2014b; Rivkin et al., 2005). We suspect this is no different for CS, though major deficiencies in the necessary CS skills among the teacher workforce are evident. For example, in a survey of preservice elementary school teachers in the United States, only 10 percent responded that they understood the concept of computational thinking (Campbell & Heller, 2019). As recently as 2015, 75 percent of teachers in the U.S. incorrectly considered “creating documents or presentations on the computer” a topic one would learn in a CS course (Google & Gallup, 2015), demonstrating a poor understanding of the distinction between CS and computer literacy. Other case studies, surveys, and interviews have found that teachers in India, Saudi Arabia, the U.K., and Turkey self-report low confidence in their understanding of CS (Ramen et al., 2015; Alfayez & Lambert, 2019; Royal Society, 2017; Gülbahar & Kalelioğlu, 2017). Indeed, developing the necessary skills and confidence levels for teachers to offer effective CS instruction remains challenging.

To address these challenges, school systems have introduced continuous professional development (PD), postgraduate certification programs, and CS credentials issued by teacher education degree programs. PD programs are common approaches, as they utilize the existing teacher workforce to fill the needs for special skills, rather than recruiting specialized teachers from outside the school system. For example, the British Computer Society created 10 regional university-based hubs to lead training activities, including lectures and meetings, to facilitate collaboration as part of its Network of Excellence (Dickens, 2016; Heintz et al., 2016; Royal Society, 2017). Most hubs involve multi-day seminars and workshops meant to familiarize teachers with CS concepts and provide ongoing support to help teachers as they encounter new challenges in the classroom. Cutts et al. (2017) further recommend teacher-led PD groups so that CS teachers can form collaborative professional networks. Various teacher surveys have found these PD programs in CS helpful (Alkaria & Alhassan, 2017; Goode et al., 2014). Still, more evidence is needed on the effectiveness of PD programs in CS education specifically (Hill, 2009).

Less commonly, some education systems have worked with teacher training institutions to introduce certification schemes so teachers can signal their unique qualifications in CS to employers. This signal can make teacher recruitment more transparent and incentivize more teachers to pursue training. This approach does require, though, an investment in developing CS education faculty at the teacher training institution, which may be a critical bottleneck in many places (Delyser et al., 2018). Advocates of the approach have recommended that school systems initiate certification schemes quickly and with a low bar at first, followed by improvement over time (Code.org, 2017; Lang et al., 2013; Sentance & Csizmadia, 2017). Short-term recommendations include giving temporary licenses to teachers who meet minimum content and knowledge requirements. Long-term recommendations, on the other hand, encourage preservice teachers to take CS courses as part of their teaching degree programs or in-service teachers to take CS courses as part of their graduate studies to augment their skillset. Upon completing these courses, teachers would earn a full CS endorsement or certificate.

3. Fostering student engagement and interest in CS education

Surveys from various countries suggest that despite a clear economic incentive, relatively few K-12 students express interest in pursuing advanced CS education. For example, 3 out of 4 U.S. students in a recent survey declared no interest in pursuing a career in computer science. And the differences by gender are notable: Nearly three times as many male students (33 percent) compared to female students (12 percent) expressed interest in pursuing a computer science career in the future (Google & Gallup, 2020).

Generally, parents view CS education favorably but also hold distinct misconceptions. For instance, more than 80 percent of U.S. parents surveyed in a Google and Gallup (2016) study reported that they think CS is as important as any other discipline. Nevertheless, the same parents indicated biases around who should take CS courses: 57 percent of parents think that one needs to be “very smart” to learn CS (Google & Gallup, 2015). Researchers have equated this kind of thinking to the idea that some people could be inherently gifted or inept at CS, a belief that could discourage some students from developing an interest or talent in CS (McCartney, 2017). Contrary to this belief, Patitsas et al. (2019) found that only 5.8 percent of university-level exam distributions were multimodal, indicating that most classes did not have a measurable divide between those who were inherently gifted and those who were not. This signals that CS is no more specialized to specific groups of students than any other subject.

Fostering student engagement, however, does not equate to developing a generation of programmers. Employment projections suggest the future demand for workers with CS skills will likely outpace supply in the absence of promoting students’ interest in the field. Yet, no countries expand access to CS education with the expectation of turning all students into computer programmers. Forcing students into career paths that are unnatural fits for their interests and skill levels results in worse outcomes for students at the decision margins (Kirkeboen et al., 2016). Rather, current engagement efforts both expose students to foundational skills that help navigate technology in 21st century life and provide opportunities for students to explore technical fields.

A lack of diversity in CS education not only excludes some people from accessing high-paying jobs, but it also reduces the number of students who would enter and succeed in the field (Du & Wimmer, 2019). Girls and racial minorities have been historically underrepresented in CS education (Sax et al., 2016). Research indicates that the diversity gap is not due to innate talent differences among demographic groups (Sullivan & Bers, 2012; Cussó-Calabuig et al., 2017), but rather a disparity of access to CS content (Google & Gallup 2016; Code.org & CSTA, 2018; Du & Wimmer, 2019), widely held cultural perceptions, and poor representation of women and underrepresented minorities (URMs) among industry leaders and in media depictions (Google & Gallup, 2015; Ayebi-Arthur, 2011; Downes & Looker, 2011).

To help meet the demand for CS professionals, government and philanthropic organizations have implemented programs that familiarize students with CS. By increasing interest among K-12 students who may eventually pursue CS professions, these strategies have the potential to address the well-documented lack of diversity in the tech industry (Harrison, 2019; Ioannou, 2018). For example, some have used short, one-time lessons in coding to reduce student anxiety around CS. Of these lessons, perhaps the best known is Hour of Code, designed by Code.org. In multiple surveys, students indicated more confidence after exposure to this program (Phillips & Brooks, 2017; Doukaki et al., 2013; Lang et al., 2016). It is not clear, however, whether these programs make students more likely to consider semester-long CS courses (Phillips & Brooks, 2017; Lang et al., 2016).

Other initiatives create more time-intensive programs for students. The U.S. state of Georgia, for example, implemented a program involving after-school, weekend, and summer workshops over a six-year period. Georgia saw an increase in participation in the Advanced Placement (AP) CS exam during the duration of the program, especially among girls and URMs (Guzdial et al., 2014). Other states have offered similar programs, setting up summer camps and weekend workshops in universities to help high school students become familiar with CS (Best College Reviews, 2021). These initiatives, whether one-off introductions to CS or time-intensive programs, typically share the explicit goal of encouraging participation in CS education among all students, and especially girls and URMs.

Yet, while studies indicate that Hour of Code and summer camps might improve student enthusiasm for CS, they do not provide the kind of rigorous impact assessment one would need to draw a definitive conclusion about their effectiveness. These studies do not use a valid control group, meaning there is no like-for-like comparison with students who are similar in every respect except exposure to the program. It is not clear, for example, that the increase in girls and URMs taking CS would not have happened without Georgia’s after-school clubs.

4. Generating and using evidence on curriculum and core competencies, instructional methods, and assessment

There is no one-size-fits-all CS curriculum for all education systems, schools, or classrooms. Regional contexts, school infrastructure, prior access, and exposure to CS need to be considered when developing CS curricula and competencies (Ackovska et al., 2015). Some CS skills, such as programming language, require access to computer infrastructure that may be absent in some contexts (Lockwood & Cornell, 2013). Rather than prescribing a curriculum, the U.S. K-12 Computer Science Framework Steering Committee (2016) recommends foundational CS concepts and competencies for education systems to consider. This framework encourages curriculum developers and educators to create learning experiences that extend beyond the framework to encompass student interests and abilities.

There is increasing consensus around what core CS competencies students should master when they complete primary and secondary education. Core competencies that students may learn by the end of primary school include:

  • abstraction—creating a model to solve a problem;
  • generalization—remixing and reusing resources that were previously created;
  • decomposition—breaking a complex task into simpler subtasks;
  • algorithmic thinking—defining a series of steps for a solution, putting instructions in the correct sequence, and formulating mathematical and logical expressions;
  • programming—understanding how to code a solution using the available features and syntax of a programming language or environment; and
  • debugging—recognizing when instructions do not correspond to actions and then removing or fixing errors (Angeli, 2016).
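
A short, hypothetical Python example shows how several of these primary-level competencies (decomposition, algorithmic thinking, and debugging) might appear together in one small task. The task and names are illustrative only:

```python
# Hypothetical classroom task: report the average of a list of test scores,
# solved by decomposing the problem into simpler subtasks.

def total(scores):
    """Decomposition: one simple subtask, summing the scores."""
    result = 0
    for s in scores:      # algorithmic thinking: a defined sequence of steps
        result += s
    return result

def average(scores):
    """The subtasks combined into the full solution."""
    if not scores:        # debugging: a guard added after testing empty input
        return 0
    return total(scores) / len(scores)

print(average([80, 90, 100]))  # 90.0
```

The empty-list guard is the kind of fix a student discovers through debugging: running the code on an edge case, recognizing that the instructions do not match the intended behavior, and correcting the error.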

Competencies that secondary school students may learn in CS courses include:

  • logical and abstract thinking;
  • representations of data, including various kinds of data structures;
  • problem-solving by designing and programming algorithms using digital devices;
  • performing calculations and executing programs;
  • collaboration; and,
  • ethics such as privacy and data security (Syslo & Kwiatkowska, 2015).
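
Again as a hypothetical sketch, a classic secondary-level exercise combines two of these competencies, representing data with a structure (here, a stack) and designing an algorithm around it: checking whether the brackets in an expression are balanced.

```python
# Illustrative secondary-level exercise: use a stack (a data structure)
# and a simple algorithm to check bracket balance in an expression.

PAIRS = {")": "(", "]": "[", "}": "{"}

def balanced(expression: str) -> bool:
    stack = []                                # the data structure
    for ch in expression:
        if ch in "([{":
            stack.append(ch)                  # remember each opening bracket
        elif ch in PAIRS:
            if not stack or stack.pop() != PAIRS[ch]:
                return False                  # closer without a matching opener
    return not stack                          # every opener must be closed

print(balanced("(a + [b * c])"))  # True
print(balanced("(a + b]"))        # False
```

The same last-in, first-out idea underlies how real language tools parse nested code, which is why exercises like this recur in secondary curricula.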

Several studies have described various methods for teaching CS core competencies. Integrated development environments are recommended especially for teaching coding skills (Florez et al., 2017; Saez-Lopez et al., 2016). These environments include block-based programming languages that encourage novice programmers to engage with programming, in part by alleviating the burden of syntax on learners (Weintrop & Wilensky, 2017; Repenning, 1993). Others recommended a variety of teaching methods that blend computerized lessons with offline activities (Taub et al., 2009; Curzon et al., 2009; Ackovska et al., 2015). This approach is meant to teach core concepts of computational thinking while keeping students engaged in physical, as well as digital, environments (Nishida et al., 2009). CS Unplugged, for example, provides kinesthetic lesson plans that include games and puzzles that teach core CS concepts like decomposition and algorithmic thinking.

Various studies have also attempted to measure the effectiveness of traditional lecture-based instruction for CS (Alhassan, 2017; Cicek & Taspinar, 2016). These studies, however, rely on small sample sizes wherein the experimental and control groups each comprised a single class. More rigorous research is required to understand the effectiveness of teaching strategies for CS.

No consensus has emerged on the best ways to assess student competency in core CS concepts (So et al., 2019; Djambong & Freiman, 2016). Though various approaches to assessment are widely available—including classical cognitive tests, standardized tests in digital environments, and CS Unplugged activity tests—too many countries have yet to introduce regular assessments that could evaluate various curricula or instructional methods in CS. Several assessments for CS and computational thinking have been developed at various grade levels as part of research studies, but broader use has faced challenges: a lack of large-scale studies using these assessments, the diversity of programming environments used to teach programming and CS, or simply a lack of interest in objective tests of learning (as opposed to student projects and portfolios).

Fortunately, a growing number of organizations are developing standardized tests in CS and computational thinking. For example, the International Computer and Information Literacy Study included examinations in computational thinking in 2018 that had two 25-minute modules, where students were asked to develop a sequence of tasks in a program that related to a unified theme (Fraillon et al., 2020). The OECD’s PISA will also include questions in 2021 to assess computational thinking across countries (Schleicher & Partovi, 2019). The AP CS exam has also yielded useful comparisons that have indirectly evaluated CS teacher PD programs (Brown & Brown, 2019).

In summary, the current evidence base provides little consensus on the specific means of scaling a high-quality CS education and leaves wide latitude for experimentation. Consequently, in this report we do not offer prescriptions on how to expand CS education, even while arguing that expanding access to it generally is beneficial for students and the societies that invest in it. Given the current (uneven) distribution of ICT infrastructure and CS education resources, high-quality CS education may be at odds with expanded access. While we focus on ensuring universal access first, it is important to recognize that as CS education scales both locally and globally, the issues of curricula, pedagogies, instructor quality, and evaluation naturally become more pressing.

Lessons from education systems that have introduced CS education

Based on the available literature discussed in the previous section, we selected education systems that have implemented CS education programs and reviewed their progress through in-depth case studies. Intentionally, we selected jurisdictions at various levels of economic development, at different levels of progress in expanding CS education, and from different regions of the world. They include Arkansas (U.S.), British Columbia (Canada), Chile, England, Italy, New Brunswick (Canada), Poland, South Africa, South Korea, Thailand, and Uruguay. For each case, we reviewed the historical origins for introducing CS education and the institutional arrangements involved in CS education’s expansion. We also analyzed how the jurisdictions addressed the common challenges of ensuring CS teacher preparation and qualification, fostering student demand for CS education (especially among girls and URMs), and how they developed curriculum, identified core competencies, promoted effective instruction, and assessed students’ CS skills. In this section, we draw lessons from these case studies, which can be downloaded and read in full at the bottom of this page.

Figure 6 presents a graphical representation summarizing the trajectories of the case study jurisdictions as they expanded CS education. Together, the elements in the figure provide a rough approximation of how CS education has expanded in recent years in each case. For example, when South Korea focused its efforts on universal CS education in 2015, basic ICT infrastructure and broadband connectivity were already available in all schools and two CS education expansion policies had been previously implemented. Its movement since 2015 is represented purely in the vertical policy action space, as it moved up four intervals on the index. Uruguay, conversely, started expanding its CS education program at a lower level both in terms of ICT infrastructure (x-axis) and existing CS policies (y-axis). Since starting CS expansion efforts in 2007, though, it has built a robust ICT infrastructure in its school systems and implemented 4 of 7 possible policy actions.

Figure 6 suggests that first securing access to ICT infrastructure and broadband connectivity allows systems to dramatically improve access to and the quality of CS education. Examples include England, British Columbia, South Korea, and Arkansas. At the same time, Figure 6 suggests that systems that face the dual challenge of expanding ICT infrastructure and broadband connectivity and scaling the delivery of quality CS education, such as Chile, South Africa, Thailand, and Uruguay, may require more time and/or substantial investment to expand quality CS education to match the former cases. Even though Chile, Thailand, and especially Uruguay have made impressive progress since their CS education expansion efforts began, they continue to lag a few steps behind those countries that started with established ICT infrastructure in place.

Our analysis of these case studies surfaced six key lessons (Figure 7) for governments wishing to take CS education to scale in primary and secondary schools, which we discuss in further detail below.

1. Expanding tech-based jobs is a powerful lever for expanding CS education

In several of the case studies, economic development strategies were the underlying motivation to introduce or expand CS education. For example, Thailand’s 2017 20-year Strategic Plan marked the beginning of CS education in that country. The 72-page document, approved by the Thai Cabinet and Parliament, explained how Thailand could become a more “stable, prosperous, and sustainable” country and proposed to reform the education curriculum to prepare students for future labor demands (20-year National Strategy comes into effect, 2018). Similarly, Arkansas’s Governor Hutchinson made CS education a key part of his first campaign in 2014 (CS for All, n.d.), stating that “Through encouraging computer science and technology as a meaningful career path, we will produce more graduates prepared for the information-based economy that represents a wide-open job market for our young people” (Arkansas Department of Education, 2019).

Uruguay’s Plan Ceibal, named after the country’s national flowering tree, was likewise introduced in 2007 as a presidential initiative to incorporate technology in education and help close a gaping digital divide in the country. The initiative’s main objectives were to promote digital inclusion, graduate employability, a national digital culture, higher-order thinking skills, gender equity, and student motivation (Jara, Hepp, & Rodriguez, 2018).

Last, in 2018, the European Commission issued the Digital Education Action Plan that enumerated key digital skills for European citizens and students, including CS and computational thinking (European Commission, 2018). The plan encouraged young Europeans to understand the algorithms that underpin the technologies they use on a regular basis. In response to the plan, Italy’s 2018 National Indications and New Scenarios report included a discussion on the importance of computational thinking and the potential role of educational gaming and robotics in enhancing learning outcomes (Giacalone, 2019). Then, in 2019, the Italian Ministry of Education and the Parliament approved a legislative motion to include CS and computational thinking in primary school curricula by 2022 (Orizzontescuola, 2019).

In some cases, the impetus to expand CS education came more directly from demands from key stakeholders, including industry and parents. For example, British Columbia’s CS education program traces back to calls from a growing technology industry (Doucette, 2016). In 2016, the province’s technology sector employed 86,000 people—more than the mining, forestry, and oil and gas sectors combined, with high growth projections (Silcoff, 2016). The same year, leaders of the province’s technology companies revealed in interviews that access to talent had become their biggest concern (KPMG, 2016). According to a 2016 B.C. Technology Association report, the province needed 12,500 more graduates in CS from tertiary institutions between 2015 and 2021 to fill unmet demand in the labor market (Orton, 2018). The economic justification for improving CS education in the province was clear.

Growing parental demand helped create the impetus for changes to the CS curriculum in Poland. According to Kozlowski (2016), Polish parents perceive CS professions as some of the most desirable options for their children. And given the lack of options for CS education in schools, parents often seek out extracurricular workshops for their children to encourage them to develop their CS skills (Panskyi, Rowinska, & Biedron, 2019). The lack of in-school CS options for students created the push for curricular reforms to expand CS in primary and secondary schools. As former Minister of Education Anna Zalewska declared, Polish students “cannot afford to waste time on [the] slow, arduous task of building digital skills outside school [and] only school education can offer systematic teaching of digital skills” (Szymański, 2016).

2. ICT in schools provides the foundation to expand CS education

Previous efforts to expand access to devices, connectivity, or basic computer literacy in schools provided a starting point in several jurisdictions to expand CS education. For example, the Uruguayan government built its CS education program after implementing expansive one-to-one computing projects, which made CS education affordable and accessible. In England, an ICT course was implemented in schools in the mid-1990s. These dedicated hours during the school day for ICT facilitated the expansion of CS education in the country.

The Chilean Enlaces program, developed in 1992 as a network of 24 universities, technology companies, and other organizations (Jara, Hepp, & Rodriguez, 2018; Sánchez & Salinas, 2008) sought to equip schools with digital tools and train teachers in their use (Severin, 2016). It provided internet connectivity and digital devices that enabled ICT education to take place in virtually all of Chile’s 10,000 public and subsidized private schools by 2008 (Santiago, Fiszbein, Jaramillo, & Radinger, 2017; Severin et al., 2016). Though Enlaces yielded few observable effects on classroom learning or ICT competencies (Sánchez & Salinas, 2008), the program provided the infrastructure needed to begin CS education initiatives years later.

While a history of ICT expansion can serve as a base for CS education, institutional flexibility to transform traditional ICT projects into CS education is crucial. The Chilean Enlaces program’s broader institutional reach resulted in a larger bureaucracy, slower implementation of new programs, and greater dependence on high-level political agendas (Severin, 2016). As a result, the program’s inflexibility prevented it from taking on new projects, placing the onus on the Ministry of Education to take the lead in initiating CS education. In Uruguay, Plan Ceibal’s initial top-down organizational structure enabled relatively fast implementation of the One Laptop per Child program, but closer coordination with educators and education authorities may have helped to better integrate education technology into teaching and learning. More recently, Plan Ceibal has involved teachers and school leaders more closely when introducing CS activities. In England, the transition from ICT courses to a computing curriculum that prioritized CS concepts, instead of computer literacy topics that the ICT teachers typically emphasized before the change, encountered some resistance. Many former ICT teachers were not prepared to implement the new program of study as intended, which leads us to the next key lesson.

3. Developing qualified teachers for CS education should be a top priority

The case studies highlight the critical need to invest in training adequate numbers of teachers to bring CS education to scale. For example, England took a modest approach to teacher training during the first five years of expanding its K-12 CS education program and discovered that its strategy fell short of its original ambitions. In 2013, the English Department for Education (DfE) funded the BCS to establish and run the Network of Excellence to create learning hubs and train a pool of “master” CS teachers. While over 500 master teachers were trained, the numbers were insufficient to expand CS education at scale. Then, in 2018, the DfE substantially increased its funding to establish the National Center for Computing Education (NCCE) and added 23 new computing hubs throughout England. Hubs offer support to primary and secondary computing teachers in their designated areas, including teaching, resources, and PD (Snowdon, 2019). In just over two years, England has come a long way toward fulfilling its goals of training teachers at scale, with over 29,500 teachers engaged in some type of training (Teach Computing, 2020).

Several education systems partnered with higher education institutions to integrate CS education in both preservice and in-service teacher education programs. For example, two main institutions in British Columbia, Canada—the University of British Columbia and the University of Northern British Columbia—now offer CS courses in their pre-service teacher education programs. Similarly, in Poland, the Ministry of National Education sponsored teacher training courses in university CS departments. In Arkansas, state universities offer CS certification as part of preservice teacher training while partnering with the Arkansas Department of Education to host in-service professional development.

Still other systems partnered with nonprofit organizations to deliver teacher education programs. For instance, New Brunswick, Canada, partnered with the nonprofit organization Brilliant Labs to implement teacher PD programs in CS (Brilliant Labs, n.d.). In Chile, the Ministry of Education partnered with several nongovernmental organizations, including Code.org and Fundación Telefónica, to expand teacher training in CS education. Microsoft Philanthropies launched the Technology Education and Literacy in Schools (TEALS) program in the United States and Canada to connect high school teachers with technology industry volunteers. The volunteer experts support instructors to learn CS independently over time and develop sustainable high school CS programs (Microsoft, n.d.).

To encourage teachers to participate in these training programs, several systems introduced teacher certification pathways in CS education. For example, in British Columbia, teachers need at least 24 credits of postsecondary coursework in CS education to be qualified to work in public schools. The Arkansas Department of Education incentivizes in-service teachers to attain certification through teaching CS courses and participating in approved PD programs (Code.org, CSTA, ECEP, 2019). In South Korea, where the teaching profession is highly selective and enjoys high social status, teachers receive comprehensive training on high-skill computational thinking elements, such as computer architecture, operating systems, programming, algorithms, networking, and multimedia. Only after receiving the “informatics–computer” teacher’s license may a teacher apply for the informatics teacher recruitment exam (Choi et al., 2015).

When faced with shortages of qualified teachers, remote instruction can provide greater access to qualified teachers. For example, a dearth of qualified CS teachers has been and continues to be a challenge for Uruguay. To address this challenge, in 2017, Plan Ceibal began providing remote instruction in computational thinking lessons for public school fifth and sixth graders and integrated fourth-grade students a year later. Students work on thematic projects anchored in a curricular context where instructors integrate tools like Scratch. 4 During the school year, a group of students in a class can work on three to four projects during a weekly 45-minute videoconference with a remote instructor, while another group can work on projects for the same duration led by the classroom teacher. In a typical week, the remote instructor introduces an aspect of computational thinking. The in-class teacher then facilitates activities like block-based programming, circuit board examination, or other exercises prescribed by the remote teacher (Cobo & Montaldo, 2018). 5 Importantly, Plan Ceibal implements Pensamiento Computacional, providing a remote instructor and videoconferencing devices at the request of schools, rather than imposing the curriculum on all classrooms (García, 2020). With the ongoing COVID-19 pandemic forcing many school systems across the globe to adopt remote instruction, at least temporarily, we speculate that remote learning is now well poised to become more common in expanding CS education in places facing ongoing teacher shortages.

4. Exposing students to CS education early helps foster demand, especially among underserved populations

Most education systems have underserved populations who lack the opportunity to develop an interest in CS, limiting opportunities later in life. For example, low CS enrollment rates for women at Italian universities reflect the gender gap in CS education. As of 2017, 21.6 percent and 12.3 percent of students completing bachelor’s degrees in information engineering and CS, respectively, were women (Marzolla, 2019). Further, female professors and researchers in these two subjects are also underrepresented. In 2018, only 15 percent and 24 percent of professors and researchers in CS and computer engineering, respectively, were women (Marzolla, 2019). Similar representation gaps at the highest levels of CS training are common globally. Thus, continuing to offer exposure to CS only in post-secondary education will likely perpetuate similar representation gaps.

To address this challenge, several education systems have implemented programs to make CS education accessible to girls and other underserved populations in early grades, before secondary school. For instance, to make CS education more gender balanced, the Italian Ministry of Education partnered with civil society organizations to implement programs to spur girls’ interest in CS and encourage them to specialize in the subject later (European Commission, 2009). An Italian employment agency (ironically named Men at Work) launched a project called Girls Code It Better to extend CS learning opportunities to 1,413 middle school girls across 53 schools in 2019 (Girls Code It Better, n.d.). During the academic year, the girls attended extracurricular CS courses before developing their own technologically advanced products and showcasing their work at an event at Bocconi University in Milan (Brogi, 2019). In addition to introducing the participants to CS, the initiative provided the girls with role models and generated awareness on the gender gap in CS education in Italy.

In British Columbia, students are exposed to computational thinking concepts as early as primary school, where they learn how to prototype, share, and test ideas. In the early grades of primary education, the British Columbia curriculum emphasizes numeracy using technology and information technology. Students develop numeracy skills by using models and learn information technology skills to apply across subjects. In kindergarten and first grade, curricular objectives include preparing students for presenting ideas using electronic documents. In grades 2 to 3, the curricular goals specify that students should “demonstrate an awareness of ways in which people communicate, including the use of technology,” in English language arts classes, as well as find information using information technology tools. By the time students are in grades 4 and 5, the curriculum expects students to focus more on prototyping and testing new ideas to solve a problem (Gannon & Buteau, 2018).

Several systems have also increased participation in CS education by integrating it as a cross-curricular subject. This approach avoids the need to find time during an already-packed school day to teach CS as a standalone subject. For example, in 2015, the Arkansas legislature began requiring elementary and middle school teachers to embed computational thinking concepts in other academic courses. As a result, teachers in the state integrate five main concepts of computational thinking into their lesson plans, including (1) problem-solving, (2) data and information, (3) algorithms and programs, (4) computers and communications, and, importantly, (5) community, global, and ethical impacts (Watson-Fisher, 2019). In the years following this reform, the share of African American students taking CS in high school reached 19.6 percent, a figure that slightly exceeds the percentage of African Americans among all students—a resounding sign of progress in creating student demand for CS education (Computer science on the rise in Arkansas schools, Gov. drafts legislation to make it a requirement for graduation, 2020).

After-school programs and summer camps, jointly organized with external partners, have also helped promote demand for CS education through targeted outreach programs to commonly underserved populations. For example, Microsoft Thailand has been holding free coding classes, Hour of Code, in partnership with nonprofit organizations, to encourage children from underprivileged backgrounds to pursue STEM education (Microsoft celebrates Hour of Code to build future ready generations in Asia, 2017). In the past decade, Microsoft has extended opportunities for ICT and digital skills development to more than 800,000 youth from diverse backgrounds—including those with disabilities and residents of remote communities (Thongnab, 2019). Their annual #MakeWhatsNext event for young Thai women showcases STEM careers and the growing demand for those careers (Making coding fun for Thailand’s young, 2018). Also in Thailand, Redemptorist Foundation for People with Disabilities, with over 30 years of experience working with differently abled communities in that country, expanded their services to offer computer trainings and information technology vocational certificate programs for differently abled youth (Mahatai, n.d.).

In British Columbia, Canada, the Ministry of Education and other stakeholders have taken steps to give girls, women, and aboriginal students the opportunity to develop an interest in CS education. For example, after-school programs have taken specific steps to increase girls’ participation in CS education. The UBC Department of Computer Science runs GIRLsmarts4tech, a program that focuses on giving 7th-grade girls role models and mentors who encourage them to pursue technology-related interests (GIRLsmarts4tech, n.d.). According to the latest census, in 2016, British Columbia’s First Nations and Indigenous Peoples (FNIP) population—including First Nations, Metis, and Inuits—was 270,585, an increase of 38 percent from 2006. With 42.5 percent of the FNIP population under 25, it is critical for the province to deliver quality education to this young and growing group (Ministry of Advanced Education, Skills and Training, 2018). To this end, part of the British Columbia curriculum for CS education incorporates FNIP world views, perspectives, knowledge, and practices in CS concepts. In addition, the B.C.-based ANCESTOR project (AborigiNal Computer Education through STORytelling) has organized courses and workshops to encourage FNIP students to develop computer games or animated stories related to their culture and land (Westor & Binn, 2015).

As these examples suggest, private sector and nongovernmental organizations can play an important role in the expansion of CS education, an issue we turn to now.

5. Engaging key stakeholders can help address bottlenecks

In most reviewed cases, the private sector and nongovernmental organizations played a role in promoting the expansion of CS education. Technology companies not only helped to lobby for expanding CS education, but often provided much-needed infrastructure and subject matter expertise in the design and rollout of CS education. For example, Microsoft Thailand has worked with the Thai government since 1998 in various capacities, including contributing to the development and implementation of coding projects, digital skills initiatives, teacher training programs, and online learning platforms (Thongnab, 2019; Coding Thailand, n.d.). Since 2002, Intel’s Teach Thailand program has trained more than 150,000 teachers. Additionally, Google Coding Teacher workshops train educators on teaching computational thinking through CS Unplugged coding activities (EduTech Thailand, 2019). The workshop is conducted by Edutech (Thailand) Co., Ltd., an educational partner of Google, which adapted the Google curriculum to the Thailand education context. Samsung has been engaged in a smart classroom project that has built futuristic classroom prototypes and provided training for 21st century competencies (OECD/UNESCO, 2016).

In England, nongovernmental organizations have played an important role in supporting the government’s expansion of CS education. The DfE has relied on outside organizations for help in executing its CS education responsibilities. The DfE’s NCCE, for instance, is delivered by a consortium including the British Computing Society, STEM Learning, and the Raspberry Pi Foundation—three nonprofit organizations dedicated to advancing the computing industry and CS education in the country (British Computing Society, n.d; STEM Learning, n.d.; Raspberry Pi Foundation, n.d.).

Chile’s Ministry of Education developed partnerships with individual NGOs and private companies to engage more students, especially girls. These initiatives offer the opportunity for hands-on learning projects and programming activities that students can perform from their home computers. Some of the same partners also provide online training platforms for teacher PD.

Industry advocacy organizations can also play an important role in the expansion of CS education. For example, in Arkansas, the state’s business community has long supported CS education (Nix, 2017). Accelerate Arkansas was established in 2005 as an organization of 70 private and public sector members dedicated to moving Arkansas into a more innovation- and knowledge-based economy (State of Arkansas, 2018). Similarly, in England, a network of organizations called Computing at School established a coalition of industry representatives and teachers. It played a pivotal role in rebranding the ICT education program in 2014 to the computing program that placed a greater emphasis on CS (Royal Society, 2017).

To ensure sustainability, one key lesson is that the government should coordinate across multiple stakeholders. In Chile, for example, heavy reliance on NGO-provided training and resources has been insufficient to motivate more schools and teachers to include CS and computational thinking in classroom learning activities. By contrast, the DfE has effectively coordinated across various nongovernmental organizations to expand CS education. Similarly, Arkansas’s Department of Education is leading an effort to get half of all school districts to form partnerships with universities and business organizations to give students opportunities to participate in internships and college-level CS courses while in high school (Talk Business & Politics, 2020). In sum, the experience of decades of educational policies across the education systems reviewed shows that schools require long-lasting, coordinated, and multidimensional support to achieve successful implementation of CS in classrooms.

6. When taught in an interactive, hands-on way, CS education builds skills for life

Several of the cases studied introduced innovative pedagogies using makerspaces (learning spaces with customizable layouts and materials) and project-based learning to develop not only skills specific to CS but also skills that are relevant more broadly for life. For example, Uruguayan CS education features innovative concepts like robotics competitions and makerspaces that allow students to creatively apply their computational thinking lessons and that can spark interest and deepen understanding. In addition, computational thinking has been integrated across subject areas (e.g., in biology, math, and statistics) (Vázquez et al., 2019) and in interdisciplinary projects that immerse students in imaginative challenges that foster creative, challenging, and active learning (Cobo & Montaldo, 2018). For instance, students can use sensors and program circuit boards to measure their own progress in physical education (e.g., measuring how many laps they can run in a given period).

Similarly, in New Brunswick, Brilliant Labs provides learning materials to schools so they can offer students CS lessons using makerspaces that encourage students to develop projects, engage with technology, learn, and collaborate. These makerspaces let students creatively apply their CS and computational thinking lessons, sparking interest and deepening understanding.

Thailand’s curricular reforms also integrated project-based learning into CS education. Thai students in grades 4-6 learn about daily life through computers, including skills such as using logic in problem-solving, searching data and assessing its correctness, and block coding (e.g., Scratch). Then, students in grades 7-9 focus on learning about primary data through objectives that include using programming to solve problems, collecting, analyzing, presenting, and assessing data and information, and textual programming such as Python. Finally, students in grades 10-12 focus on applying advanced computing technology and programming to solve real-world problems, using knowledge from other subjects and data from external sources (Piamsa-nga et al., 2020).
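As a rough illustration of the “collect, analyze, present” objectives described for grades 7-9, a first textual-programming exercise in Python might ask students to summarize a small data set they gathered themselves. The scenario, data, and variable names below are hypothetical illustrations, not taken from the Thai curriculum:

```python
# Hypothetical beginner exercise: collect a week of rainfall readings,
# then analyze and present the results.

rainfall_mm = [12.5, 0.0, 3.2, 20.1, 7.4, 0.0, 15.8]  # one reading per day

total = sum(rainfall_mm)
average = total / len(rainfall_mm)
wettest_day = rainfall_mm.index(max(rainfall_mm)) + 1  # 1-based day number
dry_days = sum(1 for r in rainfall_mm if r == 0.0)

print(f"Total rainfall: {total:.1f} mm")
print(f"Average per day: {average:.1f} mm")
print(f"Wettest day: day {wettest_day}")
print(f"Dry days: {dry_days}")
```

A short script like this exercises each curricular objective in miniature: the list is the collected data, the built-in functions perform the analysis, and the formatted print statements present the findings.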

After two years of nationwide discussions from 2014 to 2016, the Polish Ministry of National Education announced the creation of a new core curriculum for CS in primary and secondary schools (Syslo, 2020). The new curriculum’s goals included students using technology to identify solutions for problems in everyday and professional situations, and supporting other disciplines—such as science, the arts, and the social sciences—in innovation (Panskyi, Rowinska, & Biedron, 2019).

CS skills are increasingly necessary to function in today’s technologically advanced world and for the future. They enable individuals to understand how technology works, and how best to harness its potential to improve lives. As these skills take preeminence in the rapidly changing 21st century, CS education promises to significantly enhance student preparedness for the future of work and active citizenship.

Our findings suggest six recommendations for governments interested in taking CS education to scale in primary and secondary schools. First, use economic development strategies focused on expanding technology-based jobs to engage all stakeholders and expand CS education in primary and secondary schools; such a strategy helps attract and retain investors and foster CS education demand among students. Second, provide access to ICT infrastructure in primary and secondary schools to facilitate the introduction and expansion of CS education. Third, make developing qualified CS teachers a top priority: the evidence is clear that a qualified teacher is the most important factor in student learning, so preparing the teacher force needed for CS at scale is crucial. Fourth, expose students to CS education early to increase their likelihood of pursuing it; this is especially important for girls and other groups historically underrepresented in STEM and CS fields. Fifth, engage key stakeholders (including educators, the private sector, and civil society) to help address bottlenecks in physical and technical capacity. Finally, teach CS in an interactive, hands-on way to build skills for life.

Through studying the cases of regional and national governments at various levels of economic development and progress in implementing CS education programs, governments from around the globe can learn how to expand and improve CS education and help students develop a new basic skill necessary for the future of work and active citizenship.

Case studies

For a detailed discussion of regional and national education systems from diverse regions and circumstances that have implemented computer science education programs, download the case studies.

Arkansas, British Columbia, Chile, England, Italy, New Brunswick, South Korea, South Africa, Uruguay

About the Authors

Emiliana Vegas, Co-director – Center for Universal Education; Michael Hansen, Senior Fellow – Brown Center on Education Policy; Brian Fowler, Former Research Analyst – Center for Universal Education

  • 1. Denning et al. (1989) defined the discipline of computing as “the systematic study of algorithmic processes that describe and transform information: their theory, analysis, design, efficiency, implementation, and application.”
  • 2. Integrated development environments include programs like Scratch (Resnick et al., 2009), Code.org (Kelelioglu, 2015), and CHERP3 Creative Hybrid Environment for Robotics Programming (Bers et al., 2014).
  • 3. The authors of these studies conclude that self-teaching methods and laboratory control methods may be effective for teaching programming skills.
  • 4. In 2019, President Tabaré Vázquez stated that “All children in kindergartens and schools are programming in Scratch, or designing strategies based on problem-solving” (Uruguay Presidency, 2019).
  • 5. Remote instruction via videoconferencing technology improved learning in mathematics in an experiment in Ghana (Johnston & Ksoll, 2017). It is very plausible that Uruguay’s approach to giving computational thinking instruction via videoconference could also be effective.

Acknowledgments

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Brookings gratefully acknowledges the support provided by Amazon, Atlassian Foundation International, Google, and Microsoft.

Brookings recognizes that the value it provides is in its commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.


Media and Information Literacy, a critical approach to literacy in the digital world


What does it mean to be literate in the 21st century? On the celebration of International Literacy Day (8 September), people’s attention is drawn to the kind of literacy skills we need to navigate increasingly digitally mediated societies.

Stakeholders around the world are gradually embracing an expanded definition for literacy, going beyond the ability to write, read and understand words. Media and Information Literacy (MIL) emphasizes a critical approach to literacy. MIL recognizes that people are learning in the classroom as well as outside of the classroom through information, media and technological platforms. It enables people to question critically what they have read, heard and learned.

As a composite concept proposed by UNESCO in 2007, MIL covers all competencies related to information literacy and media literacy, including digital or technological literacy. Ms. Irina Bokova, Director-General of UNESCO, has reiterated the significance of MIL in this media and information landscape: “Media and information literacy has never been so vital, to build trust in information and knowledge at a time when notions of ‘truth’ have been challenged.”

MIL focuses on different and intersecting competencies to transform people’s interaction with information and learning environments online and offline. MIL includes competencies to search, critically evaluate, use and contribute information and media content wisely; knowledge of how to manage one’s rights online; understanding how to combat online hate speech and cyberbullying; understanding of the ethical issues surrounding the access and use of information; and engagement with media and ICTs to promote equality, free expression and tolerance, intercultural/interreligious dialogue, peace, etc. MIL is a nexus of human rights of which literacy is a primary right.

Learning through social media

In today’s 21st century societies, it is necessary that all people acquire MIL competencies (knowledge, skills and attitudes). Media and Information Literacy is for all; it is an integral part of education for all. Yet we cannot neglect to recognize that children and youth are at the heart of this need. Data show that 70% of young people around the world are online. This means that the Internet, and social media in particular, should be seen as an opportunity for learning and can be used as a tool for new forms of literacy.

The Policy Brief by the UNESCO Institute for Information Technologies in Education, “Social Media for Learning by Means of ICT”, underlines this potential of social media to “engage students on immediate and contextual concerns, such as current events, social activities and prospective employment.”

UNESCO MIL CLICKS - To think critically and click wisely

For this reason, UNESCO initiated a social media innovation on Media and Information Literacy, MIL CLICKS (Media and Information Literacy: Critical-thinking, Creativity, Literacy, Intercultural, Citizenship, Knowledge and Sustainability).

MIL CLICKS is a way for people to acquire MIL competencies in their normal, day-to-day use of the Internet and social media: to think critically and click wisely. It is an unstructured, non-formal way of learning, using organic methods in an online environment of play, connecting and socializing.

MIL as a tool for sustainable development

In the global, sustainable-development context, MIL competencies are indispensable for critical understanding of, and engagement in, democratic participation, sustainable societies, building trust in media, good governance and peacebuilding. A recent UNESCO publication described the high relevance of MIL for the Sustainable Development Goals (SDGs).

“Citizens’ engagement in open development in connection with the SDGs is mediated by media and information providers, including those on the Internet, as well as by their level of media and information literacy. It is on this basis that UNESCO, as part of its comprehensive MIL programme, has set up a MOOC on MIL,” says Alton Grizzle, UNESCO Programme Specialist.

UNESCO’s comprehensive MIL programme

UNESCO has been continuously developing its MIL programme, which has many aspects. MIL policies and strategies are needed and should be dovetailed with existing education, media, ICT, information, youth and culture policies.

The first step on this road from policy to action is to increase the number of MIL teachers and educators in formal and non-formal educational setting. This is why UNESCO has prepared a model Media and Information Literacy Curriculum for Teachers , which has been designed in an international context, through an all-inclusive, non-prescriptive approach and with adaptation in mind.

The mass media and information intermediaries can all assist in keeping MIL issues permanently before the public. They can also contribute greatly to helping all citizens acquire information and media competencies. The Guideline for Broadcasters on Promoting User-generated Content and Media and Information Literacy, prepared by UNESCO and the Commonwealth Broadcasting Association, offers some insight in this direction.

UNESCO will be highlighting the need to build bridges between learning in the classroom and learning outside of the classroom through MIL at the Global MIL Week 2017 . Global MIL Week will be celebrated globally from 25 October to 5 November 2017 under the theme: “Media and Information Literacy in Critical Times: Re-imagining Ways of Learning and Information Environments”. The Global MIL Feature Conference will be held in Jamaica under the same theme from 24 to 27 October 2017, at the Jamaica Conference Centre in Kingston, hosted by The University of the West Indies (UWI).

Alton Grizzle , Programme Specialist – Media Development and Society Section


Technology In 21st Century (Essay Sample) 2023


Modern technology is an important area that businesses need to consider, because technological advancements are pivotal in enhancing business operations around the globe. Most businesses thrive using modern technology, which has advanced business operations in several ways: efficient marketing through social media platforms, effective mass communication with all personnel in the business, and effectual ways for business people to store and access the data needed for the functioning of the business. Besides, social media commands huge followership, and its effective use is an index of a business’s ability to increase sales volume. This paper examines technology in the 21st century.

Technology has played a crucial role in the enhancement of globalization in the 21st century. Globalization has had huge impacts on the economic world, through an array of merits and demerits arising from it, and new technological trends have played a fundamental role in its rapid advance. Additionally, people connect and communicate with others in areas that are geographically very distant; for example, people who have lived in one country or on one continent can obtain pertinent information about faraway regions, communicating with people there and learning about business and other important aspects of those places. Rapid globalization has also enhanced economic development from a business perspective.

Modern technology stimulates most business activities around the world, because most businesses in the 21st century make extensive use of modern technology to conduct business. From a different angle, most people transact business even over long distances through technology, which has given the world the outlook of a global, interactive society where people can share and access new ideas and vital information. People exchange business ideas and transact business across great distances, which has helped remove distance as a barrier in business, and most businesses have taken advantage of globalization and modern trends in technology to enhance their operations.

Information Technology (IT) and the Internet are prominent technological trends in the 21st century, chiefly because most businesses have adopted information technology in their operations for effectiveness. Information technology has had an array of impacts on businesses around the world; a good example is its impact on competition. Firms around the world use the Internet and information technology to outdo other firms that provide similar services. Organizations have also come to lean on information technology in interviews and evaluations, as people with IT competence are huge assets for most businesses around the world.

In the 21st century, technology has evolved and become an inevitable aspect of life around the globe. Scholars have shown that most people can carry out projects and make business plans that draw extensively on information technology competencies and services. A large percentage of the plans that business people make, for example marketing and management plans, incorporate IT experts as advisors who give proprietors the best techniques to apply. From a different view, information technology is a field that involves uniformity and accuracy of IT-based undertakings. Where one uses modern technology to market or manage business operations, uniformity, accurate targeting and effective marketing strategies have been prominent outcomes aligned with good use of technological trends in business.


Technology over the long run: zoom out to see how dramatically the world can change within a lifetime

It is easy to underestimate how much the world can change within a lifetime. Considering how dramatically the world has changed can help us see how different the world could be in a few years or decades.

Technology can change the world in ways that are unimaginable until they happen. Switching on an electric light would have been unimaginable for our medieval ancestors. In their childhood, our grandparents would have struggled to imagine a world connected by smartphones and the Internet.

Similarly, it is hard for us to imagine the arrival of all those technologies that will fundamentally change the world we are used to.

We can remind ourselves that our own future might look very different from the world today by looking back at how rapidly technology has changed our world in the past. That’s what this article is about.

One insight I take away from this long-term perspective is how unusual our time is. Technological change was extremely slow in the past – the technologies that our ancestors got used to in their childhood were still central to their lives in their old age. In stark contrast to those days, we live in a time of extraordinarily fast technological change. For recent generations, it was common for technologies that were unimaginable in their youth to become common later in life.

The long-run perspective on technological change

The big visualization offers a long-term perspective on the history of technology. 1

The timeline begins at the center of the spiral. The first use of stone tools, 3.4 million years ago, marks the beginning of this history of technology. 2 Each turn of the spiral represents 200,000 years of history. It took 2.4 million years – 12 turns of the spiral – for our ancestors to control fire and use it for cooking. 3

To be able to visualize the inventions in the more recent past – the last 12,000 years – I had to unroll the spiral. I needed more space to be able to show when agriculture, writing, and the wheel were invented. During this period, technological change was faster, but it was still relatively slow: several thousand years passed between each of these three inventions.

From 1800 onwards, I stretched out the timeline even further to show the many major inventions that rapidly followed one after the other.

The long-term perspective that this chart provides makes it clear just how unusually fast technological change is in our time.

You can use this visualization to see how technology developed in particular domains. Follow, for example, the history of communication: from writing to paper, to the printing press, to the telegraph, the telephone, the radio, all the way to the Internet and smartphones.

Or follow the rapid development of human flight. In 1903, the Wright brothers took the first flight in human history (they were in the air for less than a minute), and just 66 years later, we landed on the moon. Many people saw both within their lifetimes: the first plane and the moon landing.

This large visualization also highlights the wide range of technology’s impact on our lives. It includes extraordinarily beneficial innovations, such as the vaccine that allowed humanity to eradicate smallpox , and it includes terrible innovations, like the nuclear bombs that endanger the lives of all of us .

What will the next decades bring?

The red timeline reaches up to the present and then continues in green into the future. Many children born today, even without further increases in life expectancy, will live well into the 22nd century.

New vaccines, progress in clean, low-carbon energy, better cancer treatments – a range of future innovations could very much improve our living conditions and the environment around us. But, as I argue in a series of articles , there is one technology that could even more profoundly change our world: artificial intelligence (AI).

One reason why artificial intelligence is such an important innovation is that intelligence is the main driver of innovation itself. This fast-paced technological change could speed up even more if it’s driven not only by humanity’s intelligence but also by artificial intelligence. If this happens, the change currently stretched out over decades might happen within a very brief time span of just a year. Possibly even faster. 4

I think AI technology could have a fundamentally transformative impact on our world. In many ways, it is already changing our world, as I documented in this companion article . As this technology becomes more capable in the years and decades to come, it can give immense power to those who control it (and it poses the risk that it could escape our control entirely).

Such systems might seem hard to imagine today, but AI technology is advancing quickly. Many AI experts believe there is a real chance that human-level artificial intelligence will be developed within the next decades, as I documented in this article .


Technology will continue to change the world – we should all make sure that it changes it for the better

What is familiar to us today – photography, the radio, antibiotics, the Internet, or the International Space Station circling our planet – was unimaginable to our ancestors just a few generations ago. If your great-great-great grandparents could spend a week with you, they would be blown away by your everyday life.

What I take away from this history is that I will likely see technologies in my lifetime that appear unimaginable to me today.

In addition to this trend towards increasingly rapid innovation, there is a second long-run trend. Technology has become increasingly powerful. While our ancestors wielded stone tools, we are building globe-spanning AI systems and technologies that can edit our genes.

Because of the immense power that technology gives those who control it, there is little that is as important as the question of which technologies get developed during our lifetimes. Therefore, I think it is a mistake to leave the question about the future of technology to the technologists. Which technologies are controlled by whom is one of the most important political questions of our time because of the enormous power these technologies convey to those who control them.

We all should strive to gain the knowledge we need to contribute to an intelligent debate about the world we want to live in. To a large part, this means gaining knowledge and wisdom on the question of which technologies we want.

Acknowledgments: I would like to thank my colleagues Hannah Ritchie, Bastian Herre, Natasha Ahuja, Edouard Mathieu, Daniel Bachler, Charlie Giattino, and Pablo Rosado for their helpful comments on drafts of this essay and the visualization. Thanks also to Lizka Vaintrob and Ben Clifford for the conversation that initiated this visualization.

Appendix: About the choice of visualization in this article

The recent speed of technological change makes it difficult to picture the history of technology in one visualization. When you visualize this development on a linear timeline, then most of the timeline is almost empty, while all the action is crammed into the right corner:

Linear version of the spiral chart

In my large visualization here, I tried to avoid this problem and instead show the long history of technology in a way that lets you see when each technological breakthrough happened and how, within the last millennia, there was a continuous acceleration of technological change.

The recent speed of technological change makes it difficult to picture the history of technology in one visualization. In the appendix, I show how this would look if it were linear.

It is, of course, difficult to assess when exactly the first stone tools were used.

The research by McPherron et al. (2010) suggested that it was at least 3.39 million years ago. This is based on two fossilized bones found in Dikika in Ethiopia, which showed “stone-tool cut marks for flesh removal and percussion marks for marrow access”. These marks were interpreted as being caused by meat consumption and provide the first evidence that one of our ancestors, Australopithecus afarensis, used stone tools.

The research by Harmand et al. (2015) provided evidence for stone tool use in today’s Kenya 3.3 million years ago.

References:

McPherron et al. (2010) – Evidence for stone-tool-assisted consumption of animal tissues before 3.39 million years ago at Dikika, Ethiopia . Published in Nature.

Harmand et al. (2015) – 3.3-million-year-old stone tools from Lomekwi 3, West Turkana, Kenya . Published in Nature.

Evidence for controlled fire use approximately 1 million years ago is provided by Berna et al. (2012) Microstratigraphic evidence of in situ fire in the Acheulean strata of Wonderwerk Cave, Northern Cape province, South Africa , published in PNAS.

The authors write: “The ability to control fire was a crucial turning point in human evolution, but the question of when hominins first developed this ability still remains. Here we show that micromorphological and Fourier transform infrared microspectroscopy (mFTIR) analyses of intact sediments at the site of Wonderwerk Cave, Northern Cape province, South Africa, provide unambiguous evidence—in the form of burned bone and ashed plant remains—that burning took place in the cave during the early Acheulean occupation, approximately 1.0 Ma. To the best of our knowledge, this is the earliest secure evidence for burning in an archaeological context.”

This is what authors like Holden Karnofsky called ‘Process for Automating Scientific and Technological Advancement’ or PASTA. Some recent developments go in this direction: DeepMind’s AlphaFold helped to make progress on one of the large problems in biology, and they have also developed an AI system that finds new algorithms that are relevant to building a more powerful AI.


Importance of ICT in Education Essay

  • ICT: Introduction
  • Teachers and Their Role in Education
  • Impact of ICT in Education
  • Use of ICT in Education
  • Importance of ICT to Students
  • Works Cited

Information and Communication Technology is among the most indispensable tools that the business world relies on today. Virtually all businesses, in one way or another, rely on technology tools to carry out operations. Other organizations like learning institutions are not left behind technology-wise. ICT is increasingly being employed in contemporary learning institutions to ease the work of students and teachers.

Among the most commendable successes of employing ICT in learning institutions is e-learning, in which ICT tools are used to access classrooms remotely. This paper explores the importance of the tools of ICT in education and the roles that these tools have played in making learning better and easier.

Teachers are scholars who have mastered specific subjects that form part of their specialty and who help impart knowledge to students. Some of the roles that teachers play in academic institutions include designing syllabuses, preparing timetables, preparing lessons, convening students for lessons, and carrying out continuous assessments of students.

Others include keeping records of academic reports, disciplinary records, and other records related to the activities of students in school, like the participation of students in games and other activities.

In cases where limitations make it impossible to convene people and resources together for learning, e-learning provides a very important and convenient way of teaching. In such a case, a teacher provides learning materials and lessons online, which can be accessed by his/her students at their convenience.

The materials can be audio files of recorded classroom lessons, audio-visual files for lessons requiring visual information such as practicals, or text and hypertext documents (Tinio 1). This method of teaching is also convenient for teachers because they are able to record lessons at their convenience, and the assessment of students involves less documentation.

This is because, with the use of the internet, teachers are able to upload assignments and continuous assessments to the e-learning systems, and after students are done with the assignments, they use the system or email to send their completed work back to their teachers. This comes with a number of advantages brought about by having students complete assignments in soft copy.

One of these advantages is that feedback from teachers will be timely and convenient for both parties. Teachers can also use technology tools such as plagiarism software to check whether students have copied the works of other scholars and thus establish the authenticity of an assignment. It can thus be argued that although e-learning systems have their disadvantages, they are very instrumental in teaching people whose schedules are tight and who may have limitations in accessing the classroom.
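As a rough illustration of the kind of originality check mentioned above, the short Python sketch below compares two pieces of text with a similarity ratio. This is a toy example built on the standard library, not a description of how any particular plagiarism product works; the sample texts and the 0.80 threshold are invented for illustration.

```python
from difflib import SequenceMatcher

def similarity(text_a: str, text_b: str) -> float:
    """Return a rough similarity ratio between two texts (0.0 to 1.0)."""
    # Normalise case and whitespace before comparing.
    a = " ".join(text_a.lower().split())
    b = " ".join(text_b.lower().split())
    return SequenceMatcher(None, a, b).ratio()

submitted = "ICT tools are used to access classrooms remotely."
reference = "ICT tools are used to access classrooms remotely!"

score = similarity(submitted, reference)
print(f"Similarity: {score:.2f}")
# A teacher might flag submissions above some chosen threshold.
if score > 0.80:
    print("High overlap - review for possible copying.")
```

Real plagiarism checkers compare a submission against very large corpora and use far more sophisticated matching, but the basic idea of scoring textual overlap is the same.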

Technology has therefore been an influential and essential tool in the field of education, and several innovations have made teaching a much easier career. The paragraphs below discuss other ways in which technology has been employed in education.

Teachers can also use the tools of ICT for other functions. One such function is keeping records of student performance and other kinds of records within the academic institution. This can be done by uploading the information to a Management Information System for the school or college, which should have a database supporting it. The information can also be stored in soft form on compact disks, hard drives, flash disks, or even digital video disks (Obringer 1).

This ensures that information is properly stored and backed up, and that records are not as bulky as they would be in the absence of the tools of ICT. Such a system also ensures that information can easily be accessed and that proper privacy of the data is maintained.
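The record-keeping described above can be sketched with a small relational database. The table layout, student names and marks below are invented for illustration, and Python's built-in sqlite3 module stands in for whatever database back end a school MIS actually uses:

```python
import sqlite3

# An in-memory database stands in for the school MIS back end.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE student_records (
           student_id INTEGER PRIMARY KEY,
           name       TEXT NOT NULL,
           course     TEXT NOT NULL,
           score      REAL
       )"""
)

# Store performance records.
records = [
    (1, "Amina", "Mathematics", 78.5),
    (2, "Brian", "Mathematics", 64.0),
    (3, "Chen",  "Physics",     91.2),
]
conn.executemany("INSERT INTO student_records VALUES (?, ?, ?, ?)", records)

# Retrieve the records for one course, ordered by score.
rows = conn.execute(
    "SELECT name, score FROM student_records "
    "WHERE course = ? ORDER BY score DESC",
    ("Mathematics",),
).fetchall()
print(rows)
```

Compared with paper files, the same query can answer "who took Mathematics, ranked by score?" instantly, and access to the database can be restricted to preserve the privacy the paragraph mentions.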

Another way in which teachers can use the tools of ICT to ease their work is by employing tools like projectors for presenting lessons, iPads for students, and computers connected to the internet for communicating with students about continuous assessments (Higgins 1). This way, the teacher will be able to reduce the paperwork involved in his/her work, which is bound to make it easier.

For instance, if the teacher can access a projector, he/she can prepare a presentation of a lesson for the students and thus will not have to carry textbooks, notebooks, and the like to the classroom. The teacher can also post notes and relevant texts for a given course on the school's information system or on an interactive website, leaving more time for discussions during lessons.

Teachers can also, in consultation with IT specialists, develop real-time systems where students can answer questions related to what they have learned in class and get automated results through the system (Masie 1).

This will help the students understand the concepts taught in class better, and teachers will have a lighter workload. Such websites will also help teachers show students early on how questions in their specialty are framed, so that students can concentrate on knowledge acquisition during class hours.

This is as opposed to a case where students remain clueless about the kind of questions to expect in exams and spend most of their time preparing for exams rather than reading extensively to acquire knowledge. ICT can also be used by teachers to advertise the services they offer in schools and the books and journals they have written, for example on websites for the school or for specific teachers or professors.
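The real-time, automatically marked question systems described above can be sketched in a few lines. The questions, answer key and grading rule here are invented for illustration; a production system would add authentication, persistence and a web front end:

```python
# The answer key lives on the server side; a submission is graded
# instantly, with per-question feedback returned to the student.
ANSWER_KEY = {"q1": "b", "q2": "d", "q3": "a"}

def grade(submission: dict) -> dict:
    """Compare a student's answers against the key and return a report."""
    feedback = {
        q: submission.get(q) == correct for q, correct in ANSWER_KEY.items()
    }
    score = sum(feedback.values())
    return {"score": score, "total": len(ANSWER_KEY), "feedback": feedback}

report = grade({"q1": "b", "q2": "c", "q3": "a"})
print(report["score"], "/", report["total"])  # 2 / 3
print(report["feedback"])
```

Because grading is automatic, students get feedback the moment they submit, which is what relieves teachers of the marking workload the essay describes.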

As evidenced in the discussion above, ICT is a very instrumental tool in education. The specific uses of ICT in education discussed above include distance learning, the storage of student performance and other relevant information in databases and storage media, and classroom tools like projectors and iPads. Since the invention of the internet and the subsequent popularity of computers, many functions of education as a career have been made simpler.

These include the administration of continuous assessments, marking them, giving feedback to students, and even checking the originality of the ideas expressed in assignments and examinations. All in all, the impact that ICT has had on educational institutions is such that school life without ICT is almost unimaginable for people who are accustomed to using it.

Higgins, Steve. "Does ICT Improve Learning and Teaching in Schools?" 2007. Web.

Masie, Shank. "What Is Electronic Learning?" 2007. Web.

Obringer, Ann. "How E-Learning Works." 2008. Web.

Tinio, Victoria. "ICT in Education." 2008. Web.




  12. The computer for the 21st century: present security & privacy

    Decades went by since Mark Weiser published his influential work on the computer of the 21st century. Over the years, some of the UbiComp features presented in that paper have been gradually adopted by industry players in the technology market. While this technological evolution resulted in many benefits to our society, it has also posed, along the way, countless challenges that we have yet to ...

  13. The role of digital technologies in 21st century learning

    The increasing ubiquity of digital technologies in society consequently creates tensions and challenges to agents of socialization (such as the home and school), requiring them to recontextualize and rethink their role. As access to and use of digital technologies grow, so do the opportunities and risks they present.

  14. Information Technology In The 21st Century

    The 21st century is a century of Information Technology. Just as steam engine emerged to be the technology of the 19th century and computer technology enhanced the capacity of human brain in the 20th century, Information Technology is the in-thing in the 21 century.

  15. Operationalizing the Information Age, Knowledge Economy, 21st Century

    21st Century may be the most prolific buzzword as it encompasses both the concepts of Information Age and Knowledge Economy. Essentially, the 21st Century will be marked in time from January 1 ...

  16. BUILDING SKILLS FOR LIFE

    Rather, current engagement efforts both expose students to foundational skills that help navigate technology in 21st century life and provide opportunities for students to explore technical fields.

  17. Technology in the 21st Century: Informative Essay

    Thereby, the aim and motive of my essay are to explore the areas of advancement, to differentiate the technology of the 21st century and that of primitive times, and to make people aware that how their lives revolve around technology. ... Technology in the 21st Century: Informative Essay. (2023, October 26). Edubirdie. Retrieved March 24, 2024 ...

  18. Media and Information Literacy, a critical approach to ...

    Media and Information Literacy (MIL) emphasizes a critical approach to literacy. MIL recognizes that people are learning in the classroom as well as outside of the classroom through information, media and technological platforms. It enables people to question critically what they have read, heard and learned. As a composite concept proposed by ...

  19. Technology In 21st Century Essay Sample 2023

    The paper views technology in the 21st century. Technology has played a crucial role towards enhancement of globalization in the 21st century. Globalization had huge impacts on the economic world, through an array of merits and demerits arising from globalization acts. New technological trends have played a fundamental role in making a rapid ...

  20. Technology over the long run: zoom out to see how dramatically the

    The big visualization offers a long-term perspective on the history of technology. 1. The timeline begins at the center of the spiral. The first use of stone tools, 3.4 million years ago, marks the beginning of this history of technology. 2 Each turn of the spiral represents 200,000 years of

  21. Time to Rethink: Educating for a Technology-Transformed World

    We present information on how technology is transforming virtually every aspect of our lives and the threats we face from social media, climate change, and growing inequality. We then analyze the adequacy of proposals for teaching new skills, such as 21st-Century Skills, to prepare students for a world of work that is changing at warp speed.

  22. Information And Communication Technology In The 21st Century

    Technology in an educational context is the ability to shape and change the physical world or knowledge by the manipulation of material and tools. Education in the 21st century has greatly benefited from the improvements provided by information and communication technology, visual and audio content that can be sent immediately to any part of ...

  23. Determinants of 21st-Century Skills and 21st-Century Digital Skills for

    While the importance of these skills to fulfill the demands for workers in the 21st century has been well established, research has identified that comprehensive knowledge about skill assessment is lacking (Voogt & Roblin, 2012).Although various components of digital skills have been described in theory (e.g., Claro et al., 2012; Jara et al., 2015; Siddiq et al., 2017; Van Deursen et al., 2016 ...

  24. Importance of ICT in Education

    Impact of ICT in Education. In cases where there are limitations such that it is impossible to convene people and resources together for learning. E-learning provides a very important and convenient way of teaching people. In such a case, a teacher provides learning materials and lessons online, which can be accessed by his/her students at ...

  25. ICT Skills in the Deployment of 21st Century Skills: A (Cognitive

    ICT technologies are an integral part of today's digitized society. Therefore, it is important that children acquire ICT skills as part of 21st century skills education to prepare them for later life. Drawing on the literature, seven 21st century skills can profit from the addition of ICT skills, i.e., technical, information, communication, collaboration, critical thinking, creative, and ...