
Americans’ complicated feelings about social media in an era of privacy concerns.


Amid public concerns over Cambridge Analytica’s use of Facebook data and a subsequent movement to encourage users to abandon Facebook, there is a renewed focus on how social media companies collect personal information and make it available to marketers.

Pew Research Center has studied the spread and impact of social media since 2005, when just 5% of American adults used the platforms. The trends tracked by our data tell a complex story that is full of conflicting pressures. On one hand, the rapid growth of the platforms is testimony to their appeal to online Americans. On the other, this widespread use has been accompanied by rising user concerns about privacy and social media firms’ capacity to protect their data.

All this adds up to a mixed picture about how Americans feel about social media. Here are some of the dynamics.

People like and use social media for several reasons


The Center’s polls have found over the years that people use social media for important social interactions like staying in touch with friends and family and reconnecting with old acquaintances. Teenagers are especially likely to report that social media are important to their friendships and, at times, their romantic relationships.

Beyond that, we have documented how social media play a role in the way people participate in civic and political activities, launch and sustain protests, get and share health information, gather scientific information, engage in family matters, perform job-related activities and get news. Indeed, social media is now just as common a pathway to news for people as going directly to a news organization website or app.

Our research has not established a causal relationship between people’s use of social media and their well-being. But in a 2011 report, we noted modest associations between people’s social media use and higher levels of trust, larger numbers of close friends, greater amounts of social support and higher levels of civic participation.

People worry about privacy and the use of their personal information

While there is evidence that social media works in some important ways for people, Pew Research Center studies have shown that people are anxious about all the personal information that is collected and shared and the security of their data.

Overall, a 2014 survey found that 91% of Americans “agree” or “strongly agree” that people have lost control over how personal information is collected and used by all kinds of entities. Some 80% of social media users said they were concerned about advertisers and businesses accessing the data they share on social media platforms, and 64% said the government should do more to regulate advertisers.


Moreover, people struggle to understand the nature and scope of the data collected about them. Just 9% believe they have “a lot of control” over the information that is collected about them, even as the vast majority (74%) say it is very important to them to be in control of who can get information about them.

Six-in-ten Americans (61%) have said they would like to do more to protect their privacy. Additionally, two-thirds have said current laws are not good enough in protecting people’s privacy, and 64% support more regulation of advertisers.

Some privacy advocates hope that the European Union’s General Data Protection Regulation, which goes into effect on May 25, will give users – even Americans – greater protections regarding what data tech firms can collect, how the data can be used, and how consumers can be given more opportunities to see what is happening with their information.

People’s issues with the social media experience go beyond privacy

In addition to the concerns about privacy and social media platforms uncovered in our surveys, related research shows that just 5% of social media users trust the information that comes to them via the platforms “a lot.”


A considerable number of social media users said they simply ignored political arguments when they broke out in their feeds. Others went a step further by blocking or unfriending those who offended or bugged them.

Why do people leave or stay on social media platforms?

The paradox is that people use social media platforms even as they express great concern about the privacy implications of doing so – and the social woes they encounter. The Center’s most recent survey about social media found that 59% of users said it would not be difficult to give up these sites, yet the share saying these sites would be hard to give up grew 12 percentage points from early 2014.

Some of the answers about why people stay on social media could tie to our findings about how people adjust their behavior on the sites and online, depending on personal and political circumstances. For instance, in a 2012 report we found that 61% of Facebook users said they had taken a break from using the platform. Among the reasons people cited were that they were too busy to use the platform, they lost interest, they thought it was a waste of time and that it was filled with too much drama, gossip or conflict.

In other words, participation on the sites for many people is not an all-or-nothing proposition.

People pursue strategies to try to avoid problems on social media and the internet overall. Fully 86% of internet users said in 2012 they had taken steps to try to be anonymous online. “Hiding from advertisers” was relatively high on the list of those they wanted to avoid.

Many social media users fine-tune their behavior to try to make things less challenging or unsettling on the sites, including changing their privacy settings and restricting access to their profiles. Still, 48% of social media users reported in a 2012 survey they have difficulty managing their privacy controls.

After National Security Agency contractor Edward Snowden disclosed details about government surveillance programs starting in 2013, 30% of adults said they took steps to hide or shield their information and 22% reported they had changed their online behavior in order to minimize detection.

One other argument that some experts make in Pew Research Center canvassings about the future is that people often find it hard to disconnect because so much of modern life takes place on social media. These experts believe that unplugging is hard because social media and other technology affordances make life convenient and because the platforms offer a very efficient, compelling way for users to stay connected to the people and organizations that matter to them.

Note: See topline results for overall social media user data here (PDF).




Social Media and Lack of Privacy

Social media offers platforms where families and friends can connect regardless of geographical location. Users access these platforms from their devices and share moments or ideas about economics, politics, or business. Although social media has made the world a smaller place, it has drawbacks. This paper examines how social media has contributed to a widespread loss of privacy.

Social media users tend to post about both their private lives and public matters. Privacy on these platforms has been an issue for years. Numerous reports of data breaches have made users more cautious about their privacy; the breaches have also eroded trust and led users to suspect that they have lost control over their information.

The number of social media users is rising, making them vulnerable to different forms of security breaches. When private information is accessed without authorization, the impact can be grave. According to the Pew Research Center (2017), about 13% of online users in the U.S. have reported that their accounts were hacked by unauthorized users. These hacks can lead to malicious redirects and various types of malware, leaving users vulnerable to further harm.

Sharing private information may invite judgment from the public. Social media users can build or ruin their reputations depending on their activities on these platforms, through their relationships with and influence on other users. As a result, some people can be subject to unfair judgments or misunderstandings based on only a small portion of their story (Rahman et al., 2019).

Common types of social media threats include phishing, data mining, and malware. Phishing is among the most common ways criminals gather private information from social media users. Phishing attacks arrive as calls, emails, or text messages that appear to come from a legitimate institution. These messages or calls may trick users into sharing sensitive and private information such as passwords or bank details.

Data mining is the process of extracting useful information from a large data set. Online users open new social media accounts almost every day, and every social media user leaves behind a stream of data. Social media platforms require personal information such as name, date of birth, personal interests, or location (Rahman et al., 2019). Firms use this information to learn how, where, and when users are active on their platforms. The data obtained enables companies to understand their target markets and improve their advertising methods. Worse, firms may share the data with third parties without the users’ knowledge. Finally, malware is malicious software designed to attack computers and steal the information they contain. If malware is successfully installed on a user’s computer, all of its information and data can be stolen (Rahman et al., 2019). Social media platforms are potential delivery systems for various kinds of malware: compromising one account allows attackers to take control of it and spread the malware to the victim’s friends and contacts.

To conclude, social media privacy remains in question as new forms of hacking and data breaches emerge. Young people are the most vulnerable to these insecurities because they often do not take the matter seriously. They are also more influenced by their peers than by adults or parents, which further exposes them to risk. Social media users are urged to avoid sharing sensitive information and to conceal any information they are not comfortable making public.

Americans and Cyber Security. (2017, January 16). Pew Research Center.

Rahman, H. U., Rehman, A. U., Nazir, S., Rehman, I. U., & Uddin, N. (2019, March). Privacy and security—limits of personal information to minimize loss of privacy. In Future of Information and Communication Conference (pp. 964–974). Springer, Cham.



Modern Socio-Technical Perspectives on Privacy, pp. 113–147

Social Media and Privacy

  • Xinru Page, Sara Berrios, Daricia Wilkinson & Pamela J. Wisniewski
  • Open Access
  • First Online: 09 February 2022


With the popularity of social media, researchers and designers must consider a wide variety of privacy concerns while optimizing for meaningful social interactions and connection. While much of the privacy literature has focused on information disclosures, the interpersonal dynamics associated with being on social media make it important for us to look beyond informational privacy concerns to view privacy as a form of interpersonal boundary regulation. In other words, attaining the right level of privacy on social media is a process of negotiating how much, how little, or when we desire to interact with others, as well as the types of information we choose to share with them or allow them to share about us. We propose a framework for how researchers and practitioners can think about privacy as a form of interpersonal boundary regulation on social media by introducing five boundary types (i.e., relational, network, territorial, disclosure, and interactional) that social media users manage. We conclude by providing tools for assessing privacy concerns in social media, as well as noting several challenges that must be overcome to help people to engage more fully and stay on social media.


1 Introduction

The way people communicate with one another in the twenty-first century has evolved rapidly. In the 1990s, if someone wanted to share a “how-to” video tutorial within their social networks, the dissemination options would be limited (e.g., email, floppy disk, or possibly a writeable compact disc). Now, social media platforms, such as TikTok, provide professional grade video editing and sharing capabilities that give users the potential to both create and disseminate such content to thousands of viewers within a matter of minutes. As such, social media has steadily become an integral component for how people capture aspects of their physical lives and share them with others. Social media platforms have gradually altered the way many people live [ 1 ], learn [ 2 , 3 ], and maintain relationships with others [ 4 ].

Carr and Hayes define social media as “Internet-based channels that allow users to opportunistically interact and selectively self-present, either in real time or asynchronously, with both broad and narrow audiences who derive value from user-generated content and the perception of interaction with others” [ 5 ]. Social media platforms offer new avenues for expressing oneself, experiences, and emotions with broader online communities via posts, tweets, shares, likes, and reviews. People use these platforms to talk about major milestones that bring happiness (e.g., graduation, marriage, pregnancy announcements), but they also use social media as an outlet to express grief and challenges, and to cope with crises [ 6 , 7 , 8 ]. Many scholars have highlighted the host of positive outcomes from interpersonal interactions on social media including social capital, self-esteem, and personal well-being [ 9 , 10 , 11 , 12 ]. Likewise, researchers have also shed light on the increased concerns over unethical data collection and privacy abuses [ 13 , 14 ].

This chapter highlights the privacy issues that must be addressed in the context of social media and provides guidance on how to study and design for social media privacy. We first provide an overview of the history of social media and its usage. Next, we highlight common social media privacy concerns that have arisen over the years. We also point out how scholars have identified and sought to predict privacy behavior, but many efforts have failed to adequately account for individual differences. By reconceptualizing privacy in social media as a boundary regulation, we can explain these gaps from previous one-size-fits-all approaches and provide tools for measuring and studying privacy violations. Finally, we conclude with a word of caution about the consequences of ignoring privacy concerns on social media.

2 A Brief History of Social Media

Section highlights.

Social media use has quickly increased over the past decade and plays a key role in social, professional, and even civic realms. The rise of social media has led to “networked individualism.”

This enables people to access a wider variety of specialized relationships, making it more likely they can meet a variety of needs. It also allows people to project their voice to a wider audience.

However, people have more frequent turnover in their social networks, and it takes much more effort to maintain social relations and discern (mis)information and intention behind communication.

The initial popularity of social media harkened back to the historical rise of social network sites (SNSs). The canonical definition of SNSs is attributed to Boyd and Ellison [ 15 ] who differentiate SNSs from other forms of computer-mediated communication. According to Boyd and Ellison, SNS consists of (1) profiles representing users and (2) explicit connections between these profiles that can be traversed and interacted with. A social networking profile is a self-constructed digital representation of oneself and one’s social relationships. The content of these profiles varies by platform from profile pictures to personal information such as interests, demographics, and contact information. Visibility also varies by platform and often users have some control over who can see their profile (e.g., everyone or “friends”). Most SNSs also provide a way to leave messages on another’s profile, such as posting to someone’s timeline on Facebook or sending a mention or direct message to someone on Twitter.
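The two structural elements in Boyd and Ellison's definition (profiles, plus explicit connections that can be traversed) can be sketched as a small graph model. This is an illustrative sketch only; the class names, fields, and visibility values below are assumptions, not any platform's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    """A self-constructed digital representation of a user."""
    name: str
    interests: list = field(default_factory=list)
    visibility: str = "friends"  # e.g., "everyone" or "friends"

class SocialNetwork:
    """Profiles plus explicit connections that can be traversed."""
    def __init__(self):
        self.profiles = {}      # name -> Profile
        self.connections = {}   # name -> set of connected names

    def add_profile(self, profile):
        self.profiles[profile.name] = profile
        self.connections.setdefault(profile.name, set())

    def connect(self, a, b):
        # An explicit, mutual connection (an undirected "friend" edge).
        self.connections[a].add(b)
        self.connections[b].add(a)

    def traverse(self, start):
        """Walk outward from one profile, breadth-first, following edges."""
        seen, queue = {start}, [start]
        while queue:
            current = queue.pop(0)
            for neighbor in self.connections[current]:
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
        return seen

net = SocialNetwork()
for name in ("alice", "bob", "carol"):
    net.add_profile(Profile(name))
net.connect("alice", "bob")
net.connect("bob", "carol")
print(sorted(net.traverse("alice")))  # ['alice', 'bob', 'carol']
```

Traversability is what distinguishes an SNS profile from a standalone homepage: starting from one profile, a visitor (or a crawler) can reach the rest of the network by following its connection edges.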

Public interest and research initially focused on a small subset of SNSs (e.g., Friendster [ 16 ] and MySpace [ 17 , 18 , 19 ]), but the past decade has seen the proliferation of a much broader range of social networking technologies, as well as an evolution of SNSs into what Kane et al. term social media networks [ 20 ]. This extended definition emphasizes the reach of social media content beyond a single platform. It acknowledges how the boundedness of SNSs has become blurred as platform functionality that was once contained in a single platform, such as “likes,” are now integrated across other websites, third parties, and mobile apps.

Over the past decade, SNSs and social media networks have quickly become embedded in many facets of personal, professional, and social life. In that time, these platforms became more commonly known as “social media.” In the USA, only 5% of adults used social media in 2005. By 2011, half of the US adult population was using social media, and 72% were social media users by 2019 [ 21 ]. MySpace and Facebook dominated SNS research about a decade ago, but now other social media platforms, such as YouTube, Instagram, Snapchat, Twitter, Kik, TikTok, and others, are popular choices among social media users. The intensity of use also has drastically increased. For example, half of Facebook users log on several times a day, and three-quarters of Facebook users are active on the platform at least daily [ 21 ]. Worldwide, Facebook alone has 1.59 billion users who use it on a daily basis and 2.41 billion using it at least monthly [ 22 ]. About half of the users of other popular platforms such as Snapchat, Instagram, Twitter, and YouTube also report visiting those sites daily. Around the world, there are 4.2 billion users who spend a cumulative 10 billion hours a day on social networking sites [ 23 ]. However, different social networking sites are dominant in different cultures. For example, the most popular social media in China, WeChat (inc. Wēixìn 微信), has 1.213 billion monthly users [ 23 ].

While SNS profiles started as a user-crafted representation of an individual user, these profiles now also often consist of information that is passively collected, aggregated, and filtered in ways that are ambiguous to the user. This passively collected information can include data accessed through other avenues (e.g., search engines, third-party apps) beyond the platform itself [ 24 ]. Many people fail to realize that their information is being stored and used elsewhere. Compared to tracking on the web, social media platforms have access to a plethora of rich data and fine-grained personally identifiable information (PII) which could be used to make inferences about users’ behavior, socioeconomic status, and even their political leanings [ 25 ]. While online tracking might be valuable for social media companies to better understand how to target their consumers and personalize social media features to users’ preferences, the lack of transparency regarding what and how data is collected has in more recent years led to heightened privacy concerns and skepticism around how social media platforms are using personal data [ 26 , 27 , 28 ]. This has, in turn, contributed to a loss of trust and changes in how people interact (or not) on social media, leading some users to abandon certain platforms altogether [ 26 , 29 ] or to seek alternative social media platforms that are more privacy focused.

For example, WhatsApp, a popular messaging app, updated its privacy policy to allow its parent company, Facebook, and its subsidiaries to collect WhatsApp data [ 30 ]. Users were given the option to accept the terms or lose access to the app. Shortly after, WhatsApp rival Signal reported 7.5 million installs globally over 4 days. Multiple recent social media data breaches have heightened people’s awareness of the inferences that could be made about them and of the danger of sensitive privacy breaches. Considering the invasive nature of such practices, both consumers and companies are increasingly acknowledging the importance of privacy, control, and transparency in social media [ 31 ]. Similarly, as researchers and practitioners, we must acknowledge the importance of privacy on social media and design for the complex challenges associated with networked privacy. These types of intrusions and data privacy issues are akin to the informational privacy issues that have been investigated in the context of e-commerce, websites, and online tracking (see Chap. 9 ).

While early research into social media and privacy largely focused on these types of concerns, researchers have uncovered how the social dynamics surrounding social media have led to a broader array of social privacy issues that shape people’s adoption of platforms and their usage behaviors. Rainie and Wellman explain how the rise of social technologies, combined with ubiquitous Internet and mobile access, has led to the rise of “networked individualism” [ 32 ]. People have access to a wider variety of relationships than they previously did offline in a geographically and time-bound world. These new opportunities make it more likely that people can foster relationships that meet their individual needs for havens (support and belonging), bandages (coping), safety nets (protect from crisis), and social capital (ability to survive and thrive through situation changes). Additionally, social media users can project their voice to an extended audience, including many weak ties (e.g., acquaintances and strangers). This enables individuals to meet their social, emotional, and economic needs by drawing on a myriad of specialized relationships (different individuals each particularly knowledgeable in a specific domain such as economics, politics, sports, caretaking). In this way, individuals are increasingly networked or embedded within multiple communities that serve their interests and needs.

Conversely, networked individualism has also made people less likely to have a single “home” community, leaving them to deal with more frequent turnover and change in their social networks. Rainie and Wellman describe how people’s social routines are different from previous generations that were more geographically bound – today, only 10% of people’s significant ties are their neighbors [ 32 ]. As such, researchers have questioned and studied the extent to which people can meaningfully maintain interpersonal relationships on social media. The upper limit for doing so has been estimated at 150 connections or “friends” [ 33 ], but social media connections often well exceed this number. With such large networks, it also takes users much more effort to distinguish (mis)information, when communication is intended for the user, and the intent behind that communication. The technical affordances of social media can also help or hinder their (in)ability to capture the nuances of the various relationships in their social network. On many social media platforms, relationships are flattened into friends and followers, making them homogenous and lacking differentiation between, for instance, casual acquaintance and trusted confidant [ 16 , 34 ]. These characteristics of social media lead to a host of social privacy issues which are crucial to address. In the next section, we summarize some of the key privacy challenges that arise due to the unique characteristics of social media.

3 Privacy Challenges in Social Media

Information disclosure privacy issues have been a dominant focus in online technologies and the primary focus for social media. This perspective focuses on access to data and on defining public vs. private disclosures. It emphasizes user control over who sees what.

With so many people from different social circles able to access a user’s social media content, the issues of context collapse occur. Users may post to an imagined audience rather than realizing that people from multiple social contexts are privy to the same information.

The issues of self-presentation jump to the foreground in social media. Being able to manage impressions is a part of privacy management.

The social nature of social media also introduces the issues of controlling access to oneself, both in terms of availability and physical access.

Despite all of these privacy concerns, there is a noted privacy paradox between what people say they are concerned about and their resulting behaviors online.

Early focus of social media privacy research was focused on helping individuals meet their privacy needs in light of four key challenges: (1) information disclosure, (2) context collapse, (3) reputation management, and (4) access to oneself. This section gives an overview of these privacy challenges and how research sought to overcome them. The remainder of this chapter shows how the research has moved beyond focusing on the individual when it comes to social media and privacy; rather, social media privacy has been reconceptualized as a dynamic process of interpersonal boundary regulation between individuals and groups.

3.1 Information Disclosure/Control over Who Sees What

A commonality among early social media privacy research is that the focus has been on information privacy and self-disclosure [ 35 ]. Self-disclosure is the information a person chooses to share with other people or websites, such as posting a status update on social media. Information privacy breaches occur when a website and/or person leaks private information about a user, sometimes unintentionally. Many studies have focused on informational privacy and on sharing information with, or withholding it from, the appropriate people [ 36 , 37 , 38 ] on social media. Privacy settings related to self-disclosure have also been studied in detail [ 39 , 40 , 41 ]. Generally, social media platforms help users control self-disclosure in two ways. First is the level of granularity or type of information that one can share with others. Facebook is the most complex, allowing users to disclose and control more granular information for profile categories such as bio, website, email addresses, and at least eight other categories at the time of writing this chapter. Others have fewer information groupings, which make user profiles chunkier, and thus self-disclosure boundaries less granular. The second dimension is one’s access level permissions, or with whom one can share personal information. The most popular social media platforms err on the side of sharing more information to more people by allowing users to give access to categories such as “Everyone,” “All Users,” or “Public.” Similarly, many social media platforms give the option for access for “friends” or “followers” only.
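The two dimensions described above, granularity of what is shared and access-level permissions for who can see it, can be sketched as a simple visibility model. The field names, access levels, and defaults below are illustrative assumptions rather than any specific platform's settings:

```python
from dataclasses import dataclass, field

# Hypothetical access levels, ordered from broadest to narrowest reach.
ACCESS_LEVELS = ["public", "friends_of_friends", "friends", "only_me"]

@dataclass
class ProfilePrivacy:
    # Per-field visibility captures the first dimension: granular control
    # over WHAT is shared. (Field names and defaults are illustrative.)
    visibility: dict = field(default_factory=lambda: {
        "bio": "public",
        "email": "friends",
        "birthday": "only_me",
    })

    def audience_can_see(self, field_name: str, audience: str) -> bool:
        """Second dimension: WHO can see a field. A viewer at a given
        relationship level sees a field if that level is at least as
        close to the owner as the field's setting."""
        setting = self.visibility.get(field_name, "public")  # broad default
        return ACCESS_LEVELS.index(audience) >= ACCESS_LEVELS.index(setting)

p = ProfilePrivacy()
print(p.audience_can_see("email", "friends"))  # True: friends may see it
print(p.audience_can_see("email", "public"))   # False: strangers may not
```

Note the broad fallback: any field without an explicit setting is treated as "public" in this sketch, mirroring the tendency noted above for popular platforms to err on the side of sharing more information with more people.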

Many researchers have highlighted how disclosures can be shared more widely than intended. Tufekci examined disclosure mechanisms used by college students on MySpace and Facebook to manage the boundary between private and public. Findings suggest that students are more likely to adjust profile visibility rather than limiting their disclosures [ 42 ]. Other research points out how users may not want their posts to remain online indefinitely, but most social media platforms default to keeping past posts visible unless the user specifies otherwise [ 43 ]. Even when the platform offers ways to limit post sharing, there are often intentional and unintentional ways this content is shared that negates the users’ wishes. For example, Twitter is a popular social media platform where users can choose to have their tweets available only to their followers. However, millions of private tweets have been retweeted, exposing private information to the public [ 44 ]. Even platforms like Snapchat, which make posts ephemeral by default, are susceptible to people taking screenshots of a snap and distributing through other channels. Thus, as social media companies continue to develop social media platforms, they should consider how to protect users from information disclosure and teach people to practice privacy protective habits.

Although some users adjust their privacy settings to limit information disclosures, they may be unaware of third-party sites that can still access their information. Scholars have emphasized the importance of educating users on the secondary use of their data, such as when third-party software takes information from their profiles [ 45 ]. Data surveillance continues to expand, and the business model of social media corporations tends to favor getting more information about users, which makes it difficult for users that want to control their disclosure [ 46 ]. Third-party apps can also access information about social media users’ connections without consent of the person whose information is being stored [ 47 ].

3.2 Unique Considerations for Managing Disclosures Within Social Media

As mentioned earlier, social media can expand a person’s network, but as that network expands and diversifies, users have less control over how their personal information is shared with others. Two unique privacy considerations for social media that arise from this tension are context collapse and imagined audiences, which we describe in more detail in the subsections below. For example, as Facebook has become a social gathering place for adults, one’s “friends” may include family members, coworkers, colleagues, and acquaintances all in one virtual social sphere. Social media users may want to share information with these groups but are concerned about which audiences are appropriate for sharing what types of information. This is because these various social spheres that intersect on Facebook may not intersect as readily in the physical world (e.g., college buddies versus coworkers) [ 48 ]. These distinct social circles are brought together into one space due to social media. This concept is referred to as “context collapse” since a user’s audience is no longer limited to one context (e.g., home, work, school) [ 15 , 49 , 50 ]. We highlight research on the phenomenon of the privacy paradox and explain how context collapse and imagined audiences may help explain the apparent disconnect between users’ stated privacy concerns and their actual privacy behavior.

Context Collapse

Nuanced differences between one’s relationships are not fully represented on social media. While real-life relationships are notoriously complex, one of the biggest criticisms of social media platforms is that they often simplify relationships to a “binary” [ 51 ] or “monolithic” [ 52 ] dimension of friend or not friend. Many platforms have just one type of relationship, such as a “friend,” and all relationships are treated the same. Once a “friend” has been added to one’s network, maintaining appropriate levels of social interaction in light of one’s relationship context with that individual (and the many others within one’s network) becomes even more problematic [ 53 ]. Since each friend may have different and, at times, mutually exclusive expectations, acting accordingly within a single space is a challenge. As Boyd points out, for instance, teenagers cannot simultaneously be cool to their friends and to their parents [ 53 ]. Due to this collapsed context of relationships within social media, acquaintances, family, friends, coworkers, and significant others all have the same level of access to a social media user once added to one’s network, unless appropriately managed.

Research reveals that the ways people manage context collapse vary. Working professionals might deal with it by limiting posts containing personal information, creating different accounts, and avoiding friending those they work with [ 54 ]. As another example, many adolescents manage context collapse by keeping their family members separate from their personal accounts [ 55 ]. Other mechanisms include requiring permission before friend requests can be made, denying friend requests, and unfriending. While some platforms offer limited support for manually assigning different privileges to each friend, the default treats every connection the same, and many users never change those defaults.

Privacy incidents resulting from mixing work and social media show why context collapse must be addressed. Context collapse has been shown to negatively affect those seeking employment [ 56 ], as well as to endanger those who are employed. For example, a teacher in Massachusetts lost her job because she did not realize her Facebook posts were visible to people who were not her friends; her complaints about students’ parents getting her sick led to her firing [ 57 ]. Many others have shared anecdotes about being fired after controversial Facebook and Twitter posts [ 58 , 59 ]. Even celebrities who live in the public eye can suffer from context collapse [ 60 , 61 ]. Kim Kardashian, for example, received intense criticism from Internet fans when she posted a photo of her daughter using a cellphone and wearing makeup while Kim was getting ready for hair and wardrobe [ 62 ]. Many online users criticized her parenting style for not limiting screen time, and Kim subsequently shared a photo of a stack of books her kids have access to while she works.

Nevertheless, context collapse can also increase bridging social capital, the potential social benefit of having ties to a wider audience. Context collapse enables this by increasing people’s connections to weak ties and creating serendipitous situations through sharing beyond the people one would normally reach [ 60 ]. For example, job hunters may increase their chances of finding a job by using social media to network with people they would not normally associate with on a daily basis. Getting a message out or spreading the word can also be accomplished more easily. For instance, soliciting contributions to natural disaster funds can be effective on social media because multiple contexts can easily be reached from one account [ 63 ]. In addition to managing context collapse, social media users also have to anticipate whether they are sharing disclosures with their intended audiences.

Imagined Audiences

The disconnect between the real audience and the imagined audience on social media poses privacy risks. Understanding who can see what content, how, when, and where is key to deciding what to share and under what circumstances. Yet research has consistently demonstrated that users do not accurately anticipate who can potentially see their posts. This manifests both as wrongly assuming that a certain person can see content (when they cannot) and as not realizing when another person can access posted content. Users post to an “imagined audience” [ 64 , 65 ], but it often does not match the actual audience viewing their content. Social media users typically imagine that the audience for their posts consists of like-minded people, such as family or close friends [ 65 ]. Sometimes users think of specific people or groups when creating content, such as a daughter, coworkers, people who need cleaning tips, or even one’s deceased father [ 65 ]. Despite these imagined audiences, privacy settings may allow many more people to see these posts (acquaintances, strangers, etc.). While users do tend to limit who sees their profile to a defined audience [ 44 , 66 , 67 ], they still tend to believe their posts are more private than they actually are [ 49 , 68 ].

Some users adopt privacy management strategies to counter potential mismatches between imagined and actual audiences. Vitak identified several privacy management tactics users employ to disclose information to a limited audience [ 69 ]:

Network-based . Social media users decide who to friend or follow, therefore filtering their network of people. Some Facebook users avoid friending people they do not know. Others set friends’ profiles to “hidden,” so that they do not have to see their posts, but avoid the negative connotations associated with “unfriending.”

Platform-based . Some users choose to use the social media sites’ privacy settings to control who sees their posts. A common approach on Facebook is to change the setting to be “friends only,” so that only a user’s friends may see their posts.

Content-based . These users control their privacy by being careful about the information they post. For example, users who know an employer can see their posts may avoid posting while at work.

Profile-based . A less common approach is to create multiple accounts (on a single platform or across platforms), for example, separate professional, personal, and fun accounts.

As another example, teenagers often navigate public platforms by posting messages whose true meaning parents and others will not understand. For instance, by posting a song lyric or quote that only specific individuals will recognize as a reference to a particular movie scene or an ironic message, they creatively limit their audience [ 49 , 70 ]. Others manage their audience with more self-limiting privacy tactics like self-censorship [ 70 ], simply choosing not to post something they were considering in the first place. These various tactics allow users to control who can see what on social media in different ways.
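The imagined-audience mismatch described above can be made concrete with a toy sketch: a poster pictures a few close friends as her audience, but a "friends of friends" setting reaches one network hop further than she imagines. All names, settings, and network data here are illustrative, not any real platform's behavior.

```python
# Toy sketch of the imagined vs. actual audience mismatch. A post shared
# "friends of friends" reaches one extra hop beyond the poster's direct ties.
def actual_audience(author, friends, setting):
    """Compute who can actually see a post under a simple audience setting."""
    direct = friends.get(author, set())
    if setting == "friends":
        return set(direct)
    if setting == "friends_of_friends":
        reach = set(direct)
        for f in direct:
            reach |= friends.get(f, set())  # one extra hop per friend
        reach.discard(author)               # the author is not her own audience
        return reach
    return set()

# Hypothetical network: ana's friends, and her friends' friends.
friends = {"ana": {"ben", "cam"}, "ben": {"ana", "dee"}, "cam": {"ana", "eve"}}
imagined = {"ben", "cam"}  # who ana pictures reading her post
actual = actual_audience("ana", friends, "friends_of_friends")
print(sorted(actual - imagined))  # ['dee', 'eve'] — the unanticipated viewers
```

Even in this tiny network, half of the actual audience falls outside the imagined one; on a real platform with hundreds of connections, the gap is far larger.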

3.3 Reputation Management Through Self-Presentation

Technology-mediated interactions have led to new ways of managing how we present ourselves to different groups of friends (e.g., using different profiles on the same platform depending on the audience) [ 71 ]. Controlling the way we come across to others is a challenging privacy problem that social media users must learn to navigate. Features that limit the audience can also help with managing self-presentation. Nonetheless, reputation or impression management is not just about avoiding posts or limiting access to content. Posting more content, such as selfies, is another approach to controlling how others perceive a user [ 72 ]. In this case, it is important to present content that conveys a certain image of oneself. Research has revealed that those who engage more in impression management tend to have more online friends and disclose more personal information [ 73 ]. Those who feel online disclosures could leave them vulnerable to negativity, such as individuals who identify as LGBTQ+, have also been found to emphasize impression management in order to navigate their online presence [ 74 ]. However, studies show that users still have anxieties about not controlling how they are presented [ 75 ]. Social media users worry not only about what they post but also about how others’ postings will reflect on them [ 42 ].

Another dimension that affects impression management is that social media platforms vary in their policies on whether user profiles must be consistent with offline identities. Facebook’s real name policy, for instance, requires that people use their real name and represent themselves as one person, corresponding to their offline identity. Research confirms that online profiles do tend to reflect users’ authentic personalities [ 76 ]. However, some platforms more easily facilitate identity exploration and have evolved norms encouraging it. For example, “Finsta” (Fake Instagram) accounts appeared on Instagram a few years after the platform launched; these accounts often share content the user does not want associated with their more public identity, allowing for more identity exploration. This may have arisen from the social norm on Instagram that users feel they need to present an ideal self, a pressure scholars have observed more on Instagram than on platforms like Snapchat [ 77 ]. While the ability to craft an online image separate from one’s offline identity may be more prevalent on platforms like Instagram, certain types of social media, such as location-sharing social networks, are deeply tied to one’s offline self, sharing users’ actual physical locations. Users of Foursquare, a popular location-sharing app, have leveraged this tight coupling for impression management. Scholars have observed that users try to impress friends or family members with the places where they spend their time while skipping “check-in” at places like McDonald’s or work for fear of appearing boring or unimpressive [ 78 ].

Regardless of how tightly one’s online presence corresponds with their offline identity, concerns about self-presentation can arise. For example, users may lie about their location on location-sharing platforms as an impression management tactic and have concerns about harming their relationships with others [ 79 ]. On the other hand, Finstas are meant to help with self-presentation by hiding one’s true identity. Ironically, the content posted may be even more representative of the user’s attitudes and activities than the idealized images on one’s public-facing account. These contrasting examples illustrate how self-presentation concerns are complicated.

What further complicates reputation management is that social media content is shared and consumed by groups of people, not just individuals or dyads. Thus, self-presentation is controlled not only by the individual but also by others who might post pictures of and/or tag that individual. Even when friends or followers do not directly post about the user, their actions can reflect on the user simply by virtue of being connected with them. Co-owned data and the negotiation of disclosure rules are a growing area of privacy research. We refer you to Chap. 6 , which goes in-depth on this topic.

3.4 Access to Oneself

A final privacy challenge many social media users encounter is controlling the accessibility others have to them. Some social media platforms automatically display when someone is online, which may invite interaction whether users want to be accessible or not. Controlling access to oneself is not as straightforward as limiting or blocking certain people’s access. For instance, studies have shown that social pressures lead individuals to accept friend requests from “weak ties” as well as true friends [ 53 , 80 ]. As a result, the social dynamics on social media are becoming more complex, creating social anxiety and drama for many users [ 52 , 53 , 80 ]. Although a user may want to control who can interact with them, they may worry that using privacy features such as “blocking” other accounts will send the wrong signal and hurt their relationships [ 81 ]. In fact, an online social norm of “hyperfriending” [ 82 ] has developed, in which users accumulate so many connections that only about 25% of them represent true friendship [ 83 ]. This may undermine the privacy individuals wish they had over who interacts with them on their various accounts. Due to social norms or etiquette, users may feel compelled to interact with others online [ 84 ]. Even if users do not feel they need to interact, they can get annoyed or overwhelmed by seeing too much information from others [ 85 ], feeling that their attention is being captured by an overload of information.

Many social media sites now include location-sharing features that let users tell people where they are by checking in to various locations, tagging photos or posts, or even sharing their location in real time. Privacy issues may therefore arise when sharing one’s location on social media draws undesirable attention. Studies point to user concerns about how others may use knowledge of that location to reach out and ask to meet up, or even to physically find the person [ 86 ]. In fact, research has found that people may be less concerned about the private nature of disclosing location than about disturbing others or being disturbed themselves as a result of location sharing [ 87 ]. This makes sense given that analysis of mobile phone conversations reveals that describing one’s location plays a big role in signaling availability and creating social awareness [ 87 , 88 ].

Some scholars focus on the potential harm that may come from sharing one’s location. Tsai et al. surveyed people about perceived risks and found that fear of potential stalkers is one of the biggest barriers to adopting location-sharing services [ 89 ]. Foursquare users have similarly expressed fears that strangers could use the application to stalk them [ 78 ]. These concerns may explain why users share their location more often with close relationships [ 37 ]. Nevertheless, studies have also found that many individuals believe the benefits of location sharing outweigh the hypothetical costs.

Geotagging is another area of privacy concern for online users. Geotagging occurs when media (photos, websites, QR codes) contain metadata with geographical information. Most often the information consists of longitude and latitude coordinates, and sometimes time stamps are also attached to photos people post. This poses a threat to individuals who post online without realizing that their photos can reveal sensitive information. For example, one study assessed Craigslist postings and demonstrated that the researchers could extract a person’s location, and the hours they would likely be home, from a photo in the listing [ 90 ]. The study even pinpointed the exact home address of a celebrity TV host from their posted Twitter photos. Researchers point out that many users are unaware that their physical safety is at risk when they post photos of themselves or indicate they are on vacation [ 22 , 90 , 91 ]. Doing so can make it easy for robbers or stalkers to know when and where to find them.
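To see why geotagged photos leak location, consider how photo metadata typically encodes GPS coordinates: latitude and longitude are stored as (degrees, minutes, seconds) values plus a hemisphere reference, and converting them to decimal degrees pinpoints where the photo was taken. The sketch below shows that conversion with illustrative values; it is a minimal demonstration of the principle, not the extraction tooling the cited studies used.

```python
# Minimal sketch: converting EXIF-style GPS metadata (degrees, minutes,
# seconds, hemisphere reference) into signed decimal degrees.
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert DMS coordinates to decimal degrees; S and W are negative."""
    decimal = degrees + minutes / 60 + seconds / 3600
    return -decimal if ref in ("S", "W") else decimal

# Illustrative values, as a camera might record them in a photo's metadata:
lat = dms_to_decimal(40, 44, 54.36, "N")
lon = dms_to_decimal(73, 59, 8.40, "W")
print(round(lat, 4), round(lon, 4))  # 40.7484 -73.9857
```

A pair of decimal coordinates like this, pasted into any mapping service, resolves to a street address, which is exactly the risk the studies above describe.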

3.5 Privacy Paradox

While researchers have investigated these various privacy attitudes, perceptions, and behaviors, the privacy paradox (where behavior does not match stated privacy concerns) has been especially salient on social media [ 92 , 93 , 94 , 95 , 96 , 97 ]. As a result, much research focuses on understanding the decision-making process behind self-disclosure [ 98 ]. Scholars who view disclosure as the result of weighing the costs and benefits of disclosing information use the term “privacy calculus” to characterize this process [ 99 ]. Other research draws on the theory of bounded rationality to explain how people’s actions are not fully rational [ 100 ]: they are often guided by heuristic cues that do not necessarily lead them to the best privacy decisions [ 101 ]. Indeed, a large body of literature has tried to dispel or explain the privacy paradox [ 94 , 102 , 103 ].
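The privacy calculus can be caricatured as a simple weighing of perceived benefits against perceived risks. The factors and weights below are entirely hypothetical; the point of the sketch is the structure of the decision, and the bounded-rationality critique is precisely that real users do not compute anything this tidy.

```python
# Toy sketch of the "privacy calculus": disclose only when total perceived
# benefit exceeds total perceived risk. Factors and weights are hypothetical.
def privacy_calculus(benefits, risks):
    """Return True (disclose) if perceived benefits outweigh perceived risks."""
    return sum(benefits.values()) > sum(risks.values())

share_vacation_photo = privacy_calculus(
    benefits={"social_support": 0.6, "self_presentation": 0.5},
    risks={"burglary_signal": 0.4, "employer_visibility": 0.3},
)
print(share_vacation_photo)  # True: perceived benefits (1.1) outweigh risks (0.7)
```

Heuristic cues distort exactly these inputs: a risk a user never thinks of (a stranger browsing the post) simply has weight zero in their calculus, which is one way the paradox arises.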

4 Reconceptualizing Social Media Privacy as Boundary Regulation

By reconceptualizing privacy in social media as boundary regulation , we can see that the seeming paradox in privacy is actually a balance between being too open (disclosing too much) and being too inaccessible (disclosing too little). The latter can result in social isolation, which is privacy regulation gone wrong.

In the context of social media, there are five different types of privacy boundaries that should be considered.

People use various methods of coping with privacy violations , many not tied to disclosing less information.

Drawing from Altman’s theories of privacy in the offline world (see Chap. 2 ), Palen and Dourish describe how, just as in the real world, social media privacy is a boundary regulation process along various dimensions beyond just disclosure [ 104 ]. Privacy can also involve regulating interactional boundaries with friends or followers online and the level of accessibility one desires to those people. For example, if a Facebook user wants to limit who can post on their wall, they can exclude certain people. Research has identified other threats to interpersonal boundary regulation that arise from the unique nature of social media [ 42 ]. First, as mentioned previously, the threat to spatial boundaries occurs because our audiences are obscured, so we no longer have a good sense of whom we may be interacting with. Second, temporal boundaries are blurred because any interaction may now occur asynchronously at some future time due to the virtual persistence of data. Third, multiple interpersonal spaces are merging and overlapping in a way that has caused a “steady erosion of clearly situated action” [ 5 ]. Since each space may have different and, at times, mutually exclusive behavioral requirements, acting appropriately within those spaces, and managing the resulting context collapse, has become more of a challenge [ 42 ]. Along with these problems, a major interpersonal boundary regulation challenge is that social media environments often take control of boundary regulation away from end users. For instance, Facebook’s popular “Timeline” automatically (based on an obscure algorithm) broadcasts an individual’s content and interactions to all of his or her friends [ 41 ]. Thus, Facebook users struggle to keep up with how to manage interactions within these spaces, since Facebook, not the end user, controls what is shared with whom.

4.1 Boundary Regulation on Social Media

One conceptualization of privacy that has become popular in the recent literature views privacy on social media as a form of interpersonal boundary regulation. These scholars characterize privacy as finding the optimal or appropriate level of privacy rather than simply withholding self-disclosures. That is, it is just as important to avoid over-disclosing as it is to avoid under-disclosing, so disclosure is a boundary that must be regulated so that it is neither too much nor too little. Petronio’s communication privacy management (CPM) theory emphasizes that disclosing information (see Chap. 2 ) is vital for building relationships and creating closeness and intimacy [ 105 ]. Thus, social isolation and loneliness resulting from under-disclosure can be outcomes of privacy regulation gone wrong just as much as social crowding can. Similarly, the framework of contextual integrity explains that context-relative informational norms define privacy expectations and appropriate information flows, so a disclosure in one context (such as your doctor asking for your personal medical details) may be perfectly appropriate there but not in another (such as your employer asking for the same details) [ 106 ]. Here the issue is not just an information disclosure boundary but a relationship boundary, where the appropriate disclosure depends on the relationship between the discloser and the recipient.

Drawing on Altman’s theory of boundary regulation, Wisniewski et al. created a useful taxonomy of the privacy boundaries relevant to managing one’s privacy on social media [ 107 ]. They identified five distinct boundary types:

Relationship . This involves regulating who is in one’s social network as well as appropriate interactions for each relationship type.

Network . This consists of regulating access to one’s social connections as well as interactions between those connections.

Territorial . This has to do with regulating what content comes in for personal consumption and what is available in interactional spaces.

Disclosure . The literature commonly focuses on this aspect, which consists of regulating what personal and co-owned information is disclosed to one’s social network.

Interactional . This applies to regulating potential interaction with those within and outside of one’s social network.

Of these boundary types, Wisniewski et al. emphasize that the most important is maintaining relationship boundaries between people. Similarly, Child and Petronio note that “one of the most obvious issues emerging from the impact of social network site use is the challenge of drawing boundary lines that denote where relationships begin and end” [ 108 ]. Making sure that social media facilitates behavior appropriate to each of the user’s relationships is a major challenge.

Each of these interpersonal boundaries can be further classified into regulation of more fine-grained dimensions. In Table 7.1 , we summarize the different ways that each of these five interpersonal boundaries can be regulated on social media.

Next, we describe each of these interpersonal boundaries in more detail.

Self- and Confidant Disclosures

Disclosure boundaries concern the information disclosure issues described in the previous “Privacy Challenges” section. Posting norms on social media platforms often encourage the disclosure of personal information (e.g., age, sexual orientation, location, personal images) [ 109 , 110 ]. Disclosing such information can expose one to financial, personal, and professional risks such as identity theft [ 46 , 111 ]. However, there are also motivations for disclosing personal information. For example, research suggests that posting behaviors on social media platforms have a significant relationship with a desire for positive self-presentation [ 112 , 113 ]. Privacy management is necessary for balancing the benefits of disclosure against its associated risks. This involves regulating both self-disclosure boundaries for information about oneself and confidant-disclosure boundaries for information that is “co-owned” with others [ 105 ] (e.g., a photograph that includes other people, or information about oneself that is shared with another in confidence).

Social media interfaces provide a variety of disclosure boundary regulation mechanisms. Many platforms let users selectively share various types of information, create personal biographies, share links to their websites, or post their birthday. Self-disclosure can also be maintained through privacy settings, such as granular control over who has access to specific posts. Many social media platforms encourage multiparty participation with features such as tagging, subtweeting, or replying to others’ posts. This level of engagement promotes the celebration of shared moments and co-owned information or content. At the same time, it increases the possibility of breaching confidentiality and can create unwanted situations, such as posting congratulations on a pregnancy that has not yet been announced to most family members or friends. Some ways people manage violations of disclosure boundaries are to confront the violator in private or to stop using the platform after the unexpected disclosure [ 114 ].
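The two kinds of disclosure regulation described above, granular per-post audience settings and confidant-disclosure of co-owned content, can be sketched as two small checks. The data model and the consent rule below are hypothetical simplifications, not any platform's actual implementation; notably, real platforms generally do not enforce co-owner consent, which is why confidentiality breaches occur.

```python
# Hypothetical sketch: (1) a per-post audience setting as a visibility check,
# and (2) a confidant-disclosure guard requiring consent from co-owners.
def visible_to(post, viewer, friends):
    """Apply a per-post audience setting: 'public', 'friends', or 'only_me'."""
    if viewer == post["author"]:
        return True
    audience = post["audience"]
    if audience == "public":
        return True
    if audience == "friends":
        return viewer in friends[post["author"]]
    return False  # "only_me" or unknown settings default to private

def safe_to_publish(post):
    """Co-owned content check: everyone tagged in the post must have consented."""
    return set(post.get("tagged", [])) <= set(post.get("consented", []))

friends = {"dana": {"eli"}}
photo = {"author": "dana", "audience": "friends",
         "tagged": ["eli", "fay"], "consented": ["eli"]}
print(visible_to(photo, "eli", friends))  # True: eli is dana's friend
print(safe_to_publish(photo))             # False: fay has not consented
```

Defaulting unknown audience values to private mirrors the privacy-by-default stance; the consent check is the kind of mechanism the co-ownership literature argues for rather than one platforms provide.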

Relationship Connection and Context

Relationship boundaries concern whom the user accepts into his or her “friend group,” which consequently shapes the nature of online interactions within the person’s social network. Social media platforms embed the idea of “friend-based privacy,” where informational and interactional access depends primarily on one’s connections. The structure of one’s network can affect the level of engagement and the types of disclosures made on a platform. Individuals with more open relationship boundaries may have more weak ties than those who employ stricter rules for admitting people into their inner circles. For example, studies have found that people who engage in “hyper-adding,” that is, adding a significant number of persons to their network, can end up with a higher proportion of “weak ties” [ 53 , 82 ].

After users accept friends and make connections, they must manage overlapping contexts such as work, family, or acquaintances. This leads to the types of privacy issues discussed under “Context Collapse” in the previous “Privacy Challenges” section. Research shows that boundary violations are rarely remedied by blocking or unfriending except in extreme cases [ 115 ]. Furthermore, users rarely organize their friends into groups (and some social media platforms do not offer that functionality) [ 114 ]: people are either unaware of the feature, think it takes too much time, or are concerned that the wrong person would still see their information. As a result, users often feel they must sacrifice being authentic online to control their privacy.

Network Discovery and Interaction

An individual’s social media network is often public knowledge, and there are advantages and disadvantages to having others be aware of one’s social connections (aka friends list or followers). Network boundary mechanisms enable people to identify groups of people and manage interactions between those groups. We highlight two types of network boundaries, namely, network discovery and network intersection boundaries. Network discovery boundaries center on regulating the type of access others have to one’s network connections. An open approach to network discovery can create problems; for instance, competitors within the same industry could steal clients by carefully mining a publicly facing friend list. Another issue arises when a person’s friend has a poor reputation and that connection is negatively received by others within the social group. Sometimes the result is positive, for example, when friends or family discover they have mutual connections, thus building social capital. Some social media platforms offer the ability to hide one’s friend list from everyone.

Network intersection boundaries involve regulating interactions among different friend groups within one’s social network. Social media users have noted the benefits of engaging in discourse online with people they may not know personally offline [ 116 ]. In contrast, clashes within one’s friend list due to opposing political views or personal stances can create tensions that make moderating a post difficult. These boundaries can be harder to control and sometimes lead to conflict if one is forced to choose which friends can participate in discussions.

Inward- and Outward-Facing Territories

Territorial boundaries include “places and objects in the environment” that indicate “ownership, possession, and occasional active defense” [ 117 ]. Within social media, features constitute either inward-facing or outward-facing territories. Inward-facing territories are commonly characterized as spaces where users find updates on their friends and see the content their connections post (such as the “news feed” on Facebook or “updates” on LinkedIn). To control their inward-facing territories, individuals can hide posts from specific people, adjust their privacy settings, and use filters to find specific information.

These territories are constantly updated with photos, videos, and news articles that are personalized and not public-facing, which contributes to an overall low priority for territorial management [ 114 ]. Most users choose to ignore content that is irrelevant to them rather than employ privacy features. In addition, once privacy features are used to hide content from particular friends, users rarely revisit that decision to reconsider including that person’s content within the territory.

It is important to note that the key characteristic of outward-facing territory management is the regulation of potentially unsatisfactory interactions rather than a fear of information exposure. One example of an outward-facing territory is Facebook’s wall/timeline, where a person’s friends may contribute to that person’s social media presence. Outward-facing territories fall between public and private places, which creates more risk of unintended boundary violations. Altman argues that “because of their semipublic quality [outward-facing territories] often have unclear rules regarding their use and are susceptible to encroachment by a variety of users, sometimes inappropriately and sometimes predisposing to social conflict” [ 117 ]. Similar to the confidant disclosures described above, connections may post (unwanted) content on a user’s wall that could lead to turbulence if that content is later deleted.

Interactional Disabling and Blocking

Regulating interactional boundaries can reduce the need for the other boundary regulations discussed above, because a person limits access to themselves by disabling features [ 114 ]. For example, a user may deactivate Facebook Messenger to avoid receiving messages but reactivate the app when they deem interaction welcome. Similarly, disabling semipublic features of the interface (such as the wall on Facebook) can give users a greater sense of control. This form of interaction withdrawal is typically not directed at a specific person; rather, it may be motivated by a strong desire to control one’s online spaces. As such, disabling features is associated with perceptions of mistrust within one’s network and a desire to limit interruptions [ 115 ]. On the more extreme end, blocking can also be employed to regulate interactional boundaries. Unlike other withdrawal mechanisms such as disabling one’s wall, picture tagging, or chat, blocking is inherently targeted: the act represents the rejection and revocation of access to oneself from a particular party. Some social media platforms allow users to block other people or pages, meaning that the blocked person may not contact or interact with the user in any form. Generally, blocking results from a negative experience such as stalking or being bombarded with unwanted content [ 118 ].
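The contrast drawn above between disabling a feature (untargeted, cuts off a channel for everyone) and blocking (targeted, revokes access for one party) can be sketched in a few lines. The class and attribute names here are hypothetical illustrations, not any platform's API.

```python
# Sketch of two interaction-withdrawal mechanisms: a feature-level switch
# (like deactivating Messenger) versus a targeted per-user block list.
class Account:
    def __init__(self, name):
        self.name = name
        self.messaging_enabled = True  # untargeted: disables the channel entirely
        self.blocked = set()           # targeted: revokes access for specific users

    def can_message(self, sender):
        """A sender can message only if the channel is on and they are not blocked."""
        return self.messaging_enabled and sender not in self.blocked

alice = Account("alice")
alice.blocked.add("stalker")
print(alice.can_message("bob"))      # True
print(alice.can_message("stalker"))  # False: targeted block
alice.messaging_enabled = False      # e.g., deactivating the messaging feature
print(alice.can_message("bob"))      # False: channel disabled for everyone
```

The sketch makes the social-signal difference concrete: disabling affects "bob" and "stalker" alike and so singles out no one, while the block list names a specific party, which is why blocking carries the relational cost the text describes.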

4.2 Coping with Social Media Privacy Violations

Over time, many social media platforms have implemented new privacy features that attempt to address evolving privacy risks and users’ need for more granular control online. While this effort is commendable, Ellison et al. argue that “privacy behaviors on social networking sites are not limited to privacy settings” [ 41 ]. Thus, social media users still venture outside the realm of privacy settings to achieve appropriate levels of social interaction. Coping mechanisms can be viewed as behaviors used to maintain or regain interpersonal boundaries [ 107 ]. Although these coping approaches may often be suboptimal, Wisniewski et al.’s framework of coping strategies for maintaining one’s privacy provides insight into the struggles many social media users face in maintaining these boundaries.

Filtering is often defined as the “reduction of intensity of inputs” [ 117 ]. It includes selecting whom one will accept into one’s online social circle and is often used in the management of relational boundaries. Filtering techniques may include relying on social cues (e.g., viewing the profile picture or examining mutual friends) before confirming the addition of a new connection. Other methods repurpose non-privacy-related features to manage interactions based on relational context, for example, creating multiple accounts on the same platform to separate professional connections from personal friends.

The vast amount of information on social media can easily become overwhelming and difficult to consume. Therefore, social media users may opt to ignore posts or skim through information to decide which items should receive priority for engagement. Ignoring is most common for inward-facing territories such as one’s “Feed” page. Overreliance on this approach, however, increases the chances of missing important moments that connections have shared.

Blocking is a more extreme approach to interactional boundary management compared to filtering and ignoring, which contributes to lower levels of reported usage [ 119 ]. As an alternative, users have developed other technology-supported mechanisms that would allow them to avoid unwanted interactions. As an example, Wisniewski et al. describe using pseudonyms on Facebook to make it more difficult to find a user on the platform [ 107 ]. Another method for blocking unwanted interactions is to use the account of a close friend or loved one to enjoy the benefits of the content on the platform without the hassle of expected interactions. Page et al. highlight this type of secondary use for those who avoid social media because of social anxieties, harassment, and other social barriers [ 120 ].

When some users feel they are losing control, they withdraw from social media by doing one of the following: deleting their account, censoring their posts, or avoiding confrontation. As a result, a common technique is limiting or adjusting the information shared (even avoiding posts that may be received negatively) [ 121 ]. Das and Kramer found that “people with more boundaries to regulate censor more; people who exercise more control over their audience censor more content; and, users with more politically and age diverse friends censor less, in general” [ 122 ]. Withdrawal suggests that some users think the risks outweigh the benefits of social media.

Unlike defensive coping mechanisms such as filtering, blocking, or withdrawal, some social media users resort to more offensive mechanisms when the intention is to create interactions that may be confrontational. Aggressive behavior is displayed when the goal is to seek revenge or garner attention from specific people or groups. Some users may choose to embed subliminal references in their posts to indirectly address or offend specific persons (e.g., an ex-partner, coworker, family member).

Compliance is giving in to pressures (external or internal) and adjusting one’s interpersonal boundary preferences for others. Altman describes this as “repeated failures to achieve a balance between achieved and desired levels of privacy” [ 117 ]. Relinquishing one’s interactional privacy needs to accommodate pressures of disclosure, nondisclosure, or friending preferences could result in a perceived loss of control over social interactions.

A healthy strategy for managing social media boundary violations is communicating with the other person involved and finding a resolution. Prior work indicates that most users who compromise do so offline [ 107 ]. These compromises are mostly with closer friends whom the user can contact through email, phone call, or messaging. These more private channels keep other people from becoming involved online. Also, many compromises concern tagging someone in photos or sharing personal information about another user (i.e., confidant disclosure).

In addition to this coping framework for social media privacy, Stutzman examined the creation of multiple profiles on social media websites, primarily Facebook, as an information regulation mechanism. Through grounded theory, he identified three types of information boundary regulation within this context (pseudonymity, practical obscurity, and transparent separations) and four overarching motives for these mechanisms (privacy, identity, utility, and propriety) [ 71 ]. Lampinen et al. created a framework of strategies for managing private versus public disclosures. It defined three dimensions by which strategies differed: behavioral vs. mental, individual vs. collaborative, and preventative vs. corrective [ 71 , 123 ]. The various coping frameworks conceptualize privacy as a process of interpersonal boundary regulation. However, they do not solve the problem of managing privacy on these platforms. They do attempt to model the complexity of privacy management in a way that better reflects the complex nature of interpersonal relationships rather than as a matter of withholding versus disclosing private information.

5 Addressing Privacy Challenges

Rather than just measuring privacy concerns, researchers and designers should focus on understanding attitudes towards boundary regulation. Validated tools for measuring boundary preservation concern and boundary enhancement expectations are provided in this chapter.

Privacy features need to be designed to account for individual differences in how they are perceived and used. While some feel features like untag, unfriend, and delete are useful, others are worried about how using such features will impact their relationships.

Unaddressed privacy concerns can serve as a barrier to using social media. It is crucial to design for not only functional privacy concerns (e.g., being overloaded by information, guarding from inappropriate data access) but social privacy concerns as well (e.g., unwelcome interactions, pressures surrounding appropriate self-presentation).

This section describes how to better identify privacy concerns by measuring them from a boundary regulation perspective. We also emphasize the importance of individual differences when designing privacy features. Finally, we elaborate on a crucial set of social privacy issues that we feel are a priority to address. While many social media users may feel these types of social pressures to some degree, these problems have pushed some of society’s most vulnerable to abandon social media completely despite their desire for social connection. We call on social media designers and researchers to focus on these problems, which are a side effect of the technologies we have created.

5.1 Understanding People and Their Privacy Concerns

Understanding social media privacy as boundary regulation allows us to better conceptualize people’s attitudes and behaviors. It helps us anticipate their concerns and balance between too little and too much privacy. However, many existing tools for measuring privacy come from the information privacy perspective [ 124 , 125 , 126 ] and focus on data collection by organizations, errors, secondary use, or technical control of data. In detailing the various types of privacy boundaries that are relevant for managing one’s privacy on social media, Wisniewski et al. [ 114 ] emphasized that the most important is maintaining relationship boundaries between people.

Page et al. [ 86 , 127 ] similarly found that concerns about damaging relationship boundaries are actually at the root of lower-level privacy concerns such as worrying about who sees what, being too accessible, or being bothered or bothering others by sharing too much information. For instance, a typically cited privacy concern, such as worrying about a stranger knowing one’s current location, turns out to be a privacy concern only if an individual expects that a stranger might violate typical relationship expectations. Their research revealed that many people were unconcerned about strangers knowing their location, explaining that no one would care enough to use that information to come find them. They did not expect anyone to violate relationship boundaries and so were privacy unconcerned. On the other hand, those who felt there was a likelihood of someone using their location for nefarious purposes were privacy concerned. What drives privacy concerns is social media enabling a negative change in relationship boundaries and in the types of interactions that are now possible (such as strangers being able to locate a user).

In fact, while scholars have used many lower-level privacy concerns, such as being worried about sharing information, to predict social media usage and adoption, they have met with mixed success, leading to the commonly observed privacy paradox. However, research shows that preserving one’s relationship boundaries is at the root of these lower-level online privacy concerns (e.g., informational, psychological, interactional, and physical privacy concerns) and is a significant predictor of social media usage [ 86 , 127 ]. In other words, concerns about social media damaging one’s relationships (i.e., relationship boundary regulation concerns) are what drive privacy concerns.

5.2 Measuring Privacy Concerns

Boundary regulation plays a key role in maintaining the right level of privacy on social media, but how do we evaluate whether a platform adequately supports it? A popular scale for assessing users’ awareness of secondary access is the Internet Users’ Information Privacy Concerns (IUIPC) scale, which measures their perceptions of collection, control, and awareness of user data [ 125 ]. An important finding is that users “want to know and have control over their information stored in marketers’ databases.” This indicates that social media should be designed so that people know where their data goes. However, as is evident throughout this chapter, research on social media privacy has found social privacy concerns to be more salient. In fact, relationship boundaries are a key privacy boundary to consider and measure when evaluating privacy concerns. Thus, a scale measuring relationship boundary regulation would allow researchers and designers to better evaluate social media privacy.

Here we present validated relationship boundary regulation survey items developed by Page et al., which predict adoption and usage for various social media platforms including Facebook, Twitter, LinkedIn, Instagram, and location-sharing social media [ 127 , 128 ]. These survey items can be used to evaluate privacy concerns for existing social media platforms, as well as to capture attitudes about new features or platforms. The survey items capture attitudes about one’s ability to regulate relationship boundaries when using a social media platform and are administered with a 7-point Likert scale (−3 = Disagree Completely, −2 = Disagree Mostly, −1 = Disagree Slightly, 0 = Neither agree nor disagree, 1 = Agree Slightly, 2 = Agree Mostly, 3 = Agree Completely). These items measure both concerns and positive expectations.

When evaluating a new or existing social media platform, the relationship boundary preservation concern (BPC) items can be used to gauge users’ concerns about harming their relationships. A higher score indicates that more support for privacy management is needed on a given platform. The relationship boundary enhancement expectation (BEE) items can be used to evaluate whether users expect that using the platform will improve their relationships. A high score is important for driving adoption and usage; having low concerns alone is not enough. Along similar lines, even if users have high concerns, these may be counteracted by a perceived high level of benefits, so users remain frequent users of a platform. For instance, Facebook, one of the most widely used platforms, was shown to invoke both high levels of concern and high levels of enhancement expectation [ 127 ]. However, note that high frequency of use does not necessarily mean high levels of engagement (e.g., posting, commenting) or that users do not employ suboptimal workarounds (e.g., being vague in their posts) [ 81 ]. On the other hand, Twitter has a higher level of concern relative to perceived enhancement and, accordingly, lower levels of usage [ 127 ].
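
The trade-off described above, where high enhancement expectations can offset high concerns, can be sketched as a simple net score. This is only an illustrative sketch, not the analysis used in the cited studies (which regressed usage on factor scores); the function name and the platform-level scores below are hypothetical.

```python
# Illustrative sketch: BEE (expected relational benefits) can offset BPC
# (boundary preservation concerns). A net score is one hypothetical way to
# express that trade-off on the -3..3 Likert metric used in this chapter.
def predicted_usage_tendency(bpc: float, bee: float) -> float:
    """Positive when expected relational benefits outweigh concerns."""
    return bee - bpc

# Hypothetical platform-level scores: both platforms invoke similar concern,
# but only the first offers enough perceived benefit to offset it.
platform_a = predicted_usage_tendency(bpc=1.5, bee=2.0)  # benefit exceeds concern
platform_b = predicted_usage_tendency(bpc=1.5, bee=0.5)  # concern exceeds benefit
print(platform_a > 0, platform_b > 0)
```

This mirrors the chapter's observation that a platform can be heavily used despite high concerns, so long as enhancement expectations are also high.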

In the validation studies, the set of survey items representing BPC was treated as a scale, and factor analysis was used to compute a single score. Similarly, the items representing BEE were used to generate a single factor score for that construct. These scores can be used to evaluate new features or platforms in the lab or after deployment. For instance, after performing tasks with a new feature or platform, users can answer these questions, and the designer can compare responses between different designs in A/B testing, or use them to predict usage frequency and adoption intentions (e.g., see [ 127 , 129 ] for detailed examples). Moreover, by correlating BPC or BEE with demographics or other customer segmentations (e.g., age, whether they are new customers, purpose for using the platform), product designers may be able to identify attitudes connected with certain segments of their customer base and address them directly.
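
The A/B comparison described above can be sketched in a few lines. Note this is a simplified illustration: the validation studies computed a single factor score per respondent via factor analysis, while here a per-respondent mean over the items stands in as a proxy; the response matrices are invented for the example.

```python
import numpy as np

# Hypothetical Likert responses (-3..3) to the BPC items.
# Rows = respondents, columns = BPC survey items.
design_a_bpc = np.array([
    [2, 1, 2, 3],
    [1, 0, 1, 2],
    [3, 2, 2, 3],
])
design_b_bpc = np.array([
    [-1, 0, -2, -1],
    [0, -1, -1, 0],
    [1, 0, 0, -1],
])

def construct_score(responses: np.ndarray) -> np.ndarray:
    """Collapse a respondent-by-item matrix into one score per respondent.

    The validation studies derived this via factor analysis; the mean
    across items is used here as a simple proxy for that factor score.
    """
    return responses.mean(axis=1)

a_scores = construct_score(design_a_bpc)
b_scores = construct_score(design_b_bpc)

# Compare mean boundary preservation concern between the two designs.
# In practice this comparison would be paired with a significance test.
print(f"Design A mean BPC: {a_scores.mean():.2f}")
print(f"Design B mean BPC: {b_scores.mean():.2f}")
```

A lower mean BPC for one design would suggest it raises fewer concerns about harming relationships, exactly the kind of evidence the A/B comparison above is meant to surface.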

5.3 Designing Privacy Features

When designing for privacy features, a crucial aspect to consider is individual differences. Privacy is not one-size-fits-all: there are many variations in how people feel, what they expect, and how they behave. Because social media connects individuals with diverse needs and expectations, and from a myriad of contexts, a necessity in addressing social media privacy is understanding individual differences in privacy attitudes and behaviors. Many individual differences have been identified that shape privacy needs and preferences [ 15 ] and behaviors [ 6 , 24 , 99 ].

Scholars have established that privacy as a construct is not limited to informational privacy (i.e., understanding the flow of data) but also includes social privacy concerns that may be more interactional (e.g., accessibility) or psychological in nature (e.g., self-presentation) [ 111 , 130 ]. Thus, a host of attitudes and experiences could shape an individual’s view on what it means to have privacy online. For example, people’s preferences for privacy tools could be heavily influenced by the type of data being shared or the recipient of that data [ 36 , 131 , 132 ]. Likewise, prior experiences (negative or positive) could shape how people interact online which could affect disclosure [ 133 ]. Context and relevance have also been found to significantly influence privacy behavior online. Drawing from the contextual integrity framework, many researchers argue that when people perceive data collection to be reasonable or appropriate, they are more likely to share information [ 134 ]. On the other hand, research has shown that when faced with uncomfortable scenarios, people employ privacy protective behaviors such as nondisclosure or falsifying information [ 135 ]. Research has also pointed to personal characteristics that could shape digital privacy behavior such as personality, culture, gender, age, and social norms [ 64 , 106 , 136 , 137 , 138 , 139 , 140 ].

While identifying concerns about damaging one’s relationships is important to measure, understanding the individual differences that can lead someone to be concerned can provide insight into addressing these concerns. For instance, through a series of investigations, Page et al. uncovered a communication style that predicts concerns about preserving relationship boundaries on many different social media platforms [ 127 , 128 , 129 ]. This communication style is characterized by wanting to put information out there so that the individual does not need to proactively inform others. Those who prefer an FYI (For Your Information) communication style are less concerned about relationship boundary preservation and, as a result, exhibit higher levels of engagement, interactions, and use of social media than low FYI communicators. For example, the survey items that capture an FYI communication style preference for location-sharing social media are: “I want the people I know to be aware of my location, without having to bother to tell them,” “I would prefer to make my location available to the people I know, so that they can see it whenever they need it,” and “The people I know should be able to get my location whenever they feel they need it.” Each item is administered with a 7-point Likert scale (Disagree strongly, Disagree moderately, Disagree slightly, Neutral, Agree slightly, Agree moderately, Agree strongly). For other social media platforms, the information type is adjusted (i.e., “what I’m up to” instead of “my location”).
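
Scoring the FYI items can be sketched as follows. The item wordings and the 7-point response labels are quoted from the chapter; the numeric coding (1-7) and the averaging into a single score are illustrative assumptions, not the published scoring procedure.

```python
# Assumed numeric coding for the 7-point labels quoted in the chapter.
LIKERT = {
    "Disagree strongly": 1, "Disagree moderately": 2, "Disagree slightly": 3,
    "Neutral": 4,
    "Agree slightly": 5, "Agree moderately": 6, "Agree strongly": 7,
}

# FYI communication style items for location-sharing social media (Page et al.).
FYI_ITEMS = [
    "I want the people I know to be aware of my location, without having to bother to tell them",
    "I would prefer to make my location available to the people I know, so that they can see it whenever they need it",
    "The people I know should be able to get my location whenever they feel they need it",
]

def fyi_score(answers: list[str]) -> float:
    """Average the numeric codes of one respondent's answers to the FYI items."""
    return sum(LIKERT[a] for a in answers) / len(answers)

respondent = ["Agree moderately", "Agree slightly", "Neutral"]
print(f"FYI score: {fyi_score(respondent):.2f}")  # (6 + 5 + 4) / 3 = 5.00
```

For other platforms, the item text would swap "my location" for the relevant information type (e.g., "what I'm up to"), as the chapter notes.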

Consequently, this raises concern over implications for non-FYI communicators since the design of major social media platforms is catered to FYI communicators [ 127 , 128 ]. Drawing on this insight, Page demonstrated how considering the user’s communication style when designing location-sharing social media interfaces can alleviate boundary preservation concerns [ 129 ]. Certain design choices such as choosing a request-based location-sharing interaction can lower concerns for non-FYI communicators, while continuous location-sharing and check-in type interactions that are typical in social media may be fine for FYI communicators.

This demonstrates that researchers should account for individual differences in privacy attitudes when designing social media. Another individual difference in attitudes towards privacy features is a user’s apprehension that using common features such as untag, delete, or unfriend/unfollow can hinder their relationships with others. Page et al. identified that while many use privacy features and perceive them as useful tools for protecting their privacy, many others are concerned about how using privacy features could hurt their relationships (e.g., worrying about offending others by untagging or unfriending) [ 81 ]. Instead, those individuals would use alternative privacy management tactics such as vaguebooking (not sharing specific details and using vague posts). Designers need to be aware that privacy features must also be catered to individual variations in attitudes, or else they may be ineffective and unused by certain segments of the user population.

5.4 Privacy Concerns and Social Disenfranchisement

A significant amount of research within the domain of social media nonuse has been focused on functional barriers that hinder adoption. In many cases, nonuse is traced to a lack of access (e.g., limited access to technology, financial resources, or the Internet). However, the push against adoption and subsequent usage can be voluntary [ 141 ] due to functional privacy concerns such as concerns about data breaches, information overload, or annoying posts [ 120 ]. Several social media companies have also implemented features such as time limits to help users counter overuse [ 142 ].

Likewise, it is equally important to consider social barriers that prevent social media engagement for people who really could use the social connection. Sharing about distressing experiences can be beneficial and reduce stigma, improve connection and interpersonal relationships with one’s network, and enhance well-being [ 6 , 7 , 143 , 144 ]. However, Page et al. identified a class of barriers that highlight social privacy concerns rooted in social anxiety or concerns about being overly influenced by others on social media. This is in contrast to the prior school of thought that focused primarily on functional motivations as barriers that influence nonuse (see Fig. 7.1) [ 120 ]. They point out that many who are already vulnerable avoid social media due to social barriers such as online harassment or paralysis over making decisions pertaining to online social interactions. Yet, they are also the ones who could benefit greatly from social connection and who end up losing touch with friends and social support by being off social media. They term this lose-lose situation of negative social consequences that arise when using social media as well as consequences from not using it, social disenfranchisement. They call on designers to address such social barriers and to realize that in designing the user experience to connect users so well, they are implicitly designing the nonuser experience of being left out. Given that social media usage may not always be a viable option, designers should design to alleviate the negative consequences of nonuse.

Fig. 7.1 Extension of Wyatt’s frame that divided nonusers along the dimensions of whether someone has used the technology in the past and the motivation for adoption (extrinsic, e.g., organizationally imposed, versus intrinsic, e.g., desire to communicate through technology). Page et al. differentiate between functional motivations/barriers of use (which have been the focus of much research) versus social motivations/barriers to use. Other frameworks consider additional temporal states of adoption (whether someone is currently using the technology and whether they will in the future). See [ 120 ] for more detailed descriptions

5.5 Guidelines for Designing Privacy-Sensitive Social Media

Now that you have learned about various privacy problems related to social media use, how do you apply that to designing or studying social media? Here are some practical guidelines.

Identifying Privacy Attitudes

Measuring privacy attitudes is a tricky task. Using existing informational privacy scales, users often say they are concerned, but this does not end up matching their actual behavior. Approaching measurement from a boundary regulation perspective makes it easier to identify the proper balance between sharing too much and sharing too little. The survey items described in this chapter offer a way to measure concerns about boundary regulation as well as positive expectations. Considering both is key to more accurately predicting user behaviors.

Understanding Your Target Population

Some key characteristics are described in this chapter. Identifying these in your target population can help you be aware of individual differences that might affect privacy preferences on social media. When you are measuring privacy concerns, matching the preferences of your audience makes it more likely that they will have a good user experience. Pay particular attention to traits that have been identified as being related to usage and adoption of social media platforms, such as the FYI communication style which can be measured using the survey items provided in this chapter.

Evaluating Privacy Features

Focus on understanding whether users perceive your privacy features as useful or perhaps as posing a relational hindrance. The survey items provided in this chapter can help you do so. When anticipating privacy needs of your social media users, make sure you identify features that may impact boundary regulation both positively and negatively. You can compare attitudes between the existing feature and the newer version of the feature that will/has been deployed. You can also correlate attitudes towards privacy features with individual characteristics – some subpopulation of users may see privacy features as useful, while others may consider them a relational hindrance.

6 Chapter Summary

Social media has been widely adopted and has quickly become an integral part of social, personal, economic, political, professional, and instrumental welfare. Understanding how mediated social interactions change the assumptions around audience management, disclosure, and self-presentation is key to working towards reconciling offline privacy assumptions with new realities. Moreover, given the rapidly changing landscape of widely available social media platforms, researchers and designers need to continually re-evaluate the privacy implications of new services, features, and interaction modalities.

With the rise of networked individualism, an especially strong emphasis must be placed on understanding individual characteristics and traits that can shape a user’s privacy expectations and needs. Given the inherently social nature of social media, understanding social norms and the influence of larger cultural and structural factors is also important for interpreting expectations of privacy and the significance around various social media behaviors.

Privacy does not have a one-size-fits-all solution. It is a normative construct that is context dependent and can change over time, from culture to culture, and person to person. It needs to be weighed across different individuals and against other important goals and values of the larger group or society. Because people and their social interactions can be complex, designing for social media privacy is usually not a straightforward task. However, the consequences of not addressing privacy issues can range from irritating to devastating. Using this chapter as a guide and taking the steps to think through privacy needs and expectations of your social media users is an integral part of designing for social media.

Quan-Haase, Anabel, and Alyson L. Young. 2010. Uses and gratifications of social media: A comparison of Facebook and instant messaging. Bulletin of Science, Technology & Society 30 (5): 350–361.


Gruzd, Anatoliy, Drew Paulin, and Caroline Haythornthwaite. 2016. Analyzing social media and learning through content and social network analysis: A faceted methodological approach. Journal of Learning Analytics 3 (3): 46–71.

Yang, Huining. 2020. Secondary-school Students’ Perspectives of Utilizing Tik Tok for English learning in and beyond the EFL classroom. In 2020 3rd International Conference on Education Technology and Social Science (ETSS 2020) , 163–183.


Van Dijck, José. 2012. Facebook as a tool for producing sociality and connectivity. Television & New Media 13 (2): 160–176.

Grudin, Jonathan. 2001. Desituating action: Digital representation of context. Human–Computer Interaction 16 (2–4): 269–286.

Andalibi, Nazanin, Oliver L. Haimson, Munmun De Choudhury, and Andrea Forte. 2016. Understanding social media disclosures of sexual abuse through the lenses of support seeking and anonymity. In Proceedings of the 2016 CHI conference on human factors in computing systems , 3906–3918.

Andalibi, Nazanin, Pinar Ozturk, and Andrea Forte. 2017. Sensitive self-disclosures, responses, and social support on Instagram: The case of #depression. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing , 1485–1500.

Lin, Han, William Tov, and Qiu Lin. 2014. Emotional disclosure on social networking sites: The role of network structure and psychological needs. Computers in Human Behavior 41: 342–350.

Burke, Moira, Cameron Marlow, and Thomas Lento. 2010. Social network activity and social well-being. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems , ACM, 1909–1912.

Ellison, Nicole B., Charles Steinfield, and Cliff Lampe. 2007. The benefits of Facebook “Friends:” Social capital and college students’ use of online social network sites. Journal of Computer-Mediated Communication 12 (4): 1143–1168.

———. 2011. Connection strategies: social capital implications of Facebook-enabled communication practices. New Media & Society 13 (6): 873–892.

Koroleva, Ksenia, Hanna Krasnova, Natasha Veltri, and Oliver Günther. 2011. It’s all about networking! Empirical investigation of social capital formation on social network sites. In ICIS 2011 Proceedings .

Fischer-Hübner, Simone, Julio Angulo, Farzaneh Karegar, and Tobias Pulls. 2016. Transparency, privacy and trust–technology for tracking and controlling my data disclosures: Does this work? In IFIP International Conference on Trust Management , Springer, 3–14.

Xu, Heng, Hock-Hai Teo, Bernard C.Y. Tan, and Ritu Agarwal. 2012. Research note-effects of individual self-protection, industry self-regulation, and government regulation on privacy concerns: A study of location-based services. Information Systems Research 23 (4): 1342–1363.

Boyd, Danah. 2002. Faceted Id/Entity: Managing Representation in a Digital World . Retrieved August 14, 2020 from https://dspace.mit.edu/handle/1721.1/39401 .

Boyd, Danah M., and Nicole B. Ellison. 2007. Social network sites: Definition, history, and scholarship. Journal of Computer-Mediated Communication 13 (1): 210–230.

Dwyer, C., S.R. Hiltz, M.S. Poole, et al. 2010. Developing reliable measures of privacy management within social networking sites. In System Sciences (HICSS), 2010 43rd Hawaii International Conference on , 1–10.

Hargittai, E. 2007. Whose space? Differences among users and non-users of social network sites. Journal of Computer-Mediated Communication 13: 1.

Tufekci, Zeynep. 2008. Grooming, Gossip, Facebook and Myspace. Information, Communication & Society 11 (4): 544–564.

Kane, Gerald C., Maryam Alavi, Giuseppe Joe Labianca, and Stephen P. Borgatti. 2014. What’s different about social media networks? A framework and research agenda. MIS Quarterly 38 (1): 275–304.

Pew Research Center. 2019. Social Media Fact Sheet . Pew Research Center: Internet, Science & Technology. Retrieved November 27, 2020 from https://www.pewresearch.org/internet/fact-sheet/social-media/ .

Fire, M., R. Goldschmidt, and Y. Elovici. 2014. Online social networks: Threats and solutions. IEEE Communications Surveys Tutorials 16 (4): 2019–2036.

Social Media Users. DataReportal – Global Digital Insights . Retrieved March 16, 2021 from https://datareportal.com/social-media-users .

Alalwan, Ali Abdallah, Nripendra P. Rana, Yogesh K. Dwivedi, and Raed Algharabat. 2017. Social media in marketing: A review and analysis of the existing literature. Telematics and Informatics 34 (7): 1177–1190.

Binns, Reuben, Jun Zhao, Max Van Kleek, and Nigel Shadbolt. 2018. Measuring third-party tracker power across web and mobile. ACM Transactions on Internet Technology 18 (4): 52:1–52:22.

Barnard, Lisa. 2014. The cost of creepiness: How online behavioral advertising affects consumer purchase intention.

Dolin, Claire, Ben Weinshel, Shawn Shan, et al. 2018. Unpacking perceptions of data-driven inferences underlying online targeting and personalization. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems , ACM, 493.

Ur, Blase, Pedro Giovanni Leon, Lorrie Faith Cranor, Richard Shay, and Yang Wang. 2012. Smart, useful, scary, creepy: Perceptions of online behavioral advertising. In Proceedings of the eighth symposium on usable privacy and security , ACM, 4.

Dogruel, Leyla. 2019. Too much information!? Examining the impact of different levels of transparency on consumers’ evaluations of targeted advertising. Communication Research Reports 36 (5): 383–392.

Hamilton, Isobel Asher, and Dean Grace. Signal downloads skyrocketed 4,200% after WhatsApp announced it would force users to share personal data with Facebook. It’s top of both Google and Apple’s app stores. Business Insider . Retrieved February 1, 2021 from https://www.businessinsider.com/whatsapp-facebook-data-signal-download-telegram-encrypted-messaging-2021-1 .

Wilkinson, Daricia, Moses Namara, Karishma Patil, Lijie Guo, Apoorva Manda, and Bart Knijnenburg. 2021. The Pursuit of Transparency and Control: A Classification of Ad Explanations in Social Media .

Rainie, Lee, and Barry Wellman. 2012. Networked . Cambridge, MA: MIT Press.

Dunbar, Robin. 2011. How many “friends” can you really have? IEEE Spectrum 48 (6): 81–83.

Carr, Caleb T., and Rebecca A. Hayes. 2015. Social media: Defining, developing, and divining. Atlantic Journal of Communication 23 (1): 46–65.

Xu, Heng, Tamara Dinev, H. Smith, and Paul Hart. 2008. Examining the Formation of Individual’s Privacy Concerns: Toward an Integrative View.

Consolvo, Sunny, Ian E Smith, Tara Matthews, Anthony LaMarca, Jason Tabert, and Pauline Powledge. 2005. Location disclosure to social relations: Why, when, & what people want to share. 10.

Wiese, Jason, Patrick Gage Kelley, Lorrie Faith Cranor, Laura Dabbish, Jason I. Hong, and John Zimmerman. 2011. Are you close with me? Are you nearby?: Investigating social groups, closeness, and willingness to share. UbiComp 10.

Xu, Heng, and Sumeet Gupta. 2009. The effects of privacy concerns and personal innovativeness on potential and experienced customers’ adoption of location-based services. Electronic Markets 19 (2–3): 137–149.

Acquisti, A., and R. Gross. 2006. Imagined communities: Awareness, information sharing, and privacy on the Facebook. Privacy Enhancing Technologies : 36–58.

Debatin, Bernhard, Jennette P. Lovejoy, Ann-Kathrin Horn, and Brittany N. Hughes. 2009. Facebook and online privacy: Attitudes, behaviors, and unintended consequences. Journal of Computer-Mediated Communication 15 (1): 83–108.

Ellison, Nicole B., Jessica Vitak, Charles Steinfield, Rebecca Gray, and Cliff Lampe. 2011. Negotiating privacy concerns and social capital needs in a social media environment. In Privacy Online: Perspectives on Privacy and Self-Disclosure in the Social Web , ed. S. Trepte and L. Reinecke, 19–32. Berlin: Springer.

Chapter   Google Scholar  

Tufekci, Z. 2008. Can You See Me Now? Audience and Disclosure Regulation in Online Social Network Sites . Retrieved January 29, 2021 from https://journals.sagepub.com/doi/abs/10.1177/0270467607311484 .

Ayalon, Oshrat and Eran Toch. 2013. Retrospective privacy: Managing longitudinal privacy in online social networks. In Proceedings of the Ninth Symposium on Usable Privacy and Security – SOUPS ’13 , ACM Press, 1.

Meeder, Brendan, Jennifer Tam, Patrick Gage Kelley, and Lorrie Faith Cranor. 2010. RT @IWantPrivacy: Widespread Violation of Privacy Settings in the Twitter Social Network . 12.

Padyab, Ali, and Tero Pã. Facebook Users Attitudes towards Secondary Use of Personal Information . 20.

van der Schyff, Karl, Stephen Flowerday, and Steven Furnell. 2020. Duplicitous social media and data surveillance: An evaluation of privacy risk. Computers & Security 94: 101822.

Symeonidis, Iraklis, Gergely Biczók, Fatemeh Shirazi, Cristina Pérez-Solà, Jessica Schroers, and Bart Preneel. 2018. Collateral damage of Facebook third-party applications: A comprehensive study. Computers & Security 77: 179–208.

Binder, Jens, Andrew Howes, and Alistair Sutcliffe. 2009. The problem of conflicting social spheres: Effects of network structure on experienced tension in social network sites. In Proceedings of the 27th international conference on Human factors in computing systems – CHI 09 , ACM Press, 965.

Marwick, Alice E., and Danah Boyd. 2011. I tweet honestly, I tweet passionately: Twitter users, context collapse, and the imagined audience. New Media & Society 13 (1): 114–133.

Sibona, Christopher. 2014. Unfriending on Facebook: Context collapse and unfriending behaviors. In 2014 47th Hawaii International Conference on System Sciences , 1676–1685.

Boyd, Danah Michele. 2004. Friendster and publicly articulated social networking. In Extended Abstracts of the 2004 Conference on Human Factors and Computing Systems – CHI ’04 , ACM Press, 1279.

Brzozowski, Michael J., Tad Hogg, and Gabor Szabo. 2008. Friends and foes: Ideological social networking. In Proceeding of the Twenty-Sixth Annual CHI Conference on Human Factors in Computing Systems – CHI ’08 , ACM Press, 817.

Boyd, Danah. 2006. Friends, Friendsters, and MySpace Top 8: Writing community into being on social network sites. First Monday .

Vitak, Jessica, Cliff Lampe, Rebecca Gray, and Nicole B Ellison. “Why won’t you be my Facebook friend?”: Strategies for Managing Context Collapse in the Workplace . 3.

Dennen, Vanessa P., Stacey A. Rutledge, Lauren M. Bagdy, Jerrica T. Rowlett, Shannon Burnick, and Sarah Joyce. 2017. Context collapse and student social media networks: Where life and high school collide. In Proceedings of the 8th International Conference on Social Media & Society - #SMSociety17 , ACM Press, 1–5.

Pike, Jacqueline C., Patrick J. Bateman, and Brian S. Butler. 2018. Information from social networking sites: Context collapse and ambiguity in the hiring process. Information Systems Journal 28 (4): 729–758.

Heussner, Ki Mae and Dalia Fahmy. Teacher loses job after commenting about students, parents on Facebook. ABC News . Retrieved November 19, 2020 from https://abcnews.go.com/Technology/facebook-firing-teacher-loses-job-commenting-students-parents/story?id=11437248 .

Torba, Andrew. 2019. High school teacher fired for tweets criticizing illegal immigration. Gab News . Retrieved November 19, 2020 from https://news.gab.com/2019/09/16/high-school-teacher-fired-for-tweets-criticizing-illegal-immigration/ .

Hall, Gaynor, and Courtney Gousman. 2020. Suburban teacher’s social media post sparks outrage, internal investigation | WGN-TV. WGNTV . Retrieved November 19, 2020 from https://wgntv.com/news/chicago-news/suburban-teachers-social-media-post-sparks-outrage-internal-investigation/ .

Davis, Jenny L., and Nathan Jurgenson. 2014. Context collapse: Theorizing context collusions and collisions. Information, Communication & Society 17 (4): 476–485.

Kaul, Asha, and Vidhi Chaudhri. 2018. Do celebrities have it all? Context collapse and the networked publics. Journal of Human Values 24 (1): 1–10.

Donnelly, Erin. 2019. Kim Kardashian mom-shamed over photo of North staring at a phone: “Give her a book.” Yahoo! Entertainment . Retrieved April 11, 2021 from https://www.yahoo.com/entertainment/kim-kardashian-mom-shamed-north-west-phone-book-151126429.html .

Sutton, Jeannette, Leysia Palen, and Irina Shklovski. 2008. Backchannels on the Front Lines: Emergent Uses of Social Media in the 2007 Southern California Wildfires . 9.

Litt, Eden. 2012. Knock, knock. Who’s there? The imagined audience. Journal of Broadcasting & Electronic Media 56 (3): 330–345.

Litt, Eden, and Eszter Hargittai. 2016. The imagined audience on social network sites. Social Media + Society 2 (1): 2056305116633482.

Li, N., and G. Chen. 2010. Sharing location in online social networks. IEEE Network 24 (5): 20–25.

Stutzman, Fred, and Jacob Kramer-Duffield. 2010. Friends only: Examining a privacy-enhancing behavior in Facebook. In Proceedings of the 28th international conference on Human factors in computing systems – CHI ’10 , ACM Press, 1553.

Jung, Yumi, and Emilee Rader. 2016. The imagined audience and privacy concern on Facebook: Differences between producers and consumers. Social Media + Society 2 (2): 2056305116644615.

Vitak, Jessica. 2015. Balancing Audience and Privacy Tensions on Social Network Sites . 20.

Oolo, Egle, and Andra Siibak. 2013. Performing for one’s imagined audience: Social steganography and other privacy strategies of Estonian teens on networked publics. Institute of Journalism and Communication, University of Tartu, Tartu, Estonia 7: 1.

Stutzman, Fred, and Woodrow Hartzog. 2012. Boundary Regulation in Social Media . 10.

Pounders, Kathrynn, Christine M. Kowalczyk, and Kirsten Stowers. 2016. Insight into the motivation of selfie postings: Impression management and self-esteem. European Journal of Marketing 50 (9/10): 1879–1892.

Krämer, Nicole C., and Stephan Winter. 2008. Impression Management 2.0: The relationship of self-esteem, extraversion, self-efficacy, and self-presentation within social networking sites. Journal of Media Psychology 20 (3): 106–116.

Duguay, Stefanie. 2016. “He has a way gayer Facebook than I do”: Investigating sexual identity disclosure and context collapse on a social networking site. New Media & Society 18 (6): 891–907.

Tang, Karen P., Jialiu Lin, Jason I. Hong, Daniel P. Siewiorek, and Norman Sadeh. 2010. Rethinking location sharing: Exploring the implications of social-driven vs. purpose-driven location sharing. In Proceedings of the 12th ACM International Conference on Ubiquitous Computing , ACM, 85–94.

Back, Mitja D., Juliane M. Stopfer, Simine Vazire, et al. 2010. Facebook profiles reflect actual personality, not self-idealization. Psychological Science 21 (3): 372–374.

Choi, Tae Rang, and Yongjun Sung. 2018. Instagram versus Snapchat: Self-expression and privacy concern on social media. Telematics and Informatics 35 (8): 2289–2298.

Lindqvist, Janne, Justin Cranshaw, Jason Wiese, Jason Hong, and John Zimmerman. 2011. I’m the mayor of my house: Examining why people use foursquare – a social-driven location sharing application. In Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems – CHI ’11 , ACM Press, 2409.

Page, Xinru, Bart P. Knijnenburg, and Alfred Kobsa. 2013. What a tangled web we weave: Lying backfires in location-sharing social media. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work – CSCW ’13 , ACM Press, 273.

Hogg, Tad, and D Wilkinson. 2008. Multiple Relationship Types in Online Communities and Social Networks . 6.

Page, Xinru, Reza Ghaiumy Anaraky, Bart P. Knijnenburg, and Pamela J. Wisniewski. 2019. Pragmatic tool vs. relational hindrance: Exploring why some social media users avoid privacy features. In Proceedings of the ACM on Human-Computer Interaction 3, CSCW: 1–23.

Fono, D., and K. Raynes-Goldie. 2006. Hyperfriends and beyond: Friendship and social norms on Live Journal. Internet Research Annual .

Zinoviev, Dmitry, and Vy Duong. 2009. Toward understanding friendship in online social networks. arXiv:0902.4658 [cs] .

Smith, Hilary, Yvonne Rogers, and Mark Brady. 2003. Managing one’s social network: Does age make a difference. In Proceedings of the Interact 2003, IOS Press, 551–558.

Ehrlich, Kate, and N. Shami. 2010. Microblogging inside and outside the workplace. Proceedings of the International AAAI Conference on Web and Social Media 4: 1.

Page, Xinru, Alfred Kobsa, and Bart P. Knijnenburg. 2012. Don’t disturb my circles! Boundary preservation is at the center of location-sharing concerns. In Proceedings of the Sixth International AAAI Conference on Weblogs and Social Media , 266–273.

Iachello, Giovanni, and Jason Hong. 2007. End-user privacy in human-computer interaction. Foundations and Trends in Human-Computer Interaction 1 (1): 1–137.

Article   MATH   Google Scholar  

Bentley, Frank R., and Crysta J. Metcalf. 2008. Location and activity sharing in everyday mobile communication. In Proceeding of the Twenty-Sixth Annual CHI Conference Extended Abstracts on Human Factors in Computing Systems – CHI ’08 , ACM Press, 2453.

Tsai, Janice Y., Patrick Gage Kelley, Lorrie Faith Cranor, and Norman Sadeh. Location-Sharing Technologies: Privacy Risks and Controls . 34.

Friedland, Gerald, and Robin Sommer. 2010. Cybercasing the Joint: On the Privacy Implications of Geo-Tagging . 6.

Stefanidis, Anthony, Andrew Crooks, and Jacek Radzikowski. 2011. Harvesting ambient geospatial information from social media feeds.

Awad, Naveen Farag, and M.S. Krishnan. 2006. The personalization privacy paradox: An empirical evaluation of information transparency and the willingness to be profiled online for personalization. MIS Quarterly 30 (1): 13–28.

Chen, Xi, and Shuo Shi. 2009. A literature review of privacy research on social network sites. In 2009 International Conference on Multimedia Information Networking and Security , IEEE, 93–97.

Gerber, Nina, Paul Gerber, and Melanie Volkamer. 2018. Explaining the privacy paradox: A systematic review of literature investigating privacy attitude and behavior. Computers & Security 77: 226–261.

Houghton, David J., and Adam N. Joinson. 2010. Privacy, social network sites, and social relations. Journal of Technology in Human Services 28 (1–2): 74–94.

Pavlou, Paul A. 2011. State of the information privacy literature: Where are we now and where should we go. MIS Quarterly 35 (4): 977–988.

Xu, Feng, Katina Michael, and Xi Chen. 2013. Factors affecting privacy disclosure on social network sites: An integrated model. Electronic Commerce Research 13 (2): 151–168.

Xu, Heng, Rachida Parks, Chao-Hsien Chu, and Xiaolong Luke Zhang. 2010. Information disclosure and online social networks: From the case of Facebook news feed controversy to a theoretical understanding. AMCIS , Citeseer, 503.

Dinev, Tamara, Massimo Bellotto, Paul Hart, Vincenzo Russo, Ilaria Serra, and Christian Colautti. 2006. Privacy calculus model in e-commerce – a study of Italy and the United States. European Journal of Information Systems 15 (4): 389–402.

Selten, Reinhard. 1990. Bounded rationality. Journal of Institutional and Theoretical Economics (JITE)/Zeitschrift für die gesamte Staatswissenschaft 146 (4): 649–658.

Knijnenburg, Bart P., Elaine M. Raybourn, David Cherry, Daricia Wilkinson, Saadhika Sivakumar, and Henry Sloan. 2017. Death to the privacy calculus? In Proceedings of the 2017 Networked Privacy Workshop at CSCW , Social Science Research Network.

Dienlin, Tobias, and Sabine Trepte. Is the privacy paradox a relic of the past? An in-depth analysis of privacy attitudes and privacy behaviors. European Journal of Social Psychology 45 (3): 285–297.

Kokolakis, Spyros. 2017. Privacy attitudes and privacy behaviour: A review of current research on the privacy paradox phenomenon. Computers & Security 64: 122–134.

Palen, Leysia, and Paul Dourish. 2003. Unpacking “Privacy” for a networked world. NEW HORIZONS 5: 8.

Petronio, Sandra. 1991. Communication boundary management: A theoretical model of managing disclosure of private information between marital couples. Communication Theory 1 (4): 311–335.

Nissenbaum, Helen. 2010. Privacy in Context . Stanford University Press.

Wisniewski, Pamela, Heather Lipford, and David Wilson. 2012. Fighting for my space: Coping mechanisms for SNS boundary regulation. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems – CHI ’12 , ACM Press, 609.

Petronio, S. 2010. Communication Privacy Management Theory: What Do We Know About Family Privacy Regulation? Journal of Family Theory & Review 2 (3): 175–196.

Clemens, Chris, David Atkin, and Archana Krishnan. 2015. The influence of biological and personality traits on gratifications obtained through online dating websites. Computers in Human Behavior 49: 120–129.

Vitak, Jessica, and Nicole B. Ellison. 2013. ‘There’s a network out there you might as well tap’: Exploring the benefits of and barriers to exchanging informational and support-based resources on Facebook. New Media & Society 15 (2): 243–259.

Fogel, Joshua, and Elham Nehmad. 2009. Internet social network communities: Risk taking, trust, and privacy concerns. Computers in Human Behavior 25 (1): 153–160.

Agger, Ben. 2015. Oversharing: Presentations of Self in the Internet Age . Routledge.

Book   Google Scholar  

Krämer, Nicole C., and Nina Haferkamp. 2011. Online self-presentation: Balancing privacy concerns and impression construction on social networking sites. In Privacy Online: Perspectives on Privacy and Self-Disclosure in the Social Web , ed. S. Trepte and L. Reinecke, 127–141. Berlin: Springer.

The University of Central Florida, Wisniewski Pamela, A.K.M. Najmul Islam, et al. 2016. Framing and measuring multi-dimensional interpersonal privacy preferences of social networking site users. Communications of the Association for Information Systems 38: 235–258.

Pamela Wisniewski, A.K.M. Najmul Islam, Bart P. Knijnenburg, and Sameer Patil. 2015. Give social network users the privacy they want. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing , ACM, 1427–1441.

Bouvier, Gwen. 2015. What is a discourse approach to Twitter, Facebook, YouTube and other social media: Connecting with other academic fields? Journal of Multicultural Discourses 10 (2): 149–162.

Altman, Irwin. 1975. The Environment and Social Behavior: Privacy, Personal Space, Territory, and Crowding . Monterey, CA: Brooks/Cole Publishing Company.

Paasonen, Susanna, Ben Light, and Kylie Jarrett. 2019. The dick pic: Harassment, curation, and desire. Social Media + Society 5 (2): 2056305119826126.

Karr-Wisniewski, Pamela, David Wilson, and Heather Richter-Lipford. 2011. A new social order: Mechanisms for social network site boundary regulation. In Americas Conference on Information Systems, AMCIS .

Page, Xinru, Pamela Wisniewski, Bart P. Knijnenburg, and Moses Namara. 2018. Social media’s have-nots: An era of social disenfranchisement. Internet Research 28: 5.

Sleeper, Manya, Rebecca Balebako, Sauvik Das, Amber Lynn McConahy, Jason Wiese, and Lorrie Faith Cranor. 2013. The post that wasn’t: Exploring self-censorship on Facebook. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work , ACM, 793–802.

Das, Sauvik, and Adam Kramer. 2013. Self-censorship on Facebook. Proceedings of the International AAAI Conference on Web and Social Media 7: 1.

Lampinen, Airi, Vilma Lehtinen, Asko Lehmuskallio, and Sakari Tamminen. 2011. We’re in it together: Interpersonal management of disclosure in social network services. In Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems – CHI ’11 , ACM Press, 3217.

Buchanan, Tom, Carina Paine, Adam N. Joinson, and Ulf-Dietrich Reips. 2007. Development of measures of online privacy concern and protection for use on the internet. Journal of the American Society for Information Science & Technology 58 (2): 157–165.

Malhotra, Naresh K., Sung S. Kim, and James Agarwal. 2004. Internet Users’ Information Privacy Concerns (IUIPC): The construct, the scale, and a causal model. Information Systems Research 15 (4): 336–355.

Westin, Alan. 1991. Harris-Equifax Consumer Privacy Survey . Atlanta, GA: Equifax Inc.

Page, Xinru, Reza Ghaiumy Anaraky, and Bart P. Knijnenburg. 2019. How communication style shapes relationship boundary regulation and social media adoption. In Proceedings of the 10th International Conference on Social Media and Society , 126–135.

Page, Xinru, Bart P. Knijnenburg, and Alfred Kobsa. 2013. FYI: Communication style preferences underlie differences in location-sharing adoption and usage. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing , ACM, 153–162.

Page, Xinru Woo. 2014. Factors That Influence Adoption and Use of Location-Sharing Social Media . Irvine: University of California.

Solove, Daniel. 2008. Understanding Privacy . Cambridge, MA: Harvard University Press.

Knijnenburg, B.P., Alfred Kobsa, and Hongxia Jin. 2013. Dimensionality of information disclosure behavior. International Journal of Human-Computer Studies 71 (12): 1144–1162.

Wilkinson, Daricia, Paritosh Bahirat, Moses Namara, et al. 2019. Privacy at a glance: Exploring the effectiveness of screensavers to improve privacy awareness. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI). Under Review , ACM.

Joinson, Adam N., Ulf-Dietrich Reips, Tom Buchanan, and Carina B. Paine Schofield. 2010. Privacy, trust, and self-disclosure online. Human–Computer Interaction 25 (1): 1–24.

Nissenbaum, Helen. 2004. Privacy as contextual integrity. Washington Law Review 79: 119–157.

Ramokapane, Kopo M., Gaurav Misra, Jose M. Such, and Sören Preibusch. 2021. Truth or dare: Understanding and predicting how users lie and provide untruthful data online.

Barkhuus, Louise. 2012. The mismeasurement of privacy: Using contextual integrity to reconsider privacy in HCI. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems , ACM, 367–376.

Cho, Hichang, Bart Knijnenburg, Alfred Kobsa, and Yao Li. 2018. Collective privacy management in social media: A cross-cultural validation. ACM Transactions on Computer-Human Interaction 25 (3): 17:1–17:33.

Hoy, Mariea Grubbs, and George Milne. 2010. Gender differences in privacy-related measures for young adult Facebook users. Journal of Interactive Advertising 10 (2): 28–45.

Li, Yao, Bart P. Knijnenburg, Alfred Kobsa, and M-H. Carolyn Nguyen. 2015. Cross-cultural privacy prediction. In Workshop “Privacy Personas and Segmentation”, 11th Symposium On Usable Privacy and Security (SOUPS) .

Sheehan, Kim Bartel. 1999. An investigation of gender differences in on-line privacy concerns and resultant behaviors. Journal of Interactive Marketing 13 (4): 24–38.

Wyatt, Sally M.E. 2003. Non-users also matter: The construction of users and non-users of the Internet. Now Users Matter: The Co-construction of Users and Technology : 67–79.

2018. Facebook and Instagram introduce time limit tool. BBC News . Retrieved February 10, 2021 from https://www.bbc.com/news/newsbeat-45030712 .

Andalibi, Nazanin. 2020. Disclosure, privacy, and stigma on social media: Examining non-disclosure of distressing experiences. ACM Transactions on Computer-Human Interaction (TOCHI) 27 (3): 1–43.

Gibbs, Martin, James Meese, Michael Arnold, Bjorn Nansen, and Marcus Carter. 2015. #Funeral and Instagram: Death, social media, and platform vernacular. Information, Communication & Society 18 (3): 255–268.

Download references

Author information

Authors and Affiliations

Brigham Young University, Provo, UT, USA

Xinru Page & Sara Berrios

Department of Computer Science, Clemson University, Clemson, SC, USA

Daricia Wilkinson

Department of Computer Science, University of Central Florida, Orlando, FL, USA

Pamela J. Wisniewski

Corresponding author

Correspondence to Xinru Page.

Editor information

Editors and Affiliations

Clemson University, Clemson, SC, USA

Bart P. Knijnenburg

University of Central Florida, Orlando, FL, USA

Pamela Wisniewski

University of North Carolina at Charlotte, Charlotte, NC, USA

Heather Richter Lipford

School of Social and Behavioral Sciences, Arizona State University, Tempe, AZ, USA

Nicholas Proferes

Bridgewater Associates, Westport, CT, USA

Jennifer Romano

Rights and permissions

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Copyright information

© 2022 The Author(s)

About this chapter

Cite this chapter.

Page, X., Berrios, S., Wilkinson, D., Wisniewski, P.J. (2022). Social Media and Privacy. In: Knijnenburg, B.P., Page, X., Wisniewski, P., Lipford, H.R., Proferes, N., Romano, J. (eds) Modern Socio-Technical Perspectives on Privacy. Springer, Cham. https://doi.org/10.1007/978-3-030-82786-1_7

DOI: https://doi.org/10.1007/978-3-030-82786-1_7

Published: 09 February 2022

Publisher Name: Springer, Cham

Print ISBN: 978-3-030-82785-4

Online ISBN: 978-3-030-82786-1

eBook Packages: Computer Science; Computer Science (R0)


Social Media Privacy: What Are The Risks? (How To Stay Safe)

Are you unknowingly giving scammers or predators your personal information? Learn how to identify social media privacy risks and secure your accounts.

Jory MacKay

Aura Cybersecurity Editor

Jory MacKay is a writer and award-winning editor with over a decade of experience writing for online and print publications. He has a bachelor's degree in journalism from the University of Victoria and a passion for helping people identify and avoid fraud.

Alina Benny

Alina Benny is an Aura authority on internet security, identity theft, and fraud. She holds a bachelor's degree in Electronics Engineering from the Cochin University of Science and Technology and has nearly a decade of experience in content research. Twitter: @heyabenny

Aura’s digital security app keeps your family safe from scams, fraud, and identity theft.

What Can Scammers Do With Your Social Media Profile and Posts?

According to a recent survey, 81% of Americans say they’re concerned about their privacy on social networking sites [ * ]. Yet the privacy risks of using social media are a threat most users choose to ignore — until those risks become reality.

That’s what happened to families in Arizona when a local man used location data on Snapchat to stalk and spy on young girls in the area [ * ].

The scary truth:

Scammers can use the information you freely give out on social media — your posts, profile, and behavioral data — to spy on you, scam you out of money, or steal your identity.

Even worse, data protection issues and privacy loopholes mean that you (or your kids) are likely sharing personal data without your knowledge. But how much danger are you putting yourself in just by using social media? And is there a way to stay social and safe at the same time? 

In this guide, we’ll share the most common online privacy risks, and explain how to keep your sensitive information safe from cybercriminals who are searching your social media accounts.

Why Does Social Media Privacy Matter?

Social media privacy refers to the personal and sensitive information that people can find out about you from your accounts. This information can be purposefully shared (such as in public profiles and posts) or unknowingly shared (such as the data sites share with other companies and social media marketing agencies). 

But while most people are concerned about what companies know about them, the bigger danger is what scammers and fraudsters know — and how they can use that information. 

According to the Federal Trade Commission (FTC), one out of every four fraud victims was targeted on social media last year, leading to losses of $770 million [ * ].

Even with your account set to private, advertisers and scammers can gain access to your sensitive data in the form of: 

  • Profile information — such as your name, birthdate, and contact information.
  • Status updates — including personal life events, work and relationship status, and religious beliefs. 
  • Location data — such as your hometown information and geo check-ins. 
  • Personal interests — including hobbies and buying history.
  • Shared content — such as personal images and videos.
  • Posts from friends and family — anything someone posts about you can be found and used by advertisers, hackers, and fraudsters.  

What’s even more worrying is that some social media sites (like Facebook) collect user data about people who don’t even have an account [ * ]. These “shadow profiles” are typically used to target you with ads on other connected sites.

But what can happen if unscrupulous users gain access to your personal information? 

What Are the Most Common Social Media Privacy Issues?

  • Hacking and account takeovers
  • Social media phishing scams
  • Shared location data
  • Data mining leading to identity theft
  • Privacy “loopholes” that expose your data
  • Employers evaluating you based on your posts
  • Doxxing leading to emotional distress
  • Cyberbullying and online harassment
  • Romance scams on social media
  • Third-party apps with account access
  • Malware and viruses in messages
  • An excessive online footprint

The more information you share on social media, the more you put your identity, accounts, and finances at risk. Here are the most common social media privacy issues that you need to know about.

1. Hacking and account takeovers

Many people unknowingly post personal information that could give hackers clues to their passwords or security questions — for example, posting about your hometown, pets, elementary school, or extended family. 

Scammers either use this information to try to brute-force their way into your account or employ social engineering attacks to trick you into providing your password.

In many cases, scammers don’t even need to trick you into giving up your passwords or account information. Leaked social media account information sells on the Dark Web for as little as $25 [ * ].
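To make the risk above concrete, here is a minimal defensive sketch: scan a candidate password for facts an attacker could harvest from a public profile (pet names, hometown, birth year, school). The facts list below is a hypothetical example, not drawn from any real profile, and a real audit tool would also normalize leetspeak and check security-question answers.

```python
# Sketch: flag passwords built from facts visible on a public profile.
# The facts list is hypothetical; substitute details from your own posts.

def password_uses_public_facts(password: str, facts: list[str]) -> list[str]:
    """Return the publicly posted facts found inside a candidate password."""
    lowered = password.lower()
    return [fact for fact in facts if fact.lower() in lowered]

public_facts = ["Rex", "Springfield", "1989", "Lincoln Elementary"]

print(password_uses_public_facts("rex1989!", public_facts))    # ['Rex', '1989']
print(password_uses_public_facts("T7#kq9!zL2", public_facts))  # []
```

A password that surfaces any of these facts is exactly the kind of password a scammer's wordlist generator will guess first.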

📚 Related: How To Keep Your Kids & Teens Safe on Social Media →

2. Social media phishing scams

If your social media accounts aren’t set to private, you can receive messages from anyone — even scammers trying to get you to click on malicious links. Last year, 12% of all clicks to phishing websites originated on social media. 

Fraudsters also regularly use social media to run romance scams and investment fraud schemes. In the past few years, the brutally named “pig butchering scam” has run rampant on social media, costing victims over $10 billion. 

📚 Related: The Worst Social Media Scams of 2023 (and How To Avoid Them) →

3. Shared location data used by stalkers and predators

Many social media sites include location data by default — such as on photos or posts. This data can be used by stalkers, scammers, or even thieves to track your movement. 

📚 Related: How Can Someone Track Your Location? (And How To Stop Them)

4. Data mining leading to identity theft

Scammers need surprisingly little information to steal your identity. And often, the starting point for identity theft can be publicly available information on social media.

Scammers can use your name, address, or phone number to target you with phishing scams — or look up more sensitive information about you that’s for sale on the Dark Web. With just your main email address or phone number, scammers can find any leaked passwords, credit card numbers, or even your Social Security number (SSN). 
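One practical defense is to check whether a password has already surfaced in a known breach without ever transmitting the password itself. The Have I Been Pwned "Pwned Passwords" API uses a k-anonymity scheme: only the first five characters of the password's SHA-1 hash leave your machine, and the returned hash suffixes are matched locally. The sketch below shows just the hashing step; the network call appears only as a comment.

```python
import hashlib

def hibp_range_query_parts(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hash into the 5-character prefix sent to
    the Pwned Passwords range API and the suffix compared locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_query_parts("password")
# Only `prefix` is transmitted, e.g.:
#   GET https://api.pwnedpasswords.com/range/<prefix>
# If `suffix` appears among the returned suffixes, the password is breached.
print(prefix)  # 5BAA6
```

Because the server only ever sees a five-character hash prefix shared by hundreds of unrelated passwords, it cannot tell which password you checked.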

5. Privacy “loopholes” exposing your sensitive information

Social media companies regularly change their policies and features — and some of those changes can cause serious data privacy issues. 

For example, in some cases, posts you share privately with friends or in private groups can be shared publicly without your permission. And if your friends don’t follow the same stringent social media privacy settings that you do, this information could be accessed by anyone — even scammers and employers. 

📚 Related: The 11 Latest Facebook Scams You Didn't Know About (Until Now) →

6. Employers or recruiters evaluating you based on your posts

Your social media profiles may seem personal, but 70% of employers say they use social media to research candidates during the hiring process [ * ]. Even worse, 57% say they found content that caused them not to hire a candidate. 

7. Doxxing leading to emotional distress or physical harm

“Doxxing” occurs when hackers or bad actors purposefully share personal information about you on the internet in order to cause harm — for example, someone sharing your phone number or home address so that others will harass you. The more information about you that is publicly available, the more likely you could be “doxxed” if targeted by hackers.

8. Cyberbullying and online harassment

For kids, teens, and even adults, social media can be a source of bullying and emotional and psychological attacks. A public account gives cyberbullies easy access to target you with messages and malicious posts — as well as access to your personal information. 

📚 Related: How To Prevent Cyberbullying →

9. Romance scams on social media

Fraudsters create fake social media profiles to try to lure you into fake online relationships — and then ask you for cash, gift cards, or personal information. Romance scammers on social media can use your personal information to craft a scam tailored specifically to you.

10. Third-party apps that can access your other accounts

Many people use social media logins (such as “Log in with Facebook”). But while these services are convenient, they can expose your personal information to companies or apps that might not have the best digital security in place. 

11. Malware and viruses in messages or posts

If your account is set to public, scammers may send you malicious links via direct messages. These messages often seek to create a sense of urgency by using one of these methods:

  • “Is this you in this photo/video?” These messages show a video or photo preview to grab your curiosity and persuade you to click on the link. 
  • Fake copyright infringement notices. Scammers send messages like this over Instagram to scare you into clicking on their links. 
  • Rewards or giveaways from major companies. Fraudsters pose as Walmart, Apple, Microsoft, Amazon, or other big companies and offer free prizes — if you click on their links.

If you click on any links in these types of messages, you’ll either infect your device with malware or be taken to what looks like a login page for the social media site. But in reality, it’s a fake website designed to steal your username and password.
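One reason these fake login pages succeed is that lookalike hostnames (such as facebook.com.account-verify.example) read as legitimate at a glance. As a rough illustration, not a complete defense, a link’s hostname can be sanity-checked against a short allowlist; the trusted-domain list below is hypothetical:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of sites you actually log in to.
TRUSTED_DOMAINS = {"facebook.com", "instagram.com", "twitter.com"}

def is_trusted_link(url: str) -> bool:
    """Return True only if the URL's hostname is a trusted domain or a
    subdomain of one. Lookalike hosts that merely *start* with a real
    domain name fail the check."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(is_trusted_link("https://www.facebook.com/login"))              # True
print(is_trusted_link("http://facebook.com.account-verify.example"))  # False
```

Real phishing defenses also inspect the full URL, certificates, and reputation data; checking the hostname suffix is only a first pass — but it is exactly the check that lookalike domains are designed to make your eyes skip.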

12. An excessive online footprint that data brokers can access

Finally, all of your activity on social media contributes to your online footprint — the trove of data that advertisers use to target you with ads. Unfortunately, in many cases, anyone can purchase this information from data brokers, putting you at risk of scams, or an onslaught of spam calls, texts, and emails. 

How To Know if Your Personal Information Was Leaked in a Social Media Data Breach

Unfortunately, there’s only so much that you can do as an individual to protect your private information on social media. 

Some of the biggest risks are outside of your control, like if a social media site is hacked. In the last four years, Twitch, LinkedIn, Facebook, Twitter, and Quora have all been hacked — with millions of passwords and other account information ending up on the Dark Web.

The easiest way to see if your data is available to hackers is to use Aura’s free Dark Web scanner. Aura scans known Dark Web forums and sites for your email address to alert you of compromised accounts. Find out if your social media accounts are at risk →
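Dark Web scanners generally work by matching your credentials against corpora of breached data. As an independent illustration (this is not Aura’s method), the free Pwned Passwords API lets you check a password against known breaches using k-anonymity: only the first five characters of the password’s SHA-1 hash ever leave your machine.

```python
import hashlib
import urllib.request

def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the SHA-1 hex digest into a 5-character prefix (the only
    part sent over the network) and the remaining suffix."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password: str) -> int:
    """Query the Pwned Passwords range API and return how many times
    the password appears in known breaches (needs network access)."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0  # not found in any known breach

print(sha1_prefix_suffix("password")[0])  # 5BAA6
```

The API returns every breached hash sharing the 5-character prefix, so the server never learns which password you checked — a useful pattern whenever you need to query a sensitive value against a remote database.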

How To Update Your Privacy Settings on Every Social Media Platform

It’s important to keep your data private on social media. Here’s how to update your privacy settings to protect yourself, your family, and your personal information.

Facebook privacy settings: What you need to know

Facebook has been in the hot seat for quite some time regarding privacy laws and leaks, including the Cambridge Analytica scandal and the 2018 data breach that impacted 530 million users [ * ]. 

How to update your privacy settings on Facebook

  • First, use Facebook’s Privacy Checkup tool to understand what’s shared on your profile and who can view it. 
  • Then, adjust privacy settings under “Settings” and “Privacy” to only allow friends to view personal profile information, including your email address, phone number, and location.
  • In your privacy settings, use the “Activity Log” to review all of your posts and photos, including any in which you’ve been tagged, and remove anything you don’t want people to see.
  • Under your privacy settings you can also limit who can send you friend requests, find your profile, and search for you using your email or phone number. 
  • Finally, comb through your friends list and delete any unrecognized Facebook users who could view your profile. 

📚 Related: How To Recover a Hacked Facebook Account →

Instagram privacy settings: What you need to know

A public Instagram profile gives people access to your name, location, and contact information (if business settings are enabled). Ireland’s data privacy regulator recently levied a $402 million fine against Instagram [ * ] over a loophole that allowed children to open public business accounts.

How to update your privacy settings on Instagram

  • Under “Settings” and “Account Privacy,” move your Instagram profile visibility from public to private so that only friends can see your profile.
  • Adjust your Instagram story settings under “Privacy” and “Story” to permit only close friends to view your temporary posts, as these often feature risky location identifiers.
  • As with Facebook, comb through your existing friends list and remove any unknown or unrecognized users that could pose a privacy risk.

📚 Related: The 10 Biggest Scams Happening on Instagram Right Now →

Twitter privacy settings: What you need to know

Few Twitter users understand the privacy implications of their feed. According to a Pew survey, 65% of users believed their Twitter accounts were set to private [ * ]. But in reality, 92% of those people actually had their accounts set to public. 

How to update your privacy settings on Twitter

  • Enable “Protected Tweets” so that your tweets can be viewed only by your approved followers — not by non-followers or by anyone who finds your profile through a search engine like Google.
  • Adjust visibility settings to a private profile to only allow your followers or people you follow to view your information. 
  • Turn location settings off when posting a tweet.

📚 Related: How To Properly Set Up Your iPhone's Privacy Settings →

TikTok privacy settings: What you need to know

TikTok has quickly become one of the most used social media platforms. While TikTok doesn’t offer as many opportunities to accidentally share private information, it’s still important to update your privacy settings to help prevent phishing attacks and other scams. 

How to update your privacy settings on TikTok

  • Access privacy settings under the “Settings” tab, and toggle the “Private Account” option to “On” (the option will turn green). This allows only approved followers to view your profile and content.
  • Under the “Privacy” settings, turn off “Suggest your account to others” to limit unwanted users from discovering you. 
  • In the same settings menu: set comments, mentions and tags, and direct messages to only “Followers you follow back and people you sent messages to.” 

📚 Related: TikTok Parental Controls: How To (Safely) Set It Up for Kids →

LinkedIn privacy settings: What you need to know

LinkedIn is a powerful tool for building your professional network. But it can also expose some of your most sensitive information — including your name, occupation, and contact details. 

The Federal Bureau of Investigation (FBI) recently issued a warning about a LinkedIn threat in which criminals create convincing fake profiles in order to promote fraudulent cryptocurrency investments [ * ].

How to update your privacy settings on LinkedIn

  • Under “Visibility” change your “Profile viewing options” to “Private Mode.” Then, restrict who can find your profile when searching your email address, phone number, or through services outside of LinkedIn (such as search engines). 

  • In the same section, edit “Email visibility” to make your personal email information viewable to either just you or only 1st-degree connections. 
  • Under “Account Preferences,” unsync your account with your phone, Gmail, or Outlook contacts.
  • Turn off “InMail messages” under “Data Privacy” > “Other Applications” > “Permitted Applications.” This prevents messages from LinkedIn members who are not part of your connections.

📚 Related: How To Spot a LinkedIn Job Scam (11 Warning Signs) →

Snapchat privacy settings: What you need to know

Snapchat is known for its temporary messages. But scammers can still target you — even if your messages disappear. The biggest privacy threats on Snapchat are your viewing settings and location tracking. If left in default settings, these can be used by scammers, predators, and other people looking to harass you online. 

How to update your privacy settings on Snapchat

  • Under “Privacy Controls” set “See My Location” to “Only Me.” Also, ensure that you remain in “Ghost Mode” while on the map feature so as to never share location details.
  • Assess your current “Contact Me” settings and ensure that only added friends and contacts can get in touch with you through Snapchat. 
  • Adjust your “My Story” settings to only allow your Snapchat story to be viewed by friends or a customized selection of users.

📚 Related: Don’t Fall For These 7 Sordid Snapchat Scams →

Google privacy settings: What you need to know

You might not think of your Google account as a social media service. But platforms like Google Maps, Hangouts, YouTube, and Gmail offer similar functionality (and privacy concerns) to other social media apps. 

Even more dangerous are all of the third-party apps that you log in to using your Google account. If any of these accounts are compromised, your personal information could be at risk. 

How to update your privacy settings on Google

  • Use the Google Privacy Check-Up Tool to understand where any privacy risks exist in your Google account, and follow the provided recommendations. Pay special attention to the Google Photos options as well as your Google profile settings.
  • Under “History Settings,” pause location history to prevent ongoing tracking of your movements.
  • Review details provided under “Data from apps and services you use” in your account settings, and remove access from third-party apps that you do not use or recognize. 

Microsoft privacy settings: What you need to know

Similar to Google, Microsoft accounts are often used to log in to third-party applications, like Skype. Cybercriminals can gain access to your Microsoft login details through these apps if you're not careful.

How to update your privacy settings on Microsoft:

  • Enable two-step verification to increase login security and reduce the chance of unapproved account access. 
  • Assess apps and services under your “Account Settings” and remove any unauthorized or unused third-party data access.

📚 Related: Is Norton Privacy Monitor Assistant Worth It? →

Apple privacy settings: What you need to know

Apple iCloud accounts are often used by iPhone and Mac users to back up and share private files, including images, videos, locations, and contact details. If exposed, these details can leave Apple users open to hacking and malware, and may even create personal security and safety risks.

How to update your privacy settings on Apple:

  • Use the “App Privacy Report” to understand individual permissions of apps in terms of data access and usage.
  • Access “Location Settings” and remove location tracking permissions for unwanted or questionable applications.
  • Enable six-digit passcodes, Touch ID, or Face ID to prevent any unauthorized account access.
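The third tip above is simple combinatorics: each extra digit multiplies the number of possible passcodes by ten, and an alphanumeric code multiplies the size of the alphabet itself. A quick sketch of the arithmetic:

```python
def passcode_combinations(length: int, alphabet_size: int = 10) -> int:
    """Number of distinct passcodes of a given length and alphabet."""
    return alphabet_size ** length

print(passcode_combinations(4))                    # 10000
print(passcode_combinations(6))                    # 1000000
print(passcode_combinations(6, alphabet_size=62))  # 56800235584 (a-z, A-Z, 0-9)
```

Moving from four digits to six makes a brute-force attempt a hundred times harder; moving to six alphanumeric characters makes it tens of thousands of times harder still.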

Social Media Privacy Checklist:

Ultimately, staying private on social media isn’t just about changing your settings. You also need to change how you use social media. 

Here are 10 tips to help you keep your social media profiles secure and private:

  • Set strong passwords, and enable two-factor authentication (2FA) on your accounts whenever available.
  • Avoid using publicly known or available information as the answers to your password security questions.
  • Only provide social media platforms with the minimum amount of information requested to create an account.
  • Whenever possible, do not provide social media sites or third-party apps access to your email accounts or contacts.
  • Create a separate email account to use just for your social media profiles and third-party apps. With Aura, you can easily create and manage “throwaway” email accounts that give you access to services while protecting your main inbox. 
  • Review every social media site’s privacy policy before signing up and posting content.
  • Disable location sharing across all social media, and avoid using geotagged photos.
  • Review your current privacy settings and make sure your account has not been made public by default.
  • Learn to recognize the signs of online scams .
  • Consider signing up for an all-in-one digital security solution. Aura combines top-rated identity theft protection with 24/7 credit monitoring, Dark Web monitoring, digital security tools — such as antivirus software and a virtual private network (VPN) — and a $1 million insurance policy that covers you against eligible losses due to identity theft. Try Aura free for 14 days and see if it’s right for you!
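The first item on the checklist, strong passwords, is easy to automate. Here is a minimal sketch using Python’s cryptographically secure `secrets` module (the 16-character length and full printable character set are just reasonable defaults):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from letters, digits, and punctuation
    using the OS's cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(len(generate_password()))  # 16
```

A password manager achieves the same thing and also remembers the result for you; the underlying point is that strong passwords should be generated, not invented.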

The Bottom Line: Stay Social, Private, and Secure

Social media will always be a balancing act between privacy and promotion. The more public you make your personal information, the higher risk there is that you’ll be targeted by scammers and hackers. 

Keep your accounts safe by using strong privacy settings, and protect yourself from scammers with Aura’s comprehensive identity theft protection service. 

Stay safe on social media. Try Aura free for 14 days →


Why protecting privacy is a losing game today—and how to change the game

By Cameron F. Kerry, Ann R. and Andrew H. Tisch Distinguished Visiting Fellow, Governance Studies, Center for Technology Innovation (@cam_kerry)

July 12, 2018

Recent congressional hearings and data breaches have prompted more legislators and business leaders to say the time for broad federal privacy legislation has come. Cameron Kerry presents the case for adoption of a baseline framework to protect consumer privacy in the U.S.

Kerry explores a growing gap between existing laws and an information Big Bang that is eroding trust. He suggests that recent privacy bills have not been ambitious enough, and points to the Obama administration’s Consumer Privacy Bill of Rights as a blueprint for future legislation. Kerry considers ways to improve that proposal, including an overarching “golden rule of privacy” to ensure people can trust that data about them is handled in ways consistent with their interests and the circumstances in which it was collected.

Table of Contents

  • Introduction: Game change?
  • How current law is falling behind
  • Shaping laws capable of keeping up

Introduction: Game change?

There is a classic episode of the show “I Love Lucy” in which Lucy goes to work wrapping candies on an assembly line . The line keeps speeding up with the candies coming closer together and, as they keep getting farther and farther behind, Lucy and her sidekick Ethel scramble harder and harder to keep up. “I think we’re fighting a losing game,” Lucy says.

This is where we are with data privacy in America today. More and more data about each of us is being generated faster and faster from more and more devices, and we can’t keep up. It’s a losing game both for individuals and for our legal system. If we don’t change the rules of the game soon, it will turn into a losing game for our economy and society.

The Cambridge Analytica drama has been the latest in a series of eruptions that have caught peoples’ attention in ways that a steady stream of data breaches and misuses of data have not.

The first of these shocks was the Snowden revelations in 2013. These made for long-running and headline-grabbing stories that shined light on the amount of information about us that can end up in unexpected places. The disclosures also raised awareness of how much can be learned from such data (“we kill people based on metadata,” former NSA and CIA Director Michael Hayden said ).

The aftershocks were felt not only by the government, but also by American companies, especially those whose names and logos showed up in Snowden news stories. They faced suspicion from customers at home and market resistance from customers overseas. To rebuild trust, they pushed to disclose more about the volume of surveillance demands and for changes in surveillance laws. Apple, Microsoft, and Yahoo all engaged in public legal battles with the U.S. government.

Then came last year’s Equifax breach that compromised identity information of almost 146 million Americans. It was not bigger than some of the lengthy roster of data breaches that preceded it, but it hit harder because it rippled through the financial system and affected individual consumers who never did business with Equifax directly but nevertheless had to deal with the impact of its credit scores on economic life. For these people, the breach was another demonstration of how much important data about them moves around without their control, but with an impact on their lives.

Now the Cambridge Analytica stories have unleashed even more intense public attention, complete with live network TV cut-ins to Mark Zuckerberg’s congressional testimony. Not only were many of the people whose data was collected surprised that a company they never heard of got so much personal information, but the Cambridge Analytica story touches on all the controversies roiling around the role of social media in the cataclysm of the 2016 presidential election. Facebook estimates that Cambridge Analytica was able to leverage its “academic” research into data on some 87 million Americans (while before the 2016 election Cambridge Analytica’s CEO Alexander Nix boasted of having profiles with 5,000 data points on 220 million Americans). With over two billion Facebook users worldwide, a lot of people have a stake in this issue and, like the Snowden stories, it is getting intense attention around the globe, as demonstrated by Mark Zuckerberg taking his legislative testimony on the road to the European Parliament .

The Snowden stories forced substantive changes to surveillance with enactment of U.S. legislation curtailing telephone metadata collection and increased transparency and safeguards in intelligence collection. Will all the hearings and public attention on Equifax and Cambridge Analytica bring analogous changes to the commercial sector in America?

I certainly hope so. I led the Obama administration task force that developed the “ Consumer Privacy Bill of Rights ” issued by the White House in 2012 with support from both businesses and privacy advocates, and then drafted legislation to put this bill of rights into law. The legislative proposal issued after I left the government did not get much traction, so this initiative remains unfinished business.

The Cambridge Analytica stories have spawned fresh calls for some federal privacy legislation from members of Congress in both parties, editorial boards, and commentators. With their marquee Zuckerberg hearings behind them, senators and congressmen are moving on to think about what to do next. Some have already introduced bills and others are thinking about what privacy proposals might look like. The op-eds and Twitter threads on what to do have flowed. Various groups in Washington have been convening to develop proposals for legislation.

This time, proposals may land on more fertile ground. The chair of the Senate Commerce Committee, John Thune (R-SD) said “many of my colleagues on both sides of the aisle have been willing to defer to tech companies’ efforts to regulate themselves, but this may be changing.” A number of companies have been increasingly open to a discussion of a basic federal privacy law. Most notably, Zuckerberg told CNN “I’m not sure we shouldn’t be regulated,” and Apple’s Tim Cook expressed his emphatic belief that self-regulation is no longer viable.

This is not just about damage control or accommodation to “techlash” and consumer frustration. For a while now, events have been changing the way that business interests view the prospect of federal privacy legislation. An increasing spread of state legislation on net neutrality, drones, educational technology, license plate readers, and other subjects, along with especially broad new legislation in California preempting a ballot initiative, has made the possibility of a single set of federal rules across all 50 states look attractive. For multinational companies that have spent two years gearing up for compliance with the new data protection law that has now taken effect in the EU, dealing with a comprehensive U.S. law no longer looks as daunting. And more companies are seeing value in a common baseline that can provide people with reassurance about how their data is handled and protected against outliers and outlaws.

This change in the corporate sector opens the possibility that these interests can converge with those of privacy advocates in comprehensive federal legislation that provides effective protections for consumers. Trade-offs to get consistent federal rules that preempt some strong state laws and remedies will be difficult, but with a strong enough federal baseline, action can be achievable.

How current law is falling behind

Snowden, Equifax, and Cambridge Analytica provide three conspicuous reasons to take action. There are really quintillions of reasons. That’s how fast IBM estimates we are generating digital information: quintillions of bytes of data every day (a quintillion is a 1 followed by 18 zeros). This explosion is generated by the doubling of computer processing power every 18-24 months that has driven growth in information technology throughout the computer age, now compounded by the billions of devices that collect and transmit data, storage devices and data centers that make it cheaper and easier to keep the data from these devices, greater bandwidth to move that data faster, and more powerful and sophisticated software to extract information from this mass of data. All this is both enabled and magnified by the singularity of network effects—the value that is added by being connected to others in a network—in ways we are still learning.

This information Big Bang is doubling the volume of digital information in the world every two years. The data explosion that has put privacy and security in the spotlight will accelerate. Futurists and business forecasters debate just how many tens of billions of devices will be connected in the coming decades, but the order of magnitude is unmistakable—and staggering in its impact on the quantity and speed of bits of information moving around the globe. The pace of change is dizzying, and it will get even faster—far more dizzying than Lucy’s assembly line.
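The compounding behind this claim is easy to check: a two-year doubling period means roughly a 32-fold increase per decade (the two-year figure is the article’s own estimate):

```python
def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Multiplicative growth after `years` of repeated doubling."""
    return 2 ** (years / doubling_period)

print(growth_factor(2))   # 2.0
print(growth_factor(10))  # 32.0 -- one decade of two-year doubling
```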

Most recent proposals for privacy legislation aim at slices of the issues this explosion presents. The Equifax breach produced legislation aimed at data brokers. Responses to the role of Facebook and Twitter in public debate have focused on political ad disclosure, what to do about bots, or limits to online tracking for ads. Most state legislation has targeted specific topics like use of data from ed-tech products, access to social media accounts by employers, and privacy protections from drones and license-plate readers. Facebook’s simplification and expansion of its privacy controls and recent federal privacy bills in reaction to events focus on increasing transparency and consumer choice. So does the newly enacted California Privacy Act.

Measures like these double down on the existing American privacy regime. The trouble is, this system cannot keep pace with the explosion of digital information, and the pervasiveness of this information has undermined key premises of these laws in ways that are increasingly glaring. Our current laws were designed to address collection and storage of structured data by government, business, and other organizations and are busting at the seams in a world where we are all connected and constantly sharing. It is time for a more comprehensive and ambitious approach. We need to think bigger, or we will continue to play a losing game.

Our existing laws developed as a series of responses to specific concerns, a checkerboard of federal and state laws, common law jurisprudence, and public and private enforcement that has built up over more than a century. It began with the famous Harvard Law Review article by (later) Justice Louis Brandeis and his law partner Samuel Warren in 1890 that provided a foundation for case law and state statutes for much of the 20th Century, much of which addressed the impact of mass media on individuals who wanted, as Warren and Brandeis put it, “to be let alone.” The advent of mainframe computers saw the first data privacy laws adopted in 1974 to address the power of information in the hands of big institutions like banks and government: the federal Fair Credit Reporting Act that gives us access to information on credit reports and the Privacy Act that governs federal agencies. Today, our checkerboard of privacy and data security laws covers data that concerns people the most. These include health data, genetic information, student records and information pertaining to children in general, financial information, and electronic communications (with differing rules for telecommunications carriers, cable providers, and emails).

Outside of these specific sectors is not a completely lawless zone. With Alabama adopting a law last April, all 50 states now have laws requiring notification of data breaches (with variations in who has to be notified, how quickly, and in what circumstances). By making organizations focus on personal data and how they protect it, reinforced by exposure to public and private enforcement litigation, these laws have had a significant impact on privacy and security practices. In addition, since 2003, the Federal Trade Commission—under both Republican and Democratic majorities—has used its enforcement authority to regulate unfair and deceptive commercial practices and to police unreasonable privacy and information security practices. This enforcement, mirrored by many state attorneys general, has relied primarily on deceptiveness, based on failures to live up to privacy policies and other privacy promises.

These levers of enforcement in specific cases, as well as public exposure, can be powerful tools to protect privacy. But, in a world of technology that operates on a massive scale moving fast and doing things because one can, reacting to particular abuses after-the-fact does not provide enough guardrails.

As the data universe keeps expanding, more and more of it falls outside the various specific laws on the books. This includes most of the data we generate through such widespread uses as web searches, social media, e-commerce, and smartphone apps. The changes come faster than legislation or regulatory rules can adapt, and they erase the sectoral boundaries that have defined our privacy laws. Take my smart watch, for one example: data it generates about my heart rate and activity is covered by the Health Insurance Portability and Accountability Act (HIPAA) if it is shared with my doctor, but not when it goes to fitness apps like Strava (where I can compare my performance with my peers). Either way, it is the same data, just as sensitive to me and just as much of a risk in the wrong hands.

It makes little sense that protection of data should depend entirely on who happens to hold it. This arbitrariness will spread as more and more connected devices are embedded in everything from clothing to cars to home appliances to street furniture. Add to that striking changes in patterns of business integration and innovation—traditional telephone providers like Verizon and AT&T are entering entertainment, while startups launch into the provinces of financial institutions like currency trading and credit and all kinds of enterprises compete for space in the autonomous vehicle ecosystem—and the sectoral boundaries that have defined U.S. privacy protection cease to make any sense.

Putting so much data into so many hands also is changing the nature of information that is protected as private. To most people, “personal information” means information like social security numbers, account numbers, and other information that is unique to them. U.S. privacy laws reflect this conception by aiming at “personally identifiable information,” but data scientists have repeatedly demonstrated that this focus can be too narrow. The aggregation and correlation of data from various sources make it increasingly possible to link supposedly anonymous information to specific individuals and to infer characteristics and information about them. The result is that today, a widening range of data has the potential to be personal information, i.e. to identify us uniquely. Few laws or regulations address this new reality.

Nowadays, almost every aspect of our lives is in the hands of some third party somewhere. This challenges judgments about “expectations of privacy” that have been a major premise for defining the scope of privacy protection. These judgments present binary choices: if private information is somehow public or in the hands of a third party, people often are deemed to have no expectation of privacy. This is particularly true when it comes to government access to information—emails, for example, are nominally less protected under our laws once they have been stored 180 days or more, and articles and activities in plain sight are considered categorically available to government authorities. But the concept also gets applied to commercial data in terms and conditions of service and to scraping of information on public websites, for two examples.

As more devices and sensors are deployed in the environments we pass through as we carry on our days, privacy will become impossible if we are deemed to have surrendered our privacy simply by going about the world or sharing it with any other person. Plenty of people have said privacy is dead, starting most famously with Sun Microsystems’ Scott McNealy back in the 20th century (“you have zero privacy … get over it”) and echoed by a chorus of despairing writers since then. Without normative rules to provide a more constant anchor than shifting expectations, true privacy actually could be dead or dying.

The Supreme Court in its recent Carpenter decision recognized how constant streams of data about us change the ways that privacy should be protected. In holding that law enforcement acquisition of cell phone location records requires a warrant, the Court considered the “detailed, encyclopedic, and effortlessly compiled” information available from cell service location records and “the seismic shifts in digital technology” that made these records available, and concluded that people do not necessarily surrender privacy interests by generating data or by engaging in behavior that can be observed publicly. While there was disagreement among the Justices as to the sources of privacy norms, two of the dissenters, Justices Alito and Gorsuch, pointed to “expectations of privacy” as vulnerable because they can erode or be defined away.

How this landmark privacy decision affects a wide variety of digital evidence will play out in criminal cases and not in the commercial sector. Nonetheless, the opinions in the case point to a need for a broader set of norms to protect privacy in settings that have been thought to make information public. Privacy can endure, but it needs a more enduring foundation.

Our existing laws also rely heavily on notice and consent—the privacy notices and privacy policies that we encounter online or receive from credit card companies and medical providers, and the boxes we check or forms we sign. These declarations are what provide the basis for the FTC to find deceptive acts and practices when companies fail to do what they said. This system follows the model of informed consent in medical care and human subject research, where consent is often asked for in person, and was imported into internet privacy in the 1990s. The notion of U.S. policy then was to foster growth of the internet by avoiding regulation and promoting a “market resolution” in which individuals would be informed about what data is collected and how it would be processed, and could make choices on this basis.

Maybe informed consent was practical two decades ago, but it is a fantasy today. In a constant stream of online interactions, especially on the small screens that now account for the majority of usage, it is unrealistic to read through privacy policies. And people simply don’t.

It is not simply that any particular privacy policies “suck,” as Senator John Kennedy (R-LA) put it in the Facebook hearings. Zeynep Tufekci is right that these disclosures are obscure and complex. Some forms of notice are necessary and attention to user experience can help, but the problem will persist no matter how well designed disclosures are. I can attest that writing a simple privacy policy is challenging, because these documents are legally enforceable and need to explain a variety of data uses; you can be simple and say too little or you can be complete but too complex. These notices have some useful function as a statement of policy against which regulators, journalists, privacy advocates, and even companies themselves can measure performance, but they are functionally useless for most people, and we rely on them to do too much.

At the end of the day, it is simply too much to read through even the plainest English privacy notice, and being familiar with the terms and conditions or privacy settings for all the services we use is out of the question. The recent flood of emails about privacy policies and consent forms that came with the arrival of the EU General Data Protection Regulation has offered new controls over what data is collected or information communicated, but how much has it really added to people’s understanding? Wall Street Journal reporter Joanna Stern attempted to analyze all the ones she received (enough paper printed out to stretch more than the length of a football field), but resorted to scanning for a few specific issues. In today’s world of constant connections, solutions that focus on increasing transparency and consumer choice are an incomplete response to current privacy challenges.

Moreover, individual choice becomes utterly meaningless as increasingly automated data collection leaves no opportunity for any real notice, much less individual consent. We don’t get asked for consent to the terms of surveillance cameras on the streets or “beacons” in stores that pick up cell phone identifiers, and house guests aren’t generally asked if they agree to homeowners’ smart speakers picking up their speech. At best, a sign may be posted somewhere announcing that these devices are in place. As devices and sensors increasingly are deployed throughout the environments we pass through, some after-the-fact access and control can play a role, but old-fashioned notice and choice become impossible.

Ultimately, the familiar approaches ask too much of individual consumers. As the President’s Council of Advisors on Science and Technology found in a 2014 report on big data, “the conceptual problem with notice and choice is that it fundamentally places the burden of privacy protection on the individual,” resulting in an unequal bargain, “a kind of market failure.”

This is an impossible burden that creates an enormous disparity of information between the individual and the companies they deal with. As Frank Pasquale ardently dissects in his “Black Box Society,” we know very little about how the businesses that collect our data operate. There is no practical way even a reasonably sophisticated person can get their arms around the data that they generate and what that data says about them. After all, making sense of the expanding data universe is what data scientists do. Post-docs and Ph.D.s at MIT (where I am a visiting scholar at the Media Lab) as well as tens of thousands of data researchers like them in academia and business are constantly discovering new information that can be learned from data about people and new ways that businesses can—or do—use that information. How can the rest of us who are far from being data scientists hope to keep up?

As a result, the businesses that use the data know far more than we do about what our data consists of and what their algorithms say about us. Add this vast gulf in knowledge and power to the absence of any real give-and-take in our constant exchanges of information, and you have businesses able by and large to set the terms on which they collect and share this data.


This is not a “market resolution” that works. The Pew Research Center has tracked online trust and attitudes toward the internet and companies online. When Pew probed with surveys and focus groups in 2016, it found that “while many Americans are willing to share personal information in exchange for tangible benefits, they are often cautious about disclosing their information and frequently unhappy about what happens to that information once companies have collected it.” Many people are “uncertain, resigned, and annoyed.” There is a growing body of survey research in the same vein. Uncertainty, resignation, and annoyance hardly make a recipe for a healthy and sustainable marketplace, for trusted brands, or for consent of the governed.

Consider the example of the journalist Julia Angwin. She spent a year trying to live without leaving digital traces, which she described in her book “Dragnet Nation.” Among other things, she avoided paying by credit card and established a fake identity to get a card for when she couldn’t avoid using one; searched hard to find encrypted cloud services for most email; adopted burner phones that she turned off when not in use and used very little; and opted for paid subscription services in place of ad-supported ones. More than a practical guide to protecting one’s data privacy, her year of living anonymously was an extended piece of performance art demonstrating how much digital surveillance reveals about our lives and how hard it is to avoid. The average person should not have to go to such obsessive lengths to ensure that their identities or other information they want to keep private stays private. We need a fair game.

Shaping laws capable of keeping up

As policymakers consider how the rules might change, the Consumer Privacy Bill of Rights we developed in the Obama administration has taken on new life as a model. The Los Angeles Times , The Economist , and The New York Times all pointed to this bill of rights in urging Congress to act on comprehensive privacy legislation, and the latter said “there is no need to start from scratch …” Our 2012 proposal needs adapting to changes in technology and politics, but it provides a starting point for today’s policy discussion because of the wide input it got and the widely accepted principles it drew on.

The bill of rights articulated seven basic principles that should be legally enforceable by the Federal Trade Commission: individual control, transparency, respect for the context in which the data was obtained, access and accuracy, focused collection, security, and accountability. These broad principles are rooted in longstanding and globally accepted “fair information practices principles.” To reflect today’s world of billions of devices interconnected through networks everywhere, though, they are intended to move away from static privacy notices and consent forms toward a more dynamic framework, less focused on collection and processing and more on how people are protected in the ways their data is handled. Not a checklist, but a toolbox. This principles-based approach was meant to be interpreted and fleshed out through codes of conduct and case-by-case FTC enforcement—iterative evolution, much the way both common law and information technology developed.


The other comprehensive model that is getting attention is the EU’s newly effective General Data Protection Regulation. For those in the privacy world, this has been the dominant issue ever since it was approved two years ago, but even so, it was striking to hear “the GDPR” tossed around as a running topic of congressional questions for Mark Zuckerberg. The imminence of this law, its application to Facebook and many other American multinational companies, and its contrast with U.S. law made GDPR a hot topic. It has many people wondering why the U.S. does not have a similar law, and some saying the U.S. should follow the EU model.

I dealt with the EU law since it was in draft form while I led U.S. government engagement with the EU on privacy issues alongside developing our own proposal. Its interaction with U.S. law and commerce has been part of my life as an official, a writer and speaker on privacy issues, and a lawyer ever since. There’s a lot of good in it, but it is not the right model for America.


What is good about the EU law? First of all, it is a law—one set of rules that applies to all personal data across the EU. Its focus on individual data rights in theory puts human beings at the center of privacy practices, and the process of complying with its detailed requirements has forced companies to take a close look at what data they are collecting, what they use it for, and how they keep it and share it—which has proved to be no small task. Although the EU regulation is rigid in numerous respects, it can be more subtle than is apparent at first glance. Most notably, its requirement that consent be explicit and freely given is often presented in summary reports as prohibiting collecting any personal data without consent; in fact, the regulation allows other grounds for collecting data and one effect of the strict definition of consent is to put more emphasis on these other grounds. How some of these subtleties play out will depend on how 40 different regulators across the EU apply the law, though. European advocacy groups were already pursuing claims against “ les GAFAM ” (Google, Amazon, Facebook, Apple, Microsoft) as the regulation went into effect.

The EU law has its origins in the same fair information practice principles as the Consumer Privacy Bill of Rights. But the EU law takes a much more prescriptive and process-oriented approach, spelling out how companies must manage privacy and keep records and including a “right to be forgotten” and other requirements hard to square with our First Amendment. Perhaps more significantly, it may not prove adaptable to artificial intelligence and new technologies like autonomous vehicles that need to aggregate masses of data for machine learning and smart infrastructure. Strict limits on the purposes of data use and retention may inhibit analytical leaps and beneficial new uses of information. A rule requiring human explanation of significant algorithmic decisions will shed light on algorithms and help prevent unfair discrimination but also may curb development of artificial intelligence. These provisions reflect a distrust of technology that is not universal in Europe but is a strong undercurrent of its political culture.

We need an American answer—a more common law approach adaptable to changes in technology—to enable data-driven knowledge and innovation while laying out guardrails to protect privacy. The Consumer Privacy Bill of Rights offers a blueprint for such an approach.

Sure, it needs work, but that’s what the give-and-take of legislating is about. Its language on transparency came out sounding too much like notice-and-consent, for example. Its proposal for fleshing out the application of the bill of rights had a mixed record of consensus results in trial efforts led by the Commerce Department.

It also got some important things right. In particular, the “respect for context” principle is an important conceptual leap. It says that people “have a right to expect that companies will collect, use, and disclose personal data in ways that are consistent with the context in which consumers provide the data.” This breaks from the formalities of privacy notices, consent boxes, and structured data and focuses instead on respect for the individual. Its emphasis on the interactions between an individual and a company and the circumstances of the data collection and use derives from the insight of information technology thinker Helen Nissenbaum. To assess privacy interests, “it is crucial to know the context—who is gathering the information, who is analyzing it, who is disseminating and to whom, the nature of the information, the relationships among the various parties, and even larger institutional and social circumstances.”


Context is complicated—our draft legislation listed 11 different non-exclusive factors to assess context. But that is in practice the way we share information and form expectations about how that information will be handled and about our trust in the handler. We bare our souls and our bodies to complete strangers to get medical care, with the understanding that this information will be handled with great care and shared with strangers only to the extent needed to provide care. We share location information with ride-sharing and navigation apps with the understanding that it enables them to function, but Waze ran into resistance when that functionality required a location setting of “always on.” Danny Weitzner, co-architect of the Privacy Bill of Rights, recently discussed how the respect for context principle “would have prohibited [Cambridge Analytica] from unilaterally repurposing research data for political purposes” because it establishes a right “not to be surprised by how one’s personal data is used.” The Supreme Court’s Carpenter decision opens up expectations of privacy in information held by third parties to variations based on the context.

The Consumer Privacy Bill of Rights does not provide any detailed prescription as to how the context principle and other principles should apply in particular circumstances. Instead, the proposal left such application to case-by-case adjudication by the FTC and development of best practices, standards, and codes of conduct by organizations outside of government, with incentives to vet these with the FTC or to use internal review boards similar to those used for human subject research in academic and medical settings. This approach was based on the belief that the pace of technological change and the enormous variety of circumstances involved need more adaptive decisionmaking than current approaches to legislation and government regulations allow. It may be that baseline legislation will need more robust mandates for standards than the Consumer Privacy Bill of Rights contemplated, but any such mandates should be consistent with the deeply embedded preference for voluntary, collaboratively developed, and consensus-based standards that has been a hallmark of U.S. standards development.

In hindsight, the proposal could use a lodestar to guide the application of its principles—a simple golden rule for privacy: that companies should put the interests of the people whom data is about ahead of their own. In some measure, such a general rule would bring privacy protection back to first principles: some of the sources of law that Louis Brandeis and Samuel Warren referred to in their famous law review article were cases in which the receipt of confidential information or trade secrets led to judicial imposition of a trust or duty of confidentiality. Acting as a trustee carries the obligation to act in the interests of the beneficiaries and to avoid self-dealing.

A Golden Rule of Privacy that incorporates a similar obligation for one entrusted with personal information draws on several similar strands of the privacy debate. Privacy policies often express companies’ intention to be “good stewards of data;” the good steward also is supposed to act in the interests of the principal and avoid self-dealing. A more contemporary law review parallel is Yale law professor Jack Balkin’s concept of “ information fiduciaries ,” which got some attention during the Zuckerberg hearing when Senator Brian Schatz (D-HI) asked Zuckerberg to comment on it. The Golden Rule of Privacy would import the essential duty without importing fiduciary law wholesale. It also resonates with principles of “respect for the individual,” “beneficence,” and “justice” in ethical standards for human subject research that influence emerging ethical frameworks for privacy and data use. Another thread came in Justice Gorsuch’s Carpenter dissent defending property law as a basis for privacy interests: he suggested that entrusting someone with digital information may be a modern equivalent of a “bailment” under classic property law, which imposes duties on the bailee. And it bears some resemblance to the GDPR concept of “ legitimate interest ,” which permits the processing of personal data based on a legitimate interest of the processor, provided that this interest is not outweighed by the rights and interests of the subject of the data.

The fundamental need for baseline privacy legislation in America is to ensure that individuals can trust that data about them will be used, stored, and shared in ways that are consistent with their interests and the circumstances in which it was collected. This should hold regardless of how the data is collected, who receives it, or the uses it is put to. If it is personal data, it should have enduring protection.


Such trust is an essential building block of a sustainable digital world. It is what enables the sharing of data for socially or economically beneficial uses without putting human beings at risk. By now, it should be clear that trust is betrayed too often, whether by intentional actors like Cambridge Analytica or Russian “ Fancy Bears ,” or by bros in cubes inculcated with an imperative to “ deploy or die .”

Trust needs a stronger foundation that provides people with consistent assurance that data about them will be handled fairly and consistently with their interests. Baseline principles would provide a guide to all businesses and guard against overreach, outliers, and outlaws. They would also tell the world that American companies are bound by a widely-accepted set of privacy principles and build a foundation for privacy and security practices that evolve with technology.

Resigned but discontented consumers are saying to each other, “I think we’re playing a losing game.” If the rules don’t change, they may quit playing.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.



On (some aspects of) social privacy in the social media space

Adrian Kuenzler, Faculty of Law, Zurich University, Switzerland; Yale Law School, Information Society Project, USA


Adrian Kuenzler, On (some aspects of) social privacy in the social media space, International Data Privacy Law , Volume 12, Issue 1, February 2022, Pages 63–73, https://doi.org/10.1093/idpl/ipab022


This commentary ties in with an emerging field in privacy scholarship that focuses on collective rather than individualistic viewpoints: recent debates address privacy in digital markets in terms of individual rights to choose between different options, such as between Facebook, Instagram, Snapchat, or Twitter, while users of digital platforms try to make sense of who they are and how they fit into networked contexts.

In such contexts, audiences are hidden and almost anything that users share is in plain view. Privacy is thus to be found within public environments rather than in opposition to them—that is, by controlling access to meaning rather than by controlling access to content.

While legal scholarship is mostly built around the assumption that consumers have to choose to be private or to be public, in digital markets, privacy and publicity are inevitably muddled.

Drawing on the German Federal Court of Justice’s recent Facebook decision, the commentary observes that reclaiming privacy in digital markets depends not just on selecting between different options but also on being able to make choices in relation to them.

Recent scholarly debates about privacy in digital markets involve two distinct domains: privacy as individual autonomy; and privacy as a social practice of information-sharing and visibility. The two domains are preoccupied with different questions, but legal scholarship does not always account for their disparity. On the one hand, the autonomy scholars who study privacy are inclined to stress the manner in which privacy enables individuals to be ‘left alone’, 1 to limit the extent of knowledge and access others have to their personal environment through information, presence, and vicinity, 2 and thus to hold a reasonable measure of control over the way in which they can present themselves to others. 3 On the other hand, those that we might term social privacy scholars are mostly apprehensive about how the realities of the networked social media landscape necessitate and empower individuals to share personal information to maintain a variety of contextually distinct environments that enable them to build different kinds of relationships, 4 facilitate interactions that recognize different social expectations, 5 and preserve different meanings of what they reveal to different people. 6 While the literature on privacy has long distinguished between individual and social dimensions of privacy, legal scholarship in digital markets takes recourse predominantly to the insights of the former. 7 A careful look at recent case law dealing with the market-driven implications of social media platforms’ privacy policies, however, exposes various opportunities for cross-fertilization.

The objective of this short commentary is to present an analytical structure for weaving together some of the most important strands of argument on individual and social privacy in digital markets. It does so by taking a step back from existing privacy scholarship, pushing the inquiry to a relatively general level of discussion to pinpoint two distinct categories of remedies common to competition, consumer, and data protection law considerations. One relies on the function of the consumer as ‘user’ of social networking platforms; the other refers to the consumer as their ‘producer’. These notions enable us to juxtapose and differentiate between the strikingly diverse policy proposals offered by scholars in these realms. They also serve as the foundation for some introductory and inevitably abridged remarks about the existing case law in this field.

Although the autonomy and social privacy scholars emphasize related questions—they both grapple with the manner in which social media affects practices of information-sharing and perceptibility—the conditions in which their concerns materialize are in fact quite different. The autonomy scholars essentially look at interferences in our common model of liberal selfhood in which privacy is construed as an individual right and harms to privacy are gauged by their consequences for a single person. This right to privacy originates as early as the case of Katz v US, according to which privacy is violated if the affected person had a ‘reasonable expectation’ of privacy. 8 The social privacy scholars, by contrast, acknowledge the limitations of controlling privacy in digital markets where most personal information is in plain view and thus address how personal information can best be navigated in public settings. 9 Put more accurately, the autonomy scholars care about ‘limiting’ the stream of information while the social privacy scholars stress that information flows ‘appropriately’. 10 Recent case law and legislative actions dealing with information-sharing practices in the digital economy equally envisage remedies that target either ‘access to’ or ‘usage of’ consumers’ personal information. 11

Safeguarding individual and/or social privacy requires consideration of the institutional correctives for properly addressing them in digital markets. The most evident solution proceeds from the idea of the ‘consumer as user’. This customary approach depends on consumer sovereignty or its functional counterpart—the consumer’s ability to choose between several different providers (eg social media platforms such as Facebook, Instagram, Snapchat, or Twitter)—to reconcile the relationship between platforms and their users. Scholars who emphasize this corrective resort to consumer choice as an expression of the market’s various offers. By casting competition as a device for defining different ‘spheres of influence’ to protect against undue interferences with users’ privacy, product choice and product functionalities are viewed as being determined by the market rather than by authorities or courts. As a result, the power of the consumer resides fundamentally in selecting between different products to protect against unwarranted intrusions into privacy. Although concentration may restrict the choice of available options, market incentives are thought to steer platforms towards the collection and processing of less personal data if consumers value privacy, because the consumer’s influence is grounded essentially in autonomy and separation. 12

An alternative institutional corrective offered in this commentary depends on what we might refer to as the idea of the ‘consumer as producer’. Where most data-driven platforms are based on active involvement by users to produce pertinent content and information, consumers may also influence the platform’s own collection and processing of personal data. Users who permit businesses to collect and commercialize economically valuable information about their behaviour may exert an influence on platforms not by switching but primarily by causing the firm to adjust terms and conditions of service. Here, the consumer’s authority originates from dependence: a platform’s business that crucially relies on consumers as producers to generate new content and monetize personal information may be influenced, first and foremost, by the user’s pushback. The stronger the degree of cohesion between the platform and its users, the more likely it becomes that consumers will be in a position to cajole, persuade, or even force the platform to adopt more privacy-protective terms of agreement. The point is not simply that influence moves along both directions between consumers and the platform. Rather, the consumer’s influence also turns on the premise that a high degree of cohesion between the platform and consumers gives rise to loyalty and trust, bestowing a kind of standing on consumers that leads to intensified administrative action by which the platform’s method of doing business may be brought in line with users’ expectations. The consumer’s influence, as a result, rests in integration, not separation. 13

To be sure, social privacy not only involves the contextual nature of the flow of a person’s information, and thus the reputational, social and public value of privacy protection that is vital to a robust culture of free expression, the integrity of intellectual activity, creativity and innovation. Rather, social privacy also involves the individual and population-based relations between data subjects that inevitably fail to materialize as a simple upshot of informational self-determination. Capturing the vested interests of third parties, including those of people belonging to certain classified groups, that regularly are implicated in the collection, processing or use of personal information arguably renders data ill-suited for a regulatory solution that is predicated upon a single act of personal control at the point of its collection. What is more, there are various digital services distinctly different from social media requiring different designs of privacy beyond the individual and social division, with the result that social privacy may be executed in vastly different ways. 14 But a necessary first step in creating an effective response to both the contextual nature of the flow of information and the representation of third-party and population-level interests at stake in data production may be to recognize that the institutional corrective must shift from offering opportunities to ‘exit’ to giving data subjects ‘voice’—that is, to enable them to obtain acknowledgement and standing to influence the objectives and circumstances of data production with a view to generating conditions of mutual commitment. 15 This commentary exemplifies the shift from individual exit rights to institutional governance mechanisms and public management of existing data flows, highlighting some of the most salient features of social privacy pertaining to social media platforms. 
Against the backdrop of a rapidly evolving digital landscape, future studies need to carefully investigate additional institutional correctives that are justifiable and workable for social privacy to unfold, and how exactly these correctives can be enshrined in law and put into operation.

To substantiate the analysis in some more detail, consider the German Federal Court of Justice’s (FCJ) recent Facebook decision. The case concerned Facebook’s online social networking platform, which required users to agree to its terms of service. These terms specified, amongst other things, that Facebook tracks, collects, and analyses personal user data on its platform and other group-related services, including on third-party websites and applications that are linked to Facebook’s pages by the firm’s plugins and programming interfaces such as its ‘Like’, ‘Share’, or ‘Login’ buttons. 16 The Federal Cartel Office (FCO) had previously launched an investigation into Facebook’s data-gathering activities and found that the company abused its dominant position by conditioning the use of the network on the user’s authorization to merge third-party website data with the personal information collected on Facebook’s social media platform. 17 On appeal, the Düsseldorf Higher Regional Court (HRC) ordered a suspension of the FCO’s decision; 18 however, the FCJ subsequently annulled the HRC’s decree. 19 The key question was whether users had been given an actual choice as to whether they would be in a position to use the social network in a personalized manner—giving the platform potentially unlimited access to users’ data on the internet—or whether users should be able to agree to a level of personalization that is based solely on data they themselves decide to share on Facebook. 20 The FCJ’s nuanced argument supports an approach based on integration, not separation, that is, in favour of empowering consumers to help adjust the platform’s terms and conditions of agreement rather than having to switch between distinct providers in case of disagreement. 
Specifically, the Court proclaimed that Facebook had abused its dominant position in the market for social networks because users who had signed up for Facebook’s service could not refuse or withdraw consent with respect to Facebook’s collection, processing and use of personal data on third-party websites and applications without incurring a significant disadvantage (ie having to leave the platform altogether). 21 Challenging the view that separation unconditionally must be seen as the main source of consumer influence, the Court observed:

[t]he typical contractual service of social networks is to provide the user with a comprehensive, personal “virtual space” [in which the] user should be able to build “real interpersonal relationships”. [On social media platforms, the] focus of the user’s experience [is] on creating their own “virtual identity” by building up a personal profile and creating a friends list. This identity [is] the virtual image of the user’s real life. All activities that a user develops in a social network are related to his personal network of friends and acquaintances …. 22

Access to Facebook, in the Court’s view, thus determines the ability of (at least some) consumers to participate in daily life so that they cannot simply be expected to relinquish their membership of the platform: ‘the social network is an important form of social communication. The use of the forum for the purpose of mutual exchange and expression of opinion is of particular essence because of the high number of users and the [platform’s] network effects’. 23 ‘Facebook [therefore] provides a communication platform [which] carries significant weight for public discourse on political, social, cultural and economic issues.’ 24 Where sharing information and connecting with other people in the online space may complement, or even supplant, regular face-to-face encounters—where people’s virtual self-identity is utterly essential to a person’s social acceptance—integration, in the Court’s opinion, represents a more meaningful channel of influence than separation. Most of all, the very essence of Facebook’s social media platform creates a special responsibility in drawing up terms and conditions of agreement, 25 and in making allowance for the significance of collecting, processing, and selling consumers’ personal information. 26

Apologists of the conventional view maintain that if the consumer’s destiny rests on intensified administrative action to adjust a platform’s terms and conditions of agreement, social media platforms will be required to offer at least two versions of their services: one in which they may combine consumers’ data from third-party websites and applications and one where users can opt out of the platform’s doing so. 27 Put differently, apologists are concerned that competition law interventions require social media platforms to afford consumers more than a single choice—that they give consumers the right to be signed up to a platform’s services ‘and’ afford them an option to choose between distinct features of the services that these platforms offer. 28 Since the conventional view is largely based on separation, apologists assume that the market’s benefits always reside in switching between different platforms rather than in prodding them, through intensified administrative practice, to offer users distinct features of their services. The view is rooted in a conception of markets that clings firmly to the idea that (only) a choice between different products is a ‘free’ choice and that consumers are sovereign if they have an opportunity to pick between several alternative options. 29 Prompting social media platforms to provide consumers with some kind of choice between different degrees of personalization, including any form of pre-selection of privacy options presented by the platform, thus inevitably taps into divisive issues of government interference with free markets, which are subject to a number of familiar objections, the most serious being the restriction of freedom suggested by state interference with individual decisions. 30 In contrast, the FCJ’s ruling submits that in the digital economy, consumers’ choices may be determined, to a significant extent, by the market’s backdrop and that consumers, under certain circumstances, may only be able to choose freely if they can pick between different offers ‘pertaining to’ highly personalized goods. 31 As the FCJ maintains:

every company, including a dominant one, is left to choose its own type of economic activity and to decide which goods or services it wants to offer. But a dominant undertaking’s freedom to design its business model as it sees fit only exists within the boundaries of competition law. It is curtailed where this freedom is abused or leads to an anti-competitive restraint … . 32

To this extent, a dominant platform’s behaviour:

which cannot (at least also) be seen as being determined by demand-side preferences may constitute an anti-competitive abuse as a result of insufficient market pressure. To be sure, competition cannot always bring about a situation in which the market’s offers reflect all consumer preferences. But when an increase in the number of users of a social networking website has a positive effect on the platform’s market position, the platform must have a particular interest in providing services that are appealing to as many users as possible. This suggests that demand-side preferences are an essential factor in determining whether competition is effective. 33

As a consequence of this view, and because data-sensitive consumers can reasonably be expected to avail themselves of less personalized experiences, the FCJ concludes that under competitive conditions, consumers would have come across a variety of options with regard to the platform’s extent of personalization, particularly the platform’s merging of user profiles with data collected by third-party websites and applications. 34
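The choice the FCJ envisages, between personalization based solely on on-platform data and personalization that also draws on third-party sources, can be pictured as a simple consent gate over incoming events. The following Python sketch is purely illustrative; every name in it is invented, and it does not describe Facebook’s actual systems.

```python
# Toy model of consent-gated data merging: third-party events are added to
# a user's profile only if that user has opted in to cross-site tracking.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    name: str
    cross_site_consent: bool = False            # opt-in to third-party merging
    on_platform_data: list = field(default_factory=list)
    merged_data: list = field(default_factory=list)

def record_event(profile: UserProfile, event: str, source: str) -> None:
    """Store an event; third-party events are merged only with consent."""
    if source == "platform":
        profile.on_platform_data.append(event)
    elif profile.cross_site_consent:
        profile.merged_data.append(event)
    # otherwise the third-party event is simply discarded

alice = UserProfile("alice", cross_site_consent=False)
bob = UserProfile("bob", cross_site_consent=True)

for user in (alice, bob):
    record_event(user, "liked a post", source="platform")
    record_event(user, "visited news site", source="third_party")

print(len(alice.merged_data))  # 0: no consent, third-party data not merged
print(len(bob.merged_data))    # 1: opted in to cross-site personalization
```

On this toy model, both service tiers coexist on a single platform, which is the gist of the ‘integration’ remedy: the data-sensitive user keeps the service while refusing the merger of third-party data.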

Given the significance of Facebook’s services, consumers therefore cannot be left simply to resort to switching between different platforms. Instead, the institutional corrective must be to integrate their choices into the platform’s service. Rather than forcing users to renounce their memberships under these conditions, a more effective remedy then lies in intensifying the extent of constructive cohesion between the platform and its users. 35

Once we look to the FCJ’s proposition, we can discern an effort to harness the benefits of social (rather than individual) privacy. From the FCJ’s vantage point, privacy must not be theorized merely as a right to limit access to information but as the incessant administration of distinct boundaries between different spheres of action and different measures of disclosure within those spheres of action. 36 Sharing, communicating, and contributing information is viewed as indispensable to privacy, but users modify the manner in which they communicate with others depending on the extent to which such communication is appropriate in a given situation. 37 A platform’s privacy settings may enable users to limit certain acts of communication to exclude a particular group of followers they do not seek to address while still being connected with these followers to share other interests. 38 For instance, social media poses particular challenges in targeting family, friends and colleagues fittingly given the persistent nature of previously published and often permanently archived messages. Changing subjects or conversations is difficult to achieve online if a captive audience gets to know—and will construe—a user’s post in an unintended fashion, and in contexts that may be far removed from the one in which the message was produced. Managing family, friends and colleagues side by side to make each group appreciate and understand the content that different users share typically requires navigating distinctive codes of conduct in discrete social situations. 39 The upshot is that in networked contexts where audiences are hidden and almost anything that users share is in plain view, consumers can control what they reveal by employing the platform’s technical features to create different lists of friends, to block certain individuals from seeing particular messages, and to personalize selected portions of their profiles. 
Users of social media websites typically employ these strategies not because they have something to hide but because they seek to gain control over a social situation without having to constantly verbalize the situation’s backdrop and to explain its unspoken conventions. For example, when a person communicates with colleagues, the person may regularly act differently than when the person speaks with family; an adult may misapprehend a young person’s self-expression that positions them authentically within their own community of friends; or an independent viewer scanning a person’s history of posts on Facebook may easily misread the subtle norms that govern a specific situation in which the information has been shared. 40 Defending social privacy in this sense necessitates a considerable degree of monitoring and negotiation with regard to the audience that can view a user’s profile and in relation to who can respond to a person’s conversations. After all, different recipients of a message can understand, interpret and construe that message differently because social media regularly meets distinct communal spheres head-on. 41
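The audience-management strategies just described (friend lists, blocking, per-post visibility) amount to an access-control filter over a user’s posts. As a rough, hypothetical sketch, assuming nothing about any real platform’s API:

```python
# Minimal model of audience segmentation: each post carries an audience,
# and what a viewer sees is filtered through lists and blocks.
class Profile:
    def __init__(self, owner):
        self.owner = owner
        self.lists = {}        # audience name -> set of usernames
        self.blocked = set()   # users who see nothing at all
        self.posts = []        # (content, audience name or "public")

    def share(self, content, audience="public"):
        self.posts.append((content, audience))

    def visible_to(self, viewer):
        """Return the posts this viewer may see, honouring lists and blocks."""
        if viewer in self.blocked:
            return []
        return [content for content, audience in self.posts
                if audience == "public" or viewer in self.lists.get(audience, set())]

p = Profile("user")
p.lists["family"] = {"mum"}
p.lists["colleagues"] = {"boss"}
p.blocked.add("stranger")
p.share("holiday photos", audience="family")
p.share("conference talk", audience="colleagues")
p.share("public update")

print(p.visible_to("mum"))       # ['holiday photos', 'public update']
print(p.visible_to("boss"))      # ['conference talk', 'public update']
print(p.visible_to("stranger"))  # []
```

The point of the sketch is that the same profile presents different faces to family, colleagues and strangers, which is precisely the contextual segmentation that social privacy requires the platform’s architecture to support.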

Hence, users who have to navigate their identity and interests to act in keeping with different norms and practices need to be able to manage (rather than only limit) flows of information based on target groups and audiences. Privacy, in the networked social context, encompasses distinct social conventions that govern information in different settings, and involves control over the manner in which personal information is transmitted. 42 Just as someone may be at ease when sharing personal health data with their doctor in a medical facility but would disapprove of the doctor communicating the same information to strangers over lunch, 43 online social privacy includes a notion of contextual integrity to the extent that actors in a given setting are held responsible for precluding that information from spreading beyond that setting. 44 And navigating public and private spaces simultaneously requires the properties of social media to be such that users can create boundaries around these spaces to meaningfully portray the shades of who they are to different and potentially conflicting audiences. 45 Taken as a whole, social privacy suggests that in digital markets information is fundamentally interlaced—that posted images may capture other people; that comments are sent and received; and that shared content may incriminate family, friends and colleagues all at once. Rather than conceptualizing privacy as a reasonable expectation to be left alone, privacy in digital markets is governed by a blend of different groups of viewers, a mixture of technical processes and features, and distinct social conventions in which different contexts are inseparable, unstable and shifting—because content and meaning are co-created by a host of different users simultaneously and in sequence. 46

To be sure, the FCJ’s decision relates only to the issue of whether users are required to accept, without additional permission, a level of personalization based on access to user data collected outside of Facebook’s platform. 47 In light of social privacy’s dimensions, the implementation of such a choice (ie enabling data-sensitive consumers to choose between different levels of personalization of the platform) appears somewhat coarse. But the FCJ arguably considers this requirement to represent a possibility to make allowance for some, at least preliminary, considerations pertaining to social privacy. As the Court remarks:

[the] right to informational self-determination, particularly in light of the considerable political, social and economic significance of internet communication, requires – in view of the scope and depth of the data gathered – a particular level of protection of users from exploitation through inappropriate disclosure for utilization by the operator of a social network. 48

The most auspicious attempt for consumers to reclaim some control of privacy on social media, then, relies on integration, not separation. The main thrust of this approach is that agencies and courts—on behalf of consumers—help to incorporate into the platform those features that put users in a position to meaningfully segment their audience, to limit flows of information, and to restrict the addressees that can (mis-)construe the publicized information. 49 For instance, if someone wants to post an impression about a past weekend trip with friends, or a visit to a renowned art gallery on Facebook, they can publish different content for different audiences. While some followers may like to see images of scenery and landscapes, others may scorn the person’s attempt at gaining more attention; another group of individuals may taunt the person’s interest in popular culture, while again others will commend the person’s visit to the gallery’s Renaissance collection. In short, to avoid alienating some people, users may modify the platform’s privacy settings. Hence, where personal information is broadly publicized and made accessible, and where such information can effortlessly be replicated and passed on, integration is an appropriate strategy to pursue. Privacy, in turn, can no longer be viewed simply as being rooted in consumers’ autonomous decisions to switch between different providers. Instead, privacy must rest on the notion of a continuous and active practice that involves the ability of users to influence the way in which they are perceived. And managing the reach of an individual’s shared personal information requires the ability to exert some meaningful influence over the manner in which such information flows, using the platform’s technological features to determine how that information is understood, at what moment, and by whom. 50

This notion of privacy bears directly on the way regulators and courts presently approach big technology companies: while some commentators read the FCJ’s decision as granting users a ‘right’ to be on Facebook, seen through the prism of social privacy, the decision empowers consumers to manage unauthorized audience dynamics, to control unwanted interpretations of published information, and to preclude the unwarranted spread of their own data. In an environment where privacy can no longer be attained simply by making available or retaining personal information, alternatives reside in the continuous negotiation of different social contexts in which information is passed along. 51 Achieving social privacy thus emanates from integration, not from separation. It resides in strengthening the ties between the platform and its users to enable consumers to embed meaning and context into the shared content itself. 52

Engaging the notion of social (as opposed to individual) privacy, we can make out several connections between the ostensibly dissimilar policy choices that legal scholarship and practice in digital markets have put forward. Drawing from existing work that privacy scholars have produced in contemplating the legal and technical instantiations of privacy, the idea of social privacy can narrow the interstices between the available scholarship and case law.

On the one hand, the notion of social privacy highlights a problem for approaches that resort to the consumer’s sole ability to choose between different products when most personal information is in plain view and product choice is limited because of a lack of available alternatives. Reliance on competition between different products in such cases—whether achieved by breaking up dominant social media platforms or by opening up their accumulated data piles to rivals through interoperability or data portability measures—would simply provide stronger incentives for platforms to compete on the collection and sale of more personal data to advertisers for a lesser amount of money. 53 This is because the current backdrop of digital platforms’ business models revolves around their capacity to collect, analyse and sell consumers’ data with the utmost accuracy, and to follow users around the web. 54 Any effort to instil a greater extent of rivalry into digital marketplaces may then only generate a more intrusive and less surveillance-resistant environment for consumers. 55 If an increasing number of data brokers utilize ever-more effective tracking technologies to keep up with their rivals, any incentive to compete on privacy will disappear. Given the current state of the digital economy’s advertising-driven equilibrium, separation and switching will merely produce new opportunities for other actors to vie for the collection and sale of user data. Although some market actors may still have an interest in competing on privacy, they may nevertheless have difficulties in communicating to consumers the benefits of doing so. 56 Consumers, in turn, cannot always readily identify, let alone evaluate, a platform’s privacy-enhancing updates to its services, and some technology firms may easily obscure the full extent of their own privacy-safeguarding commitments when they announce new ways to preclude consumer targeting. 57 Here, competition and privacy inexorably collide. 
Under circumstances where efforts to protect privacy are unlikely to pay off, resorting to separation may well hurt consumers and society as a whole, and even competitors themselves, because firms will profit more from exploiting user data than from acting in consumers’ interests. 58 The competition authority’s task, then, is to realign incentives by affording consumers an opportunity to wield some influence on the market, and authorizing them to adjust—through administrative practice—platforms’ terms and conditions of agreement. In this way, leading platforms may be compelled to cede command of the flow of user data to consumers, enabling them to gain control over their (perceived) identity, reputation, and capacities for self-presentation. 59

This is precisely what the FCJ’s reasoning suggests when it asserts that:

[it] might be expected that with functioning competition … offers would be available on the market for social networks that would accommodate user preferences for greater autonomy in granting access to their data … and that would give users the choice to select between a more personalized experience, which entails the processing of [third-party website] data, or a lesser degree of personalization based on data that they disclose when using the platform operator’s service. 60

Clearly, in the Court’s view, a market-based solution for consumer demands of privacy may conceivably emerge. But rather than just intensifying rivalry, competition needs to be directed towards the qualities consumers genuinely care about. Only if incentives are aligned with consumers’ interests can users of social networking platforms come across a variety of privacy-protective offers in the market. To the extent that privacy is important, integration puts users in a position to modify the dominant platform’s terms and conditions of agreement, with the result that other services may be expected to adopt the same, or similar, alternatives, providing increased privacy protections and better product offerings across the market. 61 This multi-staged competitive pursuit accommodates consumer preferences that are indifferent towards a platform’s data sharing with third parties, while at the same time catering to the aspirations of those consumers who are indeed concerned about their privacy. Consumers, on the whole, remain free to authorize ‘or’ reject a platform’s data tracking on third-party websites and applications—an outcome that results from integration, not separation. In this sense, integration both benefits consumers and renders markets more contestable, ultimately giving rise to a variety of offers in which actors’ revenues from online advertising need not necessarily decline, but in which competitors will have more compelling reasons to adopt a blend of different monetization strategies. 62

Overall, the FCJ understood that Facebook’s monopoly and advertising advantage have been durable, and that Facebook’s advertising-driven business model has brought about a crowded ecosystem of digital businesses transmitting personal user data along each step from social media platforms to advertisers. Rather than relying on separation and switching, the FCJ ensured that individual users can be ‘guaranteed … the possibility of exerting [an] influence on the context and manner in which their [information] is made accessible to others and used by them’ and that users can be given ‘a substantial say with respect to attributions to their own identity’. 63 Since the Court’s focus was mainly on social privacy—the way in which individuals expose themselves to different audiences in different contexts—the most effective remedy was to confront the market’s underlying incentives, forcing Facebook to yield authority over the platform’s use of third-party website data to its users and affording them a significant degree of influence over the platform’s ‘flow’ of information. 64 As shown above, this kind of question—whether users of social media platforms should be able to make their own decisions on who can get access to their personal information, under what circumstances and how—is thoroughly canvassed in a growing body of scholarly contributions on privacy in networked societies, and the FCJ’s reasoning usefully relates this notion to contemporary legal scholarship and practice.

On the other hand, to accomplish social privacy in digital markets, users need not only be in a position to rely on their ability to allow or deny access to personal information. Rather, they also need to be able to rely on their capacity to navigate technology to affect the ‘manner’ in which personal information flows—to manage the context in which content is conceived, interpreted, and construed, and to influence other people’s behaviour that involves the further disclosure, concealment, or re-interpretation of that content within an interconnected audience. Consider California’s Consumer Privacy Act (CCPA), which allows consumers to, among other things, opt out of or delete a website’s sale or sharing of personal user data. 65 As intimated previously, stipulating a right to delete or stop the sale of data may do justice to some claims of privacy in digital markets. However, social privacy is not only about a user’s mastery of data access but also about the prospect for individuals to be appropriately entrenched in their ongoing social relationships. 66 Here, user rights such as the CCPA’s entitlements to opt out or delete personal data must be reconciled with the meaning that is embedded in decisions that consumers make when they attempt to moderate the use of their personal information to preclude other people from misinterpreting what they learn. 67 In this sense, future disputes over the definition of the scope of users’ rights to ‘opt out’ or to ‘delete’ personal data will be crucial to advance social privacy. For instance, if privacy no longer pivots around hiding from the public but on controlling the flow of publicly available information, do such rights also apply to mere inferences that actors make about the behaviour of users, their outlooks, actions, mindsets or predilections? 68 If so, how broadly may the law wind up pertaining to this sort of profiling activity, especially where it originates from aggregated user data to target individual consumers? Moreover, if privacy is crucially embedded in context, do such rights relate only to the most pertinent data processing activities of social media platforms (eg the merging of third-party user data with the platform’s own collected information) or do they also involve the ability to situate a user’s personal information within the technical architecture of a platform’s product ecosystem that matters in terms of how users define their relationship with others and how the public perceives any particular piece of information (eg the manner in which audiences can be segmented on the platform’s owned and operated properties)? 69 Although much of the current legal debate is focused on whether users have a right to be left alone, in digital markets, users’ rights must encompass the ability to gain control over the situation in which contextual cues and social norms are navigated. And affording consumers agency over the social situation entails that they must realistically be in a position to contest other participants who are more influential in relation to that situation. Here, clarifications with respect to the bounds of user rights will be instrumental in providing consumers with the means to appropriately manage the flow and interpretation of shared personal information, and to give them actual control over the construction of experiences provided to them and others by dominant social media platforms. 70 Just as in the FCJ’s decision, authorizing agencies and courts to impose a type of remedy that ensures that consumers’ voices are heard by the firm is thus essential for achieving social privacy.
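The CCPA-style entitlements discussed above, opting out of the sale of data and requesting deletion, can be caricatured as two operations on a record store that every outbound data transfer must respect. This is a deliberately minimal sketch with invented field names; real CCPA compliance is far more involved.

```python
# Toy record store honouring 'do not sell' opt-outs and deletion requests
# before any data is exported to a buyer.
records = {
    "u1": {"data": ["page views"], "do_not_sell": False},
    "u2": {"data": ["page views"], "do_not_sell": False},
}

def opt_out_of_sale(user_id):
    """CCPA-style opt-out: the record stays, but is never sold."""
    records[user_id]["do_not_sell"] = True

def delete_user_data(user_id):
    """CCPA-style deletion: the record is removed entirely."""
    records.pop(user_id, None)

def export_for_sale():
    """Only records whose subjects have not opted out are shared."""
    return {uid: rec["data"] for uid, rec in records.items()
            if not rec["do_not_sell"]}

opt_out_of_sale("u1")
print(export_for_sale())   # {'u2': ['page views']}
delete_user_data("u2")
print(export_for_sale())   # {}
```

What the sketch cannot express is precisely the commentary’s point: both operations act on stored records, not on the contextual flow and interpretation of information once it circulates among audiences.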

To be clear, social privacy does not depend exclusively, perhaps not even predominantly, on competition law’s characteristic precondition of market power of digital platforms. At one end of the spectrum, the European Commission’s recent proposal for a Digital Markets Act seeks to supplement ex post competition intervention with ex ante regulation. Here, designated ‘gatekeepers’ (as identified by eight categories of digital services and a selection of quantitative and/or qualitative benchmarks) are to be bound directly—without any additional consideration of the interests involved in a given case—by several discrete obligations, consisting of, inter alia, a requirement to ‘refrain from combining personal data sourced from these core platform services with personal data from any other services offered by the gatekeeper or with personal data from third-party services [… unless end users have provided their consent]’. 71 At the other end of the spectrum, as suggested previously, where platforms lack significant market power, switching, by itself, cannot be expected to be a sufficient means to achieve social privacy. If competition fails to address the qualities that consumers truly care about, behavioural exploitation of consumer choice online is also conceivable without market power (although the latter easily aggravates the problem through lock-in, network, and economies of scale and scope effects). 72 In that sense, social privacy cannot wholly be an issue of competition law and consumer preferences, but integration—at least to the extent that effective user rights are enshrined in existing legislation and digital platforms’ architectural properties to enable consumers to control the flow of personal data—must nonetheless be seen as an appropriate avenue to pursue: 73 an economy in which no social dimension of privacy exists is one in which no self-constituting process can take place, no agency can be developed, and no welfare can be achieved. 74

Without a doubt, the FCJ’s decision taps right into these issues. Confronting the adverse effects of competition on privacy, the Court’s emphasis is on integration, not separation, and this reminds us of the oft-neglected fact that in digital markets, affording consumers an ability to choose between different products may hardly solve all of the issues that disquiet them. Even if a user decides to leave Facebook’s social networking website for Instagram, Snapchat, or Twitter, any of these substitutes will nevertheless attempt to collect as much personal data as possible to maximize their revenue, and to keep users engaged with the same, or similar, attention-seeking interactions on the platform. 75 For all of the regular benefits of separation, the most significant impacts of the data-driven economy are evidently collective, so that individual choices between different products may at best have a marginal bearing on privacy. Instead, by affirming users’ memberships with the platform and authorizing them to adjust—by way of administrative practice—the platform’s terms and conditions of agreement, the FCJ asserts the consumer’s standing to participate in the debate about how to gain a significant measure of influence over the social context in which information is shared, construed, (re-)interpreted, and relayed. 76

It might at first sight seem counterintuitive to assert that giving prominence to consumer influence through integration would lend weight to privacy because we often associate consumer influence with switching. However, the FCJ in its recent Facebook decision makes it clear that privacy in digital markets is not simply an upshot of the consumer’s ability to choose between several alternative options but is determined by a confluence of shifting contexts that successively overlap. Privacy, as a result, must include affording users some control over their desired stream of information to take pre-emptive action, and to ensure that this information cannot be understood in such a way as to distort its envisioned meaning. Asserting privacy, in this sense, requires policymakers to pay close attention to a platform’s technical architecture that determines acts of broadcasting, audience segmentation and gaining meaningful control over the manner in which personal information flows. Consumer influence therefore often originates from integration because privacy can no longer be framed simply in terms of harms to individuals or groups. Rather, it must also be conceptualized in relation to social networks and ongoing relationships among different people. More pointedly, safeguarding social privacy entails recognizing that information is inherently entwined and almost constantly in plain view so that consumers must be given a different type of footing than is permitted by reliance on switching alone. In contrast, any insistence on sovereignty and separation is likely to undercut rather than bolster the authority of consumers to address pertinent questions of how to meaningfully share information and to simultaneously create some sense of discretion. 
In digital markets, sovereignty is likely to constrain, rather than augment, the ability of consumers to monitor the claims of a networked and mutually dependent public in which privacy is not a static construct but a process by which individuals seek control over their identities. The ability to manage impressions, information flows, and context may thus be at least as vital in networked societies as consumer sovereignty typically is for the efficient working of competitive markets.

See DJ Solove, ‘The Meaning and Value of Privacy’ in B Roessler and D Mokrosinska (eds), Social Dimensions of Privacy: Interdisciplinary Perspectives (Cambridge: Cambridge University Press, 2015) 71–82.

R Gavison, ‘Privacy and the Limits of Law’ (1980) 89 Yale Law Journal 421; J Reiman, ‘Privacy, Intimacy and Personhood’ in FA Schoemann (ed), Philosophical Dimensions of Privacy: An Anthology (New York: Cambridge University Press, 1984) 300–16.

K Strandburg, ‘Monitoring, Datafication and Consent: Legal Approaches to Privacy in the Big Data Context’ in J Lane and others (eds), Privacy, Big Data and the Public Good: Frameworks for Engagement (New York: Cambridge University Press, 2014) 5–43; A Marmor, ‘What is the Right to Privacy?’ (2015) 43 Philosophy & Public Affairs 4–26.

FA Schoemann, ‘Privacy: Philosophical Dimensions of the Literature’ in FA Schoemann (ed), Philosophical Dimensions of Privacy: An Anthology (New York: Cambridge University Press, 1984) 1–33; J Rachels, ‘Why Privacy is Important’ in FA Schoemann (ed), Philosophical Dimensions of Privacy: An Anthology (New York: Cambridge University Press, 1984) 290–94; Marmor (n 3).

NF Johnson and S Jajodia, ‘Exploring Steganography: Seeing the Unseen’ (1998) 31 Computer 26.

A Marwick and D Boyd, ‘Networked Privacy: How Teenagers Negotiate Context in Social Media’ (2014) 16 New Media & Society 1051.

Important exceptions include S Wachter and B Mittelstadt, ‘A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI’ (2019) 2 Columbia Business Law Review 494; N Richards and W Hartzog, ‘A Relational Turn for Data Protection?’ (2020) 6 European Data Protection Law Review 492; AE Waldman, ‘Privacy Law’s False Promise’ (2020) 97 Washington Law Review 1; S Viljoen, ‘Democratic Data: A Relational Theory for Data Governance’ (2021) 131 Yale Law Journal (forthcoming).

AE Marwick, Status Update: Celebrity, Publicity and Branding in the Social Media Age (New Haven: Yale University Press, 2013); D Boyd, It’s Complicated. The Social Lives of Networked Teens (New Haven: Yale University Press, 2014).

H Nissenbaum, Privacy in Context: Technology, Policy and the Integrity of Social Life (Stanford: Stanford University Press, 2009); H Nissenbaum, ‘Respect for Context as a Benchmark for Privacy Online: What it is and isn’t’ in B Roessler and D Mokrosinska (eds), Social Dimensions of Privacy: Interdisciplinary Perspectives (Cambridge: Cambridge University Press, 2015) 278–302.

See the Section ‘From controlling data access to authorizing data usage’ below.

For a concise synthesis of these arguments see G Colangelo and M Maggiolino, ‘Data Accumulation and the Privacy-Antitrust Interface: Insights from the Facebook Case’ (2018) 8 International Data Privacy Law 224; P Këllezi, ‘Consumer Choice and Consent in Data Protection’ (2021) Competition Policy International 18 January 2021; T Körber, ‘Is Knowledge (Market) Power? On the Relationship between Data Protection, “Data Power” and Competition Law’ (2018) Competition Policy International 7 February 2018.

Bundeskartellamt, Decision of 6 February 2019, B6-22/16; Bundesgerichtshof (23 June 2020) KVR 69/19 DE:BGH:2020:230620BKVR69.19.0; M Ioannidou, ‘Responsive Remodelling of Competition Law Enforcement’ (2020) 40 Oxford Journal of Legal Studies 846; A Kuenzler, ‘Direct Consumer Influence – The Missing Strategy to Integrate Data Privacy Preferences into the Market’ (2020) 39 Yearbook of European Law 423.

See JL Kröger, OH Lutz and S Ullrich, ‘The Myth of Individual Control: Mapping the Limitations of Privacy Self-Management’ (2021) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3881776> last accessed 12 October 2021.

A Hirschman, Exit, Voice, and Loyalty: Responses to Decline in Firms, Organizations, and States (Cambridge, MA: Harvard University Press, 1970).

Bundesgerichtshof (n 13).

Bundeskartellamt (n 13).

OLG Düsseldorf, Order of 26 August 2019, VI-Kart 1/19 (V).

Ibid, para 131.

Ibid, para 24; see also Case COMP/M.7217 Facebook/WhatsApp, para 54. This is, amongst other things, the result of (identity-based) network effects according to which the benefit of Facebook’s platform for each single user increases with the total number of people connected to the network. The greater the number of users, the greater the communication options for each individual user. The Court emphasized that the network’s value for each user increases not just because of the sheer number of people using Facebook but also because of the identity of users that can be reached, Bundesgerichtshof (n 13), para 44. Additionally, owing to the purpose of social networks, parallel use only makes sense if people can find the same friends as users in an alternative network. A user would therefore have to move their previous contacts in the original network to the new network. But a user’s contacts have their own contacts in the previous network, so that they must also be encouraged to switch or use different networks simultaneously. The more contacts a user has in the previous network and the more closely these contacts are connected to other users, the more difficult it becomes to move the user’s contacts to a new network, ibid para 48.

Ibid, para 102.

Ibid, para 124.

Bundesgerichtshof, Press Release No 080/2020, 23 June 2020.

See for a succinct overview Këllezi (n 12).

Ibid; see sources quoted (nn 1–3).

See DM Kreps, A Course in Microeconomic Theory (New Jersey: Princeton University Press, 1990).

See MJ Trebilcock, The Limits of Freedom of Contract (Cambridge, MA: Harvard University Press, 1997).

See AK Sen, Choice, Welfare and Measurement (Cambridge, MA: Harvard University Press, 1997).

Bundesgerichtshof (n 13), para 122.

Ibid, para 87.

Ibid, paras 91, 120.

Ibid, para 131; Kuenzler (n 13).

L Palen and P Dourish, ‘Unpacking “Privacy” for a Networked World’ in G Cockton and P Korhonen (eds), Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (New York: ACM, 2003) 129–36; I Altman, ‘Privacy Regulation: Culturally Universal or Culturally Specific?’ (1977) 33 Journal of Social Issues 66.

E Goffman, The Presentation of Self in Everyday Life (London: Penguin, 1990).

For a contextualized vision of privacy see A Monti and R Wacks, Protecting Personal Information: The Right to Privacy Reconsidered (Oxford: Hart, 2019).

Viljoen (n 7).

Marwick (n 9); Boyd (n 9).

Waldman (n 7).

Richards and Hartzog (n 7).

Nissenbaum’s contextual integrity model inserts context and collectivity into rights-based models of privacy and acknowledges the reality of information dissemination without consent, Nissenbaum (n 10).

M Becker, ‘Privacy in the Digital Age: Comparing and Contrasting Individual versus Social Approaches Towards Privacy’ (2019) 21 Ethics and Information Technology 307.

Marwick and Boyd (n 6).

Bundesgerichtshof (n 13), paras 53, 58, 86, 94, 97, 119.

Ibid, para 103.

Marwick and Boyd (n 6); see further for an approach relying on integration A Kuenzler, ‘Advancing Quality Competition in Big Data Markets’ (2019) 15 Journal of Competition Law and Economics 500.

Users have been shown to engage in numerous creative tactics such as subtweeting, for instance, where they determine who can access the shared information by ignoring the technical features of social media and instead focusing on encoding the content itself to limit the audience, or manipulating technical affordances such as deactivating and activating profiles to gain control over the context in which posts are viewed and interpreted, see Boyd (n 9).

JE Cohen, Configuring the Networked Self: Law, Code, and the Play of Everyday Practice (New Haven: Yale University Press, 2012); A Kapczynski, ‘The Law of Informational Capitalism’ (2020) 129 Yale Law Journal 1276; Marwick and Boyd (n 6).

A Kuenzler, Restoring Consumer Sovereignty. How Markets Manipulate Us and What the Law Can Do About It (New York: Oxford University Press, 2017).

For elaborated proposals in this regard see T Wu, The Curse of Bigness. Antitrust in the New Gilded Age (Columbia Global Reports, 2018); Z Teachout, Break ’Em Up. Recovering our Freedom From Big Ag, Big Tech, and Big Money (New York: All Points, 2020). At present, the basic policy considerations in Europe tend to turn upon private control versus public open access approaches, European Commission, ‘A European Strategy for Data’ COM(2020) 66 final. Specific data portability rights are enshrined in Regulation (EU) 2016/679 of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data (General Data Protection Regulation), OJ L 119/1, art 20; Directive (EU) 2015/2366 of 25 November 2015 on Payment Services in the Internal Market, OJ L 337/35, arts 66–67; Directive (EU) 2019/944 of 5 June 2019 on Common Rules for the Internal Market for Electricity, OJ L 158/125, art 23.

V Mayer-Schönberger and K Cukier, Big Data: A Revolution That Will Transform How We Live, Work, and Think (London: John Murray, 2013); S Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (London: Hachette, 2019).

EM Douglas, ‘The New Antitrust/Data Privacy Interface’ (2021) 130 Yale Law Journal Forum 647; M Lemley, ‘The Contradictions of Platform Regulation’ (2021) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3778909> last accessed 12 October 2021; MK Ohlhausen and A Okuliar, ‘Competition, Consumer Protection, and the Right [Approach] to Privacy’ (2015) 80 Antitrust Law Journal 121.

A Acquisti, L Brandimarte and G Loewenstein, ‘Privacy and Human Behavior in the Age of Information’ (2015) 347 Science 509. In the search engine market, for instance, DuckDuckGo has long been trying to compete by offering more privacy. However, such attempts—at least under existing circumstances—have led to no significant threat to digital platforms’ prevailing business models, presumably because it is far more profitable, less risky and more certain for new entrants to compete on collecting and selling more user data.

Recent proposals by Google and Apple, for example, have been met with profound criticism for being technologies that seemingly intend to advance consumer privacy but in fact do very little to adjust the subtleties lying beneath the forces of an industry built on surveillance-driven advertising, D Geradin, D Katsifis and T Karanikioti, ‘Google As a De Facto Privacy Regulator: Analysing the Privacy Sandbox From an Antitrust Perspective’ (2021) European Competition Journal (forthcoming).

ME Stucke and A Ezrachi, Competition Overdose. How Free Market Mythology Transformed Us from Citizen Kings to Market Servants (New York: Harper Business, 2020).

A Kuenzler, ‘Intellectual Property on the Cusp of the Intangible Economy’ (2021) 16 Journal of Intellectual Property Law and Practice 692. The intuitive idea behind this reasoning is that while more intense competition may be beneficial in markets where firms make products that consumers want, increased competition is likely to be detrimental in markets where firms produce harmful, unwanted or undesirable products. Such competition will lead to a ‘race to the bottom’ rather than a ‘race to the top’ in terms of quality. Clearly, alongside social media platforms’ many benefits, these platforms also cause significant detriment. As a result, simply intensifying rivalry would further undermine privacy, Kuenzler (n 49).

Bundesgerichtshof (n 13), para 86.

Stucke and Ezrachi (n 58); Kuenzler (n 59).

Integration may thus even work to reinstate the ability of consumers to switch. Note, however, that realigning market incentives through integration requires careful consideration of existing market dynamics. For instance, Google’s and Apple’s current privacy-enhancing updates of their services, offering opt-out rights of third-party data collection, may be viewed as evidence that big technology platforms do indeed respond to consumer preferences. In reality, however, these updates could end up revealing more information about users rather than less. While both updates are thought to limit the quantity of user data available to other firms, they also run the risk of further entrenching the control of the incumbent undertakings over users’ data and of making it more difficult for other actors to compete with them. Instead of letting competitors collect user data on their own, Google and Apple will simply do the tracking themselves. A result of the companies’ dominance in the browser and mobile operating systems market is that fewer actors may get hold of their users’ profiles, with the effect that marketers (and publishers) may eventually flock to the incumbent companies’ owned and operated inventory because they have unique data-advantages that other actors cannot replicate, see Competition and Markets Authority, ‘Online Platforms and Digital Advertising. Market Study Final Report. Appendix G: The Role of Tracking in Digital Advertising’ (London, 2020).

Bundesgerichtshof (n 13), para 104.

Ibid, paras 105, 107, 115, 119.

CCPA of 2018, 2017 Cal AB 375 (2018), amended by the California Privacy Rights Act of 2020, Proposition 24, 1879 (19-0021A1).

Cohen (n 51); Boyd (n 9).

Wachter and Mittelstadt (n 7) have recently proposed the enactment of a novel right to reasonable inferences, requiring both an ex ante justification to be given by the data controller for making such inferences, and an ex post mechanism to enable data subjects to challenge them.

In a recent joint statement, the UK Competition and Markets Authority (CMA) and the Information Commissioner’s Office (ICO) acknowledge that regulatory interventions to restrict data access (ie data silos) can be a vital instrument in aligning competition and data processing interests, Competition and Markets Authority and Information Commissioner’s Office, Competition and Data Protection in Digital Markets: A Joint Statement Between the CMA and the ICO (London, 2021). The effect of providing users with a greater extent of control over dominant platforms’ own data-gathering activities might also be to alter the advertising behaviour of big technology firms, which might in turn alleviate the burden of smaller companies vis-à-vis bigger rivals to comply with existing data protection rules when seeking to sell the collected data for profit, see UK Competition and Markets Authority, Online Platforms and Digital Advertising. Market Study Final Report (London, 2020). From the point of view of social privacy, such rights give users more control over how to aptly present themselves on the internet.

Ioannidou (n 13).

European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council on Contestable and Fair Markets in the Digital Sector (Digital Markets Act)’ COM(2020) 842 final, art 5 (a).

For an excellent overview see J Laux, S Wachter and B Mittelstadt, ‘Neutralizing Online Behavioural Advertising: Algorithmic Targeting with Market Power as An Unfair Commercial Practice’ (2021) 58 Common Market Law Review 719. This is why here, integration is to be preferred over separation and switching, see sources quoted (nn 53–62) and accompanying text.

This finding is corroborated by the literature on dark-patterns in online choice architecture, J Luguri and LJ Strahilevitz, ‘Shining a Light on Dark Patterns’ (2021) 13 Journal of Legal Analysis 43; R Calo, ‘Digital Market Manipulation’ (2014) 82 George Washington Law Review 995.

L Floridi, The 4 th Revolution. How the Infosphere is Reshaping Human Reality (New York: Oxford University Press, 2014) 101–28; B Schneier, Data and Goliath. The Hidden Battles to Collect Your Data and Control Your World (New York: Norton, 2015).

JM Balkin, ‘The Fiduciary Model of Privacy’ (2020) 134 Harvard Law Review Forum 11; Kuenzler (n 13).

389 US 347 (1967); see SD Warren and LD Brandeis, ‘The Right to Privacy’ (1890) 4 Harvard Law Review 193–220; see also AF Westin, Privacy and Freedom (New York: Atheneum, 1967); R Gavison, ‘Privacy and the Limits of Law’ (1980) 89 Yale Law Journal 421.


  • Online ISSN 2044-4001
  • Print ISSN 2044-3994
  • Copyright © 2024 Oxford University Press

Feb 15, 2023

6 Example Essays on Social Media | Advantages, Effects, and Outlines

Got an essay assignment about the effects of social media? We've got you covered: check out our examples and outlines below.

Social media has, in a very short time, become one of our society's most prominent means of communication and information sharing. It has changed how we communicate, given us a platform to express our views and opinions and connect with others, and it keeps us informed about the world around us. Social media platforms such as Facebook, Twitter, Instagram, and LinkedIn have brought individuals from all over the world together, breaking down geographical borders and fostering a genuinely global community.

However, social media comes with its difficulties. With the rise of misinformation, cyberbullying, and privacy problems, it's critical to utilize these platforms properly and be aware of the risks. Students in the academic world are frequently assigned essays about the impact of social media on numerous elements of our lives, such as relationships, politics, and culture. These essays necessitate a thorough comprehension of the subject matter, critical thinking, and the ability to synthesize and convey information clearly and succinctly.

But where do you begin? It can be challenging to know where to start with so much information available. Jenni.ai comes in handy here. Jenni.ai is an AI application built exclusively for students to help them write essays more quickly and easily. Jenni.ai provides students with inspiration and assistance on how to approach their essays with its enormous database of sample essays on a variety of themes, including social media. Jenni.ai is the solution you've been looking for if you're experiencing writer's block or need assistance getting started.

So, whether you're a student looking to better your essay writing skills or want to remain up to date on the latest social media advancements, Jenni.ai is here to help. Jenni.ai is the ideal tool for helping you write your finest essay ever, thanks to its simple design, an extensive database of example essays, and cutting-edge AI technology. So, why delay? Sign up for a free trial of Jenni.ai today and begin exploring the worlds of social networking and essay writing!

Want to learn how to write an argumentative essay? Check out these inspiring examples!

We will provide various examples of social media essays so you may get a feel for the genre.

6 Examples of Social Media Essays

Here are 6 examples of Social Media Essays:

The Impact of Social Media on Relationships and Communication

Introduction:

The way we share information and build relationships has evolved as a direct result of the prevalence of social media in our daily lives. The influence of social media on interpersonal connections and conversation is a hot topic. Although social media has many positive effects, such as bringing people together regardless of physical proximity and making communication quicker and more accessible, it also has a dark side that can affect interpersonal connections and dialogue.

Positive Effects:

Connecting People Across Distances

One of social media's most significant benefits is its ability to connect individuals across long distances. People can use social media platforms to interact and stay in touch with friends and family far away. People can now maintain intimate relationships with those they care about, even when physically separated.

Improved Communication Speed and Efficiency

Additionally, the proliferation of social media sites has accelerated and simplified communication. Thanks to instant messaging, users can have short, timely conversations rather than lengthy ones via email. Furthermore, social media facilitates group communication, such as with classmates or employees, by providing a unified forum for such activities.

Negative Effects:

Decreased Face-to-Face Communication

The decline in in-person interaction is one of social media's most pernicious consequences on interpersonal connections and dialogue. People's reliance on digital communication over in-person contact has increased along with the popularity of social media. Face-to-face interaction has suffered as a result, which has adverse effects on interpersonal relationships and the development of social skills.

Decreased Emotional Intimacy

Another adverse effect of social media on relationships and communication is decreased emotional intimacy. Digital communication lacks the nonverbal cues and facial expressions critical in building emotional connections with others. This can make it more difficult for people to develop close and meaningful relationships, leading to increased loneliness and isolation.

Increased Conflict and Miscommunication

Finally, social media can also lead to increased conflict and miscommunication. The anonymity and distance provided by digital communication can lead to misunderstandings and hurtful comments that might not have been made face-to-face. Additionally, social media can provide a platform for cyberbullying, which can have severe consequences for the victim's mental health and well-being.

Conclusion:

In conclusion, the impact of social media on relationships and communication is a complex issue with both positive and negative effects. While social media platforms offer many benefits, such as connecting people across distances and enabling faster and more accessible communication, they also have a dark side that can negatively affect relationships and communication. It is up to individuals to use social media responsibly and to prioritize in-person communication in their relationships and interactions with others.

The Role of Social Media in the Spread of Misinformation and Fake News

Social media has revolutionized the way information is shared and disseminated. However, the ease and speed at which data can be spread on social media also make it a powerful tool for spreading misinformation and fake news. Misinformation and fake news can seriously affect public opinion, influence political decisions, and even cause harm to individuals and communities.

The Pervasiveness of Misinformation and Fake News on Social Media

Misinformation and fake news are prevalent on social media platforms, where they can spread quickly and reach a large audience. This is partly due to the way social media algorithms work: they prioritize content likely to generate engagement, such as sensational or controversial stories. As a result, false information can spread rapidly and be widely shared before it is fact-checked or debunked.
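The ranking dynamic described above can be sketched with a toy engagement-score model (purely illustrative; real platform ranking systems are proprietary and far more complex, and the weights below are invented):

```python
# Toy model of engagement-based feed ranking (illustrative only; the
# signal weights are invented and real ranking systems are proprietary).

def engagement_score(post):
    """Score a post by a weighted sum of engagement signals.

    Nothing in this score rewards accuracy, so a sensational post that
    attracts comments and shares outranks a verified one.
    """
    return (1.0 * post["likes"]
            + 3.0 * post["comments"]   # comments weighted higher: they signal controversy
            + 5.0 * post["shares"])    # shares weighted highest: they drive reach

def rank_feed(posts):
    """Order posts by descending engagement score."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "fact-checked report", "likes": 120, "comments": 10, "shares": 5},
    {"id": "sensational rumour", "likes": 40, "comments": 60, "shares": 50},
]

ranked = rank_feed(posts)
```

In this toy example the unverified post outranks the verified one purely on engagement signals, which is the structural point the paragraph makes: no term in the score measures whether the content is true.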

The Influence of Social Media on Public Opinion

Social media can significantly impact public opinion, as people are more likely to believe information they see shared by their friends and followers. This can create a self-reinforcing cycle in which misinformation and fake news are spread and reinforced, even in the face of evidence to the contrary.

The Challenge of Correcting Misinformation and Fake News

Correcting misinformation and fake news on social media can be a challenging task. This is partly due to the speed at which false information spreads and the difficulty of reaching the same audience that was exposed to the false information in the first place. Additionally, some individuals may resist correction, particularly if the incorrect information supports their existing beliefs or biases.

In conclusion, the function of social media in disseminating misinformation and fake news is complex and urgent. While social media has revolutionized the sharing of information, it has also made it simpler for false information to propagate and be widely believed. Individuals must be accountable for the information they share and consume, and social media firms must take measures to prevent the spread of disinformation and fake news on their platforms.

The Effects of Social Media on Mental Health and Well-Being

Social media has become an integral part of modern life, with billions of people around the world using platforms like Facebook, Instagram, and Twitter to stay connected with others and access information. However, while social media has many benefits, it can also negatively affect mental health and well-being.

Comparison and Low Self-Esteem

One of the key ways that social media can affect mental health is by promoting feelings of comparison and low self-esteem. People often present a curated version of their lives on social media, highlighting their successes and hiding their struggles. This can lead others to compare themselves unfavorably to these idealized portrayals, resulting in feelings of inadequacy and low self-esteem.

Cyberbullying and Online Harassment

Another way that social media can negatively impact mental health is through cyberbullying and online harassment. Social media provides a platform for anonymous individuals to harass and abuse others, leading to feelings of anxiety, fear, and depression.

Social Isolation

Despite its name, social media can also contribute to feelings of isolation. People may have many online friends yet lack meaningful in-person connections and support. This can lead to feelings of loneliness and depression.

Addiction and Overuse

Finally, social media can be addictive, leading to overuse and negatively impacting mental health and well-being. People may spend hours each day scrolling through their feeds, neglecting other important areas of their lives, such as work, family, and self-care.

In sum, social media has both positive and negative consequences for one's psychological and emotional well-being. Recognizing this, and taking measures such as limiting social media use, reaching out to loved ones for support, and prioritizing one's well-being, is crucial. In addition, it is vital that social media giants take ownership of their platforms and actively promote good mental health and well-being.

The Use of Social Media in Political Activism and Social Movements

Social media has recently become increasingly crucial in political action and social movements. Platforms such as Twitter, Facebook, and Instagram have given people new ways to express themselves, organize protests, and raise awareness about social and political issues.

Raising Awareness and Mobilizing Action

One of the most important uses of social media in political activism and social movements has been to raise awareness about important issues and mobilize action. Hashtags such as #MeToo and #BlackLivesMatter, for example, have brought attention to sexual harassment and racial injustice, respectively. Similarly, social media has been used to organize protests and other political actions, allowing people to band together and make their voices heard at a larger scale.

Connecting with like-minded individuals

A second way in which social media has been used in political activism and social movements is to connect like-minded individuals. Through social media, individuals can join online groups, share knowledge and resources, and work with others to accomplish shared objectives. This has been especially significant for geographically scattered individuals or those without access to traditional means of political organizing.

Challenges and Limitations

Despite its many advantages as a vehicle for political activism and social movements, social media also faces obstacles and limitations. For instance, the propagation of misinformation and fake news on social media can impede attempts to disseminate accurate and reliable information. In addition, social media corporations have been criticized for censorship and insufficient protection of user rights.

In conclusion, social media has emerged as a potent instrument for political activism and social movements, giving voice to previously unheard communities and galvanizing support for change. Social media presents many opportunities for communication and collaboration. Still, users and institutions must be conscious of the risks and limitations of these tools to promote their responsible and productive usage.

The Potential Privacy Concerns Raised by Social Media Use and Data Collection Practices

With billions of users each day on sites like Facebook, Twitter, and Instagram, social media has become ingrained in every aspect of our lives. While these platforms offer a straightforward way to communicate with others and exchange information, they also raise significant concerns over data collection and privacy. This article will examine the privacy issues posed by social media use and data-gathering practices.

Data Collection and Sharing

The gathering and sharing of personal data is one of the most significant privacy issues raised by social media use. Social networking sites gather user data, including details about their relationships, hobbies, and routines. This information is made available to third-party businesses for various uses, such as marketing and advertising, which raises serious concerns about who has access to our personal information and how it is used.

Lack of Control Over Personal Information

The absence of user control over personal information is another significant privacy issue raised by social media use. Once information has been posted, it is challenging to limit who has access to it and how it is used. Sensitive information may end up being widely disseminated and may be used maliciously as a result.

Personalized Marketing

Social media companies use the information they gather about users to target them with ads relevant to their interests and usage patterns. Although this can be useful, it may also cause consumers to worry about their privacy, since they may feel that their personal information is being used without their permission. Furthermore, there are concerns about the integrity of the data used to target users and the possibility of discrimination based on individual traits.
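A minimal sketch of how interest-based targeting works (field names and matching logic are invented for illustration; real ad platforms rely on far richer behavioral profiles and learned models):

```python
# Hypothetical sketch of interest-based ad targeting. The field names
# and the simple set-overlap matching rule are invented for illustration.

def eligible_ads(user_profile, ads):
    """Return the ads whose targeting interests overlap the user's profile."""
    interests = set(user_profile["interests"])
    return [ad for ad in ads if interests & set(ad["target_interests"])]

user = {"id": "u1", "interests": ["running", "travel"]}
ads = [
    {"ad_id": "a1", "target_interests": ["running", "fitness"]},
    {"ad_id": "a2", "target_interests": ["cooking"]},
]

matched = eligible_ads(user, ads)  # only the ad overlapping the profile
```

Even in this stripped-down form, the matching depends entirely on personal data the platform has collected about the user, which is why people may feel their information is being used without meaningful consent.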

Government Surveillance

Social media use can also spark worries about government surveillance. In some countries, governments use social media platforms to track and monitor citizens, raising significant concerns about privacy and free expression.

In conclusion, social media use raises significant concerns regarding data collection and privacy. While these platforms make it easy to interact with people and exchange information, they also gather a great deal of personal information, which raises questions about who may access it and how it will be used. Users should be aware of these privacy issues and take precautions to safeguard their personal information, such as exercising caution when choosing what details to disclose on social media and minimizing the information they share with other firms.

The Ethical and Privacy Concerns Surrounding Social Media Use and Data Collection

Social media has become a crucial part of our everyday lives: we use it to communicate with loved ones, gather information, and even conduct business. Its extensive use, however, raises ethical and privacy issues that must be addressed. This article covers the influence of social media use and data collection on user rights, the accountability of social media companies, and the need for improved regulation.

Effect on Individual Privacy:

Social networking sites gather vast amounts of personal data from their users, including sensitive information such as search history, location data, and even health data. This data can be used to build a detailed profile of each user, which may be sold to advertisers or used for other purposes. Because social media companies can use these profiles to target users with customized adverts, concerns arise about the privacy of personal information.

Additionally, individuals may not realize how much of their personal information is being gathered and exploited. Data breaches or the unauthorized sharing of personal information with other parties can expose sensitive information. Users should review the privacy policies of social media companies and take precautions to secure their data.

Responsibility of Social Media Companies:

Social media firms should ensure that they responsibly and ethically gather and use user information. This entails establishing strong security measures to safeguard sensitive information and ensuring users are informed of what information is being collected and how it is used.

Nevertheless, many social media companies have come under fire for failing to uphold these obligations. The Cambridge Analytica incident, for instance, showed how Facebook users' personal information was exploited for political purposes without their knowledge. This demonstrates the need to hold social media corporations accountable for their actions and for safeguarding the security and privacy of their users.

Better Regulation Is Needed

Given social media's effect on individual privacy and the obligations of social media firms, there is a need for tighter regulation in this field. This includes creating laws and regulations that ensure social media companies gather and use user information ethically and responsibly, and that users are aware of their rights and able to control the information collected about them.

Additionally, legislation should ensure that social media businesses are held responsible for their behavior, for example, by levying fines for data breaches or the unauthorized use of personal data. This will provide social media businesses with a significant incentive to prioritize their users' privacy and security and ensure they are upholding their obligations.

In conclusion, social media has fundamentally changed how we engage and communicate with one another, but this increased convenience also raises several ethical and privacy issues. Essential concerns that need to be addressed include the effect of social media on individual privacy, the accountability of social media businesses, and the requirement for greater regulation to safeguard user rights. We can make everyone's online experience safer and more secure by looking more closely at these issues.

In conclusion, social media is a complex and multifaceted topic that has recently captured the world's attention. With its ever-growing influence on our lives, it's no surprise that it has become a popular subject for students to explore in their writing. Whether you are writing an argumentative essay on the impact of social media on privacy, a persuasive essay on the role of social media in politics, or a descriptive essay on the changes social media has brought to the way we communicate, there are countless angles to approach this subject.

However, writing a comprehensive and well-researched essay on social media can be daunting. It requires a thorough understanding of the topic and the ability to articulate your ideas clearly and concisely. This is where Jenni.ai comes in. Our AI-powered tool is designed to help students like you save time and energy and focus on what truly matters: your education. With Jenni.ai, you'll have access to a wealth of examples and receive personalized writing suggestions and feedback.

Whether you're a student who's just starting your writing journey or looking to perfect your craft, Jenni.ai has everything you need to succeed. Our tool provides you with the necessary resources to write with confidence and clarity, no matter your experience level. You'll be able to experiment with different styles, explore new ideas, and refine your writing skills.

So why waste your time and energy struggling to write an essay on your own when you can have Jenni.ai by your side? Sign up for our free trial today and experience the difference for yourself! With Jenni.ai, you'll have the resources you need to write confidently, clearly, and creatively. Get started today and see just how easy and efficient writing can be!

What is The Impact of Social Media on Privacy?

As social media has become more popular, people have become increasingly worried about the privacy implications of using these platforms.

Additionally, many people are concerned about the way that social media companies collect and use data about their users.

These concerns have led to calls for tighter regulation of social media companies and increased awareness of the importance of protecting one’s privacy online. However, it remains to be seen how effective such measures will be in protecting users’ privacy.

1. Social media has impacted privacy by enabling people to share personal information

Social media has impacted privacy by enabling people to share personal information with a wider audience than ever before:

Social media platforms like Twitter allow users to make posts publicly visible, meaning a post can reach a far larger audience than was possible before such platforms existed.

Additionally, many people identify closely with their online personas on these platforms and feel comfortable sharing their personal thoughts and information with a wide audience.

2. Social media has led to more breaches of privacy

Recent events have confirmed everyone’s worst fears about the dangers of posting too much personal information online on social media platforms:

For example, many people were extremely concerned when it was revealed that Cambridge Analytica had harvested the Facebook data of roughly 50 million users for political advertising during Donald Trump's 2016 presidential campaign.

In another instance, Equifax, one of the major US credit reporting agencies, announced that hackers had stolen sensitive financial data belonging to over 140 million customers, potentially compromising both individual and national security.

This is a problem because once users share information online, they lose control over it: it may be used for purposes they do not approve of, or in ways they do not expect.

3. Social media has impacted privacy by enabling the collection of private data

Social media platforms are able to collect data about their users via their website logs, search engines, cookies, third-party apps and other sources.

Platforms can use this information for targeted advertising or even sell it to third parties without the user's knowledge.

Although many social media platforms have claimed that they use this information ethically, there is still cause for concern.

For example, Facebook was recently forced to admit that it had been collecting Android users' call records and text message history since 2017 without letting them know.

Although Facebook claims that they have done this in order to improve their messaging service, the fact remains that they were using people’s personal data for financial gain without permission.
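The mechanism behind this kind of cross-site data collection can be illustrated with a toy model. The sketch below is purely illustrative (the site categories and ID format are hypothetical, not any real platform's implementation): a third-party tracker embedded on many unrelated sites sets a single identifier cookie in the browser and aggregates every visit into one interest profile, which is what makes targeted advertising possible.

```python
from collections import Counter

class ThirdPartyTracker:
    """Toy model of cross-site tracking: one tracker ID cookie is
    shared across every site that embeds the same tracker."""

    def __init__(self):
        self._next_id = 0
        self.profiles = {}  # tracker_id -> Counter of interest categories

    def get_or_set_cookie(self, browser_cookies):
        # Reuse the cookie if the browser already carries it;
        # otherwise assign a fresh tracker ID (a "third-party cookie").
        if "tracker_id" not in browser_cookies:
            browser_cookies["tracker_id"] = f"uid-{self._next_id}"
            self._next_id += 1
        return browser_cookies["tracker_id"]

    def record_visit(self, browser_cookies, site_category):
        # Every embedding site reports the visit under the same ID.
        uid = self.get_or_set_cookie(browser_cookies)
        self.profiles.setdefault(uid, Counter())[site_category] += 1

    def top_interest(self, browser_cookies):
        # The category an advertiser would target this user with.
        uid = browser_cookies.get("tracker_id")
        profile = self.profiles.get(uid, Counter())
        return profile.most_common(1)[0][0] if profile else None

# One browser visiting unrelated sites that all embed the same tracker:
tracker = ThirdPartyTracker()
cookies = {}  # the browser's cookie jar
tracker.record_visit(cookies, "fitness")
tracker.record_visit(cookies, "fitness")
tracker.record_visit(cookies, "travel")
print(tracker.top_interest(cookies))  # -> fitness
```

The point of the sketch is that the user never hands over a profile directly; it is inferred from browsing behavior linked by a single identifier, which is why blocking third-party cookies disrupts this kind of tracking.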

4. The full impact on privacy is unknown because it is hard to regulate social media companies

Even though there are now calls for greater regulation of the social media industry, it is far from certain that this will actually be effective in protecting people’s privacy.

Although the European Union has taken steps to protect internet users by enforcing strict data protection laws, even these measures have not stopped Facebook and Google from allegedly tracking users across websites and storing cookies on their devices, reportedly even for people with no accounts on the platforms.

This suggests that tighter regulations may not solve the problem of personal information being used without consent.

5. Users should be aware of the risks and take steps to protect their own privacy online

Despite the challenges that lie ahead when it comes to protecting user privacy on social media, it is important for users to be aware of the risks and take steps in order to protect themselves.

Many people are now deleting their Facebook accounts, or at least considering it, after becoming more conscious of where their data is going and what it is being used for.

Additionally, users should not share private information via social media platforms unless absolutely necessary (for example, keep financial details like credit card numbers and expiry dates private), as this will reduce the risk of cyber theft.

6. There are ways that users can minimize the risk of their private information being exposed online

  • Avoid sharing any personal information online
  • Delete social media accounts that do not add much value to their lives
  • Only sign in and use social media platforms via the most secure internet connections possible (e.g., using a VPN)
  • Be aware of when companies are tracking them online and avoid letting their browser save cookies on their devices
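As one small, concrete example of the last point, links copied from social media often carry tracking parameters (such as utm_* or fbclid) that record where and how you found a page. Here is a short Python sketch, using only the standard library, that strips the common ones before a URL is reshared; the parameter list is illustrative, not exhaustive:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Well-known tracking parameters; extend these lists as needed.
TRACKING_PARAMS = {"fbclid", "gclid", "igshid", "mc_eid"}
TRACKING_PREFIXES = ("utm_",)

def strip_tracking(url: str) -> str:
    """Remove well-known tracking query parameters from a URL,
    so sharing it leaks less about where you found it."""
    parts = urlsplit(url)
    kept = [
        (key, value)
        for key, value in parse_qsl(parts.query, keep_blank_values=True)
        if key not in TRACKING_PARAMS
        and not key.startswith(TRACKING_PREFIXES)
    ]
    # Rebuild the URL with only the non-tracking parameters.
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(strip_tracking(
    "https://example.com/article?id=42&utm_source=newsletter&fbclid=abc123"
))  # -> https://example.com/article?id=42
```

This is a narrow defense: it only removes identifiers embedded in links, and does nothing about cookies or other tracking, but it illustrates how much metadata an ordinary shared link can carry.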

Even though there are ways for users to protect their own privacy, there are also other ways in which people's sensitive information can be leaked online, such as when they open attachments or click on links that they should not have.

This is another reason why users should be careful about what they open/click on and avoid doing this over an unsecured internet connection.

Although social media platforms like Facebook and Twitter give users the option to set their profiles to private, these measures offer only limited protection against others finding out information about them.

Protecting yourself from sharing too much personal information online is difficult because it requires constant vigilance.

Social media sites are designed specifically to get users to share things willingly without thinking about what the consequences may be.

The question of whether social media harms privacy remains unresolved, because different studies investigating the issue have produced very different results.

Some studies suggest that social media reduces privacy because people share more information online than they otherwise would.

On the other hand, other studies suggest that sharing information on social media may actually enhance privacy, because it allows users to selectively reveal only certain things about themselves, i.e., only what they want others to know.

As yet, it is difficult to reach a definitive conclusion as to whether social media harms privacy, since findings differ so widely between studies.

Some research has found that people are reluctant to share personal details unless they are certain their posts will remain private, while other studies have shown that users regularly disclose sensitive information even when they know it is not private.

Even though this contradiction suggests that social media does impact privacy in some way, the specific nature of the relationship between these two factors remains unclear.

Wrapping Up:

Although social media can have a negative impact on privacy by encouraging users to share information they should not reveal, the relationship between information sharing and reduced privacy is complex, and different studies investigating it have produced very different results.

Social media platforms have given rise to many benefits such as connecting with friends and family members who live far away, but they have also enabled websites to collect far more data than was previously possible.

This unfortunately means that people’s personal information is often exploited for financial gain or even stolen by hackers without them knowing about it.

Although tighter regulations may be enforced in the future to better protect data from those who lack permission to access it, social media will always carry some risk as long as people continue to share things about themselves online without thinking carefully first.



Florida has banned kids from using social media, but it won't be that simple.

By David French

Opinion Columnist

My entire life I’ve seen a similar pattern. Older generations reflect on the deficiencies of “kids these days,” and they find something new to blame. The latest technology and new forms of entertainment are always bewitching our children. In my time, I’ve witnessed several distinct public panics over television, video games and music. They’ve all been overblown.

This time, however, I’m persuaded — not that smartphones are the sole cause of increasing mental health problems in American kids, but rather that they’re a prime mover in teen mental health in a way that television, games and music are not. No one has done more to convince me than Jonathan Haidt. He’s been writing about the dangers of smartphones and social media for years, and his latest Atlantic story masterfully marshals the evidence for smartphones’ negative influence on teenage life.

At the same time, however, I’m wary of government intervention to suppress social media or smartphone access for children. The people best positioned to respond to their children’s online life are parents, not regulators, and it is parents who should take the lead in responding to smartphones. Otherwise, we risk a legal remedy that undermines essential constitutional doctrines that protect both children and adults.

I don’t want to minimize the case against phones. Haidt’s thesis is sobering:

Once young people began carrying the entire internet in their pockets, available to them day and night, it altered their daily experiences and developmental pathways across the board. Friendship, dating, sexuality, exercise, sleep, academics, politics, family dynamics, identity — all were affected.

The consequences, Haidt argues, have been dire. Children — especially teenagers — are suffering from greater rates of anxiety and depression, and suicide rates have gone up; and they spend less time hanging out with friends, while loneliness and friendlessness are surging.

Neither smartphones nor social media are solely responsible for declining teen mental health. The rise of smartphones correlates with a transformation of parenting strategies, away from permitting free play and in favor of highly managed schedules and copious amounts of organized sports and other activities. The rise of smartphones also correlates with the fraying of our social fabric. Even there, however, the phones have their roles to play. They provide a cheap substitute for in-person interaction, and the constant stream of news can heighten our anxiety.

I’m so convinced that smartphones have a significant negative effect on children that I’m now much more interested in the debate over remedies. What should be done?

That question took on added urgency Tuesday, when Ron DeSantis, the governor of Florida, signed a bill banning children under 14 from having social media accounts and requiring children under 16 to have parental permission before opening an account. The Florida social media bill is one of the strictest in the country, but Florida is hardly the only state that is trying to regulate internet access by minors. Utah passed its own law; so have Ohio and Arkansas. California passed a bill mandating increased privacy protections for children using the internet.

So is this — at long last — an example of the government actually responding to a social problem with a productive solution? New information has helped us understand the dangers of a commercial product, and now the public sector is reacting with regulation and limitation. What’s not to like?

Quite a bit, actually. Federal courts have blocked enforcement of the laws in Ohio , Arkansas and California . Utah’s law faces a legal challenge and Florida’s new law will undoubtedly face its day in court as well. The reason is simple: When you regulate access to social media, you’re regulating access to speech, and the First Amendment binds the government to protect the free-speech rights of children as well as adults.

In a 2011 case, Brown v. Entertainment Merchants Association, the Supreme Court struck down a California law banning the sale of violent video games to minors. The 7-to-2 decision featured three Democratic appointees joining with four Republican appointees. Justice Antonin Scalia, writing for the majority, reaffirmed that “minors are entitled to a significant measure of First Amendment protection, and only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to them.”

The state certainly has power to protect children from harm — as laws restricting children’s access to alcohol and tobacco attest — but that power “does not include a free-floating power to restrict the ideas to which children may be exposed,” the majority opinion said. Consequently, as the court has repeatedly observed, “Speech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.”

Lawmakers and parents may find this doctrine frustrating, but there is a genuine method to the free-speech madness, even for children. In a free-speech case from 1982, Island Trees School District v. Pico, Justice William Brennan cast doubt on a public school district’s effort to remove “improper” books from library shelves and wrote powerfully in support of student free speech and students’ access to ideas. “Just as access to ideas makes it possible for citizens generally to exercise their rights of free speech and press in a meaningful manner,” Brennan argued, “such access prepares students for active and effective participation in the pluralistic, often contentious society in which they will soon be adult members.”

Justice Brennan is exactly right. We can’t shelter children from debate and dialogue and then expect them to emerge in college as grown-ups, ready for liberal democracy. Raising citizens in a flourishing republic is a process, one that isn’t susceptible to one-size-fits all bans on speech and expression, even if that speech or expression poses social and emotional challenges for today’s teens.

Compounding the problem, social media bans are almost always rooted at least in part in the content on the platforms. It’s the likes, comments, fashions, and trends that cause people to obsess over social media. Yet content discrimination is uniquely disfavored in First Amendment law. As the Supreme Court has repeatedly explained, one of the most basic First Amendment principles is that “as a general matter, the government has no power to restrict expression because of its message, its ideas, its subject matter, or its content.”

For content discrimination to be lawful, it has to pass the most difficult of legal tests, a test called “strict scrutiny.” This means that the law is only constitutional if it advances a “compelling government interest and is narrowly drawn to serve that interest.” While one can certainly agree that protecting the mental health of young people is a compelling interest, it is much more difficult to argue that sweeping bans that cut off children from gaining access to a vast amount of public debate and information are “narrowly drawn.”

Finally, attempting to restrict minors’ access to social media can implicate and limit adult speech. Age verification measures would require both adult and child users of social media platforms to reveal personally identifying information as a precondition for fully participating in the American marketplace of ideas.

It’s for these reasons (and others) that federal district judges in California, Arkansas and Ohio have blocked enforcement of each state’s social media law, and it’s for these reasons that the laws in Utah and Florida rightly face an uphill legal climb.

The government isn’t entirely powerless in the face of online harms. I think it is entirely proper to attempt to age-limit online access to pornography . The Supreme Court has permitted state and local governments to use zoning laws to push porn shops into specific, designated areas of the community, and “zoning” online porn for adults only should be entirely proper as well. The Supreme Court hasn’t permitted age-gating pornography yet , but its prior objections were rooted in part in the technical challenges to age verification. With better technology comes better capability to reasonably and easily distinguish between children and adults.

The distinction between social media and pornography should be obvious. There is a difference between denying minors access to content they have no right to see or produce, and denying them access to content they have a right both to see and to produce.

It is also entirely proper to ban smartphones in schools. The court has long held that the First Amendment rights of students should be construed “in light of the special characteristics of the school environment.” And it’s highly likely that courts would uphold phone bans as a means of preventing proven distractions during instruction.

But the primary responsibility for policing kids’ access to phones should rest with parents, not with the state. Not every social problem has a governmental solution, and the more that the problem is rooted in the inner life of children, the less qualified the government is to address it.

And don’t think that a parent-centered approach to dealing with the challenge of online generation is inherently inadequate. As we’ve seen throughout American history, parenting cultures can change substantially, based on both information and experience. Public intellectuals like Jonathan Haidt perform an immense public service by informing the public, and just as parents adjust children’s diets or alter discipline habits in response to new information, they can change the culture around cellphones.

In fact, there are signs this is already happening. I have three children — aged 25, 23 and 16 — and I can personally attest to the changing culture in my little corner of the world. I gave my oldest two kids iPhones when they were 12 and 11, and granted access to Facebook and Instagram with little thought to the consequences. Most of my peers did the same.

Quickly enough, we learned our mistake. When my youngest entered middle school, I noticed that parents were far more cautious. We talked about phone use, and we tried to some extent to adopt an informal, collaborative approach so that no member of the friend group was alone and isolated while all her peers were texting on their phones and posting online. It didn’t work perfectly, and my daughter spent a few unpleasant months as the last friend without a phone at age 15, but awareness of the risks was infinitely higher, and even when children did receive phones, the controls on use were much tighter.

One of the core responsibilities of the American government at all levels is to protect the liberty of its citizens, especially those liberties enumerated in the Bill of Rights. At the same time, it is the moral obligation of the American people to exercise those liberties responsibly. Haidt and the countless researchers who’ve exposed the risks of online life are performing an invaluable role. They’re giving parents the information we need to be responsible. But the First Amendment rights of adults and children are too precious to suppress, especially when parents are best positioned to protect children from harm online.

David French is an Opinion columnist, writing about law, culture, religion and armed conflict. He is a veteran of Operation Iraqi Freedom and a former constitutional litigator. His most recent book is “Divided We Fall: America’s Secession Threat and How to Restore Our Nation.” You can follow him on Threads (@davidfrenchjag).

Supreme Court wary of restricting government contact with social media platforms in free speech case

By Melissa Quinn

Updated on: March 18, 2024 / 8:43 PM EDT / CBS News

Washington — The Supreme Court on Monday appeared wary of limiting the Biden administration's contacts with social media platforms in a closely watched dispute that tests how much the government can pressure social media companies to remove content before crossing a constitutional line from persuasion into coercion.

The case, known as Murthy v. Missouri, arose out of efforts during the early months of the Biden administration to push social media platforms to take down posts that officials said spread falsehoods about the pandemic and the 2020 presidential election. 

A U.S. district court judge said White House officials, as well as some federal agencies and their employees, violated the First Amendment's right to free speech by "coercing" or "significantly encouraging" social media sites' content-moderation decisions. The judge issued an injunction restricting the Biden administration's contacts with platforms on a variety of issues, though that order has been on hold.

During oral arguments on Monday, the justices seemed skeptical of a ruling that would broadly restrict the government's communications with social media platforms, raising concerns about hamstringing officials' ability to communicate with platforms about certain matters.

"Some might say that the government actually has a duty to take steps to protect the citizens of this country, and you seem to be suggesting that that duty cannot manifest itself in the government encouraging or even pressuring platforms to take down harmful information," Justice Ketanji Brown Jackson told Benjamin Aguiñaga, the Louisiana solicitor general. "I'm really worried about that, because you've got the First Amendment operating in an environment of threatening circumstances from the government's perspective, and you're saying the government can't interact with the source of those problems."

Justice Amy Coney Barrett warned Aguiñaga that one of the proposed standards for determining when the government's actions cross the line into unlawful speech suppression — namely when a federal agency merely encourages a platform to remove problematic posts — "would sweep in an awful lot." She questioned whether the FBI could reach out to a platform to encourage it to take down posts sharing his and other Louisiana officials' home addresses and calling on members of the public to rally.

Aguiñaga said the FBI could be encouraging a platform to suppress constitutionally protected speech.

The legal battle is one of five that the Supreme Court is considering this term that stand at the intersection of the First Amendment's free speech protections and social media. It was also the first of two that the justices heard Monday that involves alleged jawboning, or informal pressure by the government on an intermediary to take certain actions that will suppress speech.

The second case raises the question of whether a New York financial regulator violated the National Rifle Association's free speech rights when she pressured banks and insurance companies in the state to sever ties with the gun rights group after the 2018 shooting in Parkland, Florida. Decisions from the Supreme Court in both cases are expected by the end of June.

The Biden administration's efforts to stop misinformation

The social media case stems from the Biden administration's efforts to pressure platforms, including Twitter, now known as X, YouTube and Facebook, to take down posts it believed spread falsehoods about the pandemic and the last presidential election.

The challenge, brought by five social media users and two states, Louisiana and Missouri, claimed that their speech was stifled when platforms removed or downgraded their posts after strong-arming by officials in the White House, Centers for Disease Control, FBI and Department of Homeland Security.

The challengers alleged that at the heart of their case is a "massive, sprawling federal 'Censorship Enterprise,'" through which federal officials communicated with social media platforms with the goal of pressuring them to censor and suppress speech they disfavored.

U.S. District Judge Terry Doughty found that seven groups of Biden administration officials violated the First Amendment because they transformed the platforms' content-moderation decisions into state action by "coercing" or "significantly encouraging" their activities. He limited the types of communications agencies and their employees could have with the platforms, but included several carve-outs.

The U.S. Court of Appeals for the 5th Circuit then determined that certain White House officials and the FBI violated free speech rights when they coerced and significantly encouraged platforms to suppress content related to COVID-19 vaccines and the election. It narrowed the scope of Doughty's order but said federal employees could not "coerce or significantly encourage" a platform's content-moderation decisions.

The justices in October agreed to decide whether the Biden administration impermissibly worked to suppress speech on Facebook, YouTube and X. The high court temporarily paused the lower court's order limiting Biden administration officials' contact with social media companies.

In filings with the court, the Biden administration argued that the social media users and states lack legal standing to even bring the case, but said officials must be free "to inform, to persuade, and to criticize."

"This case should be about that fundamental distinction between persuasion and coercion," Brian Fletcher, principal deputy solicitor general, told the justices. 

Fletcher argued that the states and social media users were attempting to use the courts to "audit all of the executive branch communications with and about social media platforms," and said administration officials' public statements are "classic bully pulpit exhortations."

But Aguiñaga told the justices that the platforms faced "unrelenting pressure" from federal officials to suppress protected speech.

"The government has no right to persuade platforms to violate Americans' constitutional rights," he said. "And pressuring platforms in in backrooms shielded from public view is not using the bully pulpit at all. That's just being a bully."

The oral arguments

Several of the justices questioned whether the social media users who brought the suit demonstrated that they suffered a clear injury traceable to the government, or could show that an injunction against the government would redress future injuries caused by the platforms' content moderation, both of which must be shown to bring a challenge in federal court.

"I have such a problem with your brief," Justice Sonia Sotomayor told Aguiñaga. "You omit information that changes the context of some of your claims. You attribute things to people that it didn't happen to. ... I don't know what to make of all this because I'm not sure how we get to prove direct injury in any way."

Aguiñaga apologized and said he takes "full responsibility" for any aspects of their filings that were not forthcoming.

Justice Elena Kagan asked Aguiñaga to point to the piece of evidence that most clearly showed that the government was responsible for his clients having material taken down.

"We know that there's a lot of government encouragement around here," she said. "We also know that the platforms are actively content moderating, and they're doing that irrespective of what the government wants, so how do you decide that it's government action as opposed to platform action?"

The justices frequently raised communications between the federal government and the press, which often involve heated discussions.

Justice Samuel Alito referenced emails between federal officials and platforms, some of which he said showed "constant pestering" by White House employees and requests for meetings with the social media sites.

"I cannot imagine federal officials taking that approach to the print media, our representatives over there," he said, referencing the press section in the courtroom. "If you did that to them, what do you think the reaction would be?"

Alito speculated that the reason the federal officials felt free to pressure the platforms was that the government has Section 230, a key legal shield for social media companies, and possible antitrust action "in its pocket," which he called "big clubs available to it."

"It's treating Facebook and these other platforms like they're subordinates," Alito said. "Would you do that to the New York Times or the Wall Street Journal or the Associated Press or any other big newspaper or wire service?"

Fletcher conceded that officials' anger is "unusual," but said it's not odd for there to be a back-and-forth between White House employees and the media.

Kavanaugh, though, said that he "assumed, thought, experienced government press people throughout the federal government who regularly call up the media and berate them." He also noted that "platforms say no all the time to the government."

Chief Justice John Roberts — noting that he has "no experience coercing anybody" — said the government is "not monolithic, and that has to dilute the concept of coercion significantly." Roberts said one agency may be attempting to coerce a platform one way, while another may be pushing it to go the other direction.

The NRA's court fight

In the second case, the court considered whether the former superintendent of the New York State Department of Financial Services violated the NRA's free speech rights when she pushed regulated insurance companies and banks to stop doing business with the group.

Superintendent Maria Vullo, who left her post in 2019, had been investigating two insurers involved in NRA-endorsed affinity programs, Chubb and Lockton, since 2017, and determined they violated state insurance law. The investigation also found that a third insurer, Lloyd's of London, underwrote similar unlawful insurance products for the NRA.

Then, after the Parkland school shooting in February 2018, Vullo issued guidance letters that urged regulated entities "to continue evaluating and managing their risks, including reputational risks" that may arise from their dealings with the NRA or similar gun rights groups.

Later that year, the Department of Financial Services entered into consent decrees with the three insurance companies it was investigating. As part of the agreements, the insurers admitted they provided some unlawful NRA-supported programs and agreed to stop providing the policies to New York residents. 

The NRA then sued the department, alleging that Vullo privately threatened insurers with enforcement action if they continued working with the group and created a system of "informal censorship" that was designed to suppress its speech, in violation of the First Amendment.

A federal district court sided with the NRA, finding that the group sufficiently alleged that Vullo's actions "could be interpreted as a veiled threat to regulated industries to disassociate with the NRA or risk DFS enforcement action."

But a federal appeals court disagreed and determined that the guidance letters and a press release couldn't "reasonably be construed as being unconstitutionally threatening or coercive," because they "were written in an even-handed, nonthreatening tone" and used words intended to persuade, not intimidate.

The NRA appealed the decision to the Supreme Court, which agreed to consider whether Vullo violated the group's free speech rights when she urged financial entities to sever their ties with it.

"Allowing unpopular speech to form the basis for adverse regulatory action under the guise of 'reputational risk,' as Vullo attempted here, would gut a core pillar of the First Amendment," the group, which is represented in part by the American Civil Liberties Union, told the court in a filing .

The NRA argued that Vullo "openly targeted the NRA for its political speech and used her extensive regulatory authority over a trillion-dollar industry to pressure the institutions she oversaw into blacklisting the organization."

"In the main, she succeeded," the organization wrote. "But in doing so, she violated the First Amendment principle that government regulators cannot abuse their authority to target disfavored speakers for punishment."

Vullo, though, told the court that the insurance products the NRA was offering its members were unlawful, and noted that the NRA itself signed a consent order with the department after she left office, when the department found the group had been marketing insurance products without the proper license from the state.

"Accepting the NRA's arguments would set an exceptionally dangerous precedent," lawyers for the state wrote in a Supreme Court brief. "The NRA's arguments would encourage damages suits like this one and deter public officials from enforcing the law — even against entities like the NRA that committed serious violations."

The NRA, they claimed, is asking the Supreme Court to give it "favored status because it espouses a controversial view," and the group has never claimed that it was unable to exercise its free speech rights.


Melissa Quinn is a politics reporter for CBSNews.com. She has written for outlets including the Washington Examiner, Daily Signal and Alexandria Times. Melissa covers U.S. politics, with a focus on the Supreme Court and federal courts.

