Cureus, v.12(6); 2020 Jun
Social Media Use and Its Connection to Mental Health: A Systematic Review

Fazida Karim

1 Psychology, California Institute of Behavioral Neurosciences and Psychology, Fairfield, USA

2 Business & Management, University Sultan Zainal Abidin, Terengganu, MYS

Azeezat A Oyewande

3 Family Medicine, California Institute of Behavioral Neurosciences and Psychology, Fairfield, USA

4 Family Medicine, Lagos State Health Service Commission/Alimosho General Hospital, Lagos, NGA

Lamis F Abdalla

5 Internal Medicine, California Institute of Behavioral Neurosciences and Psychology, Fairfield, USA

Reem Chaudhry Ehsanullah

Safeera Khan

Social media are responsible for aggravating mental health problems. This systematic review summarizes the effects of social network usage on mental health. Fifty papers were shortlisted from the Google Scholar database, and after the application of various inclusion and exclusion criteria, 16 papers were chosen; all papers were evaluated for quality. Eight papers were cross-sectional studies, three were longitudinal studies, two were qualitative studies, and the others were systematic reviews. Findings were classified into two mental health outcomes: anxiety and depression. Social media activity, such as time spent, was not found to have a clearly positive effect on mental health. However, because of the cross-sectional designs and methodological limitations of sampling, findings differ considerably across studies. How the structure of social media influences mental health needs to be further analysed through qualitative research and longitudinal cohort studies.

Introduction and background

Human beings are social creatures that require the companionship of others to make progress in life. Thus, being socially connected with other people can relieve stress, anxiety, and sadness, but lack of social connection can pose serious risks to mental health [ 1 ].

Social media

Social media have recently become part of people's daily activities; many people spend hours each day on Messenger, Instagram, Facebook, and other popular platforms. Thus, many researchers and scholars study the impact of social media and applications on various aspects of people’s lives [ 2 ]. Moreover, the number of social media users worldwide in 2019 was 3.484 billion, up 9% year on year [ 3 - 5 ]. The statistic in Figure 1 shows the gender distribution of social media audiences worldwide as of January 2020, sorted by platform. It was found that only 38% of Twitter users were male, but 61% were using Snapchat. In contrast, females were more likely to use LinkedIn and Facebook. There is no denying that social media have now become an important part of many people's lives. Social media have many positive and enjoyable benefits, but they can also lead to mental health problems. Previous research found that age did not have an effect but gender did: females were much more likely than males to experience mental health problems [ 6 , 7 ].

Figure 1: Gender distribution of social media audiences worldwide as of January 2020, by platform.

Impact on mental health

Mental health is defined as a state of well-being in which people understand their abilities, solve everyday life problems, work well, and make a significant contribution to the lives of their communities [ 8 ]. There is an ongoing debate regarding the benefits and negative impacts of social media on mental health [ 9 , 10 ]. Social networking is a crucial element in protecting our mental health. Both the quantity and quality of social relationships affect mental health, health behavior, physical health, and mortality risk [ 9 ]. The Displaced Behavior Theory may help explain why social media show a connection with mental health. According to the theory, people who spend more time in sedentary behaviors such as social media use have less time for face-to-face social interaction, both of which have been proven to be protective against mental disorders [ 11 , 12 ]. On the other hand, social theories have examined how social media use affects mental health by influencing how people view, maintain, and interact with their social network [ 13 ]. A number of studies have been conducted on the impacts of social media, and it has been indicated that the prolonged use of social media platforms such as Facebook may be related to negative signs and symptoms of depression, anxiety, and stress [ 10 - 15 ]. Furthermore, social media can create considerable pressure to project the image that others want to see and to be as popular as others.

The need for a systematic review

Systematic studies can quantitatively and qualitatively identify, aggregate, and evaluate all accessible data to generate a robust and accurate answer to the research questions involved [ 4 ]. In addition, many systematic studies related to mental health have been conducted worldwide. However, only a limited number of them integrate social media and are conducted in the context of social science, because the available literature is heavily focused on medical science [ 6 ]. Because social media is a relatively new phenomenon, the potential links between its use and mental health have not been widely investigated.

This paper attempts to systematically review the relevant literature with the aim of filling this gap by examining the impact of social media on mental health; social media use is a sedentary behavior which, in excess, raises the risk of health problems [ 7 , 9 , 12 ]. This study is important because it provides information on the extent and focus of the peer-reviewed literature, which can assist researchers in understanding future concerns related to mental health strategies that require scientific attention. The development of the current systematic review is based on the main research question: how does social media affect mental health?

Research strategy

The research was conducted to identify studies analyzing the role of social media on mental health. Google Scholar was used as our main database to find the relevant articles. Keywords that were used for the search were: (1) “social media”, (2) “mental health”, (3) “social media” AND “mental health”, (4) “social networking” AND “mental health”, and (5) “social networking” OR “social media” AND “mental health” (Table  1 ).

Out of the results in Table  1 , a total of 50 articles relevant to the research question were selected. After applying the inclusion and exclusion criteria, duplicate papers were removed, and, finally, a total of 28 articles were selected for review (Figure  2 ).

Figure 2: PRISMA flow diagram of the study selection process. PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses

Inclusion and exclusion criteria

Peer-reviewed, full-text research papers from the past five years were included in the review. All selected articles were in English, and any non-peer-reviewed or duplicate papers were excluded from the final selection.

Of the 16 selected research papers, the research focus covered adults, gender, and preadolescents [ 10 - 19 ]. In terms of design, there were qualitative and quantitative studies [ 15 , 16 ]. There were three systematic reviews and one thematic analysis that explored the benefits and harms of using social media among adolescents [ 20 - 23 ]. In addition, eight were cross-sectional studies and only three were longitudinal studies [ 24 - 29 ]. The meta-analyses included studies published beyond the last five years in this population. Table 2 presents a selection of studies from the review.

IGD, internet gaming disorder; PSMU, problematic social media use

This study has attempted to systematically analyze the existing literature on the effect of social media use on mental health. Although the results of the individual studies were not completely consistent, this review found a general association between social media use and mental health issues. Although there is evidence of a link between social media use and poorer mental health, the opposite has also been reported.

For example, a previous study found no relationship between the amount of time spent on social media and depression or between social media-related activities, such as the number of online friends and the number of “selfies”, and depression [ 29 ]. Similarly, Neira and Barber found that while higher investment in social media (e.g. active social media use) predicted adolescents’ depressive symptoms, no relationship was found between the frequency of social media use and depressed mood [ 28 ].

In the 16 studies, anxiety and depression were the most commonly measured outcomes. The prominent risk factors for anxiety and depression emerging from this review comprised time spent, type of activity, and addiction to social media. In today's world, anxiety is one of the most common mental health problems, often centred on the likes and comments people receive on their uploaded photos and videos. In today's age, nearly everyone is exposed to the social media context. Some teens experience anxiety from social media related to fear of missing out, which drives them to check and respond to all of their friends' messages and posts on a regular basis.

On the contrary, depression is one of the unintended consequences of excessive use of social media. In detail, depression is not limited to Facebook but extends to other social networking sites, which can cause psychological problems. A recent study found that individuals who are involved in social media, games, texts, mobile phones, etc. are more likely to experience depression.

A previous study found a 70% increase in self-reported depressive symptoms among the group using social media. Another social media influence that contributes to depression is envy [ 12 ]. This arises when social media promote putting on a facade that highlights the fun and excitement of life but reveals little about where we are struggling in our daily lives at a deeper level [ 28 ]. Another study revealed that depression and time spent on Facebook by adolescents are positively correlated [ 22 ]. More importantly, symptoms of major depression have been found among individuals who spent most of their time in online activities and performing image management on social networking sites [ 14 ].

Another study assessed gender differences in the associations between social media use and mental health. Females were found to be more addicted to social media than males [ 26 ]. Passive social media use, such as reading posts, is more strongly associated with depression than active use, such as making posts [ 23 ]. Other important findings of this review suggest that factors such as interpersonal trust and family functioning may have a greater influence on symptoms of depression than the frequency of social media use [ 28 , 29 ].

Limitations and suggestions

The limitations and suggestions were identified from the evidence involved in the study and review process. Seven of the 16 studies were cross-sectional and therefore could not determine the causal relationship between the variables of interest. Given the evidence from cross-sectional studies, it is not possible to conclude that the use of social networks causes mental health problems. Only three longitudinal studies examined the causal relationship between social media and mental health, so it is hard to establish whether mental health problems are more pronounced in those who use social media more compared with those who use it less or not at all [ 19 , 20 , 24 ]. Next, despite the fact that the proposed relationship between social media and mental health is complex, few studies investigated mediating factors that may contribute to or exacerbate this relationship. Further investigation is required to clarify the underlying factors that explain why social media have a negative impact on some people's mental health, whereas they have no effect, or a positive one, on others.

Conclusions

Social media is a new and rapidly growing field of study, and many of its effects remain unexplored and unanswered. Recent studies have found that using social media platforms can have a detrimental effect on the psychological health of their users. However, the extent to which the use of social media impacts the public is yet to be determined. This systematic review has found that social media envy can affect the level of anxiety and depression in individuals. In addition, other potential causes of anxiety and depression have been identified, which require further exploration.

The importance of such findings is to facilitate further research on social media and mental health. In addition, the information obtained from this study can be helpful not only to medical professionals but also to social science researchers. The findings suggest that potential causal factors from social media should be considered when working with patients who have been diagnosed with anxiety or depression. Also, if the results of this study were used to explore relationships with other constructs, this could potentially inform efforts to reduce anxiety and depression rates and to prevent suicide.

The content published in Cureus is the result of clinical experience and/or research by independent individuals or organizations. Cureus is not responsible for the scientific accuracy or reliability of data or conclusions published herein. All content published within Cureus is intended only for educational, research and reference purposes. Additionally, articles published within Cureus should not be deemed a suitable substitute for the advice of a qualified health care professional. Do not disregard or avoid professional medical advice due to content published within Cureus.

The authors have declared that no competing interests exist.

  • Open access
  • Published: 20 March 2024

Persistent interaction patterns across social media platforms and over time

  • Michele Avalle   ORCID: orcid.org/0009-0007-4934-2326 1   na1 ,
  • Niccolò Di Marco 1   na1 ,
  • Gabriele Etta 1   na1 ,
  • Emanuele Sangiorgio   ORCID: orcid.org/0009-0003-1024-3735 2 ,
  • Shayan Alipour 1 ,
  • Anita Bonetti 3 ,
  • Lorenzo Alvisi 1 ,
  • Antonio Scala 4 ,
  • Andrea Baronchelli 5 , 6 ,
  • Matteo Cinelli   ORCID: orcid.org/0000-0003-3899-4592 1 &
  • Walter Quattrociocchi   ORCID: orcid.org/0000-0002-4374-9324 1  

Nature (2024)

  • Mathematics and computing
  • Social sciences

Growing concern surrounds the impact of social media platforms on public discourse 1 , 2 , 3 , 4 and their influence on social dynamics 5 , 6 , 7 , 8 , 9 , especially in the context of toxicity 10 , 11 , 12 . Here, to better understand these phenomena, we use a comparative approach to isolate human behavioural patterns across multiple social media platforms. In particular, we analyse conversations in different online communities, focusing on identifying consistent patterns of toxic content. Drawing from an extensive dataset that spans eight platforms over 34 years—from Usenet to contemporary social media—our findings show consistent conversation patterns and user behaviour, irrespective of the platform, topic or time. Notably, although long conversations consistently exhibit higher toxicity, toxic language does not invariably discourage people from participating in a conversation, and toxicity does not necessarily escalate as discussions evolve. Our analysis suggests that debates and contrasting sentiments among users significantly contribute to more intense and hostile discussions. Moreover, the persistence of these patterns across three decades, despite changes in platforms and societal norms, underscores the pivotal role of human behaviour in shaping online discourse.


The advent and proliferation of social media platforms have not only transformed the landscape of online participation 2 but have also become integral to our daily lives, serving as primary sources for information, entertainment and personal communication 13 , 14 . Although these platforms offer unprecedented connectivity and information exchange opportunities, they also present challenges by entangling their business models with complex social dynamics, raising substantial concerns about their broader impact on society. Previous research has extensively addressed issues such as polarization, misinformation and antisocial behaviours in online spaces 5 , 7 , 12 , 15 , 16 , 17 , revealing the multifaceted nature of social media’s influence on public discourse. However, a considerable challenge in understanding how these platforms might influence inherent human behaviours lies in the general lack of accessible data 18 . Even when researchers obtain data through special agreements with companies like Meta, it may not be enough to clearly distinguish between inherent human behaviours and the effects of the platform’s design 3 , 4 , 8 , 9 . This difficulty arises because the data, deeply embedded in platform interactions, complicate separating intrinsic human behaviour from the influences exerted by the platform’s design and algorithms.

Here we address this challenge by focusing on toxicity, one of the most prominent aspects of concern in online conversations. We use a comparative analysis to uncover consistent patterns across diverse social media platforms and timeframes, aiming to shed light on toxicity dynamics across various digital environments. In particular, our goal is to gain insights into inherently invariant human patterns of online conversations.

The lack of non-verbal cues and physical presence on the web can contribute to increased incivility in online discussions compared with face-to-face interactions 19 . This trend is especially pronounced in online arenas such as newspaper comment sections and political discussions, where exchanges may degenerate into offensive comments or mockery, undermining the potential for productive and democratic debate 20 , 21 . When exposed to such uncivil language, users are more likely to interpret these messages as hostile, influencing their judgement and leading them to form opinions based on their beliefs rather than on the information presented; this may foster polarized perspectives, especially among groups with differing values 22 . Indeed, there is a natural tendency for online users to seek out and align with information that echoes their pre-existing beliefs, often ignoring contrasting views 6 , 23 . This behaviour may result in the creation of echo chambers, in which like-minded individuals congregate and mutually reinforce shared narratives 5 , 24 , 25 . These echo chambers, along with increased polarization, vary in their prevalence and intensity across different social media platforms 1 , suggesting that the design and algorithms of these platforms, intended to maximize user engagement, can substantially shape online social dynamics. This focus on engagement can inadvertently highlight certain behaviours, making it challenging to differentiate between organic user interaction and the influence of the platform’s design.

A substantial portion of current research is devoted to examining harmful language on social media and its wider effects, online and offline 10 , 26 . This examination is crucial, as it reveals how social media may reflect and amplify societal issues, including the deterioration of public discourse. The growing interest in analysing online toxicity through massive data analysis coincides with advancements in machine learning capable of detecting toxic language 27 . Although numerous studies have focused on online toxicity, most concentrate on specific platforms and topics 28 , 29 . Broader, multiplatform studies are still limited in scale and reach 12 , 30 . This fragmentation of research complicates understanding of whether perceptions about online toxicity are accurate or misconceptions 31 . Key questions include whether online discussions are inherently toxic and how toxic and non-toxic conversations differ. Clarifying these dynamics and how they have evolved over time is crucial for developing effective strategies and policies to mitigate online toxicity.

Our study involves a comparative analysis of online conversations, focusing on three dimensions: time, platform and topic. We examine conversations from eight different platforms, totalling about 500 million comments. For our analysis, we adopt the toxicity definition provided by the Perspective API, a state-of-the-art classifier for the automatic detection of toxic speech. This API considers toxicity as “a rude, disrespectful or unreasonable comment likely to make someone leave a discussion”. We further validate this definition by confirming its consistency with outcomes from other detection tools, ensuring the reliability and comparability of our results. The concept of toxicity in online discourse varies widely in the literature, reflecting its complexity, as seen in various studies 32 , 33 , 34 . The efficacy and constraints of current machine-learning-based automated toxicity detection systems have recently been debated 11 , 35 . Despite these discussions, automated systems are still the most practical means for large-scale analyses.

Here we analyse online conversations, challenging common assumptions about their dynamics. Our findings reveal consistent patterns across various platforms and different times, such as the heavy-tailed nature of engagement dynamics, a decrease in user participation and an increase in toxic speech in lengthier conversations. Our analysis indicates that, although toxicity and user participation in debates are independent variables, the diversity of opinions and sentiments among users may have a substantial role in escalating conversation toxicity.

To obtain a comprehensive picture of online social media conversations, we analysed a dataset of about 500 million comments from Facebook, Gab, Reddit, Telegram, Twitter, Usenet, Voat and YouTube, covering diverse topics and spanning over three decades (a dataset breakdown is shown in Table 1 and Supplementary Table 1 ; for details regarding the data collection, see the ‘Data collection’ section of the Methods ).

Our analysis aims to comprehensively compare the dynamics of diverse social media accounting for human behaviours and how they evolved. In particular, we first characterize conversations at a macroscopic level by means of their engagement and participation, and we then analyse the toxicity of conversations both after and during their unfolding. We conclude the paper by examining potential drivers for the emergence of toxic speech.

Conversations on different platforms

This section provides an overview of online conversations by considering user activity and thread size metrics. We define a conversation (or a thread) as a sequence of comments that follow chronologically from an initial post. In Fig. 1a and Extended Data Fig. 1 , we observe that, across all platforms, both user activity (defined as the number of comments posted by the user) and thread length (defined as the number of comments in a thread) exhibit heavy-tailed distributions. The summary statistics about these distributions are reported in Supplementary Tables 1 and 2 .
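
For illustration, a minimal Python sketch of how these two quantities could be computed from a comment table is given below. The DataFrame and column names ('user_id', 'thread_id') are assumptions for the example; the paper does not prescribe a schema.

```python
# Minimal sketch: user activity and thread length from a comment table.
# Assumes a pandas DataFrame `comments` with columns 'user_id' and 'thread_id'
# (illustrative names; not taken from the paper).
import pandas as pd

def activity_and_length(comments: pd.DataFrame) -> tuple[pd.Series, pd.Series]:
    user_activity = comments.groupby("user_id").size()    # comments posted per user
    thread_length = comments.groupby("thread_id").size()  # comments per thread
    return user_activity, thread_length

# Heavy-tailed behaviour is typically inspected by plotting these distributions
# on logarithmic axes (for example, as complementary cumulative distributions).
```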

Figure 1

a , The distributions of user activity in terms of comments posted for each platform and each topic. b , The mean user participation as conversations evolve. For each dataset, participation is computed for the threads belonging to the size interval [0.7–1] (Supplementary Table 2 ). Trends are reported with their 95% confidence intervals. The x axis represents the normalized position of comment intervals in the threads.

Consistent with previous studies 36 , 37 , our analysis shows that the macroscopic patterns of online conversations, such as the distributions of user/thread activity and lifetime, are consistent across all datasets and topics (Supplementary Tables 1 – 4 ). This observation holds regardless of the specific features of the diverse platforms, such as recommendation algorithms and moderation policies (described in the ‘Content moderation policies’ section of the Methods ), as well as other factors, including the user base and the conversation topics. We extend our analysis by examining another aspect of user activity within conversations across all platforms. To do this, we introduce a metric for the participation of users as a thread evolves. In this analysis, threads are filtered to ensure sufficient length, as explained in the ‘Logarithmic binning and conversation size’ section of the Methods .

The participation metric, defined over different conversation intervals (that is, 0–5% of the thread arranged in chronological order, 5–10%, and so on), is the ratio of the number of unique users to the number of comments in the interval. Considering a fixed number of comments c , smaller values of participation indicate that fewer unique users are producing c comments in a segment of the conversation. In turn, a value of participation equal to 1 means that each user is producing one of the c comments, therefore obtaining the maximal homogeneity of user participation. Our findings show that, across all datasets, the participation of users in the evolution of conversations, averaged over almost all considered threads, is decreasing, as indicated by the results of Mann–Kendall test—a nonparametric test assessing the presence of a monotonic upward or downward tendency—shown in Extended Data Table 1 . This indicates that fewer users tend to take part in a conversation as it evolves, but those who do are more active (Fig. 1b ). Regarding patterns and values, the trends in user participation for various topics are consistent across each platform. According to the Mann–Kendall test, the only exceptions were Usenet Conspiracy and Talk, for which an ambiguous trend was detected. However, we note that their regression slopes are negative, suggesting a decreasing trend, even if with a weaker effect. Overall, our first set of findings highlights the shared nature of certain online interactions, revealing a decrease in user participation over time but an increase in activity among participants. This insight, consistent across most platforms, underscores the dynamic interplay between conversation length, user engagement and topic-driven participation.
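
A minimal sketch of this participation metric, assuming each thread is represented as a chronologically ordered sequence of commenter IDs (an illustrative representation, not the authors' code):

```python
# Participation per normalized thread interval, as described above.
# `thread` is a chronologically ordered sequence of user IDs, one per comment.
import numpy as np

def participation_profile(thread, n_intervals: int = 20):
    comments = np.asarray(thread)
    chunks = np.array_split(comments, n_intervals)  # 0-5%, 5-10%, ... for 20 intervals
    # Ratio of unique users to comments in each interval; a value of 1 means every
    # comment in the interval comes from a different user.
    return np.array([len(set(chunk)) / len(chunk) for chunk in chunks if len(chunk)])

# Averaging these profiles over all sufficiently long threads gives trends like those
# in Fig. 1b; a monotonic decrease can then be checked with a Mann-Kendall test
# (available, for example, in the pymannkendall package).
```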

Conversation size and toxicity

To detect the presence of toxic language, we used Google’s Perspective API 34 , a state-of-the-art toxicity classifier that has been used extensively in recent literature 29 , 38 . Perspective API defines a toxic comment as “A rude, disrespectful, or unreasonable comment that is likely to make people leave a discussion”. On the basis of this definition, the classifier assigns a toxicity score in the [0,1] range to a piece of text that can be interpreted as an estimate of the likelihood that a reader would perceive the comment as toxic ( https://developers.perspectiveapi.com/s/about-the-api-score ). To define an appropriate classification threshold, we draw from the existing literature 39 , which uses 0.6 as the threshold for considering a comment as toxic. A robustness check of our results using different threshold and classification tools is reported in the ‘Toxicity detection and validation of employed models’ section of the Methods , together with a discussion regarding potential shortcomings deriving from automatic classifiers. To further investigate the interplay between toxicity and conversation features across various platforms, our study first examines the prevalence of toxic speech in each dataset. We then analyse the occurrence of highly toxic users and conversations. Lastly, we investigate how the length of conversations correlates with the probability of encountering toxic comments. First of all, we define the toxicity of a user as the fraction of toxic comments that she/he left. Similarly, the toxicity of a thread is the fraction of toxic comments it contains. We begin by observing that, although some toxic datasets exist on unmoderated platforms such as Gab, Usenet and Voat, the prevalence of toxic speech is generally low. Indeed, the percentage of toxic comments in each dataset is mostly below 10% (Table 1 ). Moreover, the complementary cumulative distribution functions illustrated in Extended Data Fig. 2 show that the fraction of extremely toxic users is very low for each dataset (in the range between 10 −3 and 10 −4 ), and the majority of active users wrote at least one toxic comment, as reported in Supplementary Table 5 , therefore suggesting that the overall volume of toxicity is not a phenomenon limited to the activity of very few users and localized in few conversations. Indeed, the number of users versus their toxicity decreases sharply following an exponential trend. The toxicity of threads follows a similar pattern. To understand the association between the size and toxicity of a conversation, we start by grouping conversations according to their length to analyse their structural differences 40 . The grouping is implemented by means of logarithmic binning (see the ‘Logarithmic binning and conversation size’ section of the Methods ) and the evolution of the average fraction of toxic comments in threads versus the thread size intervals is reported in Fig. 2 . Notably, the resulting trends are almost all increasing, showing that, independently of the platform and topic, the longer the conversation, the more toxic it tends to be.
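
As an illustration of these definitions, a hedged sketch of per-thread toxicity and the log-binned toxicity-versus-size trend is given below. It assumes per-comment Perspective-style scores are already available in a DataFrame with illustrative column names; the 0.6 threshold follows the paper.

```python
# Per-thread toxicity and its trend over log-binned thread size.
# Assumes a DataFrame `comments` with columns 'thread_id' and 'toxicity_score'
# (a Perspective-style score in [0, 1]).
import numpy as np
import pandas as pd

TOXICITY_THRESHOLD = 0.6  # threshold used in the paper

def toxicity_vs_size(comments: pd.DataFrame, n_bins: int = 15) -> pd.Series:
    comments = comments.assign(is_toxic=comments["toxicity_score"] >= TOXICITY_THRESHOLD)
    # Thread size and fraction of toxic comments per thread.
    threads = comments.groupby("thread_id")["is_toxic"].agg(size="size", toxicity="mean")
    # Logarithmic binning of thread sizes.
    edges = np.logspace(np.log10(threads["size"].min()),
                        np.log10(threads["size"].max()), n_bins + 1)
    size_bins = pd.cut(threads["size"], bins=edges, include_lowest=True)
    # Mean fraction of toxic comments per size bin (cf. Fig. 2).
    return threads.groupby(size_bins, observed=True)["toxicity"].mean()
```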

Figure 2

The mean fraction of toxic comments in conversations versus conversation size for each dataset. Trends represent the mean toxicity over each size interval and their 95% confidence interval. Size ranges are normalized to enable visual comparison of the different trends.

We assessed the increase in the trends by both performing linear regression and applying the Mann–Kendall test to ensure the statistical significance of our results (Extended Data Table 2 ). To further validate these outcomes, we shuffled the toxicity labels of comments, finding that trends are almost always non-increasing when data are randomized. Furthermore, the z -scores of the regression slopes indicate that the observed trends deviate from the mean of the distributions resulting from randomizations, being at least 2 s.d. greater in almost all cases. This provides additional evidence of a remarkable difference from randomness. The only decreasing trend is Usenet Politics. Moreover, we verified that our results are not influenced by the specific number of bins as, after estimating the same trends again with different intervals, we found that the qualitative nature of the results remains unchanged. These findings are summarized in Extended Data Table 2 . These analyses have been validated on the same data using a different threshold for identifying toxic comments and on a new dataset labelled with three different classifiers, obtaining similar results (Extended Data Fig. 5 , Extended Data Table 5 , Supplementary Fig. 1 and Supplementary Table 8 ). Finally, using a similar approach, we studied the toxicity content of conversations versus their lifetime—that is, the time elapsed between the first and last comment. In this case, most trends are flat, and there is no indication that toxicity is generally associated either with the duration of a conversation or the lifetime of user interactions (Extended Data Fig. 4 ).
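
The randomization check can be sketched as follows, reusing the comment table assumed above (with a boolean 'is_toxic' column already added). This is a simplified illustration, not the authors' implementation; the Mann-Kendall test itself is available, for example, in the pymannkendall package.

```python
# Compare the observed slope of the toxicity-vs-size trend with slopes obtained
# after shuffling comment-level toxicity labels (illustrative sketch).
import numpy as np
import pandas as pd
from scipy.stats import linregress

def binned_slope(comments: pd.DataFrame, n_bins: int = 15) -> float:
    threads = comments.groupby("thread_id")["is_toxic"].agg(size="size", toxicity="mean")
    edges = np.logspace(np.log10(threads["size"].min()),
                        np.log10(threads["size"].max()), n_bins + 1)
    size_bins = pd.cut(threads["size"], bins=edges, include_lowest=True)
    trend = threads.groupby(size_bins, observed=True)["toxicity"].mean().dropna()
    return linregress(np.arange(len(trend)), trend.to_numpy()).slope

def slope_zscore(comments: pd.DataFrame, n_shuffles: int = 100, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    observed = binned_slope(comments)
    null = [binned_slope(comments.assign(
                is_toxic=rng.permutation(comments["is_toxic"].to_numpy())))
            for _ in range(n_shuffles)]
    # z-score of the observed slope against the shuffled (null) distribution.
    return (observed - np.mean(null)) / np.std(null)
```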

Conversation evolution and toxicity

In the previous sections, we analysed the toxicity level of online conversations after their conclusion. We next focus on how toxicity evolves during a conversation and its effect on the dynamics of the discussion. The common beliefs that (1) online interactions inevitably devolve into toxic exchanges over time and (2) once a conversation reaches a certain toxicity threshold, it would naturally conclude, are not modern notions; they were also prevalent in the early days of the World Wide Web 41 . Assumption 2 aligns with the Perspective API’s definition of toxic language, suggesting that increased toxicity reduces the likelihood of continued participation in a conversation. However, this observation should be reconsidered, as it is not only the peak levels of toxicity that might influence a conversation but, for example, also a consistent rate of toxic content. To test these common assumptions, we used a method similar to that used for measuring participation; we select sufficiently long threads, divide each of them into a fixed number of equal intervals, compute the fraction of toxic comments for each of these intervals, average it over all threads and plot the toxicity trend through the unfolding of the conversations. We find that the average toxicity level remains mostly stable throughout, without showing a distinctive increase around the final part of threads (Fig. 3a (bottom) and Extended Data Fig. 3 ). Note that a similar observation was made previously 41 , but referring only to Reddit. Our findings challenge the assumption that toxicity discourages people from participating in a conversation, even though this notion is part of the definition of toxicity used by the detection tool. This can be seen by checking the relationship between trends in user participation, a quantity related to the number of users in a discussion at some point, and toxicity. The fact that the former typically decreases while the latter remains stable during conversations indicates that toxicity is not associated with participation in conversations (an example is shown in Fig. 3a ; box plots of the slopes of participation and toxicity for the whole dataset are shown in Fig. 3b ). This suggests that, on average, people may leave discussions regardless of the toxicity of the exchanges. We calculated the Pearson’s correlation between user participation and toxicity trends for each dataset to support this hypothesis. As shown in Fig. 3d , the resulting correlation coefficients are very heterogeneous, indicating no consistent pattern across different datasets. To further validate this analysis, we tested the differences in the participation of users commenting on either toxic or non-toxic conversations. To split such conversations into two disjoint sets, we first compute the toxicity distribution T_i of long threads in each dataset i, and we then label a conversation j in dataset i as toxic if it has toxicity t_ij ≥ µ(T_i) + σ(T_i), with µ(T_i) being the mean and σ(T_i) the standard deviation of T_i; all of the other conversations are considered to be non-toxic. After splitting the threads, for each dataset, we compute the Pearson’s correlation of user participation between sets to find strongly positive values of the coefficient in all cases (Fig. 3c,e ). This result is also confirmed by a different analysis, the results of which are reported in Supplementary Table 8 , in which no significant difference between slopes in toxic and non-toxic threads can be found. Thus, user behaviour in toxic and non-toxic conversations shows almost identical patterns in terms of participation. This reinforces our finding that toxicity, on average, does not appear to affect the likelihood of people participating in a conversation. These analyses were repeated with a lower toxicity classification threshold (Extended Data Fig. 5 ) and on additional datasets (Supplementary Fig. 2 and Supplementary Table 11 ), finding consistent results.
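
A minimal sketch of this split and the subsequent correlation check, assuming per-thread toxicity values and participation profiles (computed as in the earlier sketch) are already available; the data structures are illustrative:

```python
# Split threads into 'toxic' and 'non-toxic' sets using the mean + 1 s.d. rule,
# then correlate the average participation profiles of the two sets.
# `thread_toxicity`: {thread_id: fraction of toxic comments}
# `profiles`: {thread_id: participation per normalized interval (equal-length arrays)}
import numpy as np
from scipy.stats import pearsonr

def toxic_vs_nontoxic_correlation(thread_toxicity: dict, profiles: dict):
    values = np.array(list(thread_toxicity.values()))
    threshold = values.mean() + values.std()  # t_ij >= mean(T_i) + std(T_i) -> toxic
    toxic = [t for t, v in thread_toxicity.items() if v >= threshold]
    non_toxic = [t for t, v in thread_toxicity.items() if v < threshold]

    def mean_profile(thread_ids):
        return np.mean([profiles[t] for t in thread_ids], axis=0)

    r, p_value = pearsonr(mean_profile(toxic), mean_profile(non_toxic))
    return r, p_value
```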

Figure 3

a , Examples of a typical trend in averaged user participation (top) and toxicity (bottom) versus the normalized position of comment intervals in the threads (Twitter news dataset). b , Box plot distributions of toxicity ( n  = 25, minimum = −0.012, maximum = 0.015, lower whisker = −0.012, quartile 1 (Q1) = − 0.004, Q2 = 0.002, Q3 = 0.008, upper whisker = 0.015) and participation ( n  = 25, minimum = −0.198, maximum = −0.022, lower whisker = −0.198, Q1 = − 0.109, Q2 = − 0.071, Q3 = − 0.049, upper whisker = −0.022) trend slopes for all datasets, as resulting from linear regression. c , An example of user participation in toxic and non-toxic thread sets (Twitter news dataset). d , Pearson’s correlation coefficients between user participation and toxicity trends for each dataset. e , Pearson’s correlation coefficients between user participation in toxic and non-toxic threads for each dataset.

Controversy and toxicity

In this section, we aim to explore why people participate in toxic online conversations and why longer discussions tend to be more toxic. Several factors could be at play. First, controversial topics might lead to longer, more heated debates with increased toxicity. Second, the endorsement of toxic content by other users may act as an incentive to increase the discussion’s toxicity. Third, engagement peaks, due to factors such as reduced discussion focus or the intervention of trolls, may bring a higher share of toxic exchanges. Pursuing this line of inquiry, we identified proxies to measure the level of controversy in conversations and examined how these relate to toxicity and conversation size. Concurrently, we investigated the relationship between toxicity, endorsement and engagement.

As shown previously 24 , 42 , controversy is likely to emerge when people with opposing views engage in the same debate. Thus, the presence of users with diverse political leanings within a conversation could be a valid proxy for measuring controversy. We operationalize this definition as follows. Exploiting the peculiarities of our data, we can infer the political leaning of a subset of users in the Facebook News, Twitter News, Twitter Vaccines and Gab Feed datasets. This is achieved by examining the endorsement, for example, in the form of likes, expressed towards news outlets of which the political inclinations have been independently assessed by news rating agencies (see the ‘Polarization and user leaning attribution’ section of the Methods ). Extended Data Table 3 shows a breakdown of the datasets. As a result, we label users with a leaning score l   ∈  [−1, 1], −1 being left leaning and +1 being right leaning. We then select threads with at least ten different labelled users, in which at least 10% of comments (with a minimum of 20) are produced by such users and assign to each of these comments the same leaning score of those who posted them. In this setting, the level of controversy within a conversation is assumed to be captured by the spread of the political leaning of the participants in the conversation. A natural way for measuring such a spread is the s.d. σ ( l ) of the distribution of comments possessing a leaning score: the higher the σ ( l ), the greater the level of ideological disagreement and therefore controversy in a thread. We analysed the relationship between controversy and toxicity in online conversations of different sizes. Figure 4a shows that controversy increases with the size of conversations in all datasets, and its trends are positively correlated with the corresponding trends in toxicity (Extended Data Table 3 ). This supports our hypothesis that controversy and toxicity are closely related in online discussions.
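
A sketch of this controversy measure under the thread-selection filters described above. The column names ('thread_id', 'user_id', 'leaning') are assumptions for the example, with 'leaning' holding the user's score in [−1, 1] and missing for unlabelled users.

```python
# Controversy of a thread as the standard deviation sigma(l) of the political
# leaning attached to its comments, restricted to threads passing the filters.
import pandas as pd

def thread_controversy(comments: pd.DataFrame,
                       min_labelled_users: int = 10,
                       min_labelled_share: float = 0.10,
                       min_labelled_comments: int = 20) -> pd.Series:
    controversy = {}
    for thread_id, thread in comments.groupby("thread_id"):
        labelled = thread.dropna(subset=["leaning"])  # comments by users with a leaning score
        if (labelled["user_id"].nunique() < min_labelled_users
                or len(labelled) < min_labelled_comments
                or len(labelled) / len(thread) < min_labelled_share):
            continue  # thread does not meet the selection criteria
        controversy[thread_id] = labelled["leaning"].std()  # higher sigma(l) = more controversy
    return pd.Series(controversy, name="controversy")
```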

Figure 4

a , The mean controversy ( σ ( l )) and mean toxicity versus thread size (log-binned and normalized) for the Facebook news, Twitter news, Twitter vaccines and Gab feed datasets. Here toxicity is calculated in the same conversations in which controversy could be computed (Extended Data Table 3 ); the relative Pearson’s, Spearman’s and Kendall’s correlation coefficients are also provided in Extended Data Table 3 . Trends are reported with their 95% confidence interval. b , Likes/upvotes versus toxicity (linearly binned). c , An example (Voat politics dataset) of the distributions of the frequency of toxic comments in threads before ( n  = 2,201, minimum = 0, maximum = 1, lower whisker = 0, Q1 = 0, Q2 = 0.15, Q3 = 0.313, upper whisker = 0.769) at the peak ( n  = 2,798, minimum = 0, maximum = 0.8, lower whisker = 0, Q1 = 0.125, Q2 = 0.196, Q3 = 0.282, upper whisker = 0.513) and after the peak ( n  = 2,791, minimum = 0, maximum = 1, lower whisker = 0, Q1 = 0.129, Q2 = 0.200, Q3 = 0.282, upper whisker = 0.500) of activity, as detected by Kleinberg’s burst detection algorithm.

As a complementary analysis, we draw on previous results 43 . In that study, using a definition of controversy operationally different but conceptually related to ours, a link was found between a greater degree of controversy of a discussion topic and a wider distribution of sentiment scores attributed to the set of its posts and comments. We quantified the sentiment of comments using a pretrained BERT model available from Hugging Face 44 , used also in previous studies 45 . The model predicts the sentiment of a sentence through a scoring system ranging from 1 (negative) to 5 (positive). We define the sentiment attributed to a comment c as its weighted mean \(s(c)=\sum _{i=1}^{5}{x}_{i}{p}_{i}\), where x_i ∈ [1, 5] is the output score from the model and p_i is the probability associated with that value. Moreover, we normalize the sentiment score s for each dataset between 0 and 1. We observe that the trends of the mean s.d. of sentiment in conversations, \(\bar{\sigma }(s)\), and toxicity are positively correlated for moderated platforms such as Facebook and Twitter but are negatively correlated on Gab (Extended Data Table 3 ). The positive correlation observed on Facebook and Twitter indicates that greater discrepancies in the sentiment of conversations can, in general, be linked to toxic conversations and vice versa. Instead, on unregulated platforms such as Gab, highly conflicting sentiments seem to be more likely to emerge in less toxic conversations.
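
A sketch of this scoring scheme is shown below. The specific Hugging Face checkpoint is an assumption (a widely used five-class BERT sentiment model); the paper only states that a pretrained BERT model was used, and the final per-dataset normalization is simplified here to a fixed rescaling from [1, 5] to [0, 1].

```python
# Weighted-mean sentiment s(c) = sum_{i=1..5} x_i * p_i from a 1-5 star classifier.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="nlptown/bert-base-multilingual-uncased-sentiment",  # assumed checkpoint
)

def sentiment_score(comment: str) -> float:
    out = classifier(comment, top_k=None)  # probabilities for all five classes
    scores = out[0] if isinstance(out[0], list) else out  # output nesting varies by version
    s = sum(int(item["label"].split()[0]) * item["score"] for item in scores)
    # The paper normalizes s per dataset to [0, 1]; a fixed rescaling is used here.
    return (s - 1.0) / 4.0
```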

As anticipated, another factor that may be associated with the emergence of toxic comments is the endorsement they receive. Indeed, such positive reactions may motivate posting even more comments of the same kind. Using the mean number of likes/upvotes as a proxy of endorsement, we have an indication that this may not be the case. Figure 4b shows that the trend in likes/upvotes versus comments toxicity is never increasing past the toxicity score threshold (0.6).
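
A minimal sketch of this check, using the same assumed per-comment schema as above plus an illustrative 'likes' column for likes/upvotes:

```python
# Mean likes/upvotes per linearly binned comment-toxicity score (cf. Fig. 4b).
import numpy as np
import pandas as pd

def likes_vs_toxicity(comments: pd.DataFrame, n_bins: int = 20) -> pd.Series:
    bins = pd.cut(comments["toxicity_score"],
                  bins=np.linspace(0.0, 1.0, n_bins + 1), include_lowest=True)
    return comments.groupby(bins, observed=True)["likes"].mean()
```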

Finally, to complement our analysis, we inspect the relationship between toxicity and user engagement within conversations, measured as the intensity of the number of comments over time. To do so, we used a method for burst detection 46 that, after reconstructing the density profile of a temporal stream of elements, separates the stream into different levels of intensity and assigns each element to the level to which it belongs (see the ‘Burst analysis’ section of the Methods ). We computed the fraction of toxic comments at the highest intensity level of each conversation and for the levels right before and after it. By comparing the distributions of the fraction of toxic comments for the three intervals, we find that these distributions are statistically different in almost all cases (Fig. 4c and Extended Data Table 4 ). In all datasets but one, distributions are consistently shifted towards higher toxicity at the peak of engagement, compared with the previous phase. Likewise, in most cases, the peak shows higher toxicity even if compared to the following phase, which in turn is mainly more toxic than the phase before the peak. These results suggest that toxicity is likely to increase together with user engagement.
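
A sketch of the final comparison step, assuming comments have already been assigned to burst levels upstream (for example, with an implementation of Kleinberg's algorithm) and the per-thread toxicity fractions before, at and after the activity peak have been collected; the two-sample test used here is an illustrative choice, not necessarily the one used in the paper.

```python
# Compare per-thread fractions of toxic comments before, at and after the
# activity peak of each conversation (phase assignment is assumed to be done
# upstream by a burst-detection step).
from scipy.stats import mannwhitneyu  # illustrative choice of two-sample test

def compare_burst_phases(before, peak, after):
    """Each argument: list of per-thread toxic-comment fractions for that phase."""
    return {
        "peak_vs_before": mannwhitneyu(peak, before, alternative="greater"),
        "peak_vs_after": mannwhitneyu(peak, after, alternative="greater"),
        "after_vs_before": mannwhitneyu(after, before, alternative="greater"),
    }
```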

Here we examine one of the most prominent and persistent characteristics of online discussions—toxic behaviour, defined here as rude, disrespectful or unreasonable conduct. Our analysis suggests that toxicity is neither a deterrent to user involvement nor an engagement amplifier; rather, it tends to emerge when exchanges become more frequent and may be a product of opinion polarization. Our findings suggest that the polarization of user opinions—intended as the degree of opposed partisanship of users in a conversation—may have a more crucial role than toxicity in shaping the evolution of online discussions. Thus, monitoring polarization could inform early interventions in online discussions. However, it is important to acknowledge that the dynamics at play in shaping online discourse are probably multifaceted and require a nuanced approach for effective moderation. Other factors may influence toxicity and engagement, such as the specific subject of the conversation, the presence of influential users or ‘trolls’, the time and day of posting, as well as cultural or demographic aspects, such as user average age or geographical location. Furthermore, even though extremely toxic users are rare (Extended Data Fig. 2 ), the relationship between participation and toxicity of a discussion may in principle be affected also by small groups of highly toxic and engaged users driving the conversation dynamics. Although the analysis of such subtler aspects is beyond the scope of this Article, they are certainly worth investigating in future research.

However, when people encounter views that contradict their own, they may react with hostility and contempt, consistent with previous research 47 . In turn, this may create a cycle of negative emotions and behaviours that fuels toxicity. We also show that some online conversation features have remained consistent over the past three decades despite the evolution of platforms and social norms.

Our study has some limitations that we acknowledge and discuss. First, we use political leaning as a proxy for general leaning, which may capture only some of the nuances of online opinions. However, political leaning represents a broad spectrum of opinions across different topics, and it correlates well with other dimensions of leaning, such as news preferences, vaccine attitudes and stance on climate change 48 , 49 . We could not assign a political leaning to users to analyse controversies on all platforms. Still, those considered—Facebook, Gab and Twitter—represent different populations and moderation policies, and the combined data account for nearly 90% of the content in our entire dataset.

Our analysis approach is based on breadth and heterogeneity. As such, it may raise concerns about potential reductionism due to the comparison of different datasets from different sources and time periods. We acknowledge that each discussion thread, platform and context has unique characteristics and complexities that might be diminished when homogenizing data. However, we aim not to capture the full depth of every discussion but to identify and highlight general patterns and trends in online toxicity across platforms and time. The quantitative approach used in our study is similar to numerous other studies 15 and enables us to uncover these overarching principles and patterns that may otherwise remain hidden. Of course, it is not possible to account for the behaviours of passive users. This entails, for example, that even if toxicity does not seem to make people leave conversations, it could still be a factor that discourages them from joining them.

Our study leverages an extensive dataset to examine the intricate relationship between persistent online human behaviours and the characteristics of different social media platforms. Our findings challenge the prevailing assumption by demonstrating that toxic content, as traditionally defined, does not necessarily reduce user engagement, thereby questioning the assumed direct correlation between toxic content and negative discourse dynamics. This highlights the necessity for a detailed examination of the effect of toxic interactions on user behaviour and the quality of discussions across various platforms. Our results, showing user resilience to toxic content, indicate the potential for creating advanced, context-aware moderation tools that can accurately navigate the complex influence of antagonistic interactions on community engagement and discussion quality. Moreover, our study sets the stage for further exploration into the complexities of toxicity and its effect on engagement within online communities. Advancing our grasp of online discourse necessitates refining content moderation techniques grounded in a thorough understanding of human behaviour. Thus, our research adds to the dialogue on creating more constructive online spaces, promoting moderation approaches that are effective yet nuanced, facilitating engaging exchanges and reducing the tangible negative effects of toxic behaviour.

Through the extensive dataset presented here, critical aspects of the online platform ecosystem and fundamental dynamics of user interactions can be explored. Moreover, we provide insights that a comparative approach such as the one followed here can prove invaluable in discerning human behaviour from platform-specific features. This may be used to investigate further sensitive issues, such as the formation of polarization and misinformation. The resulting outcomes have multiple potential impacts. Our findings reveal consistent toxicity patterns across platforms, topics and time, suggesting that future research in this field should prioritize the concept of invariance. Recognizing that toxic behaviour is a widespread phenomenon that is not limited by platform-specific features underscores the need for a broader, unified approach to understanding online discourse. Furthermore, the participation of users in toxic conversations suggests that a simple approach to removing toxic comments may not be sufficient to prevent user exposure to such phenomena. This indicates a need for more sophisticated moderation techniques to manage conversation dynamics, including early interventions in discussions that show warnings of becoming toxic. Furthermore, our findings support the idea that examining content pieces in connection with others could enhance the effectiveness of automatic toxicity detection models. The observed homogeneity suggests that models trained using data from one platform may also have applicability to other platforms. Future research could explore further into the role of controversy and its interaction with other elements contributing to toxicity. Moreover, comparing platforms could enhance our understanding of invariant human factors related to polarization, disinformation and content consumption. Such studies would be instrumental in capturing the drivers of the effect of social media platforms on human behaviour, offering valuable insights into the underlying dynamics of online interactions.

Data collection

In our study, data collection from various social media platforms was strategically designed to encompass various topics, ensuring maximal heterogeneity in the discussion themes. For each platform, where feasible, we focus on gathering posts related to diverse areas such as politics, news, environment and vaccinations. This approach aims to capture a broad spectrum of discourse, providing a comprehensive view of conversation dynamics across different content categories.

Facebook

We use datasets from previous studies that covered discussions about vaccines 50 , news 51 and Brexit 52 . For the vaccines topic, the resulting dataset contains around 2 million comments retrieved from public groups and pages in a period that ranges from 2 January 2010 to 17 July 2017. For the news topic, we selected a list of pages from the Europe Media Monitor that reported the news in English. As a result, the obtained dataset contains around 362 million comments between 9 September 2009 and 18 August 2016. Furthermore, we collect a total of about 4.5 billion likes that the users put on posts and comments concerning these pages. Finally, for the Brexit topic, the dataset contains around 460,000 comments from 31 December 2015 to 29 July 2016.

Gab

We collect data from the Pushshift.io archive ( https://files.pushshift.io/gab/ ) concerning discussions taking place from 10 August 2016, when the platform was launched, to 29 October 2018, when Gab went temporarily offline due to the Pittsburgh shooting 53 . As a result, we collect a total of around 14 million comments.

Reddit

Data were collected from the Pushshift.io archive ( https://pushshift.io/ ) for the period ranging from 1 January 2018 to 31 December 2022. For each topic, whenever possible, we manually identified and selected subreddits that best represented the targeted topics. As a result of this operation, we obtained about 800,000 comments from the r/conspiracy subreddit for the conspiracy topic. For the vaccines topic, we collected about 70,000 comments from the r/VaccineDebate subreddit, focusing on the COVID-19 vaccine debate. We collected around 400,000 comments from the r/News subreddit for the news topic. We collected about 70,000 comments from the r/environment subreddit for the climate change topic. Finally, we collected around 550,000 comments from the r/science subreddit for the science topic.

Telegram

We created a list of 14 channels, associating each with one of the topics considered in the study. For each channel, we manually collected messages and their related comments. As a result, from the four channels associated with the news topic (news notiziae, news ultimora, news edizionestraordinaria, news covidultimora), we obtained around 724,000 comments from posts between 9 April 2018 and 20 December 2022. For the politics topic, instead, the corresponding two channels (politics besttimeline, politics polmemes) produced a total of around 490,000 comments between 4 August 2017 and 19 December 2022. Finally, the eight channels assigned to the conspiracy topic (conspiracy bennyjhonson, conspiracy tommyrobinsonnews, conspiracy britainsfirst, conspiracy loomeredofficial, conspiracy thetrumpistgroup, conspiracy trumpjr, conspiracy pauljwatson, conspiracy iononmivaccino) produced a total of about 1.4 million comments between 30 August 2019 and 20 December 2022.

Twitter

We used a list of datasets from previous studies that includes discussions about vaccines 54 , climate change 49 and news 55 topics. For the vaccines topic, we collected around 50 million comments from 23 January 2010 to 25 January 2023. For the news topic, we extend the dataset used previously 55 by collecting all threads composed of less than 20 comments, obtaining a total of about 9.5 million comments for a period ranging from 1 January 2020 to 29 November 2022. Finally, for the climate change topic, we collected around 9.7 million comments between 1 January 2020 and 10 January 2023.

Usenet

We collected data for the Usenet discussion system by querying the Usenet Archive ( https://archive.org/details/usenet?tab=about ). We selected a list of topics considered adequate to contain a large, broad and heterogeneous number of discussions involving active and populated newsgroups. As a result of this selection, we selected conspiracy, politics, news and talk as topic candidates for our analysis. For the conspiracy topic, we collected around 280,000 comments between 1 September 1994 and 30 December 2005 from the alt.conspiracy newsgroup. For the politics topic, we collected around 2.6 million comments between 29 June 1992 and 31 December 2005 from the alt.politics newsgroup. For the news topic, we collected about 620,000 comments between 5 December 1992 and 31 December 2005 from the alt.news newsgroup. Finally, for the talk topic, we collected all of the conversations from the homonymous newsgroup over a period that ranges from 13 February 1989 to 31 December 2005, for a total of around 2.1 million comments.

Voat

We used a dataset presented previously 56 that covers the entire lifetime of the platform, from 9 January 2018 to 25 December 2020, including a total of around 16.2 million posts and comments shared by around 113,000 users in about 7,100 subverses (the Voat equivalent of a subreddit). As for the other platforms, we associated topics with specific subverses. For the conspiracy topic, we collected about 1 million comments from the greatawakening subverse between 9 January 2018 and 25 December 2020. For the politics topic, we collected around 1 million comments from the politics subverse between 16 June 2014 and 25 December 2020. Finally, for the news topic, we collected about 1.4 million comments from the news subverse between 21 November 2013 and 25 December 2020.

YouTube

We used a dataset proposed in previous studies that collected conversations about the climate change topic 49 , extended, coherently with the other platforms, by including conversations about the vaccines and news topics. Data collection for YouTube was performed using the YouTube Data API ( https://developers.google.com/youtube/v3 ). For the climate change topic, we collected around 840,000 comments between 16 March 2014 and 28 February 2022. For the vaccines topic, we collected conversations posted between 31 January 2020 and 24 October 2021 containing keywords about COVID-19 vaccines, namely Sinopharm, CanSino, Janssen, Johnson&Johnson, Novavax, CureVac, Pfizer, BioNTech, AstraZeneca and Moderna, gathering a total of around 2.6 million comments on videos. Finally, for the news topic, we collected about 20 million comments between 13 February 2006 and 8 February 2022, covering videos and comments from a list of news outlets, limited to the UK and provided by Newsguard (see the ‘Polarization and user leaning attribution’ section).
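
As an illustration of the kind of query involved, the following sketch retrieves top-level comments for a list of videos through the YouTube Data API v3 using the google-api-python-client package; the API key and video IDs are placeholders, and quota handling and reply retrieval are omitted.

```python
# Minimal sketch: collecting top-level comments for a list of videos via the
# YouTube Data API v3. API_KEY and VIDEO_IDS are placeholders.
from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"        # assumption: a valid API key
VIDEO_IDS = ["VIDEO_ID_1"]      # illustrative video IDs

youtube = build("youtube", "v3", developerKey=API_KEY)

def fetch_comments(video_id):
    """Return all top-level comments of a video, following pagination."""
    comments, page_token = [], None
    while True:
        response = youtube.commentThreads().list(
            part="snippet",
            videoId=video_id,
            maxResults=100,
            pageToken=page_token,
            textFormat="plainText",
        ).execute()
        for item in response.get("items", []):
            top = item["snippet"]["topLevelComment"]["snippet"]
            comments.append(
                {"text": top["textDisplay"], "published": top["publishedAt"]}
            )
        page_token = response.get("nextPageToken")
        if not page_token:
            break
    return comments

all_comments = {vid: fetch_comments(vid) for vid in VIDEO_IDS}
```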

Content moderation policies

Content moderation policies are guidelines that online platforms use to monitor the content that users post on their sites. Platforms have different goals and audiences, and their moderation policies may vary greatly, with some placing more emphasis on free expression and others prioritizing safety and community guidelines.

Facebook and YouTube have strict moderation policies prohibiting hate speech, violence and harassment 57 . To address harmful content, Facebook follows a ‘remove, reduce, inform’ strategy and uses a combination of human reviewers and artificial intelligence to enforce its policies 58 . YouTube has a comparable set of community guidelines, covering a wide range of behaviours such as vulgar language 59 and harassment 60 , and, in general, it does not allow hate speech or violence against individuals or groups based on various attributes 61 . To ensure that these guidelines are respected, the platform uses a mix of artificial intelligence algorithms and human reviewers 62 .

Twitter also has a comprehensive content moderation policy and specific rules against hateful conduct 63 , 64 . They use automation 65 and human review in the moderation process 66 . At the date of submission, Twitter’s content policies have remained unchanged since Elon Musk’s takeover, except that they ceased enforcing their COVID-19 misleading information policy on 23 November 2022. Their policy enforcement has faced criticism for inconsistency 67 .

Reddit falls somewhere in between in how strict its moderation policy is. Reddit’s content policy has eight rules, including prohibitions on violence, harassment and promoting hate based on identity or vulnerability 68 , 69 . Reddit relies heavily on user reports and volunteer moderators, and could thus be considered more lenient than Facebook, YouTube and Twitter in enforcing its rules. In October 2022, Reddit announced its intention to update its enforcement practices by applying automation to content moderation 70 .

By contrast, Telegram, Gab and Voat take a more hands-off approach with fewer restrictions on content. Telegram’s guidelines are ambiguous, relying on broad or subjective terms that can lead to different interpretations 71 . Although Telegram states that it may use automated algorithms to analyse messages, it relies mainly on users to report a range of content, such as violence, child abuse, spam, illegal drugs, personal details and pornography 72 . According to Telegram’s privacy policy, reported content may be checked by moderators and, if it is confirmed to violate the terms, temporary or permanent restrictions may be imposed on the account 73 . Gab’s Terms of Service allow all speech protected under the First Amendment to the US Constitution, and unlawful content is removed. Gab states that it does not review material before it is posted on its website and cannot guarantee prompt removal of illegal content after it has been posted 74 . Voat was once known as a ‘free-speech’ alternative to Reddit and allowed content even if it could be considered offensive or controversial 56 .

Usenet is a decentralized online discussion system created in 1979. Owing to its decentralized nature, Usenet has been difficult to moderate effectively, and it has a reputation for being a place where controversial and even illegal content can be posted without consequence. Each individual group on Usenet can have its own moderators, who are responsible for monitoring and enforcing their group’s rules, and there is no single set of rules that applies to the entire platform 75 .

Logarithmic binning and conversation size

Owing to the heavy-tailed distributions of conversation length (Extended Data Fig. 1 ), we used logarithmic binning to plot the figures and perform the analyses. According to its length, each thread of each dataset is assigned to 1 of 21 bins. To ensure a minimum number of points in each bin, we iteratively change the left bound of the last bin so that it contains at least N  = 50 elements (we set N  = 100 in the case of Facebook news, due to its larger size). Specifically, considering threads ordered by increasing length, the size of the largest thread is set equal to that of the second-largest one, and the binning is recalculated; this is repeated until the last bin contains at least N points.

For visualization purposes, we provide a normalization of the logarithmic binning outcome that consists of mapping discrete points into coordinates of the x axis such that the bins correspond to {0, 0.05, 0.1, ..., 0.95, 1}.
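
A minimal sketch of the two steps above (logarithmic binning with iterative adjustment of the last bin, and the normalized bin coordinates used for visualization) is shown below, assuming thread lengths are available as a NumPy array of values greater than or equal to 1; the exact handling of bin edges is our assumption rather than a documented detail.

```python
import numpy as np

def log_binning(thread_lengths, n_bins=21, min_count=50):
    """Assign each thread to 1 of `n_bins` logarithmic bins of its length,
    iteratively clipping the largest length to the next-largest distinct one
    until the last bin holds at least `min_count` threads (a sketch of the
    procedure described above; edge handling is an assumption)."""
    lengths = np.sort(np.asarray(thread_lengths, dtype=float))  # lengths >= 1
    while True:
        edges = np.logspace(np.log10(lengths.min()), np.log10(lengths.max()),
                            n_bins + 1)
        bin_idx = np.clip(np.digitize(lengths, edges, right=True), 1, n_bins)
        distinct = np.unique(lengths)
        if (bin_idx == n_bins).sum() >= min_count or distinct.size < 2:
            return bin_idx, edges
        # Replace the current maximum with the next-largest distinct length
        # and recompute the binning, as described in the text.
        lengths[lengths == distinct[-1]] = distinct[-2]

# Normalized x-axis coordinates used for visualization: {0, 0.05, ..., 1}
norm_coords = np.linspace(0, 1, 21)
```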

To perform this part of the analysis, we select conversations belonging to the [0.7, 1] interval of the normalized logarithmic binning of thread length. This interval ensures that the conversations are sufficiently long and that we have a substantial number of threads. Participation and toxicity trends are obtained by applying a linear binning of 21 elements to the chronologically ordered sequence of comments of each such conversation (that is, of each thread). A breakdown of the resulting datasets is provided in Supplementary Table 2 .
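
For illustration, the following sketch computes participation and toxicity along a single thread using a linear binning of 21 positional intervals. The participation measure used here (fraction of distinct authors among the comments of a bin) and the 0.6 toxicity threshold are assumptions made for the purpose of the example.

```python
import numpy as np
import pandas as pd

def thread_trends(comments, n_bins=21, tox_threshold=0.6):
    """Linear binning of a single thread into `n_bins` positional intervals.
    `comments` is an iterable of dicts with 'timestamp', 'author' and
    'toxicity_score'. Participation is taken here as the fraction of distinct
    authors among the comments of a bin (an assumption), and a comment is
    toxic if its score exceeds `tox_threshold`."""
    df = pd.DataFrame(comments).sort_values("timestamp").reset_index(drop=True)
    df["bin"] = pd.cut(df.index, bins=n_bins, labels=False)
    grouped = df.groupby("bin")
    participation = grouped["author"].nunique() / grouped.size()
    toxicity = grouped["toxicity_score"].apply(
        lambda s: (s > tox_threshold).mean()
    )
    return participation.to_numpy(), toxicity.to_numpy()
```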

Finally, to assess the equality of the growth rates of participation values in toxic and non-toxic threads (see the ‘Conversation evolution and toxicity’ section), we implemented a linear regression model of the form

y = β 0 + β 1 x + β 2 (x · T) + ε,

where y is the participation at the normalized position x of the thread and T is a binary variable equal to 1 for toxic threads and 0 otherwise. The term β 2 therefore accounts for the effect that being a toxic conversation has on the growth of participation. Our results show that β 2 is not significantly different from 0 in most original and validation datasets (Supplementary Tables 8 and 11 ).
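
A minimal sketch of how such a model can be fitted with statsmodels is shown below; the column names and the input file are placeholders, and the specification mirrors the interaction-only form given above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per (thread, bin) with columns
#   'participation' - mean participation in the bin
#   'position'      - normalized bin coordinate in [0, 1]
#   'toxic'         - 1 if the thread is classified as toxic, 0 otherwise
df = pd.read_csv("participation_bins.csv")   # placeholder input

# Interaction-only specification, as in the equation above
model = smf.ols("participation ~ position + position:toxic", data=df).fit()
print(model.summary())   # the coefficient of position:toxic corresponds to beta_2
```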

Toxicity detection and validation of the models used

The problem of detecting toxicity is highly debated, to the point that there is currently no agreement on the very definition of toxic speech 64 , 76 . A toxic comment can be regarded as one that includes obscene or derogatory language 32 , that uses harsh, abusive language and personal attacks 33 , or that contains extremism, violence and harassment 11 , just to give a few examples. Even though toxic speech should, in principle, be distinguished from hate speech, which is commonly more related to targeted attacks that denigrate a person or a group on the basis of attributes such as race, religion, gender, sex, sexual orientation and so on 77 , it may sometimes also be used as an umbrella term 78 , 79 . This lack of agreement directly reflects the challenging and inherently subjective nature of the concept of toxicity.

The complexity of the topic makes it particularly difficult to assess the reliability of natural language processing models for automatic toxicity detection, despite the impressive improvements in the field. Modern natural language processing models, such as Perspective API, are deep learning models that leverage word-embedding techniques to build representations of words as vectors in a high-dimensional space, in which a metric distance should reflect the conceptual distance among words, therefore providing linguistic context. A primary concern regarding toxicity detection models is their limited ability to contextualize conversations 11 , 80 . These models often struggle to incorporate factors beyond the text itself, such as the participant’s personal characteristics, motivations, relationships, group memberships and the overall tone of the discussion 11 . Consequently, what is considered to be toxic content can vary significantly among different groups, such as ethnicities or age groups 81 , leading to potential biases. These biases may stem from the annotators’ backgrounds and the datasets used for training, which might not adequately represent cultural heterogeneity. Moreover, subtle forms of toxic content, such as indirect allusions, memes and inside jokes targeted at specific groups, can be particularly challenging to detect. Word embeddings equip current classifiers with a rich linguistic context, enhancing their ability to recognize a wide range of patterns characteristic of toxic expression, but the requirements for understanding the broader context of a conversation, such as personal characteristics, motivations and group dynamics, remain beyond the scope of automatic detection models. We acknowledge these inherent limitations in our approach.

Nonetheless, reliance on automatic detection models is essential for large-scale analyses of online toxicity like the one conducted in this study. We specifically resort to Perspective API for this task, as it represents the state of the art in automatic toxicity detection, offering a balance between linguistic nuance and scalable analysis capabilities. To define an appropriate classification threshold, we draw from the existing literature 64 , which uses 0.6 as the threshold for considering a comment to be toxic. This threshold can also be considered a reasonable one because, according to the developer guidelines offered by Perspective, it indicates that the majority of a sample of readers, namely 6 out of 10, would perceive that comment as toxic.

Because of the limitations mentioned above (for a criticism of Perspective API, see ref. 82 ), we validate our results by performing a comparative analysis using two other toxicity detectors: Detoxify ( https://github.com/unitaryai/detoxify ), which is similar to Perspective, and IMSYPP, a classifier developed for a European project on hate speech 16 ( https://huggingface.co/IMSyPP ). Supplementary Table 14 reports the percentages of agreement among the three models in classifying 100,000 comments taken randomly from each of our datasets. For Detoxify, we used the same binary toxicity threshold (0.6) as used with Perspective. Although IMSYPP operates on a distinct definition of toxicity, as outlined previously 16 , our comparative analysis shows a general agreement in the results. This alignment, despite the differences in underlying definitions and methodologies, underscores the robustness of our findings across various toxicity detection frameworks.

Moreover, we performed the core analyses of this study using all classifiers on a further, vast and heterogeneous dataset. As shown in Supplementary Figs. 1 and 2 , the resulting trends of toxicity versus conversation size and of user participation versus toxicity are quantitatively very similar. Furthermore, we verified the stability of our findings under different toxicity thresholds. Although the main analyses in this paper use the threshold value recommended by Perspective API, set at 0.6, to minimize false positives, our results remain consistent even when applying a less conservative threshold of 0.5. This is demonstrated in Extended Data Fig. 5 , confirming the robustness of our observations across varying toxicity levels.

For this study, we used the API support for languages prevalent in the European and American continents, including English, Spanish, French, Portuguese, German, Italian, Dutch, Polish, Swedish and Russian. Detoxify also offers multilingual support. However, IMSYPP is limited to English and Italian text, a factor considered in our comparative analysis.
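
For reference, a minimal sketch of scoring a single comment with Perspective API and applying the 0.6 binary threshold is shown below; the API key is a placeholder, and batching, rate limiting and error handling are omitted.

```python
import requests

PERSPECTIVE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
)
API_KEY = "YOUR_API_KEY"  # placeholder

def toxicity_score(text, lang="en"):
    """Return the Perspective API TOXICITY probability for `text`."""
    payload = {
        "comment": {"text": text},
        "languages": [lang],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(
        PERSPECTIVE_URL, params={"key": API_KEY}, json=payload, timeout=10
    )
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]

def is_toxic(text, threshold=0.6):
    # Binary classification: a comment is toxic if its score reaches the threshold.
    return toxicity_score(text) >= threshold
```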

Polarization and user leaning attribution

Our approach to measuring controversy in a conversation is based on estimating the degree of political partisanship among the participants. This measure is closely related to the political science concept of political polarization, that is, the process by which political attitudes diverge from moderate positions and gravitate towards ideological extremes 83 . By quantifying the level of partisanship within discussions, we aim to provide insights into the extent and nature of polarization in online debates. In this context, it is important to distinguish between ideological polarization and affective polarization. Ideological polarization refers to divisions based on political viewpoints, whereas affective polarization is characterized by positive emotions towards members of one’s group and hostility towards those of opposing groups 84 , 85 . Here we focus specifically on ideological polarization; the procedure for attributing user political leanings described below further clarifies this focus.

On online social media, the individual leaning of a user towards a topic can be inferred from the content they produce or from the endorsement they show towards specific content. In this study, we consider the endorsement of users to news outlets whose political leaning has been evaluated by trustworthy external sources. Although not without limitations, which we address below, this is a standard approach that has been used in several studies and has become a common and established practice in the field of social media analysis, owing to its practicality and effectiveness in providing a broad understanding of political dynamics on these platforms 1 , 43 , 86 , 87 , 88 .

We label news outlets with a political score based on the information reported by Media Bias/Fact Check (MBFC) ( https://mediabiasfactcheck.com ), integrated with the equivalent information from Newsguard ( https://www.newsguardtech.com/ ). MBFC is an independent fact-checking organization that rates news outlets on the basis of the reliability and the political bias of the content that they produce and share. Similarly, Newsguard is a tool created by an international team of journalists that provides news outlet trust and political bias scores. Following standard methods used in the literature 1 , 43 , we calculated the individual leaning of a user l   ∈  [−1, 1] as the average of the leaning scores l c   ∈  [−1, 1] attributed to each piece of content they produced or shared, where l c results from a mapping of the news organizations’ political scores provided by MBFC and Newsguard, respectively: [left, centre-left, centre, centre-right, right] to [−1, −0.5, 0, 0.5, 1] and [far left, left, right, far right] to [−1, −0.5, 0.5, 1].

Our datasets have different structures, so we evaluated user leanings in different ways. For Facebook News, we assign a leaning score to users who posted a like at least three times and commented at least three times under news outlet pages that have a political score. For Twitter News, a leaning is assigned to users who posted at least 15 comments under scored news outlet pages. For Twitter Vaccines and Gab, we consider users who shared content produced by scored news outlet pages at least three times.

A limitation of our approach is that engaging with politically aligned content does not always imply agreement; users may interact with opposing viewpoints for critical discussion. However, research indicates that users predominantly share content aligning with their own views, especially in politically charged contexts 87 , 89 , 90 . Moreover, our method captures users who actively express their political leanings, omitting ‘passive’ ones, owing to the lack of available data on users who do not explicitly state their opinions. Nevertheless, analysing active users offers valuable insights into the discourse of those most engaged and influential on social media platforms.
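
A minimal sketch of the leaning attribution, under the assumption that each of a user's scored items has already been matched to an outlet label, is given below; the input format and the minimum-activity parameter are illustrative.

```python
import numpy as np

# Mapping of outlet political labels to scores, as described above.
MBFC_MAP = {"left": -1.0, "centre-left": -0.5, "centre": 0.0,
            "centre-right": 0.5, "right": 1.0}
NEWSGUARD_MAP = {"far left": -1.0, "left": -0.5, "right": 0.5, "far right": 1.0}

def user_leaning(shared_outlet_labels, min_items=3):
    """Average leaning of the outlets a user endorsed or shared.
    `shared_outlet_labels` is a list of (source, label) pairs, e.g.
    ("mbfc", "centre-left"). Users with fewer than `min_items` scored items
    receive no leaning (None), mirroring the activity thresholds above."""
    scores = []
    for source, label in shared_outlet_labels:
        mapping = MBFC_MAP if source == "mbfc" else NEWSGUARD_MAP
        if label in mapping:
            scores.append(mapping[label])
    if len(scores) < min_items:
        return None
    return float(np.mean(scores))
```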

Burst analysis

We applied the Kleinberg burst detection algorithm 46 (see the ‘Controversy and toxicity’ section) to all conversations with at least 50 comments in a dataset. In our analysis, we randomly sample up to 5,000 conversations, each containing a specific number of comments. To ensure the reliability of our data, we exclude conversations with an excessive number of duplicate timestamps, defined as more than 10 consecutive or over 100 within the first 24 h. This criterion helps to mitigate the influence of bots, which could distort the patterns of human activity. Furthermore, we focus on the first 24 h of each thread to analyse streams of comments during their peak activity period. Consequently, Usenet was excluded from this part of the study: its unique usage characteristics render such a time-constrained analysis inappropriate, as its activity patterns do not align with those of the other platforms under consideration.

By reconstructing the density profile of the comment stream, the algorithm divides the entire stream’s interval into subintervals on the basis of their level of intensity. Higher burst levels, labelled as discrete positive values, represent segments of higher activity. To avoid considering flat-density phases, threads with a maximum burst level equal to 2 are excluded from this analysis. To assess whether a higher intensity of comments results in higher comment toxicity, we performed a Mann–Whitney U -test 91 with Bonferroni correction for multiple testing between the distributions of the fraction of toxic comments t i in three intensity phases: during the peak of engagement and at the highest levels before and after it. Extended Data Table 4 shows the corrected P values of each test, at a 0.99 confidence level, with H1 indicated in the column header. An example of the distribution of the frequency of toxic comments in the three phases of a conversation (pre-peak, peak and post-peak) is reported in Fig. 4c .
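
A sketch of the per-thread burst analysis is given below. It assumes an off-the-shelf implementation of Kleinberg's algorithm such as the pybursts package; its kleinberg(timestamps) call returning (level, start, end) intervals is an assumption about that package's interface. The cross-thread comparison then uses SciPy's Mann–Whitney U-test.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Assumption: pybursts.kleinberg(offsets) returns a list of [level, start, end]
# burst intervals; any implementation of Kleinberg's algorithm with a similar
# interface would work.
import pybursts

def peak_phase_toxicity(timestamps, toxic_flags):
    """Fraction of toxic comments before, during and after the burst peak of a
    single thread. Threads with maximum burst level <= 2 are skipped, as in the
    text; timestamps are assumed to be numeric offsets within the first 24 h."""
    t = np.asarray(timestamps, dtype=float)
    f = np.asarray(toxic_flags, dtype=float)
    bursts = pybursts.kleinberg(np.sort(t))
    levels = np.array([b[0] for b in bursts])
    if levels.max() <= 2:
        return None
    _level, start, end = bursts[int(levels.argmax())]
    phases = (f[t < start], f[(t >= start) & (t <= end)], f[t > end])
    return [p.mean() if p.size else np.nan for p in phases]

# Across threads, compare for example peak vs pre-peak toxicity fractions:
# stat, p = mannwhitneyu(peak_fractions, pre_fractions, alternative="greater")
```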

Toxicity detection on Usenet

As discussed in the section on toxicity detection and Perspective API above, automatic detectors derive their understanding of toxicity from the annotated datasets that they are trained on. Perspective API is predominantly trained on recent texts, and its human labellers conform to contemporary cultural norms. Thus, although our dataset dates back no further than the early 1990s, we discuss here the viability of applying Perspective API to Usenet and provide a validation analysis.

Contemporary society, especially in Western contexts, is more sensitive to issues of toxicity, including gender, race and sexual orientation, than it was a few decades ago. This means that some comments identified as toxic today, including those from older platforms like Usenet, might not have been considered as such in the past. However, this discrepancy does not significantly affect our analysis, which is centred on current standards of toxicity. On the other hand, changes in linguistic features may have some repercussions: words and locutions that were frequently used in the 1990s may appear only sparsely in today’s language, potentially making Perspective less effective in classifying short texts that contain them. We therefore evaluated the impact that such a scenario could have on our results.

In light of the above considerations, we consider texts labelled as toxic to be correctly classified; instead, we assume that there is a fixed probability p that a comment may be incorrectly labelled as non-toxic. Consequently, we randomly designate a proportion p of non-toxic comments, relabel them as toxic and compute the toxicity versus conversation size trend (Fig. 2 ) on the altered dataset for various values of p. Specifically, for each value, we simulate 500 different trends, collecting their regression slopes to obtain a null distribution. To assess whether the probability of error could lead to significant differences in the observed trend, we compute the fraction f of slopes lying outside the interval (−| s |, | s |), where s is the slope of the observed trend. We report the results in Supplementary Table 9 for different values of p. In agreement with our previous analysis, we assume that the slope differs significantly from the ones obtained from randomized data if f is less than 0.05.

We observed that only the Usenet Talk dataset shows sensitivity to small error probabilities, and the others do not show a significant difference. Consequently, our results indicate that Perspective API is suitable for application to Usenet data in our analyses, notwithstanding the potential linguistic and cultural shifts that might affect the classifier’s reliability with older texts.
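
A sketch of this relabelling experiment is shown below, assuming that per-comment toxicity labels and the (binned) size of each comment's thread are available as flat arrays; the slope is estimated with a simple least-squares fit, which is a simplification of the trend estimation used above.

```python
import numpy as np

def relabel_robustness(toxic_flags, sizes, observed_slope, p,
                       n_sim=500, rng=None):
    """Flip a fraction `p` of non-toxic comments to toxic, recompute the
    toxicity-vs-size slope `n_sim` times, and return the fraction of simulated
    slopes lying outside (-|s|, |s|), where s is the observed slope."""
    rng = np.random.default_rng(rng)
    flags = np.asarray(toxic_flags, dtype=bool)
    sizes = np.asarray(sizes, dtype=float)
    slopes = []
    for _ in range(n_sim):
        flipped = flags.copy()
        non_toxic = np.flatnonzero(~flags)
        k = int(p * non_toxic.size)
        flipped[rng.choice(non_toxic, size=k, replace=False)] = True
        # Mean toxicity per size bin, then a simple least-squares slope
        bins = np.unique(sizes)
        mean_tox = np.array([flipped[sizes == b].mean() for b in bins])
        slopes.append(np.polyfit(bins, mean_tox, 1)[0])
    slopes = np.asarray(slopes)
    return float((np.abs(slopes) > abs(observed_slope)).mean())
```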

Toxicity of short conversations

Our study focuses on the relationship between user participation and the toxicity of conversations, particularly in engaged or prolonged discussions. A potential concern is that concentrating on longer threads overlooks conversations that terminate quickly because of early toxicity, potentially biasing our analysis. To address this, we analysed shorter conversations, comprising 6 to 20 comments, in each dataset. In particular, we computed the distributions of the toxicity scores of the first and last three comments in each thread. This approach helps to ensure that our analysis accounts for a range of conversation lengths and patterns of toxicity development, providing a more comprehensive understanding of the dynamics at play. As shown in Supplementary Fig. 3 , for each dataset the two distributions are highly similar, meaning that, in short conversations, the last comments are not significantly more toxic than the initial ones; the potential effects mentioned above therefore do not undermine our conclusions.

Regarding our analysis of longer threads, we note that the participation measure can give rise to similar trends in rather different situations. For example, high participation can be achieved because many users take part in the conversation, but also within small groups of users in which everyone contributes equally over time; likewise, in very large discussions, the contributions of individual outliers may remain hidden. When measuring participation, these and other borderline cases may not be distinguishable from the statistically most likely discussion dynamics, but this lack of discriminatory power ultimately has no implications for our findings or for the validity of the conclusions that we draw.
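
A minimal sketch of the short-conversation check, assuming each thread is available as a chronologically ordered list of per-comment toxicity scores:

```python
import numpy as np

def first_last_toxicity(threads):
    """Collect the toxicity scores of the first and last three comments of
    short threads (6-20 comments). `threads` is an iterable of chronologically
    ordered per-comment toxicity score lists."""
    first, last = [], []
    for scores in threads:
        if 6 <= len(scores) <= 20:
            first.extend(scores[:3])
            last.extend(scores[-3:])
    return np.asarray(first), np.asarray(last)

# The two arrays can then be compared, for example by plotting their
# distributions (as in Supplementary Fig. 3) or with a two-sample test.
```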

Reporting summary

Further information on research design is available in the  Nature Portfolio Reporting Summary linked to this article.

Data availability

Facebook, Twitter and YouTube data are made available in accordance with their respective terms of use. IDs of comments used in this work are provided at Open Science Framework ( https://doi.org/10.17605/osf.io/fq5dy ). For the remaining platforms (Gab, Reddit, Telegram, Usenet and Voat), all of the necessary information to recreate the datasets used in this study can be found in the ‘Data collection’ section.

Code availability

The code used for the analyses presented in the Article is available at Open Science Framework ( https://doi.org/10.17605/osf.io/fq5dy ). The repository includes dummy datasets to illustrate the required data format and make the code run.

Cinelli, M., Morales, G. D. F., Galeazzi, A., Quattrociocchi, W. & Starnini, M. The echo chamber effect on social media. Proc. Natl Acad. Sci. USA 118 , e2023301118 (2021).


Tucker, J. A. et al. Social media, political polarization, and political disinformation: a review of the scientific literature. Preprint at SSRN https://doi.org/10.2139/ssrn.3144139 (2018).

González-Bailón, S. et al. Asymmetric ideological segregation in exposure to political news on Facebook. Science 381 , 392–398 (2023).


Guess, A. et al. How do social media feed algorithms affect attitudes and behavior in an election campaign? Science 381 , 398–404 (2023).


Del Vicario, M. et al. The spreading of misinformation online. Proc. Natl Acad. Sci. USA 113 , 554–559 (2016).


Bakshy, E., Messing, S. & Adamic, L. A. Exposure to ideologically diverse news and opinion on Facebook. Science 348 , 1130–1132 (2015).


Bail, C. A. et al. Exposure to opposing views on social media can increase political polarization. Proc. Natl Acad. Sci. USA 115 , 9216–9221 (2018).


Nyhan, B. et al. Like-minded sources on Facebook are prevalent but not polarizing. Nature 620 , 137–144 (2023).

Guess, A. et al. Reshares on social media amplify political news but do not detectably affect beliefs or opinions. Science 381 , 404–408 (2023).

Castaño-Pulgaŕın, S. A., Suárez-Betancur, N., Vega, L. M. T. & López, H. M. H. Internet, social media and online hate speech. Systematic review. Aggress. Viol. Behav. 58 , 101608 (2021).


Sheth, A., Shalin, V. L. & Kursuncu, U. Defining and detecting toxicity on social media: context and knowledge are key. Neurocomputing 490 , 312–318 (2022).

Lupu, Y. et al. Offline events and online hate. PLoS ONE 18 , e0278511 (2023).

Gentzkow, M. & Shapiro, J. M. Ideological segregation online and offline. Q. J. Econ. 126 , 1799–1839 (2011).

Aichner, T., Grünfelder, M., Maurer, O. & Jegeni, D. Twenty-five years of social media: a review of social media applications and definitions from 1994 to 2019. Cyberpsychol. Behav. Social Netw. 24 , 215–222 (2021).

Lazer, D. M. et al. The science of fake news. Science 359 , 1094–1096 (2018).

Cinelli, M. et al. Dynamics of online hate and misinformation. Sci. Rep. 11 , 22083 (2021).

González-Bailón, S. & Lelkes, Y. Do social media undermine social cohesion? A critical review. Soc. Issues Pol. Rev. 17 , 155–180 (2023).

Roozenbeek, J. & Zollo, F. Democratize social-media research—with access and funding. Nature 612 , 404–404 (2022).


Dutton, W. H. Network rules of order: regulating speech in public electronic fora. Media Cult. Soc. 18 , 269–290 (1996).

Papacharissi, Z. Democracy online: civility, politeness, and the democratic potential of online political discussion groups. N. Media Soc. 6 , 259–283 (2004).

Coe, K., Kenski, K. & Rains, S. A. Online and uncivil? Patterns and determinants of incivility in newspaper website comments. J. Commun. 64 , 658–679 (2014).

Anderson, A. A., Brossard, D., Scheufele, D. A., Xenos, M. A. & Ladwig, P. The “nasty effect:” online incivility and risk perceptions of emerging technologies. J. Comput. Med. Commun. 19 , 373–387 (2014).

Garrett, R. K. Echo chambers online?: Politically motivated selective exposure among internet news users. J. Comput. Med. Commun. 14 , 265–285 (2009).

Del Vicario, M. et al. Echo chambers: emotional contagion and group polarization on Facebook. Sci. Rep. 6 , 37825 (2016).

Garimella, K., De Francisci Morales, G., Gionis, A. & Mathioudakis, M. Echo chambers, gatekeepers, and the price of bipartisanship. In Proc. 2018 World Wide Web Conference , 913–922 (International World Wide Web Conferences Steering Committee, 2018).

Johnson, N. et al. Hidden resilience and adaptive dynamics of the global online hate ecology. Nature 573 , 261–265 (2019).

Fortuna, P. & Nunes, S. A survey on automatic detection of hate speech in text. ACM Comput. Surv. 51 , 85 (2018).

Phadke, S. & Mitra, T. Many faced hate: a cross platform study of content framing and information sharing by online hate groups. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems 1–13 (Association for Computing Machinery, 2020).

Xia, Y., Zhu, H., Lu, T., Zhang, P. & Gu, N. Exploring antecedents and consequences of toxicity in online discussions: a case study on Reddit. Proc. ACM Hum. Comput. Interact. 4 , 108 (2020).

Sipka, A., Hannak, A. & Urman, A. Comparing the language of qanon-related content on Parler, GAB, and Twitter. In Proc. 14th ACM Web Science Conference 2022 411–421 (Association for Computing Machinery, 2022).

Fortuna, P., Soler, J. & Wanner, L. Toxic, hateful, offensive or abusive? What are we really classifying? An empirical analysis of hate speech datasets. In Proc. 12th Language Resources and Evaluation Conference (eds Calzolari, E. et al.) 6786–6794 (European Language Resources Association, 2020).

Davidson, T., Warmsley, D., Macy, M. & Weber, I. Automated hate speech detection and the problem of offensive language. In Proc. International AAAI Conference on Web and Social Media 11 (Association for the Advancement of Artificial Intelligence, 2017).

Kolhatkar, V. et al. The SFU opinion and comments corpus: a corpus for the analysis of online news comments. Corpus Pragmat. 4 , 155–190 (2020).


Lees, A. et al. A new generation of perspective API: efficient multilingual character-level transformers. In KDD'22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining 3197–3207 (Association for Computing Machinery, 2022).

Vidgen, B. & Derczynski, L. Directions in abusive language training data, a systematic review: garbage in, garbage out. PLoS ONE 15 , e0243300 (2020).

Ross, G. J. & Jones, T. Understanding the heavy-tailed dynamics in human behavior. Phys. Rev. E 91 , 062809 (2015).


Choi, D., Chun, S., Oh, H., Han, J. & Kwon, T. T. Rumor propagation is amplified by echo chambers in social media. Sci. Rep. 10 , 310 (2020).

Beel, J., Xiang, T., Soni, S. & Yang, D. Linguistic characterization of divisive topics online: case studies on contentiousness in abortion, climate change, and gun control. In Proc. International AAAI Conference on Web and Social Media Vol. 16, 32–42 (Association for the Advancement of Artificial Intelligence, 2022).

Saveski, M., Roy, B. & Roy, D. The structure of toxic conversations on Twitter. In Proc. Web Conference 2021 (eds Leskovec, J. et al.) 1086–1097 (Association for Computing Machinery, 2021).

Juul, J. L. & Ugander, J. Comparing information diffusion mechanisms by matching on cascade size. Proc. Natl Acad. Sci. USA 118 , e2100786118 (2021).

Fariello, G., Jemielniak, D. & Sulkowski, A. Does Godwin’s law (rule of Nazi analogies) apply in observable reality? An empirical study of selected words in 199 million Reddit posts. N. Media Soc. 26 , 14614448211062070 (2021).

Qiu, J., Lin, Z. & Shuai, Q. Investigating the opinions distribution in the controversy on social media. Inf. Sci. 489 , 274–288 (2019).

Garimella, K., Morales, G. D. F., Gionis, A. & Mathioudakis, M. Quantifying controversy on social media. ACM Trans. Soc. Comput. 1 , 3 (2018).

NLPTown. bert-base-multilingual-uncased-sentiment, huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment (2023).

Ta, H. T., Rahman, A. B. S., Najjar, L. & Gelbukh, A. Transfer Learning from Multilingual DeBERTa for Sexism Identification CEUR Workshop Proceedings Vol. 3202 (CEUR-WS, 2022).

Kleinberg, J. Bursty and hierarchical structure in streams. Data Min. Knowl. Discov. 7 , 373–397 (2003).


Zollo, F. et al. Debunking in a world of tribes. PLoS ONE 12 , e0181821 (2017).


Albrecht, D. Vaccination, politics and COVID-19 impacts. BMC Publ. Health 22 , 96 (2022).


Falkenberg, M. et al. Growing polarization around climate change on social media. Nat. Clim. Change 12 , 1114–1121 (2022).

Schmidt, A. L., Zollo, F., Scala, A., Betsch, C. & Quattrociocchi, W. Polarization of the vaccination debate on Facebook. Vaccine 36 , 3606–3612 (2018).

Schmidt, A. L. et al. Anatomy of news consumption on Facebook. Proc. Natl Acad. Sci. USA 114 , 3035–3039 (2017).

Del Vicario, M., Zollo, F., Caldarelli, G., Scala, A. & Quattrociocchi, W. Mapping social dynamics on Facebook: the brexit debate. Soc. Netw. 50 , 6–16 (2017).

Hunnicutt, T. & Dave, P. Gab.com goes offline after Pittsburgh synagogue shooting. Reuters , www.reuters.com/article/uk-pennsylvania-shooting-gab-idUKKCN1N20QN (29 October 2018).

Valensise, C. M. et al. Lack of evidence for correlation between COVID-19 infodemic and vaccine acceptance. Preprint at arxiv.org/abs/2107.07946 (2021).

Quattrociocchi, A., Etta, G., Avalle, M., Cinelli, M. & Quattrociocchi, W. in Social Informatics (eds Hopfgartner, F. et al.) 245–256 (Springer, 2022).

Mekacher, A. & Papasavva, A. “I can’t keep it up” a dataset from the defunct voat.co news aggregator. In Proc. International AAAI Conference on Web and Social Media Vol. 16, 1302–1311 (AAAI, 2022).

Facebook Community Standards , transparency.fb.com/policies/community-standards/hate-speech/ (Facebook, 2023).

Rosen, G. & Lyons, T. Remove, reduce, inform: new steps to manage problematic content. Meta , about.fb.com/news/2019/04/remove-reduce-inform-new-steps/ (10 April 2019).

Vulgar Language Policy , support.google.com/youtube/answer/10072685? (YouTube, 2023).

Harassment & Cyberbullying Policies , support.google.com/youtube/answer/2802268 (YouTube, 2023).

Hate Speech Policy , support.google.com/youtube/answer/2801939 (YouTube, 2023).

How Does YouTube Enforce Its Community Guidelines? , www.youtube.com/intl/enus/howyoutubeworks/policies/community-guidelines/enforcing-community-guidelines (YouTube, 2023).

The Twitter Rules , help.twitter.com/en/rules-and-policies/twitter-rules (Twitter, 2023).

Hateful Conduct , help.twitter.com/en/rules-and-policies/hateful-conduct-policy (Twitter, 2023).

Gorwa, R., Binns, R. & Katzenbach, C. Algorithmic content moderation: technical and political challenges in the automation of platform governance. Big Data Soc. 7 , 2053951719897945 (2020).

Our Range of Enforcement Options , help.twitter.com/en/rules-and-policies/enforcement-options (Twitter, 2023).

Elliott, V. & Stokel-Walker, C. Twitter’s moderation system is in tatters. WIRED (17 November 2022).

Reddit Content Policy , www.redditinc.com/policies/content-policy (Reddit, 2023).

Promoting Hate Based on Identity or Vulnerability , www.reddithelp.com/hc/en-us/articles/360045715951 (Reddit, 2023).

Malik, A. Reddit acqui-hires team from ML content moderation startup Oterlu. TechCrunch , tcrn.ch/3yeS2Kd (4 October 2022).

Terms of Service , telegram.org/tos (Telegram, 2023).

Durov, P. The rules of @telegram prohibit calls for violence and hate speech. We rely on our users to report public content that violates this rule. Twitter , twitter.com/durov/status/917076707055751168?lang=en (8 October 2017).

Telegram Privacy Policy , telegram.org/privacy (Telegram, 2023).

Terms of Service , gab.com/about/tos (Gab, 2023).

Salzenberg, C. & Spafford, G. What is Usenet? , www0.mi.infn.it/ ∼ calcolo/Wis usenet.html (1995).

Castelle, M. The linguistic ideologies of deep abusive language classification. In Proc. 2nd Workshop on Abusive Language Online (ALW2) (eds Fišer, D. et al.) 160–170, aclanthology.org/W18-5120 (Association for Computational Linguistics, 2018).

Tontodimamma, A., Nissi, E. & Sarra, A. E. A. Thirty years of research into hate speech: topics of interest and their evolution. Scientometrics 126 , 157–179 (2021).

Sap, M. et al. Annotators with attitudes: how annotator beliefs and identities bias toxic language detection. In Proc. 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (eds. Carpuat, M. et al.) 5884–5906 (Association for Computational Linguistics, 2022).

Pavlopoulos, J., Sorensen, J., Dixon, L., Thain, N. & Androutsopoulos, I. Toxicity detection: does context really matter? In Proc. 58th Annual Meeting of the Association for Computational Linguistics (eds Jurafsky, D. et al.) 4296–4305 (Association for Computational Linguistics, 2020).

Yin, W. & Zubiaga, A. Hidden behind the obvious: misleading keywords and implicitly abusive language on social media. Online Soc. Netw. Media 30 , 100210 (2022).

Sap, M., Card, D., Gabriel, S., Choi, Y. & Smith, N. A. The risk of racial bias in hate speech detection. In Proc. 57th Annual Meeting of the Association for Computational Linguistics (eds Kohonen, A. et al.) 1668–1678 (Association for Computational Linguistics, 2019).

Rosenblatt, L., Piedras, L. & Wilkins, J. Critical perspectives: a benchmark revealing pitfalls in PerspectiveAPI. In Proc. Second Workshop on NLP for Positive Impact (NLP4PI) (eds Biester, L. et al.) 15–24 (Association for Computational Linguistics, 2022).

DiMaggio, P., Evans, J. & Bryson, B. Have American’s social attitudes become more polarized? Am. J. Sociol. 102 , 690–755 (1996).

Fiorina, M. P. & Abrams, S. J. Political polarization in the American public. Annu. Rev. Polit. Sci. 11 , 563–588 (2008).

Iyengar, S., Gaurav, S. & Lelkes, Y. Affect, not ideology: a social identity perspective on polarization. Publ. Opin. Q. 76 , 405–431 (2012).

Cota, W., Ferreira, S. & Pastor-Satorras, R. E. A. Quantifying echo chamber effects in information spreading over political communication networks. EPJ Data Sci. 8 , 38 (2019).

Bessi, A. et al. Users polarization on Facebook and Youtube. PLoS ONE 11 , e0159641 (2016).

Bessi, A. et al. Science vs conspiracy: collective narratives in the age of misinformation. PLoS ONE 10 , e0118093 (2015).

Himelboim, I., McCreery, S. & Smith, M. Birds of a feather tweet together: integrating network and content analyses to examine cross-ideology exposure on Twitter. J. Comput. Med. Commun. 18 , 40–60 (2013).

An, J., Quercia, D. & Crowcroft, J. Partisan sharing: Facebook evidence and societal consequences. In Proc. Second ACM Conference on Online Social Networks, COSN ′ 14 13–24 (Association for Computing Machinery, 2014).

Mann, H. B. & Whitney, D. R. On a test of whether one of two random variables is stochastically larger than the other. Ann. Math. Stat. 18 , 50–60 (1947).


Acknowledgements

We thank M. Samory for discussions; T. Quandt and Z. Zhang for suggestions during the review process; and Geronimo Stilton and the Hypnotoad for inspiring the data analysis and result interpretation. The work is supported by IRIS Infodemic Coalition (UK government, grant no. SCH-00001-3391), SERICS (PE00000014) under the NRRP MUR program funded by the EU NextGenerationEU, project CRESP from the Italian Ministry of Health under the program CCM 2022, PON project ‘Ricerca e Innovazione’ 2014-2020, and PRIN Project MUSMA for the Italian Ministry of University and Research (MUR) through PRIN 2022, CUP G53D23002930006, and EU Next-Generation EU, M4 C2 I1.1.

Author information

These authors contributed equally: Michele Avalle, Niccolò Di Marco, Gabriele Etta

Authors and Affiliations

Department of Computer Science, Sapienza University of Rome, Rome, Italy

Michele Avalle, Niccolò Di Marco, Gabriele Etta, Shayan Alipour, Lorenzo Alvisi, Matteo Cinelli & Walter Quattrociocchi

Department of Social Sciences and Economics, Sapienza University of Rome, Rome, Italy

Emanuele Sangiorgio

Department of Communication and Social Research, Sapienza University of Rome, Rome, Italy

Anita Bonetti

Institute of Complex Systems, CNR, Rome, Italy

Antonio Scala

Department of Mathematics, City University of London, London, UK

Andrea Baronchelli

The Alan Turing Institute, London, UK


Contributions

Conception and design: W.Q., M.A., M.C., G.E. and N.D.M. Data collection: G.E. and N.D.M. with collaboration from M.C., M.A. and S.A. Data analysis: G.E., N.D.M., M.A., M.C., W.Q., E.S., A. Bonetti, A. Baronchelli and A.S. Code writing: G.E. and N.D.M. with collaboration from M.A., E.S., S.A. and M.C. All of the authors provided critical feedback and helped to shape the research, analysis and manuscript, and contributed to the preparation of the manuscript.

Corresponding authors

Correspondence to Matteo Cinelli or Walter Quattrociocchi .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Peer review

Peer review information.

Nature thanks Thorsten Quandt, Ziqi Zhang and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data figures and tables

Extended Data Fig. 1 General characteristics of online conversations.

a . Distributions of conversation length (number of comments in a thread). b . Distributions of the time duration (days) of user activity on a platform for each platform and each topic. c . Time duration (days) distributions of threads. Colour-coded legend on the side.

Extended Data Fig. 2 Extremely toxic authors and conversations are rare.

a . Complementary cumulative distribution functions (CCDFs) of the toxicity of authors who posted more than 10 comments. Toxicity is defined as usual as the fraction of toxic comments over the total of comments posted by a user. b . CCDFs of the toxicity of conversations containing more than 10 comments. Colour-coded legend on the side.

Extended Data Fig. 3 User toxicity as conversations evolve.

Mean fraction of toxic comments as conversations progress. The x-axis represents the normalized position of comment intervals in the threads. For each dataset, toxicity is computed in the thread size interval [0.7−1] (see main text and Tab. S 2 in SI). Trends are reported with their 95% confidence interval. Colour-coded legend on the side.

Extended Data Fig. 4 Toxicity is not associated with conversation lifetime.

Mean toxicity of a . users versus their time of permanence in the dataset and b . threads versus their time duration. Trends are reported with their 95% confidence interval and they are reported using a normalized log-binning. Colour-coded legend on the side.

Extended Data Fig. 5 Results hold for a different toxicity threshold.

Core analyses presented in the paper repeated employing a lower (0.5) toxicity binary classification threshold. a . Mean fraction of toxic comments in conversations versus conversation size, for each dataset (see Fig. 2 ). Trends are reported with their 95% confidence interval. b . Pearson’s correlation coefficients between user participation and toxicity trends for each dataset. c . Pearson’s correlation coefficients between users’ participation in toxic and non-toxic thread sets, for each dataset. d . Boxplot of the distribution of toxicity ( n  = 25, min = −0.016, max = 0.020, lower whisker = −0.005, Q 1 = − 0.005, Q 2  = 0.004, Q 3  = 0.012, upper whisker = 0.020) and participation ( n  = 25, min = −0.198, max = −0.022, lower whisker = −0.198, Q 1 = − 0.109, Q 2  = − 0.070, Q 3  = − 0.049, upper whisker = −0.022) trend slopes for all datasets, as resulting from linear regression. The results of the relative Mann-Kendall tests for trend assessment are shown in Extended Data Table 5 .

Supplementary information

Supplementary information.

Supplementary Information 1–4, including details regarding data collection for validation dataset, Supplementary Figs. 1–3, Supplementary Tables 1–17 and software and coding specifications.

Reporting Summary

Rights and permissions.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Avalle, M., Di Marco, N., Etta, G. et al. Persistent interaction patterns across social media platforms and over time. Nature (2024). https://doi.org/10.1038/s41586-024-07229-y


Received : 30 April 2023

Accepted : 22 February 2024

Published : 20 March 2024

DOI : https://doi.org/10.1038/s41586-024-07229-y


Science News

Social media harms teens’ mental health, mounting evidence shows. What now?

Understanding what is going on in teens’ minds is necessary for targeted policy suggestions

A teen scrolls through social media alone on her phone.

Most teens use social media, often for hours on end. Some social scientists are confident that such use is harming their mental health. Now they want to pinpoint what explains the link.

Carol Yepes/Getty Images


By Sujata Gupta

February 20, 2024 at 7:30 am

In January, Mark Zuckerberg, CEO of Facebook’s parent company Meta, appeared at a congressional hearing to answer questions about how social media potentially harms children. Zuckerberg opened by saying: “The existing body of scientific work has not shown a causal link between using social media and young people having worse mental health.”

But many social scientists would disagree with that statement. In recent years, studies have started to show a causal link between teen social media use and reduced well-being or mood disorders, chiefly depression and anxiety.

Ironically, one of the most cited studies into this link focused on Facebook.

Researchers delved into whether the platform’s introduction across college campuses in the mid 2000s increased symptoms associated with depression and anxiety. The answer was a clear yes , says MIT economist Alexey Makarin, a coauthor of the study, which appeared in the November 2022 American Economic Review . “There is still a lot to be explored,” Makarin says, but “[to say] there is no causal evidence that social media causes mental health issues, to that I definitely object.”

The concern, and the studies, come from statistics showing that social media use in teens ages 13 to 17 is now almost ubiquitous. Two-thirds of teens report using TikTok, and some 60 percent of teens report using Instagram or Snapchat, a 2022 survey found. (Only 30 percent said they used Facebook.) Another survey showed that girls, on average, allot roughly 3.4 hours per day to TikTok, Instagram and Facebook, compared with roughly 2.1 hours among boys. At the same time, more teens are showing signs of depression than ever, especially girls ( SN: 6/30/23 ).

As more studies show a strong link between these phenomena, some researchers are starting to shift their attention to possible mechanisms. Why does social media use seem to trigger mental health problems? Why are those effects unevenly distributed among different groups, such as girls or young adults? And can the positives of social media be teased out from the negatives to provide more targeted guidance to teens, their caregivers and policymakers?

“You can’t design good public policy if you don’t know why things are happening,” says Scott Cunningham, an economist at Baylor University in Waco, Texas.

Increasing rigor

Concerns over the effects of social media use in children have been circulating for years, resulting in a massive body of scientific literature. But those mostly correlational studies could not show if teen social media use was harming mental health or if teens with mental health problems were using more social media.

Moreover, the findings from such studies were often inconclusive, or the effects on mental health so small as to be inconsequential. In one study that received considerable media attention, psychologists Amy Orben and Andrew Przybylski combined data from three surveys to see if they could find a link between technology use, including social media, and reduced well-being. The duo gauged the well-being of over 355,000 teenagers by focusing on questions around depression, suicidal thinking and self-esteem.

Digital technology use was associated with a slight decrease in adolescent well-being , Orben, now of the University of Cambridge, and Przybylski, of the University of Oxford, reported in 2019 in Nature Human Behaviour . But the duo downplayed that finding, noting that researchers have observed similar drops in adolescent well-being associated with drinking milk, going to the movies or eating potatoes.

Holes have begun to appear in that narrative thanks to newer, more rigorous studies.

In one longitudinal study, researchers — including Orben and Przybylski — used survey data on social media use and well-being from over 17,400 teens and young adults to look at how individuals’ responses to a question gauging life satisfaction changed between 2011 and 2018. And they dug into how the responses varied by gender, age and time spent on social media.

Social media use was associated with a drop in well-being among teens during certain developmental periods, chiefly puberty and young adulthood, the team reported in 2022 in Nature Communications . That translated to lower well-being scores around ages 11 to 13 for girls and ages 14 to 15 for boys. Both groups also reported a drop in well-being around age 19. Moreover, among the older teens, the team found evidence for the Goldilocks Hypothesis: the idea that both too much and too little time spent on social media can harm mental health.

“There’s hardly any effect if you look over everybody. But if you look at specific age groups, at particularly what [Orben] calls ‘windows of sensitivity’ … you see these clear effects,” says L.J. Shrum, a consumer psychologist at HEC Paris who was not involved with this research. His review of studies related to teen social media use and mental health is forthcoming in the Journal of the Association for Consumer Research.

Cause and effect

That longitudinal study hints at causation, researchers say. But one of the clearest ways to pin down cause and effect is through natural or quasi-experiments. For these in-the-wild experiments, researchers must identify situations where the rollout of a societal “treatment” is staggered across space and time. They can then compare outcomes among members of the group who received the treatment to those still in the queue — the control group.

That was the approach Makarin and his team used in their study of Facebook. The researchers homed in on the staggered rollout of Facebook across 775 college campuses from 2004 to 2006. They combined that rollout data with student responses to the National College Health Assessment, a widely used survey of college students’ mental and physical health.

The team then sought to understand if those survey questions captured diagnosable mental health problems. Specifically, they had roughly 500 undergraduate students respond to questions both in the National College Health Assessment and in validated screening tools for depression and anxiety. They found that mental health scores on the assessment predicted scores on the screenings. That suggested that a drop in well-being on the college survey was a good proxy for a corresponding increase in diagnosable mental health disorders. 

Compared with campuses that had not yet gained access to Facebook, college campuses with Facebook experienced a 2 percentage point increase in the number of students who met the diagnostic criteria for anxiety or depression, the team found.

When it comes to showing a causal link between social media use in teens and worse mental health, “that study really is the crown jewel right now,” says Cunningham, who was not involved in that research.

A need for nuance

The social media landscape today is vastly different than the landscape of 20 years ago. Facebook is now optimized for maximum addiction, Shrum says, and other newer platforms, such as Snapchat, Instagram and TikTok, have since copied and built on those features. Paired with the ubiquity of social media in general, the negative effects on mental health may well be larger now.

Moreover, social media research tends to focus on young adults — an easier cohort to study than minors. That needs to change, Cunningham says. “Most of us are worried about our high school kids and younger.” 

And so, researchers must pivot accordingly. Crucially, simple comparisons of social media users and nonusers no longer make sense. As Orben and Przybylski’s 2022 work suggested, a teen not on social media might well feel worse than one who briefly logs on. 

Researchers must also dig into why, and under what circumstances, social media use can harm mental health, Cunningham says. Explanations for this link abound. For instance, social media is thought to crowd out other activities or increase people’s likelihood of comparing themselves unfavorably with others. But big data studies, with their reliance on existing surveys and statistical analyses, cannot address those deeper questions. “These kinds of papers, there’s nothing you can really ask … to find these plausible mechanisms,” Cunningham says.

One ongoing effort to understand social media use from this more nuanced vantage point is the SMART Schools project out of the University of Birmingham in England. Pedagogical expert Victoria Goodyear and her team are comparing mental and physical health outcomes among children who attend schools that have restricted cell phone use to those attending schools without such a policy. The researchers described the protocol of that study of 30 schools and over 1,000 students in the July BMJ Open.

Goodyear and colleagues are also combining that natural experiment with qualitative research. They met with 36 five-person focus groups each consisting of all students, all parents or all educators at six of those schools. The team hopes to learn how students use their phones during the day, how usage practices make students feel, and what the various parties think of restrictions on cell phone use during the school day.

Talking to teens and those in their orbit is the best way to get at the mechanisms by which social media influences well-being — for better or worse, Goodyear says. Moving beyond big data to this more personal approach, however, takes considerable time and effort. “Social media has increased in pace and momentum very, very quickly,” she says. “And research takes a long time to catch up with that process.”

Until that catch-up occurs, though, researchers cannot dole out much advice. “What guidance could we provide to young people, parents and schools to help maintain the positives of social media use?” Goodyear asks. “There’s not concrete evidence yet.”


International Conference on Human-Computer Interaction

HCI International 2015 - Posters’ Extended Abstracts (HCI 2015), pp. 91–96

Social Media Use and Impact on Interpersonal Communication

Yerika Jimenez and Patricia Morreale

Part of the book series: Communications in Computer and Information Science (CCIS, volume 529)

This research paper presents the findings of a research project that investigated how young adult interpersonal communications have changed since the adoption of social media. Specifically, the research focused on determining whether using social media had a beneficial or an adverse effect on the development of interaction and communication skills of young adults. Results from interviews reveal a negative impact on young adults' communication and social skills. In this paper, young adult preferences in social media are also explored, to answer the question: does social media usage affect the development of interaction and communication skills for young adults and set a basis for future adult communication behaviors?

  • Social media
  • Social interaction
  • Interpersonal communications
  • Young adults


1 Introduction

Human interaction has changed drastically in the last 20 years, not only due to the introduction of the Internet, but also due to social media and online communities. These social media options and communities have grown from tools used simply to communicate on a private network into a strong culture that almost all individuals use to communicate with others all over the world. We concentrate on the impact that social media has on human communication and interaction among young adults, primarily college students. In today’s society, powerful social media platforms such as Myspace, Facebook, Twitter, Instagram (IG), and Pinterest are the result of an evolution that is changing how humans communicate with each other. The big question we asked ourselves was: how much has social media really impacted the way that humans communicate and interact with each other, and how significant is the change in interpersonal interaction among young adults in the United States today?

The motivation behind this research has been personal experience with interaction and communication with friends and family; it had become difficult, sometimes even rare, to have a one-on-one conversation with them without them glancing at or interacting with their phones. Has social interaction changed since the introduction of advanced technology and, primarily, social media? The research data collected in this study support the conclusion that many participants’ personal communication has decreased due to social media influence, which encourages them to have online conversations as opposed to face-to-face, in-person conversations.

2 Related Work

The question of how social media affects social and human interaction in our society is being actively researched and studied. A literature review highlights the positive and negative aspects of social media interaction, as researchers work to understand its current and future effects. A study done by Keith Oatley, an emeritus professor of cognitive psychology at the University of Toronto, suggests that the brain may interpret digital interaction in the same manner as in-person interaction, while others maintain that differences are growing between how we perceive one another online as opposed to in reality [1]. This means that young adults can interpret online communication as being real one-on-one communication because the brain processes that information as a reality. Another study revealed that online interaction helps with the ability to relate to others, tolerate differing viewpoints, and express thoughts and feelings in a healthy way [2, 3]. Moreover, a study executed by the National Institutes of Health found that youths with strong, positive face-to-face relationships may be those most frequently using social media as an additional venue to interact with their peers [4].

In contrast, research reveals that individuals with many friends may appear to be focusing too much on Facebook, making friends out of desperation rather than popularity, spending a great deal of time on their computer ostensibly trying to make connections in a computer-mediated environment where they feel more comfortable rather than in face-to-face social interaction [ 5 ]. Moreover, a study among college freshman revealed that social media prevents people from being social and networking in person [ 6 ].

3 Experimental Design

This research study was divided into two parts during the academic year 2013–2014. Part one, conducted during fall semester 2013, had the purpose of understanding how and why young adults use their mobile devices, as well as how the students describe and identify with their mobile devices. This was done by distributing an online survey to several Kean University student communities: various majors, fraternity and sorority groups, sports groups, etc. The data revealed that users primarily used their mobile devices for social media and entertainment purposes. The surveyed individuals indicated that they mainly accessed mobile apps like Facebook, Pinterest, Twitter, and Instagram, to communicate, interact, and share many parts of their daily life with their friends and peers.

Based on the data collected during part one, a different approach and purpose was used for part two, with the goal of understanding how social media activities shape the communication skills of individuals and reflect their attitudes, attention, interests, and activities. Additionally, the research examined how young adult communication needs change through the use of different social media platforms, and whether a pattern can be predicted from users’ behavior on those platforms. Part two of this research was conducted through 30 one-on-one interviews with young adults who are college students. During these interviews, key questions were asked in order to understand whether there is a significant amount of interpersonal interaction between users and their peers. Interpersonal interaction is a communication process that involves the exchange of information, feelings, and meaning by means of verbal or non-verbal messages. For the purposes of this paper, only the data collected during spring 2014 is presented.

4 Data Collection

Through interviews, accurate results on the interaction of young adults with social media were collected. These interviews involved 30 one-on-one conversations with Kean University students. Having one-on-one interviews with participants allowed for individual results and first responses from each participant, without responses being skewed or influenced by other participants, as might occur in group interviews. It also allowed participants to give truthful answers, in contrast to an online or paper survey, where they might have second thoughts about an answer and change it. The one-on-one interviews consisted of ten open-ended questions, which aimed to determine how social media interaction involuntarily influences, positively or negatively, an individual’s attitude, attention, interests, and social/personal activities. The main motive behind the questions was to determine how individual communication skills, formal and informal, have changed through interacting with various social media platforms. The interviews, along with being recorded on paper, were also video- and audio-recorded. The average time for each interview was between two and ten minutes. These interviews were held in quiet labs and during off-times, so that the responses could be given and recorded clearly and without distraction (Fig. 1). A total of 19 females and 11 males participated, with ages ranging from 19 to 28 years old.

Figure 1. Female participant during one-on-one interview

After conducting the interviews and analyzing the data collected, it was determined that the age at which participants, both male and female, first began to use social media ranged from 9 to 17 years. On average, males started at about 12.9 years of age (standard deviation 2.343) and females at about 12.3 years (standard deviation 1.627); in other words, males generally began to use social media around the age of 13, whereas females began around the age of 12.
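The per-participant ages were not published in this excerpt, but the summary statistics reported above are ordinary means and sample standard deviations, as in the sketch below (the ages listed are placeholders, not the study's data).

    # Placeholder ages for the 11 male and 19 female participants.
    import statistics

    male_ages   = [9, 11, 12, 12, 13, 13, 14, 14, 15, 15, 17]
    female_ages = [10, 11, 11, 11, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 14, 14, 14, 15, 12]

    for label, ages in [("male", male_ages), ("female", female_ages)]:
        print(label, round(statistics.mean(ages), 3), round(statistics.stdev(ages), 3))
    # The paper reports means of about 12.9 (SD 2.343) for males and 12.3 (SD 1.627) for females.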

After determining the average age at which participants started using social media, it was necessary to find which social media platform they used first. MySpace was the first social media platform used by twenty-three participants, followed by Facebook with three users and Mi Gente with only one user, while two participants did not use social media at all. It was interesting to find that all of the participants who started on Myspace migrated to Facebook. The reasoning provided was that “everyone [they knew] started to use Facebook.” According to the participants, Facebook was “more interactive” and “extremely easy to use.” The participants also stated that Myspace was becoming suited to a younger user base, and that it got boring because they needed to keep changing their profile backgrounds and modifying their top friends, which caused rifts or “popularity issues” between friends. After finding out which platform they started from, it was also essential to find out which platforms they currently use. One platform that seemed to be used by all participants to keep up-to-date with their friends and acquaintances was Instagram, a picture- and video-based social media platform. Another surprising finding was that many users did not use Pinterest at all, or had not even heard of the platform. After determining which social media platforms the users migrated to, it was essential to identify what caused them to move from one platform to another: what are the merits of a certain platform that caused users to migrate to it, and what are the drawbacks of another platform that caused users to migrate from it or simply not use it at all?

4.1 Social Interaction Changes

For some participants, changes in social interaction had a positive outcome, while others viewed them more negatively. The participants were asked if their social interactions had changed since they were first exposed to social media (Table 1). One participant stated that “it is easier to just look at a social media page to see how friends and family are doing rather than have a one-on-one interaction.” As for people’s attitudes, they would rather comment on or “like” a picture than stop and have a quick conversation. On the other hand, another participant felt that social media helped them when talking about and expressing opinions on topics that they generally would not have discussed in person. Moreover, participants were aware of what they were doing but continued to do it because they felt comfortable and did not desire to have one-on-one interactions with people.

The participants were also asked to explain how social media changed their communication and interactions over their years of using it (Table 2). The data show that participants interact less in person because they relate more via online pictures and status updates. For other participants, it made them more cautious and even afraid of putting any personal information online because it might cause problems or rifts in their lives. On the contrary, some participants stated that their communication and interaction remained the same; however, they were able to see how it had changed for the people around them. One participant stated that “internet/social media is a power tool that allows people to be whatever they want and in a way it creates popularity, but once again they walk around acting like they do not know you and ‘like’ your pictures the next day.”

5 Discussion

The data illustrated in this paper show how much the introduction and usage of social media has impacted the interaction and communication of young adults. A possible future of interaction and communication was also presented: if the current trend among young adults using social media and online communities continues, social, in-person interaction could largely disappear, with all communication and interaction with family and friends taking place online and virtually.

6 Conclusion

Referring back to the question asked in the introduction: how much has social media impacted the way we communicate and interact with each other? After reviewing all the findings, seeing the relationship individuals have with their mobile phones, and comparing social media platforms, it is clear that many young adults have an emotional attachment to their mobile devices and want interaction that is quick and to the point, with minimal “in-person” contact. Many young adults prefer to use their mobile device to send a text message or interact via social media. This is because their comfort level is higher when posting via social media applications than in in-person interaction. To answer the question directly: social media has had both positive and negative effects on the way we communicate and interact with each other. However, how effective is this method of “virtual” communication and interaction in the real world?

References

1. Paul, A.: Your Brain on Fiction. The New York Times, 17 March 2012. http://www.nytimes.com/2012/03/18/opinion/sunday/the-neuroscience-of-your-brain-on-fiction.html?pagewanted=all&_r=0 . Accessed 26 April 2014

2. Burleson, B.R.: The experience and effects of emotional support: what the study of cultural and gender differences can tell us about close relationships, emotion, and interpersonal communication. Pers. Relat. 10, 1–23 (2003)

3. Hinduja, S., Patchin, J.: Personal information of adolescents on the internet: a quantitative content analysis of MySpace. J. Adolesc. 31, 125–146 (2007)

4. Hare, A.L., Mikami, A., Szwedo, Y., Allen, D., Evans, M.: Adolescent peer relationships and behavior problems predict young adults’ communication on social networking websites. Dev. Psychol. 46, 46–56 (2010)

5. Orr, R.R., Simmering, M., Orr, E., Sisic, M., Ross, C.: The influence of shyness on the use of Facebook in an undergraduate sample. CyberPsychol. Behav. 12, 337–340 (2007)

6. Tong, S.T., Van Der Heide, B., Langwell, L., Walther, J.B.: Too much of a good thing? The relationship between number of friends and interpersonal impressions on Facebook. J. Comput. Mediated Commun. 13, 531–549 (2008)


Questionnaire on “The Impact of Social Media Use on the Education of University Students”


Related Papers

Lawrence Ademiluyi

The proliferation of social media platforms and the high volume of social media subscription by members of the academia demand that research be conducted to investigate its influence on academic activities among students and teachers. This study examined the influence of social media on the academic activities of business education students in colleges of education in Osun State. A descriptive survey research design was adopted for the study. Three research questions guided the study. The study was conducted in the two Colleges of Education in Osun State offering the Business Education programme: Osun State College of Education, Ila Orangun and Osun State College of Education, Ilesa. The population comprised 154 NCE III students of the 2018/2019 academic session. No sample was drawn, as a census study was conducted. A structured questionnaire with 38 items was used to collect data from the respondents. Mean rating was used to analyze the data collected in order to answer the research questions. Th...


Universal Journal of Educational Research

Horizon Research Publishing (HRPUB), Kevin Nelson

University students are continually engaged with and exposed to information technology, particularly social media, for various motives. They spend a substantial part of their daily activities on social media due to the availability of mobile devices and accessibility to the internet, which make utilizing social media more convenient. Due to inconclusive empirical evidence on the impact of social media, this study investigates the impact of social media usage intensity on the academic performance of accounting students after considering the students' personal attributes such as proficiency, gender, semester and age differences. Adopting a quantitative approach, a questionnaire survey was distributed to accounting students in Melaka state, and data from 113 accounting students were gathered. The result suggests that social media usage intensity has a negative impact on students' performance. With regard to students' attributes, proficiency, gender and semester have an impact on their performance. However, the findings may be different if the respondents come from rural areas. Thus, future research should address the latest social media platforms in different regions and fields. This study is expected to contribute to the knowledge on social media usage and provide guidance to academics in adopting social media as part of teaching and learning activities in enhancing accounting students' performance.

Joseph Ganyo

Innovative Computing Review

INNOVATIVE COMPUTING REVIEW

The current study aims to find out the effects of social media sites on the academic performance of university students. Social media has globally become a major source of communication between individuals. Social media includes cell phones, Facebook, TikTok, YouTube, Twitter, Myspace, Instagram, Skype, Tumblr, and many other platforms. Researchers all over the world have varied findings on the effect of social media on the academic performance of university students. The current study deployed a survey methodology, collecting questionnaires from respondents regarding the usage of social media and its effects on the academic performance of students. For this purpose, two hundred students of the social sciences department enrolled in the BS program were the sample for the current study. Data were collected using an adapted questionnaire and, for the analysis of data, AMOS software version 24 was applied to develop a model. This model was developed to represent the effect of social media on student academic achievement at the university level. The findings indicated that social networking media significantly affect the academic performance of university students. With this data, it is recommended that the administration regulate the time students spend on social networking media to protect them from the harms of excessive media usage.

International Journal Of Education, Learning & Training (IJELT)

Sandra Mensah

Annals of Spiru Haret University Economic Series

racheal ddungu

Priority-The International Business Review

waqas bin dilshad

This study seeks to determine the preference, extent, and persistence of social media usage in order to formulate an understanding of what classifies as social media usage that has a tendency to affect the academic performance of students in a public sector university. Simple random sampling was used to choose 53 students from different departments of a public sector university in Karachi. A questionnaire titled "Social Media and Academic Performance of Students Questionnaire (SMAAPOS)" by Osharive (2015), consisting of Likert-type (5-point) questions, was adopted to obtain primary data from the sample, while secondary data were acquired from related books, journal articles, surveys, and websites, among other sources. For data processing, Microsoft Office Excel and SPSS were used. The evaluation of responses was done through descriptive analysis of frequencies and percentages. The findings of this research showed that a large proportion of participants utilize various social media platforms for both educational and entertainment purposes, while social media addiction and the distraction it causes are concerns for students. The researcher proposes that social media should be utilized for educational purposes in order to help students improve educational activities and prevent failures in students' academic success, along with decreasing social media addiction to lessen the distractions that students face. This is to establish equilibrium between the trending association with social media and involvement in academic activities among youngsters, for the purpose of minimizing obstacles to academic achievement.

American Scientific Research Journal for Engineering, Technology, and Sciences

Monia Oueder

The use of social media has seen rapid growth over the past few decades. This growth makes it very popular for communication amongst university students, especially Tabuk University students. In fact, these social websites can be a good way to exchange information between students and even with their teachers. However, excessive social media use can affect students' academic performance and call this use into question. This research investigates the benefits and drawbacks of social media use on student academic performance by conducting a survey of university students in Saudi Arabia, specifically at Tabuk University. The survey also explored which social network is the most popular amongst Tabuk University students and which one is useful for their academic skills. The survey received 270 responses, and descriptive statistics show the relationship between the number of hours spent exploring social media sites and academic performance for the...

Global Journal of Management, Social Sciences and Humanities

Dr.Abdul Ghafoor Awan

ABSTRACT - Media is playing a very prominent role in the life of a modern person. A large number of people use different types of social media in order to keep themselves updated and connected with the entire world. Using social media affects their study and academic performance and, ultimately, their results. Some students cannot stay away from social media, and it affects their academic performance badly. The purpose of this research is to find out the impact of using social media on the academic performance of students at graduate level. The population of this study was all male and female graduate (final-year) students of postgraduate colleges of district Vehari. 300 students were selected randomly as the sample of the study. A questionnaire comprising 40 statements based on a 5-point Likert scale was developed for data collection. The respondents were to tick any one of the five given options to show their attitude towards every statement presented to them via the questionnaire. Our results show that the students use social media as a helping tool in their studies, but it also badly affects their studies.

International Journal of Social Science and Humanity Studies

Nasiru Zubairu

Over the last decade, the number of people using social media has grown throughout the world, and it's more important than ever to understand how it affects students' academic achievements. Therefore, the aim of this study is to identify why students use social media platforms as well as to examine the impact on students' performance. In order to meet the study objectives, the data were collected via an online questionnaire distributed in electronic form to a sample of students attending tertiary institutions in Dutse. A mixed-method research design was applied in the present study. The Statistical Package for Social Science (SPSS) was used to analyze the quantitative data, while the content analysis technique was adopted to analyse the qualitative data. According to the findings of the survey, the majority of students at tertiary institutions utilized social media platforms such as Facebook, WhatsApp, and Twitter, among others, and spent unnecessarily long periods of time on them, drawing attention away from their study time. Results also indicate that higher institution students' use of social media platforms has a negative impact. The paper concludes that tertiary institutions should encourage students to use social media for academic research and assignments and that institutions should create ways to encourage students to use social media for academic purposes rather than for other purposes that interfere with their studies.



Social Media and Teen Mental Health: A Complex Mix

There is strong evidence to suggest that teenagers in the United States are collectively in the midst of a mental health crisis, as rates of both depression and suicide have climbed in recent years. Could the popularity of social media among young people be to blame?


Melissa DuPont-Reyes, assistant professor of sociomedical sciences and epidemiology, says the answer may not be as simple as you think. She is leading a new study that takes a holistic perspective, broadening the focus from how the use of TikTok, Instagram, and other social media platforms can harm mental health to include an understanding of how they can be protective, too.

The National Institute of Mental Health-funded longitudinal study is focused on Latinx adolescents, who use social media more than all other racial/ethnic or age groups nationally. Beyond a simple measure of the frequency of social media use, DuPont-Reyes and colleagues will drill down into the diverse content young people encounter, including Spanish-language, Latinx-tailored, and English-language posts on a variety of platforms.

The study will collect data on both protective aspects, like anti-stigma awareness campaigns and symptom support, and negative effects, such as stigmatizing content, hate speech, and cyber-bullying. Researchers will examine how these exposures drive youths’ self-perception, help-seeking, and mental health outcomes, as well as the mediating role played by peers and family members.
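As a hedged illustration of what "examining a mediating role" can look like statistically, the sketch below uses simplified assumptions (continuous measures, a single mediator, ordinary least squares, synthetic data); it does not describe the study's actual models.

    # Baron & Kenny-style mediation check: does the exposure-outcome association
    # shrink once a candidate mediator (e.g., peer support) is controlled for?
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    exposure = rng.normal(size=300)                                      # e.g., hours of tailored content
    mediator = 0.5 * exposure + rng.normal(0, 1, 300)                    # e.g., peer support
    outcome  = 0.4 * mediator + 0.1 * exposure + rng.normal(0, 1, 300)   # e.g., well-being

    total    = sm.OLS(outcome, sm.add_constant(exposure)).fit()
    adjusted = sm.OLS(outcome, sm.add_constant(np.column_stack([exposure, mediator]))).fit()
    print("total effect:", round(total.params[1], 2),
          "| direct effect after adjusting:", round(adjusted.params[1], 2))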

To accomplish her study objective, in part, DuPont-Reyes will utilize validated, culturally appropriate survey assessments she developed as part of a project funded through a Robert Wood Johnson Foundation Pioneering Ideas Award. As part of the new study, young people will have the chance to research the question and have a say in how to address it through a process called Youth Participatory Action Research.

When it comes to social media’s effects on adolescent mental health, DuPont-Reyes hypothesizes that context matters quite a lot. Her preliminary work has shown that for some youth, social media can be a lifeline. For instance, youth who are unaccompanied migrating minors, are LGBTQI+ in nontolerant settings, have a disability such as a speech impediment or even a mental illness, or have experienced police brutality all report that social media can be empowering as a tool to make their voices heard while also lending support and resources.

“I hope that my project demonstrates a more diverse portrait of adolescents in the U.S., and globally, as well as the social media that they encounter, and specifies the contexts in which social media can be beneficial to mental health and the contexts in which it might be harmful,” she says.

DuPont-Reyes says the evidence generated from the project could inform policies that are more equitable, accountable, and transparent—ultimately to create a safer technological landscape for diverse populations to promote mental health on a population level. At the same time, its findings can reach parents, teachers, the tech industry, health care providers, and others with its message that vilifying social media is not the answer.

“I hope my research can inform a more holistic and equitable approach to creating a safer social media environment for youth that doesn’t solely require restricting technology,” she says.


Teens and Social Media: Key Findings From Pew Research Center Surveys


For the latest survey data on social media and tech use among teens, see “ Teens, Social Media, and Technology 2023 .” 

Today’s teens are navigating a digital landscape unlike the one experienced by their predecessors, particularly when it comes to the pervasive presence of social media. In 2022, Pew Research Center fielded an in-depth survey asking American teens – and their parents – about their experiences with and views toward social media . Here are key findings from the survey:

Pew Research Center conducted this study to better understand American teens’ experiences with social media and their parents’ perception of these experiences. For this analysis, we surveyed 1,316 U.S. teens ages 13 to 17, along with one parent from each teen’s household. The survey was conducted online by Ipsos from April 14 to May 4, 2022.

This research was reviewed and approved by an external institutional review board (IRB), Advarra, which is an independent committee of experts that specializes in helping to protect the rights of research participants.

Ipsos invited panelists who were a parent of at least one teen ages 13 to 17 from its KnowledgePanel , a probability-based web panel recruited primarily through national, random sampling of residential addresses, to take this survey. For some of these questions, parents were asked to think about one teen in their household. (If they had multiple teenage children ages 13 to 17 in the household, one was randomly chosen.) This teen was then asked to answer questions as well. The parent portion of the survey is weighted to be representative of U.S. parents of teens ages 13 to 17 by age, gender, race, ethnicity, household income and other categories. The teen portion of the survey is weighted to be representative of U.S. teens ages 13 to 17 who live with parents by age, gender, race, ethnicity, household income and other categories.
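The weighting described above is, at its core, post-stratification: each respondent gets a weight so that the sample's demographic mix matches known population benchmarks. The snippet below sketches that core idea with a single made-up category; Pew's actual procedure uses many categories and iterative raking.

    # Minimal post-stratification sketch with one (made-up) category.
    import pandas as pd

    sample = pd.DataFrame({"gender": ["girl", "girl", "girl", "boy", "boy"]})
    population_share = {"girl": 0.49, "boy": 0.51}        # assumed benchmarks

    sample_share = sample["gender"].value_counts(normalize=True)
    sample["weight"] = sample["gender"].map(lambda g: population_share[g] / sample_share[g])
    print(sample)
    # Weighted statistics (e.g., a weighted mean of a survey item) then use these
    # weights so over-represented groups count less and under-represented groups count more.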

Here are the questions used  for this report, along with responses, and its  methodology .

Majorities of teens report ever using YouTube, TikTok, Instagram and Snapchat. YouTube is the platform most commonly used by teens, with 95% of those ages 13 to 17 saying they have ever used it, according to a Center survey conducted April 14-May 4, 2022, that asked about 10 online platforms. Two-thirds of teens report using TikTok, followed by roughly six-in-ten who say they use Instagram (62%) and Snapchat (59%). Much smaller shares of teens say they have ever used Twitter (23%), Twitch (20%), WhatsApp (17%), Reddit (14%) and Tumblr (5%).

A chart showing that since 2014-15 TikTok has started to rise, Facebook usage has dropped, Instagram and Snapchat have grown.

Facebook use among teens dropped from 71% in 2014-15 to 32% in 2022. Twitter and Tumblr also experienced declines in teen users during that span, but Instagram and Snapchat saw notable increases.

TikTok use is more common among Black teens and among teen girls. For example, roughly eight-in-ten Black teens (81%) say they use TikTok, compared with 71% of Hispanic teens and 62% of White teens. And Hispanic teens (29%) are more likely than Black (19%) or White teens (10%) to report using WhatsApp. (There were not enough Asian teens in the sample to analyze separately.)

Teens’ use of certain social media platforms also varies by gender. Teen girls are more likely than teen boys to report using TikTok (73% vs. 60%), Instagram (69% vs. 55%) and Snapchat (64% vs. 54%). Boys are more likely than girls to report using YouTube (97% vs. 92%), Twitch (26% vs. 13%) and Reddit (20% vs. 8%).

A chart showing that teen girls are more likely than boys to use TikTok, Instagram and Snapchat. Teen boys are more likely to use Twitch, Reddit and YouTube. Black teens are especially drawn to TikTok compared with other groups.

Majorities of teens use YouTube and TikTok every day, and some report using these sites almost constantly. About three-quarters of teens (77%) say they use YouTube daily, while a smaller majority of teens (58%) say the same about TikTok. About half of teens use Instagram (50%) or Snapchat (51%) at least once a day, while 19% report daily use of Facebook.

A chart that shows roughly one-in-five teens are almost constantly on YouTube, and 2% say the same for Facebook.

Some teens report using these platforms almost constantly. For example, 19% say they use YouTube almost constantly, while 16% and 15% say the same about TikTok and Snapchat, respectively.

More than half of teens say it would be difficult for them to give up social media. About a third of teens (36%) say they spend too much time on social media, while 55% say they spend about the right amount of time there and just 8% say they spend too little time. Girls are more likely than boys to say they spend too much time on social media (41% vs. 31%).

A chart that shows 54% of teens say it would be hard to give up social media.

Teens are relatively divided over whether it would be hard or easy for them to give up social media. Some 54% say it would be very or somewhat hard, while 46% say it would be very or somewhat easy.

Girls are more likely than boys to say it would be difficult for them to give up social media (58% vs. 49%). Older teens are also more likely than younger teens to say this: 58% of those ages 15 to 17 say it would be very or somewhat hard to give up social media, compared with 48% of those ages 13 to 14.

Teens are more likely to say social media has had a negative effect on others than on themselves. Some 32% say social media has had a mostly negative effect on people their age, while 9% say this about social media’s effect on themselves.

A chart showing that more teens say social media has had a negative effect on people their age than on them, personally.

Conversely, teens are more likely to say these platforms have had a mostly positive impact on their own life than on those of their peers. About a third of teens (32%) say social media has had a mostly positive effect on them personally, while roughly a quarter (24%) say it has been positive for other people their age.

Still, the largest shares of teens say social media has had neither a positive nor negative effect on themselves (59%) or on other teens (45%). These patterns are consistent across demographic groups.

Teens are more likely to report positive than negative experiences in their social media use. Majorities of teens report experiencing each of the four positive experiences asked about: feeling more connected to what is going on in their friends’ lives (80%), like they have a place where they can show their creative side (71%), like they have people who can support them through tough times (67%), and that they are more accepted (58%).

A chart that shows teen girls are more likely than teen boys to say social media makes them feel more supported but also overwhelmed by drama and excluded by their friends.

When it comes to negative experiences, 38% of teens say that what they see on social media makes them feel overwhelmed because of all the drama. Roughly three-in-ten say it makes them feel like their friends are leaving them out of things (31%) or feel pressure to post content that will get lots of comments or likes (29%). And 23% say that what they see on social media makes them feel worse about their own life.

There are several gender differences in the experiences teens report having while on social media. Teen girls are more likely than teen boys to say that what they see on social media makes them feel a lot like they have a place to express their creativity or like they have people who can support them. However, girls also report encountering some of the pressures at higher rates than boys. Some 45% of girls say they feel overwhelmed because of all the drama on social media, compared with 32% of boys. Girls are also more likely than boys to say social media has made them feel like their friends are leaving them out of things (37% vs. 24%) or feel worse about their own life (28% vs. 18%).

When it comes to abuse on social media platforms, many teens think criminal charges or permanent bans would help a lot. Half of teens think criminal charges or permanent bans for users who bully or harass others on social media would help a lot to reduce harassment and bullying on these platforms. 

A chart showing that half of teens think banning users who bully or criminal charges against them would help a lot in reducing the cyberbullying teens may face on social media.

About four-in-ten teens say it would help a lot if social media companies proactively deleted abusive posts or required social media users to use their real names and pictures. Three-in-ten teens say it would help a lot if school districts monitored students’ social media activity for bullying or harassment.

Some teens – especially older girls – avoid posting certain things on social media because of fear of embarrassment or other reasons. Roughly four-in-ten teens say they often or sometimes decide not to post something on social media because they worry people might use it to embarrass them (40%) or because it does not align with how they like to represent themselves on these platforms (38%). A third of teens say they avoid posting certain things out of concern for offending others by what they say, while 27% say they avoid posting things because it could hurt their chances when applying for schools or jobs.

A chart that shows older teen girls are more likely than younger girls or boys to say they don't post things on social media because they're worried it could be used to embarrass them.

These concerns are more prevalent among older teen girls. For example, roughly half of girls ages 15 to 17 say they often or sometimes decide not to post something on social media because they worry people might use it to embarrass them (50%) or because it doesn’t fit with how they’d like to represent themselves on these sites (51%), compared with smaller shares among younger girls and among boys overall.

Many teens do not feel like they are in the driver’s seat when it comes to controlling what information social media companies collect about them. Six-in-ten teens say they think they have little (40%) or no control (20%) over the personal information that social media companies collect about them. Another 26% aren’t sure how much control they have. Just 14% of teens think they have a lot of control.

Two charts that show a majority of teens feel as if they have little to no control over their data being collected by social media companies, but only one-in-five are extremely or very concerned about the amount of information these sites have about them.

Despite many feeling a lack of control, teens are largely unconcerned about companies collecting their information. Only 8% are extremely concerned about the amount of personal information that social media companies might have and 13% are very concerned. Still, 44% of teens say they have little or no concern about how much these companies might know about them.

Only around one-in-five teens think their parents are highly worried about their use of social media. Some 22% of teens think their parents are extremely or very worried about them using social media. But a larger share of teens (41%) think their parents are either not at all (16%) or a little worried (25%) about them using social media. About a quarter of teens (27%) fall more in the middle, saying they think their parents are somewhat worried.

A chart showing that only a minority of teens say their parents are extremely or very worried about their social media use.

Many teens also believe there is a disconnect between parental perceptions of social media and teens’ lived realities. Some 39% of teens say their experiences on social media are better than parents think, and 27% say their experiences are worse. A third of teens say parents’ views are about right.

Nearly half of parents with teens (46%) are highly worried that their child could be exposed to explicit content on social media. Parents of teens are more likely to be extremely or very concerned about this than about social media causing mental health issues like anxiety, depression or lower self-esteem. Some parents also fret about time management problems for their teen stemming from social media use, such as wasting time on these sites (42%) and being distracted from completing homework (38%).

A chart that shows parents are more likely to be concerned about their teens seeing explicit content on social media than these sites leading to anxiety, depression or lower self-esteem.

Note: Here are the questions used  for this report, along with responses, and its  methodology .

CORRECTION (May 17, 2023): In a previous version of this post, the percentages of teens using Instagram and Snapchat daily were transposed in the text. The original chart was correct. This change does not substantively affect the analysis.


Study Tracks Shifts in Student Mental Health During College

Dartmouth study followed 200 students all four years, including through the pandemic.


A four-year study by Dartmouth researchers captures the most in-depth data yet on how college students’ self-esteem and mental health fluctuate during their four years in academia, identifying key populations and stressors that the researchers say administrators could target to improve student well-being. 

The study also provides among the first real-time accounts of how the coronavirus pandemic affected students’ behavior and mental health. The stress and uncertainty of COVID-19 resulted in long-lasting behavioral changes that persisted as a “new normal” even as the pandemic diminished, including students feeling more stressed, less socially engaged, and sleeping more.

The researchers tracked more than 200 Dartmouth undergraduates in the classes of 2021 and 2022 for all four years of college. Students volunteered to let a specially developed app called StudentLife tap into the sensors that are built into smartphones. The app cataloged their daily physical and social activity, how long they slept, their location and travel, the time they spent on their phone, and how often they listened to music or watched videos. Students also filled out weekly behavioral surveys, and selected students gave post-study interviews. 
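To make the kind of data the app gathers more concrete, here is a hedged sketch of what one day of passively sensed measurements per student might look like; the field names are illustrative, not the StudentLife app's actual schema.

    # Illustrative daily record of passively sensed features for one student.
    from dataclasses import dataclass

    @dataclass
    class DailySensorRecord:
        student_id: str
        date: str                     # ISO date
        steps: int                    # physical activity
        conversation_minutes: float   # proxy for social activity
        sleep_hours: float
        distance_traveled_km: float
        phone_unlocked_minutes: float
        media_minutes: float          # music / video playback

    record = DailySensorRecord("s001", "2021-04-12", 6400, 95.0, 7.2, 3.1, 210.0, 80.0)
    print(record)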

The study—which is the longest mobile-sensing study ever conducted—is published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies .

The researchers will present it at the Association for Computing Machinery’s UbiComp/ISWC 2024 conference in Melbourne, Australia, in October. 


The team made their anonymized data set publicly available —including self-reports, surveys, and phone-sensing and brain-imaging data—to help advance research into the mental health of students during their college years. 

Andrew Campbell , the paper’s senior author and Dartmouth’s Albert Bradley 1915 Third Century Professor of Computer Science, says that the study’s extensive data reinforces the importance of college and university administrators across the country being more attuned to how and when students’ mental well-being changes during the school year.

“For the first time, we’ve produced granular data about the ebb and flow of student mental health. It’s incredibly dynamic—there’s nothing that’s steady state through the term, let alone through the year,” he says. “These sorts of tools will have a tremendous impact on projecting forward and developing much more data-driven ways to intervene and respond exactly when students need it most.”

First-year and female students are especially at risk for high anxiety and low self-esteem, the study finds. Among first-year students, self-esteem dropped to its lowest point in the first weeks of their transition from high school to college but rose steadily every semester until it was about 10% higher by graduation.

“We can see that students came out of high school with a certain level of self-esteem that dropped off to the lowest point of the four years. Some said they started to experience ‘imposter syndrome’ from being around other high-performing students,” Campbell says. “As the years progress, though, we can draw a straight line from low to high as their self-esteem improves. I think we would see a similar trend class over class. To me, that’s a very positive thing.”

Female students—who made up 60% of study participants—experienced on average 5% greater stress levels and 10% lower self-esteem than male students. More significantly, the data show that female students tended to be less active, with male students walking 37% more often.

Sophomores were 40% more socially active compared to their first year, the researchers report. But these students also reported feeling 13% more stressed during their second year than during their first year as their workload increased, they felt pressure to socialize, or as first-year social groups dispersed.

One student in a sorority recalled that having pre-arranged activities “kind of adds stress as I feel like I should be having fun because everyone tells me that it is fun.” Another student noted that after the first year, “students have more access to the whole campus and that is when you start feeling excluded from things.” 

In a novel finding, the researchers identify an “anticipatory stress spike” of 17% experienced in the last two weeks of summer break. While still lower than mid-academic year stress, the spike was consistent across different summers.

In post-study interviews, some students pointed to returning to campus early for team sports as a source of stress. Others specified reconnecting with family and high school friends during their first summer home, saying they felt “a sense of leaving behind the comfort and familiarity of these long-standing friendships” as the break ended, the researchers report. 

“This is a foundational study,” says Subigya Nepal , first author of the study and a PhD candidate in Campbell’s research group. “It has more real-time granular data than anything we or anyone else has provided before. We don’t know yet how it will translate to campuses nationwide, but it can be a template for getting the conversation going.”

The depth and accuracy of the study data suggest that mobile-sensing software could eventually give universities the ability to create proactive mental-health policies specific to certain student populations and times of year, Campbell says.

For example, a paper Campbell’s research group published in 2022 based on StudentLife data showed that first-generation students experienced lower self-esteem and higher levels of depression than other students throughout their four years of college.

“We will be able to look at campus in much more nuanced ways than waiting for the results of an annual mental health study and then developing policy,” Campbell says. “We know that Dartmouth is a small and very tight-knit campus community. But if we applied these same methods to a college with similar attributes, I believe we would find very similar trends.”

Weathering the pandemic

When students returned home at the start of the coronavirus pandemic, the researchers found that self-esteem actually increased during the pandemic by 5% overall and by another 6% afterward when life returned closer to what it was before. One student suggested in their interview that getting older came with more confidence. Others indicated that being home led to them spending more time with friends talking on the phone, on social media, or streaming movies together. 

The data show that phone usage—measured by the duration a phone was unlocked—indeed increased by nearly 33 minutes, or 19%, during the pandemic, while time spent in physical activity dropped by 52 minutes, or 27%. By 2022, phone usage fell from its pandemic peak to just above pre-pandemic levels, while engagement in physical activity had recovered to exceed the pre-pandemic period by three minutes. 
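As a back-of-the-envelope check on those figures, the reported absolute and percentage changes imply rough pre-pandemic baselines; these baselines are inferred here, not reported in the study.

    # If +33 minutes of phone use is a 19% rise, the baseline was about 33 / 0.19 ≈ 174 min/day.
    # If -52 minutes of physical activity is a 27% drop, the baseline was about 52 / 0.27 ≈ 193 min/day.
    phone_baseline = 33 / 0.19
    activity_baseline = 52 / 0.27
    print(round(phone_baseline), round(activity_baseline))   # 174 193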

Despite reporting higher self-esteem, students’ feelings of stress increased by more than 10% during the pandemic. By the end of the study in June 2022, stress had fallen by less than 2% of its pandemic peak, indicating that the experience had a lasting impact on student well-being, the researchers report. 

In early 2021, as students returned to campus, their reunion with friends and community was tempered by an overwhelming concern about the still-rampant coronavirus. “There was the first outbreak in winter 2021 and that was terrifying,” one student recalls. Another student adds: “You could be put into isolation for a long time even if you did not have COVID. Everyone was afraid to contact-trace anyone else in case they got mad at each other.”

Female students were especially concerned about the coronavirus, on average 13% more than male students. “Even though the girls might have been hanging out with each other more, they are more aware of the impact,” one female student reported. “I actually had COVID and exposed some friends of mine. All the girls that I told tested as they were worried. They were continually checking up to make sure that they did not have it and take it home to their family.”

Students still learning remotely had social levels 16% higher than students on campus, who engaged in activity an average of 10% less often than when they were learning from home. However, on-campus students used their phones 47% more often. When interviewed after the study, these students reported spending extended periods of time video-calling or streaming movies with friends and family.

Social activity and engagement had not yet returned to pre-pandemic levels by the end of the study in June 2022, recovering by a little less than 3% after a nearly 10% drop during the pandemic. Similarly, the pandemic correlates with students sticking closer to home, with their distance traveled nearly cut in half during the pandemic and holding at that level since then.

Campbell and several of his fellow researchers are now developing a smartphone app known as MoodCapture that uses artificial intelligence paired with facial-image processing software to reliably detect the onset of depression before the user even knows something is wrong.



A newsletter briefing on the intersection of technology and politics.

Congress gives research into kids and social media a cash infusion


Welcome to The Technology 202. A special thanks to Rep. Jamie Raskin (D-Md.) for taking the time to speak with me for today’s newsletter while dealing with the tragic bridge collapse in his home state. Today:

Congress gives research into children and social media a cash infusion

Researchers scrutinizing how social media impacts children’s health recently got a key assist as lawmakers tucked fresh funding for the cause into their sprawling spending legislation. 

Federal appropriators this year re-upped $15 million in funding for a program directing the National Institutes of Health and Department of Health and Human Services to lead studies examining technology’s impact on children’s development and mental health. With Congress initially allocating $15 million last year, total investment is now up to $30 million.

The initiative, first proposed by Sen. Edward J. Markey (D-Mass.) and Rep. Jamie Raskin (D-Md.), is one of the federal government’s most significant attempts yet to map out how much digital platforms are contributing to issues like depression, anxiety and drug abuse among youth.

The funding will provide “a critical window into Big Tech’s impact on the nation’s young people,” including topics like how exposure to racist posts may harm minority youths and how screen time could affect sleep, Markey said in an interview Wednesday. 

“You can’t manage what you don’t measure,” he said.

The 2023 appropriations helped NIH fund 26 grants looking into how tech impacts children, totaling $15.1 million, according to a fact sheet shared by Markey’s office. 

That included new research into “the effects of screen light exposure and stimulating media content on sleep regulation,” the impact of “race-related stereotypic content” on racial minorities, and whether “social media experiences promote or diminish adolescents’ mental well-being.”

Next week, the agency is planning a meeting “to discuss the current state of and future directions for research on the positive and negative effects” of tech and digital media, which could serve as a launching point for additional projects under the program.

“It’s a tiny sum of money in the scope of the federal budget, but it will be used to investigate a matter that is of the utmost concern to families across America,” Raskin told me Wednesday.

But some researchers said that the federal government is still investing far too little and that it could be years before the time-intensive research fully bears fruit. 

Mitch Prinstein, chief science officer at the American Psychological Association, said that while he was “grateful” for lawmakers’ attention to the issue, another $15 million “is really just scratching the surface of what’s necessary.”

 Major studies looking into children’s mental health can cost millions, and long-term research into areas like development could take half a decade or more to complete, noted Prinstein, who also serves as a psychology professor at the University of North Carolina. 

Fully grasping “how to best help children” could take “at least 10, if not 50, times more in funding” from the federal government, he argued.

There are also questions about the longevity of the program, which requires lawmakers to appropriate new funds annually to keep it afloat. 

Markey said he was “very hopeful” that their initiative would be “a foundation that’s going to lead to substantial additional funding for the study of this young person mental health crisis.”

“We will have to figure out some way to have a continual source of research and interpretation on the question of children’s health and social media,” Raskin said.

The appropriations package extended the previous year’s funding for the program, although the dollar amount is not explicitly linked in the bill text, according to a Markey aide who spoke on the condition of anonymity because they were not authorized to publicly speak on the matter.

The push to probe potential links between youth mental health and social media comes as lawmakers forge ahead with sweeping new proposals, from expanding guardrails for children online to restricting their access to platforms altogether. 

Many of those efforts face opposition from industry and digital rights groups, who argue they threaten to shut young people off from positive online resources and experiences, particularly marginalized youth. 

While lawmakers are calling for additional research in the area, “we know enough already to put an end to Big Tech’s invasive data practices, especially those involving children,” said Markey, who has spearheaded efforts to expand children’s privacy protections at the federal level.

“We need all the information we can get to inform public policy,” Raskin said. 

Government scanner

Oregon’s governor signs right-to-repair law that bans ‘parts pairing’ (The Verge)

Israel deploys expansive facial recognition program in Gaza (New York Times)

AI is making financial fraud easier and more sophisticated, Treasury warns (Bloomberg News)

Hill happenings

AI leaders press advantage with Congress as China tensions rise (New York Times)

Inside the industry

Amazon loses court fight to suspend E.U. tech rules’ ad clause (Reuters)

Competition watch

Amazon spends $2.75 billion on AI start-up Anthropic in its largest venture investment yet (CNBC)

Privacy monitor

Extremists in U.S. are increasingly doxxing executives, officials (Bloomberg News)

Workforce report

Apple turns to longtime Steve Jobs disciple to defend its ‘walled garden’ (Wall Street Journal)

Princess Catherine cancer video spawns fresh round of AI conspiracies (By Tatum Hunter)

  • A quick note: Tuesday’s newsletter was updated to clarify that the U.S. Marshals Service was enlisted for some, not all, of the executives under subpoena during the Senate hearing in January on child online safety.
  • Columbia University hosts an event , “AI’s Impact on the 2024 Global Elections,” today at 1:30 p.m.
  • AEI hosts an event , “Connecting America: Getting Taxpayers Their Money’s Worth in Broadband Expansion,” today at 2 p.m.



Florida’s New Law Restricting Social Media Use Will Be Tough To Enforce

Mike Proulx, VP, Research Director

Yesterday, the governor of Florida signed a bill that will ban social media accounts for kids under the age of 14 and require parental consent for 14- and 15-year-olds. Under the new law, social media companies can be found liable for up to $50,000 per incident. Barring the outcomes of expected legal challenges, the law will take effect next year on January 1.

Forrester’s March 2024 Consumer Pulse Survey explored consumer sentiment around the potential of this law — finding general and widespread agreement among the 528 US online adult respondents.

There’s Bipartisan Legislative And Consumer Support For This Law

Half (50%) of US online adults indicated they would support Florida’s bill that would ban the use of social media platforms for kids under 16. (Note: The bill as signed into law bans the use of social media for kids 13 and under.) Cut across party lines, the data shows atypical bipartisanship, with agreement by 43% of Democrats and 57% of Republicans.

Most surprising is the breakdown by generation. Just over half (51%) of US Gen Z online adults support the (original, more extreme) bill — the exact same percentage as Baby Boomers. At 54%, US online Millennials indicated the most support, with Gen X (many of whom have teens) the least supportive at 42%.

Who Should Regulate Kids’ Social Media Use: Parents Or The Government?

Overwhelmingly — at 77% — US online adults agreed that parents, not the government, are responsible for monitoring their children’s social media use. But does that mean that the government shouldn’t regulate it? Not necessarily.

Sixty-seven percent of US online adults would support a law that requires parental consent for minors to create new social media accounts — which Florida’s HB3 bill requires of 14- and 15-year-olds. Following a more typical pattern, the younger the generation, the less support there is for this, with 59% of US Gen Z online adults indicating their support (yet still the majority).

People Believe Social Media Is Harmful When Overused

The majority of US online adults across every generation and political party believe that, for teenagers, social media does more harm than good. And other than Baby Boomers, most agree that social media is just another thing that’s OK as long as it’s used in moderation. So what’s the solution to ensuring moderate use of social media? It has always come down to a combination of efforts involving government regulation, platform tools, parental education, and kids’ awareness. But as my colleague Kelsey Chickering posted about in January, Alcohol Use Is “Age-Gated” — Why Isn’t Social Media?

The Biggest Question: How Will State Laws Be Enforced?

Yes, governments (like the Florida legislature) can pass laws to restrict social media usage, but can those laws actually be enforced in ways that materially make a difference? There remain lots of unanswered questions about how Florida’s HB3 will be enforced if and when it goes into effect next year. To call out just a few:

  • How will social media companies get credible parental consent for 14- and 15-year-olds looking to join (or remain on) social media?
  • What will be the blowback from teens under 14 when their accounts get deleted?
  • To what degree will AI play a leading role in preventing those under 14 from signing up?
  • What are the implications of potentially increased facial recognition of teens?
  • What happens when different states have varied laws?

So What Do Teens Think?

While everyone else weighs in on regulating teen social media use, what about the teens themselves? Forrester’s Youth Survey, 2023, finds that only 35% of US online teens ages 12–17 agree that there should be rules or limits on their social media usage.

Forrester clients: Let’s chat more about this via a Forrester guidance session.



