Growing concern surrounds the impact of social media platforms on public discourse [1–4] and their influence on social dynamics [5–9], especially in the context of toxicity [10–12]. Here, to better understand these phenomena, we use a comparative approach to isolate human behavioural patterns across multiple social media platforms. In particular, we analyse conversations in different online communities, focusing on identifying consistent patterns of toxic content. Drawing from an extensive dataset that spans eight platforms over 34 years—from Usenet to contemporary social media—our findings show consistent conversation patterns and user behaviour, irrespective of the platform, topic or time. Notably, although long conversations consistently exhibit higher toxicity, toxic language does not invariably discourage people from participating in a conversation, and toxicity does not necessarily escalate as discussions evolve. Our analysis suggests that debates and contras
As a worldwide epidemic in the digital age, cyberbullying is a pertinent but understudied concern, especially from the perspective of language. Elucidating the linguistic features of cyberbullying is critical both to preventing it and to cultivating ethical and responsible digital citizens. In this study, a mixed-method approach integrating lexical feature analysis, sentiment polarity analysis, and semantic network analysis was adopted to develop a deeper understanding of cyberbullying language. Five cyberbullying cases on Chinese social media were analyzed to uncover explicit and implicit linguistic features. Results indicated that cyberbullying comments had significantly different linguistic profiles from non-bullying comments and that explicit and implicit bullying were distinct. The content of the cases further suggested that cyberbullying language varied in its use of words, types of cyberbullying, and sentiment polarity. These findings offer useful insight for designing automatic cybe
Who Gets a Say in Our Dystopian Tech Future?
A.I. research scientist Timnit Gebru raised red flags about Google’s most exciting new tech. She says she was forced out for it.
[Photo: Timnit Gebru speaking at an industry event. Kimberly White/Getty Images]
Last Wednesday, Timnit Gebru, a staff research scientist and co-lead of the Ethical Artificial Intelligence team at Google, said that she had been ousted from the company over a research paper she co-authored and submitted for review. According to Gebru’s account, a manager had asked her to retract the paper or remove her name from it; the paper raised ethical concerns about A.I. language models of the sort used in Google’s sprawling search engine. Gebru, one of the most prominent ethicists in Silicon Valley and one of Google’s few Black women in a leadership position, responded by asking for transparency about the review, saying she would remove her name from the paper if the company gave her a fuller understanding of the process.