The list of ways Twitter could be better is long. Many users think the platform should trash its unwelcome subscription models. Others call out CEO Elon Musk’s tanking of accessibility tools for profit. And, apart from the vocal few who see it as a form of free speech, many think the proliferation of hate and disinformation needs to be addressed stat.
It might make sense, then, to build these concerns into the launch of what could be Twitter’s most successful rival. But the first week of Meta’s new, text-based community forum Threads suggests that hasn’t been done sufficiently, according to advocates and civil rights groups.
In addition to the absence of accessibility and other features in its launch, the new social platform is already home to the same kinds of hate speech and extremist accounts that have soured Twitter’s reputation, with no visible Threads-specific conduct or community policies outlining how the platform will address the problem, advocates warn.
In a letter released by 24 civil rights, digital justice, and pro-democracy organizations — including nonprofit watchdog group Media Matters for America, the Center for Countering Digital Hate, and GLAAD — the groups criticize the platform’s parent company for taking a step backward on creating a safer digital environment for users:
Rather than strengthen your policies, Threads has taken actions doing the opposite, by purposefully not extending Instagram’s fact-checking program to the platform and capitulating to bad actors, and by removing a policy to warn users when they are attempting to follow a serial misinformer. Without clear guardrails against future incitement of violence, it is unclear if Meta is prepared to protect users from high-profile purveyors of election disinformation who violate the platform’s written policies. To date, the platform remains without even the most basic tools for researchers to be able to analyze activity on Threads. Finally, Meta rolled out Threads at the same time that you have been laying off content moderators and civic engagement teams meant to curb the spread of disinformation on the platform.
Prior to the July 5 Threads launch, Meta reportedly fired members of a mis- and disinformation team hired to combat election misinformation, part of a larger group tasked with countering disinformation campaigns online.
The letter also noted “neo-Nazi rhetoric, election lies, COVID and climate change denialism, and more toxicity” on the new platform, including accounts posting “bigoted slurs, election denial, COVID-19 conspiracies, targeted harassment of and denial of trans individuals’ existence, misogyny, and more.” According to a July report from the Anti-Defamation League (ADL), Facebook, Meta’s flagship platform, was the platform where users most frequently reported experiencing hate and harassment. In addition, Instagram and Facebook both received failing grades in GLAAD’s 2023 Social Media Safety Index, while Twitter was named least safe.
In response to “concerning initial observations” within days of Threads’ launch, the ADL is monitoring the platform’s policies on hate speech, protection, and privacy. The organization pointed to Threads’ blocked-accounts policy, which automatically blocks users on Threads who were previously blocked on Instagram, as a positive, user-forward move by the tech giant.
However, the organization also highlighted instances of Threads allegedly exposing vulnerable targets to hate and harassment, including displaying personal information like hidden legal names, that could pose future problems for at-risk users.
At Threads’ launch, social media accounts known for routinely spreading misinformation were reportedly preemptively flagged by the platform, with many right-wing figures sharing their dissatisfaction with the site’s policy of warning fellow users about those accounts’ histories. The warnings appeared to be removed not long after, with Mashable unable to replicate the profile flags. Instagram’s Community Guidelines currently read, “In some cases, we allow content for public awareness which would otherwise go against our Community Guidelines — if it is newsworthy and in the public interest. We do this only after weighing the public interest value against the risk of harm and we look to international human rights standards to make these judgments.”
As of this story’s publication, Threads has yet to publish its own on-site community guidelines or conduct policy, writing in its launch announcement that the platform would “enforce Instagram’s Community Guidelines on content and interactions in the app.” Threads’ Terms of Use can be found in Instagram’s Help Center and state, “When using the Threads Service, all content that you upload or share must comply with the Instagram Community Guidelines as the service is part of Instagram.” The Instagram Community Guidelines, in turn, link to Facebook’s Community Standards on hate speech. Currently, when trying to report abuse or spam on Threads, the platform redirects users to the Instagram Help page for “How do I report a post or profile on Instagram?”
In response to Mashable’s request for comment, and in a statement to Media Matters for America, a Meta spokesperson said: “Our industry leading integrity enforcement tools and human review are wired into Threads. Like all of our apps, hate speech policies apply. Additionally, we match misinformation ratings from independent fact checkers to content across our other apps, including Threads. We are considering additional ways to address misinformation in future updates.”
The advocates’ letter also includes three urgent recommendations for Threads:
Implement strong policies unique to Threads that meet the needs of a rapidly growing text-based platform, including strong policies against hate speech to protect marginalized communities.
Prioritize safety and equity by taking a proactive, human-centered approach to preventing machine learning bias and other AI malfeasance.
Implement governance and leadership practices to engage regularly with civil society, including transparent and accessible data and methods for researchers to analyze Threads’ business models, content and moderation practices.
“For the safety of brands and users, Threads must implement guardrails that stem extremism, hate, and anti-democratic lies,” the letter reads. “Doing so isn’t just good for people: it’s good for business.”