
TikTok, Meta: Views Over Well-being

Thursday, March 19, 2026 | 6:59 AM WIB | Last Updated 2026-03-19T00:01:04Z

Tech Giants Accused of Prioritising Engagement Over User Safety

Whistleblowers and former employees from both TikTok and Meta have come forward with serious allegations, suggesting that the social media giants have knowingly allowed harmful content to proliferate on their platforms in a relentless pursuit of user engagement and profit. These revelations, brought to light by internal documents and testimonies, paint a concerning picture of how algorithmic design and corporate priorities can inadvertently, or perhaps deliberately, expose users to risks ranging from violence and sexual blackmail to terrorist recruitment.

The claims emerge in the wake of research indicating that these tech giants may have designed algorithms that thrive on "outrage," a phenomenon that fuels increased views and interactions. A new documentary, "Inside the Rage Machine," delves into this issue, exploring how the industry has, in some instances, promoted harmful content to boost its visibility.

Allegations of Prioritising Political Content Over Child Safety on TikTok

One former member of TikTok's trust and safety team, who goes by the pseudonym Nick*, has shared alarming internal documentation and spoken extensively about his experiences. He claims that the volume of material related to terrorism, sexual violence, physical violence, abuse, and trafficking has been on the rise. More disturbingly, Nick asserts that the actions taken against such content are often at odds with the company's public pronouncements.

Evidence presented to the BBC suggests that TikTok has, at times, assigned trivial cases involving politicians higher priority for review by its safety team than cases involving significant harm to teenagers. In one particularly striking example, a political figure who had been mocked by being likened to a chicken was assigned a higher priority than a 17-year-old in France who had reported being a victim of illegal cyberbullying and impersonation.

Another instance highlighted involved a 16-year-old in Iraq who reported that sexualised images purporting to be of her were being shared on the app. Nick recounted the lack of urgency assigned to this case, stating, "If you look at the country where this report comes from, it’s a very high risk because it’s a minor and it involves sexual blackmail and then you can see the priority here. The urgency is not high at all."

When members of the trust and safety team sought to prioritise cases involving young people over these political matters, Nick claims they were instructed not to, and to continue processing cases according to the pre-assigned rankings. He interprets this as a sign that the company was not adequately prioritising children's safety, instead favouring relationships with politicians and governments.

Nick's advice to parents whose children use TikTok is stark: "Delete it, keep them as far away as possible from the app for as long as possible."

A TikTok spokesperson, however, strongly refuted these claims, stating that they "ignore the reality of how TikTok enables millions to discover new interests, find community, and supports a thriving creator economy in the UK." The company highlighted its investment in technology designed to prevent harmful content from being viewed and maintained that teen accounts are equipped with over 50 preset safety features and settings, which are automatically enabled.

Meta's Instagram Reels Under Scrutiny for Harmful Content

At Meta, a former senior engineer, Tim*, revealed that the company allegedly instructed him to allow more "borderline harmful content" – such as misogyny and conspiracy theories – onto user feeds in an effort to compete with TikTok. He believes this decision was driven by a desire to boost engagement metrics.

A senior Meta researcher, Matt Motyl, also shared internal research with the BBC that suggests Instagram Reels, launched as Meta's competitor to TikTok in 2020, was introduced without adequate safeguards. This research indicated a significantly higher prevalence of bullying, harassment, hate speech, violence, and incitement in comments on Reels compared to other areas of Instagram.

Motyl provided dozens of research documents from Meta that appeared to demonstrate the company's awareness of issues with its algorithms. The research suggested that the algorithm was guiding users down a "path that maximises profits at the expense of their audience’s wellbeing."

One research paper shared with the BBC indicated that Meta struggled to mitigate harm on Reels after its launch. The findings suggested that Reels posts exhibited a higher prevalence of harmful comments than posts on the main Instagram feed, with bullying and harassment being 75% higher, hate speech 19% higher, and violence and incitement 7% higher.

Tim further elaborated that as views on Reels began to decline, the company became more desperate to regain ground. "You’re losing to TikTok, and therefore your stock price must suffer. People started becoming paranoid and reactive, and they were like, let’s just do whatever we can to catch up. Where can we get like 2%, 3% revenue for the next quarter?" he stated.

He indicated that the decision to relax restrictions on content that was potentially harmful but not illegal – yet was still engaging users – was made by a senior vice-president at Meta, who Tim believed reported directly to Mark Zuckerberg.

A Meta spokesperson vehemently denied the whistleblowers' allegations. "The truth is, we have strict policies to protect users on our platforms and have made significant investments in safety and security over the last decade," the spokesperson stated. The company also pointed to its efforts to protect teens online, including the introduction of a new Teen Accounts feature with built-in protections and parental management tools.
