By Tom Gerken
Technology reporter
The EU has warned TikTok about “disinformation” spreading on the platform after Hamas’ attack on Israel.
It urged CEO Shou Zi Chew in a letter to “urgently step up” efforts, and spell out “within the next 24 hours” how it is complying with European law.
Social media firms have seen a surge in misinformation about the conflict, including doctored images and mislabelled videos.
The EU previously warned X, formerly Twitter, and Meta about such content.
It said TikTok needed to be mindful of its popularity with young people.
“TikTok has a particular obligation to protect children & teenagers from violent content and terrorist propaganda as well as death challenges & potentially life-threatening content,” said EU commissioner Thierry Breton in a post on X.
The BBC has approached TikTok for comment.
X was given a similar 24-hour deadline on Tuesday. The firm’s chief executive Linda Yaccarino responded by telling the bloc it had removed or flagged “tens of thousands of pieces of content” since Hamas attacked Israel.
She also said it had removed hundreds of accounts.
Meta, which owns Facebook and Instagram, has also been handed a similar warning about disinformation, along with a 24-hour deadline, by the EU.
The EU declined to comment on whether it had received a response from Meta, but a European Commission spokesperson said “contacts are ongoing” with the company’s compliance teams.
A Meta spokesperson told the BBC: “After the terrorist attacks by Hamas on Israel on Saturday, we quickly established a special operations centre staffed with experts, including fluent Hebrew and Arabic speakers, to closely monitor and respond to this rapidly evolving situation.”
“Our teams are working around the clock to keep our platforms safe, take action on content that violates our policies or local law, and coordinate with third-party fact checkers in the region to limit the spread of misinformation. We’ll continue this work as this conflict unfolds.”
X chief executive Ms Yaccarino said the company had "redistributed resources and refocused internal teams" to deal with the content.
In her letter to the bloc, she said X had responded to more than 80 requests in the EU to remove content, and had added notes to certain posts to give them context.
“More than 700 unique notes related to the attacks and unfolding events are showing on X,” she wrote.
“These notes display on an additional 5,000+ posts that contain matching images or videos. This number grows automatically if the relevant images and videos are re-used in new posts.”
Meanwhile, in response to the "illegal content" claim from the EU, Ms Yaccarino said X "had not received any notices from Europol".
Mr Breton has demanded that X and Meta show they have taken "timely, diligent and objective action".
The EU introduced new laws in August 2023 which regulate the kind of content that is allowed online.
The Digital Services Act (DSA) requires so-called “very large online platforms” to proactively remove “illegal content”, and show they have taken measures to do so if requested.
The EU told the BBC it was not currently in a position to comment on what would come next in these specific cases, but explained what was hypothetically possible under the law.
The DSA allows the EU to conduct interviews and inspections and, if it is unsatisfied, proceed to a formal investigation.
If it decides that a platform has not complied or is not addressing the problems it has identified, and risks harming users, the commission can take more drastic steps.
This can include a heavy fine and, as a last resort, asking judges to temporarily ban the platform from operating in the EU.