No faith in Meta’s Trusted Partner programme

Media outfit Internews has published a report slamming Meta’s Trusted Partner programme.

This programme is supposed to give human rights groups a priority channel to alert Meta to harmful and dangerous content posted on Facebook and Instagram with the aim of identifying and removing this content as quickly as possible.

However, the report claims that some organisations have experienced long delays when reporting such content – the same delays regular users of the social media platforms face when making similar reports.

Response times were slow and inconsistent, and in some cases Meta failed to react at all, even to the most dangerous and time-sensitive content, such as calls for and threats of imminent harm.

The report is based on dozens of interviews with some of Meta’s closest partners, and found that many of the most severe operational failures of the Trusted Partner program appear to relate directly to a lack of resourcing and poor staffing.

Many Trusted Partners are choosing to supplement or bypass the official Trusted Partner channel by communicating directly with personal contacts at Meta, or at least copying them into official reports to ensure they are read. Partners who can use these contacts receive better responses, which indicates that the programme is not functioning as it should.

Meta declined requests from Internews to provide information on average response times or internal targets.

Rafiq Copeland, Platform Accountability Advisor at Internews and author of the report, said:

“Trusted flagger programs are vital to user safety, but Meta’s partners are deeply frustrated with how the program has been run. As Meta launches Threads to be a ‘public square for communities,’ can we trust the company to operate it responsibly? This research suggests more investment is needed to ensure Meta’s platforms are safe for users.”