Meta Deploys Advanced AI to Combat Harmful Content, Reduces Vendor Reliance
By admin | Mar 19, 2026 | 4 min read
Meta revealed on Thursday that it is beginning to deploy more sophisticated AI systems for content enforcement, alongside plans to scale back its use of third-party vendors for these moderation duties. These enforcement tasks involve identifying and removing content related to terrorism, child exploitation, drugs, fraud, and scams. The company stated it will roll out the advanced AI systems across its apps once they reliably surpass the performance of its current enforcement methods.
In a blog post, Meta explained, “While we’ll still have people who review content, these systems will be able to take on work that’s better-suited to technology, like repetitive reviews of graphic content or areas where adversarial actors are constantly changing their tactics, such as with illicit drug sales or scams.”
Meta anticipates that these AI systems will detect violations more accurately, better prevent scams, respond more swiftly to real-world events, and reduce instances of over-enforcement. Early testing has shown promising results: the AI can detect twice as much violating adult sexual solicitation content as human review teams, while also cutting the error rate by more than 60%.
The systems are also designed to identify and prevent more impersonation accounts involving celebrities and other high-profile individuals, and help stop account takeovers by detecting suspicious signals like logins from new locations, password changes, or profile edits. Additionally, Meta reports the AI can identify and mitigate approximately 5,000 daily scam attempts where fraudsters try to obtain users’ login credentials.
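Meta has not published how its takeover detection works; the sketch below is purely illustrative, combining the signals the article mentions (a login from a new location, a recent password change, a profile edit) into a weighted risk score. All names, weights, and thresholds are invented for demonstration.

```python
# Toy risk scorer for login events. Signal names come from the article;
# the weights and threshold are invented assumptions, not Meta's logic.
from dataclasses import dataclass


@dataclass
class LoginEvent:
    new_location: bool       # login from a location not previously seen
    password_changed: bool   # password changed shortly before/after login
    profile_edited: bool     # profile fields edited shortly after login


def risk_score(event: LoginEvent) -> float:
    """Sum weighted signals into a score between 0.0 and 1.0."""
    score = 0.0
    if event.new_location:
        score += 0.40
    if event.password_changed:
        score += 0.35
    if event.profile_edited:
        score += 0.25
    return score


def is_suspicious(event: LoginEvent, threshold: float = 0.6) -> bool:
    """Flag the login for extra verification once the score crosses a threshold."""
    return risk_score(event) >= threshold
```

In a real system, a flagged login would trigger a step-up check (such as re-authentication) rather than an outright block, since any single signal, like a password change, is often legitimate on its own.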
“Experts will design, train, oversee, and evaluate our AI systems, measuring performance and making the most complex, high‑impact decisions,” Meta noted in the blog post. “For example, people will continue to play a key role in how we make the highest risk and most critical decisions, such as appeals of account disablement or reports to law enforcement.”

This shift follows roughly a year during which Meta has relaxed certain content moderation rules. Last year, the company discontinued its third-party fact-checking program, adopting a model similar to X's Community Notes instead. It also lifted restrictions on "topics that are part of mainstream discourse" and encouraged users to take a "personalized" approach to political content.
The announcement also arrives as Meta and other major tech companies face multiple lawsuits seeking to hold social media platforms accountable for harms to children and young users.
Separately, Meta announced on Thursday the launch of a Meta AI support assistant, providing users with 24/7 access to support. This assistant is rolling out globally within the Facebook and Instagram apps for iOS and Android, as well as in the Help Center on Facebook and Instagram for desktop users.