Meta, the parent company of Facebook, is expanding its use of facial recognition technology to tackle celebrity scam ads, a prevalent issue where scammers exploit images of public figures to lure users into fraudulent websites. Meta says the move is part of a broader strategy to bolster its anti-scam measures and enhance user safety on its platforms.
Meta's VP of Content Policy, Monika Bickert, announced via a blog post that the company is integrating facial recognition as a supplementary measure to its existing ad review systems. These systems already employ machine learning classifiers to detect suspicious ads.
By using facial recognition, Meta aims to identify and block ads flagged as potentially fraudulent, particularly those featuring the likenesses of celebrities without authorization—a tactic commonly referred to as "celeb-bait."
In initial trials involving a select group of celebrities, Meta reports "promising" results. The technology works by scanning flagged ads for unauthorized uses of a public figure's image, comparing any faces detected in the ad against that figure's official Facebook and Instagram profile pictures. If a match is found and the ad is deemed fraudulent, it is blocked.
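Meta has not published implementation details, but the general face-matching idea it describes — turning each face into an embedding vector and checking whether two embeddings are close enough — can be sketched roughly as follows. This is a minimal illustration with made-up toy vectors and an assumed similarity threshold, not Meta's actual system:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_likely_match(ad_embedding: np.ndarray,
                    profile_embedding: np.ndarray,
                    threshold: float = 0.85) -> bool:
    """Flag the ad image as a likely use of the public figure's face
    when its embedding is close enough to the profile-photo embedding.
    The 0.85 threshold is illustrative, not a published value."""
    return cosine_similarity(ad_embedding, profile_embedding) >= threshold

# Toy embeddings: in practice these would come from a face-recognition model.
profile = np.array([0.9, 0.1, 0.4])   # celebrity's profile photo
ad_same = np.array([0.88, 0.12, 0.41])  # near-identical face in a flagged ad
ad_diff = np.array([0.1, 0.9, 0.2])     # unrelated face

print(is_likely_match(ad_same, profile))  # True
print(is_likely_match(ad_diff, profile))  # False
```

A real pipeline would first run face detection on the ad image and, per Meta's stated policy, discard the embeddings immediately after the comparison.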
Meta emphasizes that any facial data generated during this process is deleted immediately after comparison, ensuring it is not stored or used for any other purposes.
Meta also considers facial recognition an effective tool against deepfake scams—ads featuring AI-generated images of famous people—and celebrity imposter accounts. These measures aim to prevent fraudsters from impersonating public figures to exploit Meta's platforms for deceptive activities.
Another application of facial recognition that Meta is exploring is to streamline the account recovery process for users who have been locked out of their profiles due to scams. By using a video selfie for verification, Meta intends to offer a faster alternative to traditional document-based methods, enhancing user convenience while maintaining security.
The video selfies will be encrypted and securely stored, only used for immediate verification before being deleted.
While these facial recognition tests are being conducted globally, they notably exclude the U.K. and European Union, regions with stringent data protection regulations requiring explicit user consent for biometric processing.
This strategic omission highlights ongoing tensions between Meta's data practices and European privacy laws. Meta’s approach also raises important discussions around data ethics, privacy, and the potential regulatory challenges that accompany the use of biometric technologies.
In July, the European Commission said Meta's "pay or consent" model in Europe is not compliant with the E.U. Digital Markets Act: because the Facebook and Instagram parent company offers European users only a paid ad-free subscription as the alternative, users cannot opt for less personal data usage unless they pay for it.