Meta’s new scam prevention claims: Sue is not impressed

Did you hear that noise? That was me climbing up on my soapbox. AHEM.

This morning’s Social Media Today newsletter featured a really interesting – and eyeroll-inducing – headline: “Meta Touts Detection Efforts Ahead of Anti-Scam Summit.”

Essentially, ahead of this big summit, Meta has released some figures to “prove” that it is doing something to combat the rampant scam ads all over its platforms. This, of course, would have nothing to do with last month’s positively astounding Reuters reporting that Meta knew about the rampant scams – internal documents projected that roughly 10 percent of its 2024 revenue would come from scam ads, and estimated that users were being shown roughly 15 billion scam ads a day. Not a typo, billion with a big ol B. (If you have not read the Reuters report, take the time to do it).

OK so back to these new statistics Meta released to prove how it’s doing something about this (per the Social Media Today article above):

  • In the last 15 months, reports about scam ads have declined more than 50%, and so far in 2025, we’ve removed more than 134 million scam ads.

  • In the first half of 2025, our teams detected and disrupted nearly 12 million accounts – across Facebook, Instagram and WhatsApp – associated with the most adversarial and malicious scammers: criminal scam centers.

  • We’re using facial recognition technology to stop criminals that abuse images of celebrities and other public figures to lure people into scams.

OK, let’s look at these three data points with a critical (skeptical) eye. As someone who works with data all day, I am very familiar with the nifty little trick of picking the numbers that make you look the best. It’s all in the framing, baby. But what all three of the “wins” listed above have in common is that they are lacking the proper context. So, let’s add the context!

  • Claim #1: In the last 15 months, reports have decreased 50%. Well, that sounds great. But let’s be honest – a lot of people have learned that tickets submitted to Meta Support go into a weird black hole of nothingness, and as a result, fewer people report anything because they feel defeated and unsure any action will actually be taken. In addition, Meta makes it harder and harder to even find the place to click to submit a report. So I’m not really sure a decrease in reporting means there’s been a decrease in troublesome ads. It just means the number of reports decreased – most likely for a whole bunch of unrelated reasons.

  • Still Claim #1: So far in 2025, we’ve removed more than 134 million scam ads. Woo hoo! 134 million! That’s a ton! That’s like many millions! But remember what Reuters told us… There are 15 billion (with a B) scam ads per day. So, Meta saying that in almost an entire calendar year, 134 million (with an M) ads were taken down isn’t really that impressive. Let’s do the math for fun.

    • In an entire year, Meta removed 134 million scam ads. Divide that by 365 days in a year, and you get roughly 367,000 per day. So 367,000 is roughly 0.0024 percent of the 15 billion scam ads per day.

    • Looking at an annual calculation, if there are 15 billion scam ads a day, 15 billion x 365 days = 5,475,000,000,000 (almost 5.5 trillion) per year. And Meta took down 134 million ads in the same year. 134 million is about 0.0024 percent of 5.475 trillion – the same tiny fraction, of course.

    • So in essence, Meta is bragging about catching and removing roughly 0.0024 percent – about two thousandths of one percent – of scam ads. Wow. Sue is not impressed.

  • Claim #2: In the first half of 2025, our teams detected and disrupted nearly 12 million accounts – across Facebook, Instagram and WhatsApp. Again, yay. 12 million accounts is like a ton! But Facebook has more than 3 billion (with a B) users. 12 million out of 3 billion is 0.4 percent of Meta’s global account footprint. If 10 percent of your ads are scams, it’s highly unlikely that all of that activity comes from less than half a percent of your account profiles – even acknowledging that Meta says it targeted criminal scam centers, presumably the worst, highest-volume offenders, first. Sorry not sorry, Sue is still not impressed.

  • Claim #3: We’re using facial recognition technology to stop criminals that abuse images of celebrities and other public figures to lure people into scams. OK there’s no math on this one, but it is still missing lots of context.

    • Meta and all of the social media platforms have way, way, WAY too much of our personal information already. The addition of facial recognition is not good news to me, and it makes me wonder a whole lot about personal privacy rights. As I wrote earlier this year with the Discord data breach, this is only going to become a bigger issue as more countries follow Australia’s lead and place an age limit/ban on young users of social media. This means that young users (with clean credit records) will be submitting proof of age to the platforms to verify they are old enough to have an account. And then the platforms will have all of this data on all of these teenagers – and they have not proved they can manage this type of data responsibly, certainly not at that scale.

    • It aggravates me that this third claim focuses on criminals abusing images of celebrities and public figures. Granted, these people are often used in scams because of their notoriety and popularity. But how about protecting the little people who aren’t famous for once? Sue is not impressed.
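
For anyone who wants to double-check my math, here’s a quick back-of-the-envelope script. The inputs are just the figures quoted above – the Reuters estimate of 15 billion scam ads a day, Meta’s claimed 134 million removals and 12 million disrupted accounts, and roughly 3 billion users:

```python
# Back-of-the-envelope check of Meta's numbers against the Reuters estimate.

SCAM_ADS_PER_DAY = 15_000_000_000   # Reuters estimate of scam ads shown daily
REMOVED_IN_2025 = 134_000_000       # Meta's claimed removals so far in 2025
DAYS_PER_YEAR = 365

removed_per_day = REMOVED_IN_2025 / DAYS_PER_YEAR
scam_ads_per_year = SCAM_ADS_PER_DAY * DAYS_PER_YEAR

daily_pct = removed_per_day / SCAM_ADS_PER_DAY * 100
annual_pct = REMOVED_IN_2025 / scam_ads_per_year * 100

print(f"Removed per day: {removed_per_day:,.0f}")        # ~367,123
print(f"Share of daily scam ads: {daily_pct:.4f}%")      # ~0.0024%
print(f"Share of annual scam ads: {annual_pct:.4f}%")    # ~0.0024%

# Claim #2: 12 million disrupted accounts vs. ~3 billion users
ACCOUNTS_DISRUPTED = 12_000_000
TOTAL_USERS = 3_000_000_000
disrupted_pct = ACCOUNTS_DISRUPTED / TOTAL_USERS * 100
print(f"Disrupted accounts: {disrupted_pct:.1f}% of users")  # 0.4%
```

Run it yourself – the fractions don’t get any more impressive no matter how you slice them.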

Just like Meta’s recently launched Brand Rights Protection Hub (see my post here), it seems someone at Meta has decided that they need to do more to appear proactive on the issues of personal safety and brand safety. But when you truly evaluate these things, it’s a lot of smoke and mirrors. It’s lipstick on a great-grandpa-sized pig. The truth is that Meta’s efforts are not even half measures. They are drops in the very large bucket of security and fraud issues across their platforms.

I honestly don’t know what it will take for the social media platforms to take security more seriously and do more to protect their users. And the honest truth is that they may be procrastinating intentionally. Remember that Reuters article? It reported that Meta earns roughly $7 billion in annualized revenue from scam ads. That sounds like 7 billion reasons to pretend to take security seriously while continuing to collect all that money.

If you take a peek at the actual announcement from Meta where they shared these statistics, they give a wink and a nod to this very fact. The second of three takeaways in their own press release is: “Scams don’t just harm individual victims — they undermine trust in our entire advertising ecosystem, which is the very foundation of our business model.”

I don’t know any business that would intentionally destroy the foundation of their business model (especially one that’s so darn successful), do you?

Sue is definitely not impressed.
