“On 10.1% of ad revenue being generated from violating ads – this number was a rough and overly-inclusive estimate rather than a definitive or final figure; in fact, subsequent review revealed that many of these ads weren’t violating at all. The assessment was done to validate our planned integrity investments – including in combating frauds and scams – which we did.”
In the last 15 months, reports from users about scam ads have declined by more than 50%, the Meta spokesman said.
“And, so far in 2025, we’ve removed more than 134 million pieces of scam ad content. In addition to removing scam content outright, we also incorporate people’s feedback to better understand their experiences and identify situations in which it may not be immediately obvious that something is a scam. We use that feedback to train our systems to proactively deter similar scam content.”
Meta has repeatedly refused to address the question of what it does with revenue it identifies as being generated by scams.
FMA warns about continuing scam ads on Meta
But ads for scams continue to appear.
On August 22, the Financial Markets Authority warned: “Scammers who have been using social media advertisements, particularly on Meta platforms, to impersonate New Zealand celebrities, journalists, and politicians have now turned to impersonating financial commentators and business leaders by creating fake profiles.”
The FMA said scammers were impersonating the likes of Carmel Fisher, Frances Cook, Mike Hosking and Gareth Morgan – using tools like deepfake videos to lure people into fake investments.
Billions being spent on scam ads
“Scam advertising — or malvertising — is one of the biggest threats to people’s online safety, making up more than 40% of all cybersecurity threats targeting individuals,” says Mark Gorrie, Asia-Pacific VP for Norton.
“The fact that scammers are spending billions to run these ads shows just how profitable the business of fraud has become.
“Stopping scam ads might mean a short-term hit to advertising revenue, but some things are worth more than profit.”
So what is to be done?
Stephen Kho works for a high-tech cyber-security firm, but has a low-tech suggestion for dealing with social media platforms and ads from scammers: Hit them with huge fines.
“If there’s a $100 million penalty, then a board will find the resources to take action,” he told the Herald. “Money is the language they understand.”
Kho, the Brisbane-based director of offensive security for Gen (owner of Norton, Avast and other security software brands), says Big Tech firms have taken the most significant steps to tighten their systems in the EU, where cyber-scams can result in fines of up to €20 million ($41m) or 4% of total global annual revenue.
In Australia, the Scams Prevention Framework Act 2025, which passed in February, means social media platforms face fines of up to A$50m if they fail to take “reasonable steps” to detect and remove scams.
Around the same time – although it did not draw a link to the legislation – Meta Australia introduced new measures, including new verification requirements, for financial advertisers.
The Scams Prevention Framework Act was one of a series of measures that the Australian Government credits with a 25.9% fall in financial losses to scams, while the Herald understands that an NZ survey, to be released on Monday, will reveal a significant increase.
Our Government has kept a watching brief on Australia’s move to introduce megafines, but mid-year it did match the Australian move to create a national anti-scam centre.
But while across the Tasman a cross-agency taskforce has literally been put under one roof, NZ’s Anti-Scam Alliance is a virtual multi-agency collaboration.
Our Government also leaned on banks to finally introduce account name and number verification, making it harder for scammers to receive money from their victims.
AI threat
Kho said that while AI could be used to boost cyber-protections, it was also being used by scammers to produce better ads – “there’s no bad spelling or grammar any more” – and more, better scams, faster.
Fraudsters could now use “vibe coding” – natural language commands to an AI – to create a realistic knock-off of a retailer’s website (as recently happened to Kathmandu), then set up cloud hosting and promotional campaigns for it.
If you do respond to a scam ad, even the process of contacting you can be delegated to a deepfake AI – although Kho says there are still protections. His company’s Norton 360 product, for example, has a deepfake detector that can identify an AI-generated voice, which he says has a narrower modulation than a real person’s.
It’s the algorithm, stupid
Scams are as old as commerce.
But an open letter from US publication Consumer Reports today, which urges the US Bureau of Consumer Protection “to take action against Meta for knowingly showing billions of scam advertisements per day and for failing to take reasonable efforts to staunch the deluge of fraudulent ads on its websites,” says:
“Even savvy and sophisticated consumers could easily fall victim to one of the billions of deceptive ads shown on the platform.
“Meta’s ad-targeting system exacerbated the problem, as Meta users who clicked on one scam ad were likely to be shown more scam ads thanks to the company’s ad-personalisation algorithms, which attempt to show users more ads like ones they’ve interacted with.
“Thus, if someone is particularly vulnerable to scams, Meta’s algorithm ensures those scam ads are unavoidable.”
Meta had no comment on the Consumer Reports letter.
Chris Keall is an Auckland-based member of the Herald’s business team. He joined the Herald in 2018 and is the technology editor and a senior business writer.

