Security News Gen Q4/2025 Threat Report

Miravi
Selected excerpts from the Gen Q4/2025 Threat Report:

AI-Driven Malware

  • The most visible AI abuse leap in Q4 came from the nation-state space. Anthropic disclosed that a China-linked group jailbroke its Claude Code model and used it to orchestrate a multi-step espionage campaign against roughly thirty organizations, including financial and government-related targets. According to Anthropic, the AI handled approximately eighty to ninety percent of the workflow, from reconnaissance and code generation to log parsing and data staging, with humans mostly approving steps and correcting mistakes. This is no longer “AI helped me write some malware”; it is AI acting as a junior operator across an entire campaign.

  • Google’s Threat Intelligence Group reported something similar at the tooling level. Its AI Threat Tracker documents experimental malware such as PROMPTFLUX and PROMPTSTEAL, droppers that call an LLM at runtime to rewrite their own Visual Basic Script, change obfuscation and adjust evasion on the fly. Instead of shipping a static payload, attackers ship a template that asks an AI model how to change itself every time it runs. This matches the “just in time” self-modification pattern we highlighted in earlier quarters, but now in code that actually runs on victim machines.

Zero-Day Exploits

  • In one of the more sophisticated spyware campaigns of the quarter, researchers uncovered a new commercial-grade Android spyware family called Landfall. It exploited a zero-day vulnerability in Samsung’s image processing library (CVE-2025-21042), allowing attackers to compromise Galaxy devices via malicious image files, including files sent over apps such as WhatsApp. In attacks like this, a person’s trust becomes part of the attack surface: encryption protects data in transit, but it cannot defend a device that has already been compromised.

  • Once on a device, Landfall silently accessed photos, messages, contacts, call logs, the microphone and precise location for months.

GhostPairing Exploitation

  • We refer to the pattern described below as a GhostPairing Attack: the attacker’s browser becomes a kind of ghost device on your account, present in every conversation, even though you never see it on your phone.

  • The scam starts as a two-minute trick. It evolves into slow, patient social engineering supported by the kind of personal data only a compromised WhatsApp account can provide.

  • AI also shows up around account takeovers. In “The code that steals your WhatsApp,” we follow a device-linking abuse pattern – what we call GhostPairing – where a simple “I found your photo” message lures victims to a fake login page that uses WhatsApp’s own pairing flow to add the attacker’s browser as a ghost device on the account. The core trick is pure social engineering around a legitimate feature; however, the consequences line up with our AI concerns. Once a WhatsApp account is silently mirrored, attackers can read private chats, learn which contacts are most trusted, and harvest voice notes and photos that can later be fed into voice cloning and synthetic media tools. The attack itself is not “AI-malware,” but it builds the raw material that AI-powered fraud will later exploit.

  • One quick scan links your WhatsApp to someone else’s browser, no malware required. From there, every private chat becomes an open door, every trusted contact a new target. The same mechanism could be reused anywhere device linking exists, from messaging apps to productivity tools.

Machine Learning Against Deepfakes

  • That shift toward ordinary, scam-driven clips is exactly why we built Deepfake Protection around the intersection of manipulated media and scam intent, using multi-modal signals from the clip itself (including indicators of cloned or heavily edited audio) plus contextual cues like money requests, urgency scripts, and off-platform handoffs; a sketch of this intersection logic follows after this list.

  • Early telemetry shows that most blocked AI scam videos cluster on a handful of major platforms, with YouTube leading by share of blocks, followed by Facebook and X. The majority are not spectacular, viral deepfakes. They are ordinary-looking clips, often with cloned or heavily edited audio, tied to financial and cryptocurrency lures. That is why the story argues that AI in a clip is not a risk signal by itself; the meaningful signal is AI paired with a request for money, an off-platform handoff, or a time-pressure script.

  • In Q4, on devices where our video scam detection feature was enabled, we detected 159,378 unique deepfake scam video instances that matched this intersection of manipulated media and scam intent.
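
The intersection logic above can be sketched in a few lines of Python. This is an illustrative mock-up under stated assumptions, not Gen’s implementation: the signal names, threshold and cue list are all hypothetical.

    from dataclasses import dataclass

    @dataclass
    class ClipSignals:
        synthetic_media_score: float  # 0..1 from a manipulated-media model (hypothetical)
        asks_for_money: bool          # payment or crypto-wallet request in the clip
        urgency_script: bool          # "act now" pressure, countdowns, limited offers
        off_platform_handoff: bool    # pushes viewers to another app or site

    def is_deepfake_scam(clip: ClipSignals, media_threshold: float = 0.8) -> bool:
        """Flag only the intersection: manipulated media AND scam intent.
        A synthetic clip with no scam cues is not treated as risky by itself."""
        scam_intent = clip.asks_for_money or clip.urgency_script or clip.off_platform_handoff
        return clip.synthetic_media_score >= media_threshold and scam_intent

    # A cloned-audio crypto "giveaway" clip is flagged; a harmless AI clip is not.
    print(is_deepfake_scam(ClipSignals(0.92, True, False, True)))    # True
    print(is_deepfake_scam(ClipSignals(0.92, False, False, False)))  # False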

Ransomware Trends

  • Ransomware and extortion remained active but showed signs of stress. On the consumer side, ransomware encounters in our telemetry declined 6.8% year over year in 2025, staying well below the elevated levels we saw during the Magniber-driven highs that began in 2024 and persisted until this summer, when the campaign was stopped. We still observed smaller, more targeted incidents, including new Trinity-family ransomware samples going after small and mid-sized businesses, but not a broad return of mass-spread ransomware lockers.

  • Coveware reported that only about 23 percent of ransomware victims paid in Q3 2025, a historic low, with average and median payments dropping by roughly two-thirds compared to the previous quarter as large enterprises increasingly refuse to pay and mid-market firms negotiate smaller amounts.

  • Chainalysis, looking at blockchain flows, estimated that global ransomware payments fell from 1.25 billion dollars in 2023 to around 813 million in 2024, roughly a one-third drop year over year, a proof point that consumers and businesses alike are becoming better educated on how to handle ransomware extortion.
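
As a quick sanity check on that headline drop, the percentage arithmetic (using only the figures cited above) works out as follows:

    payments_2023 = 1.25e9  # Chainalysis estimate for 2023, USD
    payments_2024 = 813e6   # Chainalysis estimate for 2024, USD

    drop = (payments_2023 - payments_2024) / payments_2023
    print(f"Year-over-year decline: {drop:.1%}")  # ~35.0%, i.e. roughly one-third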

Infostealers

  • Law enforcement pressure also spilled over into commodity crimeware infrastructure. In November, Operation Endgame, a coordinated international action, disrupted infrastructure used by infostealers such as Rhadamanthys, VenomRAT and Elysium, taking large numbers of servers and several domains offline. In our telemetry, Rhadamanthys detections dropped sharply after mid-November and remained low throughout the rest of the quarter. This marks a rare instance where a public takedown aligns with a clear and sustained decline in activity from a specific threat family.

  • On the supply-chain side, the discovery of the ShaiHulud v2 malware in npm packages reinforced how popular developer ecosystems can be abused to push infostealers and backdoors into thousands of build environments with minimal friction.

Scams in Ads/Feeds

  • In our Q4 analysis of scams originating on social platforms, one theme stands out: concentration. Facebook alone accounts for 78.04% of social-origin scam blocks on desktop. When combined with YouTube, that figure rises to 95.71%. In plain terms, the vast majority of risky scam clicks begin in just two places: the social feed and the video loop.

  • Our telemetry also showed a marked increase in e-shop scams in Q4 on both desktop and mobile. In fact, roughly half of all e-shop scam blocks we recorded in 2025 occurred in Q4, a clear end-of-year surge in fraudulent shops.

  • The data is unsurprising given the holiday period and public reports that Meta made around 10 percent of its annual revenue in 2024, roughly 16 billion dollars, from fraudulent ads and banned product listings, according to internal documents discussed in recent investigations. Those reports describe billions of scam-related ads per day, with some internal systems rating ads as highly likely to be fraudulent yet allowing them to run.
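
Two quick inferences follow from the numbers in this section; both derived values below are our own back-of-the-envelope reading of the text, not figures from the report:

    facebook_share = 78.04   # % of social-origin scam blocks on desktop
    combined_share = 95.71   # Facebook + YouTube combined, %
    print(f"Implied YouTube share: {combined_share - facebook_share:.2f}%")  # ~17.67%

    fraud_ad_revenue = 16e9                   # USD, per reported internal documents
    implied_total = fraud_ad_revenue / 0.10   # if that is ~10% of annual revenue
    print(f"Implied 2024 Meta revenue: ${implied_total / 1e9:.0f} billion")  # ~$160B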

Other Standout Threat Analyses

  • Data breach events accelerated sharply in Q4, both in frequency and scale, as measured across our product users. Monthly breach events rose from a low of 307 in January to a peak of more than 3,000 in November, before closing the year at 2,243 incidents in December, representing a +175.23% quarter-over-quarter increase in total breach events.

  • The volume of exposed data followed a similar trajectory: breached records climbed from 557,000 in July to more than 2.09 million in December, with a sharp spike in November (1.4 million records) along the way. Overall, the amount of breached data increased +157.36% QoQ, underscoring not just more frequent breaches, but larger and more consequential exposure events as the quarter progressed (the QoQ arithmetic is sketched after this list).

  • Financial threats are now visible nearly every time people make daily money decisions. They show up when someone opens a new account, tips a creator, pays a bill or reacts to a text that looks like it came from their bank. Stolen identity data is used to pose as “ideal” new customers applying for loans. Payment features designed for convenience, such as tipping and post-settlement adjustments, are used to turn one-dollar transactions into five-figure debits.
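
The quarter-over-quarter figures in the breach items above are plain percentage changes. A minimal sketch follows; the quarterly totals here are placeholders chosen only to land near the cited +175.23%, since the report does not publish the underlying sums:

    def qoq_change(current: float, previous: float) -> float:
        """Quarter-over-quarter percentage change."""
        return (current - previous) / previous * 100

    q3_events, q4_events = 2_900, 7_982  # hypothetical totals, not from the report
    print(f"{qoq_change(q4_events, q3_events):+.2f}% QoQ")  # +175.24% with these placeholders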
Read the rest: Gen Q4/2025 Threat Report
 
Meta made around 10 percent of its annual revenue in 2024, roughly 16 billion dollars, from fraudulent ads and banned product listings, .... Those reports describe billions of scam-related ads per day, with some internal systems rating ads as highly likely to be fraudulent yet allowing them to run.
Beneficiaries: scammers, scammers' suppliers including MEGA corporations, authorities and politicians, countries, etc.
Losers: victims.
 
AI acting as a malware intern, deepfakes disguised as everyday clips, and ads funding scams… The report shows fraud becoming routine.
It’s like walking through a forest full of wolves, but the most dangerous ones don’t howl — they blend in as bushes and wave like neighbors. 🌲🐺