In this Help Net Security interview, Michal Tresner, CEO of ThreatMark, discusses how cybercriminals are weaponizing AI, automation, and social engineering to industrialize money mule operations. He looks at how these networks have changed and how behavioral intelligence is helping to catch fraud. Tresner also shares practical tips for CISOs trying to stop mule activity before it gets out of hand.
How are cybercriminals using automation, AI, or social engineering to scale mule recruitment and movement of funds?
The fraud landscape has fundamentally shifted with the adoption of generative AI. What once required manual effort can now be executed at unprecedented speed and with far greater precision. Cybercriminals are leveraging automation to create highly personalized phishing attacks, using personal data scraped from social media to craft messages that appear entirely legitimate.
What’s particularly concerning is how AI has removed the technical barriers that previously kept operations at this organizational scale out of reach. Fraudsters can now generate convincing fake websites, write persuasive messages, and run large-scale recruitment campaigns across job sites, dating apps, and social media platforms. Bots are often used to automatically identify and engage potential mules.
A particularly troubling trend involves romance and investment scams that cultivate long-term relationships. These aren’t quick-hit operations. They’re sophisticated campaigns that gradually build trust before exploiting victims as money mules, often without the victims realizing they are participating in criminal activity. The scale is staggering: elderly victims alone lose approximately $80 billion annually, while romance scams account for $3.8 billion in losses.
Once criminals secure the funds, money mules act as human routers. By moving stolen money through real customer accounts, they erase its origin and shift it beyond the bank’s reach within minutes. Mule activity isn’t a by-product of fraud—it’s a deliberate tactic that enables scale, speed, and anonymity across jurisdictions. Remove the mules, and the fraud chain collapses.
Have you seen any recent trends in how mule networks are structured, managed, or disguised across jurisdictions?
Mule networks have evolved into highly organized, hierarchical operations that mirror legitimate businesses. We’re witnessing sophisticated “mule as a service” models where specialized groups recruit, verify, and manage mules across multiple jurisdictions, then sell these services to other criminal enterprises.
These networks are increasingly embedded within legitimate-looking businesses, particularly in e-commerce, crypto services, and consulting sectors. Rather than relying on synthetic identities, many exploit real banking customers as money mules, bypassing traditional fraud detection systems focused on identity verification.
Cross-border operations have become remarkably sophisticated. First-tier mules quickly move funds to secondary and tertiary accounts across different jurisdictions before converting them into cryptocurrency or other hard-to-trace assets. This layering technique makes it far harder for financial institutions to trace or recover stolen funds.
Perhaps most concerning is the agility of these networks. When part of their operation is exposed, they can quickly reroute operations, often without any noticeable loss in speed or scale.
From a cybersecurity perspective, what signals or behaviors might indicate that a user, device, or transaction is part of a mule operation?
Traditional fraud detection approaches focusing on device reputation, user identity, and location fall short when legitimate users are manipulated into committing fraud from their own devices. In these cases, the device and credentials may appear clean, but the behavior reveals a different story.
The industry needs to shift focus toward behavioral signals that can more accurately indicate manipulation.
These include signs of hesitation, confusion, or inconsistent user actions during a transaction, which suggest the person is under pressure or being manipulated. Real-time analysis of digital interactions can detect subtle behavioral shifts that point to coercion or external influence.
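To make that concrete, here is a minimal Python sketch of one such behavioral signal: comparing hesitation (gaps between user actions) during a payment session against that user’s historical baseline. The data, threshold, and z-score approach are illustrative assumptions, not a description of any vendor’s actual model.

```python
import statistics

def behavioral_shift_score(session_intervals, baseline_intervals):
    """Compare pauses between user actions in the current session against
    the user's historical baseline.

    Both arguments are lists of inter-action gaps in seconds; the sample
    data and threshold below are illustrative assumptions. Returns a
    z-score: large positive values mean unusually long pauses, one possible
    hint of coercion or coaching during a transaction.
    """
    mean = statistics.mean(baseline_intervals)
    stdev = statistics.stdev(baseline_intervals) or 1e-6  # guard against zero variance
    return (statistics.mean(session_intervals) - mean) / stdev

# Example: a user who normally moves through a payment form quickly
baseline = [1.2, 0.9, 1.4, 1.1, 1.0, 1.3, 0.8, 1.2]
current_session = [4.8, 6.1, 5.3, 7.0]  # long pauses, as if someone is talking them through it
if behavioral_shift_score(current_session, baseline) > 3.0:
    print("behavioral anomaly: review or step up authentication")
```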
On the transactional side, telling signs include sudden inflows of funds followed by rapid transfers to new or previously inactive beneficiaries. Other markers include newly registered accounts making high-value payments shortly after account opening, or unusual transaction timing and sequencing that doesn’t match established user patterns.
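Those transactional markers can be approximated with simple heuristics. The sketch below assumes hypothetical account and transaction records with the fields shown, and flags rapid pass-through to unfamiliar beneficiaries as well as high-value payments from newly opened accounts; a production system would tune these rules and combine them with behavioral scores.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Illustrative thresholds; real systems tune these per institution and segment.
PASS_THROUGH_WINDOW = timedelta(hours=2)   # inflow quickly followed by an outflow
NEW_ACCOUNT_AGE = timedelta(days=30)       # what counts as a "newly registered" account
HIGH_VALUE = 5_000                         # high-value payment threshold

@dataclass
class Txn:
    txn_id: str
    direction: str        # "credit" or "debit"
    amount: float
    timestamp: datetime
    beneficiary: str = ""

@dataclass
class Account:
    opened_at: datetime
    known_beneficiaries: set = field(default_factory=set)

def mule_signals(account: Account, transactions: list[Txn]) -> list[tuple]:
    """Flag the simple transactional markers described above."""
    signals = []
    inflows = [t for t in transactions if t.direction == "credit"]
    outflows = [t for t in transactions if t.direction == "debit"]

    # Sudden inflow followed by a rapid transfer to a new or inactive beneficiary.
    for i in inflows:
        for o in outflows:
            rapid = timedelta(0) <= (o.timestamp - i.timestamp) <= PASS_THROUGH_WINDOW
            if rapid and o.beneficiary not in account.known_beneficiaries:
                signals.append(("rapid_pass_through", i.txn_id, o.txn_id))

    # High-value payment shortly after account opening.
    for o in outflows:
        if (o.timestamp - account.opened_at) <= NEW_ACCOUNT_AGE and o.amount >= HIGH_VALUE:
            signals.append(("new_account_high_value", o.txn_id))

    return signals

# Example: a week-old account receives funds and forwards most of them within an hour.
acct = Account(opened_at=datetime(2024, 6, 1))
txns = [
    Txn("t1", "credit", 9_000, datetime(2024, 6, 8, 10, 0)),
    Txn("t2", "debit", 8_900, datetime(2024, 6, 8, 10, 45), beneficiary="unknown-iban"),
]
print(mule_signals(acct, txns))
```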
The most reliable detection often combines behavioral intelligence with transaction pattern analysis and understanding the broader lifecycle of mule accounts—from recruitment through activation, utilization, and eventual abandonment.
How do you see money mule operations adapting as financial platforms adopt faster payments, crypto, or AI-driven services?
As financial systems evolve toward real-time payments and cryptocurrency integration, mule operations are becoming more agile. Faster payment systems compress the window for detection and intervention. Once funds move, recovery becomes significantly more difficult.
Criminal networks are beginning to leverage AI not just for recruitment but for adaptive transaction strategies that constantly probe for detection system weaknesses. Some are deploying their own AI tools to fine-tune transaction timing, amounts, and routing in ways that reduce the risk of being flagged.
As cryptocurrency becomes more mainstream and integrated with traditional banking, we’re seeing more advanced layering techniques. Funds are quickly moved through multiple conversion points—from fiat to various cryptocurrencies and back—creating complex transactional trails that are difficult to unwind.
Forward-thinking criminal organizations are already exploring emerging financial technologies like DeFi platforms and non-custodial wallets that operate outside traditional regulatory frameworks. The challenge for financial institutions is finding ways to maintain strong security controls without sacrificing the customer experience benefits these innovations provide.
What’s your advice to CISOs and fraud teams who want to build a more proactive strategy against money mule threats?
Financial institutions must shift from reactive fraud detection to proactive fraud disruption. This requires addressing the entire fraud lifecycle rather than isolated transactions.
First, adopt an ecosystem approach. Individual solutions cannot address the scale or complexity of sophisticated mule networks. Effective strategies should combine behavioral intelligence, early threat detection, user empowerment tools, and cross-institutional intelligence sharing.
Second, invest in behavioral analytics capabilities that can detect users acting under fraudster influence, even when using regular devices and credentials. This is critical as more than 70% of fraud losses now come from authorized push payment scams where traditional anomaly detection falls short.
Third, empower customers with tools to protect themselves. Banking customers need ways to verify suspicious content instantly across multiple channels including websites, SMS, emails, and messaging applications.
Finally, participate in fraud intelligence networks enabling privacy-preserving information sharing about emerging threats. Criminal organizations share intelligence effectively—financial institutions must do the same to stay ahead of evolving threats.
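As a rough illustration of what privacy-preserving sharing could look like, the sketch below uses keyed hashes so institutions can compare suspected mule beneficiaries without exchanging raw account data. The shared key, identifiers, and matching logic are assumptions for illustration only; real consortia typically rely on stronger schemes such as private set intersection.

```python
import hashlib
import hmac

# Hypothetical secret agreed between participating institutions for this example;
# production deployments would use stronger, purpose-built protocols.
SHARED_KEY = b"consortium-demo-key"

def pseudonymize(account_identifier: str) -> str:
    """Return a keyed hash of an account identifier so institutions can
    compare suspected mule beneficiaries without sharing raw account data."""
    return hmac.new(SHARED_KEY, account_identifier.encode(), hashlib.sha256).hexdigest()

def match_against_watchlist(beneficiary: str, shared_watchlist: set[str]) -> bool:
    """Check an outgoing payment's beneficiary against hashes contributed
    by other institutions in the intelligence-sharing network."""
    return pseudonymize(beneficiary) in shared_watchlist

# Example: institution A contributes a hash, institution B checks a payment against it.
watchlist = {pseudonymize("GB29NWBK60161331926819")}
print(match_against_watchlist("GB29NWBK60161331926819", watchlist))  # True
```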
Reducing false positives is just as important as catching fraud. Holistic approaches typically achieve significant reductions in both false positives and actual fraud losses, improving both security and customer experience.