Insights from Mind the Sec: The New Frontier of AI-Driven Fraud
October 2, 2025 | 4 minute read
By Mario Cesar Santos
At Mind the Sec 2025, Aware's Vice President of Latin America, Mario Cesar Santos, spoke on a subject that is rapidly transforming the identity security landscape: the rise of generative AI and large language models as enablers of fraud.

For those unfamiliar, a large language model (LLM) is an artificial intelligence system trained on massive amounts of text data to understand and generate human-like language; tools such as ChatGPT and Gemini are examples. LLMs can write convincing text, answer questions, translate languages, and even mimic communication styles. These are capabilities that fraudsters can exploit to create realistic phishing messages, scripts, or fake personas at scale.
For years, organizations have battled fraudsters who relied on crude phishing emails, falsified documents, and stolen credentials. Those tactics have not disappeared, but they are being augmented (and in many cases eclipsed) by AI-driven tools that allow fraud to operate on an industrial scale.
The Rise of AI-Driven Fraud
Generative AI has lowered the barrier to entry for fraud and supercharged its execution. Fraudsters are now creating synthetic identities by combining fragments of real personal data with AI-generated images, videos, and documents that are increasingly indistinguishable from the genuine article. Deepfakes and video manipulations can bypass basic liveness checks and trick even trained human reviewers. At the same time, large language models are being used to craft highly convincing social engineering scripts in multiple languages, making attacks more personalized and effective than ever before.
The impact is not limited to phishing or impersonation. AI is also enabling the emulation of legitimate applications, particularly banking and identity verification apps, and the injection of malicious code into existing ones. Additionally, LLMs can help fraudsters find system vulnerabilities and write malicious code without being fully versed in programming languages.
Attackers can now orchestrate massive campaigns with automated agents, producing endless variations of fraudulent content to avoid detection.
In contrast with the manual, opportunistic fraud attempts of the past, these new methods are realistic, adaptive, and relentless.
Why Current Defenses Are Falling Short
The uncomfortable truth is that many organizations have not yet adapted to this step change in fraud tactics. Traditional defenses were designed for a different era and are ineffective against synthetic identities, deepfakes, or AI-driven fraud at scale. Businesses that continue to rely on legacy identity systems or human review of high-risk transactions risk becoming easy prey in this new environment.
Sectors such as financial services and e-government remain prime targets, where the stakes are highest and the potential rewards most enticing. Telecom, e-commerce, and healthcare are also increasingly in the crosshairs, along with corporate networks where weak identity and access management systems offer attackers an easy entry point. Yet across industries, most organizations remain underprepared. Reliance on passwords, document checks, and manual review is no longer sufficient. Many businesses continue to depend on outdated identity verification rules or rudimentary liveness detection methods that adversaries equipped with AI can bypass with relative ease.
Shifting Priorities in Identity Security
To confront this new reality, organizations are beginning to shift their priorities. Identity verification can no longer revolve around documents and passwords alone. Biometric-based solutions, particularly those incorporating multiple layers of defense, are becoming central. Advanced fraud detection tools capable of identifying artifacts invisible to human reviewers are now essential, as is continuous monitoring that evaluates risk across an entire user journey. At the same time, enterprises must balance effectiveness with fairness and compliance, ensuring that their systems do not introduce demographic bias or misuse sensitive personal data.
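To make the idea of layered, continuous risk evaluation concrete, here is a minimal illustrative sketch of how scores from multiple defense layers might be fused into a single decision. All signal names, weights, and thresholds below are hypothetical examples for illustration, not the configuration of any real product.

```python
# Illustrative sketch: fusing normalized risk scores (0 = low risk,
# 1 = high risk) from several defense layers into one decision.
# Signal names, weights, and thresholds are hypothetical.

def fuse_risk_signals(signals, weights):
    """Weighted average of per-layer risk scores."""
    total_weight = sum(weights[name] for name in signals)
    return sum(signals[name] * weights[name] for name in signals) / total_weight

def decide(score, review_threshold=0.4, block_threshold=0.7):
    """Map a fused score to an action: approve, escalate, or block."""
    if score >= block_threshold:
        return "block"
    if score >= review_threshold:
        return "manual_review"
    return "approve"

# Example: three layers observed during one user session. Even though
# liveness and document checks look clean, an anomalous behavior signal
# pushes the fused score into the escalation band.
signals = {
    "liveness_risk": 0.2,   # passive liveness check looked genuine
    "document_risk": 0.1,   # document checks passed
    "behavior_risk": 0.9,   # anomalous device/behavior pattern
}
weights = {"liveness_risk": 0.3, "document_risk": 0.3, "behavior_risk": 0.4}

score = fuse_risk_signals(signals, weights)   # 0.45
action = decide(score)                        # "manual_review"
```

The point of the sketch is the design principle, not the arithmetic: no single layer decides alone, so an attacker who defeats one check (say, liveness) can still be caught by signals gathered elsewhere in the journey.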
Regulators are paying close attention, demanding transparency, fairness, and accountability. Standards such as the U.S. Department of Homeland Security’s biometric evaluations and NIST’s FATE program provide benchmarks for accuracy and fairness. Meanwhile, privacy regulations such as GDPR, Brazil’s LGPD, and evolving U.S. state laws require organizations to prove that they are safeguarding biometric and personal data responsibly. The burden of compliance is increasing, but so is the risk of reputational damage for companies that fail to meet expectations around equity and transparency.
What the Future Holds for Fraud
The outlook for the next several years suggests that fraud will only become more advanced. Hyper-realistic deepfakes, fraud-as-a-service models, and multi-modal synthetic identities will blur the line between real and artificial.
At Aware, we believe the answer lies in biometrics fortified by layered defenses. Our work is focused on advancing liveness detection through passive and dynamic methods, building fusion “super models” that integrate multiple modalities, and delivering real-time fraud intelligence that evolves as quickly as adversaries do. Just as important, our platform is designed to provide transparency, flexibility, and independent validation to ensure trust in every deployment.
Key Takeaways for Leaders
The key takeaway is clear: generative AI-enabled fraud is not simply an incremental challenge; it is a step change. To remain secure, organizations must:
- Acknowledge the step change—GenAI-enabled fraud is not business as usual.
- Invest in biometrics as a core pillar of identity security.
- Adopt layered defenses that combine liveness, continuous monitoring, and multi-modal biometrics.
- Leverage benchmarks and external validation to ensure fairness, compliance, and trust.
- Build for agility, because adaptability is as important as strength.
Mario closed his talk with a reminder worth repeating: technology itself is neutral. It’s people who decide how it’s used. Fraudsters are using AI to exploit human weaknesses. Our job is to use advanced machine learning and biometrics to strengthen defenses, protect trust, and give businesses the confidence to grow in this new era. The fight has changed, but with the right tools and mindset, it’s a fight we can win.