If Part One of “Facing Digital Challenges: How Biometrics Will Shape Secure Identity in 2026” focused on how fraud and identity teams operate inside organizations, Part Two went deeper into the foundations of trust itself.
In the second part of their conversation, Ajay Amlani, CEO of Aware, and Esther Scott, Head of Identity Product at Square (Block), explored what happens when digital interactions become easier to fake, agents become more autonomous, and traditional signals of trust start to fail. The result was a candid discussion about liveness, proof of personhood, biometric adoption, and why privacy and stewardship are now inseparable from security.
What follows is a recap of the key themes from Part Two of our webinar series.
Identity Lives in the Physical World and Biometrics Bridge It to Digital
Ajay opened Part Two by reframing how we think about “agentic” behavior online. While autonomous agents feel like a new phenomenon, he argued that almost every online transaction has always been agentic, and we’ve simply been delegating intent to relatively simple systems.
“The only things that aren’t agentic are when you’re face to face with someone,” Ajay said. “At a farmer’s market, you know who you’re buying from. Online, everything is mediated.”
As agents become smarter and more autonomous, the challenge intensifies: how do you reliably bind a digital action back to a real human being? Ajay shared a conversation that stuck with him, in which biometrics were described as the only reliable way to translate between physical identity (where identity truly lives) and digital transactions.
“Our identities aren’t documents,” he said. “They’re physical. The only way to translate that physical identity into the digital world is biometrics.”
In this framing, biometrics isn’t just a login convenience. It’s the connective tissue between real-world personhood and digital intent, especially in a future where humans increasingly delegate actions to software.
Consumer Acceptance Is a Moving Target and It’s Already Shifting
Ajay then turned to a question many teams wrestle with: why do biometrics feel natural in some industries but uncomfortable in others?
In ride sharing or stadium entry, attaching a face to an account feels intuitive. In financial services, historically, it hasn’t. But Esther cautioned against assuming those boundaries are fixed.
“Consumer sentiment on this stuff is a moving target,” she said. “A decade ago, if you were asking me for my photo everywhere, I’d be pretty weirded out.”
What’s changed is experience. As customers encounter fraud, account takeovers, and painful recovery processes, their tolerance for alternatives increases. New use cases, like face-based entry at venues or biometric payments, are also resetting expectations.
“When people have issues with account recovery, with fraud, with stolen identities,” Esther explained, “their interest and appetite for exploring alternatives markedly increases.”
Acceptance, in other words, isn’t about novelty. It’s about whether biometric experiences feel safer, easier, and more trustworthy than what they replace.
Deepfakes and the Risk of Reversing Digital Progress
One of the most striking moments in Part Two came when Esther described what happens if trust degrades too far.
“You can imagine trust degrading so much in online checks,” she said, “that I have to send you back into a branch or proxy this with the postal service.”
She wasn’t predicting a mass return to physical verification, but she used the image to illustrate the stakes. If digital assurance can’t keep up with deepfakes and synthetic actors, the fallback is higher-friction, less scalable processes, reversing decades of progress.
Ajay connected this to a broader irony in today’s defenses. CAPTCHAs grow more difficult for humans, while automation gets better at solving them. Bots can now mimic human behavior convincingly, and increasingly even visually.
“The only things that can solve some of these puzzles well are bots,” Ajay said. “And they can mimic human interaction really well.”
That’s why liveness and proof of humanness have become such critical capabilities. Online, there’s no physical context or human observer. The system has to answer one core question on its own: is there a real, live person on the other side of this interaction?
Liveness Isn’t About Convenience—It’s About Preventing Trust Collapse
Historically, biometric technology was deployed in controlled environments: airports, border checkpoints, and other physical locations with agents nearby. Today, the same checks happen remotely, at scale, and under constant attack.
“If the response to deepfakes isn’t strong enough,” Esther noted, “that’s how you get back to physical verification.”
Liveness changes that equation by allowing organizations to establish assurance digitally, without reverting to in-person checks. But it’s not just a technical challenge—it’s a design and trust challenge.
People may be willing to use biometrics to skip a line or recover an account, but only if they believe the system is safe, fair, and handled responsibly.
Stewardship, Privacy, and the One-Way Door of Trust
Both Ajay and Esther emphasized that biometrics raise the bar on stewardship. Unlike passwords, biometric identifiers can’t be reset.
“That’s mine,” Esther said. “I can’t get my face back.”
That reality changes everything. Any hint that biometric data isn’t protected, or is handled carelessly, can permanently damage trust.
“I think companies need to be proactively good stewards,” she said. “Once you have an issue, it’s too late.”
This is where standards, independent testing, and compliance regimes matter deeply. Ajay highlighted the role of rigorous evaluation around accuracy, bias, and liveness, especially as these technologies move from fraud prevention into critical infrastructure.
The takeaway: biometrics demand a higher level of responsibility, not just better algorithms.
Is Face ID Enough? Useful, Yes. Sufficient, No.
The session closed with a deceptively simple question: if customers already use Face ID on their phones, aren’t businesses already “doing biometrics”?
Esther’s answer was nuanced.
“For what?” she asked. “In that moment, maybe you feel like the right person is holding the phone.”
But organizations operate across far more complexity: account recovery, customer service, web and in-person experiences, and inevitable compromises. Device-based biometrics offer assurance, but they don’t give organizations ownership or orchestration across the full identity lifecycle.
“You’re taking an assurance from somebody else,” Esther explained. “That’s different from owning and orchestrating repeat use of biometrics within an organization.”
Her conclusion echoed a theme from Part One: nothing works in isolation.
“No, it’s not enough,” she said. “Nothing is enough. There have to be layers always.”
What Is Clear for 2026
Part Two reinforced a reality fraud and identity leaders already feel: trust can no longer be assumed. It has to be designed, measured, and defended—especially in a world of deepfakes, agents, and scalable automation.
As Ajay and Esther made clear, biometrics isn’t about adding friction or novelty. It’s about preserving confidence in digital interactions when traditional signals fail. In 2026, we can expect that the organizations that succeed won’t be the ones that simply “use biometrics,” but the ones that deploy the technology thoughtfully, grounded in liveness, layered defenses, privacy stewardship, and independent validation.
Because in the next phase of digital identity, trust isn’t a feature. It’s the foundation.
Looking for a recap of Part One? Check out this article for the key takeaways.