Attackers are now skipping the camera.
Biometric authentication has become a cornerstone of modern digital identity. From unlocking smartphones to onboarding customers in financial services, biometrics offer a seamless way to verify identity without relying on passwords or knowledge-based credentials.
Increasingly, biometric technologies are also being used to establish proof of personhood: confirming that a real, live human is present during a digital interaction.
But as biometric adoption grows, so too does the sophistication of the attacks designed to exploit these systems.
Much of the industry’s focus has historically been on presentation attacks, which are attempts to fool biometric sensors using masks, photos, or digital screens. Technologies like liveness detection have made significant progress in defending against these threats.
However, not all attacks occur in front of the camera or sensor. A growing category of threats targets something deeper: the biometric data pipeline itself.
These are known as replay and injection attacks, and they represent a rapidly evolving challenge for organizations deploying biometric identity verification.
To build truly resilient biometric systems, organizations must think beyond the algorithm and secure the entire data pipeline.
The Hidden Threat: Replay and Injection Attacks
Replay and injection attacks exploit weaknesses in how biometric data moves from the sensor to the authentication system.
In a replay attack, an attacker intercepts legitimate biometric data, such as facial images or biometric templates, and reuses it later to impersonate a user.
In an injection attack, attackers bypass the sensor entirely and inject manipulated biometric data directly into the authentication pipeline.
The critical distinction is that these attacks do not require the attacker to physically interact with the biometric sensor. Instead, they manipulate the software environment surrounding it.
This makes replay and injection attacks particularly relevant in mobile and web-based identity verification, where biometric capture relies on consumer devices, third-party SDKs, and layered application environments.
Attackers may attempt to:
- Replace real camera output with pre-recorded images or deepfake video streams
- Inject fabricated biometric templates or manipulated image frames into the authentication pipeline
- Replay previously captured biometric sessions through intercepted network traffic
- Simulate camera feeds using virtual cameras, device emulators, or synthetic media pipelines
In each case, the goal is the same: convince the system that a legitimate biometric capture event has occurred when, in reality, no authentic capture ever took place.
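To see why an origin-blind system is exploitable, consider a deliberately naive sketch (Python, with a trivial stand-in for a real liveness model and made-up frame bytes): the verifier inspects only the content of the incoming bytes, so an injected or byte-for-byte replayed frame passes exactly as a live capture would.

```python
import hashlib

def naive_verify(frame_bytes: bytes) -> bool:
    """Deliberately naive: analyzes whatever bytes arrive.

    Nothing here proves the bytes came from a real camera capture,
    so a replayed or injected frame passes as easily as a live one.
    """
    looks_like_a_face = len(frame_bytes) > 0  # stand-in for a real liveness model
    return looks_like_a_face

# A frame captured legitimately yesterday...
captured = b"...jpeg bytes of a real, live capture..."
assert naive_verify(captured)

# ...can be replayed today, byte for byte, with no camera involved.
replayed = bytes(captured)
assert naive_verify(replayed)
print(hashlib.sha256(replayed).hexdigest() == hashlib.sha256(captured).hexdigest())  # True
```

The point of the sketch is what is missing: nothing binds the payload to a capture event, a device, or a session, which is precisely the gap replay and injection attacks exploit.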
Why Traditional Defenses Aren’t Enough
Many biometric security discussions focus on improving algorithms to better detect spoofing attempts. While this remains important, it addresses only part of the threat landscape.
Injection attacks exploit architectural weaknesses rather than algorithmic ones.
The effectiveness of liveness detection depends on one critical assumption: that the biometric data being analyzed actually originates from a legitimate camera capture event.
If attackers can inject manipulated images, video streams, or synthetic media directly into the authentication pipeline, the system may never interact with the real sensor at all.
In these cases, the question is no longer simply:
“Is this a real human?”
It becomes:
“Did this data actually come from a real camera?”
When systems implicitly trust incoming data without validating its origin, even the most advanced liveness detection algorithms may end up analyzing fraudulent inputs that appear legitimate.
The Role of Liveness Detection in Modern Identity Security
Liveness detection remains one of the most important tools in defending biometric systems.
Passive liveness technologies can analyze subtle cues in captured images, such as texture patterns, depth characteristics, reflections, and micro-movements, to determine whether the subject is a real person rather than a spoof artifact.
These technologies are highly effective against traditional presentation attacks and are becoming increasingly important as deepfake technologies continue to evolve.
However, the effectiveness of liveness detection still depends on one key assumption: that the biometric data originates from a real capture event.
If attackers are able to inject fabricated images or video streams directly into the authentication pipeline, they may bypass the entire capture process.
For this reason, modern biometric systems must combine liveness detection with infrastructure-level protections that validate the authenticity of the capture process itself.
Building a Multi-Layer Defense Against Injection Attacks
Defending against replay and injection attacks requires securing multiple layers of the biometric architecture—from device hardware to backend systems.
Several foundational security controls can help mitigate these risks.
- Secure Transmission and Encryption: Biometric data should always be encrypted in transit using modern cryptographic protocols. Secure transmission helps prevent attackers from intercepting and replaying biometric data between the capture device and authentication service.
- Device Attestation: Device attestation verifies that biometric data originates from a legitimate, uncompromised device. By validating the integrity of the operating system and application environment, organizations can reduce the risk of manipulated mobile environments generating fraudulent biometric inputs.
- Trusted Execution Environments (TEEs): Many modern devices include secure hardware areas known as trusted execution environments, which isolate sensitive operations from the rest of the device. Processing biometric capture and security functions within a TEE reduces the likelihood that malware or compromised applications can tamper with biometric data before it reaches the authentication system.
- Sensor-Origin Validation: Another critical control is validating that biometric data truly originates from an authorized camera sensor. Without this verification step, systems remain vulnerable to injected data streams that masquerade as legitimate camera output.
- Integrity Checks Across the Pipeline: Ultimately, biometric systems must assume that any incoming data could be hostile. Implementing integrity checks across multiple layers, from capture to transmission to backend processing, helps detect anomalies that may indicate tampering or replay attempts. This layered security model reflects the core zero trust principle of modern cybersecurity: Trust nothing. Verify everything.
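The layered controls above can be sketched as independent checks that each reject a different attack. This is an illustrative stand-alone example, not a description of any specific product; the `PIPELINE_KEY` and the time window are assumed values for the sketch.

```python
import hashlib
import hmac
import time

PIPELINE_KEY = b"hypothetical-pipeline-key"  # assumption: provisioned out of band
MAX_AGE = 10.0                               # seconds allowed between capture and verification
seen_digests: set[str] = set()               # backend replay cache

def envelope(frame: bytes, captured_at: float) -> dict:
    """What a trusted capture layer would emit: payload + timestamp + MAC."""
    msg = frame + str(captured_at).encode()
    return {"frame": frame, "ts": captured_at,
            "mac": hmac.new(PIPELINE_KEY, msg, hashlib.sha256).hexdigest()}

def accept(env: dict, now: float) -> bool:
    # Layer 1: integrity -- payload and timestamp untampered since capture.
    msg = env["frame"] + str(env["ts"]).encode()
    expected = hmac.new(PIPELINE_KEY, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(env["mac"], expected):
        return False
    # Layer 2: freshness -- stale captures are rejected even if authentic.
    if now - env["ts"] > MAX_AGE:
        return False
    # Layer 3: uniqueness -- an exact resubmission is treated as a replay.
    digest = hashlib.sha256(env["frame"] + env["mac"].encode()).hexdigest()
    if digest in seen_digests:
        return False
    seen_digests.add(digest)
    return True

t0 = time.time()
env = envelope(b"frame-bytes", t0)
print(accept(env, t0 + 1))                            # True: authentic, fresh, first submission
print(accept(env, t0 + 2))                            # False: byte-identical replay
print(accept(envelope(b"frame-bytes", t0 - 60), t0))  # False: stale capture
```

No single layer is sufficient on its own: the MAC alone would admit replays, and the freshness window alone would admit tampered payloads. Rejecting a submission requires failing any one check, which is the practical meaning of "trust nothing, verify everything."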
The Future of Biometric Security
As generative AI continues to make it easier to create convincing synthetic identities and deepfake media, attackers are increasingly shifting their focus from simple spoofing techniques to more sophisticated system-level attacks.
Replay and injection attacks are becoming more common as biometric authentication expands across mobile applications, digital onboarding flows, and remote identity verification systems.
For organizations deploying biometrics, this means security strategies must evolve accordingly.
Biometric accuracy and liveness detection remain essential, but they must be supported by secure architecture, device-level trust, and validation of the capture process itself.
Defending biometric systems in the era of deepfakes requires not just securing the human signal, but also verifying the authenticity of the capture event and the entire pathway that carries biometric data.
To help address this challenge, Aware recently completed an independent Injection Attack Detection (IAD) evaluation conducted by BixeLab in close alignment with the CEN/TS 18099:2024 technical specification.
The evaluation assessed Aware Intelligent Liveness across 300 injection attack attempts spanning 10 Injection Attack Instrument (IAI) species and multiple advanced attack methods, including virtual cameras, USB cameras, function hooking, and rooted device scenarios with tamper-detection bypasses. Across all scenarios tested, no successful injection bypasses were observed, while the solution maintained a 0% Bona Fide Presentation Classification Error Rate (BPCER) across 300 legitimate transactions.
As biometric threats continue to evolve, independent third-party testing has become increasingly important for organizations evaluating biometric security solutions. Real-world attack simulations help validate not only algorithmic performance, but also the resilience of the broader biometric architecture against sophisticated adversarial techniques. Evaluations aligned with emerging frameworks like CEN/TS 18099 provide valuable transparency into how solutions perform under realistic attack conditions and help organizations make more informed decisions about securing digital identity systems.
At Aware, we believe advancing biometric security requires more than innovation alone—it requires rigorous, independent validation against the threats organizations face today and those emerging tomorrow.