
We often hear about the benefits and applications of “multimodal” biometrics: the use of different biometric modalities together (e.g., fingerprint and face) to increase the performance and enrollment flexibility of biometric search and match solutions. There are cases where going multimodal is an ideal solution, but others where it’s not an attractive option. The benefits can come at the cost of increased biometric capture time and system complexity, and it can take a long time to realize those benefits when introducing a new modality into a system whose legacy enrollments are unimodal.

A more recent phenomenon in biometrics is the growing use of multiple algorithms within the same modality to conduct a search. That is, using a variety of algorithms, such as those from different vendors, in the same search, leveraging the strengths of each to compensate for the weaknesses of others. As biometric databases grow, it becomes increasingly difficult to search them without false match and false non-match errors. People experienced with biometric search know that it’s all about the data, and “beauty is in the eye of the beholder”, which in this context means that where some biometric search algorithms see a match, others see a non-match; every algorithm has its strengths and weaknesses. This is particularly true for facial biometrics, where the “noise” and variability in image quality are high compared to fingerprint and iris. The more different the algorithms, the more complementary they tend to be.

As database sizes climb into the tens of millions, the statistical nature of biometrics comes into play. At a database size of 10 million, a system with a false match rate of just 10⁻⁵ will yield on the order of 100 match candidates above the threshold. A lights-out result of a single candidate might be out of reach, but getting this number down is critical to operating a useful biometric search platform. Adding a modality, however, is often not an option for any number of reasons, particularly when the system makes use of legacy data, such as an existing fingerprint-based database.
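To make the arithmetic concrete, here is a minimal sketch in Python. The gallery size and false match rates are illustrative, and the estimate simply assumes each of the N comparisons carries an independent chance of a false match.

    # Back-of-the-envelope estimate of false match candidates from a 1:N search,
    # assuming independent comparisons. All numbers are illustrative.
    def expected_false_matches(gallery_size: int, false_match_rate: float) -> float:
        """Expected number of non-mated candidates scoring above the threshold."""
        return gallery_size * false_match_rate

    print(expected_false_matches(10_000_000, 1e-5))  # 100.0 candidates to adjudicate
    print(expected_false_matches(10_000_000, 1e-7))  # 1.0, much closer to "lights-out"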

Biometric matching algorithms make use of a broad assortment of techniques to compare samples. Any single biometric search system applies many different algorithms to the complex and difficult problem of using a machine to do what our brains, thanks to a few million years of evolution, do with ease: process imagery. Effective classification and comparison of biometric images and signals “in the wild”, in large-scale, real-world environments, relies on designing algorithms that can handle features and conditions beyond the norm: corner cases and unusual samples. For example, in our biometric database of 10 million, an algorithm that misclassifies samples with a feature occurring in just 1 in 1,000 samples results in 10,000 records that are compared incorrectly. There can be hundreds of such features found in any given database. The reality is that no single algorithm addresses them all.
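The same arithmetic shows why corner cases matter at scale. The features and prevalence figures in the sketch below are hypothetical, chosen only to illustrate how many records a “rare” condition still touches in a 10-million-record gallery.

    # Hypothetical corner-case features and prevalence rates, for illustration only.
    gallery_size = 10_000_000
    corner_case_prevalence = {
        "worn or scarred ridge detail": 1 / 1_000,
        "extreme off-angle face pose": 1 / 500,
        "heavy sensor noise": 1 / 2_000,
    }

    for feature, rate in corner_case_prevalence.items():
        affected = int(gallery_size * rate)
        print(f"{feature}: ~{affected:,} records a single algorithm may mishandle")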

This is why biometric scientists are looking at ways to combine different approaches, leveraging the strengths and minimizing the weaknesses of each: using specialized algorithms to do what each does particularly well, like players on a team. Algorithms from different vendors have evolved over twenty years or more, often drawing on public-domain research, but they were designed in isolation, target different applications and, most importantly, were trained and tested on different datasets.

How can algorithms from different suppliers be used together in the same system? At a minimum, there must be an architecture in place that is open and flexible. This means:

  • enrollment hardware and software that is not tied to any particular matching algorithm or supplier,
  • middleware that can integrate matchers from different suppliers, and
  • dynamic matching workflow logic that can tune a biometric search for different situations (a rough sketch of what such an architecture can look like follows this list).
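As a rough illustration, the Python sketch below shows what an algorithm-agnostic interface might look like at its simplest. The class and method names are hypothetical and greatly simplified; real middleware exposes far richer capabilities.

    # A minimal, hypothetical sketch of a vendor-neutral matcher interface and a
    # workflow that can call matchers from different suppliers in one search.
    from abc import ABC, abstractmethod
    from typing import Dict, List, Tuple

    class Matcher(ABC):
        """Vendor-neutral wrapper around one biometric matching algorithm."""

        name: str

        @abstractmethod
        def extract(self, sample: bytes) -> bytes:
            """Convert a captured sample into this vendor's template format."""

        @abstractmethod
        def search(self, probe_template: bytes, top_n: int) -> List[Tuple[str, float]]:
            """Return (record_id, score) candidates from this vendor's gallery."""

    class SearchWorkflow:
        """Workflow logic that can mix matchers from different suppliers."""

        def __init__(self, matchers: List[Matcher]):
            self.matchers = matchers

        def run(self, sample: bytes, top_n: int = 20) -> Dict[str, List[Tuple[str, float]]]:
            # Each algorithm uses its own template format and score scale; fusion
            # or adjudication logic downstream decides how to combine the results.
            results: Dict[str, List[Tuple[str, float]]] = {}
            for matcher in self.matchers:
                template = matcher.extract(sample)
                results[matcher.name] = matcher.search(template, top_n)
            return results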

One example of the utility of such a system is its use in optimizing a search to accommodate poor-quality probe samples. Some algorithms simply perform better than others with low-quality data, so it makes sense to tune a search for this situation, even if it lengthens the duration of the search; a poor-quality probe is much more likely to result in a false non-match. Most large-scale fingerprint matchers use a tiered funnel approach, first applying fast algorithms that narrow the results and then slower algorithms that perform more careful analysis of the more difficult classifications. A flexible system can reduce the penetration rate of the fast matcher, or even divert the sample to a search algorithm optimized for low-quality samples.
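A minimal sketch of that routing logic follows. The quality threshold, penetration rate, and matcher objects are placeholders, not settings from any real deployment.

    # Hypothetical tiered-funnel search with a branch for poor-quality probes.
    def search_with_funnel(probe, gallery_ids, fast_matcher, slow_matcher,
                           low_quality_matcher, quality_threshold=0.4,
                           penetration_rate=0.02):
        if probe.quality_score < quality_threshold:
            # Divert poor-quality probes to an algorithm tuned for them, accepting
            # a slower, more exhaustive search to avoid a false non-match.
            return low_quality_matcher.search(probe, gallery_ids)

        # Stage 1: a fast matcher narrows the gallery to a small fraction of the
        # most plausible candidates (the penetration rate).
        shortlist_size = max(1, int(len(gallery_ids) * penetration_rate))
        shortlist = fast_matcher.rank(probe, gallery_ids)[:shortlist_size]

        # Stage 2: a slower, more careful matcher re-scores only the shortlist.
        return slow_matcher.search(probe, shortlist)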

Another example applies biometric fusion to combine two different vendors’ solutions to achieve a more reliable result. Testing would be performed to see how the two work together: how often they agree and disagree, for example. Based on the results, the system can be optimized to deliver higher confidence in both match and non-match results.
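One simple, commonly described form of this is score-level fusion: normalize each vendor’s scores onto a comparable scale, then combine them. The sketch below is illustrative; the score ranges and weights are placeholders that, in practice, would come from exactly the kind of joint testing described above.

    # Hypothetical score-level fusion of two vendors' match scores.
    def normalize(score, lo, hi):
        """Map a raw vendor score onto [0, 1] using observed score ranges."""
        return max(0.0, min(1.0, (score - lo) / (hi - lo)))

    def fused_score(vendor_a_score, vendor_b_score,
                    a_range=(0.0, 1000.0), b_range=(0.0, 100.0), weight_a=0.5):
        a = normalize(vendor_a_score, *a_range)
        b = normalize(vendor_b_score, *b_range)
        return weight_a * a + (1.0 - weight_a) * b

    # Two scores that individually sit near each vendor's decision threshold can,
    # combined, land clearly above or below the fused threshold.
    print(fused_score(720.0, 81.0))  # 0.765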

Aware’s Biometric Services Platform (BioSP™) is an example of an algorithm-agnostic middleware platform, and Astra™ is a highly scalable “ABIS” biometric matching platform that can deploy not only Aware’s Nexa™ biometric algorithms but also those from other suppliers. As biometric systems evolve and databases grow, you can expect to see more intramodal fusion of algorithms, even from different vendors, in a biometric search.

Want to learn more?

Schedule a demo to get started today