The use of biometrics in law enforcement dates as far back as the late 1800s. But early matching methods such as the Henry System were time-consuming and often inconclusive; it wasn’t until the 1970s that computers began helping law enforcement agencies automate fingerprint matching to achieve greater efficiency and accuracy.

In the 1980s, the FBI created the Automated Fingerprint Identification System (AFIS). This involved capturing traditional ink-on-card fingerprints, or tenprints, so they could be stored electronically. The FBI indexed these images so that fingerprints lifted from a crime scene could be compared to a large database of existing records in search of a match.

Modern one-to-many biometric search was born.

Today, biometric identification systems in law enforcement rely on advanced matching algorithms that can compare latent biometric samples to digital templates in a gallery. The amount of fingerprint data has also increased substantially. In 1999, the FBI launched the Integrated Automated Fingerprint Identification System (IAFIS), which allowed law enforcement agencies from all over the country and the world to store, exchange, and compare fingerprint data in a digital format.

The FBI and other agencies have also incorporated additional biometric identification modalities such as face and iris recognition. This multimodal search system is called ABIS (Automated Biometric Identification System). Like AFIS and IAFIS before it, biometric algorithms underpin ABIS’s functionality and performance.

However, the difference between the ABIS of today and the AFIS of yesterday is the quantity of templates that need to be searched. Tens of millions of biometric records are stored within these systems. Getting an accurate hit during a search is more complicated than ever, especially in situations where the probe sample used to perform the search is of poor quality.

As biometric databases continue to grow, law enforcement agencies are starting to utilize biometric fusion to limit the rate of false matches and non-matches. This may refer to the use of more than one modality in a search or, alternatively, using multiple algorithms within the same modality.

Both may have a place in criminal ABIS, but for reasons we’ll address in this post, using multiple algorithms may represent the more immediate opportunity in law enforcement.

Before the search: priming samples for use in matching

Algorithms leverage a broad assortment of techniques to perform a one-to-many biometric search.

The accuracy of these algorithms is greatly influenced by biometric enrollment—the capture of high-quality images that make suitable biometric records or templates. During this process, software validates the image’s conformance to quality standards and extracts the specific features that will be compared during a search.

In facial recognition, for example, artificial intelligence algorithms are trained to recognize when different images are of the same individual.

The algorithms used in biometric enrollment software can also identify errors such as someone presenting the wrong hand (representing right as left or vice versa) while enrolling tenprints on a live-scan system.

Likewise, real-time analysis can tell an operator capturing a mug shot if the image is sufficient as a template, and automatically flag faults that could affect matching performance.

For iris recognition, specialized cameras that use near-infrared (NIR) sensors are used to capture the detailed features of the iris. Visible light (VIS) can pollute the sample and compromise matching performance, particularly in the case of dark irises. As with a fingerprint or face, the iris image will be checked for indicators of quality. It will then be compressed and saved as a biometric template.

The biometric enrollment process is how analog data from a fingerprint, face, iris, or even voice is transformed into digital information that can undergo computer analysis. If sample quality suffers during enrollment, it may compromise the integrity of the template, and the performance of a search.

In law enforcement applications, biometric enrollment usually takes place during a booking. Once captured, raw images are archived so they can be used to create new templates in case template generation and matching algorithms are updated in the future.

In some cases, such as the use of biometrics sourced from latent fingerprints or surveillance video, law enforcement professionals cannot control the image quality of the sample. In these cases, forensic analysis tools aid the examiner in their search.

Biometric searching and matching

When it comes time for the actual search, algorithms might first classify an image by its quality level, then extract its useful features and search for them among the multitude of templates in the database.

The function of those algorithms varies according to the modality. For example, algorithms attempting to find a match in a database for a fingerprint sample might compare minutiae like ridge endings and bifurcations.

Biometric search in law enforcement is intended to compare a sample such as a latent fingerprint against existing templates in a database. Likewise, facial images captured through video surveillance can be searched against a gallery of mugshots.

The goal is to identify the individual bound to the fingerprint or face in the system. For every record in a gallery, a match score assesses the likelihood that the probe and gallery samples come from the same individual.

This is different from biometric authentication, which is a one-to-one comparison between a sample and a trusted template. Biometric authentication has far fewer variables influencing performance since it only ever compares one sample to one template at a time.
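The distinction between one-to-many search and one-to-one authentication can be sketched in a few lines of Python. Cosine similarity over feature vectors stands in for a real matcher's scoring function; the function names, vector representation, and threshold are all illustrative assumptions, not any particular vendor's API.

```python
import numpy as np

def one_to_many_search(probe, gallery, top_k=5):
    """Identification: rank every gallery template by similarity to a probe.

    probe: 1-D feature vector; gallery: dict mapping record id -> vector.
    Cosine similarity is a stand-in for a real matcher's score function.
    """
    scores = {}
    for record_id, template in gallery.items():
        sim = np.dot(probe, template) / (
            np.linalg.norm(probe) * np.linalg.norm(template))
        scores[record_id] = sim
    # Highest-scoring candidates first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

def one_to_one_verify(probe, trusted, threshold=0.9):
    """Authentication: a single comparison against one trusted template."""
    sim = np.dot(probe, trusted) / (
        np.linalg.norm(probe) * np.linalg.norm(trusted))
    return sim >= threshold
```

Note that the identification function must score every record in the gallery, which is why gallery size dominates search cost and error exposure, while verification performs exactly one comparison.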

More data means a statistically greater margin for error

In theory, access to more biometric records means a greater pool of individuals to compare a sample against. IAFIS was created for this purpose. State, local, and tribal law enforcement agencies can submit fingerprints collected during a booking or investigation so that other law enforcement agencies might be able to match a print collected from a booking or crime scene to that existing record at a later time.

Government agencies both within the U.S. and abroad have also launched biometric data-sharing efforts. Interpol manages an international database containing nearly 200,000 fingerprint records and almost 11,000 latent fingerprints. Authorized users in member countries can submit fingerprint data to search for matching records.

A biometric search is limited to individuals who have been previously enrolled into a system. The ability to search more systems means the ability to search more individuals.

However, it also means more potential false-match errors. Amassing more records means that an algorithmic error or shortcoming could lead to a statistically greater number of false matches or non-matches.

For example, say that a specific feature occurs in just 1 in 1,000 samples. In a biometric database of 10 million records, an algorithm that incorrectly classifies samples with that feature will result in 10,000 records that are compared incorrectly. There can be hundreds of such features found in any given database. The reality is that no single algorithm addresses them all.
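To make the arithmetic in that example concrete, here is the back-of-the-envelope calculation; the database size and feature frequency are the hypothetical figures from the text, not measured values.

```python
# Back-of-the-envelope check of the figures above.
database_size = 10_000_000   # records in the gallery (hypothetical)
feature_rate = 1 / 1_000     # fraction of samples exhibiting the rare feature

# Records a weak algorithm would compare incorrectly
misclassified = int(database_size * feature_rate)
print(misclassified)  # prints 10000
```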

Biometric fusion: multiple modalities versus multiple algorithms

Using a combination of multiple modalities (fingerprint, face, iris) significantly improves matching performance. Multimodal biometrics increase the amount of data that is analyzed, which helps biometric matching engines make more accurate comparisons between a live sample and enrolled templates. Simply put, the probability of a false match or non-match when both a face and a fingerprint are used as search criteria is much lower than either by itself.
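The probability claim above can be sketched numerically. Assuming, as a simplification, that each modality's false-match errors are independent and occur at an illustrative 1% rate, score-level fusion and the combined error rate look like this; the equal weights are placeholders a real system would calibrate against test data.

```python
def fuse_scores(face_score, finger_score, w_face=0.5, w_finger=0.5):
    """Score-level fusion: a weighted sum of per-modality match scores.

    Equal weights are illustrative; real deployments calibrate them
    against evaluation data for each matcher.
    """
    return w_face * face_score + w_finger * finger_score

# If each modality alone false-matches on 1% of comparisons and the
# errors are independent (a simplifying assumption), requiring both
# modalities to agree drops the combined false-match rate to roughly
# the product of the two rates.
p_face, p_finger = 0.01, 0.01
p_both = p_face * p_finger   # about 0.0001, i.e. 1 in 10,000
```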

But the benefits of multiple modalities often come at the cost of increased biometric capture time and system complexity. It can take a long time to realize the advantages of multimodal search when introducing a new modality into a system where legacy enrollments are unimodal.

In situations where multimodal search is impractical, the use of multiple, complementary algorithms may provide a path to improving unimodal search performance.

Effective classification and comparison of biometric images and signals in today’s large-scale, real-world environments requires designing algorithms that can handle features and conditions beyond the norm, such as corner cases and other unusual inputs that involve “noise” (any variable that may influence an algorithm’s ability to accurately contextualize a sample).

And because no single algorithm can do it all, the next best solution is to look at ways to combine different approaches to leverage the strengths and minimize the weaknesses of multiple algorithms; think of it as using more specialized algorithms to do what each can do particularly well, like players on a team.

Algorithms from different vendors have evolved over the last 20-plus years, often drawing on public-domain research. But they’re designed in isolation, targeting different applications and, most importantly, trained and tested on different datasets.

So how can algorithms from different suppliers be used together in the same system? At a minimum, there must be architecture in place that’s open and flexible. This means:

  • enrollment hardware and software that is not tied to any particular matching algorithm or supplier,
  • middleware that can integrate matchers from different suppliers, and
  • dynamic matching workflow logic that can tune a biometric search for different situations.

One example of the utility of such a system is its use in optimizing a search to accommodate poor-quality probe samples. Some algorithms just perform better than others with low-quality data, so it makes sense to tune a search for this situation, even if it lengthens the duration of the search. A poor-quality probe is much more likely to result in a false non-match. Most large-scale fingerprint matchers use a tiered funnel approach, first applying fast algorithms that narrow the results and then slower algorithms that perform careful analysis of the more difficult classifications. A flexible system can reduce the penetration of the fast matcher, and even divert the sample to a search algorithm optimized for low quality.
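The tiered funnel and the quality-based tuning described above can be sketched as follows. Here `fast_match` and `slow_match` are hypothetical scoring callables standing in for real vendor matchers, and the penetration figures are illustrative only.

```python
def tiered_search(probe, gallery, fast_match, slow_match,
                  penetration=0.05, low_quality=False):
    """Two-stage funnel: a fast matcher narrows the gallery, then a
    slower, more careful matcher rescores the surviving candidates.

    penetration: fraction of the gallery the fast stage passes through.
    low_quality: when True, widen the funnel so the slow matcher sees
    more candidates -- a longer search, but fewer false non-matches
    for poor-quality probes.
    """
    if low_quality:
        penetration = min(1.0, penetration * 4)  # widen for poor probes
    k = max(1, int(len(gallery) * penetration))
    # Stage 1: cheap scoring across the whole gallery
    shortlist = sorted(gallery, key=lambda t: fast_match(probe, t),
                       reverse=True)[:k]
    # Stage 2: careful rescoring of the shortlist only
    return sorted(shortlist, key=lambda t: slow_match(probe, t),
                  reverse=True)
```

A flexible workflow layer could route each probe through this function with different `penetration` settings, or substitute a matcher tuned for low-quality samples at stage 2, without changing the enrollment side at all.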

Another example applies biometric fusion to join two different vendor solutions to achieve a more reliable result. Testing would be performed to see how the two work together—how often they agree and disagree, for example. Based on the results, the system can be optimized to get higher confidence on match and non-match results.
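One simple way to combine two vendors' matchers is to map each matcher's raw scores onto a common scale and then require the fused score to clear a threshold before declaring a match. The min-max normalization and plain averaging below are placeholder choices; the agreement testing described above would determine the actual normalization and weighting in practice.

```python
def minmax_normalize(scores):
    """Map one vendor's raw scores onto [0, 1] so they can be compared
    with another vendor's scores on a common scale."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def fused_decision(score_a, score_b, threshold=0.6):
    """Declare a match only when the fused evidence clears the threshold.

    score_a and score_b are normalized scores from two different
    vendors' matchers for the same probe/candidate pair; averaging
    means neither matcher alone can force a hit, which raises
    confidence in both match and non-match results.
    """
    fused = (score_a + score_b) / 2
    return fused >= threshold, fused
```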

AwareABIS™ is well-suited for this type of biometric fusion: its Biometric Services Platform (BioSP™) is an algorithm-agnostic middleware platform, and Astra™ is a highly scalable biometric matching platform that can deploy Aware’s Nexa™ biometric algorithms as well as algorithms from other suppliers.

Law enforcement is an especially compelling use case for biometric fusion, mainly because of the scale of one-to-many searches taking place in criminal databases every day.

As law enforcement agencies’ biometric systems evolve and databases grow, expect to see more intramodal fusion of algorithms—even from different vendors—in biometric search.

Want to learn more?

Schedule a demo to get started today