Getting facial recognition right
By Dr. Mohamed Lazzouni
This article first appeared on GCN.
To realize the benefits of facial recognition while maintaining ethical integrity, agencies must ensure systems’ accuracy, security and resistance to bias.
Privacy concerns around the use of biometrics, particularly facial recognition, often stem from ambiguous consent and transparency issues.
We believe organizations must have facial recognition policies and procedures that are clear and consent-based. They should have easy opt-in and opt-out options and be transparent about what information is being collected and how it is being used. This enables users to “own their identities” and helps them feel secure in how organizations are using their data.
We also believe, however, there is one exception where the consent-based approach should not be binding — and that is for specific homeland security, law enforcement and public safety use cases. U.S. federal agencies and law enforcement operations have had some great successes with facial recognition, and we believe few would question the integrity or permissibility of these.
Consider the example of New York City detectives using facial recognition to identify a man who, in 2019, left a pair of potential bombs in the Fulton Street subway station. With facial recognition it took only one hour to identify the suspect — a process that previously would have taken several hours or days, and might have come too late.
Still, many privacy and human rights advocates believe facial recognition should be banned altogether as a violation of a person’s right to privacy. After all, a person’s “faceprint” is a form of personal data.
Is there a way to realize the benefits of facial recognition while maintaining ethical integrity? The answer is yes — by putting the proper guardrails in place:
- Never rely on an algorithm to be the ultimate arbiter. Final decisions on whether a person captured on video is the same person being presented should always be made by a human. Facial recognition can be used to narrow a list of possible suspects, but it should never be the be-all and end-all; rather, it is just one piece of the puzzle. Keeping human beings in the loop helps catch inaccurate results and compensate for system limitations. Similarly, facial recognition capabilities can supplement eyewitness identifications in criminal investigations, which are notoriously prone to error. As the Fulton Street subway incident demonstrated, the best results tend to happen when humans and machines work in tandem.
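This division of labor — the algorithm shortlists, the human decides — can be illustrated with a minimal sketch. The function names, the toy three-dimensional "embeddings" standing in for real face templates, and the threshold value are all hypothetical, not any vendor's actual API; real systems use high-dimensional templates and calibrated score thresholds.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def shortlist_candidates(probe, gallery, threshold=0.8, top_k=3):
    """Return the top-k gallery identities whose similarity to the probe
    image exceeds the threshold. The output is a candidate shortlist for
    a human examiner to review -- never a final identification."""
    scored = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    scored = [(name, score) for name, score in scored if score >= threshold]
    scored.sort(key=lambda item: item[1], reverse=True)
    return scored[:top_k]

# Toy gallery: identity labels mapped to illustrative embeddings.
gallery = {
    "record_A": [1.0, 0.0, 0.0],
    "record_B": [0.0, 1.0, 0.0],
    "record_C": [0.9, 0.1, 0.0],
}
probe = [1.0, 0.05, 0.0]
print(shortlist_candidates(probe, gallery, top_k=2))
```

The point of the sketch is the last step that is *not* in the code: the returned shortlist goes to a trained examiner, who makes the actual determination.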
- Ensure the facial recognition system has a reputation for accuracy. The National Institute of Standards and Technology regularly evaluates facial recognition algorithms for accuracy through its Face Recognition Vendor Test, and the best performers can exceed 99%. Selecting an algorithm that scores highly in NIST's evaluations is a straightforward way to identify a trustworthy system.
- Maintain the utmost security. Government facial recognition use cases extend beyond homeland security, law enforcement and public safety to granting government workers access to buildings, remote systems and more. Given that agency employees often have access to highly sensitive information, facial recognition systems can deliver speed and convenience combined with strong security. Additionally, agencies must ensure the data storage solutions supporting their facial recognition systems leverage the most accurate, secure and privacy-protective technologies available. Organizations should also follow best practices from the private sector, such as retaining data for only the minimum amount of time needed. This helps prevent “function creep,” the gradual widening of a technology’s use beyond its original intended purpose.
- Guard against discrimination. Facial recognition algorithms can inadvertently support discrimination if they are not “trained” on sufficiently diverse populations, including people of different genders, ethnicities, racial backgrounds, national origins and more. This is extremely important: if an algorithm is trained on only one type of facial morphology (for example, Caucasian males), it may be prone to over-generalizing the features of other morphologies and failing to accurately distinguish between two or more individuals.
Facial recognition can play a huge role in homeland security, law enforcement and public safety, as well as physical and information security. Despite its privacy and accuracy challenges, we believe the solution is not to ban facial recognition, but to put in place the necessary safeguards. Evaluating and updating these guardrails will require constant work as technology continues to advance. And it will require an ongoing, transparent conversation with independent human rights experts and civil society organizations to ensure privacy and consent obligations are always upheld and maintained.
Dr. Mohamed Lazzouni has been Aware’s Chief Technology Officer since November 2019, and currently serves as a board member of Epochal Technologies, Inc., a provider of demographic data solutions. Prior to joining Aware, Dr. Lazzouni served as President and CEO of Epochal Technologies, Inc. from August 2018 to November 2019; President of the Anti-Counterfeiting Business and Chief Operating Officer at Authentix, Inc., a provider of authentication solutions, from 2013 to 2018; Chief Technology Officer and Senior Vice President of MorphoTrust USA, LLC, a provider of identity assurance solutions, from 2006 to 2013; and as Chief Technology Officer and Senior Vice President of Viisage Technology, Inc., a provider of identity verification technology, from 2001 to 2006. Dr. Lazzouni received his Ph.D. in Physics from the University of Oxford, his Master’s degree in Physics from the University of London, and his Bachelor of Science degree in Physics from Badji Mokhtar University, Annaba (UBMA).