Guide to safe and ethical use of facial recognition tools launched

New guidance to ensure that Facial Recognition Technology (FRT) acts as a force for good in society has been published by BSI. It aims to help organizations navigate the ethical challenges associated with the technology and, in doing so, build trust in its use.

Use of artificial intelligence (AI)-powered facial recognition tools is increasingly common, including for security purposes such as at King Charles’ Coronation or at major sporting events like football matches, and to curb shoplifting. FRT maps an individual’s physical features in an image to form a face template, which can be compared against other images stored within a database either to verify a high level of likeness or to identify an individual’s presence at a specific location at a given time.
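As a purely illustrative aside (not drawn from BSI’s guidance), the sketch below shows how these two matching modes are commonly distinguished in practice, assuming face templates are numeric feature vectors and using a hypothetical similarity threshold: one-to-one verification compares a new image against a single enrolled template, while one-to-many identification searches a database for the best match.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face templates (feature vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.8) -> bool:
    """1:1 verification: is the probe a high-likeness match to one enrolled template?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe: np.ndarray, database: dict[str, np.ndarray], threshold: float = 0.8):
    """1:N identification: return the best-matching identity in the database, if any."""
    scores = {name: cosine_similarity(probe, tmpl) for name, tmpl in database.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])

# Toy example: random vectors stand in for real face embeddings.
rng = np.random.default_rng(0)
alice = rng.normal(size=128)
db = {"alice": alice, "bob": rng.normal(size=128)}
probe = alice + rng.normal(scale=0.05, size=128)   # a new image of the same person
print(verify(probe, alice))                        # True: high likeness
print(identify(probe, db))                         # ("alice", score)
```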

BSI’s recent research showed that 40% of people globally expect to be using biometric identification in airports by 2030. Its proliferation has prompted concerns about safe and ethical use, including around error rates linked to racial or gender differences, as well as high-profile legal cases, among them one involving software used by Uber. A 2022 audit assessed police use of facial recognition, finding that deployment regularly failed to meet minimum ethical and legal standards.

The new standard has been developed by BSI, in its role as the UK National Standards Body, to assuage concerns by helping organizations navigate the tools and build public trust. It follows BSI’s Trust in AI poll showing that 77% of people believed trust in AI was key for its use in surveillance.

Designed for both public and private organizations using and/or monitoring Video Surveillance Systems (VSS) and biometric facial technologies, the code of practice applies across the whole supply chain, from an initial assessment of whether FRT is needed through to its procurement, installation and appropriate use. Facial recognition technology – Ethical use and deployment in video surveillance-based systems – Code of practice (BS 9347:2024) sets out six key overarching principles of ‘trustworthiness’: governance and accountability; human agency and oversight; privacy and data governance; technical robustness and safety; transparency and explainability; and diversity, non-discrimination and fairness. These are backed by a summary of the policies that those across the supply chain are required to put in place and maintain.

With the industry expected to be worth $13.4 billion globally by 2028, the standard sets out the importance of regularly reviewing the ethics of AI and its application in FRT. It embeds best practice and gives guidance on the appropriate guardrails for safe and unbiased use of FRT by defining two scenarios: identification and verification. For the former, such as identifying individuals in crowds at events, the standard requires that FRT be used in conjunction with human intervention or human-in-the-loop measures to ensure accurate identification before any action is taken.
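To illustrate what a human-in-the-loop gate might look like in software (a minimal sketch under assumed names and thresholds, not a prescription from the standard), candidate matches from a one-to-many search could be queued for an operator, with action taken only on explicit human confirmation:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    identity: str
    score: float      # similarity score from the 1:N search

def review_queue(candidates: list[Candidate], threshold: float = 0.8) -> list[Candidate]:
    """Filter algorithmic matches into a queue for a human operator; nothing is actioned here."""
    return [c for c in candidates if c.score >= threshold]

def act_on_match(candidate: Candidate, operator_confirms: bool) -> str:
    """Only a confirmed human decision leads to action; otherwise the match is discarded."""
    if operator_confirms:
        return f"Action authorised for {candidate.identity} (score {candidate.score:.2f})"
    return "Match rejected by operator; no action taken"

queue = review_queue([Candidate("person_042", 0.91), Candidate("person_107", 0.62)])
for c in queue:
    print(act_on_match(c, operator_confirms=False))  # a human decides, not the system
```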

In verification scenarios where the technology can operate autonomously, such as building access control, authenticating a payment transaction or unlocking a phone, the standard puts guardrails around the technology’s learning by requiring that training data include sets drawn from diverse demographic pools and captured across a variety of lighting levels and camera angles, to reduce inaccuracies and mitigate the risk of bias arising from false positives.
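As one hedged example of how such a guardrail might be checked in practice (the metadata fields, category names and the 10% floor below are illustrative assumptions, not requirements of BS 9347:2024), a simple audit could count how training samples are distributed across demographic groups, lighting conditions and camera angles and flag under-represented categories:

```python
from collections import Counter

# Hypothetical metadata records for a training set; field names are illustrative only.
training_records = [
    {"demographic_group": "group_a", "lighting": "daylight", "camera_angle": "frontal"},
    {"demographic_group": "group_b", "lighting": "low_light", "camera_angle": "oblique"},
    # ... many more records in a real training set ...
]

def coverage_report(records, field: str) -> Counter:
    """Count how many training samples fall into each category of a metadata field."""
    return Counter(r[field] for r in records)

def flag_underrepresented(records, field: str, min_share: float = 0.1) -> list[str]:
    """Flag categories whose share of the training set falls below a chosen floor."""
    counts = coverage_report(records, field)
    total = sum(counts.values())
    return [cat for cat, n in counts.items() if n / total < min_share]

for field in ("demographic_group", "lighting", "camera_angle"):
    print(field, coverage_report(training_records, field),
          flag_underrepresented(training_records, field))
```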

Scott Steedman, Director-General, Standards at BSI, said: “AI-enabled facial recognition tools have the potential to be a driving force for good and benefit society through their ability to detect and monitor potential security threats.

“This code of practice is designed to help organizations navigate the ethical challenges associated with the use of FRT systems and build trust in its use as a result. It aims to embed best practice and give guidance on the appropriate guardrails organizations can put in place to safeguard civil rights and eliminate system bias and discrimination.”

Dave Wilkinson, Director of Technical Services at the British Security Industry Association, said: “The use of FRT has not come without its own challenges, whether that has been down to the accuracy of the technology, or how and where it is deployed. Many relevant questions have been asked by privacy groups, industry stakeholders and other interested parties on the appropriate and proportionate use of such technology. This code of practice aims to instil trustworthiness in the use of FRT by setting out key principles covering the whole process, from assessing the need to use it to ensuring its continued operation remains fit for purpose and justified.

“Aligned to the understanding of developing regulation both here in the UK and the wider international regulatory landscape, the code of practice sets out to build trust with those that develop, use, and are subject to its use.”
