The UK’s policing minister has pushed for facial recognition to be rolled out across police forces nationally in a move that would ignore critics who claim the technology is inaccurate and some of its applications illegal.
According to a report to be submitted to parliament on Tuesday, the Home Office briefed the biometrics and surveillance camera commissioner during closed-door meetings on Chris Philp’s desire to expand the use of the controversial systems by law enforcement.
Any such move by Philp, who was appointed minister of state for crime, policing and fire by prime minister Rishi Sunak last October, is likely to be divisive.
The use of facial recognition has faced widespread criticism and scrutiny over its impact on privacy and human rights. The European Union is moving to ban the technology in public spaces through its upcoming Artificial Intelligence Act.
The report, co-authored by academics Pete Fussey and William Webster on behalf of the biometrics and surveillance camera commissioner, reviews proposals in the UK’s new data protection bill and argues that the proposed law will weaken oversight of surveillance.
The report said that Philp had “expressed his desire to embed facial recognition technology in policing and is considering what more the government can do to support the police on this”. This expansion would probably explore the “integration of this tech with police body-worn video”, it added.
Facial recognition software has been used by the South Wales Police and London’s Metropolitan Police over the past five years across multiple trials during events such as the Notting Hill Carnival and, more recently, during the coronation.
The FT previously revealed that the private owners of King’s Cross in London were using the technology on the general public, scanning for known troublemakers and sharing the data with the Metropolitan Police. They have since stopped using the technology.
Facial recognition has been criticised by privacy campaigners and independent researchers who argue it is inaccurate and biased, particularly against darker-skinned people.
In 2020, appeal court judges ruled that previous trials by South Wales Police of facial recognition software were unlawful, although the force continues to use the technology. At the time of the ruling, South Wales Police said it would give the court findings “serious attention” and that its policies had evolved since the trials.
Last month, the Metropolitan Police announced that it had conducted a review into the technology’s effectiveness and found “no statistically significant bias in relation to race and gender, and the chance of a false match is just 1 in 6,000 people who pass the camera”.
The UK’s data protection and digital information bill proposes to remove the requirement for a surveillance camera code of practice and abolish the role of the surveillance camera commissioner, a government-appointed official who encourages compliance with the code and has other powers.
“We are really at a critical moment in the expansion of surveillance technology. We need to think carefully about the value of this technology for policing,” Fussey, a criminologist at the University of Essex and co-author of the report, told the Financial Times.
“It is legitimate to use this technology to keep people safe, but the question is if it is legal and necessary to use it.”
He added that the data protection legislation was a “diminishing of the scant regulations of oversight of this technology” and that mechanisms were needed to ensure it was “used legally and responsibly”.
The Home Office said: “The government is committed to empower the police to use new technologies like facial recognition in a fair and proportionate way.
“Facial recognition plays a crucial role in helping the police tackle serious offences including murder, knife crime, rape, child sexual exploitation and terrorism.”