Asia-Pacific prisons deploy ‘dehumanising’ facial recognition


A policeman walks inside the Tihar Jail in New Delhi. Prisons are increasingly using facial recognition, but AI is prone to error and can entrench bias against minorities, rights experts say. — Reuters

In Singapore’s prisons, CCTVs in the cells watch over inmates, facial recognition is used for headcount checks, and an artificial intelligence-based behaviour detection system monitors for fights and other suspicious activities.

“Sometimes, the facial recognition cameras would turn on at odd times, without warning. Or the behaviour detection would alert the guards if people were just exercising in the cell,” said Tan, 26, a former inmate, who asked to go by his last name.

“I was arrested for a non-violent crime, yet made to feel like a dangerous terrorist who had to be watched all the time,” said Tan, who served two prison terms of up to a year each for smoking marijuana, a banned substance.

Officials say the technologies being used and piloted in the city-state’s Selarang and Changi prisons improve “effectiveness and efficiency”, and free up guards to focus on prisoner rehabilitation and other “more value-added work”.

Former inmates like Tan and human rights groups, however, say the constant surveillance violates prisoners’ privacy, that AI-based systems can be inaccurate and biased against minorities in particular, and that there is little clarity about data use.

“It’s not clear what is done with the data, how long it’s kept, and what recourse prisoners or former prisoners have if there is abuse or leaks of the data,” said Kirsten Han at Transformative Justice Collective, a rights group in Singapore.

“Prisoners already feel dehumanised and disrespected in prison, and the constant surveillance and lack of privacy ... alienates them and makes them feel like they aren’t treated with dignity,” she told the Thomson Reuters Foundation.

A spokesperson for the Singapore Prison Service said that data protection rules apply, and that the technologies are assessed for accuracy and reliability, and are “appropriately calibrated” for race and sex.

Reinforcing bias

Worldwide, facial recognition – which uses AI to match a live image of a person against a database of photographs for verification – is increasingly used for everything from unlocking mobile phones to checking in for flights to making payments.
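
As a rough illustration of that matching step: such systems typically reduce each face image to a numerical “embedding” and compare a live capture against enrolled photographs by similarity score. The minimal Python sketch below is hypothetical and illustrative only – the names, random embeddings and threshold are assumptions for demonstration, not the workings of any system named in this article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical enrolled database: one 128-dimensional face embedding per
# person. In a real system these would come from a face-encoder model run
# over enrolment photographs; here they are random stand-ins.
database = {
    "person_a": rng.normal(size=128),
    "person_b": rng.normal(size=128),
}

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(live_embedding, threshold=0.6):
    """Return the best-matching identity if its similarity clears the
    threshold, otherwise None. The threshold is what trades false matches
    against false non-matches, the error behaviour critics quoted in this
    article are concerned about."""
    scores = {pid: cosine_similarity(live_embedding, emb)
              for pid, emb in database.items()}
    best_id = max(scores, key=scores.get)
    return best_id if scores[best_id] >= threshold else None

# Simulated live capture: a noisy copy of an enrolled embedding.
live = database["person_a"] + rng.normal(scale=0.1, size=128)
print(verify(live))  # prints "person_a" when the similarity clears 0.6
```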

But its use by police and in prisons is problematic, rights groups say, because inmates have limited rights and because of the risk of bias against minority communities, who are often overrepresented in the carceral system.

“Surveillance systems in prisons can be abused, especially in the case of political prisoners and other vulnerable people,” said Phil Robertson, deputy director of Human Rights Watch in Asia.

“Even considering the need to prevent violence, facial recognition in prisons is overly intrusive and unnecessary,” he added.

Asian cities – including Singapore, Delhi and others in India – have among the highest concentrations of surveillance cameras in the world, according to tech website Comparitech.

In Delhi, authorities are rolling out facial recognition systems in the city’s three prisons for greater safety and security, said Sanjay Beniwal, director general of Tihar Jail, India’s largest prison complex.

The prisons, with a combined inmate population of about 20,000, already have an extensive CCTV network, so facial recognition technology is needed to analyse the feeds, he said.

“It will enable us to monitor suspicious activity, and also alert us to fights and falls,” he said.

The system will have “adequate checks and balances” to ensure data is secured, and that the rights of inmates are upheld, he added.

But India’s criminal justice system disproportionately targets marginalised communities, and the lack of a data protection law raises the risk of misuse, said Anushka Jain, policy counsel at Internet Freedom Foundation, a digital rights group.

“There is a high proportion of minority communities in prison, and if the data collected is used to train algorithms, people with similar facial features could be profiled as criminals or suspects, thus reinforcing the bias,” she said.

“Prisoners also risk being misidentified and being held responsible for acts they did not commit, which would reduce their chances of early release or parole, or lead to a further curbing of their rights,” she added.

Riddled with errors

Surveillance technologies are often tested on vulnerable populations such as refugees and prisoners before being rolled out elsewhere, and are increasingly also used to target dissidents and activists.

The Australian Human Rights Commission in 2021 called for a ban on the use of facial recognition in the country until “stronger, clearer and more targeted” human rights protections are in place.

Yet the technology has been rolled out in several prisons in New South Wales state, despite concerns about potential bias against Aboriginal people, who are overrepresented in prisons.

The technology will ensure greater security and enable “faster, more accurate processing at all stages of the enrolment and identification process” for everyone who enters or exits a facility, said a spokesperson for Corrective Services NSW, the state agency that oversees prisons.

All biometric data is “encrypted on a secure system, with data stored on encrypted secure servers”, and the system complies with the state’s privacy safeguards, the spokesperson added.

But the technology undermines the right to privacy, said James Clark, executive director of Digital Rights Watch, an advocacy group.

“Facial surveillance only benefits (private prison service providers), while the risks will be carried by some of the most vulnerable people in our society,” he said.

The technology is also “riddled with errors” when it comes to identifying darker-skinned people and women, he added.

Prison inmates are often unaware of these risks.

In Singapore, Tan and other former inmates said they received a short briefing on the technologies, but had no knowledge of how they worked, or what became of their data.

“At first I thought it was quite cool. But you can’t opt out,” he said.

“It’s like being in a fish bowl all the time. It was very dehumanising and just unnecessary.” – Thomson Reuters Foundation

