Facial recognition is increasingly being deployed around the world, from law enforcement and border control to building management, to secure access and improve surveillance. But how good is the technology at correctly identifying individuals under different conditions?
The accuracy of facial recognition systems has come under increased scrutiny as Oakland joined San Francisco in May in banning the use of facial recognition software by city agencies, partly over concerns that the software would throw up false positives.
Although identification accuracy depends on various factors and is very high when a face is not concealed by a mask, the latest surveillance systems can still be capable of making a correct match even when one is worn, according to interviews with several researchers who work at Chinese AI companies.
Their view is based on recent studies of surveillance techniques that go beyond existing facial recognition systems, the researchers said. Those include methods to identify partially exposed faces, gait patterns and various distinguishing body features.
In China, advances in facial recognition are used by various cities to name and shame minor offenders, spot a criminal in a crowd and verify the identities of passengers at airports. Using sophisticated algorithms, Chinese technology companies and other major businesses have adopted those systems in the retail, travel and banking sectors.
Here are some insights on how facial recognition systems work, according to the mainland surveillance technology researchers.
How does artificial intelligence work in facial recognition systems?
Facial recognition systems involve the identification of people from a database of images, including still photographs and video. Use of deep learning – a subset of AI – speeds up a system’s face-scanning capability, as it learns more about the data it is processing. These systems require vast amounts of information to become faster and more accurate at their job.
These systems generate a so-called “unique face print” for each subject by reading and measuring dozens to thousands of “nodal points”, including the distance between eyes, the width of a person’s nose and depth of the eye socket. With a network of surveillance cameras, recognition systems process a wider range of features, including height, age and colour of clothes.
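In code terms, a face print of this kind can be thought of as a vector of nodal-point measurements, with two faces compared by the distance between their vectors. The sketch below is a deliberately minimal illustration with made-up numbers and only three measurements – real systems read dozens to thousands of nodal points, and the measurement names and threshold here are assumptions for illustration only.

```python
import math

# Hypothetical nodal-point measurements (in millimetres) forming a
# simplified "face print"; real systems use far more points.
def face_print(eye_distance, nose_width, socket_depth):
    return (eye_distance, nose_width, socket_depth)

def print_distance(print_a, print_b):
    # Euclidean distance between two face prints: smaller means more alike.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(print_a, print_b)))

def is_match(print_a, print_b, threshold=2.0):
    # Declare a match when the prints differ by less than a tuned threshold.
    return print_distance(print_a, print_b) < threshold

enrolled = face_print(62.0, 34.5, 27.1)   # stored in the database
observed = face_print(61.5, 34.8, 27.3)   # measured from a camera frame

print(is_match(enrolled, observed))  # prints True for these sample values
```

The threshold is the key design choice: lowering it reduces false positives at the cost of more false negatives, which is the trade-off city bans such as San Francisco's are partly about.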
On the iPhone, the built-in camera can analyse more than 30,000 infrared dots and create a crude 3D model of a user’s face. That was designed to adapt to changes in a user’s appearance, such as wearing cosmetic make-up, in various lighting conditions, or wearing a hat, scarf, glasses or contact lenses, according to information on Apple’s website.
Does a mask keep its wearer completely unidentifiable?
Not exactly, according to the Chinese surveillance specialists.
Advanced facial recognition systems would be able to analyse the uncovered parts of a mask wearer’s face, they said. That could help narrow a subject down to a smaller pool of candidates and, given a high tolerance for false negatives, produce a correct match.
Their conclusion is supported by work from a team of researchers at the University of Bradford in the UK, who published a paper in May saying that facial recognition technology can achieve 100% correct identification even when only the top, the right half or three-quarters of a face is visible.
The UK researchers said a facial recognition system trained only on full faces saw its success rate drop to 40% when only the eyes and nose were visible, and to 60% when only the bottom half of a face was uncovered, according to the paper. When the system received partial images in training, its success rate improved to as much as 90% for the bottom half of the face, for the visible eyes and nose, and even for faces with no eyes and nose shown.
How would existing facial recognition systems in China fare when a subject’s face is partly concealed?
While facial recognition technology has flourished and become part of daily life in China, the accuracy of existing systems is based on having an unobstructed view of people’s faces.
Many consumers in China provide their personal data, including their images, to the major online service providers in exchange for convenience in using these firms’ products. The broad availability of data, in turn, has helped these providers improve their AI algorithms and levels of service.
As such, most Chinese AI companies have focused their research on voluntary, front-facing recognition in applications that demand high accuracy.
One researcher expects the correct recognition rate of facial recognition systems to drop by about 70% if people conceal even one-fifth of their faces. Others, however, said the rate would vary depending on which part of a face is covered, and by how much.
At present, some AI firms have started developing facial recognition software trained to scan partially obstructed faces.
What are some of the recent advances to help identify people with partially concealed faces?
Gait recognition: Chinese artificial intelligence start-up Watrix introduced software that can identify a person from 50m away – even if their face is covered or they have their back to a camera.
The technology works by analysing thousands of metrics about a person’s walk to build an extensive database. These range from body contour to the angle of arm movement and to whether a person has a toe-in or toe-out gait.
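The matching step described above can be sketched as comparing an observed gait profile against enrolled profiles in a database. The sketch below is not Watrix's method – the metric names, values and similarity threshold are all hypothetical, and real systems analyse thousands of metrics rather than three.

```python
import math

# Hypothetical gait metrics: arm-swing angle (degrees), stride length
# (metres), toe angle (degrees, negative = toe-in).
gait_database = {
    "subject_a": [38.0, 0.74, 5.0],
    "subject_b": [22.0, 0.61, -3.0],
}

def cosine_similarity(u, v):
    # 1.0 means identically shaped profiles; lower means less alike.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def identify(observed, database, threshold=0.99):
    # Return the enrolled identity whose profile best matches the observed
    # gait, or None if no profile clears the similarity threshold.
    best_id, best_score = None, threshold
    for identity, profile in database.items():
        score = cosine_similarity(observed, profile)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

print(identify([37.5, 0.73, 4.8], gait_database))  # prints subject_a
```

Because the comparison uses whole-body movement rather than facial features, a covered face or an averted gaze does not affect it – which is the point Watrix makes about uncooperative subjects.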
This technique requires no cooperation from subjects, who have included fugitives trying to avoid looking into surveillance cameras or having their faces scanned, according to Watrix co-founder and chief executive Huang Yongzhen.
Voice recognition: This technology has come of age over the past decade, becoming widely used in smart assistants in cars and home appliances. The software is designed to recognise the user’s voice, memorise personal preferences and automate access. The system is best paired with surveillance cameras that also record audio to better analyse user data.
Heartbeat laser: The US Department of Defence has developed a device, called the Jetson, which can detect a person’s unique cardiac signature over distances of 200m or more using an infrared laser, according to a report last month by the MIT Technology Review.
Contact infrared sensors are often used to automatically record a patient’s pulse, the report said. Those work by detecting the changes in reflection of infrared light caused by blood flow. In contrast, the Jetson uses a technique known as laser vibrometry to detect the surface movement caused by the heartbeat. This works through typical clothing like a shirt and a jacket, the report said.
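Once a surface-vibration trace has been captured, identifying its owner comes down to comparing the waveform's shape against stored cardiac signatures. The sketch below is a toy illustration of that comparison, not the Jetson's actual processing: the sample signals, the normalisation step and the correlation-based matching are all assumptions made for the example.

```python
import math

def normalised(signal):
    # Zero-mean, unit-norm version of a vibration trace, so that only the
    # shape of the waveform, not its amplitude, drives the comparison.
    mean = sum(signal) / len(signal)
    centred = [x - mean for x in signal]
    norm = math.sqrt(sum(x * x for x in centred)) or 1.0
    return [x / norm for x in centred]

def correlation(sig_a, sig_b):
    # Correlation coefficient between two traces: closer to 1 = more alike.
    return sum(a * b for a, b in zip(normalised(sig_a), normalised(sig_b)))

# Hypothetical stored cardiac signatures (surface-vibration samples).
signatures = {
    "person_1": [0.0, 0.8, 1.0, 0.3, 0.1, 0.0, 0.05, 0.0],
    "person_2": [0.0, 0.2, 0.9, 1.0, 0.7, 0.3, 0.1, 0.0],
}

measured = [0.0, 0.75, 1.0, 0.35, 0.1, 0.0, 0.0, 0.0]

best = max(signatures, key=lambda k: correlation(measured, signatures[k]))
print(best)  # prints person_1
```

The sketch also makes the system's stated limitation concrete: without an entry in `signatures` to compare against, a measured trace cannot be matched to anyone.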
The Pentagon claims the Jetson has an accuracy rate of more than 95% under the right conditions, and that this might be further improved. One glaring limitation of the system, however, is that it requires a database of heartbeat signatures to match against. – South China Morning Post