In your face: our acceptance of facial recognition technology depends on who is doing it – and where

Facial recognition technology is becoming more widely used, but this has not been matched by wider acceptance from the public.

Controversies continue to hit the media, with both public and private sector organisations frequently outed for flawed deployments of the technology.

The New Zealand Privacy Commissioner is currently evaluating the results of retailer Foodstuffs North Island’s trial of live facial recognition in its stores.

The commissioner is also considering a potential code on the use of biometrics that would govern the use of people’s unique physical characteristics to identify them.

But as facial recognition becomes more common, public acceptance of the technology is inconsistent.

Retail stores, for example, tend to attract controversy when using facial recognition technology. But there has been little resistance to the use of it in airports. And the vast majority of people have no problem unlocking their phones using their faces.

My research draws together 15 studies on the public acceptance of facial recognition technology from the United States, United Kingdom and Australia.

There has been little analysis of New Zealand attitudes to the technology. So, these studies offer a view into how it is accepted in similar countries.

What I found is that public acceptance of facial recognition technology depended on the location of the recording – and why it was being captured.

Trusting personal use

According to the global research, individuals tended to place trust in facial recognition technology on their own smartphones.

According to a 2019 study from the US, 58.9% of people were comfortable with using facial recognition to unlock their smartphone. And a 2024 survey found 68.8% of Australians felt the same.

This is interesting because while individuals physically “operate” the technology via an app on their phone, they don’t control the app itself or the data it collects.

Acceptance is, therefore, a product of perception. When someone uses facial recognition technology on their own phone they feel in control.

Less trust in the government

Public acceptance of government use of facial recognition varied greatly depending on what it was being used for.

The more familiar people were with a particular technology, the higher their level of acceptance of it was.

People were comfortable with governments using facial recognition for identifying passengers at airport customs, for example. But they were less happy with its use in identifying voters at polling places.

When it came to its deployment by police, people generally accepted the use of facial recognition technology to identify terrorists and investigate serious crimes. But research found resistance to it being used to identify minor offences and antisocial behaviours, such as parking violations and littering.

People were also uncomfortable with the idea of it being used in court to gain a conviction in the absence of other forms of evidence.

The more ambiguous the use of the technology was, the greater the degree of discomfort around it.

Deployments such as “monitoring crowds as they walk down the street” and “day-to-day policing” led to concerns over ubiquitous surveillance and the loss of “practical obscurity” (the idea that even in public spaces, you have the right to some level of privacy).

Wary of the private sector

As mixed as public acceptance of government facial recognition technology may be, it was generally greater than that for the private sector.

People placed little trust in businesses’ ability to operate the technology responsibly.

According to a 2024 survey from New Zealand’s Privacy Commissioner, 49% of respondents said they were concerned or very concerned about the use of facial recognition technology in stores.

But as the acceptability data on government use demonstrated, context is key.

Retail-focused research found the public was more accepting of facial recognition technology to identify shoplifters, antisocial patrons and fraudsters than for other purposes – such as loyalty programs, advertising, payments and the tracking of customer behaviour.

In the workplace, security-related deployments attracted limited acceptance, although greater than for uses relating to employee location and behaviour tracking.

The need for social licence

The question of why facial recognition technology is controversial in some cases but widely accepted in others is an important one.

The absence of research into the public acceptance of facial recognition in New Zealand means there is no evidence base upon which to establish the social licence for the technology.

There is also a limited understanding of the range of scenarios social licence might cover.

As private businesses and public organisations increasingly use facial recognition technology, it’s important to understand more about how the public feels about having their faces recorded and matched to their identity in real time.

Nicholas Xavier Dynon, Doctoral Candidate, Centre for Defence & Security Studies, Te Kunenga ki Pūrehuroa – Massey University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Einstein and anime: Hong Kong university tests AI professors


HONG KONG - Using virtual reality headsets, students at a Hong Kong university travel to a pavilion above the clouds to watch an AI-generated Albert Einstein explain game theory.

The students are part of a course at the Hong Kong University of Science and Technology (HKUST) that is testing the use of "AI lecturers" as the artificial intelligence revolution hits campuses around the world.

The mass availability of tools such as ChatGPT has sparked optimism about new leaps in productivity and teaching, but also fears over cheating, plagiarism and the replacement of human instructors.

Professor Pan Hui, the project lead for HKUST's AI project, is not worried about being replaced by the tech and believes it can actually help ease what he described as a global shortage of teachers.

"AI teachers can bring in diversity, bring in an interesting aspect, and even immersive storytelling," Hui told AFP.

In his "Social Media for Creatives" course, AI-generated instructors teach 30 post-graduate students about immersive technologies and the impact of digital platforms.


These instructors are generated after presentation slides are fed into a programme. The looks, voices and gestures of the avatars can be customised, and they can be displayed on a screen or VR headsets.

This is mixed with in-person teaching by Hui, who says the system frees human lecturers from the "more tedious" parts of their job.

For student Lerry Yang, whose PhD research focuses on the metaverse, the advantage of AI lecturers lies in the ability to tailor them to individual preferences and boost learning.

If the AI teacher "makes me feel more mentally receptive, or if it feels approachable and friendly, that erases the feeling of distance between me and the professor", she told AFP.

- 'Everybody's doing it' -


Educators around the world are grappling with the growing use of generative AI, from trying to reliably detect plagiarism to setting the boundaries for the use of such tools.

While initially hesitant, most Hong Kong universities last year allowed students to use AI, to varying degrees depending on the course.

At HKUST, Hui is testing avatars with different genders and ethnic backgrounds, including the likenesses of renowned academic figures such as Einstein and the economist John Nash.

"So far, the most popular type of lecturers are young, beautiful ladies," Hui said.

An experiment with Japanese anime characters split opinion, said Christie Pang, a PhD student working with Hui on the project.

"Those who liked it really loved it. But some students felt they couldn't trust what (the lecturer) said," she said.


There could be a future where AI teachers surpass humans in terms of trustworthiness, Hui said, though he preferred a mix of the two.

"We as university teachers will better take care of our students in, for example, their emotional intelligence, creativity and critical thinking," he said.

For now, despite the wow factor for students, the technology is far from the level where it could pose a serious threat to human teachers.

It cannot interact with students or answer questions, and like all AI-powered content generators, it can offer false, even bizarre answers -- sometimes called "hallucinations".

In a survey of more than 400 students last year, University of Hong Kong professor Cecilia Chan found that respondents preferred humans over digital avatars.


"(Students) still prefer to talk to a real person, because a real teacher would provide their own experience, feedback and empathy," said Chan, who researches the intersection of AI and education.

"Would you prefer to hear from a computer 'Well done'?"

That said, students are already using AI tools to help them learn, Chan added.

"Everybody's doing it."

At HKUST, Hui's student Yang echoed that view: "You just can't go against the advancement of this technology."

by Holmes Chan
