A woman of Māori origin was recently misidentified as a shoplifter in a supermarket testing new facial recognition technology. Far from an isolated event, this incident occurred amidst a broader trend of governments introducing new measures to combat the upsurge in shoplifting.
On April 2, Te Ani Solomon was shopping at a supermarket in Rotorua, a town a few hours south of Auckland, when she was singled out and mistakenly identified as an offender. The young Māori woman produced identification and tried to reason with staff, insisting that she was not the person who had broken into the store to steal items a few months earlier.
The parent company of the supermarket in question, Foodstuffs, is currently testing technology that matches customers’ faces to known offenders in an attempt to prevent shoplifting.
Asked by local media whether she thought race had played a part in her misidentification, Solomon said it was a “major factor,” adding that businesses should not use the technology because of its potential bias. “It has unfortunately forced me to explain to my son what racism is,” she said. “I am now paranoid about being labeled a thief when I shop.”
Human error?
Foodstuffs apologized to Solomon, calling the episode a human error. “It’s ironic to blame human error for artificial intelligence technology,” she retorted.
Michael Webster, New Zealand’s privacy commissioner, had previously raised concerns about bias associated with the technology, saying he would be “worried” for Māori, Pasifika and Indian consumers if it became a permanent fixture in Foodstuffs supermarkets. “We need to make sure this doesn’t lead to unfair treatment of people of color in New Zealand,” he added.
In an article published by 1News, Mark Rickerby, a senior lecturer at the University of Canterbury’s School of Product Design, argued that the supermarket company’s “human error” response fails to answer deeper questions about the use of AI and automated systems of this kind.
“By focusing so narrowly on the outcomes of automated decisions, it’s easy to overlook questions about how those decisions are applied,” he explained, adding that “investing in improving prediction accuracy seems an obvious priority for facial recognition systems. But this needs to be seen in a wider context of use, where the harm caused by a small number of erroneous predictions outweighs performance improvements in other areas.”
Skyrocketing retail crime rates have put many New Zealand retailers on high alert. A recent Retail NZ survey found that over 92% of respondents had experienced retail crime. Of those, 82% identified shoplifting as the most frequent crime of the past year, with estimated losses to retailers of $1.1 billion.
A global phenomenon
New Zealand is not the only country to report an increase in shoplifting and violent behavior in stores. In the United Kingdom, where the phenomenon was recently described as a “crisis,” the government announced on April 10 an investment of over 55 million pounds (69 million dollars) in the expansion of facial recognition systems, as part of a new campaign to crack down on shoplifting.
The program was announced alongside tougher penalties for shoplifters in England and Wales, including a requirement to wear a tracking device to ensure they do not return to the scene of their crime.
Australian supermarkets have also responded to retail crime with technological surveillance: staff have been issued body cameras, and carts and exit gates now lock automatically to prevent people from leaving the premises without paying.
In the United States, retail giant Target reported last May that it was on track to lose half a billion dollars due to increased theft.
Chief among the triggers for this global upsurge in shoplifting is, of course, inflation.
According to Burt Flickinger, retail expert and chief executive officer of retail consultancy Strategic Resource Group, interviewed by CNN, “millions of people can’t afford to shop, fill up their gas tanks, pay for public transportation or rent, or pay off their credit cards.” A recent Gallup poll found that three out of five Americans (61%) are suffering financial hardship due to rising prices.
A 35% error rate
Last December, U.S. drugstore chain Rite Aid was banned from using facial recognition technology in its stores for five years, after it used the systems, without customers’ consent, to identify people it considered “likely” to shoplift.
The software wrongly identified several African-American, Latino, and Asian people. The technology sent alerts to Rite Aid employees, by email or phone, when it identified people entering the store who were on its watch list.
The Electronic Privacy Information Center (EPIC), a civil liberties and digital rights group, said at the time that facial recognition can be harmful in any context, but that Rite Aid failed to take even the most basic precautions. “The result was sadly predictable: thousands of misidentifications that disproportionately affected Black, Asian, and Latino customers, some of which led to humiliating searches and ejections from the store,” John Davisson, EPIC’s general counsel, told The Guardian.
Studies have shown that facial recognition systems regularly misidentify dark-skinned people. “There is a misconception that technology, unlike humans, is not biased. This is not accurate,” explains Dr. Gideon Christian, an assistant professor at the University of Calgary’s Faculty of Law and a specialist in the interaction between artificial intelligence and the law.
“Technology has been shown to have the ability to reproduce human bias. In some facial recognition technologies, the accuracy rate for recognizing white male faces is over 99%. Unfortunately, when it comes to recognizing faces of color, particularly Black female faces, the technology seems to manifest its highest error rate, which is around 35%,” he concludes.