Couriers say Uber’s ‘racist’ facial identification tech got them fired

BAME couriers working for Uber Eats and Uber claim that the company’s flawed identification technology is costing them their livelihoods

Uber Eats couriers say they have been fired because the company’s “racist” facial identification software is incapable of recognising their faces. The system, which Uber describes as a “photo comparison” tool, prompts couriers and drivers to take a photograph of themselves and compares it to a photograph in the company’s database.

Fourteen Uber Eats couriers have shared evidence with WIRED that shows how the technology failed to recognise their faces. They were threatened with termination, had accounts frozen or were permanently fired after selfies they took failed the company’s “Real Time ID Check”. Another was fired after the selfie function refused to work. Trade unions claim this issue has affected countless more Uber Eats couriers across the country, as well as private-hire drivers.

Workers who made thousands of deliveries and had 100 per cent satisfaction rates say they were suddenly removed from the platform, in a process they allege was automated, with no right to appeal. In messages to the company, many pleaded for their jobs back. Each received the same response: their termination was “permanent and final” and Uber hoped they would “understand the reason for the decision”.

William* would regularly work 16 hours a day, six or seven days a week delivering food for Uber Eats in Sheffield, until he was blocked from accessing his account in October 2020 after a selfie check. “I was trying to message them all day but they just gave me the same copied answer every time,” he says.

“I only earned money from Uber Eats. I used that money to pay for my bills, my rent, my car insurance, my food, my phone, everything,” he says. He says he was forced to borrow £1,000 to pay his bills that month. It was only after he contacted the Independent Workers’ Union of Great Britain (IWGB), which subsequently sent a letter to Uber threatening to go public, that the company relented.

Uber says couriers are given a choice between AI and human verification when they submit their real-time selfies, but couriers say that if they opt for the latter, no one overrides the mistakes the software can make. David* had shaved his beard in preparation for a job interview when he logged into the app. When he submitted a selfie, the app said the photo was not of him and asked him to provide, within 24 hours, information about the person who was replacing him as a courier, or risk termination.

“I tried going through the official routes but all it showed was options to provide information on my substitute. There was no way to say they had made a mistake,” David says. While a call from a journalist to Uber’s press team stopped his account from being permanently closed, many others weren’t as lucky.

William and David, like the other couriers affected, were accused of illegally subcontracting their shifts to another person. While their status as independent contractors technically allows Uber Eats couriers to subcontract their work, concerns have been raised about giving work to those who haven’t had background checks or can’t legally work. To stop this, in April last year Uber added an identification step when people open the app. Couriers and drivers are required to take a selfie to prove that they are the ones logged on.

Uber uses Microsoft face-matching software to verify the identity of its couriers when they submit pictures of their own faces. But this type of software has a track record of failing to identify people with darker-skinned faces. In 2018, a similar version of the software used by Uber was found to have a failure rate of 20.8 per cent for darker-skinned female faces. For darker-skinned male faces – the vast majority of Uber Eats drivers are male and many are from BAME backgrounds – that figure was six per cent. For white men the figure was zero per cent.

One study of 189 different facial identification systems found that all of them performed markedly worse when identifying non-white faces, sometimes by a factor of ten or even a hundred. Then there’s the fact that things that work in a lab don’t always perform in the real world. Selfies taken by Uber drivers will be lower-quality images, on older phones, often in badly lit areas – conditions a million miles from the professional studio images these programs are trained on.

“There is no facial recognition software that performs equally across different ethnicities. That technology doesn’t exist,” explains professor Peter Fussey, a sociologist at the University of Essex who specialises in facial identification software. “If you’re bringing that technology into an already unequal environment it just exacerbates those conditions and amplifies racial inequality.”

A spokesperson for Uber stressed that the company needed the verification check to protect against “potential fraud”. The spokesperson claims that any decision to remove partners from the platform “always involves a manual human review” and “anyone who is removed can contact us to appeal the decision”. The spokesperson did not comment on whether Uber had ever done any audits looking into the accuracy of its verification system. Microsoft did not respond to requests for comment on its technology’s failure rate.

“It’s what these workers fear most about heading to work,” says Alex Marshall, president of the IWGB. “And we’re definitely seeing more numbers of the BAME community being affected. It’s definitely indirect racism.” He says drivers are turning to food banks and shelters because they can’t afford to pay their rent and bills after losing the right to work on the app. One driver even had to “pitch a tent in a graveyard”, he says.

The IWGB says the lack of a legal framework protecting gig economy couriers from dismissal makes a court case hard, but it is fighting to reverse dismissals on a case-by-case basis and pushing for an Early Day Motion to be filed in parliament.

Uber has already been sued for its use of facial identification technology. In 2019, a Black driver in the US launched a claim, saying he was fired after selfie software didn’t recognise him in “pitch darkness”, forcing him to artificially lighten his photos.

All of the Uber Eats drivers WIRED spoke to said neither they nor anyone they know had ever subcontracted shifts, and that they earn so little it wouldn’t make sense to do so. “Most of our members who’ve been fired for substitution are working 60 hours a week and still can’t make ends meet,” says Marshall. “When are they meant to be able to lend their account to someone else?”

One reason illicit subcontracting has become such an issue is regulation. As part of its deal to reinstate Uber’s private-hire taxi licence to operate in London, Transport for London (TfL) pressured the company to crack down on illegal subcontracting to protect people from unlicensed drivers.

Uber’s facial identification system applies not only to workers at Uber but also to those at sister firm Uber Eats. A spokesperson for TfL highlighted that in late 2018 to early 2019, 14,000 trips were made by 43 unauthorised Uber drivers in London, while stressing that its concerns about fraud were supported by the chief magistrate. Critics of the decision have pointed out that those 43 drivers made up a small proportion of the 45,000 licensed Uber drivers operating in London at the time.

Uber isn’t the only company expanding into this area. Last December, fellow ride-share firm Bolt raised €150 million in a new funding drive to implement AI and driver facial identification checks similar to those used by Uber.

Updated 01.03.21, 12:40 GMT: Uber’s system uses facial identification technology, not facial recognition technology, to identify people.

This article was originally published by WIRED UK