In the midst of the recent Black Lives Matter protests, which raised questions about surveillance and racism in the United States and elsewhere, the technology company IBM announced its withdrawal from the general-purpose facial recognition market. Will this be a decisive moment in the security services' use of these flawed technologies?
The news went almost totally unnoticed.
On one side of the news is the poorly managed COVID-19 health crisis. On the other, the protests and anger ignited by shocking police violence, which revived the Black Lives Matter movement worldwide. Between the two, IBM announced on June 8 its decision to abandon the commercialization of facial recognition software, citing its code of ethics in response to the protests.
“Technology can increase transparency and help police protect communities, but it should not promote discrimination or racial injustice,” said Arvind Krishna, CEO of the company.
The announcement drew some reactions from political economy analysts, technophiles, and activists, and it could be an indicator of big changes in the influence that technology has on our lives.
A controversial technology
Facial recognition software identifies people from photos or videos in an automated way. To do so, it relies on two pillars: a prerecorded reference data set of images, and high processing power. Both have made remarkable progress recently, thanks to innovations in Big Data and artificial intelligence (AI), making the massive expansion of facial recognition a real possibility.
Examples have been emerging around the world for several years. Back in February 2005, the Los Angeles Police used a system developed by General Electric and Hamilton Pacific, a practice that later became widespread and accelerated. By 2019, China had a total of 200 million video surveillance cameras in the country; an even denser network is in preparation in Russia. Not to mention the initiatives of cities like Nice, which is currently testing the technology, or London, where cameras analyze the faces of passersby (without informing them) in order to locate people wanted by the authorities.
The authorities justify this automated surveillance with security imperatives: at the end of 2016, the International Criminal Police Organization (Interpol) claimed to have identified “more than 650 criminals, fugitives, suspects or missing persons (…)”. It is all done in the name of fighting crime, terrorism or, more recently, the spread of the coronavirus.
But as with other advanced technologies, facial recognition is a double-edged sword. The progress it brings is accompanied by threats, especially to civil liberties.
Several digital rights organizations, such as the Electronic Frontier Foundation (EFF) and La Quadrature du Net, have been alerting the public to the possible threats and abuses that facial recognition enables. The latter coordinates a campaign called Technopolice, an initiative that documents and exposes automated surveillance projects in France and calls for systematic resistance.
The most serious drawback of facial recognition is bias. These tools identify and verify people based on exposure to sample data – the so-called training data set. If that data is incomplete or unrepresentative, the tool will produce poor interpretations. This is called learning bias.
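A minimal sketch can make learning bias concrete. The code below is a toy illustration, not any real face-recognition system: all numbers, labels, and features are invented. A simple nearest-neighbour "recogniser" is trained on a skewed sample in which one group is badly under-represented, and its error rate is then measured per group.

```python
import random

random.seed(0)

def sample(mean, n, label):
    """Draw n two-dimensional points around `mean` (stand-ins for face embeddings)."""
    return [([random.gauss(mean, 1.0), random.gauss(mean, 1.0)], label) for _ in range(n)]

# Skewed training set: group B is badly under-represented (5 vs. 95 examples).
training = sample(0.0, 95, "A") + sample(1.5, 5, "B")

def predict(x):
    """1-nearest-neighbour classification by squared Euclidean distance."""
    return min(training, key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], x)))[1]

def error_rate(points):
    """Fraction of points the classifier mislabels."""
    return sum(predict(x) != label for x, label in points) / len(points)

err_a = error_rate(sample(0.0, 200, "A"))
err_b = error_rate(sample(1.5, 200, "B"))
print(f"error on group A: {err_a:.2f}, error on group B: {err_b:.2f}")
```

Even though the classifier itself is neutral, the under-represented group ends up with a far higher error rate, simply because the training data around it is sparse.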
In June, to demonstrate those biases, Twitter users tested an artificial intelligence that reconstructs portraits from pixelated images and posted its anomalous results online. There were significantly more failures with African American or Hispanic subjects, such as Barack Obama and Alexandria Ocasio-Cortez. Because the training data set consisted almost exclusively of portraits of white people, the reconstructions were erroneously skewed toward the statistically most likely profiles.
This Image of a White Barack Obama Is AI’s Racial Bias Problem In a Nutshell https://t.co/88Sl9H0lEp
– adafruit industries (@adafruit) June 27, 2020
An image of Barack Obama modified into the image of a white man has sparked another discussion about racial bias in artificial intelligence and machine learning.
When artificial intelligence is used to identify people from images, rather than to enhance them, the analytical process likely exhibits a similar bias.
Imagine that you decided to evaluate the dangerousness, the risk of criminality of a person based on parameters such as age, place of residence, skin color, higher academic qualification … and that, to train your software, you used data provided by detention centers, or prisons.
So it's highly likely that your software will seriously minimize the risks for white people and elevate them for others.
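The thought experiment above can be sketched in a few lines of code. Everything here is hypothetical, invented purely for illustration: the true offence rate is assumed identical for both groups, but one group is policed twice as intensively, so it appears twice as often in the arrest records the model is trained on.

```python
# Hypothetical illustration: a "risk score" learned from biased arrest records.
# True offence rates are assumed identical, but the "other" group is policed
# twice as hard, so it shows up twice as often in the training data.
true_offence_rate = 0.05                               # same for everyone (assumption)
policing_intensity = {"white": 1.0, "other": 2.0}      # over-policing factor (assumption)

population = 10_000
arrests = {
    group: int(population * true_offence_rate * factor)
    for group, factor in policing_intensity.items()
}

# A naive model estimates "criminality risk" from arrests per capita,
# so it inherits the bias of the records rather than the true rates.
learned_risk = {group: n / population for group, n in arrests.items()}
print(learned_risk)
```

The model faithfully reproduces the skew in its training data: it "learns" that one group is twice as risky, even though the underlying behaviour was defined to be identical.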
The facts speak for themselves: in London, real-time facial recognition shows an error rate of 81%; in Detroit, an African American man protested his wrongful arrest due to faulty identification.
Legitimacy in dispute
Not only is facial recognition fallible, it exacerbates discrimination, as research by the ProPublica site confirmed in 2016.
Google Photos labeled two African Americans as “gorillas” (…) Amazon's recruitment tool harms women.
Source: “AI: Is Facial Recognition Racist?”, Orange Digital Society Forum (fr)
For the police, frequently accused of discrimination, facial recognition is one more highly flammable element.
George Floyd's death in an incident of police violence on May 25 in Minneapolis triggered a wave of demonstrations, first in the United States and then around the world. The protests began as a denunciation of discrimination against ethnic minorities, but an increase in violence led protesters to demand the demilitarization of the authorities, with the slogan “Defund the police.” By extension, widespread surveillance tools have also come under scrutiny, as have the private companies that supply them. Now, under pressure from Black Lives Matter activists, IBM has announced its partial withdrawal from the facial recognition market.
It is no coincidence that Big Blue (IBM's nickname) was the first to react. The company has a long and sometimes embarrassing history that it has had to learn to live with. In 1934, it collaborated with the Nazi regime through its German subsidiary. Much later, in 2013, it was implicated in the PRISM surveillance program exposed by whistleblower Edward Snowden. Perhaps that is why it could not elude its role in the current conflict between a security-driven state and human rights activists. Of course, a more pragmatic motive for IBM's strategy is also plausible: shielding itself from future legal proceedings and their financial cost.
However, the reorientation of its activity is quite real, and it has set off an initiative that other giants in the sector are following. On June 12, Microsoft stated that it would refuse to sell its facial recognition technology to law enforcement agencies; under peer pressure, Amazon declared a moratorium on its Rekognition tool.
A step towards reform?
The need for a regulatory framework has become obvious. In his announcement, Arvind Krishna, IBM's chief executive, asked the United States Congress to “initiate a national dialogue on whether, and how, the authorities use facial recognition technologies”.
This call has been answered. On June 25, members of Congress presented a bill to prohibit the use of facial recognition by the police. A day earlier, the ban had been endorsed by the city of Boston.
There is no doubt that this is just the beginning of a long political and legal battle to confine the use of facial recognition within a framework that respects citizens. But for the first time, human rights movements appear to be in a position to push large technology companies and the political system toward technology that benefits everyone.