In recent months, revelations that dissidents, human-rights defenders, journalists, and opposition politicians around the world have been tracked by Pegasus, spyware developed by the Israeli cyber-arms company NSO Group, have generated a global media and public outcry. But Pegasus and its ilk are merely a drop in a global multi-billion-dollar market, in which private companies compete to provide repressive governments with the tools to illegally track and spy on their own citizens.
One of the most widely discussed surveillance technologies these days is biometric face recognition. This technology has already attracted much criticism, and in some countries its use has been banned altogether.
On the surface, face recognition technology may sound more innovative and futuristic than dangerous and harmful — which perhaps allowed this technology, and other means of biometric surveillance, to become a ubiquitous part of our everyday lives without much pushback. Today, it is used everywhere: in airports, in our cell phones, in the supermarket, and, of course, by “security forces.” We are also witnessing a massive incorporation of these technologies into hospitals, malls, and other public spaces.
Yet, while this technology does indeed have several potential advantages, it also raises a plethora of problems and risks concerning privacy, security, human rights, and the oppression of political dissidents and minority groups.
So how does it work? The software scans stocks of photos — including pictures from driver’s licenses and police mugshots — and cross-checks them against footage from security cameras, street cameras, and other video sources. These systems map out the facial features that allow for matching and recognition. The main feature they capture is facial geometry: the distance between the eyes, between forehead and chin, and so on. These measurements are combined into what is called a “signature identification” — a mathematical formula that can then be compared against the photo stocks.
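The idea of a geometric signature can be illustrated with a toy sketch. The landmark names, coordinates, and threshold below are all hypothetical, and real systems detect dozens of landmarks automatically and use far more sophisticated models; this only shows the principle of turning distances between facial features into a comparable formula:

```python
import math

def signature(landmarks):
    """Build a simple geometric 'signature' from facial landmarks:
    distances between key points, normalized so the signature is
    invariant to image scale."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    eye_gap = dist(landmarks["left_eye"], landmarks["right_eye"])
    face_height = dist(landmarks["forehead"], landmarks["chin"])
    nose_to_chin = dist(landmarks["nose_tip"], landmarks["chin"])
    # Dividing by face height means the same face photographed at
    # different distances yields (roughly) the same numbers.
    return (eye_gap / face_height, nose_to_chin / face_height)

def match(sig_a, sig_b, threshold=0.05):
    """Declare a match if every component differs by less than a threshold."""
    return all(abs(x - y) < threshold for x, y in zip(sig_a, sig_b))

# Hypothetical (x, y) landmark coordinates for one face...
probe = {"left_eye": (30, 40), "right_eye": (70, 40),
         "forehead": (50, 10), "nose_tip": (50, 55), "chin": (50, 90)}
# ...and the same face captured at twice the scale:
stock = {k: (2 * x, 2 * y) for k, (x, y) in probe.items()}

print(match(signature(probe), signature(stock)))  # True: scale-invariant
```

In a deployed system, a probe signature from a street camera would be compared against millions of stored signatures, which is exactly why the error rates discussed below matter so much.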
The facial recognition market is growing rapidly. According to studies from the past year, facial recognition is expected to grow from a 3.8-billion-dollar industry in 2020 to $8.5 billion by 2025. The industry’s most significant application is surveillance, which is generating concern among the public and human rights organizations worldwide — not least with regard to Israel’s control over the Palestinians.
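As a sanity check on those figures, the implied growth rate can be computed directly. The dollar amounts are the article’s; the assumption of steady compound growth over the five years is ours:

```python
# The article's figures: $3.8B in 2020, projected $8.5B by 2025.
start, end, years = 3.8, 8.5, 5

# Compound annual growth rate implied by those two endpoints.
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # prints 17.5%
```

In other words, the projection amounts to the market growing by roughly a sixth every year.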
A recent investigation by the Washington Post revealed that the Israeli army is using a facial recognition technology in the occupied West Bank called “Blue Wolf.” Using this system, Israel is building a database of the Palestinian population based on pictures taken by soldiers in the street, at checkpoints, and in Palestinians’ homes; according to the Post’s report, soldiers are competing to take the most photos of Palestinians in order to fill the database.
Even before this, it had been known for some time that Israel uses facial recognition technologies in the occupied territories. According to a 2019 investigative report from NBC, the Israeli company Oosto, which until recently was known by the name Anyvision, provided the Israeli military with a technology called Google Ayosh (an acronym for “Judea and Samaria” which the Israeli government uses to refer to the West Bank). This technology is based on cameras that are spread across the West Bank with the purpose of identifying individuals through facial recognition technologies.
It was further reported that this same system was used by the Israeli police to identify Palestinians in the streets of East Jerusalem. The reports sparked widespread international criticism of the company and caused Microsoft to divest from it.
Despite criticism and ethical concerns, however, the company received investments of $235 million in the last funding round. According to the Database of Israeli Military and Security Export (DIMSE), the many countries worldwide which use Anyvision/Oosto’s facial recognition technology include Hong Kong, Spain, Mexico, Russia, Japan, and the United States. According to the organization Who Profits, the company’s technology is currently deployed in some 100,000 cameras spread across more than 40 countries.
In 2020, Anyvision/Oosto founded a subsidiary company called SightX, in partnership with the Israeli weapons company Rafael. SightX specializes in the development and manufacturing of technologies for military and security purposes, such as drones with facial recognition technologies that can be used inside cities and buildings. Avi Golan, the company’s CEO, said in an interview with Forbes magazine that while the company does not have drones with facial recognition technology yet, these will soon become a reality.
Drones are already used by the Israeli military in protests in Israel, the West Bank, and Gaza for surveillance purposes, and sometimes even for dropping tear gas grenades on crowds of protesters. It is only a matter of time before these drones are equipped with facial recognition technologies too.
Anyvision/Oosto is not the only player in the market of facial recognition technologies. Another very successful Israeli company is Corsight AI, which is co-owned by the Israeli company Cortica and the Canadian company AWZ. And much like Anyvision/Oosto, Corsight prides itself on the fact that its workers gained their expertise while working in Israeli intelligence and security forces.
DIMSE recently revealed that among Corsight’s clients are police departments in Brazil and Mexico, two countries known for their extreme levels of police brutality. According to DIMSE, Corsight itself has stated that the Israeli police is among its clients — a claim the police have never confirmed. In an interview with AFP, Rob Watts, the company’s CEO, said that the company “has a number of contracts in Israel — governmental contracts and agencies.”
It remains unclear whether or not this same technology is used against Israeli citizens — Palestinians and Jews alike. Two petitions submitted by the Association for Civil Rights in Israel (ACRI), to the Israeli police and the Israeli army, have gone unanswered. However, a piece of legislation promoted last year by the Israeli police indicates that they are, at the very least, intending to deploy this technology within the Green Line.
The proposed bill would grant the police permission to use any footage captured by cameras in public spaces without the need for a court order, and would enable the establishment of a system of facial recognition cameras across the country. This would make it possible for police to identify the faces of civilians and compare them against police databases — a step that would further harm those groups already suffering from discriminatory treatment by the Israeli police: Ethiopians, Mizrahim, and Palestinians.
This law is so extreme that even Israel’s National Cyber Directorate, a government body, opposes it, stating that “the law raises concerns about the leaking of data collected by the cameras and might cause harm to innocent civilians, due to the cameras’ low identifying ability.” The same police force that uses dubious software like NSO’s Pegasus spyware to track Israeli and Palestinian civilians is now seeking to pass a law that would allow it to fully and “legally” use biometric technologies in public spaces. If we choose to believe Corsight, this is already the reality.
‘Technology is not neutral’
Facial recognition technology offers several advantages: the ability to organize images, to secure computers or other electronic devices, and to function as an aid for the visually impaired. It can also be used to better secure ATMs or prevent fraud and break-ins to online accounts. And, of course, the main rationale given by the police: it can be used in the war on terror and crime. But what are the dangers of using this technology?
Often, biometric and facial recognition technologies are discussed in the context of privacy violations. In some ways, this is correct: they constitute a violation of the individual’s right to privacy from the state, from private companies engaged in this sector, and from other agents who might break into and steal biometric databases. But the more important problem with these technologies is their ability to cement and enforce existing power relations.
Technology is not neutral; it is a product of society, made up of algorithms created by human beings. Many studies have shown, for example, that while biometric signatures work relatively well when identifying white men, they are very bad at identifying non-white men, and are terrible at identifying non-white women. Around the world, people have been wrongly arrested and accused due to errors in facial recognition.
One such case is that of Nijeer Parks from New Jersey, who was wrongly arrested for stealing a candy bar and attempting to run over a police officer. He spent 10 days in prison and paid $5,000 for a crime he did not commit, due to an error in a technology that had already been proven to be inaccurate and to regularly mis-recognize black people. Last year, Parks sued the police and the attorney general for false arrest and the violation of his rights.
Facial recognition technology is racist not only because of these shortcomings, but also as a result of how it is used. In the United States, for example, the technology was used to locate immigrant families; in China, a facial recognition technology was developed to identify the faces of Uyghur Muslims, and is being used today by Chinese state authorities to track and oppress this ethnic minority group within the context of an ongoing genocide.
Facial recognition technologies allow security forces to store photos of civilians, while simultaneously almost totally denying those civilians the right to avoid being photographed. The technologies used by intelligence and police forces are not exposed to the public, and the algorithms that operate them are hidden from researchers. This is tantamount to convicting someone on the basis of a DNA sample when no one except the company that performed the test has access to its methods — or even to the DNA sequence being tested.
What’s more, facial recognition technologies increase the power of the state to identify social and political movements, as well as the members of those groups. If the authorities have permission to scan demonstrations, and to then identify everyone who participated in them, this will have the effect of deterring people from demonstrating or participating in any form of opposition to the regime, especially in oppressive regimes.
Mass social movements like the demonstrations in Cairo’s Tahrir Square that ended Mubarak’s regime in 2011, the women’s demonstrations against the criminalization of abortion in Poland in recent years, or the demonstrations against evictions of Palestinian families from their homes in Sheikh Jarrah, East Jerusalem, are all based on the knowledge that, apart from a few leaders, the masses can participate in them under the cloak of relative anonymity. Facial recognition technology eliminates this anonymity.
In March 2022, for example, Israeli soldiers operating in the South Hebron Hills region of the West Bank photographed international human rights activists with a digital camera they received from the army to photograph Palestinians. A video from the incident depicts the soldiers discussing the “Blue Wolf” system and saying: “The brigade commander told me it’s very important to get pictures of their faces, so they don’t let them into the airport the next time.”
Exploiting a global health crisis
When the COVID-19 pandemic broke out, weapons and cyber companies, as well as the Mossad and the Israeli army, were incorporated into civic spaces and medical establishments in Israel. Anyvision/Oosto, for example, placed body-temperature cameras in hospitals across the country, and later also facial recognition cameras, which were used to identify people who refused to wear masks.
At Sheba hospital in the city of Ramat Gan, this system was connected to 600 cameras stationed across the hospital complex, and an alarm went off any time the system recognized a person without a mask; it is unknown whether the hospital staff, patients, or visitors were aware of the use of this technology.
Ichilov hospital in Tel Aviv used a similar system developed by Israel Aerospace Industries (IAI) even before the COVID-19 outbreak, to surveil patients in their rooms in order to reduce visits by nurses and doctors. Freedom of information requests submitted to the Ministry of Health, IAI, and the hospital, remain unanswered.
Corsight also took advantage of the pandemic to promote its facial recognition technologies. Shortly after the outbreak, the company boasted that it had developed a new technology that enabled facial recognition even of people wearing masks. That same month, the company received investments totaling $5 million.
And so, companies that produce facial recognition technology exploited a global health crisis to promote their own products, and to allow the state to track and spy on its citizens and their movements. Ostensibly, this is a temporary response to an acute public health emergency. But there is nothing to suggest the new tech isn’t here to stay.
As we have seen through the Shin Bet’s surveillance of Israeli citizens and Palestinians, as well as through the militarization of public spaces led by the Israeli police during the COVID-19 crisis, we face a real danger of surveillance systems being used under the guise of public health. This danger becomes greater when such actions take place in countries that violate human rights, like Israel, where the discourse on surveillance during COVID-19 takes place under a false dichotomy between “security” on the one hand and “freedom” on the other.
One of the biggest concerns for human rights activists is the normalization of such technologies: once a repressive policy (restrictions and surveillance) is implemented on the ground during a crisis, there is a danger that the state will continue implementing it even after the crisis is over.
We are at a critical moment. Amid the rise in usage of facial recognition technologies and the acceleration of its development during the pandemic, journalistic investigations are uncovering more and more ways that these Israeli cyber companies are violating privacy and human rights around the world, as well as here in Israel-Palestine.
The Israeli surveillance industry is unsupervised and is expanding at what seems like an unstoppable pace. Perhaps Israeli citizens are already being identified by the police on their way to demonstrations and political gatherings — and we know that this is already the case in Gaza and the West Bank. Soon, this identification will also be taking place via drones flying above us in demonstrations, and body cameras worn by police officers.
This is not just an invasion of privacy, it is a real threat to our most basic rights. The time to go out and protest against this industry is now, before it becomes too dangerous to protest at all.