Protests and pandemics

BLM

In our last issue of Security Buyer, our cover story looked at the BLM protests happening all over the world, and at how facial recognition, policing and technology keep pace with the ever-changing environment of security, morality and health.

It’s been almost two weeks since people first took to the streets in Minneapolis to protest police brutality following the deaths of George Floyd and Breonna Taylor at the hands of the police. Since then, BLM demonstrations have gained momentum and spread to cities across the world.

The ability to protest is a critical component of a healthy democracy, a megaphone to grab the attention of those in power and compel change. In the US, it’s a constitutional right. However, large-scale protests have a habit of either provoking a reaction from opposing groups or instilling a mob mentality in the form of rioting and looting. As Michael Rozin from Rozin Security states, “Many times a crowd turns a peaceful protest into a violent confrontation.” Sometimes this violence is necessary, but most of the time it is not.

This calls into question both the capability of law enforcement to police these events and the ethics of the technology it uses.

Facial recognition

Facial recognition has long been debated across the world, particularly in the US and the UK, over the morality of its use and the privacy of civilians. Increasingly, law enforcement agencies are requesting protest footage and images, and the latest technologies are bringing with them the power to cast an ever wider surveillance net.

When San Francisco became the first US city to ban facial recognition in May 2019, perhaps legislators had something like the recent weeks in mind. The Bay Area is no stranger to civil disobedience and demonstration. But in bygone eras, anonymous protest was guaranteed by numbers. Just a face in the crowd actually meant something. Now, with smartphones, high-definition cameras and powerful algorithms, anonymous protest may soon be a thing of the past, if it is not already.

While cities such as Oakland and Berkeley in California, and Somerville and Brookline in Massachusetts, have also banned facial recognition, other cities around the country still allow it and actively use it in law enforcement. Chad Wolf, the acting Homeland Security Secretary, said that Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE) would be deployed to help state and local law enforcement across the country with surveillance in light of recent events.

The Department of Homeland Security, which oversees CBP and ICE, manages one of the country’s largest facial recognition databases, called IDENT. It houses the identities of more than 250 million people, taken from international airport arrivals and other border crossings. The DHS is working to be able to access more than 300 million additional people’s identities held by other federal agencies, though it can already access many of those by formally partnering with the agencies on an investigation.

The FBI’s Joint Terrorism Task Force was also deployed in Denver to arrest violent protesters. The FBI has access to a facial recognition database of more than 640 million images and is currently being sued by the ACLU for documents detailing how the technology is used. 

Facial recognition algorithms identify people by searching for and matching them with labeled images in vast databases. These databases can be limited to mugshots, or they can include far bigger groups, such as DMV driver’s license photos. In a particularly contentious example, the startup Clearview AI compiled a database of billions of images scraped from thousands of sites online without consent, including the likes of Facebook and YouTube, and sold access to the database and its facial recognition software to hundreds of law enforcement agencies.
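As a rough illustration of that matching step, here is a minimal sketch using the open-source face_recognition Python library. The filenames and the 0.6 threshold are placeholders, and this is not any agency’s or vendor’s actual pipeline.

    # A minimal sketch of database matching with the open-source
    # face_recognition library. Filenames are placeholders; 0.6 is the
    # library's conventional match tolerance, not a law-enforcement setting.
    # Assumes one clear face per image.
    import face_recognition

    # Encode a labelled reference image, e.g. a mugshot or licence photo.
    known = face_recognition.face_encodings(
        face_recognition.load_image_file("labelled_photo.jpg"))[0]

    # Encode an unidentified face, e.g. a frame of protest footage.
    probe = face_recognition.face_encodings(
        face_recognition.load_image_file("crowd_frame.jpg"))[0]

    # Smaller distances mean the faces are more likely the same person.
    distance = face_recognition.face_distance([known], probe)[0]
    print("potential match" if distance < 0.6 else "no match")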

Non-federal bodies have also been accused of using Clearview AI. Hundreds of state and local police departments in the United States have access to the tool, allowing them to run facial recognition searches against billions of photos scraped from social media. It is unknown exactly how many people are in Clearview AI’s database, and little is known about how it operates; there has never been an independently verified audit of its accuracy. As of February, the software had been used hundreds of times by law enforcement officers in the Minneapolis area, according to documents reviewed. The Minneapolis Police Department, the Hennepin County Sheriff’s Office (whose jurisdiction includes Minneapolis) and the Minnesota Fusion Center (which analyses law enforcement data for the state) all have access to the company’s technology.

The capability is there and the systems have been used, but how law enforcement is employing facial recognition day-to-day and during the protests isn’t always clear. 

Used responsibly, the technology can be a valuable tool for locating people who have committed crimes and for corroborating identifications from CCTV footage. But its limitations have also been well documented, not only in terms of overall accuracy but also built-in bias, with some algorithms misidentifying people of color and women at much higher rates.

“At a high level, these surveillance technologies should not be used on protesters,” explains Neema Singh Guliani, a Senior Legislative Counsel for the ACLU. “The idea that you have groups of people that are raising legitimate concerns and now that could be subject to face recognition or surveillance, simply because they choose to protest, amplifies the overall concerns with law enforcement having this technology to begin with.” However, what if a march or protest is not raising a genuine concern or issue, such as BLM, gender equality or LGBTQ+ pride, but is instead projecting discriminatory beliefs to the public, as with the rise in white supremacism? Would facial recognition’s ability, and its right, to identify individuals still be so widely condemned?

London’s Laws

British police have been warned against using facial recognition technology during anti-racism protests, with privacy advocates saying such a move would “inflame tensions in an unimaginable way”. During protests that stem from an act of police brutality, and that are rooted in concerns about systemic racism, is the use of such a controversial technology a good idea when it could further divide law enforcement and the security industry from the masses?

Big Brother Watch Director Silkie Carlo said the Government urgently needs to introduce tougher laws surrounding use of the technology, and that the longer it waits, “the worse it’s going to get. If we see live facial recognition being used in these protests, it will inflame tensions between communities and the police in an unimaginable way, because it is probably the most dystopian surveillance technology that we’ve seen in this country for a generation.”

Despite these concerns, over the past few months the Met has been re-introducing and using live facial recognition (LFR) for identification purposes. A Met spokesperson said, “Our primary focus is on the most serious crimes, so the watchlist will include people wanted for serious offences, for example knife and gun crime, and those with outstanding warrants who are proving hard to find. The technology cannot identify people who are not on the watchlist. The biometric data of those who do not cause an alert is automatically and immediately deleted.” Police are alerted if there is a potential match, and officers then decide whether to approach the member of the public, the Met explained.
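Taken at face value, the workflow the Met describes, comparing each detected face only against a fixed watchlist, alerting on a potential match and immediately discarding everyone else’s biometric data, might look something like this minimal sketch. The function names, data shapes and threshold are assumptions for illustration, not the Met’s system.

    # Illustrative watchlist loop, assuming faces arrive as numeric feature
    # vectors ("embeddings"). Nothing here is the Met's actual code.
    import numpy as np

    def process_frame(face_embeddings, watchlist, watchlist_ids, threshold=0.6):
        """Return IDs that raise an alert; non-matching data is never stored."""
        alerts = []
        for embedding in face_embeddings:
            distances = np.linalg.norm(watchlist - embedding, axis=1)
            best = int(np.argmin(distances))
            if distances[best] < threshold:
                # A potential match only alerts an officer, who then
                # decides whether to approach the person.
                alerts.append(watchlist_ids[best])
            # No match: the embedding simply goes out of scope here,
            # mirroring the "automatically and immediately deleted" claim.
        return alerts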

Defiance

Wearing makeup has long been seen as an act of defiance, from teenagers to New Romantics. Now that defiance has taken on a harder edge, as growing numbers of people use it to try to trick facial recognition systems. Interest in so-called dazzle camouflage appears to have grown substantially since the Metropolitan police announced that officers will be using live facial recognition cameras on London’s streets – a move described by privacy campaigners and political activists as “dangerous”, “oppressive” and “a huge threat to human rights”.

Unlike fingerprinting and DNA testing, there are few restrictions on how police can use the new technology. And some of those who are concerned have decided to assert their right not to be put under surveillance with the perhaps unlikely weapon of makeup. Members of the Dazzle Club have been conducting silent walks through London while wearing asymmetric makeup in patterns intended to prevent their faces from being matched on any database.

“There was this extraordinary experience of hiding in plain sight,” said Anna Hart, of Air, a not-for-profit art group, who founded the club with fellow artists Georgina Rowlands and Emily Roderick. “We made ourselves so visible in order to hide. The companies selling this tech talk about preventing crime. There is no evidence this prevents crime. It might be sometimes used when crime has been committed, but they push the idea that this will make us safer, that we will feel safer.”

Facial recognition works by mapping facial features – mainly the eyes, nose and chin – identifying light and dark areas and then calculating the distances between them. That unique facial fingerprint is then matched against others on a database. Makeup attempts to disrupt this by putting dark and light colours in unexpected places, either confusing the technology into mapping the wrong parts of the face or into concluding there is no face to map. The concept was created by the artist Adam Harvey, who coined the term “computer vision dazzle”, or “cv dazzle”, for a modern version of the camouflage used by the Royal Navy during the first world war.
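As a toy version of that “map features, measure distances” description, the sketch below turns a few landmark positions into a vector of pairwise distances. The coordinates are invented, and real systems use learned embeddings rather than raw distances.

    # Toy "facial fingerprint": pairwise distances between a few landmarks.
    # Dazzle makeup works by shifting where landmarks are detected, so the
    # measured distances no longer line up with the database record.
    from itertools import combinations
    import numpy as np

    def fingerprint(landmarks):
        points = [np.array(landmarks[name]) for name in sorted(landmarks)]
        return np.array([np.linalg.norm(a - b)
                         for a, b in combinations(points, 2)])

    enrolled = fingerprint({"left_eye": (30, 40), "right_eye": (70, 40),
                            "nose": (50, 60), "chin": (50, 95)})
    probe = fingerprint({"left_eye": (31, 41), "right_eye": (69, 40),
                         "nose": (50, 61), "chin": (51, 94)})

    # A small difference suggests the same face; makeup aims to inflate it.
    print(np.linalg.norm(enrolled - probe))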

This new way of defying the technology poses another challenge for facial recognition security companies to tackle. As our society advances further into a digital age, privacy is becoming a concept of the past. We willingly update our lives on the internet, where nothing is private, and CCTV is now the norm within city centres as well as domestic buildings and offices. One look at our mobile phones and they unlock, presenting our data, including bank information, photographs and emails, all now accessible. Our face is not private anymore. The rise of facial technology and biometrics within law enforcement, travel and security is inevitable. If it is being used correctly and in a positive light, why should it not be? If your identity is no longer masked within a crowd and the risk of consequence is present, a peaceful and meaningful protest may be prevented from turning violent. Violence tends to become the focal point in the aftermath, casting the original message and group in a bad light.

Facial covering

The timing of the BLM movement is unique, as the protests have taken place during a global pandemic in which health is the top concern on everybody’s mind. In large cities, especially in London and across the US, at a time when mass gatherings are banned, preventative health measures are continuously in use in the form of social distancing, face masks and gloves. At larger protests, where so many members of the public are present, social distancing is not possible; however, many individuals and families have been wearing masks to protect themselves and others from the risk of spreading COVID-19.

In the age of the coronavirus, face masks have become a part of normal life. Previously, the use of masks, especially during protests and riots, had been associated with illegal acts of looting, violence and vandalism. Now, however, they are a safety requirement in many places and, for some people, a fashion statement. But for facial recognition technology, they pose a major challenge. The US Centers for Disease Control and Prevention has recommended wearing face coverings to help fight the spread of COVID-19, which has killed more than 302,000 people around the world. And governments in more than half the US states and other parts of the world are making masks mandatory in various public settings.

But don a mask and stare at your iPhone or Android device to unlock it, and you quickly see the problem for facial recognition. 

Before the novel coronavirus hit, facial recognition providers were expecting to install their technology everywhere: in airports, casinos, restaurants and schools. Face masks threaten to change all that, but the industry is looking at the situation more as a speed bump than a roadblock. Surveillance products are now forced to identify people through just their eyes and cheekbones, which causes many issues when it comes to crime, surveillance and access control. 

Some companies assert that their technology isn’t affected by masks, and that artificial intelligence can still detect and identify people with a high accuracy rate, even when half the face is covered. A public beta program for Apple’s latest iOS release showed that the tech giant is updating its Face ID to account for people wearing masks. 

Experts on facial recognition are skeptical about claims that the technology isn’t fazed by masks. After all, even without masks, facial recognition can stumble: studies have found that the majority of facial recognition algorithms produce false positives for people of color at rates “10 to 100 times” higher. And because of the pandemic, these algorithms can’t be properly tested with face masks by the US National Institute of Standards and Technology (NIST), which many consider the leading authority on facial recognition accuracy rates.

Still, facial recognition is being proposed as a solution for COVID-19, without any proof that the surveillance measure has any benefits, or even works properly with masks on. “These workarounds are part of a larger effort to make an ever-expanding surveillance infrastructure a fundamental component of COVID-19 response governance,” Evan Selinger, a Professor of Philosophy at the Rochester Institute of Technology, said in a statement.

Masks have long been a method for avoiding facial recognition. Protesters in Hong Kong relied on them to beat the government’s facial recognition, prompting a mask ban there. 

“The greatest amount of biometric data that uniquely sets us apart resides in the central portion of the face, just above the brow line all the way down to the chin,” said Eric Hess, Senior Director of Product Management for Face Recognition at facial recognition company SAFR. “When we put on face masks, we are blocking access to a significant amount of data points that help us differentiate one person from another.”

UK-based Facewatch says it is releasing an algorithm that can handle detection and identification based on just a person’s eye and eyebrow region. The company is proposing its technology for retail stores and says the development will extend beyond masks to other coverings, such as the niqab, a religious veil worn by some Muslim women. Facewatch had already been working on identifying people wearing hats and glasses, said company spokesman Stuart Greenfield. Its customers, mostly retail stores looking to keep shoplifters on a watchlist, didn’t consider mask detection much of a concern until the pandemic began.

“All we need is the government to insist on [face masks], and the whole sector will have to react very rapidly,” Greenfield said. He added that Facewatch’s new algorithm will be able to ID people because their eyes and eyebrows are fixed points on the face and don’t change over time.
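Conceptually, an eye-and-eyebrow approach like the one Greenfield describes restricts matching to the upper face before any features are computed. Below is a rough sketch of that idea; the crop proportions, encoder and threshold are assumptions, not Facewatch’s implementation.

    # Conceptual periocular matching: crop the brow-to-eye band of an
    # aligned face image so a mask over the nose and mouth contributes
    # nothing, then compare features from the crops alone.
    import numpy as np

    def periocular_crop(face):
        """Keep roughly the eyebrow-to-lower-eyelid band (assumed 20-55%)."""
        h = face.shape[0]
        return face[int(0.20 * h):int(0.55 * h), :]

    def matches(face_a, face_b, encode, threshold=0.6):
        """'encode' stands in for any model mapping a crop to a vector."""
        emb_a = encode(periocular_crop(face_a))
        emb_b = encode(periocular_crop(face_b))
        return float(np.linalg.norm(emb_a - emb_b)) < threshold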

Still, Facewatch expects some complications because of face masks. Its algorithm typically identifies a person in half a second, and Greenfield said it could take longer because of the masks. But the company said it’s doing everything it can to make the new algorithm effective. 

“Everyone’s working right now to ensure that we’re fit for the market,” Greenfield said. “Our future depends on having a product that works accurately.”

SAFR, which promotes its technology for use in schools, also says its tools can handle face masks. “Our algorithms are now being trained with images of people wearing face masks,” Hess said. Until recently, masks hadn’t been prevalent in society, “so they were not really added as a training dynamic before,” he said. To train its algorithm, SAFR is relying on a hoard of photos of people wearing face masks: some shots it creates on its own, and others its staff members have provided at the company’s request. Hess said the company is training its algorithm on a diverse set of images to account for differences in gender, race and age.
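One plausible way to create such masked training shots synthetically, offered purely as an illustration rather than as SAFR’s actual method, is to paint a mask-coloured block over the lower portion of each aligned face image:

    # Illustrative data augmentation: draw a flat "mask" over the lower
    # face of an aligned RGB image (height x width x 3 numpy array).
    # The 55% cut-off and the colour are guesses, not SAFR's parameters.
    import numpy as np

    def add_synthetic_mask(face, colour=(200, 200, 210)):
        masked = face.copy()
        h = face.shape[0]
        masked[int(0.55 * h):, :] = colour  # cover nose, mouth and chin
        return masked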

The accuracy rate of the tools is 93.5% when people are wearing masks, Hess said, but only under ideal conditions, such as when the subjects are depicted in a high-quality photo with proper lighting.

The result

2020 has proven to be an interesting year, to say the least. Not only has it tested the security and law enforcement industries to their limits, but it has also highlighted some major concerns about our way of life, which the surveillance, CCTV, thermal imaging and facial recognition industries must take on board and adapt to: a global health crisis, face coverings, protests and movements, rioting, looting and vandalism, to name but a few.

If the industry, law enforcement and government wish to succeed, they must adopt new technologies and act with care and consideration.
