The Attack on the Capitol Should Spark a New Debate On Facial Recognition
This week, the world looked on with concern, followed by horror and disgust, as an armed mob of insurgents stormed the US Capitol building, threatening lawmakers and briefly disrupting the process of certifying Joe Biden’s election victory. The Atlantic referred to the attack, accurately, as an attempted coup. The insurgents wandered the halls of Congress, swinging from the walls of the House chamber, ransacking offices, and walking off with mementos. One photo shows an insurgent wearing a Trump hat smiling and waving gleefully, as he walks off with a podium bearing the US seal.
Politicians across the political spectrum have condemned the violence. But during the whole terrible spectacle, actual arrests were glaringly absent. According to NPR, by Thursday morning only 70 people had been arrested in relation to the attacks, most for minor charges of violating a citywide curfew, out of a mob that appeared hundreds strong. The law enforcement response to the attacks has already drawn harsh criticism, with the New York Times reporting that Capitol police were hopelessly outnumbered and under-supported, and others drawing stark comparisons with the response to Black Lives Matter protests. NBC News reported that the police likely felt that their lives were at risk, and stood their ground instead of attempting arrests.
That’s a big problem. As Vox’s German Lopez wrote, drawing on concepts from criminology, “officials have to be serious about punishing these wrongdoers. Otherwise, they’ll send a signal that what transpired on Wednesday was actually fine, making it more likely to happen again.” If officers weren’t willing to make arrests on the spot, Lopez suggests, they need to go back and do so now. Thanks to brave photojournalists who risked their lives to document the attack, “much of the day’s events were recorded and photographed,” Lopez writes. “If they’re serious about punishing these wrongdoers, police could use this evidence…to track down the hundreds of people involved.”
Law enforcement already has a tool to do this: facial recognition. The controversial company Clearview AI works with at least 2,400 law enforcement agencies and maintains a face database 3.1 billion images strong. After the New York Times exposed Clearview’s activities in a landmark investigation in January of 2020, I used the California Consumer Privacy Act to obtain my own face profile from the company. From that experience, I can confirm that Clearview can easily locate a person from a single social media photo of their face.
My article was cited in a landmark class action lawsuit against Clearview AI by the ACLU, and most coverage of my story focused on Clearview as a clear villain. Indiscriminate use of facial recognition is absolutely an evil that we should legislate away. And Clearview itself is a potentially dangerous entity, with alleged ties to the far-right. But nearly everyone missed a second point I made in my story.
As I wrote at the time, “Any legislation governing technologies like Clearview’s should protect citizens from random searches. But at the same time, it should allow authorities to use services like Clearview when their use is justified.” With the right checks and balances (both technical and legal), facial recognition has a time and a place where it’s valuable and necessary. And that time and place is right now.
One of the central ironies of the Capitol attacks is that, because many of the insurgents were maskless, their faces are clearly visible in photos, both those taken by photojournalists and those posted to social media by the attackers themselves. This would likely provide more than enough visual evidence for facial recognition to identify every individual who showed their face inside the Capitol building.
As a quick test, I ran an image of a shirtless insurgent wearing horns, who has already been identified as extremist Jake Angeli, through Pimeyes, a publicly available Clearview copycat. It quickly directed me to photos of Angeli at other rallies, dating back to November of 2020. Commercial facial recognition technology is almost certainly better than Pimeyes’. If the political and legal will exists to track down the Capitol attackers, the technical tools are already there, perhaps even in the rare cases where the attackers wore masks.
To be clear, any use of facial recognition technology is potentially problematic. But then, so is any search, seizure, or other action that potentially violates a person’s privacy and civil liberties. That’s why the United States has a constitution — as well as centuries of case law — defining when searches are and aren’t acceptable.
The problem is that so far, facial recognition has existed largely outside that system of law. Powerful platforms like Clearview’s have been handed out indiscriminately, and used for personal reasons, or for the investigation of minor crimes, like shoplifting. This has led to a string of false arrests, predominantly affecting People of Color. We wouldn’t call in the National Guard to hand out parking tickets, nor should we use facial recognition — especially without any court order — to investigate minor crimes.
But when the US Capitol is breached, we need a legal framework to use facial recognition technologies ethically, so attackers can be brought to justice. Moratoria on the technologies are probably the only way to protect innocent people in the short term. But they’re a blunt instrument, preventing the use (and development) of facial recognition in the few contexts where it would be genuinely valuable and compatible with America’s values.
One core issue with existing facial recognition technologies is that they draw on a large, pre-existing database of faces. Building and maintaining such a database requires profiling millions of innocent people; it also risks violating copyrights, international law, and the Terms of Service of thousands of websites, and exposes face data to breaches. Even if Clearview were used only for court-ordered searches in limited contexts, the company would still need to maintain a database of the faces of millions of Americans, which potentially violates state laws and individuals’ privacy.
A better solution would be to create a new, targeted database each time a specific investigation is launched. For the Capitol attacks, investigators could crawl the websites of known QAnon groups, or begin with the social media profiles of attackers who have already been identified, seeking photos of others involved in the attack. They could expand their circle of data outward only as far as necessary to identify the individuals involved.
Once the investigation is complete, the database could be deleted. In a traditional investigation, police already interview scores of “people of interest,” most of whom don’t turn out to be suspects. Gathering facial data on people potentially related to a crime or attack, and later deleting it, would be similar to this existing police procedure. It would also be compatible with the Fourth Amendment’s requirement that searches be limited in scope.
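The targeted, ephemeral approach described above can be sketched in a few lines of code. This is an illustrative toy, not a real investigative tool: the `EphemeralFaceIndex` class, the cosine-similarity threshold, and the hand-written embedding vectors are all my own assumptions, standing in for a real face-encoding model and real investigation workflow.

```python
import math

def _normalize(v):
    """Scale a vector to unit length so dot products give cosine similarity."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

class EphemeralFaceIndex:
    """A face-embedding index scoped to a single investigation.

    Face data is held only for the life of the investigation and purged
    when it closes. In a real system the embeddings would come from a
    face-encoding model; here they are placeholder vectors.
    """

    def __init__(self, threshold=0.95):
        self.threshold = threshold
        self._entries = []  # (label, unit-normalized embedding) pairs

    def add(self, label, embedding):
        # Seed only with faces already tied to the investigation.
        self._entries.append((label, _normalize(embedding)))

    def match(self, embedding):
        """Return labels whose cosine similarity meets the threshold."""
        q = _normalize(embedding)
        return [label for label, e in self._entries
                if sum(a * b for a, b in zip(e, q)) >= self.threshold]

    def purge(self):
        """Delete all face data once the investigation closes."""
        self._entries.clear()

index = EphemeralFaceIndex()
index.add("suspect_A", [0.9, 0.1, 0.0])
index.add("suspect_B", [0.0, 1.0, 0.2])

print(index.match([0.89, 0.12, 0.01]))  # → ['suspect_A']
index.purge()
print(index.match([0.89, 0.12, 0.01]))  # → []
```

The key design choice is that `purge()` is part of the workflow itself: deletion at the close of an investigation is built into the tool, rather than left to a database retention policy, mirroring the "gather, then delete" procedure described above.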
This would be vastly more expensive and difficult than building a master database of every American and then reusing it ad infinitum, as companies like Clearview currently do. But then, using technologies like facial recognition — like deploying the US military to a city — should be difficult and expensive, so that neither measure is used routinely or indiscriminately.
Even with constitutional limitations, America desperately needs more and better legislation defining the situations where the use of facial recognition is permissible, and where it is not. Absent this guidance, facial recognition companies are left to either ignore laws and ethics (as Clearview AI has allegedly done) or exit the field altogether (as IBM did last year). This creates a vacuum of ethical actors in the field, and leaves those who allegedly flout state laws as the only remaining vendors of facial recognition technologies.
Societies and developers also need to take aggressive steps to identify and combat the biases in existing facial recognition systems. Without these steps, the systems put specific racial or ethnic groups at increased risk of arrest and prosecution — an action which is explicitly forbidden by the Constitution. New technologies like synthetic content — as well as better training and oversight — may make strides towards systems which are less biased and less risky.
Attackers who stormed the Capitol could face sedition charges and significant jail time. Given that their actions led to the injury of multiple law enforcement officers and at least one death, not to mention a major erosion of the world’s trust in America’s democracy, these consequences seem appropriate. But these charges cannot be levied unless the attackers are actually caught and brought to justice.
The midst of a major crisis is often a bad time to enact legislation with far-reaching implications, like legislation to control the use of facial recognition technologies. But crises also serve to clarify which rights and abilities are important to a society, and what steps are necessary to protect our democracy and ensure equal protection.
The attack on the Capitol provides an opportunity to bring facial recognition technologies into the open, and to explore and debate the possibilities — and limits — of their use. Using these technologies to identify and prosecute the attackers would send a clear message that storming a Federal building is a major crime, and will be treated accordingly. Using the technologies now might also prompt discussion, debate, and new laws and procedures that ultimately make their future use safer, more restrained, and more just.
Update 1/10: I updated this article to focus less on Clearview and more on facial recognition broadly, as several people pointed out that Clearview has far-right ties that distract from my Fourth Amendment argument.