Did a cop plant a bomb on a Washington, D.C., man? Perhaps. The D.C. police department claims that that isn’t true.

According to a recent article published in Metro Weekly by Whitney Beal, Clearview AI says that it “improved upon” the police department’s facial recognition software, in a blind beta test which profiled suspects based on personal information for which it was not given proper authorization to view. This included not only fingerprints, but also the suspect’s face, “The resulting system had stronger internal security features for people who were seen in the public, and had fewer false matches than the current system.” Clearview says that its software resulted in a 63 percent accuracy rate, considerably higher than the current system’s 53 percent accuracy rate.

The police department says that is it in the wrong, accusing the makers of the software of withholding information that may have given bias to the algorithm’s output. They also mention the algorithm’s patent rather emphatically.

The testing concluded that the algorithms from Clearview and BioServe Corp. were somewhat deficient and in need of improvement. One company official told Metro Weekly that the software failed to recognize a 55-year-old African-American man. That lack of accuracy led the police department to wrongly suspect him of being involved in a terrorist plot. “They say that they allowed us to look at a more facial dataset to see if we could get some better face recognition algorithms and then compared it to the [used] algorithm for the department.” That second dataset was not requested by the police department.

Clearview says that BioServe had rejected the police department’s requests for face-based data: “They say that they released some of their facial data to us under their SafeCam technology, and that we weren’t able to look at it, and then next, they said they couldn’t even send out our submission form without a warrant because the scene was protected from warrantless access to facial data.”

In response to that allegation, an attorney for the company reiterated that his client was “attempting to get a response to a dispute.” Clearview, itself, had the unfortunate experience of receiving $25,000 from the FBI for facial recognition, for its services in an “inverted pyramid” scheme. That’s just an extreme hypothetical example, with Clearview still effectively working for the U.S. government.

According to Metro Weekly, the police department seemed more excited to be testing the two companies’ technology, rather than to “knowing that they have a well-trained algorithm that won’t fail to make any facial decisions.”

The New York Times reported last May on a case that related to a suspect who used the Face++ algorithm to rate 2.8 million unidentified faces of arrested suspects. The algorithm’s errors included some that were obviously racist.

The police department revealed the results of their $5,000 research project in a blog post announcing plans to use the technologies in an upcoming crime-fighting event. According to another part of that announcement, those of us without city IDs will have to rely on a police officer to connect us with a “self-admitted stranger and potential co-worker.”

I’m not sure that this makes me happy.