In Pennsylvania last November, a middle-aged woman who walked into a police station was reportedly flagged on suspicion of terrorism. A few days later, a genuinely suspicious person turned up in the same state.

In the second case, police say, a man dressed like an Islamist rebel entered an Airbnb in Philadelphia. When security asked him to show identification, he falsely claimed to be a Hezbollah fighter.

And now one company is being accused of wildly overstating the accuracy of its facial recognition software and, perhaps more ominously, of bungling attempts to vet the organizations that use it.

Users around the world have flocked to Clearview AI’s facial recognition software. Since last year, it has been used to identify 27,000 people, including celebrities and politicians.

In both incidents, the software’s matches were reported to the US Department of Homeland Security.

The Pennsylvania incident led to an extensive search for a person police believed was a terrorist. The suspect turned out to be an ordinary citizen.

Meanwhile, Clearview's facial recognition software has been set up inside three hotels in the UK.

Its machine-learning-based tool can identify people from a collection of photos in just a few seconds. (Roughly 60 percent of the public has never heard of it.)
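The article gives no detail on how Clearview’s matcher works internally, but tools of this kind typically compute a numerical “embedding” for each face and compare new photos against a database of stored embeddings. Here is a minimal sketch using the open-source face_recognition library; the library choice, file names, and 0.6 distance threshold are illustrative assumptions, not details of Clearview’s actual pipeline:

```python
# Minimal sketch of embedding-based face identification.
# Assumptions: the open-source `face_recognition` library stands in for
# whatever Clearview actually uses; file names and the 0.6 threshold
# are illustrative defaults, not details from the article.
import face_recognition
import numpy as np

# Build a small "database" of known faces: one 128-dimensional
# embedding per enrolled photo.
known_names = ["alice", "bob"]
known_encodings = [
    face_recognition.face_encodings(
        face_recognition.load_image_file(f"{name}.jpg")
    )[0]
    for name in known_names
]

def identify(probe_path, threshold=0.6):
    """Return the best-matching enrolled name, or None below threshold."""
    probe_image = face_recognition.load_image_file(probe_path)
    encodings = face_recognition.face_encodings(probe_image)
    if not encodings:
        return None  # no face detected in the probe photo
    # Euclidean distance between the probe embedding and each enrolled
    # embedding; smaller means more similar.
    distances = face_recognition.face_distance(known_encodings, encodings[0])
    best = int(np.argmin(distances))
    return known_names[best] if distances[best] <= threshold else None

print(identify("unknown_person.jpg"))
```

Because the comparison is a simple distance calculation over precomputed embeddings, a lookup like this can scan a large photo database in seconds, which is consistent with the speed claimed for the tool.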

The FBI has already tested the firm’s software, and it was recently praised by a senior CIA official.

But Clearview CEO Maksim Khorshedi-Masharoff has made a controversial proposal: using facial recognition technology to track people to their homes in order to remotely unlock their electronic devices, including laptops.

So far, the company’s software has been used in about a dozen countries, and image-based facial recognition technology is set to go mainstream in the next year.

But in Europe, questions are being raised about the way facial recognition software has been and is being used.

The latest twist in the Clearview dispute in the UK involves three-year-old Katia Mousshina, who was reported missing from Harlow, Essex, by her family in November 2017.

Clearview’s tool had not recorded Katia’s face before she was bundled into a van miles away.

Unable to find a match, and reluctant to publish her image, police released a hazy headshot of a “white woman” that was quickly branded a fake.

And after four and a half months, the exhausted Essex Police referred the case to forensic expert Nik Palmer.

He examined the “smiley face” photo on Katia’s passport to see whether it matched other images of her.

“I ran the images through several search engines,” he said, “and they all came back with false negatives.”

Mr. Palmer then decided to compare Katia’s passport photo with several flight tickets and mobile phone photos.

He couldn’t find a match. So he called Clearview.
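Palmer’s “false negatives” reflect a basic property of any face matcher: the system draws a similarity threshold, and a genuine match that scores below it gets rejected. The toy Python sketch below illustrates that failure mode; the embeddings, noise level, and threshold are made-up numbers, not anything from the case:

```python
# Toy illustration of how a matching threshold produces false negatives.
# The embeddings and threshold below are made-up numbers, not data
# from the case described in the article.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
passport_embedding = rng.normal(size=128)
# Simulate a phone photo of the *same* person: similar but noisy.
phone_embedding = passport_embedding + rng.normal(scale=0.8, size=128)

score = cosine_similarity(passport_embedding, phone_embedding)
THRESHOLD = 0.9  # a strict cut-off

# Even though both embeddings come from the same person, a noisy photo
# can score below a strict threshold and be (wrongly) rejected.
print(f"similarity = {score:.2f}")
print("match" if score >= THRESHOLD else "no match (false negative)")
```

Lowering the threshold reduces false negatives but raises the risk of false positives, which is the trade-off at the heart of incidents like the ones described earlier in this piece.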

Clearview CEO Khorshedi-Masharoff is due to meet Britain’s Home Office to discuss the use of facial recognition software to track people in their homes.

Clearview’s facial recognition software is widely used across Europe.

The Guardian reported that it was used to identify 574,388 people in the Netherlands alone last year, while in Belgium, Portugal, Germany, Slovakia, and Austria it has been used to identify criminal suspects and to screen migrants at ports and airports.

The technology is hugely controversial, given that it can be used to track people to their homes from afar.